diff --git a/HPC/Antwerpen/Linux/search/search_index.json b/HPC/Antwerpen/Linux/search/search_index.json index 2c8b02f1aea..81e18231ee7 100644 --- a/HPC/Antwerpen/Linux/search/search_index.json +++ b/HPC/Antwerpen/Linux/search/search_index.json @@ -1 +1 @@ -{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the UAntwerpen-HPC documentation", "text": "

Use the menu on the left to navigate, or use the search box on the top right.

You are viewing documentation intended for people using Linux.

Use the OS dropdown in the top bar to switch to a different operating system.

Quick links

If you find any problems in this documentation, please report them by mail to hpc@uantwerpen.be or open a pull request.

If you still have any questions, you can contact the UAntwerpen-HPC.

"}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": ""}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... Each time you double the resources, the execution time should drop to roughly half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.
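
A minimal sketch of such a scaling test, assuming a job script named myjob.sh (a placeholder) that reports its own runtime: submit it several times with an increasing core count and compare the walltimes.

qsub -l nodes=1:ppn=4 myjob.sh\nqsub -l nodes=1:ppn=8 myjob.sh\nqsub -l nodes=1:ppn=16 myjob.sh\n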

See also: Running batch jobs.

"}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.
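
For example, to check which of these bundles are installed and what they contain (the output depends on the cluster you are on):

module avail SciPy-bundle\nmodule spider SciPy-bundle\nmodule spider R-bundle-Bioconductor\n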

"}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

Modules each come with a suffix that describes the toolchain used to install them.

Examples:

Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

You can use module avail [search_text] to see which versions on which toolchains are available to use.

It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.
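
As an illustration, loading modules that share the same toolchain could look like the sketch below (the version numbers are only examples; check module avail to see what is actually installed):

module load SciPy-bundle/2021.05-foss-2021a\nmodule load matplotlib/3.4.2-foss-2021a\n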

"}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

When incompatible modules are loaded, you might encounter an error like this:

{{ lmod_error }}\n

You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

See also: How do I choose the job modules?

"}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

The 72 hour walltime limit will not be extended. However, you can work around this barrier:

"}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

Try requesting a bit more memory than your proportional share, and see if that solves the issue.
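
A minimal sketch of requesting extra memory in a job script (the 16gb value is only an example; pick a value that fits your workload and the node type, see the linked section below):

#PBS -l nodes=1:ppn=8\n#PBS -l mem=16gb\n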

See also: Specifying memory requirements.

"}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

See also: Running interactive jobs.

"}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.
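
Putting this together, submitting a GPU job could look like the sketch below (joltik is used as an example GPU cluster, and gpu_job.sh is a placeholder job script that contains the #PBS line shown above and loads a CUDA-enabled module):

module swap cluster/joltik\nqsub gpu_job.sh\n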

"}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

There are a few possible causes why a job can perform worse than expected.

Is your job using all the available cores you've requested? You can test this by increasing and decreasing the core amount: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should instead use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: the job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
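
A sketch of this staging pattern inside a job script (the input/output file names and the program name are placeholders):

# work in a job-specific subdirectory on the fast scratch filesystem\ncd $VSC_SCRATCH\nmkdir -p $PBS_JOBID && cd $PBS_JOBID\n\n# stage input from the (slower) data filesystem\ncp $VSC_DATA/input.dat .\n\n# run the computation on scratch (my_program is a placeholder)\nmy_program input.dat > output.dat\n\n# copy the results back to the data filesystem\ncp output.dat $VSC_DATA/\n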

"}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
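
A minimal MPI job script sketch that uses mympirun (mpi_program is a placeholder for your own executable); submit it with qsub:

#!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=1:0:0\n\nmodule load vsc-mympirun\n\ncd $PBS_O_WORKDIR\n\n# mympirun determines the number of MPI processes from the job resources\nmympirun ./mpi_program\n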

See also: Multi core jobs/Parallel Computing and Mympirun.

"}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

For example, we have a simple script (./hello.sh):

#!/bin/bash \necho \"hello world\"\n

And we run it like mympirun ./hello.sh --output output.txt.

To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

mympirun --output output.txt ./hello.sh\n
"}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

In practice, it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires. New jobs may be submitted by other users that are assigned a higher priority than your job(s). You can use the squeue --start command to get an estimated start time for your jobs in the queue. Keep in mind that this is just an estimate.

"}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

When trying to create files, errors like this can occur:

No space left on device\n

The error \"No space left on device\" can mean two different things:

An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
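
For example, a finished project directory containing many small files can be packed into a single archive to free up inodes (the directory name is a placeholder):

tar czf myproject.tar.gz myproject/\nrm -rf myproject/\n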

If the problem persists, feel free to contact support.

"}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

NO. You are not allowed to share your VSC account with anyone else; it is strictly personal.

See https://pintra.uantwerpen.be/bbcswebdav/xid-23610_1

"}, {"location": "FAQ/#can-i-share-my-data-with-other-uantwerpen-hpc-users", "title": "Can I share my data with other UAntwerpen-HPC users?", "text": "

Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

$ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc20167 mygroup      40 Apr 12 15:00 dataset.txt\n

For more information about chmod or setfacl, see Linux tutorial.
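
If classic permissions are sufficient (for example, granting read access to everyone in the file's group rather than to one specific user), a chmod sketch looks like this:

chmod g+r dataset.txt\nls -l dataset.txt\n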

"}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

"}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

Please send an e-mail to hpc@uantwerpen.be that includes:

If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.

"}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

MacOS & Linux (on Windows, only the second part is shown):

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

"}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

A Virtual Organisation consists of a number of members and moderators. A moderator can:

One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

See also: Virtual Organisations.

"}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

The egrep command will only let through entries that match the specified regular expression [0-9]{3}M|[0-9]G, which corresponds to files and directories that consume 100 MB or more.
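
Since hidden files and directories do not show up with a plain ls, you can also list them explicitly with:

ls -la $VSC_HOME\n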

"}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

"}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

A lot of tasks can be performed without sudo, including installing software in your own account.

Installing software

"}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

Who can I contact?

"}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

"}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

"}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception; for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

module load hod\n
"}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

The hod modules are constructed such that they can be used on the UAntwerpen-HPC login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

For example, this will work as expected:

$ module swap cluster/{{ othercluster }}\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

"}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

$ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

By defining these environment variables, you don't have to specify --hod-module and --workdir when using hod batch or hod create, even though they are strictly required.

If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
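
For example, to use a different parent working directory (the path below is a placeholder), redefine both variables before running hod:

export HOD_BATCH_WORKDIR=$VSC_SCRATCH/my_hod_workdir\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/my_hod_workdir\n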

Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

"}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

After HOD clusters terminate, their local working directory and cluster information is typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

You should occasionally clean this up using hod clean:

$ module list\nCurrently Loaded Modulefiles:\n  1) cluster/{{ defaultcluster }}(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        433253.leibniz         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/433253.leibniz for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/{{ othercluster }}\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.{{ othercluster }}.gent.vsc  &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.{{ othercluster }}.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

"}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

If you have any questions, or are experiencing problems using HOD, you have a couple of options:

"}, {"location": "MATLAB/", "title": "MATLAB", "text": "

Note

To run a MATLAB program on the UAntwerpen-HPC you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

Compiling MATLAB programs is only possible on the interactive debug cluster, not on the UAntwerpen-HPC login nodes where resource limits w.r.t. memory and max. number of processes are too strict.

"}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

"}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

Compiling MATLAB code is only possible on the interactive debug cluster (see the note above): the MATLAB license server cannot be reached from the regular cluster workernodes, and the resource limits on the login nodes are too strict for the compiler.

To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

$ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

First, we copy the magicsquare.m example that comes with MATLAB to example.m:

cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

To compile a MATLAB program, use mcc -mv:

mcc -mv example.m\nOpening log file:  /user/antwerpen/201/vsc20167/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/antwerpen/201/vsc20167/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/antwerpen/201/vsc20167/readme.txt\".\nGenerating file \"run_example.sh\".\n
"}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

"}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

export _JAVA_OPTIONS=\"-Xmx64M\"\n

The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

Another possible issue is that the heap size is too small. This could result in errors like:

Error: Out of memory\n

A possible solution to this is by setting the maximum heap size to be bigger:

export _JAVA_OPTIONS=\"-Xmx512M\"\n
"}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

parpool.m
% specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

See also the parpool documentation.

"}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

MATLAB_LOG_DIR=<OUTPUT_DIR>\n

where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

# create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\nexport MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

You should remove the directory at the end of your job script:

rm -rf $MATLAB_LOG_DIR\n
"}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

"}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

jobscript.sh
#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
"}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

"}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

First, log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

$ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'ln2.leibniz.uantwerpen.vsc:6 (vsc20167)' desktop is ln2.leibniz.uantwerpen.vsc:6\n\nCreating default startup script /user/antwerpen/201/vsc20167/.vnc/xstartup\nCreating default config /user/antwerpen/201/vsc20167/.vnc/config\nStarting applications specified in /user/antwerpen/201/vsc20167/.vnc/xstartup\nLog file is /user/antwerpen/201/vsc20167/.vnc/ln2.leibniz.uantwerpen.vsc:6.log\n

When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

Note down the details in bold: the hostname (in the example: ln2.leibniz.uantwerpen.vsc) and the (partial) port number (in the example: 6).

It's important to remember that VNC sessions are permanent. They survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (like the terminal equivalents screen or tmux). This also means you don't have to start vncserver each time you want to connect.

"}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

You can get a list of running VNC servers on a node with

$ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

This only displays the running VNC servers on the login node you run the command on.

To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

$ cd $HOME\n$ ls .vnc/*.pid\n.vnc/ln2.leibniz.uantwerpen.vsc:6.pid\n.vnc/ln1.leibniz.uantwerpen.vsc:8.pid\n

This shows that there is a VNC server running on ln2.leibniz.uantwerpen.vsc on port 5906 and another one running on ln1.leibniz.uantwerpen.vsc on port 5908 (see also Determining the source/destination port).

"}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

The VNC server runs on a login node (in the example above, on ln2.leibniz.uantwerpen.vsc).

In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

Login nodes are rebooted from time to time. You can check that the VNC server is still running on the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

"}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

So, in our running example, both the source and destination ports are 5906.

"}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.uantwerpen.be (see Setting up the SSH tunnel(s)).

If the login node you end up on is a different one than the one where your VNC server is running (i.e., ln1.leibniz.uantwerpen.vsc rather than ln2.leibniz.uantwerpen.vsc in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to ln2.leibniz.uantwerpen.vsc, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

We will proceed with 12345 as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).

"}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcuantwerpenbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.uantwerpen.be", "text": "

First, we will set up the SSH tunnel from our workstation to login.hpc.uantwerpen.be.

Use the settings specified in the sections above:

Execute the following command to set up the SSH tunnel.

ssh -L 5906:localhost:12345  vsc20167@login.hpc.uantwerpen.be\n

Replace the source port 5906, destination port 12345 and user ID vsc20167 with your own!

With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

"}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

netstat -an | grep -i listen | grep tcp | grep 12345\n

If you see no matching lines, then the port you picked is still available, and you can continue.

If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

$ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
"}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.uantwerpen.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (ln2.leibniz.uantwerpen.vsc in our running example, see Starting a VNC server).

To do this, run the following command:

$ ssh -L 12345:localhost:5906 ln2.leibniz.uantwerpen.vsc\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (ln2.leibniz.uantwerpen.vsc).

Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (ln2.leibniz.uantwerpen.vsc) in the command shown above!

As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

"}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

Download and setup a VNC client. A good choice is tigervnc. You can start it with the vncviewer command.

Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

When prompted for a password, use the password you used to setup the VNC server.

When prompted for default or empty panel, choose default.

If you have an empty panel, you can reset your settings with the following commands:

xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
"}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

The VNC server can be killed by running

vncserver -kill :6\n

where 6 is the port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

"}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).

"}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

All users of the Antwerp University Association (AUHA) can request an account on the UAntwerpen-HPC, which is part of the Flemish Supercomputing Centre (VSC).

See HPC policies for more information on who is entitled to an account.

The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

There are two methods for connecting to UAntwerpen-HPC:

The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of utilizing the HPC-UGent web portal by reading Using the HPC-UGent web portal.

The UAntwerpen-HPC clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the UAntwerpen-HPC. Access to the UAntwerpen-HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

"}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "

Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). Launch a terminal from your desktop's application menu and you will see the bash shell. There are other shells, but most Linux distributions use bash by default.

"}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

\"Secure\" means that:

  1. the User is authenticated to the System; and

  2. the System is authenticated to the User; and

  3. all data is encrypted during transfer.

OpenSSH is a FREE implementation of the SSH connectivity protocol. Linux comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

$ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

To access the clusters and transfer your files, you will use the following commands:

  1. ssh-keygen: to generate the SSH key pair (public + private key);

  2. ssh: to open a shell on a remote machine;

  3. sftp: a secure equivalent of ftp;

  4. scp: a secure equivalent of the remote copy command rcp.
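
As a quick illustration of how these commands are typically used (vsc20167 and login.hpc.uantwerpen.be are the example user ID and hostname used throughout this documentation; the file name is a placeholder):

ssh vsc20167@login.hpc.uantwerpen.be\nscp results.txt vsc20167@login.hpc.uantwerpen.be:\nsftp vsc20167@login.hpc.uantwerpen.be\n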

"}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

A key pair might already be present in the default location inside your home directory. Therefore, we first check if a key is available with the \"list short\" (\"ls\") command:

ls ~/.ssh\n

If a key-pair is already available, you would normally get:

authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

Otherwise, the command will show:

ls: .ssh: No such file or directory\n

You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

You will need to generate a new key pair, when:

  1. you don't have a key pair yet

  2. you forgot the passphrase protecting your private key

  3. your private key was compromised

  4. your key pair is too short or not the right type

For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key, it is private and should stay private. You should not even copy it to one of your other machines, instead, you should create a new public/private key pair for each machine.

ssh-keygen -t rsa -b 4096\n

This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

Without your key pair, you won't be able to apply for a personal VSC account.

"}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

Most recent Unix derivatives include by default an SSH agent (\"gnome-keyring-daemon\" in most cases) to keep and manage the user SSH keys. If you use one of these derivatives, you must add the new keys to the SSH agent keyring to be able to connect to the HPC cluster. If not, the SSH client will display an error message (see Connecting) similar to this:

Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

This could be fixed using the ssh-add command. You can include the new private keys' identities in your keyring with:

ssh-add\n

Tip

Without extra options, ssh-add adds any key located in the $HOME/.ssh directory, but you can specify the private key location path as an argument, for example: ssh-add /path/to/my/id_rsa.

Check that your key is available from the keyring with:

ssh-add -l\n

After these changes the key agent will keep your SSH key to connect to the clusters as usual.

Tip

You should execute ssh-add command again if you generate a new SSH key.

Visit https://wiki.gnome.org/Projects/GnomeKeyring/Ssh for more information.

"}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

Visit https://account.vscentrum.be/

You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

Select \"Universiteit Antwerpen\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

Click Confirm

You will now be taken to the authentication page of your institute.

The site is only accessible from within the University of Antwerp domain, so the page won't load from, e.g., home. However, you can also get external access to the University of Antwerp domain using VPN. We refer to the Pintra pages of the ICT Department for more information.

"}, {"location": "account/#users-of-the-antwerp-university-association-auha", "title": "Users of the Antwerp University Association (AUHA)", "text": "

All users (researchers, academic staff, etc.) from the higher education institutions associated with University of Antwerp can get a VSC account via the University of Antwerp. There is not yet an automated form to request your personal VSC account.

Please e-mail the UAntwerpen-HPC staff to get an account (see Contacts information). You will have to provide a public ssh key generated as described above. Please attach your public key (i.e., the file named id_rsa.pub), which you will normally find in the .ssh subdirectory within your HOME directory (i.e., /Users/<username>/.ssh/id_rsa.pub).

After you log in using your University of Antwerp login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

This file has been stored in the directory \"~/.ssh/\".

After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

"}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

Within one day, you should receive a Welcome e-mail with your VSC account details.

Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc20167\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

Now, you can start using the UAntwerpen-HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

"}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

  1. Create a new public/private SSH key pair from the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH.

  2. Go to https://account.vscentrum.be/django/account/edit

  3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

  4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

  5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

"}, {"location": "account/#computation-workflow-on-the-uantwerpen-hpc", "title": "Computation Workflow on the UAntwerpen-HPC", "text": "

A typical Computation workflow will be:

  1. Connect to the UAntwerpen-HPC

  2. Transfer your files to the UAntwerpen-HPC

  3. Compile your code and test it

  4. Create a job script

  5. Submit your job

  6. Wait while

    1. your job gets into the queue

    2. your job gets executed

    3. your job finishes

  7. Move your results

We'll take you through the different tasks one by one in the following chapters.

"}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

"}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

This chapter focuses specifically on the use of AlphaFold on the UAntwerpen-HPC. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

"}, {"location": "alphafold/#using-alphafold-on-uantwerpen-hpc", "title": "Using AlphaFold on UAntwerpen-HPC", "text": "

Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

$ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

To use AlphaFold, you should load a particular module, for example:

module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

Warning

When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

$ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

The directories located there indicate when the data was downloaded, so that this leaves room for providing updated datasets later.

As of writing this documentation, the latest version is 20230310.

Info

The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

The AlphaFold installations we provide have been modified a bit to facilitate usage on the UAntwerpen-HPC.

"}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

Use newest version

Do not forget to replace 20230310 with a more up to date version if available.

"}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

AlphaFold provides a script called run_alphafold.py.

A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

For more information about the script and options see this section in the official README.

READ README

It is strongly advised to read the official README provided by DeepMind before continuing.

"}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
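
For example, to let both tools use 8 cores (assuming your job requested at least 8 cores), define these variables in your job script:

export ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n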

Info

Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

"}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

Using --db_preset=full_dbs, the following runtime data was collected:

This highlights a couple of important points:

With --db_preset=casp14, it is clearly more demanding:

This highlights the difference between CPU and GPU performance even more.

"}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

The following example comes from the official Examples section in the AlphaFold README. The run command is slightly different (see above: Running AlphaFold).

Do not forget to set up the environment (see above: Setting up the environment).

"}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

>sequence_name\n<SEQUENCE>\n

Then run the following command in the same directory:

alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

See AlphaFold output for information about the outputs.

Info

For more scenarios see the example section in the official README.

"}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

The following two example job scripts can be used as a starting point for running AlphaFold.

The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.
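
To check which AlphaFold modules (CPU-only and CUDA-enabled) are available, you can, for example, run:

module avail AlphaFold\n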

To run the job scripts you need to create a file named T1050.fasta with the following content:

>T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

"}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

Swap to the joltik GPU cluster before submitting it:

module swap cluster/joltik\n
AlphaFold-gpu-joltik.sh
#!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
"}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

Jobscript that runs AlphaFold on CPU using 24 cores on one node.

AlphaFold-cpu-doduo.sh
#!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

In case of problems or questions, don't hesitate to contact us at hpc@uantwerpen.be.

"}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

This documentation only covers aspects of using Apptainer on the UAntwerpen-HPC infrastructure.

"}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to prevent the use of Apptainer from impacting other users on the system.

The Apptainer/Singularity image file must be located on one of the scratch filesystems, the local disk of the worker node you are using, or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

If these limitations are a problem for you, please let us know via hpc@uantwerpen.be.

"}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

"}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the UAntwerpen-HPC infrastructure. However, if you use the --fakeroot option, you can create new Apptainer/Singularity images or convert Docker images.

Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of how to create an Apptainer/Singularity container image:

# prevent Apptainer from using $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move the container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
"}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

"}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

Create a job script like:

#!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

Create an example my_script.sh:

#!/bin/bash\n\n# prime factors\nfactor 1234567\n
"}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
#!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

You can download linear_regression.py from the official Tensorflow repository.

"}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

It is also possible to execute MPI jobs within a container, but the following requirements apply:

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

For example to compile an MPI example:

module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

Example MPI job script:

#!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
"}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
  1. Before starting, you should always check:

  2. Check your computer requirements upfront, and request the correct resources in your batch job script.

  3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

  4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

  5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

  6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory ($VSC_HOME), so changing to the directory you submitted the job from with cd $PBS_O_WORKDIR is usually the first thing to do. You will have your default environment, so don't forget to load the software with module load (a minimal job script sketch illustrating this and the previous point is shown after this list).

  7. In case your job is not running, use \"checkjob\". It will show why your job is not yet running. Sometimes commands might time out when the scheduler is overloaded.

  8. Submit your job and wait (be patient) ...

  9. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

  10. The runtime is limited by the maximum walltime of the queues.

  11. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

  12. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

  13. And above all, do not hesitate to contact the UAntwerpen-HPC staff at hpc@uantwerpen.be. We're here to help you.
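
As a minimal sketch of points 5 and 6 above (the module name, program and file names are placeholders):

#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:00:00\n\n# start from the directory the job was submitted from\ncd $PBS_O_WORKDIR\n# load the software you need (placeholder module name)\nmodule load foss\n# use the fast local scratch of the node for I/O-intensive work\ncp my_program input.dat $VSC_SCRATCH_NODE/\ncd $VSC_SCRATCH_NODE\n./my_program input.dat > output.dat\n# copy the results back before the job ends\ncp output.dat $PBS_O_WORKDIR/\n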

"}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

All nodes in the UAntwerpen-HPC cluster run the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a specific version of RedHat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the UAntwerpen-HPC must first be compiled for CentOS Linux release 7.8.2003 (Core). It also means that you first have to install all the required external software packages on the UAntwerpen-HPC.

Most commonly used compilers are already pre-installed on the UAntwerpen-HPC and can be used straight away. Many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

"}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-uantwerpen-hpc", "title": "Check the pre-installed software on the UAntwerpen-HPC", "text": "

To check all the available modules and their version numbers that are pre-installed on the UAntwerpen-HPC, enter:

$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or, if you want to check whether a specific piece of software, a compiler or an application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the capital letters in the module name, we searched for a case-insensitive name with the \"-i\" option.

When your required application is not available on the UAntwerpen-HPC, please contact any UAntwerpen-HPC member. Be aware of potential \"License Costs\"; \"Open Source\" software is often preferred.

"}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

To port a software program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., RedHat Enterprise Linux on our UAntwerpen-HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way which depends upon its detailed hardware, software, and setup: with device drivers for particular devices, using the installed operating system and supporting software components, and using different directories.

In some cases, software usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation; it is sufficient to transfer the specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

Porting your code to the CentOS Linux release 7.8.2003 (Core) platform is the responsibility of the end-user.

"}, {"location": "compiling_your_software/#compiling-and-building-on-the-uantwerpen-hpc", "title": "Compiling and building on the UAntwerpen-HPC", "text": "

Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

All the UAntwerpen-HPC nodes run the same version of the Operating System, i.e. CentOS Linux release 7.8.2003 (Core). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

A typical process looks like:

  1. Copy your software to the login-node of the UAntwerpen-HPC

  2. Start an interactive session on a compute node;

  3. Compile it;

  4. Test it locally;

  5. Generate your job scripts;

  6. Test it on the UAntwerpen-HPC

  7. Run it (in parallel);

We assume you've copied your software to the UAntwerpen-HPC. The next step is to request your private compute node.

$ qsub -I\nqsub: waiting for job 433253.leibniz to start\n
"}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

We now list the directory and explore the contents of the \"hello.c\" program:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include \"stdio.h\"\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\n}\n

The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

We first need to compile this C-file into an executable with the gcc-compiler.

First, check the command line options for \"gcc\" (the GNU C compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

$ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc20167 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc20167  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc20167  130 Sep 16 11:39 hello.pbs*\n

A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos or syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, the use of unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant, so that a code change that produces a warning does not go unnoticed.
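
To let gcc report more warnings than it does by default, you can, for example, add the -Wall option when compiling:

gcc -Wall -O2 -o hello hello.c\n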

Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

$ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

It seems to work; now run it on the UAntwerpen-HPC:

qsub hello.pbs\n

"}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

List the directory and explore the contents of the \"mpihello.c\" program:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

mpihello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nmain(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\n}\n

The \"mpihello.c\" program is a simple source file, written in C with MPI library calls.

Then, check the command line options for \"mpicc\" (the GNU C compiler with MPI extensions), compile, and list the contents of the directory again:

mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

A new file \"mpihello\" has been created. Note that this program has \"execute\" rights.

Let's test this program on the \"login\" node first:

$ ./mpihello\nHello World from Node 0.\n

It seems to work; now run it on the UAntwerpen-HPC:

qsub mpihello.pbs\n
"}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

We will compile this C/MPI -file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

module purge\nmodule load intel\n

Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

mpiicc -o mpihello mpihello.c\nls -l\n

Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

$ ./mpihello\nHello World from Node 0.\n

It seems to work; now run it on the UAntwerpen-HPC:

qsub mpihello.pbs\n

Note: The Antwerp University Association (AUHA) only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Below is an overview of the C, C++ and Fortran compilers.

C: gcc (GNU) / icc (Intel) for sequential programs, and mpicc (GNU) / mpiicc (Intel) for parallel programs with MPI. C++: g++ (GNU) / icpc (Intel) for sequential, and mpicxx (GNU) / mpiicpc (Intel) for MPI. Fortran: gfortran (GNU) / ifort (Intel) for sequential, and mpif90 (GNU) / mpiifort (Intel) for MPI."}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

Before you can really start using the UAntwerpen-HPC clusters, there are several things you need to do or know:

  1. You need to log on to one of the login nodes of the cluster using an SSH client, or use the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

  2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

  3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

  4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

"}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

ssh_exchange_identification: read: Connection reset by peer\n
"}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

The remaining content in this chapter is primarily focused on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

If you have any issues connecting to the UAntwerpen-HPC after you've followed these steps, see Issues connecting to login node to troubleshoot. When connecting from outside Belgium, you need a VPN client to connect to the network first.

"}, {"location": "connecting/#connect", "title": "Connect", "text": "

Open up a terminal and enter the following command to connect to the UAntwerpen-HPC.

ssh vsc20167@login.hpc.uantwerpen.be\n

Here, user vsc20167 wants to make a connection to the \"Leibniz\" cluster at University of Antwerp via the login node \"login.hpc.uantwerpen.be\", so replace vsc20167 with your own VSC id in the above command.

The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

A possible error message you can get if you previously saved your private key somewhere else than the default location ($HOME/.ssh/id_rsa):

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

In this case, use the -i option for the ssh command to specify the location of your private key. For example:

ssh -i /home/example/my_keys vsc20167@login.hpc.uantwerpen.be\n

Congratulations, you're on the UAntwerpen-HPC infrastructure now! To find out where you have landed you can print the current working directory:

$ pwd\n/user/antwerpen/201/vsc20167\n

Your new private home directory is \"/user/antwerpen/201/vsc20167\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the UAntwerpen-HPC.

$ cd /apps/antwerpen/tutorials\n$ ls\nIntro-HPC/\n

This directory currently contains all training material for the Introduction to the UAntwerpen-HPC. More relevant training material to work with the UAntwerpen-HPC can always be added later in this directory.

You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands.

As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

$ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

This directory contains:

  1. This HPC Tutorial (in either a Mac, Linux or Windows version).

  2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

cd examples\n

Tip

Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

Tip

For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

The first action is to copy the contents of the UAntwerpen-HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n
Upon connection, you will get a welcome message containing your last login timestamp and some pointers to information about the system. On Leibniz, the system will also show your disk quota.

Last login: Mon Feb  2 17:58:13 2015 from mylaptop.uantwerpen.be\n\n---------------------------------------------------------------\n\nWelcome to LEIBNIZ !\n\nUseful links:\n  https://vscdocumentation.readthedocs.io\n  https://vscdocumentation.readthedocs.io/en/latest/antwerp/tier2_hardware.html\n  https://www.uantwerpen.be/hpc\n\nQuestions or problems? Do not hesitate and contact us:\n  hpc@uantwerpen.be\n\nHappy computing!\n\n---------------------------------------------------------------\n\nYour quota is:\n\n                   Block Limits\n   Filesystem       used      quota      limit    grace\n   user             740M         3G       3.3G     none\n   data           3.153G        25G      27.5G     none\n   scratch        12.38M        25G      27.5G     none\n   small          20.09M        25G      27.5G     none\n\n                   File Limits\n   Filesystem      files      quota      limit    grace\n   user            14471      20000      25000     none\n   data             5183     100000     150000     none\n   scratch            59     100000     150000     none\n   small            1389     100000     110000     none\n\n---------------------------------------------------------------\n

You can exit the connection at anytime by entering:

$ exit\nlogout\nConnection to login.hpc.uantwerpen.be closed.\n

tip: Setting your Language right

You may encounter a warning message similar to the following one when connecting:

perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
or any other error message complaining about the locale.

This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier. Open the .bashrc on your local machine with your favourite editor and add the following lines:

$ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

tip: vi

To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can easily exit vi by entering \"ESC :wq\". To exit vi without saving your changes, enter \"ESC :q!\".

or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

"}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is by using scp or sftp via the secure OpenSSH protocol. Linux ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

"}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the UAntwerpen-HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.

Open an additional terminal window and check that you're working on your local machine.

$ hostname\n<local-machine-name>\n

If you're still using the terminal that is connected to the UAntwerpen-HPC, close the connection by typing \"exit\" in the terminal window.

For example, we will copy the (local) file \"localfile.txt\" to your home directory on the UAntwerpen-HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc20167\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc20167@login.hpc.uantwerpen.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

$ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc20167@login.hpc.uantwerpen.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

Connect to the UAntwerpen-HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

$ pwd\n/user/antwerpen/201/vsc20167\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-Linux-Antwerpen.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

$ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc20167 Sep 11 09:53 intro-HPC-Linux-Antwerpen.pdf\n

Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

$ scp vsc20167@login.hpc.uantwerpen.be:./docs/intro-HPC-Linux-Antwerpen.pdf .\nintro-HPC-Linux-Antwerpen.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

The file has been copied from the HPC to your local computer.

It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

scp -r dataset vsc20167@login.hpc.uantwerpen.be:scratch\n

If you don't use the -r option to copy a directory, you will run into the following error:

$ scp dataset vsc20167@login.hpc.uantwerpen.be:scratch\ndataset: not a regular file\n
"}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

The sftp is an equivalent of the ftp command, with the difference that it uses the secure ssh protocol to connect to the clusters.

One easy way of starting a sftp session is

sftp vsc20167@login.hpc.uantwerpen.be\n

Typical and popular commands inside an sftp session are:

cd ~/examples/fibo Move to the examples/fibo subdirectory on the remote machine (i.e., the UAntwerpen-HPC). ls Get a list of the files in the current directory on the UAntwerpen-HPC. get fibo.py Copy the file \"fibo.py\" from the UAntwerpen-HPC. get tutorial/HPC.pdf Copy the file \"HPC.pdf\" from the UAntwerpen-HPC, which is in the \"tutorial\" subdirectory. lcd test Move to the \"test\" subdirectory on your local machine. lcd .. Move up one level in the local directory. lls Get local directory listing. put test.py Copy the local file test.py to the UAntwerpen-HPC. put test1.py test2.py Copy the local file test1.py to the UAntwerpen-HPC and rename it to test2.py. bye Quit the sftp session. mget *.cc Copy all the remote files with extension \".cc\" to the local directory. mput *.h Copy all the local files with extension \".h\" to the UAntwerpen-HPC."}, {"location": "connecting/#using-a-gui", "title": "Using a GUI", "text": "

If you prefer a GUI to transfer files back and forth to the UAntwerpen-HPC, you can use your file browser. Open your file browser and press Ctrl+l.

This should open up an address bar where you can enter a URL. Alternatively, look for the \"connect to server\" option in your file browser's menu.

Enter: sftp://vsc20167@login.hpc.uantwerpen.be/ and press enter.

You should now be able to browse files on the UAntwerpen-HPC in your file browser.

"}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

See the section on rsync in chapter 5 of the Linux intro manual.

"}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

For instance, if you want to switch to the login node named ln2.leibniz.uantwerpen.vsc, you can use the following command while you are connected to the ln1.leibniz.uantwerpen.vsc login node on the HPC:

ssh ln2.leibniz.uantwerpen.vsc\n
This is also possible the other way around.

If you want to find out which login host you are connected to, you can use the hostname command.

$ hostname\nln2.leibniz.uantwerpen.vsc\n$ ssh ln1.leibniz.uantwerpen.vsc\n\n$ hostname\nln1.leibniz.uantwerpen.vsc\n

Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or in other online sources):
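
As a minimal illustration of the tmux workflow (assuming tmux is available on the login node; the session name is arbitrary):

# start a named session on a login node\ntmux new -s mysession\n# ... work, then detach with Ctrl+b followed by d ...\n# later, on the same login node, re-attach to the session\ntmux attach -t mysession\n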

"}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high-availability setup, users should add their cron scripts on the same login node to avoid any cron job script duplication.

To create a new cron script, first log in to an HPC-UGent login node as usual with your VSC user account (see section Connecting).

Check if any cron script is already set on the current login node with:

crontab -l\n

At this point you can add or edit (with the vi editor) any cron script by running the command:

crontab -e\n
"}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
 15 5 * * * ~/runscript.sh >& ~/job.out\n

where runscript.sh has these lines in this example:

runscript.sh
#!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.

Please note that you should log in to the same login node to edit your previously generated crontab tasks. If you are not on that node, you can always jump from one login node to another with:

ssh gligar07    # or gligar08\n
"}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

"}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

"}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

Before you use EasyBuild, you need to configure it:

"}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

This is where EasyBuild can find software sources:

export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
"}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

This directory is where EasyBuild will build software in. To have good performance, this needs to be on a fast filesystem.

export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
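
For example (a sketch; note that /dev/shm is node-local and memory-backed, so its contents do not persist):

export EASYBUILD_BUILDPATH=/dev/shm/$USER\n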

"}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

This is where EasyBuild will install the software (and accompanying modules) to.

For example, to let it use $VSC_DATA/easybuild, use:

export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.
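
For example, a sketch of the VO variant of the install location shown above:

export EASYBUILD_INSTALLPATH=$VSC_DATA_VO/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n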

"}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

module load EasyBuild\n
"}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

$ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

eb example-1.2.1-foss-2024a.eb --robot\n
"}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

To try to install example v1.2.5 with a different compiler toolchain:

eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
"}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

"}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

module use $EASYBUILD_INSTALLPATH/modules/all\n

It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux.
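
A minimal sketch of what such a snippet in your .bashrc could look like, based on the example settings above (adapt the paths to your own situation):

export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n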

"}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

As UAntwerpen-HPC system administrators, we often observe that the UAntwerpen-HPC resources are not used optimally (or wisely). For example, we regularly notice that several cores on a computing node are not utilised because one sequential program uses only one core on the node, or that users run I/O-intensive applications on nodes with \"slow\" network connections.

Users often tend to run their jobs without specifying specific PBS Job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can slow down the run time of your application, but also block UAntwerpen-HPC resources for other users.

Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the UAntwerpen-HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The UAntwerpen-HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

This chapter shows you how to measure:

  1. Walltime
  2. Memory usage
  3. CPU usage
  4. Disk (storage) needs
  5. Network bottlenecks

First, we allocate a compute node and move to our relevant directory:

qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
"}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

Test the time command:

$ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

The walltime can be specified in a job scripts as:

#PBS -l walltime=3:00:00:00\n

or on the command line

qsub -l walltime=3:00:00:00\n

It is recommended to always specify the walltime for a job.

"}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

The \"eat_mem\" application in the HPC examples directory just consumes and then releases memory, for the purpose of this test. It has one parameter: the number of gigabytes of memory to allocate.

First compile the program on your machine and then test it for 1 GB:

$ gcc -o eat_mem eat_mem.c\n$ ./eat_mem 1\nConsuming 1 gigabyte of memory.\n
"}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the options \"-m\" to see the results expressed in Mega-Bytes and the \"-t\" option to get totals.

$ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

Important is to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

"}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

The \"Monitor\" tool monitors applications in terms of memory and CPU usage, as well as the size of temporary files. Note that currently only single-node jobs are supported; MPI support may be added in a future release.

To start using monitor, first load the appropriate module. Then we study the \"eat_mem.c\" program and compile it:

$ module load monitor\n$ cat eat_mem.c\n$ gcc -o eat_mem eat_mem.c\n

Starting a program to monitor is very straightforward; you just add the \"monitor\" command before the regular command line.

$ monitor ./eat_mem 3\ntime (s) size (kb) %mem %cpu\nConsuming 3 gigabyte of memory.\n5  252900 1.4 0.6\n10  498592 2.9 0.3\n15  743256 4.4 0.3\n20  988948 5.9 0.3\n25  1233612 7.4 0.3\n30  1479304 8.9 0.2\n35  1723968 10.4 0.2\n40  1969660 11.9 0.2\n45  2214324 13.4 0.2\n50  2460016 14.9 0.2\n55  2704680 16.4 0.2\n60  2950372 17.9 0.2\n65  3167280 19.2 0.2\n70  3167280 19.2 0.2\n75  9264  0 0.5\n80  9264  0 0.4\n

Whereby:

  1. The first column shows you the elapsed time in seconds. By default, all values will be displayed every 5\u00a0seconds.
  2. The second column shows you the used memory in kb. We note that the memory slowly increases up to just over 3\u00a0GB (3GB is 3,145,728\u00a0KB), and is released again.
  3. The third column shows the memory utilisation, expressed in percentages of the full available memory. At full memory consumption, 19.2% of the memory was being used by our application. With the free command, we have previously seen that we had a node of 16\u00a0GB in this example. 3\u00a0GB is indeed more or less 19.2% of the full available memory.
  4. The fourth column shows you the CPU utilisation, expressed in percentages of a full CPU load. As there are no computations done in our exercise, the value remains very low (i.e.\u00a00.2%).

Monitor will write the CPU usage and memory consumption of the simulation to standard error.

By default, monitor samples the program's metrics every 5 seconds. Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a log file. The latter can be specified as follows:

$ monitor -l test1.log eat_mem 2\nConsuming 2 gigabyte of memory.\n$ cat test1.log\n

For long-running programs, it may be convenient to limit the output to, e.g., the last minute of the program's execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

$ monitor -l test2.log -n 12 eat_mem 4\nConsuming 4 gigabyte of memory.\n

Note that this option is only available when monitor writes its metrics to a\u00a0log file, not when standard error is used.

The interval at\u00a0which monitor will show the metrics can be modified by specifying delta, the sample rate:

$ monitor -d 1 ./eat_mem 3\nConsuming 3 gigabyte of memory.\n

Monitor will now print the program's metrics every second. Note that the\u00a0minimum delta value is 1\u00a0second.

Alternative options to monitor the memory consumption are the \"top\" or the \"htop\" command.

top

provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

htop

is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.
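
For example, to limit top to your own processes while logged in to a compute node (replace the example VSC id with your own):

top -u vsc20167\n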

"}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

Once you have gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

Sequential or single-node applications:

The maximum amount of physical memory used by the job can be specified in a job script as:

#PBS -l mem=4gb\n

or on the command line

qsub -l mem=4gb\n

This setting is ignored if the number of nodes is not\u00a01.

Parallel or multi-node applications:

When you are running a parallel application over multiple cores, you can also specify the memory requirements per processor (pmem). This directive specifies the maximum amount of physical memory used by any process in the job.

For example, if the job would run four processes and each would use up to 2 GB (gigabytes) of memory, then the memory directive would read:

#PBS -l pmem=2gb\n

or on the command line

$ qsub -l pmem=2gb\n

(and of course this would need to be combined with a CPU cores directive such as nodes=1:ppn=4). In this example, you request 8\u00a0GB of memory in total on the node.
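
Putting both directives together, a minimal sketch of such a resource request in a job script (reusing the example values above) could be:

#PBS -l nodes=1:ppn=4\n#PBS -l pmem=2gb\n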

"}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

Users are encouraged to fully utilise all the available cores on a compute node. Once the required number of cores and nodes has been properly specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

"}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

The file /proc/cpuinfo contains information about your CPU architecture, such as the number of CPUs, threads and cores, the CPU caches, the CPU family and model, and much more. So, if you want to detect how many cores are available on a specific machine:

$ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

Or if you want to see it in a more readable format, execute:

$ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
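
If you are only interested in the number of cores, you can also count them directly, for example with the standard grep or nproc commands (output shown for the 8-core node of the example above):

$ grep -c '^processor' /proc/cpuinfo\n8\n$ nproc\n8\n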

Note

Unless you want information about the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

In order to specify the number of nodes and the number of processors per node in your job script, use:

#PBS -l nodes=N:ppn=M\n

or with equivalent parameters on the command line

qsub -l nodes=N:ppn=M\n

This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.
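
For example, to request two full nodes on a cluster with eight cores per compute node (as in the example above), the directive could read:

#PBS -l nodes=2:ppn=8\n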

Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

"}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

The previously used \"monitor\" tool also shows the overall CPU-load. The \"eat_cpu\" program performs a multiplication of two randomly filled (1500 \\times 1500) matrices and is simply written to consume a lot of \"cpu\".

We first load the monitor modules, study the \"eat_cpu.c\" program and compile it:

$ module load monitor\n$ cat eat_cpu.c\n$ gcc -o eat_cpu eat_cpu.c\n

And then start to monitor the eat_cpu program:

$ monitor -d 1 ./eat_cpu\ntime  (s) size (kb) %mem %cpu\n1  52852  0.3 100\n2  52852  0.3 100\n3  52852  0.3 100\n4  52852  0.3 100\n5  52852  0.3  99\n6  52852  0.3 100\n7  52852  0.3 100\n8  52852  0.3 100\n

We notice that the program keeps its CPU nicely busy at 100%.

Some processes spawn one or more sub-processes. In that case, the metrics shown by monitor are aggregated over the process and all of its sub-processes (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100%.

Some (well, since this is a UAntwerpen-HPC Cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100%. When programs of this type are running on a computer with n cores, the CPU usage can go up to (\\text{n} \\times 100\\%).

This could also be monitored with the htop command:

htop\n
Example output:
  1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

The advantage of htop is that it shows you the cpu utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the cpu utilisation per processor with monitor and htop.
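
As a variation on this exercise, the four instances can also be started in the background from a single terminal (a minimal sketch, assuming eat_cpu has been compiled in the current directory):

$ for i in 1 2 3 4; do ./eat_cpu & done\n$ htop\n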

If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by htop found your process active on the CPU. The rest of the time your application was waiting. (It is important to remember that a CPU is a discrete state machine: it really can only be at 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU; the CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU rather than waiting on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

"}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all the other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node sit idle without reason.

But how can you maximise?

  1. Configure your software (e.g., to exactly use the available number of processors in a node).
  2. Develop your parallel program in a smart way.
  3. Request a specific type of compute node (e.g., Harpertown, Westmere), each of which has a specific number of cores.
  4. Correct your request for CPUs in your job script.
"}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

The system load is the number of applications running or waiting to run on the compute node. In a system with, for example, four CPUs, a load average of 3.61 would indicate that, on average, 3.61 processes were ready to run and each one could be scheduled onto a CPU.

The load averages differ from CPU percentage in two significant ways:

  1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
  2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
"}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

What is the \"optimal load\" rule of thumb?

The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load should be between 0.7 and 1.0 per processor.

In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time might be more than one per processor.

The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

  1. When you are running computationally intensive applications, one application per processor will generate the optimal load.
  2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

The optimal number of applications on a machine could be determined empirically by performing a number of stress tests whilst checking for the highest throughput. There is, however, currently no way on the UAntwerpen-HPC to dynamically specify the maximum number of applications that may run per core. The UAntwerpen-HPC scheduler will not launch more than one process per core.

How the cores are spread out over CPUs does not matter as far as the load is concerned: two quad-cores perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. It's all eight cores for these purposes.

"}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

The uptime command will show us the average load

$ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

$ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
You can also read it in the htop command.
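
The same three load averages can also be read directly from /proc/loadavg. In the sample output below, the three averages reuse the values from the uptime example above; the remaining two fields (running/total scheduling entities and the most recently created process ID) are purely illustrative:

$ cat /proc/loadavg\n0.60 0.41 0.41 1/1448 12345\n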

"}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all the other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node sit idle without reason.

But how can you maximise?

  1. Profile your software to improve its performance.
  2. Configure your software (e.g., to exactly use the available number of processors in a node).
  3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
  4. Request a specific type of compute node (e.g., Harpertown, Westmere), each of which has a specific number of cores.
  5. Correct your request for CPUs in your job script.

And then check again.

"}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

Some programs generate intermediate or output files, the size of which may also be a useful metric.

Remember that your available disk space on the UAntwerpen-HPC online storage is limited, and that environment variables are available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.
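
To see where these locations point to on the cluster you are logged in to, you can simply print the variables, e.g.:

$ echo $VSC_DATA\n$ echo $VSC_SCRATCH\n$ echo $VSC_SCRATCH_NODE\n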

We first load the monitor modules, study the \"eat_disk.c\" program and compile it:

$ module load monitor\n$ cat eat_disk.c\n$ gcc -o eat_disk eat_disk.c\n

The monitor tool provides an option (-f) to display the size of one or more files:

$ monitor -f $VSC_SCRATCH/test.txt ./eat_disk\ntime (s) size (kb) %mem %cpu\n5  1276  0 38.6 168820736\n10  1276  0 24.8 238026752\n15  1276  0 22.8 318767104\n20  1276  0 25 456130560\n25  1276  0 26.9 614465536\n30  1276  0 27.7 760217600\n...\n

Here, the size of the file \"test.txt\" in directory $VSC_SCRATCH will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by \",\".
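
For example, to monitor two files at once (test2.txt is a hypothetical second output file, used here purely for illustration):

$ monitor -f $VSC_SCRATCH/test.txt,$VSC_SCRATCH/test2.txt ./eat_disk\n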

It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to the section How much disk space do I get? on quotas to check your quota and for tools to find out which files consume your \"quota\" (a generic alternative is sketched after the list below).

Several actions can be taken, to avoid storage problems:

  1. Be aware of all the files that are generated by your program. Also check out the hidden files.
  2. Check your quota consumption regularly.
  3. Clean up your files regularly.
  4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, you can move your files to the $VSC_DATA directories.
  5. Make sure your programs clean up their temporary files after execution.
  6. Move your output results to your own computer regularly.
  7. Anyone can request more disk space from the UAntwerpen-HPC staff, but you will have to duly justify your request.
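
As a generic way to see which directories consume the most space (complementary to the quota tools referred to above), you can for instance use the standard du command:

$ du -h --max-depth=1 $VSC_DATA | sort -h\n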
"}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

Users can examine their network activities with the htop command. When your processors are 100% busy but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that they lose a lot of time on inter-process communication.

Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high bandwidth, low latency network that enables large parallel jobs to run as efficiently as possible.

The parameter to add in your job script would be:

#PBS -l ib\n

If, for some reason, the gigabit Ethernet network is sufficient for your jobs, you can specify:

#PBS -l gbe\n
"}, {"location": "fine_tuning_job_specifications/#some-more-tips-on-the-monitor-tool", "title": "Some more tips on the Monitor tool", "text": ""}, {"location": "fine_tuning_job_specifications/#command-lines-arguments", "title": "Command Lines arguments", "text": "

Many programs, e.g., MATLAB, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

$ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m\n

The use of -- will ensure that monitor does not get confused by MATLAB's -nojvm and -nodisplay options.

"}, {"location": "fine_tuning_job_specifications/#exit-code", "title": "Exit Code", "text": "

Monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

When monitor terminates in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.
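
A minimal sketch of overriding this exit code (the value 200 and the non-existent log file path are chosen purely for illustration, so that monitor fails to create the log file):

$ export MONITOR_EXIT_ERROR=200\n$ monitor -l /no/such/dir/test.log ./eat_mem 3\n$ echo $?\n200\n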

"}, {"location": "fine_tuning_job_specifications/#monitoring-a-running-process", "title": "Monitoring a running process", "text": "

It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

$ monitor -p 18749\n
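
One way (among others) to find the process ID, assuming the program is called eat_cpu as in the earlier exercise (the PID shown matches the example above):

$ ps -C eat_cpu -o pid,cmd\n  PID CMD\n18749 ./eat_cpu\n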

Note that this feature can be (ab)used to monitor specific sub-processes.

"}, {"location": "getting_started/", "title": "Getting Started", "text": "

Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the UAntwerpen-HPC and submitting your very first job. We'll also walk you through the process step by step using a practical example.

In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

Before proceeding, read the introduction to HPC to gain an understanding of the UAntwerpen-HPC and related terminology.

"}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

To get access to the UAntwerpen-HPC, visit Getting an HPC Account.

If you have not used Linux before, please learn some basics first before continuing. (see Appendix C - Useful Linux Commands)

"}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
  1. Connect to the login nodes
  2. Transfer your files to the UAntwerpen-HPC
  3. Optional: compile your code and test it
  4. Create a job script and submit your job
  5. Wait for job to be executed
  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

"}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

There are two options to connect: using the ssh command in a terminal, or via the web portal.

Since your operating system is Linux, it is recommended to use the ssh command in a terminal, as this gives the most flexibility.

Assuming you have already generated SSH keys in the previous step (Getting Access), and that they are in a default location, you should now be able to login by running the following command:

ssh vsc20167@login.hpc.uantwerpen.be\n

Use your own VSC account id

Replace vsc20167 with your VSC account id (see https://account.vscentrum.be)

Tip

You can also still use the web portal (see shell access on web portal)

Info

When having problems see the connection issues section on the troubleshooting page.

"}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

Now that you can login, it is time to transfer files from your local computer to your home directory on the UAntwerpen-HPC.

Download the tensorflow_mnist.py and run.sh example scripts to your computer (from here).

On your local machine you can run:

curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

scp tensorflow_mnist.py run.sh vsc20167@login.hpc.uantwerpen.be:~\n

ssh  vsc20167@login.hpc.uantwerpen.be\n

Use your own VSC account id

Replace vsc20167 with your VSC account id (see https://account.vscentrum.be)

Info

For more information about transferring files or scp, see transfer files from/to HPC.

When running ls in your session on the UAntwerpen-HPC, you should see the two files listed in your home directory (~):

$ ls ~\nrun.sh tensorflow_mnist.py\n

If you do not see these files, make sure you uploaded them to your home directory.

"}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

Our job script looks like this:

run.sh

#!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
As you can see this job script will run the Python script named tensorflow_mnist.py.

The jobs you submit are by default executed on cluster/{{ defaultcluster }}; you can swap to another cluster by issuing the following command:

module swap cluster/{{ othercluster }}\n

Tip

When submitting jobs that require only a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

$ qsub run.sh\n433253.leibniz\n

This command returns a job identifier (433253.leibniz) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.

Make sure you understand what the module command does

Note that the module commands only modify environment variables. For instance, running module swap cluster/{{ othercluster }} will update your shell environment so that qsub submits a job to the {{ othercluster }} cluster, but your active shell session is still running on the login node.

It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are executed: they will still be run on the login node you are on.

When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like {{ othercluster }}).
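
A small illustration of this point (using the same placeholder cluster name as above):

$ module swap cluster/{{ othercluster }}\n$ hostname            # still executed on, and printing the name of, the login node\n$ qsub run.sh         # but this job will be scheduled on the {{ othercluster }} cluster\n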

For detailed information about module commands, read the running batch jobs chapter.

"}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

You can get an overview of the active jobs using the qstat command:

$ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:00  Q {{ othercluster }}\n

Eventually, after entering qstat again you should see that your job has started running:

$ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:01  R {{ othercluster }}\n

If you don't see your job in the output of the qstat command anymore, your job has likely completed.

Read this section on how to interpret the output.

"}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

When your job finishes, it generates 2 output files: one for standard output (run.sh.o433253.leibniz in our example) and one for standard error (run.sh.e433253.leibniz).

By default, these are located in the directory where you issued qsub.

In our example, when running ls in the current directory, you should see 2 new files:
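
A minimal sketch of what that directory listing could look like at this point (the exact job ID will of course differ):

$ ls\nrun.sh  run.sh.e433253.leibniz  run.sh.o433253.leibniz  tensorflow_mnist.py\n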

Info

run.sh.e433253.leibniz should be empty (no errors or warnings).

Use your own job ID

Replace 433253.leibniz with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

When examining the contents of run.sh.o433253.leibniz you will see something like this:

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

Warning

When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

"}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "

For more examples see Program examples and Job script examples

"}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

module swap cluster/joltik\n

To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

module swap cluster/accelgor\n

Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

"}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

Note that, due to a bug in Slurm, you are currently not able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@uantwerpen.be.

"}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

See https://www.ugent.be/hpc/en/infrastructure.

"}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

There are 2 main ways to ask for GPUs as part of a job:

Some background:

"}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

Some important attention points:

"}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

Use module avail to check for centrally installed software.

The subsections below only cover a couple of installed software packages, more are available.

"}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

Please consult module avail GROMACS for a list of installed versions.

"}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

Please consult module avail Horovod for a list of installed versions.

Horovod supports TensorFlow, Keras, PyTorch and MXNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mympirun. (Horovod also provides its own wrapper horovodrun; it is unclear whether it handles process placement and other aspects correctly.)

At least for simple TensorFlow benchmarks, Horovod looks to be a bit faster than TensorFlow's usual automatic multi-GPU support, but this comes at the cost of the code modifications required to use Horovod.

"}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

Please consult module avail PyTorch for a list of installed versions.

"}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

Please consult module avail TensorFlow for a list of installed versions.

Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

"}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
#!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
"}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

Please consult module avail AlphaFold for a list of installed versions.

For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

"}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

In case of questions or problems, please contact the UAntwerpen-HPC via hpc@uantwerpen.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

"}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor) jobs on this cluster should normally start more or less immediately. The tradeoff is that performance must not be an issue for the submitted jobs. This means that typical workloads for this cluster should be limited to:

"}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

module swap cluster/donphan\n

Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

"}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

Some limits are in place for this cluster:

In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

"}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

"}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

\"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

"}, {"location": "introduction/#what-is-the-uantwerpen-hpc", "title": "What is the UAntwerpen-HPC?", "text": "

The UAntwerpen-HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number-crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

The UAntwerpen-HPC relies on parallel-processing technology to offer University of Antwerp researchers an extremely fast solution for all their data processing needs.

The UAntwerpen-HPC consists of:

| In technical terms | ... in human terms |
| --- | --- |
| over 280 nodes and over 11000 cores | ... or the equivalent of 2750 quad-core PCs |
| over 500 Terabyte of online storage | ... or the equivalent of over 60000 DVDs |
| up to 100 Gbit InfiniBand fiber connections | ... or allowing to transfer 3 DVDs per second |

The UAntwerpen-HPC currently consists of:

Leibniz:

  1. 144 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM, 120 GB local disk

  2. 8 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM, 120 GB local disk

  3. 24 \"hopper\" compute nodes (recovered from the former Hopper cluster) with 2 ten core Intel E5-2680v2 CPUs (Ivy Bridge generation, 2.8 GHz), 256 GB memory, 500 GB local disk

  4. 2 GPGPU nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU, 120 GB local disk

  5. 1 vector computing node with 1 12-core Intel Xeon Gold 6126 (Skylake generation, 2.6 GHz), 96 GB RAM and 2 NEC SX-Aurora Vector Engines type 10B (per card 8 cores @1.4 GHz, 48 GB HBM2), 240 GB local disk

  6. 1 Xeon Phi node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon Phi 7220P PCIe card with 16 GB of RAM, 120 GB local disk

  7. 1 visualisation node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 256 GB RAM and with a NVIDIA P5000 GPU, 120 GB local disk

The nodes are connected using an InfiniBand EDR network except for the \"hopper\" compute nodes that utilize FDR10 InfiniBand.

Vaughan:

  1. 104 compute nodes with 2 32-core AMD Epyc 7452 (2.35 GHz) and 256 GB RAM, 240 GB local disk

The nodes are connected using an InfiniBand HDR100 network.

All the nodes in the UAntwerpen-HPC run under the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a clone of \"RedHat Enterprise Linux\", with cgroups support.

Two tools perform the Job management and job scheduling:

  1. TORQUE: a resource manager (based on PBS);

  2. Moab: job scheduler and management tools.

For maintenance and monitoring, we use:

  1. Ganglia: monitoring software;

  2. Icinga and Nagios: alert manager.

"}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

The HPC infrastructure is not a magic computer that automatically:

  1. runs your PC-applications much faster for bigger problems;

  2. develops your applications;

  3. solves your bugs;

  4. does your thinking;

  5. ...

  6. allows you to play games even faster.

The UAntwerpen-HPC does not replace your desktop computer.

"}, {"location": "introduction/#is-the-uantwerpen-hpc-a-solution-for-my-computational-needs", "title": "Is the UAntwerpen-HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

It is also possible to run programs that require user interaction (pushing buttons, entering input data, etc.) on the UAntwerpen-HPC. Although technically possible, the UAntwerpen-HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the UAntwerpen-HPC staff can reveal whether the UAntwerpen-HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

"}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

"}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

The two parallel programming paradigms most used in HPC are:

Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

"}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

It is perfectly possible to also run purely sequential programs on the UAntwerpen-HPC.

Running your sequential programs on the most modern and fastest computers in the UAntwerpen-HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the UAntwerpen-HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.

"}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, CentOS Linux release 7.8.2003 (Core).

For the most common programming languages, a compiler is available on CentOS Linux release 7.8.2003 (Core). Supported and common programming languages on the UAntwerpen-HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

Supported and commonly used compilers are GCC, Clang, the Intel compilers and the Java compiler.

Commonly used software packages are:

Commonly used libraries are Intel MKL, FFTW, HDF5, PETSc, Intel MPI and OpenMPI. Additional software can be installed \"on demand\". Please contact the UAntwerpen-HPC staff to see whether the UAntwerpen-HPC can handle your specific requirements.

"}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

All nodes in the UAntwerpen-HPC cluster run under CentOS Linux release 7.8.2003 (Core), which is a specific version of RedHat Enterprise Linux. This means that all programs (executables) should be compiled for CentOS Linux release 7.8.2003 (Core).

Users can connect from any computer in the University of Antwerp network to the UAntwerpen-HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the UAntwerpen-HPC.

A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

"}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

A typical workflow looks like:

  1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

  2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

  3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

  4. Create a job script and submit your job (see Running batch jobs)

  5. Get some coffee and be patient:

    1. Your job gets into the queue

    2. Your job gets executed

    3. Your job finishes

  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

"}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

When you think that the UAntwerpen-HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting an HPC Account), read Connecting to the HPC infrastructure and \"Setting up the environment\", and explore the chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications, which will help you to transfer and run your programs on the UAntwerpen-HPC cluster.

Do not hesitate to contact the UAntwerpen-HPC staff for any help.

  1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

"}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

simple_jobscript.sh
#!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
"}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

Here's an example of a single-core job script:

single_core.sh
#!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
  1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

  2. A module for Python 3.6 is loaded, see also section Modules.

  3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

  4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

  5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to $VSC_DATA, with a unique filename based on $PBS_JOBID. For a list of possible storage locations, see subsection Pre-defined user directories.

"}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

Here's an example of a multi-core job script that uses mympirun:

multi_core.sh
#!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

"}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

If you want to run a job, but you are not sure it will finish before the walltime runs out, and you want to copy data back afterwards, you have to stop the main command before the walltime expires and then copy the data back.

This can be done with the timeout command. This command sets a limit of time a program can run for, and when this limit is exceeded, it kills the program. Here's an example job script using timeout:

timeout.sh
#!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

example_program.sh
#!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
"}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plaintext. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs make it a useful tool for data analysis, machine learning and educational purposes.

"}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

"}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters > >_login Shell Access.

We can see all available versions of the SciPy module by using module avail SciPy-bundle:

$ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

Not all modules will work for every notebook; we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

Module names include the toolchain that was used to install the module (for example, gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that that module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

$ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

$ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

$ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.

"}, {"location": "known_issues/", "title": "Known issues", "text": "

This page provides details on a couple of known problems, and the workarounds that are available for them.

If you have any questions related to these issues, please contact the UAntwerpen-HPC.

"}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

This error means that an internal problem has occurred in OpenMPI.

"}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

"}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

"}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

A workaround has been implemented in mympirun (version 5.4.0).

Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

module load vsc-mympirun\n

and launch your MPI application using the mympirun command.

For more information, see the mympirun documentation.

"}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
"}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

"}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

There are two important motivations to engage in parallel programming.

  1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

  2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that, in principle, you can split up your computations into groups and run each group on its own core.

There are multiple ways to achieve parallel programming. The table below gives a (non-exhaustive) overview of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

Tool | Available language bindings | Limitations
Raw threads (pthreads, boost::threading, ...) | Threading libraries are available for all common programming languages | Threads are limited to shared memory systems. They are more often used on single node systems rather than for UAntwerpen-HPC. Thread management is hard.
OpenMP | Fortran/C/C++ | Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelized by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelize the work load on each node and MPI (see below) for communication between nodes.
Lightweight threads with clever scheduling, Intel TBB, Intel Cilk Plus | C/C++ | Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler enabling the programmer to focus on parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the work load on each node and MPI (see below) for communication between nodes.
MPI | Fortran/C/C++, Python | Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication.
Global Arrays library | C/C++, Python | Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and one sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

Tip

You can request more nodes/cores by adding the following line to your run script.

#PBS -l nodes=2:ppn=10\n
This queues a job that claims 2 nodes and 10 cores per node (20 cores in total).

Warning

Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

"}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

A multithreaded program can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviour. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

Go to the example directory:

cd ~/examples/Multi-core-jobs-Parallel-Computing\n

Note

If the example directory is not yet present, copy it to your home directory:

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

Study the example first:

T_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 0;\n}\n

And compile it (whilst including the thread library) and run and test it on the login-node:

$ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

Now, run it on the cluster and check the output:

$ qsub T_hello.pbs\n433253.leibniz\n$ more T_hello.pbs.o433253.leibniz\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

Tip

If you plan to engage in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

"}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

Here is the general code structure of an OpenMP program:

#include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

"}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

"}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

Parallelising for loops is really simple (see code below). By default, loop iteration counters in OpenMP loop constructs (in this case the i variable) in the for loop are set to private variables.

omp1.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

$ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

Now run it in the cluster and check the result again.

$ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
"}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

Using OpenMP you can specify something called a \"critical\" section of code. This is code that is executed by all threads, but only by one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, without having to worry about other threads writing to that global variable at the same time (a collision).

omp2.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

$ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

Now run it in the cluster and check the result again.

$ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
"}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). Indeed we used this paradigm in the code example above, where we used the \"critical code\" directive to accomplish this. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to more easily implement this.

omp3.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

$ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

Now run it in the cluster and check the result again.

$ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
"}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

There are a host of other directives you can issue using OpenMP.

Some other clauses of interest are:

  1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

  2. nowait: threads will not wait until everybody is finished

  3. schedule(type, chunk): allows you to specify how tasks are spawned out to threads in a for loop. There are three types of scheduling you can specify: static, dynamic and guided.

  4. if: allows you to parallelise only if a certain condition is met

  5. ...\u00a0and a host of others

Tip

If you plan to engage in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation. 2005.

"}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

Study the MPI program and the PBS file:

mpi_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
mpi_hello.pbs
#!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

and compile it:

$ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

mpiicc is a wrapper around the Intel C compiler icc for compiling MPI programs (see the chapter on compilation for details).

Run the parallel program:

$ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc20167 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc20167 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc20167    0 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw------- 1 vsc20167  697 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw-r--r-- 1 vsc20167  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o433253.leibniz\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process has its own rank, the total number of processes in the world, and the ability to communicate between them either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without recompilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.

Tip

If you plan to engage in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

"}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

A frequently occurring characteristic of scientific computations is their focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or with (ii) different input files.

These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The user wants to run their job once for each instance of the parameter values.

One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs: huge numbers of small jobs create a lot of overhead and can slow down the whole cluster. It is better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

The \"Worker framework\" has been developed to address this issue.

It can handle many small jobs determined by:

parameter variations

i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

job arrays

i.e., each individual job gets a unique numeric identifier.

Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

"}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

First go to the right directory:

cd ~/examples/Multi-job-submission/par_sweep\n

Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

$ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

par_sweep/weather
#!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

A job script that would run this as a job for the first parameters (p01) would then look like:

par_sweep/weather_p01.pbs
#!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

To submit the job, the user would use:

 $ qsub weather_p01.pbs\n
However, the user wants to run this program for many parameter instances, e.g., they want to run the program on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, exported from an RDBMS, or written by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

$ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.

In order to make our PBS file generic, it can be modified as follows:

par_sweep/weather.pbs
#!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

Note that:

  1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

  2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

  3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., just over 3 hours; we request 4 hours to be on the safe side.
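
As a quick sanity check of such an estimate, you can let the shell do the (integer) arithmetic; this just repeats the calculation above:

$ echo $((100 * 15 / 8)) minutes\n187 minutes\n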

The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

$ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n433253.leibniz\n

Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

Warning

When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

module swap env/slurm/donphan\n

instead of

module swap cluster/donphan\n
We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

"}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

First go to the right directory:

cd ~/examples/Multi-job-submission/job_array\n

As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

The following bash script would submit these jobs all one by one:

#!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output$i -i input$i myprog.pbs\ndone\n

As said before, submitting jobs one by one like this puts an unnecessary burden on the job scheduler.

Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

The details are

  1. a job is submitted for each number in the range;

  2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

  3. each job has PBS_ARRAYID set to its number which allows the script/program to specialise for that job

The job could have been submitted using:

qsub -t 1-100 my_prog.pbs\n

The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

A typical job script for use with job arrays would look like this:

job_array/job_array.pbs
#!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

$ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file \\#99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory, in files output_1.dat, output_2.dat, ..., output_100.dat.

job_array/test_set
#!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

job_array/test_set.pbs
#!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

Note that

  1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

  2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

The job is now submitted as follows:

$ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n433253.leibniz\n

The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

$ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n433253.leibniz  test_set.pbs  vsc20167          0 Q\n

And you can now check the generated output files:

$ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
"}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

Often, an embarrassingly parallel computation can be abstracted to three simple steps:

  1. a preparation phase in which the data is split up into smaller, more manageable chunks;

  2. on these chunks, the same algorithm is applied independently (these are the work items); and

  3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

cd ~/examples/Multi-job-submission/map_reduce\n

The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

First study the scripts:

map_reduce/pre.sh
#!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
map_reduce/post.sh
#!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

Then one can submit a MapReduce style job as follows:

$ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n433253.leibniz\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

"}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

The \"Worker Framework\" will be effective when

  1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

  2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

"}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log433253.leibniz, assuming the job's ID is 433253.leibniz. To keep an eye on the progress, one can use:

tail -f run.pbs.log433253.leibniz\n

Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

watch -n 60 wsummarize run.pbs.log433253.leibniz\n

This will summarise the log file every 60 seconds.

"}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

#!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 ./weather -t $temperature  -p $pressure  -v $volume\n

Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
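
As a sketch of that idea (the timelimit column name below is hypothetical and not part of the provided example files), the CSV file could get an extra column, and the work item command could then use the corresponding variable:

$ more data.csv\ntemperature, pressure, volume, timelimit\n293, 1.0e5, 107, 00:20:00\n294, 1.0e5, 106, 00:30:00\n...\n

with the last line of the job script above changed accordingly:

timedrun -t $timelimit ./weather -t $temperature  -p $pressure  -v $volume\n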

Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

"}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"433253.leibniz\".

wresume -jobid 433253.leibniz\n

This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

wresume -l walltime=1:30:00 -jobid 433253.leibniz\n

Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate at all, i.e., neither successfully nor with a reported failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

wresume -jobid 433253.leibniz -retry\n

By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

"}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

$ wsub -help\n### usage: wsub  -batch <batch-file>          \n#                [-data <data-files>]         \n#                [-prolog <prolog-file>]      \n#                [-epilog <epilog-file>]      \n#                [-log <log-file>]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t <array-req>]             \n#                [<pbs-qsub-options>]\n#\n#   -batch <batch-file>   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data <data-files>    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog <prolog-file> : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog <epilog-file> : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t <array-req>        : qsub's PBS array request options, e.g., 1-10\n#   <pbs-qsub-options>    : options passed on to the queue submission\n#                           command\n
"}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

To check the available versions of worker, use the following command:

$ module avail worker\n
  1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

"}, {"location": "mympirun/", "title": "Mympirun", "text": "

mympirun is a tool that makes it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

"}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

Before using mympirun, we first need to load its module:

module load vsc-mympirun\n

As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.
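
In a job script, this could look as follows (a minimal sketch; the requested resources and the example program are placeholders you should adapt):

#!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\n\n# use the version-less module so the latest mympirun is picked up\nmodule load vsc-mympirun\n\n# load the module(s) providing your MPI application here, then run it\nmympirun ./example 5\n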

"}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

By default, mympirun starts one process per core on every node assigned to your job. So if you were assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

"}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

This is the most commonly used option for controlling the number of processes.

The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

$ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
"}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses double the number of processes it normally would; and --multi, which does the same as --double, but takes a multiplier (instead of the implied factor 2 with --double).
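
For example (sketches only, using the mpi_hello test program mentioned above):

# start exactly 8 processes in total\nmympirun --universe 8 ./mpi_hello\n\n# start twice as many processes as mympirun would by default\nmympirun --double ./mpi_hello\n\n# start 3 times as many processes as mympirun would by default\nmympirun --multi 3 ./mpi_hello\n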

See vsc-mympirun README for a detailed explanation of these options.

"}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

$ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
"}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC UAntwerpen-HPC infrastructure.

"}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

"}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

The best practices outlined here focus specifically on the use of OpenFOAM on the VSC UAntwerpen-HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

Other useful OpenFOAM documentation:

"}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

"}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

$ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

module load OpenFOAM/11-foss-2023a\n
"}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location to this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

source $FOAM_BASH\n
"}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
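
Putting these steps together, preparing your environment in a shell session or job script could look like this (a sketch; pick the OpenFOAM module that matches your case, and only source RunFunctions if you need those utility functions):

module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n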

"}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

unset FOAM_SIGFPE\n

Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise terminate the simulation. However, it does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are occurring.

As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

"}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

After running the simulation, some post-processing steps are typically performed:

Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run before or after the job that runs the actual simulation (either on the HPC infrastructure or elsewhere), or as part of the job that runs the OpenFOAM simulation itself.

Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

"}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

"}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.
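
Assuming the output of the main command was saved to a file (for example interFoam.out, as in the example job script at the end of this chapter), a quick check could look like the following; the exact formatting of the header may differ between OpenFOAM versions:

$ grep nProcs interFoam.out\nnProcs : 4\n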

"}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

See Basic usage for how to get started with mympirun.

To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
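For example, where the OpenFOAM documentation would suggest something like mpirun -np 4 interFoam -parallel, a minimal sketch of the equivalent on the HPC infrastructure (assuming the OpenFOAM and vsc-mympirun modules are already loaded) is:

source $FOAM_BASH\n# pass the OpenFOAM environment down to all MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# no '-np' needed: mympirun detects the number of available cores itself\nmympirun interFoam -parallel\n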

"}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

number of processor directories = 4 is not equal to the number of processors = 16\n

In this case, the case was decomposed into 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either adjust the domain decomposition (so that the number of subdomains equals the number of cores available in your job), or control the number of processes that mympirun starts.

See Controlling number of processes to control the number of processes mympirun will start.

The latter can be useful if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has a significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
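A minimal sketch of a consistency check you could add to your job script before starting the solver; it assumes a standard system/decomposeParDict and a TORQUE job, so that $PBS_NODEFILE contains one line per allocated core:

# number of cores allocated to this job (one line per core in $PBS_NODEFILE)\nNCORES=$(wc -l < $PBS_NODEFILE)\n# number of subdomains the case was decomposed into\nNSUB=$(awk '/numberOfSubdomains/ {print $2}' system/decomposeParDict | tr -d ';')\nif [ \"$NSUB\" -ne \"$NCORES\" ]; then\n    echo \"ERROR: case was decomposed into $NSUB subdomains, but $NCORES cores are allocated\" >&2\n    exit 1\nfi\n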

To visualise the processor domains, use the following command:

mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

and then load the VTK files generated in the VTK folder into ParaView.

"}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict): avoid writing out results too frequently (writeInterval), write results in binary rather than ASCII format (writeFormat), enable compression of the result files (writeCompression), limit the number of time directories that are kept (purgeWrite), and avoid having OpenFOAM re-read controlDict at every time step (runTimeModifiable).

For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple dozen processor cores.
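As an illustration, here is a minimal sketch of how the controlDict-related settings mentioned above could be applied using the foamDictionary utility (assuming it is available in your OpenFOAM version; you can also edit system/controlDict directly, and the values shown are just examples):

# write results in binary format, compressed, and only keep the last 2 time directories\nfoamDictionary -entry writeFormat -set binary system/controlDict\nfoamDictionary -entry writeCompression -set on system/controlDict\nfoamDictionary -entry purgeWrite -set 2 system/controlDict\n# do not re-read controlDict at every time step\nfoamDictionary -entry runTimeModifiable -set false system/controlDict\n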

"}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

See https://cfd.direct/openfoam/user-guide/compiling-applications/.

"}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

OpenFOAM_damBreak.sh
#!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not available on victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
"}, {"location": "program_examples/", "title": "Program examples", "text": "

If you have not done so already copy our examples to your home directory by running the following command:

 cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

Go to our examples:

cd ~/examples/Program-examples\n

Here, we have put together a number of examples for your convenience. We made an effort to include comments in the source files, so the source code files are (should be) self-explanatory.

  1. 01_Python

  2. 02_C_C++

  3. 03_Matlab

  4. 04_MPI_C

  5. 05a_OMP_C

  6. 05b_OMP_FORTRAN

  7. 06_NWChem

  8. 07_Wien2k

  9. 08_Gaussian

  10. 09_Fortran

  11. 10_PQS

The above 2 OMP directories contain the following examples:

| C Files | Fortran Files | Description |
|---|---|---|
| omp_hello.c | omp_hello.f | Hello world |
| omp_workshare1.c | omp_workshare1.f | Loop work-sharing |
| omp_workshare2.c | omp_workshare2.f | Sections work-sharing |
| omp_reduction.c | omp_reduction.f | Combined parallel loop reduction |
| omp_orphan.c | omp_orphan.f | Orphaned parallel loop reduction |
| omp_mm.c | omp_mm.f | Matrix multiply |
| omp_getEnvInfo.c | omp_getEnvInfo.f | Get and print environment information |
| omp_bug* | omp_bug* | Programs with bugs and their solution |

Compile with any of the following commands:

| Language | Commands |
|---|---|
| C | icc -openmp omp_hello.c -o hello |
| | pgcc -mp omp_hello.c -o hello |
| | gcc -fopenmp omp_hello.c -o hello |
| Fortran | ifort -openmp omp_hello.f -o hello |
| | pgf90 -mp omp_hello.f -o hello |
| | gfortran -fopenmp omp_hello.f -o hello |
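After compiling, the number of OpenMP threads is controlled via the OMP_NUM_THREADS environment variable; a minimal sketch (the value 4 is just an example and should match the number of cores you requested):

export OMP_NUM_THREADS=4\n./hello\n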

Feel free to explore the examples.

"}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

Remember to substitute the usernames, login nodes, file names, ... for your own.

| Login | |
|---|---|
| Login | ssh vsc20167@login.hpc.uantwerpen.be |
| Where am I? | hostname |
| Copy to UAntwerpen-HPC | scp foo.txt vsc20167@login.hpc.uantwerpen.be: |
| Copy from UAntwerpen-HPC | scp vsc20167@login.hpc.uantwerpen.be:foo.txt |
| Setup ftp session | sftp vsc20167@login.hpc.uantwerpen.be |

| Modules | |
|---|---|
| List all available modules | module avail |
| List loaded modules | module list |
| Load module | module load example |
| Unload module | module unload example |
| Unload all modules | module purge |
| Help on use of module | module help |

| Command | Description |
|---|---|
| qsub script.pbs | Submit job with job script script.pbs |
| qstat 12345 | Status of job with ID 12345 |
| showstart 12345 | Possible start time of job with ID 12345 (not available everywhere) |
| checkjob 12345 | Check job with ID 12345 (not available everywhere) |
| qstat -n 12345 | Show compute node of job with ID 12345 |
| qdel 12345 | Delete job with ID 12345 |
| qstat | Status of all your jobs |
| qstat -na | Detailed status of your jobs + a list of nodes they are running on |
| showq | Show all jobs on queue (not available everywhere) |
| qsub -I | Submit interactive job |

| Disk quota | |
|---|---|
| Check your disk quota | mmlsquota |
| Check your disk quota nice | show_quota.py |
| Disk usage in current directory (.) | du -h |

| Worker Framework | |
|---|---|
| Load worker module | module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/) |
| Submit parameter sweep | wsub -batch weather.pbs -data data.csv |
| Submit job array | wsub -t 1-100 -batch test_set.pbs |
| Submit job array with prolog and epilog | wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100 |
"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

"}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

"}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

$ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

Initially there will be only one RHEL 9 login node. As needed a second one will be added.

When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

"}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

This includes (per user):

For more intensive tasks you can use the interactive and debug clusters through the web portal.

"}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

The migration to RHEL 9 as operating system should not impact your workflow: everything will basically keep working as it did before (incl. job submission, etc.).

However, there will be impact on the availability of software that is made available via modules.

Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

This includes all software installations on top of a compiler toolchain that is older than GCC(core)/12.3.0, gompi/2023a, foss/2023a, iimpi/2023a, intel/2023a or gfbf/2023a (or another toolchain with a year-based version older than 2023a).

The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will provide more RHEL 9 nodes on other clusters to test on soon.

"}, {"location": "rhel9/#planning", "title": "Planning", "text": "

We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

cluster migration start migration completed on skitty Monday 30 September 2024 joltik October 2024 accelgor November 2024 gallade December 2024 donphan February 2025 doduo (default cluster) February 2025 login nodes switch February 2025

Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to RHEL 9 login nodes will be done at the same time.

We will keep this page up to date when more specific dates have been planned.

Warning

This planning is subject to change; some clusters may get migrated later than originally planned.

Please check back regularly.

"}, {"location": "rhel9/#questions", "title": "Questions", "text": "

If you have any questions related to the migration to the RHEL 9 operating system, please contact the UAntwerpen-HPC.

"}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

When you connect to the UAntwerpen-HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decides when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly . Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the UAntwerpen-HPC the entire time.

The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

"}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

Software installation and maintenance on a UAntwerpen-HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the UAntwerpen-HPC, which is able to easily activate or deactivate the software packages that you require for your program execution.

"}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

The program environment on the UAntwerpen-HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

All the software packages that are installed on the UAntwerpen-HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

"}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

In order to administer the active software and their environment variables, the module system has been developed, which:

  1. Activates or deactivates software packages and their dependencies.

  2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

  3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

  5. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

  5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

This is all managed with the module command, which is explained in the next sections.

There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

"}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

A large number of software packages are installed on the UAntwerpen-HPC clusters. A list of all currently available software can be obtained by typing:

module available\n

It's also possible to execute module av or module avail; these are shorter to type and will do the same thing.

This will give some output such as:

$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

You can also check whether some specific software, compiler or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

If you are not sure about the exact capitalisation of the module name, you can perform a case-insensitive search with the \"-i\" option.

This gives a full list of software packages that can be loaded.

The casing of module names is important: lowercase and uppercase letters matter in module names.

"}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, a MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

E.g., foss/2024a is the first version of the foss toolchain in 2024.

The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

"}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

To \"activate\" a software package, you load the corresponding module file using the module load command:

module load example\n

This will load the most recent version of example.

For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

However, you should specify a particular version to avoid surprises when newer versions are installed:

module load secondexample/2.7-intel-2016b\n

The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

Modules need not be loaded one by one; the two module load commands can be combined as follows:

module load example/1.2.3 secondexample/2.7-intel-2016b\n

This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

"}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

You can also just use the ml command without arguments to list loaded modules.

It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

"}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

$ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

To unload the secondexample module, you can also use ml -secondexample.

Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

"}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

module purge\n

However, on some VSC clusters you may be left with a very empty list of available modules after executing module purge. On those systems, module av will show you a list of modules containing the name of a cluster or a particular feature of a section of the cluster, and loading the appropriate module will restore the module list applicable to that particular system.

"}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

module load example\n

rather than

module load example/1.2.3\n

Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

Consider the following example modules:

$ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

Let's now generate a version conflict with the example module, and see what happens.

$ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

Note: A module swap command combines the appropriate module unload and module load commands.

"}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

With the module spider command, you can search for modules:

$ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

It's also possible to get detailed information about a specific module:

$ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \nThis module can be loaded directly: module load example/1.2.3\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
"}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

To get a list of all possible commands, type:

module help\n

Or to get more information about one specific module package:

$ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
"}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

In each module command shown below, you can replace module with ml.

First, load all modules you want to include in the collection:

module load example/1.2.3 secondexample/2.7-intel-2016b\n

Now store it in a collection using module save. In this example, the collection is named my-collection.

module save my-collection\n

Later, for example in a jobscript or a new session, you can load all these modules with module restore:

module restore my-collection\n

You can get a list of all your saved collections with the module savelist command:

$ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

To get a list of all modules a collection will load, you can use the module describe command:

$ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

To remove a collection, remove the corresponding file in $HOME/.lmod.d:

rm $HOME/.lmod.d/my-collection\n
"}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

To see how a module would change the environment, you can use the module show command:

$ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

It's also possible to use the ml show command instead: they are equivalent.

Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

"}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

To check how many jobs are running in what queues, you can use the qstat -q command:

$ qstat -q\nQueue            Memory CPU Time Walltime Node  Run Que Lm  State\n---------------- ------ -------- -------- ----  --- --- --  -----\ndefault            --      --       --      --    0   0 --   E R\nq72h               --      --    72:00:00   --    0   0 --   E R\nlong               --      --    72:00:00   --  316  77 --   E R\nshort              --      --    11:59:59   --   21   4 --   E R\nq1h                --      --    01:00:00   --    0   1 --   E R\nq24h               --      --    24:00:00   --    0   0 --   E R\n                                               ----- -----\n                                                337  82\n

Here, there are 316 jobs running on the long queue, and 77 jobs queued. We can also see that the long queue allows a maximum wall time of 72 hours.

"}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

You can also get this information in text form (per cluster separately) with the pbsmon command:

$ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

"}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

As an example, we will run a Perl script, which you will find in the examples subdirectory on the UAntwerpen-HPC. When you received an account on the UAntwerpen-HPC, a subdirectory with examples was automatically generated for you.

Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

cd\ncp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

First go to the directory with the first examples by entering the command:

cd ~/examples/Running-batch-jobs\n

Each time you want to execute a program on the UAntwerpen-HPC you'll need 2 things:

The executable: the program to execute, provided by the end-user, together with its peripheral input files, databases and/or command options.

A batch job script, which will define the computer resource requirements of the program, the required additional software packages, and which will start the actual executable. The UAntwerpen-HPC needs to know:

1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

List and check the contents with:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc20167 609 Sep 11 10:25 fibo.pl\n

In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

  1. The Perl script calculates the first 30 Fibonacci numbers.

  2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

On the command line, you would run this using:

$ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

Remark: Recall that you have now executed the Perl script locally on one of the login nodes of the UAntwerpen-HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

The job script contains a description of the job by specifying the command(s) that need to be executed on the compute node:

fibo.pbs
#!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

$ qsub fibo.pbs\n433253.leibniz\n

The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"433253.leibniz \"); this is a unique identifier for the job and can be used to monitor and manage your job.

Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

To facilitate this, you can use a pre-defined module collection which you can restore using module restore; see the section on Save and load collections of modules for more information.

Your job is now waiting in the queue for a free workernode to start on.

Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

After your job was started, and ended, check the contents of the directory:

$ ls -l\ntotal 768\n-rw-r--r-- 1 vsc20167 vsc20167   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc20167 vsc20167    0 Feb 28 13:33 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 vsc20167 1010 Feb 28 13:33 fibo.pbs.o433253.leibniz\n-rwxrwxr-x 1 vsc20167 vsc20167  302 Feb 28 13:32 fibo.pl\n

Explore the contents of the 2 new files:

$ more fibo.pbs.o433253.leibniz\n$ more fibo.pbs.e433253.leibniz\n

These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('433253.leibniz' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script)
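If you prefer different names for these files, you can override the defaults when submitting the job; a minimal sketch (the job name and file names are just examples):

# give the job a custom name and custom output/error file names\nqsub -N fibonacci -o fibo.out -e fibo.err fibo.pbs\n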

"}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

It is possible to submit jobs from within a job to a cluster different from the one the job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

To submit jobs to the {{ othercluster }} cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/{{ othercluster }} instead of using module swap cluster/{{ othercluster }}. The last command also activates the software modules that are installed specifically for {{ othercluster }}, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the {{ othercluster }} cluster. The same approach can be used to submit jobs to another cluster, of course.

Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the {{ defaultcluster }} cluster, loading the cluster/{{ defaultcluster }} module corresponds to loading 3 different env/ modules:

| env/ module for {{ defaultcluster }} | Purpose |
|---|---|
| env/slurm/{{ defaultcluster }} | Changes $SLURM_CLUSTERS, which specifies the cluster where jobs are sent to. |
| env/software/{{ defaultcluster }} | Changes $MODULEPATH, which controls what software modules are available for loading. |
| env/vsc/{{ defaultcluster }} | Changes the set of $VSC_ environment variables that are specific to the {{ defaultcluster }} cluster. |

We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they are doing, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

We also recommend using a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
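A minimal sketch of this approach, assuming you want to send a job to the donphan cluster and then return to the default doduo environment (the job script name is just an example):

# only change where jobs are sent to, without switching the software stack\nmodule swap env/slurm/donphan\nqsub job.pbs\n# reset the environment to a sane state afterwards\nmodule swap cluster/doduo\n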

"}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

qstat 12345\n

To show an estimated start time for your job, use the following command (note that this estimate may be very inaccurate; the margin of error can be bigger than 100%, since it is based on very limited information). This command is not available on all systems.

showstart 12345\n

This is only a very rough estimate. Jobs may launch sooner than estimated if other jobs end faster than estimated, but may also be delayed if other higher-priority jobs enter the system.

To show the status, but also the resources required by the job, with error messages that may prevent your job from starting:

checkjob 12345\n

To show on which compute nodes your job is running, at least, when it is running:

qstat -n 12345\n

To remove a job from the queue so that it will not run, or to stop a job that is already running:

qdel 12345\n

When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

$ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n433253.leibniz ....     mpi  vsc20167     0    Q short\n

Here:

Job ID the job's unique identifier

Name the name of the job

User the user that owns the job

Time Use the elapsed walltime for the job

Queue the queue the job is in

The state S can be any of the following:

| State | Meaning |
|---|---|
| Q | The job is queued and is waiting to start. |
| R | The job is currently running. |
| E | The job is exiting after having run. |
| C | The job is completed after having run. |
| H | The job has a user or system hold on it and will not be eligible to run until the hold is removed. |

User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.
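If you want to put one of your own jobs on hold yourself, or release it again, this can typically be done with the standard TORQUE commands for this purpose; a minimal sketch, assuming these commands are available on your system:

qhold 12345   # put job 12345 on hold (it will show state H in qstat)\nqrls 12345    # release the hold so the job becomes eligible to run again\n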

"}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

As we learned above, Moab is the software application that actually decides when to run your job and what resources your job will run on.

You can look at the queue by using the PBS qstat command or the Moab showq command. By default, qstat simply lists the jobs in the queue, whereas showq will display jobs grouped by their state (\"running\", \"idle\", or \"hold\") and then ordered by priority. Therefore, showq is often more useful. Note however that at some VSC sites, these commands show only your jobs or may even be disabled to not reveal what other users are doing.

The showq command displays information about active (\"running\"), eligible (\"idle\"), blocked (\"hold\"), and/or recently completed jobs. To get a summary:

active jobs: 163\neligible jobs: 133\nblocked jobs: 243\n\nTotal jobs: 539\n

And to get the full details of all the jobs in the system:

active jobs------------------------\nJOBID      USERNAME   STATE     PROCS   REMAINING            STARTTIME\n428024     vsc20167   Running   8       2:57:32    Mon Sep  2 14:55:05\n\n153 active jobs       1307 of 3360 processors in use by local jobs (38.90%)\n                       153 of 168 nodes active (91.07%)\n\neligible jobs----------------------\nJOBID      USERNAME   STATE     PROCS   WCLIMIT              QUEUETIME\n442604     vsc20167   Idle      48      7:00:00:00  Sun Sep 22 16:39:13\n442605     vsc20167   Idle      48      7:00:00:00  Sun Sep 22 16:46:22\n\n135 eligible jobs\n\nblocked jobs-----------------------\nJOBID      USERNAME   STATE     PROCS   WCLIMIT              QUEUETIME\n441237     vsc20167   Idle      8       3:00:00:00  Thu Sep 19 15:53:10\n442536     vsc20167   UserHold  40      3:00:00:00  Sun Sep 22 00:14:22\n\n252 blocked jobs\n\nTotal jobs: 540\n

There are 3 categories: the active, eligible and blocked jobs.

Active jobs

are jobs that are running or starting and that consume computer resources. The amount of time remaining (w.r.t.\u00a0walltime, sorted to earliest completion time) and the start time are displayed. This will give you an idea about the foreseen completion time. These jobs could be in a number of states:

Started

attempting to start, performing pre-start tasks

Running

currently executing the user application

Suspended

has been suspended by scheduler or admin (still in place on the allocated resources, not executing)

Cancelling

has been cancelled, in process of cleaning up

Eligible jobs

are jobs that are waiting in the queues and are considered eligible for both scheduling and backfilling. They are all in the idle job state and do not violate any fairness policies or do not have any job holds in place. The requested walltime is displayed, and the list is ordered by job priority.

Blocked jobs

are jobs that are ineligible to be run or queued. These jobs could be in a number of states for the following reasons:

Idle

when the job violates a fairness policy

Userhold

or Systemhold, when there is a user or administrative hold on the job

Batchhold

when the requested resources are not available or the resource manager has repeatedly failed to start the job

Deferred

a temporary hold applied when the job has been unable to start after a specified number of attempts

Notqueued

when scheduling daemon is unavailable

"}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

"}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

qsub -l walltime=2:30:00 ...\n

For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
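A minimal sketch of this pattern (the 140m value and the my_simulation command are just examples; it assumes the GNU coreutils timeout command, see also the section on Running a command with a maximum time limit):

#!/bin/bash -l\n#PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# stop the main command 10 minutes before the requested walltime runs out,\n# so there is still time left to copy results back before the job is killed\ntimeout 140m ./my_simulation\ncp -a results $VSC_DATA/\n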

qsub -l mem=4gb ...\n

The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

qsub -l nodes=5:ppn=2 ...\n

The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

qsub -l nodes=1:westmere\n

The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

These options can either be specified on the command line, e.g.

qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

#!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

Note that the resources requested on the command line will override those specified in the PBS file.

"}, {"location": "running_batch_jobs/#node-specific-properties", "title": "Node-specific properties", "text": "

The following table contains some node-specific properties that can be used to make sure the job will run on nodes with a specific CPU or interconnect. Note that these properties may vary over the different VSC sites.

| Property | Description |
|---|---|
| ivybridge | only use Intel processors from the Ivy Bridge family (26xx-v2, hopper-only) |
| broadwell | only use Intel processors from the Broadwell family (26xx-v4, leibniz-only) |
| mem128 | only use nodes with 128GB of RAM (leibniz) |
| mem256 | only use nodes with 256GB of RAM (hopper and leibniz) |
| tesla, gpu | only use nodes with the NVIDIA P100 GPU (leibniz) |

Since both hopper and leibniz are homogeneous with respect to processor architecture, the CPU architecture properties are not really needed and only defined for compatibility with other VSC clusters.

| Property | Description |
|---|---|
| shanghai | only use AMD Shanghai processors (AMD 2378) |
| magnycours | only use AMD Magnycours processors (AMD 6134) |
| interlagos | only use AMD Interlagos processors (AMD 6272) |
| barcelona | only use AMD Shanghai and Magnycours processors |
| amd | only use AMD processors |
| ivybridge | only use Intel Ivy Bridge processors (E5-2680-v2) |
| intel | only use Intel processors |
| gpgpu | only use nodes with General Purpose GPUs (GPGPUs) |
| k20x | only use nodes with NVIDIA Tesla K20x GPGPUs |
| xeonphi | only use nodes with Xeon Phi co-processors |
| phi5110p | only use nodes with Xeon Phi 5110P co-processors |

To get a list of all properties defined for all nodes, enter

pbsnodes\n

This list will also contain properties referring to, e.g., network components, rack number, etc.

"}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

When you navigate to that directory and list its contents, you should see them:

$ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc20167  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc20167   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc20167   52 Sep 11 11:03 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 1307 Sep 11 11:03 fibo.pbs.o433253.leibniz\n

In our case, our job has created both an output file ('fibo.pbs.o433253.leibniz') and an error file ('fibo.pbs.e433253.leibniz'), containing info written to stdout and stderr respectively.

Inspect the generated output and error files:

$ cat fibo.pbs.o433253.leibniz\n...\n$ cat fibo.pbs.e433253.leibniz\n...\n
"}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#upon-job-failure", "title": "Upon job failure", "text": "

Whenever a job fails, an e-mail will be sent to the e-mail address that's connected to your VSC account. This is the e-mail address that is linked to the university account, which was used during the registration process.

You can force a job to fail by specifying an unrealistic wall-time for the previous example. Let's give the \"fibo.pbs\" job just one second to complete:

qsub -l walltime=00:00:01 fibo.pbs\n

Now, let's hope that the job did not manage to complete within one second, and you will get an e-mail informing you about this error.

PBS Job Id: ...\nJob Name:   fibo.pbs\nExec host:  ...\nAborted by PBS Server\nJob exceeded some resource limit (walltime, mem, etc.). Job was aborted.\nSee Administrator for help\n

"}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

You can instruct the UAntwerpen-HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

#PBS -m b \n#PBS -m e \n#PBS -m a\n

or

#PBS -m abe\n

These options can also be specified on the command line. Try it and see what happens:

qsub -m abe fibo.pbs\n

The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

qsub -m b -M john.smith@example.com fibo.pbs\n

will send an e-mail to john.smith@example.com when the job begins.

"}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

If you submit two jobs expecting that they will be run one after another (for example because the first generates a file the second needs), there might be a problem, as they might both be run at the same time.

So the following example might go wrong:

$ qsub job1.sh\n$ qsub job2.sh\n

You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

$ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

afterok means \"After OK\", or in other words, after the first job successfully completed.

It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
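For example, a minimal sketch of the afterany variant (using the same example job scripts as above):

FIRST_ID=$(qsub job1.sh)\nqsub -W depend=afterany:$FIRST_ID job2.sh\n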

  1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

"}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script: the required PBS directives can be specified on the command line instead.

Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the UAntwerpen-HPC. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

The syntax for qsub for submitting an interactive PBS job is:

$ qsub -I <... pbs directives ...>\n
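
For example, to request a single node with 4 cores for 2 hours, you could use (a sketch; adjust the resource values to your needs):

$ qsub -I -l nodes=1:ppn=4 -l walltime=02:00:00\n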
"}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

Tip

Find the code in \"~/examples/Running_interactive_jobs\"

First of all, in order to know on which computer you're working, enter:

$ hostname -f\nln2.leibniz.uantwerpen.vsc\n

This means that you're now working on the login node ln2.leibniz.uantwerpen.vsc of the cluster.

The most basic way to start an interactive job is the following:

$ qsub -I\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n

There are two things of note here.

  1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

  2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

In order to know on which compute-node you're working, enter again:

$ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n

Note that we are now working on the compute node called \"r1c02cn3.leibniz.antwerpen.vsc\". This is the compute node that was assigned to us by the scheduler after issuing the \"qsub -I\" command.

This computer name may look strange, but there is logic behind it: it tells the system administrators where to find the machine in the computer room.

The computer \"r1c02cn3\" stands for:

  1. \"r5\" is rack #5.

  2. \"c3\" is enclosure/chassis #3.

  3. \"cn08\" is compute node #08.

With this naming convention, the system administrator can easily find the physical computers when they need to execute some maintenance activities.

Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit (> 1) and will print all the primes between 1 and your upper limit:

$ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

You can exit the interactive session with:

$ exit\n

Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

You can work for 3 hours by:

qsub -I -l walltime=03:00:00\n

If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So make sure to request adequate walltime, and save your data before your walltime is up! When you do not specify a walltime, you get a default walltime of 1 hour.

"}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

An X Window server is packaged by default on most Linux distributions. If you have a graphical user interface this generally means that you are using an X Window server.

The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

"}, {"location": "running_interactive_jobs/#connect-with-x-forwarding", "title": "Connect with X-forwarding", "text": "

In order to get the graphical output of your application (which is running on a compute node on the UAntwerpen-HPC) transferred to your personal screen, you will need to reconnect to the UAntwerpen-HPC with X-forwarding enabled, which is done with the \"-X\" option.

First exit and reconnect to the UAntwerpen-HPC with X-forwarding enabled:

$ exit\n$ ssh -X vsc20167@login.hpc.uantwerpen.be\n$ hostname -f\nln2.leibniz.uantwerpen.vsc\n

First, check whether the GUI output of applications running on the login node is properly forwarded to the screen of your local machine. An easy way to test this is by running a small X application on the login node. Type:

$ xclock\n

And you should see a clock appearing on your screen.

You can close your clock and connect further to a compute node, again with X-forwarding enabled:

$ qsub -I -X\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n$ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n$ xclock\n

and you should see your clock again.

"}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

We have developed a small interactive program that demonstrates communication in both directions: it sends information to your local screen, but also asks you to click a button.

Now run the message program:

cd ~/examples/Running_interactive_jobs\n./message.py\n

You should see the following message appearing.

Click any button and see what happens.

-----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
"}, {"location": "running_interactive_jobs/#run-your-interactive-application", "title": "Run your interactive application", "text": "

In this last example, we show that you can work on this compute node just as if you were working locally on your desktop. We will run the Fibonacci example of the previous chapter again, but now in full interactive mode in MATLAB.

Go to the example directory and start the MATLAB interactive environment on the compute node. Then start the fibo2.m program in the MATLAB command window (at the fx >> prompt).
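
As an illustration, starting MATLAB on the compute node could look roughly like this (a minimal sketch: the module name, version and the location of fibo2.m are assumptions, so check with module avail MATLAB and adjust the directory):

cd ~/examples/Running_interactive_jobs\nmodule load MATLAB\nmatlab\n

Once the MATLAB command window appears, typing fibo2 at the fx >> prompt runs the example (assuming fibo2.m is in the current directory or on the MATLAB path).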

You will see the calculations being displayed in the command window, as well as a nice \"plot\" appearing in a separate figure window.

You can keep working in this MATLAB GUI, and finally terminate the application from the command window when you are done.

"}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where do your standard output and error messages go, and where can you collect your results?

"}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

First go to the directory:

cd ~/examples/Running_jobs_with_input_output_data\n

Note

If the example directory is not yet present, copy it to your home directory:

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

List and check the contents with:

$ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc20167   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc20167   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file3.py\n

Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

file1.py
#!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

The code of the Python script is self-explanatory:

  1. In step 1, we write something to the file Hello.txt in the current directory.

  2. In step 2, we write some text to stdout.

  3. In step 3, we write to stderr.

Check the contents of the first job script:

file1a.pbs
#!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

You'll see that there are NO specific PBS directives for the placement of the output files. All output files are simply written with their default names to the directory where you submitted the job.

Submit it:

qsub file1a.pbs\n

After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

$ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc20167   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc20167  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc20167  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc20167   91 Sep 13 13:13 file1a.pbs.e433253.leibniz\n-rw------- 1 vsc20167  105 Sep 13 13:13 file1a.pbs.o433253.leibniz\n-rw-rw-r-- 1 vsc20167  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc20167  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file3.py*\n

Some observations:

  1. The file Hello.txt was created in the current directory.

  2. The file file1a.pbs.o433253.leibniz contains all the text that was written to the standard output stream (\"stdout\").

  3. The file file1a.pbs.e433253.leibniz contains all the text that was written to the standard error stream (\"stderr\").

Inspect their contents ...\u00a0and remove the files

$ cat Hello.txt\n$ cat file1a.pbs.o433253.leibniz\n$ cat file1a.pbs.e433253.leibniz\n$ rm Hello.txt file1a.pbs.o433253.leibniz file1a.pbs.e433253.leibniz\n

Tip

Type cat H and press the Tab key, and it will expand into cat Hello.txt.

"}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

Check the contents of the job script and execute it.

file1b.pbs
#!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

Inspect the contents again ...\u00a0and remove the generated files:

$ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e433253.leibniz\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o433253.leibniz\n$ rm Hello.txt my_serial_job.*\n

Here, the option \"-N\" was used to explicitly assign a name to the job. This overrides the default job name (the PBS_JOBNAME variable) and results in different names for the stdout and stderr files. The job name is also shown in the second column of the output of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.

"}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

file1c.pbs
#!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
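
If you prefer a single output file, PBS can also merge stderr into stdout with the -j directive; this is not part of the example files, but a minimal sketch could look like this (the output file name is just an example):

#!/bin/bash\n\n# merge standard error into standard output (-j oe) and name the combined file\n#PBS -j oe\n#PBS -o combined.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n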
"}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

The UAntwerpen-HPC cluster offers its users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

"}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

The following locations are available:

Long-term storage (slow filesystem, intended for smaller files):

$VSC_HOME: for your configuration files and other small files; see the section on your home directory. The default directory is user/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites.

$VSC_DATA: a bigger \"workspace\", for datasets, results, logfiles, etc.; see the section on your data directory. The default directory is data/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites.

Fast temporary storage:

$VSC_SCRATCH_NODE: for temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content.

$VSC_SCRATCH: for temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Antwerpen/xxx/vsc20167. This directory is cluster- or site-specific: on different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content.

$VSC_SCRATCH_SITE: currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space.

$VSC_SCRATCH_GLOBAL: currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space.

Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
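
For example, in a job script you would refer to these locations through the environment variables rather than through hard-coded paths (a sketch; input.dat is a hypothetical file name):

# portable across clusters and sites\ncd $VSC_SCRATCH\ncp $VSC_DATA/input.dat .\n\n# avoid hard-coded absolute paths: they differ between clusters and sites\n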

We elaborate more on the specific function of these locations in the following sections.

"}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

The operating system also creates a few files and folders here to manage your account. Examples are:

.ssh/ : this directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!

.bash_profile : when you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt.

.bashrc : this script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts.

.bash_history : this file contains the commands you typed at your shell prompt, in case you need them again.

Furthermore, we have initially created some files/directories there (tutorial, docs, examples, examples.pbs) that accompany this manual and allow you to easily execute the provided examples.

"}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

"}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

You should remove any data from these systems once your processing has finished. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

Each type of scratch has its own use:

Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

At the time of writing, the cluster scratch space is shared between both clusters at the University of Antwerp. This may change again in the future when storage gets updated.

Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.
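
A common pattern that follows from the above is to stage data through the scratch space in a job script: copy input from $VSC_DATA to $VSC_SCRATCH, run the job there, and copy the results back when done. A minimal sketch (the file and program names are hypothetical):

#!/bin/bash\n#PBS -l walltime=01:00:00\n\n# job-specific working directory on the fast scratch filesystem\nWORKDIR=$VSC_SCRATCH/$PBS_JOBID\nmkdir -p $WORKDIR\ncd $WORKDIR\n\n# stage input in, run the (hypothetical) program, stage results out\ncp $VSC_DATA/input.dat .\nmy_program input.dat > results.txt\ncp results.txt $VSC_DATA/\n\n# clean up the scratch working directory\ncd && rm -rf $WORKDIR\n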

"}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files as well as the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

The amount of data (called \"Block Limits\") that is currently in use by the user (\"KB\"), the soft limits (\"quota\") and the hard limits (\"limit\") for all 3 file systems are always displayed when a user connects to the cluster.

With regards to the file limits, the number of files in use (\"files\"), its soft limit (\"quota\") and its hard limit (\"limit\") for the 3 file-systems are also displayed.

----------------------------------------------------------\nYour quota is:\n\n                   Block Limits\n   Filesystem        KB      quota      limit    grace\n   home           177920    3145728    3461120     none\n   data         17707776   26214400   28835840     none\n   scratch        371520   26214400   28835840     none\n\n                    File Limits\n   Filesystem      files      quota      limit    grace\n   home              671      20000      25000     none\n   data           103079     100000     150000  expired\n   scratch          2214     100000     150000     none\n

Make sure to regularly check these numbers at log-in!

The rules are:

  1. You will only receive a warning when you have reached the soft limit of either quota.

  2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

  3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. And they help to guarantee a fair use of all available resources for all users. Quota also help to ensure that each folder is used for its intended purpose.

"}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

Tip

Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

  1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

  2. repeat this action 30,000 times;

  3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the UAntwerpen-HPC:

$ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
"}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

Tip

Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

In this exercise, you will

  1. Generate the file \"primes_1.txt\" again as in the previous exercise;

  2. open the file;

  3. read it line by line;

  4. calculate the average of primes in the line;

  5. count the number of primes found per line;

  6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

Check the Python and the PBS file, and submit the job:

$ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
"}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

The available disk space on the UAntwerpen-HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website. (https://vscdocumentation.readthedocs.io/en/latest/hardware.html) As explained in the section on predefined quota, this implies that there are also limits to:

the amount of disk space and the number of files that can be made available to each individual UAntwerpen-HPC user.

The quota of disk space and number of files for each UAntwerpen-HPC user is:

Filesystem   Disk space   Number of files\nHOME         3 GB         20000\nDATA         25 GB        100000\nSCRATCH      25 GB        100000\n

Tip

The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.

"}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

The \"show_quota\" command has been developed to show you the status of your quota in a readable format:

$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

or on the UAntwerp clusters

$ module load scripts\n$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

With this command, you can easily follow up on the consumption of your total disk quota, as it is expressed in percentages. Depending on which cluster you are running the script, it may not be able to show the quota on all your folders. E.g., when running on the tier-1 system Muk, the script will not be able to show the quota on $VSC_HOME or $VSC_DATA if your account is a KU Leuven, UAntwerpen or VUB account.

Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

$ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

$ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

$ du -s\n5632 .\n$ du -s -h\n

If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

$ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

$ du -h --max-depth 1 $VSC_HOME\n22M /user/antwerpen/201/vsc20167/dataset01\n36M /user/antwerpen/201/vsc20167/dataset02\n22M /user/antwerpen/201/vsc20167/dataset03\n3.5M /user/antwerpen/201/vsc20167/primes.txt\n24M /user/antwerpen/201/vsc20167/.cache\n
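
To quickly spot the largest subdirectories, you can combine du with sort (a sketch using standard GNU tools):

$ du -h --max-depth 1 $VSC_DATA | sort -h\n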

We also want to mention the tree command, as it also provides an easy way to see which files consume your available quota. tree is a recursive directory-listing program that produces a depth-indented listing of files.

Try:

$ tree -s -d\n

However, we urge you to only use the du and tree commands when you really need them as they can put a heavy strain on the file system and thus slow down file operations on the cluster for all other users.

"}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

To change the group of a directory and its underlying directories and files, you can use:

chgrp -R groupname directory\n
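
To see which groups your VSC account currently belongs to, you can use the standard groups command; combined with chgrp this lets you put a shared directory under a course or project group (a sketch; gexample and shared_project are hypothetical names):

groups\nchgrp -R gexample $VSC_DATA/shared_project\n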
"}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
  1. Get the group name you want to belong to.

  2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

  3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

"}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
  1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

  2. Fill out the group name. This cannot contain spaces.

  3. Put a description of your group in the \"Info\" field.

  4. You will now be a member and moderator of your newly created group.

"}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

"}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

You can get details about the current state of groups on the HPC infrastructure with the following command (where example is the name of the group we want to inspect):

$ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

We can see that the group ID is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

"}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

"}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

This section will explain how to create, activate, use and deactivate Python virtual environments.

"}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

A Python virtual environment can be created with the following command:

python -m venv myenv      # Create a new virtual environment named 'myenv'\n

This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

Warning

When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

"}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

source myenv/bin/activate                    # Activate the virtual environment\n
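
A quick sanity check after activation is to verify which Python interpreter is now being used:

which python    # should now point to the python inside the myenv directory\n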
"}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

After activating the virtual environment, you can install additional Python packages with pip install:

pip install example_package1\npip install example_package2\n

These packages are scoped to the virtual environment: they will not affect the system-wide Python installation and are only available while the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

It is now possible to run Python scripts that use the installed packages in the virtual environment.
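
To make the environment easy to reproduce (for example on another cluster), you can record the installed packages in a requirements file and reinstall from it later; this is standard pip functionality, sketched below:

pip freeze > requirements.txt      # record the currently installed packages\npip install -r requirements.txt    # reinstall them in a fresh virtual environment\n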

Tip

When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

To check if a package is available as a module, use:

module av package_name\n

Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

module show module_name\n

to check which extensions are included in a module (if any).

"}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

example.py
import example_package1\nimport example_package2\n...\n
python example.py\n
"}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

When you are done using the virtual environment, you can deactivate it. To do that, run:

deactivate\n
"}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

pytorch_poutyne.py
import torch\nimport poutyne\n\n...\n

We load a PyTorch package as a module and install Poutyne in a virtual environment:

module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

While the virtual environment is activated, we can run the script without any issues:

python pytorch_poutyne.py\n

Deactivate the virtual environment when you are done:

deactivate\n
"}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

module swap cluster/donphan\nqsub -I\n

After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

Naming a virtual environment

When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
"}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

This section will combine the concepts discussed in the previous sections to:

  1. Create a virtual environment on a specific cluster.
  2. Combine packages installed in the virtual environment with modules.
  3. Submit a job script that uses the virtual environment.

The example script that we will run is the following:

pytorch_poutyne.py
import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

First, we create a virtual environment on the donphan cluster:

module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

jobscript.pbs
#!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

Next, we submit the job script:

qsub jobscript.pbs\n

Two files will be created in the directory where the job was submitted: python_job_example.o433253.leibniz and python_job_example.e433253.leibniz, where 433253.leibniz is the id of your job. The .o file contains the output of the job.

"}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

For example, if we create a virtual environment on the skitty cluster,

$ module swap cluster/skitty\n$ qsub -I\n$ python -m venv myenv\n

return to the login node by pressing CTRL+D and try to use the virtual environment:

$ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

we are presented with the illegal instruction error.

"}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.

"}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

There are two main reasons why this error could occur.

  1. You have not loaded the Python module that was used to create the virtual environment.
  2. You loaded or unloaded modules while the virtual environment was activated.
"}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

The following commands illustrate this issue:

$ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
"}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

You must not load or unload modules while in a virtual environment. Loading and unloading modules modifies the $PATH variable in the current shell. When activating a virtual environment, it will store the $PATH variable of the shell at that moment. If you modify the $PATH variable while in a virtual environment by loading or unloading modules, and deactivate the virtual environment, the $PATH variable will be reset to the one stored in the virtual environment. Trying to use those modules will lead to errors:

$ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

The solution is to only modify modules when not in a virtual environment.
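
In other words, the safe order is: load all modules first, then activate the virtual environment, and deactivate it again before changing modules. A sketch of the correct sequence (my_script.py is a hypothetical script):

module load Python/3.10.8-GCCcore-12.2.0  # load modules first\nsource myenv/bin/activate                 # then activate the virtual environment\npython my_script.py                       # do your work\ndeactivate                                # deactivate before touching modules again\nmodule purge\n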

"}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

This documentation only covers aspects of using Singularity on the infrastructure.

"}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to avoid that the use of Singularity impacts other users on the system.

The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

If these limitations are a problem for you, please let us know.

"}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

"}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

Creating new Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the infrastructure. However, if you use the --fakeroot option, you can make new Singularity images or convert Docker images.

When you create Singularity images or convert Docker images, some restrictions apply:

"}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.
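
As an illustration, converting a public Docker image into a local Singularity image file could look roughly like this (a sketch: it assumes the --fakeroot option mentioned above is available to you, and the image is written to the scratch filesystem to comply with the location restrictions):

cd $VSC_SCRATCH\nsingularity build --fakeroot hello-world.sif docker://hello-world\n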

"}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH.

Create an example myscript.sh that will be executed inside the container:

#!/bin/bash\n\n# prime factors\nfactor 1234567\n
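
A job script that runs myscript.sh inside the container could look roughly as follows (a sketch; the image name container.sif and its location on $VSC_SCRATCH are assumptions):

#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\nsingularity exec $VSC_SCRATCH/container.sif ./myscript.sh\n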

"}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

Copy the testing image from /apps/gent/tutorials to $VSC_SCRATCH.

You can download linear_regression.py from the official Tensorflow repository.

"}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

It is also possible to execute MPI jobs within a container, but the following requirements apply:

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH.

For example, you can then compile an MPI example program inside the container before submitting the job.

Example MPI job script:
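
The exact script depends on the MPI implementation inside the image, but a minimal sketch could look as follows (the module, image name and path of the binary inside the container are assumptions; the MPI version on the host typically needs to be compatible with the one inside the container):

#!/bin/bash\n#PBS -l nodes=2:ppn=4\n#PBS -l walltime=00:30:00\n\nmodule load intel   # or whichever MPI module matches the container\ncd $PBS_O_WORKDIR\nmpirun singularity exec $VSC_SCRATCH/mpi_container.sif /opt/mpi_example\n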

"}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

In order to prepare things, make a teaching request by contacting the UAntwerpen-HPC with the following information (explained further below):

In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

Please make these requests well in advance, several weeks before the start of your course/workshop.

"}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

The title of the course or training can be used in e.g. reporting.

The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

When choosing the nickname, try to make it unique, but this is not enforced nor checked.

"}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy section below.

"}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also member of this group).

This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

Provide us with a list of all the VSC-ids for the teachers or trainers to identify the moderators.

"}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

"}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

A course group will be automatically created for your course, with all VSC accounts of registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

"}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

(Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as member. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

"}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

Every course directory will always contain the folders:

"}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

Optionally, we can also create these folders:

If you need any of these additional folders, indicate this under Optional storage requirements in your teaching request.

"}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

There are 4 quota settings that you can choose in your teaching request in case the defaults are not sufficient:

Course data usage is not counted towards any other quota (like VO quota); it depends solely on these settings.

"}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

"}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

"}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

"}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

If you would like this for your course, provide more details in your teaching request, including:

We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end date of your course, but the section in the web portal chapter will remain available for several years, for reference.

"}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is no longer widely used, so since 2021 the UAntwerpen-HPC no longer uses it in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept however, so that researchers would not have to learn other commands to submit and manage jobs.

"}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

"}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

Jobcli is a Python library developed by the UAntwerpen-HPC to make it possible to combine a Torque frontend with a Slurm backend. It also adds some extra options to the Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

"}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

Adding --help to a Torque command when using it on the UAntwerpen-HPC will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

For example:

$ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

"}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

Adding --dryrun to a Torque command when using it on the UAntwerpen-HPC will show the user what Slurm commands are generated by that Torque command by jobcli. Using --dryrun will not actually execute the Slurm backend command.

See also the examples below.

"}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

Similarly to --dryrun, adding --debug to a Torque command when using it on the UAntwerpen-HPC will show the user what Slurm commands are generated by that Torque command by jobcli. However in contrast to --dryrun, using --debug will actually run the Slurm backend command.

See also the examples below.

"}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

The following examples illustrate the working of the --dryrun and --debug options with an example jobscript.

example.sh:

#/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
"}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

Running the following command:

$ qsub --dryrun example.sh -N example\n

will generate this output:

Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc20167/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque options to Slurm options. For example, the job name is the one we specified with the -N option on the command line.

With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related structures, like $PBS_JOBID, they are retained. Slurm is configured on the UAntwerpen-HPC such that common PBS_* environment variables are defined in the job environment, alongside their Slurm equivalents.
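
For instance, a job script like the minimal sketch below (file names are just examples) can keep using the PBS-style variables even though Slurm runs it in the backend:

#!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=1:00:00\n\n# change to the directory from which the job was submitted\ncd $PBS_O_WORKDIR\n\n# tag the output file with the job ID, just like in example.sh above\npython script.py > script.out.${PBS_JOBID}\n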

"}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

Similarly to the --dryrun example, we start by running the following command:

$ qsub --debug example.sh -N example\n

which generates this output:

DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

"}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

Below is a list of the most common and useful directives.

Option System type Description -k All Send \"stdout\" and/or \"stderr\" to your home directory when the job runs #PBS -k o or #PBS -k e or #PBS -koe -l All Precedes a resource request, e.g., processors, wallclock -M All Send an e-mail message to an alternative e-mail address #PBS -M me@mymail.be -m All Send an e-mail message when a job begins execution and/or ends or aborts #PBS -m b or #PBS -m be or #PBS -m ba mem Shared Memory Specifies the amount of memory you need for a job. #PBS -l mem=90gb mpiprocs Clusters Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. #PBS -l mpiprocs=4 -N All Give your job a unique name #PBS -N galaxies1234 -ncpus Shared Memory The number of processors to use for a shared memory job. #PBS -l ncpus=4 -r All Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. #PBS -r n or #PBS -r y select Clusters Number of compute nodes to use. Usually combined with the mpiprocs directive #PBS -l select=2 -V All Make sure that the environment in which the job runs is the same as the environment in which it was submitted #PBS -V walltime All The maximum time a job can run before being stopped. If not used, a default of a few minutes is used. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

TORQUE-related environment variables in batch job scripts.

# Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.
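
For example, a minimal job script (with a hypothetical program name) that respects this rule looks as follows: all #PBS directives come first, followed by the executable commands.

#!/bin/bash\n#PBS -N my_job\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=2:00:00\n\n# first executable line; any #PBS directives below this point would be ignored\ncd $PBS_O_WORKDIR\n./my_program\n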

When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

Variable Description PBS_ENVIRONMENT set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. PBS_JOBID the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat. PBS_JOBNAME the job name supplied by the user PBS_NODEFILE the name of the file that contains the list of the nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc. PBS_QUEUE the name of the queue from which the job is executed PBS_O_HOME value of the HOME variable in the environment in which qsub was executed PBS_O_LANG value of the LANG variable in the environment in which qsub was executed PBS_O_LOGNAME value of the LOGNAME variable in the environment in which qsub was executed PBS_O_PATH value of the PATH variable in the environment in which qsub was executed PBS_O_MAIL value of the MAIL variable in the environment in which qsub was executed PBS_O_SHELL value of the SHELL variable in the environment in which qsub was executed PBS_O_TZ value of the TZ variable in the environment in which qsub was executed PBS_O_HOST the name of the host upon which the qsub command is running PBS_O_QUEUE the name of the original queue to which the job was submitted PBS_O_WORKDIR the absolute path of the current working directory of the qsub command. This is the most useful one. Use it in every job script: the first thing you do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory. PBS_VERSION Version Number of TORQUE, e.g., TORQUE-2.5.1 PBS_MOMPORT active port for mom daemon PBS_TASKNUM number of tasks requested PBS_JOBCOOKIE job cookie PBS_SERVER Server Running TORQUE"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this in the subsections below.

"}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
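
A minimal sketch of such a scaling test, assuming an OpenMP application (the program and input names are hypothetical): submit the same script a few times, changing only the number of cores, and compare the reported run times.

#!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=1:00:00\n\ncd $PBS_O_WORKDIR\n# match the number of threads to the number of requested cores per node\nexport OMP_NUM_THREADS=4\ntime ./my_openmp_program input.dat\n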

Other reasons why using more cores may not lead to a (significant) speedup include:

More info on running multi-core workloads on the UAntwerpen-HPC can be found here.

"}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there are libraries that do this for you.

Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

An example of how you can make beneficial use of multiple nodes can be found here.

You can also use MPI in Python, some useful packages that are also available on the HPC are:

We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software we strongly advise using our mympirun tool.
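
As a sketch, a job script for MPI software started with mympirun could look like the example below (the module name vsc-mympirun and the program name are assumptions; check module avail on the cluster you use):

#!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=4:00:00\n\ncd $PBS_O_WORKDIR\nmodule load vsc-mympirun\n# mympirun detects the allocated nodes and cores and starts the MPI processes accordingly\nmympirun ./my_mpi_program\n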

"}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

"}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

If your job output contains an error message similar to this:

=>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.
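
If this happens, resubmit the job with a larger (but still realistic) walltime; as a sketch, this can also be done by overriding the walltime on the qsub command line (the script name here is just an example):

$ qsub -l walltime=24:00:00 fibo.pbs\n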

"}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

Sometimes a job hangs at some point or stops writing to the disk. These errors are usually related to quota usage: you may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to the disk again, and then resubmit the jobs.
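
To find out which directories are taking up the most space (and are candidates for cleanup or relocation), du can help; a quick sketch, assuming the usual $VSC_DATA and $VSC_SCRATCH variables are defined in your environment:

$ du -sh $VSC_DATA/* | sort -h\n$ du -sh $VSC_SCRATCH/* | sort -h\n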

"}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

If you have errors that look like:

vsc20167@login.hpc.uantwerpen.be: Permission denied\n

or you are experiencing problems with connecting, here is a list of things to do that should help:

  1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

  2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

  3. Your SSH private key may not be in the default location ($HOME/.ssh/id_rsa). There are several ways to deal with this (using one of these is sufficient):

    1. Use the ssh -i option (see section Connect) OR;
    2. Use ssh-add (see section Using an SSH agent) OR;
    3. Specify the location of the key in $HOME/.ssh/config. You will need to replace the VSC login id in the User field with your own:
      Host Leibniz\n    Hostname login.hpc.uantwerpen.be\n    IdentityFile /path/to/private/key\n    User vsc20167\n
      Now you can connect with ssh Leibniz.
  4. Please double/triple check your VSC login ID. It should look something like vsc20167: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

  5. Did you previously connect to the UAntwerpen-HPC from one machine, but are you now using a different machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

  6. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh. id_rsa.pub is the usual filename of the public key, id_rsa is the usual filename of the private key. (See also section\u00a0Connect)

  7. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

  8. Please do not use someone else's private keys. You must never share your private key; it's called private for a good reason.

If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@uantwerpen.be and include the following information:

Please add -vvv as a flag to ssh like:

ssh -vvv vsc20167@login.hpc.uantwerpen.be\n

and include the output of that command in the message.

"}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \n@     WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!    @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! \nSomeone could be\neavesdropping on you right now (man-in-the-middle attack)! \nIt is also possible that a host key has just been changed. \nThe fingerprint for the ECDSA key sent by the remote host is\nSHA256:1MNKFTfl1T9sm6tTWAo4sn7zyEfiWFLKbk/mlT+7S5s. \nPlease contact your system administrator. \nAdd correct host key in \u00a0~/.ssh/known_hosts to get rid of this message. \nOffending ECDSA key in \u00a0~/.ssh/known_hosts:21\nECDSA host key for login.hpc.uantwerpen.be has changed and you have requested strict checking.\nHost key verification failed.\n

You will need to remove the line it's complaining about (in the example, line 21). To do that, open ~/.ssh/known_hosts in an editor, and remove the line. This results in ssh \"forgetting\" the system you are connecting to.

Alternatively, you can use the command that may be shown by the warning (after the words remove with:), which should look something like this:

ssh-keygen -f \"~/.ssh/known_hosts\" -R \"login.hpc.uantwerpen.be\"\n

If the command is not shown, take the file name from the \"Offending ECDSA key in\" line, and the host name from the \"ECDSA host key for\" line.

After you've done that, you'll need to connect to the UAntwerpen-HPC again. See Warning message when first connecting to new host to verify the fingerprints.

"}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

If you get errors like:

$ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

or

sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

It's probably because you transferred the files from a Windows computer. See the section about dos2unix in Linux tutorial to fix this error.
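
In short, converting the job script on the cluster with dos2unix should fix it (the script name is just an example):

$ dos2unix fibo.pbs\n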

"}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "
$ ssh vsc20167@login.hpc.uantwerpen.be\nThe authenticity of host login.hpc.uantwerpen.be (<IP-adress>) can't be established. \n<algorithm> key fingerprint is <hash>\nAre you sure you want to continue connecting (yes/no)?\n

Now you can check the authenticity by verifying whether the fingerprint shown (in place of <hash> in the output above) matches one of the following lines:

{{ opensshFirstConnect }}\n

If it does, type yes. If it doesn't, please contact support: hpc@uantwerpen.be.

"}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

Note

Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

"}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.

"}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

See Generic resource requirements to set memory and other requirements, see Specifying memory requirements to finetune the amount of memory you request.
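
As a minimal sketch, a job that needs 16 GB of memory could include a directive like the one below; check the sections referenced above for the exact memory options supported on the cluster you use.

#PBS -l mem=16gb\n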

"}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

All the UAntwerpen-HPC clusters run some variant of the \"RedHat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

vsc20167@ln01[203] $\n

When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen joe Text editor

Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

$ echo This is a test\nThis is a test\n

Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the item or command \"ls\", by trying either of the following:

$ ls --help \n$ man ls\n$ info ls\n

(You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

"}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

Another very common scripting language is shell scripting.

Typically, the scripts in the following examples have one command per line, although it is possible to put multiple commands on one line. A very simple example of a script may be:

echo \"Hello! This is my hostname:\" \nhostname\n

You can type both lines at your shell prompt, and the result will be the following:

$ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

Suppose we want to call this script \"foo\". You open a new file for editing, and name it \"foo\", and edit it with your favourite editor

$ vi foo\n

or use the following commands:

echo \"echo Hello! This is my hostname:\" > foo\necho hostname >> foo\n

The easiest way to run a script is to start the interpreter and pass the script as a parameter. In the case of our script, the interpreter may either be \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

$ bash foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

Congratulations, you just created and started your first shell script!

A more advanced way of executing your shell scripts is to make them executable on their own, so that you don't have to invoke the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to specify this in some way. The easiest way is to use the so-called \"shebang\" notation, created explicitly for this purpose: you put the following line at the top of your shell script: \"#!/path/to/your/interpreter\".

You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

$ which bash\n/bin/bash\n

We edit our script and change it with this information:

#!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

chmod +x foo\n

Now you can start your script by simply executing it:

$ ./foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

The same technique can be used for all other scripting languages, like Perl and Python.

Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

"}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg A process running in the background will be processed in the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Modify properties for users"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

Through this web portal, you can:

More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

"}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

"}, {"location": "web_portal/#login", "title": "Login", "text": "

When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

"}, {"location": "web_portal/#first-login", "title": "First login", "text": "

The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

Please click \"Authorize\" here.

This request will only be made once, you should not see this again afterwards.

"}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

Once logged in, you should see this start page:

This page includes a menu bar at the top, with buttons on the left providing access to the different features supported by the web portal, as well as a Help menu, your VSC account name, and a Log Out button on the top right, and the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

"}, {"location": "web_portal/#features", "title": "Features", "text": "

We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

"}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

Here you can:

For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

"}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

"}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

A new browser tab will be opened that shows all your current queued and/or running jobs:

You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

Jobs that are still queued or running can be deleted using the red button on the right.

Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

"}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

Don't forget to actually submit your job to the system via the green Submit button!

"}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

"}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

To exit the shell session, type exit followed by Enter and then close the browser tab.

Note that you cannot access a shell session after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).
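
A minimal screen workflow, as a sketch (the session name is arbitrary): start a named session, detach with Ctrl-A d, and re-attach later from a new shell session on the same login node.

$ screen -S mysession\n# ... press Ctrl-A d to detach ...\n$ screen -r mysession\n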

"}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

"}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

See dedicated page on Jupyter notebooks

"}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

In case of problems with the web portal, it could help to restart the web server running in your VSC account.

You can do this via the Restart Web Server button under the Help menu item:

Of course, this only affects your own web portal session (not those of others).

"}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": ""}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

  1. A graphical remote desktop that works well over low bandwidth connections.

  2. Copy/paste support from client to server and vice-versa.

  3. File sharing from client to server.

  4. Support for sound.

  5. Printer sharing from client to server.

  6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

"}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. This section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

"}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

After the X2Go client installation just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

There are two ways to connect to the login node:

"}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

This is the easiest way to set up X2Go: a direct connection to the login node.

  1. Include a session name. This will help you to identify the session if you have more than one, you can choose any name (in our example \"HPC login node\").

  2. Set the login hostname (In our case: \"login.hpc.uantwerpen.be\")

  3. Set the Login name. In the example it is \"vsc20167\", but you must change it to your own VSC account.

  4. Set the SSH port (22 by default).

  5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key into the \"Use RSA/DSA key..\" field. In this case:

    1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

  6. Check \"Try autologin\" option.

  7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

    1. [optional]: Set a single application like Terminal instead of XFCE desktop.

  8. [optional]: Change the session icon.

  9. Click the OK button after these changes.

"}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

This option is useful if you want to resume a previous session or if you want to explicitly set the login node to use. In this case you should include a few more options. Use the same Option A setup, but with these changes:

  1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

  2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"ln2.leibniz.uantwerpen.vsc\")

  3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

    1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

    2. Set Host to \"login.hpc.uantwerpen.be\" within \"Proxy Server\" section as well.

    3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key within \"RSA/DSA key\" field within \"Proxy Server\" as you did for the server configuration (The \"RSA/DSA key\" field must be set in both sections)

    4. Click the OK button after these changes.

"}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. You can terminate a session by logging out from the currently open session or by clicking the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

X2Go will keep the session open for you (but only if the login node is not rebooted).

"}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

If you want to re-connect to the same login node, or resume a previous session, you need to know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

hostname\n

This will give you the full login node name (like \"ln2.leibniz.uantwerpen.vsc\", but the hostname in your situation may be slightly different). Use the same name the next time to resume the session. Just add this full hostname into the \"login hostname\" field in your X2Go session (see Option B: use the login node as SSH proxy).

"}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select the session and terminate it. Then close the session, choose the XFCE session again (or whatever you use), and you should get your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

"}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

"}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

Loads MNIST datasets and trains a neural network to recognize hand-written digits.

Runtime: ~1 min. on 8 cores (Intel Skylake)

See https://www.tensorflow.org/tutorials/quickstart/beginner

"}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

These skills are important to the UAntwerpen-HPC, which operates on RedHat Enterprise Linux. For more information see introduction to HPC.

The guide aims to make you familiar with the Linux command line environment quickly.

The tutorial goes through the following steps:

  1. Getting Started
  2. Navigating
  3. Manipulating files and directories
  4. Uploading files
  5. Beyond the basics

Do not forget Common pitfalls, as this can save you some troubleshooting.

"}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

"}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

To redirect output to files (or read input from them), you can use the redirection operators: >, >>, &>, and <.

First, it's important to make a distinction between two different output channels:

  1. stdout: standard output channel, for regular output

  2. stderr: standard error channel, for errors and warnings

"}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

> writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

$ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

>> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

$ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

"}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

< feeds the contents of a file to a command's standard input, as if you had typed it into the terminal. < somefile.txt is largely equivalent to cat somefile.txt |.

One common use might be to take the results of a long-running command and store the results in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in that file list when you are done:

$ find . -name '*.txt' > files\n$ xargs grep banana < files\n

"}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

To redirect the stderr output (warnings, messages), you can use 2>, just like >

$ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

"}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

$ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

"}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

$ ls | wc -l\n    42\n

A common pattern is to pipe the output of a command to less so you can examine or search the output:

$ find . | less\n

Or to look through your command history:

$ history | less\n

You can put multiple pipes in the same line. For example, which cp commands have we run?

$ history | grep cp | less\n

"}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

The shell will expand certain things, including:

  1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

  2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

  3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

  4. square brackets can be used to list a number of options for a particular character position; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.
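
A quick way to see this in action, using throwaway files created with touch:

$ touch report.o5 report.e5 report.o52\n$ ls *.[oe][0-9]\nreport.e5  report.o5\n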

"}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

ps lists running processes. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

$ ps -fu $USER\n

To see all the processes:

$ ps -elf\n

To see all the processes in a forest view, use:

$ ps auxf\n

The last two will spit out a lot of data, so get in the habit of piping it to less.

pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is useful for feeding the PIDs into other commands, as we will see in the next section.
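
For example (the output PIDs shown here are just illustrative), to list the IDs of all your running bash processes:

$ pgrep -u $USER bash\n12345\n13579\n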

"}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill sends a signal (SIGTERM by default) to the process to ask it to stop.

$ kill 1234\n$ kill $(pgrep misbehaving_process)\n

Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignores your signal, you can send it a different signal (SIGKILL) which the OS will use to unceremoniously terminate the process:

$ kill -9 1234\n

"}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will refresh its display every few seconds and has a few interesting commands.

To see only your processes, type u and your username after starting top (you can also do this with top -u $USER). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

To exit top, use q (for 'quit').

For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

"}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

$ ulimit -a\n
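
To inspect a single limit, you can pass the corresponding flag; for example, -u shows the maximum number of processes you may run (the value shown here is just illustrative):

$ ulimit -u\n4096\n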

"}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

$ wc example.txt\n      90     468     3189   example.txt\n

The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

To only count the number of lines, use wc -l:

$ wc -l example.txt\n      90    example.txt\n

"}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

grep is an important command. It was originally an abbreviation for \"globally search for a regular expression and print\", but it has entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give it a pattern and a list of files.

$ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.
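
A few everyday options are also worth knowing (a small sketch; fruit.txt and the output are illustrative): -i ignores case, -n prints line numbers, and -r searches recursively through directories:

$ grep -i -n banana fruit.txt\n3:Banana\n$ grep -r banana .\n./fruit.txt:Banana\n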

"}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

cut is used to pull fields out of files or piped streams. It's useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from an (unquoted) CSV file (comma-separated values, so -d ',': delimited by commas), you can use the following:

$ cut -f 1 -d ',' mydata.csv\n
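
Combining the two (a sketch with a hypothetical mydata.csv): grep first selects the lines containing banana, and cut then keeps only the second column of those lines:

$ grep banana mydata.csv | cut -f 2 -d ','\n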

"}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

$ sed 's/oldtext/newtext/g' myfile.txt\n

By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
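
A slightly safer variant (a sketch, assuming the GNU sed found on most Linux systems) keeps a backup copy of the original file next to the edited one:

$ sed -i.bak 's/oldtext/newtext/g' myfile.txt   # original kept as myfile.txt.bak\n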

"}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

awk is a small programming language that allows much more advanced stream editing than sed. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

$ awk '{print $4}' mydata.dat\n

You can use -F ':' to change the delimiter (F for field separator).

The next example is used to sum numbers from a field:

$ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

"}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script that does the same. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

However, there are some rules you need to abide by.

Here is a very detailed guide should you need more information.

"}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the system which program should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can simply copy-paste it and need not worry about it further. It is, however, very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

#!/bin/sh\n
#!/bin/bash\n
#!/usr/bin/env bash\n
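
As a small illustration (the script name hello.sh is hypothetical), a script starts with the shebang, is made executable once with chmod, and can then be run directly:

$ cat hello.sh\n#!/bin/bash\necho \"Hello from a script\"\n$ chmod +x hello.sh\n$ ./hello.sh\nHello from a script\n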

"}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n

Or you only want to do something if a file exists:

if [ -f filename ]\nthen\necho \"it exists\"\nfi\n

Or only if a certain variable is bigger than one:

if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n

Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or preceded by a semicolon). It is best to just copy this example and modify it.

In the initial example, we used -d to test whether a directory exists. There are several more checks like this.
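
For instance (a small sketch with a hypothetical file name), -f tests whether a regular file exists, and an else branch handles the opposite case:

if [ -f results.txt ]\nthen\necho \"results found\"\nelse\necho \"no results yet\"\nfi\n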

Another useful example is to test if a variable contains a value (so it's not empty):

if [ -z \"$PBS_ARRAYID\" ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty (quoting the variable keeps the test well-formed even when it is).

"}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

Let's look at a simple example:

for i in 1 2 3\ndo\necho $i\ndone\n
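
Loops are also handy for processing several files in the same way; a small sketch (the .txt files are hypothetical):

for f in *.txt\ndo\necho \"Processing $f\"\nwc -l \"$f\"\ndone\n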

"}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

Subcommands are used all the time in shell scripts. What they do is store the output of a command in a variable, which can later be used in a conditional or a loop, for example.

CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
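
As a small sketch (not from the original text) of combining a subcommand with a conditional, you could count the entries in the current directory and warn when there are none:

NUMFILES=$(ls | wc -l)\nif [ \"$NUMFILES\" -eq 0 ]\nthen\necho \"No files found\"\nfi\n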

"}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

Firstly a useful thing to know for debugging and testing is that you can run any command like this:

command > output.log 2>&1   # one single output file, both output and errors\n

If you add > output.log 2>&1 at the end of any command, it will combine stdout and stderr and write both into a single file named output.log. Note that the order matters: the 2>&1 part must come after the > output.log redirection.

If you want regular and error output separated you can use:

command > output.log 2> output.err  # errors in a separate file\n

This will write regular output to output.log and error output to output.err.

You can then look for the errors with less or search for specific text with grep.

In scripts, you can use:

set -e\n

This tells the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failing command will most likely cause the rest of the script to fail as well.
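
As a small illustration (the file name is hypothetical): with set -e, the script below stops at the failing cp and the final echo is never executed:

#!/bin/bash\nset -e\ncp missing_input.txt $VSC_SCRATCH/   # fails if the file doesn't exist\necho \"this line is never reached\"\n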

"}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds the exit status of that command. A value other than zero signifies that something went wrong. An example use case:

command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

"}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

If you have certain commands executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

Examples include:

Some recommendations:

"}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

"}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
"}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

#PBS -l nodes=1:ppn=1 # single-core\n

For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

#PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

We intend to submit it to the long queue:

#PBS -q long\n

We request a total running time of 48 hours (2 days).

#PBS -l walltime=48:00:00\n

We specify a desired name of our job:

#PBS -N FreeSurfer_per_subject-time-longitudinal\n
This specifies mail options:
#PBS -m abe\n

  1. a means mail is sent when the job is aborted.

  2. b means mail is sent when the job begins.

  3. e means mail is sent when the job ends.

Joins error output with regular output:

#PBS -j oe\n

All of these options can also be specified on the command line and will override any pragmas present in the script.
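
For example (a sketch; the script name is hypothetical), requesting a different walltime at submission time overrides the value set in the script:

$ qsub -l walltime=24:00:00 jobscript.sh\n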

"}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
  1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

  2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

  3. How many files and directories are in /tmp?

  4. What's the name of the 5th file/directory in alphabetical order in /tmp?

  5. List all files that start with t in /tmp.

  6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

  7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

"}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

"}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

If you receive an error message which contains something like the following:

No such file or directory\n

It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

Try and figure out the correct location using ls, cd and using the different $VSC_* variables.

"}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

$ cat some file\nNo such file or directory 'some'\n

Spaces are permitted; however, they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

$ cat some\\ file\n...\n$ cat \"some file\"\n...\n

This is especially error-prone if you are piping results of find:

$ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

This can be worked around using the -print0 flag:

$ find . -type f -print0 | xargs -0 cat\n...\n

But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

"}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

$ rm -r ~/$PROJETC/*\n
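
One defensive habit (a sketch, not from the original text) is to use the ${VAR:?message} form, which makes the shell abort with an error instead of silently expanding an unset variable to an empty string (the exact error message may look slightly different):

$ rm -r ~/${PROJETC:?variable not set}/*\nbash: PROJETC: variable not set\n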

"}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

$ #rm -r ~/$POROJETC/*\n
Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

"}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
$ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

$ chmod +x script_name.sh\n

"}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

If you need help about a certain command, you should consult its so-called \"man page\":

$ man command\n

This will open the manual of this command. This manual contains a detailed explanation of all the options the command has. Exiting the manual is done by pressing 'q'.

Don't be afraid to contact hpc@uantwerpen.be. They are here to help and will do so for even the smallest of problems!

"}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
  1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

  2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

  3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

  4. basic shell usage

  5. Bash for beginners

  6. MOOC

Please don't hesitate to contact hpc@uantwerpen.be in case of questions or problems.

"}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

"}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

To get help:

  1. use the documentation available on the system, through the help, info and man commands (use q to exit).
    help cd \ninfo ls \nman cp \n
  2. use Google

  3. contact hpc@uantwerpen.be in case of problems or questions (even for basic things!)

"}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining to you what went wrong. Read this carefully and try to act on it. Try googling the error first to find any possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@uantwerpen.be.

"}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

You use the shell by executing commands, and hitting <enter>. For example:

$ echo hello \nhello \n

You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

To go through previous commands, use <up> and <down>, rather than retyping them.

"}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

$ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

"}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

"}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

At the prompt we also have access to shell variables, which have both a name and a value.

They can be thought of as placeholders for things we need to remember.

For example, to print the path to your home directory, we can use the shell variable named HOME:

$ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

This prints the value of this variable.

"}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

$ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

$ env | sort | grep VSC\n

But we can also define our own. This is done with the export command (note: variable names are always all-caps by convention):

$ export MYVARIABLE=\"value\"\n

It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

If we then do

$ echo $MYVARIABLE\n

this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

"}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

You can change what your prompt looks like by redefining the special-purpose variable $PS1.

For example: to include the current location in your prompt:

$ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

Note that ~ is a short representation of your home directory.

To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

$ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

"}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

This may lead to surprising results, for example:

$ export WORKDIR=/tmp/test \n$ cd $WROKDIR\n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

To understand what's going on here, see the section on cd below.

The moral here is: be very careful to not use empty variables unintentionally.

Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

The -e option will result in the script getting stopped if any command fails.

The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)

More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

"}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

"}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

Basic information about the system you are logged into can be obtained in a variety of ways.

We limit ourselves to determining the hostname:

$ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

And querying some basic information about the Linux kernel:

$ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

"}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "

The next chapter teaches you how to navigate.

"}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the UAntwerpen-HPC for a list of available locations.

"}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

To figure out where your quota is being spent, the du (disk usage) command can come in useful:

$ du -sh test\n59M test\n

Do not (frequently) run du on directories where large amounts of data are stored, since that will:

  1. take a long time

  2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

"}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

Software is provided through so-called environment modules.

The most commonly used commands are:

  1. module avail: show all available modules

  2. module avail <software name>: show available modules for a specific software name

  3. module list: show list of loaded modules

  4. module load <module name>: load a particular module

More information is available in section Modules.

"}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

Detailed information is available in section submitting your job.

"}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

Hint: python -c \"print(sum(range(1, 101)))\"

"}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands short to type.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

$ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
$ cp source target\n

This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

$ cp -r sourceDirectory target\n

A last more complicated example:

$ cp -a sourceDirectory target\n

Here we used the same cp command, but instead we gave it the -a option which tells cp to copy all the files and keep timestamps and permissions.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
$ mkdir directory\n

which will create a directory with the given name inside the current directory.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
$ mv source target\n

mv will move the source path to the destination path. It works for both directories and files.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

$ rm filename\n
rm will remove a file (rm -rf directory will remove a given directory and every file inside it). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

You can remove directories using rm -r directory; however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally remove the (now empty) directory with:

$ rmdir directory\n

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

  1. User - a particular user (account)

  2. Group - a particular group of users (may be a user-specific group with only one member)

  3. Other - other users in the system

The permission types are:

  1. Read - For files, this gives permission to read the contents of a file

  2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

  3. Execute - For files, this gives permission to execute a file as though it were a script. For directories, it allows users to open the directory and look at the contents.

Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

Here, we see that articleTable.csv is a file (beginning the line with -) has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx). So that user can look into the directory and add or remove files. Users in the mygroup can also look into the directory and read the files. But they can't add or remove files (r-x). Finally, other users can read files in the directory, but other users have no permissions to look in the directory at all (---).

Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

$ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

$ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

This will give the user otheruser permission to write to Project_GoldenDragon.

Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

See https://linux.die.net/man/1/setfacl for more information.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

$ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

$ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

$ unzip myfile.zip\n

If we would like to make our own zip archive, we use zip:

$ zip myfiles.zip myfile1 myfile2 myfile3\n

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

$ tar -xf tarfile.tar\n

Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

$ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

# cp, ln: <source(s)> <target>\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: <target> <source(s)>\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

If you use tar with the source files first then the first file will be overwritten. You can control the order of arguments of tar if it helps you remember:

$ tar -c source1 source2 source3 -f tarfile.tar\n
"}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
  1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

  2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

  3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

  4. Remove the another/test directory with a single command.

  5. Rename test to test2. Move test2/hostname.txt to your home directory.

  6. Change the permission of test2 so only you can access it.

  7. Create an empty job script named job.sh, and make it executable.

  8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

The next chapter is on uploading files, especially important when using HPC-infrastructure.

"}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories. A very important skill.

"}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

To print the current directory, use pwd or $PWD:

$ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

"}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

A very basic and commonly used command is ls, which can be used to list files and directories.

In its basic usage, it just prints the names of files and directories in the current directory. For example:

$ ls\nafile.txt some_directory \n

When provided an argument, it can be used to list the contents of a directory:

$ ls some_directory \none.txt two.txt\n

A couple of commonly used options include:

If you try to use ls on a file that doesn't exist, you will get a clear error message:

$ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
"}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

To change to a different directory, you can use the cd command:

$ cd some_directory\n

To change back to the previous directory you were in, there's a shortcut: cd -

Using cd without an argument results in returning back to your home directory:

$ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

"}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

The file command can be used to inspect what type of file you're dealing with:

$ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
"}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

An absolute filepath starts with / (or a variable whose value starts with /), which is also called the root of the filesystem.

Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

There are two special relative paths worth mentioning:

You can also use .. when constructing relative paths, for example:

$ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
"}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

Each file and directory has particular permissions set on it, which can be queried using ls -l.

For example:

$ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

The -rw-rw-r-- specifies both the type of file (- for files, d for directories (see first character)), and the permissions for user/group/others:

  1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
  2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read and write permission (but not execute)
  3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
  4. the 3rd part r-- indicates that other users only have read permissions

The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

  1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
  2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

See also the chmod command later in this manual.

"}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

find will crawl a series of directories and lists files matching given criteria.

For example, to look for the file named one.txt:

$ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * (for example by adding double quotes) to prevent Bash from expanding it into existing file names like afile.txt:

$ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
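
For instance (a sketch; the file names and output are illustrative), to count the lines of every .txt file that find locates:

$ find . -name \"*.txt\" -exec wc -l {} \\;\n5 ./afile.txt\n12 ./some_directory/one.txt\n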

"}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "

The next chapter will teach you how to interact with files and directories.

"}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

To transfer files from and to the HPC, see the section about transferring files of the HPC manual.

"}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

For example, you may see an error when submitting a job script that was edited on Windows:

sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

To fix this problem, you should run the dos2unix command on the file:

$ dos2unix filename\n
"}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

As we end up in the home directory when connecting, it would be convenient if we could access our data and VO storage. To facilitate this, we will create symlinks to them in our home directory. The commands below create symbolic links (they're like \"shortcuts\" on your desktop) pointing to the respective storage locations:

$ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
"}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ is the Control key, so ^O means Ctrl-O. The main commands are:

  1. Open (\"Read\"): ^R

  2. Save (\"Write Out\"): ^O

  3. Exit: ^X

More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

"}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

You will need to run rsync from a computer where it is installed. Installing rsync is the easiest on Linux: it comes pre-installed with a lot of distributions.

For example, to copy a folder with lots of CSV files:

$ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

will copy the folder testfolder and its contents to $VSC_DATA, assuming the data symlink is present in your home directory, see symlinks section.

The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.

To copy files to your local computer, you can also use rsync:

$ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
This will copy the folder bioset and its contents on $VSC_DATA to a local folder named local_folder.

See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

"}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
  1. Download the file /etc/hostname to your local computer.

  2. Upload a file to a subdirectory of your personal $VSC_DATA space.

  3. Create a file named hello.txt and edit it using nano.

Now you have a basic understanding, see next chapter for some more in depth concepts.

"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or when you want to check whether some specific software, some compiler or some application (e.g., LAMMPS) is installed on the UAntwerpen-HPC.

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be sure about the capitalisation of letters in the module name, we did a case-insensitive search using the \"-i\" option.

"}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": ""}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or when you want to check whether some specific software, some compiler or some application (e.g., LAMMPS) is installed on the UAntwerpen-HPC.

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be sure about the capitalisation of letters in the module name, we did a case-insensitive search using the \"-i\" option.

"}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

Or when you want to check whether some specific software, some compiler or some application (e.g., MATLAB) is installed on the UAntwerpen-HPC.

module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
"}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

(more info soon)

"}]} \ No newline at end of file +{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the UAntwerpen-HPC documentation", "text": "

Use the menu on the left to navigate, or use the search box on the top right.

You are viewing documentation intended for people using Linux.

Use the OS dropdown in the top bar to switch to a different operating system.

Quick links

If you find any problems in this documentation, please report them by mail to hpc@uantwerpen.be or open a pull request.

If you still have any questions, you can contact the UAntwerpen-HPC.

"}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": ""}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.

See also: Running batch jobs.

"}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.

"}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

Modules each come with a suffix that describes the toolchain used to install them.

Examples:

Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

You can use module avail [search_text] to see which versions on which toolchains are available to use.

It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

"}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

When incompatible modules are loaded, you might encounter an error like this:

{{ lmod_error }}\n

You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

See also: How do I choose the job modules?

"}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

The 72 hour walltime limit will not be extended. However, you can work around this barrier:

"}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

Try requesting a bit more memory than your proportional share, and see if that solves the issue.
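
For example (a sketch; the exact amount depends on your application and the available node memory), you could add a memory request to your job script like this:

#PBS -l mem=16gb\n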

See also: Specifying memory requirements.

"}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the amount of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

See also: Running interactive jobs.

"}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

"}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

There are a few possible causes why a job can perform worse than expected.

Is your job using all the cores you've requested? You can test this by increasing and decreasing the core count: if the execution time stays the same, the job was not using all cores. Some workloads simply don't scale well with more cores. If you expect the job to be highly parallelizable and you still encounter this problem, you may have missed a setting that enables multicore execution. See also: How many cores/nodes should I request?
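
One common example, assuming your program uses OpenMP for its multithreading (other parallel frameworks have their own settings), is to explicitly set the number of threads to the number of cores you requested. A minimal sketch, where my_openmp_program is a placeholder for your own executable:

#!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=1:0:0\n\ncd $PBS_O_WORKDIR\n\n# let an OpenMP program use as many threads as the 8 cores requested above\nexport OMP_NUM_THREADS=8\n\n./my_openmp_program\n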

Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively speaking, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: the job can copy the input to the scratch directory, then execute the computations, and finally copy the output back to the data directory. Using the home and data directories is especially problematic when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
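
A sketch of this copy-compute-copy pattern in a job script (the directory and program names are placeholders):

#!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=4:0:0\n\n# work in a job-specific subdirectory on the fast scratch filesystem\nWORKDIR=$VSC_SCRATCH/$PBS_JOBID\nmkdir -p $WORKDIR\n\n# copy the input from the (slower) data filesystem to scratch\ncp $VSC_DATA/myproject/input.dat $WORKDIR/\n\n# run the computation on scratch\ncd $WORKDIR\n./my_program input.dat > output.dat\n\n# copy the results back to the data filesystem\ncp output.dat $VSC_DATA/myproject/\n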

"}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
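
A minimal MPI job script along these lines might look as follows (my_mpi_program and the requested resources are placeholders; adjust them to your own program and needs):

#!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=1:0:0\n\n# mympirun is provided by the vsc-mympirun module\nmodule load vsc-mympirun\n\ncd $PBS_O_WORKDIR\n\n# mympirun determines the number of MPI processes from the resources requested above\nmympirun ./my_mpi_program\n

Submit this script with qsub, as explained above.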

See also: Multi core jobs/Parallel Computing and Mympirun.

"}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

For example, we have a simple script (./hello.sh):

#!/bin/bash \necho \"hello world\"\n

And we run it like mympirun ./hello.sh --output output.txt.

To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

mympirun --output output.txt ./hello.sh\n
"}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

In practice, it's impossible to predict when your job(s) will start: most currently running jobs finish before their requested walltime expires, and new jobs submitted by other users may be assigned a higher priority than your job(s). You can use the squeue --start command to get an estimated start time for your jobs in the queue. Keep in mind that this is just an estimate.

"}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

When trying to create files, errors like this can occur:

No space left on device\n

The error \"No space left on device\" can mean two different things:

An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
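
For example, to pack a directory containing many small files into a single compressed tar file (which uses only one inode), assuming the directory is named results:

# create a compressed archive of the directory\ntar -czf results.tar.gz results/\n# verify that the archive contains the expected files before removing the originals\ntar -tzf results.tar.gz | head\nrm -rf results/\n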

If the problem persists, feel free to contact support.

"}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

NO. You are not allowed to share your VSC account with anyone else; it is strictly personal.

See https://pintra.uantwerpen.be/bbcswebdav/xid-23610_1

"}, {"location": "FAQ/#can-i-share-my-data-with-other-uantwerpen-hpc-users", "title": "Can I share my data with other UAntwerpen-HPC users?", "text": "

Yes, you can use the chmod or setfacl commands to change the permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

$ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc20167 mygroup      40 Apr 12 15:00 dataset.txt\n
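
If you prefer the classic permission bits over ACLs, chmod can be used instead. A minimal illustration, assuming otheruser is a member of the group that owns the file:

# give the group read permission on the file\nchmod g+r dataset.txt\n# check the resulting permissions\nls -l dataset.txt\n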

For more information about chmod or setfacl, see Linux tutorial.

"}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

"}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

Please send an e-mail to hpc@uantwerpen.be that includes:

If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
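
A sketch of installing a Python package in a virtual environment (the module version, directory and package names are placeholders; see the linked documentation for the recommended workflow):

# load a Python module first, so the virtual environment is built on top of it\nmodule load Python/3.10.4-GCCcore-11.3.0  # hypothetical version, check module avail Python\n# create and activate a virtual environment in $VSC_DATA\npython -m venv $VSC_DATA/venvs/myproject\nsource $VSC_DATA/venvs/myproject/bin/activate\n# install the package you need into the virtual environment\npip install mypackage  # placeholder package name\n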

"}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

MacOS & Linux (on Windows, only the second part is shown):

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

"}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

A Virtual Organisation consists of a number of members and moderators. A moderator can:

One person can only be part of one VO, either as a member or as a moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

See also: Virtual Organisations.

"}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

The egrep command only lets through entries that match the regular expression [0-9]{3}M|[0-9]G, which corresponds to files and directories that consume 100 MB or more.

"}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

"}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

A lot of tasks can be performed without sudo, including installing software in your own account.

Installing software

"}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

Who can I contact?

"}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

"}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

"}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

Before using HOD, you first need to load the hod module. We don't specify a version here because newer versions might include important bug fixes (this is an exception; for most other modules you should specify a version, see Using explicit version numbers).

module load hod\n
"}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

The hod modules are constructed such that they can be used on the UAntwerpen-HPC login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

For example, this will work as expected:

$ module swap cluster/{{ othercluster }}\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

"}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

$ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

Because these environment variables are defined, you do not have to specify --hod-module and --workdir yourself when using hod batch or hod create, even though they are strictly required options.

If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
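
For example, to redefine both working directories after loading the hod module (the path shown is only an illustration):

# use a custom parent working directory for HOD\nexport HOD_BATCH_WORKDIR=$VSC_SCRATCH/my_hod_workdir\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/my_hod_workdir\n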

Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

"}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

After HOD clusters terminate, their local working directory and cluster information is typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

You should occasionally clean this up using hod clean:

$ module list\nCurrently Loaded Modulefiles:\n  1) cluster/{{ defaultcluster }}(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        433253.leibniz         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/433253.leibniz for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/{{ othercluster }}\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.{{ othercluster }}.gent.vsc  &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.{{ othercluster }}.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

"}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

If you have any questions, or are experiencing problems using HOD, you have a couple of options:

"}, {"location": "MATLAB/", "title": "MATLAB", "text": "

Note

To run a MATLAB program on the UAntwerpen-HPC you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

Compiling MATLAB programs is only possible on the interactive debug cluster, not on the UAntwerpen-HPC login nodes, where the resource limits w.r.t. memory and the maximum number of processes are too strict.

"}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

"}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

Compiling MATLAB code cannot be done on regular cluster workernodes, because they cannot access the MATLAB license server; as noted above, compilation should be done on the interactive debug cluster.

To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

$ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

First, we copy the magicsquare.m example that comes with MATLAB to example.m:

cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

To compile a MATLAB program, use mcc -mv:

mcc -mv example.m\nOpening log file:  /user/antwerpen/201/vsc20167/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/antwerpen/201/vsc20167/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/antwerpen/201/vsc20167/readme.txt\".\nGenerating file \"run_example.sh\".\n
"}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

"}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

export _JAVA_OPTIONS=\"-Xmx64M\"\n

The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it tries to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

Another possible issue is that the heap size is too small. This could result in errors like:

Error: Out of memory\n

A possible solution to this is by setting the maximum heap size to be bigger:

export _JAVA_OPTIONS=\"-Xmx512M\"\n
"}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers explicitly, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

parpool.m
% specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

See also the parpool documentation.

"}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

MATLAB_LOG_DIR=<OUTPUT_DIR>\n

where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

# create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\nexport MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

You should remove the directory at the end of your job script:

rm -rf $MATLAB_LOG_DIR\n
"}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

"}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

jobscript.sh
#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
"}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

"}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

First, log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

$ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'ln2.leibniz.uantwerpen.vsc:6 (vsc20167)' desktop is ln2.leibniz.uantwerpen.vsc:6\n\nCreating default startup script /user/antwerpen/201/vsc20167/.vnc/xstartup\nCreating default config /user/antwerpen/201/vsc20167/.vnc/config\nStarting applications specified in /user/antwerpen/201/vsc20167/.vnc/xstartup\nLog file is /user/antwerpen/201/vsc20167/.vnc/ln2.leibniz.uantwerpen.vsc:6.log\n

When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

Note down the details in bold: the hostname (in the example: ln2.leibniz.uantwerpen.vsc) and the (partial) port number (in the example: 6).

It's important to remember that VNC sessions are persistent: they survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (like the terminal equivalents screen or tmux). This also means you don't have to start vncserver each time you want to connect.

"}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

You can get a list of running VNC servers on a node with

$ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

This only displays the running VNC servers on the login node you run the command on.

To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

$ cd $HOME\n$ ls .vnc/*.pid\n.vnc/ln2.leibniz.uantwerpen.vsc:6.pid\n.vnc/ln1.leibniz.uantwerpen.vsc:8.pid\n

This shows that there is a VNC server running on ln2.leibniz.uantwerpen.vsc on port 5906 and another one running on ln1.leibniz.uantwerpen.vsc on port 5908 (see also Determining the source/destination port).

"}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

The VNC server runs on a login node (in the example above, on ln2.leibniz.uantwerpen.vsc).

In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

"}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

So, in our running example, both the source and destination ports are 5906.

"}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.uantwerpen.be (see Setting up the SSH tunnel(s)).

If the login node you end up on is a different one than the one where your VNC server is running (i.e., ln1.leibniz.uantwerpen.vsc rather than ln2.leibniz.uantwerpen.vsc in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to ln2.leibniz.uantwerpen.vsc, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

We will proceed with 12345 as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).

"}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcuantwerpenbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.uantwerpen.be", "text": "

First, we will set up the SSH tunnel from our workstation to login.hpc.uantwerpen.be.

Use the settings specified in the sections above:

Execute the following command to set up the SSH tunnel.

ssh -L 5906:localhost:12345  vsc20167@login.hpc.uantwerpen.be\n

Replace the source port 5906, destination port 12345 and user ID vsc20167 with your own!

With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

"}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

netstat -an | grep -i listen | grep tcp | grep 12345\n

If you see no matching lines, then the port you picked is still available, and you can continue.

If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

$ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
"}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.uantwerpen.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (ln2.leibniz.uantwerpen.vsc in our running example, see Starting a VNC server).

To do this, run the following command:

$ ssh -L 12345:localhost:5906 ln2.leibniz.uantwerpen.vsc\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (ln2.leibniz.uantwerpen.vsc).

Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (ln2.leibniz.uantwerpen.vsc) in the command shown above!

As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

"}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

Download and setup a VNC client. A good choice is tigervnc. You can start it with the vncviewer command.

Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

When prompted for a password, use the password you used to setup the VNC server.

When prompted for default or empty panel, choose default.

If you have an empty panel, you can reset your settings with the following commands:

xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
"}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

The VNC server can be killed by running

vncserver -kill :6\n

where 6 is the port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

"}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).
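
Putting these steps together, with 6 being the display number from our running example:

# stop the running VNC server (display :6 in the running example)\nvncserver -kill :6\n# remove the stored VNC password\nrm ~/.vnc/passwd\n# start a new VNC server; you will be prompted for a new password\nvncserver -geometry 1920x1080 -localhost\n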

"}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

All users of Antwerp University Association (AUHA) can request an account on the UAntwerpen-HPC, which is part of the Flemish Supercomputing Centre (VSC).

See HPC policies for more information on who is entitled to an account.

The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

There are two methods for connecting to UAntwerpen-HPC:

The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of utilizing the HPC-UGent web portal by reading Using the HPC-UGent web portal.

The UAntwerpen-HPC clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the UAntwerpen-HPC. Access to the UAntwerpen-HPC is granted to anyone who can prove that they have access to the corresponding private key on their local computer.

"}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "

Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). Launch a terminal from your desktop's application menu and you will see the bash shell. There are other shells, but most Linux distributions use bash by default.

"}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

\"Secure\" means that:

  1. the User is authenticated to the System; and

  2. the System is authenticated to the User; and

  3. all data is encrypted during transfer.

OpenSSH is a FREE implementation of the SSH connectivity protocol. Linux comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

$ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

To access the clusters and transfer your files, you will use the following commands:

  1. ssh-keygen: to generate the SSH key pair (public + private key);

  2. ssh: to open a shell on a remote machine;

  3. sftp: a secure equivalent of ftp;

  4. scp: a secure equivalent of the remote copy command rcp.

"}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

A key pair might already be present in the default location inside your home directory. Therefore, we first check if a key is available with the \"list short\" (\"ls\") command:

ls ~/.ssh\n

If a key-pair is already available, you would normally get:

authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

Otherwise, the command will show:

ls: .ssh: No such file or directory\n

You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

You will need to generate a new key pair, when:

  1. you don't have a key pair yet

  2. you forgot the passphrase protecting your private key

  3. your private key was compromised

  4. your key pair is too short or not the right type

For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key, it is private and should stay private. You should not even copy it to one of your other machines, instead, you should create a new public/private key pair for each machine.

ssh-keygen -t rsa -b 4096\n

This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

Without your key pair, you won't be able to apply for a personal VSC account.

"}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

Most recent Unix derivatives include by default an SSH agent (\"gnome-keyring-daemon\" in most cases) to keep and manage the user SSH keys. If you use one of these derivatives, you must add the new keys to the SSH agent keyring to be able to connect to the HPC cluster. If not, the SSH client will display an error message (see Connecting) similar to this:

Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

This could be fixed using the ssh-add command. You can include the new private keys' identities in your keyring with:

ssh-add\n

Tip

Without extra options ssh-add adds any key located at $HOME/.ssh directory, but you can specify the private key location path as argument, as example: ssh-add /path/to/my/id_rsa.

Check that your key is available from the keyring with:

ssh-add -l\n

After these changes the key agent will keep your SSH key to connect to the clusters as usual.

Tip

You should execute ssh-add command again if you generate a new SSH key.

Visit https://wiki.gnome.org/Projects/GnomeKeyring/Ssh for more information.

"}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

Visit https://account.vscentrum.be/

You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

Select \"Universiteit Antwerpen\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

Click Confirm

You will now be taken to the authentication page of your institute.

The site is only accessible from within the University of Antwerp domain, so the page won't load from, e.g., home. However, you can also get external access to the University of Antwerp domain using VPN. We refer to the Pintra pages of the ICT Department for more information.

"}, {"location": "account/#users-of-the-antwerp-university-association-auha", "title": "Users of the Antwerp University Association (AUHA)", "text": "

All users (researchers, academic staff, etc.) from the higher education institutions associated with University of Antwerp can get a VSC account via the University of Antwerp. There is not yet an automated form to request your personal VSC account.

Please e-mail the UAntwerpen-HPC staff to get an account (see Contacts information). You will have to provide a public ssh key generated as described above. Please attach your public key (i.e., the file named id_rsa.pub), which you will normally find in the .ssh subdirectory of your home directory (i.e., ~/.ssh/id_rsa.pub).

After you log in using your University of Antwerp login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

This file has been stored in the directory \"~/.ssh/\".

After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

"}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

Within one day, you should receive a Welcome e-mail with your VSC account details.

Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc20167\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

Now, you can start using the UAntwerpen-HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

"}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

  1. Create a new public/private SSH key pair from the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH.

  2. Go to https://account.vscentrum.be/django/account/edit

  3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

  4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

  5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

"}, {"location": "account/#computation-workflow-on-the-uantwerpen-hpc", "title": "Computation Workflow on the UAntwerpen-HPC", "text": "

A typical Computation workflow will be:

  1. Connect to the UAntwerpen-HPC

  2. Transfer your files to the UAntwerpen-HPC

  3. Compile your code and test it

  4. Create a job script

  5. Submit your job

  6. Wait while

    1. your job gets into the queue

    2. your job gets executed

    3. your job finishes

  7. Move your results

We'll take you through the different tasks one by one in the following chapters.

"}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

"}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

This chapter focuses specifically on the use of AlphaFold on the UAntwerpen-HPC. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

"}, {"location": "alphafold/#using-alphafold-on-uantwerpen-hpc", "title": "Using AlphaFold on UAntwerpen-HPC", "text": "

Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

$ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

To use AlphaFold, you should load a particular module, for example:

module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

Warning

When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

$ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

The directories located there indicate when the data was downloaded, so that this leaves room for providing updated datasets later.

At the time of writing, the latest version is 20230310.

Info

The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

The AlphaFold installations we provide have been modified a bit to facilitate the usage on UAntwerpen-HPC.

"}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

Use newest version

Do not forget to replace 20230310 with a more up to date version if available.

"}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

AlphaFold provides a script called run_alphafold.py.

A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

For more information about the script and options see this section in the official README.

READ README

It is strongly advised to read the official README provided by DeepMind before continuing.

"}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
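
For example, to let both tools use 8 cores in your job script (a simple illustration; whether this actually helps depends on your workload, see the note below):

# use 8 cores for hhblits and jackhmmer instead of the defaults (4 and 8)\nexport ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n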

Info

Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

"}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

Using --db_preset=full_dbs, the following runtime data was collected:

This highlights a couple of important attention points:

With --db_preset=casp14, it is clearly more demanding:

This highlights the difference between CPU and GPU performance even more.

"}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

The following example comes from the official Examples section in the Alphafold README. The run command is slightly different (see above: Running AlphaFold).

Do not forget to set up the environment (see above: Setting up the environment).

"}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

>sequence_name\n<SEQUENCE>\n

Then run the following command in the same directory:

alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

See AlphaFold output, for information about the outputs.

Info

For more scenarios see the example section in the official README.

"}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

The following two example job scripts can be used as a starting point for running AlphaFold.

The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

To run the job scripts you need to create a file named T1050.fasta with the following content:

>T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

"}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

Swap to the joltik GPU before submitting it:

module swap cluster/joltik\n
AlphaFold-gpu-joltik.sh
#!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
"}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

Jobscript that runs AlphaFold on CPU using 24 cores on one node.

AlphaFold-cpu-doduo.sh
#!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n

In case of problems or questions, don't hesitate to contact us at hpc@uantwerpen.be.

"}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

This documentation only covers aspects of using Apptainer on the UAntwerpen-HPC infrastructure.

"}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to avoid that the use of Apptainer impacts other users on the system.

The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running containers images provided via a URL (e.g., shub://... or docker://...) will not work.

If these limitations are a problem for you, please let us know via hpc@uantwerpen.be.

"}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

"}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

Creating new Apptainer/Singularity images or converting Docker images requires, by default, admin privileges, which are obviously not available on the UAntwerpen-HPC infrastructure. However, if you use the --fakeroot option, you can create new Apptainer/Singularity images or convert Docker images.

Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, such as the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of creating an Apptainer/Singularity container image:

# avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
"}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

"}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

Create a job script like:

#!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

Create an example my_script.sh:

#!/bin/bash\n\n# prime factors\nfactor 1234567\n
"}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
#!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

You can download linear_regression.py from the official Tensorflow repository.

"}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

It is also possible to execute MPI jobs within a container, but the following requirements apply:

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

For example, to compile an MPI example:

module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

Example MPI job script:

#!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
"}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
  1. Before starting, you should always check:

  2. Check your computer requirements upfront, and request the correct resources in your batch job script.

  3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

  4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

  5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

  6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted the job from with cd $PBS_O_WORKDIR is usually the first thing to do. You will start with your default environment, so don't forget to load the software with module load (a combined sketch is shown after this list).

  7. In case your job is not running, use \"checkjob\". It will show why your job is not yet running. Sometimes commands might time out when the scheduler is overloaded.

  8. Submit your job and wait (be patient) ...

  9. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

  10. The runtime is limited by the maximum walltime of the queues.

  11. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

  12. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

  13. And above all, do not hesitate to contact the UAntwerpen-HPC staff at hpc@uantwerpen.be. We're here to help you.

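As an illustration only, the following sketch of a job script combines several of these practices (requesting only what you need, moving to the submission directory, loading software, and doing I/O on the local scratch filesystem). The job name, input file and my_program executable are hypothetical, and foss is just an example module:

#!/bin/bash\n#PBS -N my-analysis\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=2:00:00\n\n# the job starts in $VSC_HOME, so move to the directory the job was submitted from\ncd $PBS_O_WORKDIR\n\n# load the software you need (module name is just an example)\nmodule load foss\n\n# do I/O-intensive work on the local scratch filesystem of the node\ncp input.dat $VSC_SCRATCH_NODE/\ncd $VSC_SCRATCH_NODE\n./my_program input.dat > output.dat\n\n# copy the results back before the job ends\ncp output.dat $PBS_O_WORKDIR/\n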
"}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

All nodes in the UAntwerpen-HPC cluster are running the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a specific version of RedHat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the UAntwerpen-HPC must first be compiled for CentOS Linux release 7.8.2003 (Core). It also means that you first have to install all the required external software packages on the UAntwerpen-HPC.

Most commonly used compilers are already pre-installed on the UAntwerpen-HPC and can be used straight away. Also, many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

"}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-uantwerpen-hpc", "title": "Check the pre-installed software on the UAntwerpen-HPC", "text": "

In order to check all the available modules and their version numbers that are pre-installed on the UAntwerpen-HPC, enter:

$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or when you want to check whether some specific software, compiler or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the capital letters in the module name, we looked for a case-insensitive match using the \"-i\" option.

When your required application is not available on the UAntwerpen-HPC please contact any UAntwerpen-HPC member. Be aware of potential \"License Costs\". \"Open Source\" software is often preferred.

"}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

To port a software-program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., RedHat Enterprise Linux on our UAntwerpen-HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

In some cases, software usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

Porting your code to the CentOS Linux release 7.8.2003 (Core) platform is the responsibility of the end-user.

"}, {"location": "compiling_your_software/#compiling-and-building-on-the-uantwerpen-hpc", "title": "Compiling and building on the UAntwerpen-HPC", "text": "

Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

All the UAntwerpen-HPC nodes run the same version of the Operating System, i.e. CentOS Linux release 7.8.2003 (Core). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

A typical process looks like:

  1. Copy your software to the login-node of the UAntwerpen-HPC

  2. Start an interactive session on a compute node;

  3. Compile it;

  4. Test it locally;

  5. Generate your job scripts;

  6. Test it on the UAntwerpen-HPC

  7. Run it (in parallel);

We assume you've copied your software to the UAntwerpen-HPC. The next step is to request your private compute node.

$ qsub -I\nqsub: waiting for job 433253.leibniz to start\n
"}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

We now list the directory and explore the contents of the \"hello.c\" program:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include \"stdio.h\"\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\n}\n

The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

We first need to compile this C-file into an executable with the gcc-compiler.

First, check the command line options for \"gcc\" (GNU C-Compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

$ gcc -help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc20167 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc20167  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc20167  130 Sep 16 11:39 hello.pbs*\n

A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

$ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

It seems to work, now run it on the UAntwerpen-HPC

qsub hello.pbs\n

"}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

List the directory and explore the contents of the \"mpihello.c\" program:

$ ls -l\ntotal 512\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

mpihello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nmain(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\n}\n

The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

First, check the command line options for \"mpicc\" (the GNU C-Compiler with MPI extensions), then compile and list the contents of the directory again:

mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

A new file \"hello\" has been created. Note that this program has \"execute\" rights.

Let's test this program on the \"login\" node first:

$ ./mpihello\nHello World from Node 0.\n

It seems to work, now run it on the UAntwerpen-HPC.

qsub mpihello.pbs\n
"}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

We will compile this C/MPI -file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

module purge\nmodule load intel\n

Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

mpiicc -o mpihello mpihello.c\nls -l\n

Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

$ ./mpihello\nHello World from Node 0.\n

It seems to work, now run it on the UAntwerpen-HPC.

qsub mpihello.pbs\n

Note: The Antwerp University Association (AUHA) only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Hereafter the overview for C, C++ and Fortran compilers.

| | Sequential Program (GNU) | Sequential Program (Intel) | Parallel Program with MPI (GNU) | Parallel Program with MPI (Intel) |\n| --- | --- | --- | --- | --- |\n| C | gcc | icc | mpicc | mpiicc |\n| C++ | g++ | icpc | mpicxx | mpiicpc |\n| Fortran | gfortran | ifort | mpif90 | mpiifort |"}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

Before you can really start using the UAntwerpen-HPC clusters, there are several things you need to do or know:

  1. You need to log on to the cluster using an SSH client to one of the login nodes, or by using the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

  2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

  3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

  4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

"}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

ssh_exchange_identification: read: Connection reset by peer\n
"}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

The remaining content in this chapter is primarily focused on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

If you have any issues connecting to the UAntwerpen-HPC after you've followed these steps, see Issues connecting to login node to troubleshoot. When connecting from outside Belgium, you need a VPN client to connect to the network first.

"}, {"location": "connecting/#connect", "title": "Connect", "text": "

Open up a terminal and enter the following command to connect to the UAntwerpen-HPC.

ssh vsc20167@login.hpc.uantwerpen.be\n

Here, user vsc20167 wants to make a connection to the \"Leibniz\" cluster at University of Antwerp via the login node \"login.hpc.uantwerpen.be\", so replace vsc20167 with your own VSC id in the above command.

The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

A possible error message you can get if you previously saved your private key somewhere else than the default location ($HOME/.ssh/id_rsa):

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

In this case, use the -i option for the ssh command to specify the location of your private key. For example:

ssh -i /home/example/my_keys vsc20167@login.hpc.uantwerpen.be\n

Congratulations, you're on the UAntwerpen-HPC infrastructure now! To find out where you have landed you can print the current working directory:

$ pwd\n/user/antwerpen/201/vsc20167\n

Your new private home directory is \"/user/antwerpen/201/vsc20167\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the UAntwerpen-HPC.

$ cd /apps/antwerpen/tutorials\n$ ls\nIntro-HPC/\n

This directory currently contains all training material for the Introduction to the UAntwerpen-HPC. More relevant training material to work with the UAntwerpen-HPC can always be added later in this directory.

You can now explore the content of this directory with the \"ls -l\" (list in long format) and the \"cd\" (change directory) commands:

As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

$ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

This directory contains:

  1. This HPC Tutorial (in either a Mac, Linux or Windows version).

  2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

cd examples\n

Tip

Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

Tip

For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

The first action is to copy the contents of the UAntwerpen-HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n
Upon connection, you will get a welcome message containing your last login timestamp and some pointers to information about the system. On Leibniz, the system will also show your disk quota.

Last login: Mon Feb  2 17:58:13 2015 from mylaptop.uantwerpen.be\n\n---------------------------------------------------------------\n\nWelcome to LEIBNIZ !\n\nUseful links:\n  https://vscdocumentation.readthedocs.io\n  https://vscdocumentation.readthedocs.io/en/latest/antwerp/tier2_hardware.html\n  https://www.uantwerpen.be/hpc\n\nQuestions or problems? Do not hesitate and contact us:\n  hpc@uantwerpen.be\n\nHappy computing!\n\n---------------------------------------------------------------\n\nYour quota is:\n\n                   Block Limits\n   Filesystem       used      quota      limit    grace\n   user             740M         3G       3.3G     none\n   data           3.153G        25G      27.5G     none\n   scratch        12.38M        25G      27.5G     none\n   small          20.09M        25G      27.5G     none\n\n                   File Limits\n   Filesystem      files      quota      limit    grace\n   user            14471      20000      25000     none\n   data             5183     100000     150000     none\n   scratch            59     100000     150000     none\n   small            1389     100000     110000     none\n\n---------------------------------------------------------------\n

You can exit the connection at anytime by entering:

$ exit\nlogout\nConnection to login.hpc.uantwerpen.be closed.\n

tip: Setting your Language right

You may encounter a warning message similar to the following one during connecting:

perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
or any other error message complaining about the locale.

This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

$ locale\nLANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier. Open the .bashrc on your local machine with your favourite editor and add the following lines:

$ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

tip: vi

To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can easily exit vi and save your changes by entering \"ESC :wq\". To exit vi without saving your changes, enter \"ESC :q!\".

or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

"}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is by using scp or sftp via the secure OpenSSH protocol. Linux ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

"}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the UAntwerpen-HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.

Open an additional terminal window and check that you're working on your local machine.

$ hostname\n<local-machine-name>\n

If you're still using the terminal that is connected to the UAntwerpen-HPC, close the connection by typing \"exit\" in the terminal window.

For example, we will copy the (local) file \"localfile.txt\" to your home directory on the UAntwerpen-HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc20167\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc20167@login.hpc.uantwerpen.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

$ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc20167@login.hpc.uantwerpen.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

Connect to the UAntwerpen-HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

$ pwd\n/user/antwerpen/201/vsc20167\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-Linux-Antwerpen.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

$ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc20167 Sep 11 09:53 intro-HPC-Linux-Antwerpen.pdf\n

Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

$ scp vsc20167@login.hpc.uantwerpen.be:./docs/intro-HPC-Linux-Antwerpen.pdf .\nintro-HPC-Linux-Antwerpen.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

The file has been copied from the HPC to your local computer.

It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

scp -r dataset vsc20167@login.hpc.uantwerpen.be:scratch\n

If you don't use the -r option to copy a directory, you will run into the following error:

$ scp dataset vsc20167@login.hpc.uantwerpen.be:scratch\ndataset: not a regular file\n
"}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

The sftp command is an equivalent of the ftp command, with the difference that it uses the secure SSH protocol to connect to the clusters.

One easy way of starting an sftp session is

sftp vsc20167@login.hpc.uantwerpen.be\n

Typical and popular commands inside an sftp session are:

| Command | Description |\n| --- | --- |\n| cd ~/examples/fibo | Move to the examples/fibo subdirectory on the remote machine (i.e., the UAntwerpen-HPC). |\n| ls | Get a list of the files in the current directory on the UAntwerpen-HPC. |\n| get fibo.py | Copy the file \"fibo.py\" from the UAntwerpen-HPC. |\n| get tutorial/HPC.pdf | Copy the file \"HPC.pdf\" from the UAntwerpen-HPC, which is in the \"tutorial\" subdirectory. |\n| lcd test | Move to the \"test\" subdirectory on your local machine. |\n| lcd .. | Move up one level in the local directory. |\n| lls | Get local directory listing. |\n| put test.py | Copy the local file test.py to the UAntwerpen-HPC. |\n| put test1.py test2.py | Copy the local file test1.py to the UAntwerpen-HPC and rename it to test2.py. |\n| bye | Quit the sftp session. |\n| mget *.cc | Copy all the remote files with extension \".cc\" to the local directory. |\n| mput *.h | Copy all the local files with extension \".h\" to the UAntwerpen-HPC. |"}, {"location": "connecting/#using-a-gui", "title": "Using a GUI", "text": "

If you prefer a GUI to transfer files back and forth to the UAntwerpen-HPC, you can use your file browser. Open your file browser and press Ctrl+l

This should open up an address bar where you can enter a URL. Alternatively, look for the \"connect to server\" option in your file browser's menu.

Enter: sftp://vsc20167@login.hpc.uantwerpen.be/ and press enter.

You should now be able to browse files on the UAntwerpen-HPC in your file browser.

"}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

See the section on rsync in chapter 5 of the Linux intro manual.

"}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

For instance, if you want to switch to the login node named ln2.leibniz.uantwerpen.vsc, you can use the following command while you are connected to the ln1.leibniz.uantwerpen.vsc login node on the HPC:

ssh ln2.leibniz.uantwerpen.vsc\n
This is also possible the other way around.

If you want to find out which login host you are connected to, you can use the hostname command.

$ hostname\nln2.leibniz.uantwerpen.vsc\n$ ssh ln1.leibniz.uantwerpen.vsc\n\n$ hostname\nln1.leibniz.uantwerpen.vsc\n

Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or in other online sources):
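
As a minimal sketch of how such a tool is typically used (assuming tmux is available on the login node, and using mysession as an arbitrary session name):

# start a named session on the login node\ntmux new -s mysession\n# ... run your commands, then detach with Ctrl-b d and log out ...\n# later: reconnect to the same login node and re-attach to the session\ntmux attach -t mysession\n# screen works similarly: screen -S mysession, detach with Ctrl-a d, re-attach with screen -r mysession\n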

"}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high-availability setup, users should add their cron scripts on the same login node, to avoid any duplication of cron jobs.

In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC account (see section Connecting).

Check if any cron script is already set on the current login node with:

crontab -l\n

At this point you can add or edit (with the vi editor) any cron script by running the command:

crontab -e\n
"}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
 15 5 * * * ~/runscript.sh >& ~/job.out\n

where runscript.sh has these lines in this example:

runscript.sh
#!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.
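
As a reminder of what the five scheduling fields mean, the entry above breaks down as follows:

# field order: minute hour day-of-month month day-of-week  command\n# 15 5 * * *  means: at minute 15 of hour 5 (05:15), every day of the month, every month, every day of the week\n15 5 * * * ~/runscript.sh >& ~/job.out\n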

Please note that you should log in to the same login node to edit your previously created crontab tasks. If that is not the case, you can always jump from one login node to another with:

ssh gligar07    # or gligar08\n
"}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

"}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

"}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

Before you use EasyBuild, you need to configure it:

"}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

This is where EasyBuild can find software sources:

EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
"}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

This is the directory in which EasyBuild will build software. To get good performance, this needs to be on a fast filesystem.

export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
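
For example, a small sketch of how you could point EasyBuild to that in-memory location (creating the directory first, since it may not exist yet):

export EASYBUILD_BUILDPATH=/dev/shm/$USER\nmkdir -p $EASYBUILD_BUILDPATH\n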

"}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

This is where EasyBuild will install the software (and accompanying modules) to.

For example, to let it use $VSC_DATA/easybuild, use:

export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.
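
For example, the same command with that substitution applied would look like:

export EASYBUILD_INSTALLPATH=$VSC_DATA_VO/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n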

"}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

module load EasyBuild\n
"}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

$ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

eb example-1.2.1-foss-2024a.eb --robot\n
"}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

To try to install example v1.2.5 with a different compiler toolchain:

eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
"}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

"}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

module use $EASYBUILD_INSTALLPATH/modules/all\n

It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or you want to load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux
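
A minimal sketch of what such a .bashrc addition could look like, collecting the configuration settings from this chapter (adjust the paths to your own setup):

# EasyBuild configuration\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n# make modules installed with EasyBuild available for loading\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n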

"}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

As UAntwerpen-HPC system administrators, we often observe that the UAntwerpen-HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

Users often tend to run their jobs without specifying specific PBS Job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can slow down the run time of your application, but also block UAntwerpen-HPC resources for other users.

Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the UAntwerpen-HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The UAntwerpen-HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

This chapter shows you how to measure:

  1. Walltime
  2. Memory usage
  3. CPU usage
  4. Disk (storage) needs
  5. Network bottlenecks

First, we allocate a compute node and move to our relevant directory:

qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
"}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

Test the time command:

$ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

The walltime can be specified in a job scripts as:

#PBS -l walltime=3:00:00:00\n

or on the command line

qsub -l walltime=3:00:00:00\n

It is recommended to always specify the walltime for a job.

"}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

The \"eat_mem\" application in the HPC examples directory just consumes and then releases memory, for the purpose of this test. It has one parameter, the amount of gigabytes of memory which needs to be allocated.

First compile the program on your machine and then test it for 1 GB:

$ gcc -o eat_mem eat_mem.c\n$ ./eat_mem 1\nConsuming 1 gigabyte of memory.\n
"}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the options \"-m\" to see the results expressed in Mega-Bytes and the \"-t\" option to get totals.

$ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

Important is to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

"}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

The \"Monitor\" tool monitors applications in terms of memory and CPU usage, as well as the size of temporary files. Note that currently only single node jobs are supported, MPI support may be added in a future release.

To start using monitor, first load the appropriate module. Then we study the \"eat_mem.c\" program and compile it:

$ module load monitor\n$ cat eat_mem.c\n$ gcc -o eat_mem eat_mem.c\n

Starting a program to monitor is very straightforward; you just add the \"monitor\" command before the regular command line.

$ monitor ./eat_mem 3\ntime (s) size (kb) %mem %cpu\nConsuming 3 gigabyte of memory.\n5  252900 1.4 0.6\n10  498592 2.9 0.3\n15  743256 4.4 0.3\n20  988948 5.9 0.3\n25  1233612 7.4 0.3\n30  1479304 8.9 0.2\n35  1723968 10.4 0.2\n40  1969660 11.9 0.2\n45  2214324 13.4 0.2\n50  2460016 14.9 0.2\n55  2704680 16.4 0.2\n60  2950372 17.9 0.2\n65  3167280 19.2 0.2\n70  3167280 19.2 0.2\n75  9264  0 0.5\n80  9264  0 0.4\n

Whereby:

  1. The first column shows you the elapsed time in seconds. By default, all values will be displayed every 5\u00a0seconds.
  2. The second column shows you the used memory in kb. We note that the memory slowly increases up to just over 3\u00a0GB (3GB is 3,145,728\u00a0KB), and is released again.
  3. The third column shows the memory utilisation, expressed in percentages of the full available memory. At full memory consumption, 19.2% of the memory was being used by our application. With the free command, we have previously seen that we had a node of 16\u00a0GB in this example. 3\u00a0GB is indeed more or less 19.2% of the full available memory.
  4. The fourth column shows you the CPU utilisation, expressed in percentages of a full CPU load. As there are no computations done in our exercise, the value remains very low (i.e.\u00a00.2%).

Monitor will write the CPU usage and memory consumption of the simulation to standard error.

By default, monitor samples the program's metrics every 5 seconds. Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a log file. The latter can be specified as follows:

$ monitor -l test1.log eat_mem 2\nConsuming 2 gigabyte of memory.\n$ cat test1.log\n

For long-running programs, it may be convenient to limit the output to, e.g., the last minute of the programs' execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

$ monitor -l test2.log -n 12 eat_mem 4\nConsuming 4 gigabyte of memory.\n

Note that this option is only available when monitor writes its metrics to a\u00a0log file, not when standard error is used.

The interval at\u00a0which monitor will show the metrics can be modified by specifying delta, the sample rate:

$ monitor -d 1 ./eat_mem 3\nConsuming 3 gigabyte of memory.\n

Monitor will now print the program's metrics every second. Note that the\u00a0minimum delta value is 1\u00a0second.

Alternative options to monitor the memory consumption are the \"top\" or the \"htop\" command.

top

provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

htop

is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.

"}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

Once you gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

Sequential or single-node applications:

The maximum amount of physical memory used by the job can be specified in a job script as:

#PBS -l mem=4gb\n

or on the command line

qsub -l mem=4gb\n

This setting is ignored if the number of nodes is not\u00a01.

Parallel or multi-node applications:

When you are running a parallel application over multiple cores, you can also specify the memory requirements per processor (pmem). This directive specifies the maximum amount of physical memory used by any process in the job.

For example, if the job would run four processes and each would use up to 2 GB (gigabytes) of memory, then the memory directive would read:

#PBS -l pmem=2gb\n

or on the command line

$ qsub -l pmem=2gb\n

(and of course this would need to be combined with a CPU cores directive such as nodes=1:ppn=4). In this example, you request 8\u00a0GB of memory in total on the node.
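
A sketch of how these directives could be combined in the header of a job script for that example (4 processes of up to 2 GB each, i.e., 8 GB in total on the node):

#PBS -l nodes=1:ppn=4\n#PBS -l pmem=2gb\n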

"}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are decently specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

"}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

The /proc/cpuinfo stores info about your CPU architecture like number of CPUs, threads, cores, information about CPU caches, CPU family, model and much more. So, if you want to detect how many cores are available on a specific machine:

$ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

Or if you want to see it in a more readable format, execute:

$ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
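
If you are only interested in the number of cores, you can also count them directly (a small sketch; nproc is part of GNU coreutils):

$ grep -c processor /proc/cpuinfo\n8\n$ nproc\n8\n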

Note

Unless you want information of the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

In order to specify the number of nodes and the number of processors per node in your job script, use:

#PBS -l nodes=N:ppn=M\n

or with equivalent parameters on the command line

qsub -l nodes=N:ppn=M\n

This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

"}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

The previously used \"monitor\" tool also shows the overall CPU-load. The \"eat_cpu\" program performs a multiplication of 2 randomly filled a (1500 \\times 1500) matrices and is just written to consume a lot of \"cpu\".

We first load the monitor modules, study the \"eat_cpu.c\" program and compile it:

$ module load monitor\n$ cat eat_cpu.c\n$ gcc -o eat_cpu eat_cpu.c\n

And then start to monitor the eat_cpu program:

$ monitor -d 1 ./eat_cpu\ntime  (s) size (kb) %mem %cpu\n1  52852  0.3 100\n2  52852  0.3 100\n3  52852  0.3 100\n4  52852  0.3 100\n5  52852  0.3  99\n6  52852  0.3 100\n7  52852  0.3 100\n8  52852  0.3 100\n

We notice that the program keeps the CPU nicely busy at 100%.

Some processes spawn one or more sub-processes. In that case, the metrics shown by monitor are aggregated over the process and all of its sub-processes (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100%.

Some (well, since this is a UAntwerpen-HPC Cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100%. When programs of this type are running on a computer with n cores, the CPU usage can go up to (\\text{n} \\times 100\\%).

This could also be monitored with the htop command:

htop\n
Example output:
  1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

The advantage of htop is that it shows you the cpu utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the cpu utilisation per processor with monitor and htop.
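
A sketch of how you could start those four instances from a single shell instead of four separate terminals (watch them with monitor or htop from another terminal):

# start 4 instances of eat_cpu in the background\nfor i in 1 2 3 4; do ./eat_cpu & done\n# ... inspect the CPU utilisation with htop in another terminal ...\n# wait until all background instances have finished\nwait\n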

If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by top found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

"}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you made full use of the CPU resources assigned to you and ensured that no CPUs in your node sit idle without reason.

But how can you maximise?

  1. Configure your software (e.g., to use exactly the number of processors available in a node; see the sketch after this list).
  2. Develop your parallel program in a smart way.
  3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
  4. Correct your request for CPUs in your job script.
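
As an illustration of the first point, a job script can derive the number of cores assigned to it from the $PBS_NODEFILE file that the resource manager provides, and pass that number on to the software (a minimal sketch; my_program and its --threads option are hypothetical, check your software's documentation):

#PBS -l nodes=1:ppn=8\n\n# $PBS_NODEFILE lists one line per core assigned to this job\nNP=$(wc -l < $PBS_NODEFILE)\n# pass the core count on to your software (my_program and --threads are hypothetical)\nmy_program --threads $NP\n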
"}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

The load averages differ from CPU percentage in two significant ways:

  1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
  2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
"}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

What is the \"optimal load\" rule of thumb?

The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load should be between 0.7 and 1.0 per processor.
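
You can compare the current load to the number of available cores yourself: the first three fields of /proc/loadavg are the one-, five- and fifteen-minute load averages, and nproc reports the number of cores on the node:

$ cat /proc/loadavg\n$ nproc\n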

In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time might be more than one per processor.

The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

  1. When you are running computational intensive applications, one application per processor will generate the optimal load.
  2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking which configuration yields the highest throughput. There is, however, currently no way on the UAntwerpen-HPC to dynamically specify the maximum number of applications that may run per core: the UAntwerpen-HPC scheduler will not launch more than one process per core.

How the cores are spread out over CPUs does not matter as far as the load is concerned: two quad-core CPUs perform similarly to four dual-core CPUs, which in turn perform similarly to eight single-core CPUs. For these purposes, it is all just eight cores.

"}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

The uptime command shows us the load averages:

$ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

$ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
You can also read it in the htop command.

"}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you made full use of the CPU resources assigned to you and ensured that no CPUs in your node sit idle without reason.

But how can you maximise?

  1. Profile your software to improve its performance.
  2. Configure your software (e.g., to use exactly the number of processors available in a node).
  3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
  4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
  5. Correct your request for CPUs in your job script.

And then check again.

"}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

Some programs generate intermediate or output files, the size of which may also be a useful metric.

Remember that your available disk space on the UAntwerpen-HPC online storage is limited, and that environment variables are available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.
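
You can quickly check where these variables point to in your current session, for example:

$ echo $VSC_DATA\n$ echo $VSC_SCRATCH\n$ echo $VSC_SCRATCH_NODE\n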

We first load the monitor modules, study the \"eat_disk.c\" program and compile it:

$ module load monitor\n$ cat eat_disk.c\n$ gcc -o eat_disk eat_disk.c\n

The monitor tool provides an option (-f) to display the size of one or more files:

$ monitor -f $VSC_SCRATCH/test.txt ./eat_disk\ntime (s) size (kb) %mem %cpu\n5  1276  0 38.6 168820736\n10  1276  0 24.8 238026752\n15  1276  0 22.8 318767104\n20  1276  0 25 456130560\n25  1276  0 26.9 614465536\n30  1276  0 27.7 760217600\n...\n

Here, the size of the file \"test.txt\" in directory $VSC_SCRATCH will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by \",\".
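
For example, to monitor two files at once, one given by an absolute path and one by a relative path, you could run something like the following (output.log and my_program are hypothetical example names):

$ monitor -d 30 -f $VSC_SCRATCH/test.txt,./output.log ./my_program  # output.log and my_program are hypothetical\n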

It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to the section How much disk space do I get? on Quotas to check your quota, and for tools to find out which files consume your quota.

Several actions can be taken, to avoid storage problems:

  1. Be aware of all the files that are generated by your program. Also check out the hidden files.
  2. Check your quota consumption regularly.
  3. Clean up your files regularly.
  4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, you can move your files once to the VSC_DATA directories (see the sketch after this list).
  5. Make sure your programs clean up their temporary files after execution.
  6. Move your output results to your own computer regularly.
  7. Anyone can request more disk space from the UAntwerpen-HPC staff, but you will have to duly justify your request.
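
The fourth action could, for instance, look as follows in a job script (a minimal sketch; input.dat, output.dat and my_program are hypothetical example names):

# stage in: copy the input data to the fast local disk of the compute node\n# (input.dat, output.dat and my_program are hypothetical example names)\ncp $VSC_DATA/input.dat $VSC_SCRATCH_NODE\ncd $VSC_SCRATCH_NODE\n# do all heavy reading and writing on the local disk\n$PBS_O_WORKDIR/my_program input.dat output.dat\n# stage out: copy the results back once, at the end of the job\ncp output.dat $VSC_DATA/\n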
"}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that they are losing a lot of time on inter-process communication.

Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high bandwidth, low latency network that enables large parallel jobs to run as efficiently as possible.

The parameter to add in your job script would be:

#PBS -l ib\n

If, for some reason, a user is fine with the gigabit Ethernet network, they can specify:

#PBS -l gbe\n
"}, {"location": "fine_tuning_job_specifications/#some-more-tips-on-the-monitor-tool", "title": "Some more tips on the Monitor tool", "text": ""}, {"location": "fine_tuning_job_specifications/#command-lines-arguments", "title": "Command Lines arguments", "text": "

Many programs, e.g., MATLAB, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

$ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m\n

The use of -- will ensure that monitor does not get confused by MATLAB's -nojvm and -nodisplay options.

"}, {"location": "fine_tuning_job_specifications/#exit-code", "title": "Exit Code", "text": "

Monitor will propagate the exit code of the program it is watching. If the latter ends normally, monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

When monitor terminates in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.
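
For example, if your own program already uses exit code 65, you could pick a different value for monitor before starting it and then inspect the resulting exit code (112 is just an arbitrary example value):

$ export MONITOR_EXIT_ERROR=112  # arbitrary example value\n$ monitor ./eat_cpu\n$ echo $?\n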

"}, {"location": "fine_tuning_job_specifications/#monitoring-a-running-process", "title": "Monitoring a running process", "text": "

It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

$ monitor -p 18749\n
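
If you do not know the process ID yet, you can look it up first, for instance for the eat_cpu program used earlier:

$ ps -u $USER -o pid,comm | grep eat_cpu\n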

Note that this feature can be (ab)used to monitor specific sub-processes.

"}, {"location": "getting_started/", "title": "Getting Started", "text": "

Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the UAntwerpen-HPC and submitting your very first job. We'll also walk you through the process step by step using a practical example.

In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

Before proceeding, read the introduction to HPC to gain an understanding of the UAntwerpen-HPC and related terminology.

"}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

To get access to the UAntwerpen-HPC, visit Getting an HPC Account.

If you have not used Linux before, please learn some basics first before continuing. (see Appendix C - Useful Linux Commands)

"}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
  1. Connect to the login nodes
  2. Transfer your files to the UAntwerpen-HPC
  3. Optional: compile your code and test it
  4. Create a job script and submit your job
  5. Wait for job to be executed
  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

"}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

There are two options to connect:

Since your operating system is Linux, it is recommended to use the ssh command in a terminal, as this gives you the most flexibility.

Assuming you have already generated SSH keys in the previous step (Getting Access), and that they are in a default location, you should now be able to login by running the following command:

ssh vsc20167@login.hpc.uantwerpen.be\n

Use your own VSC account id

Replace vsc20167 with your VSC account id (see https://account.vscentrum.be)

Tip

You can also still use the web portal (see shell access on web portal)

Info

If you run into problems, see the connection issues section on the troubleshooting page.

"}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

Now that you can login, it is time to transfer files from your local computer to your home directory on the UAntwerpen-HPC.

Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

On your local machine you can run:

curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

scp tensorflow_mnist.py run.sh vsc20167@login.hpc.uantwerpen.be:~\n

ssh  vsc20167@login.hpc.uantwerpen.be\n

Use your own VSC account id

Replace vsc20167 with your VSC account id (see https://account.vscentrum.be)

Info

For more information about transferring files or scp, see transfer files from/to hpc.

When running ls in your session on the UAntwerpen-HPC, you should see the two files listed in your home directory (~):

$ ls ~\nrun.sh tensorflow_mnist.py\n

If you do not see these files, make sure you uploaded them to your home directory.

"}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

Our job script looks like this:

run.sh

#!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
As you can see this job script will run the Python script named tensorflow_mnist.py.

The jobs you submit are by default executed on cluster/{{ defaultcluster }}; you can swap to another cluster by issuing the following command:

module swap cluster/{{ othercluster }}\n

Tip

When submitting jobs that only require a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

$ qsub run.sh\n433253.leibniz\n

This command returns a job identifier (433253.leibniz) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.
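
For example, you can use this identifier to query the status of that specific job, or to delete the job if it is no longer needed:

$ qstat 433253.leibniz\n$ qdel 433253.leibniz\n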

Make sure you understand what the module command does

Note that the module commands only modify environment variables. For instance, running module swap cluster/{{ othercluster }} will update your shell environment so that qsub submits a job to the {{ othercluster }} cluster, but your active shell session is still running on the login node.

It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are being executed: they will still be run on the login node you are on.

When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like {{ othercluster }}).

For detailed information about module commands, read the running batch jobs chapter.

"}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

You can get an overview of the active jobs using the qstat command:

$ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:00  Q {{ othercluster }}\n

Eventually, after entering qstat again you should see that your job has started running:

$ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:01  R {{ othercluster }}\n

If you don't see your job in the output of the qstat command anymore, your job has likely completed.

Read this section on how to interpret the output.

"}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

When your job finishes it generates 2 output files:

By default, these are located in the directory from which you issued qsub.

In our example, when running ls in the current directory, you should see 2 new files:
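
For example, with the job ID from above, the listing could look like this:

$ ls\nrun.sh  run.sh.e433253.leibniz  run.sh.o433253.leibniz  tensorflow_mnist.py\n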

Info

run.sh.e433253.leibniz should be empty (no errors or warnings).

Use your own job ID

Replace 433253.leibniz with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

When examining the contents of run.sh.o433253.leibniz you will see something like this:

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

Warning

When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

"}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "

For more examples see Program examples and Job script examples

"}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

module swap cluster/joltik\n

To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

module swap cluster/accelgor\n

Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

"}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).
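
For example, to start an interactive session of one hour with a single GPU (a quarter of a node, as in the job script example further below), you could use:

$ qsub -I -l nodes=1:ppn=quarter:gpus=1 -l walltime=1:00:00\n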

Note that, due to a bug in Slurm, you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@uantwerpen.be.

"}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

See https://www.ugent.be/hpc/en/infrastructure.

"}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

There are 2 main ways to ask for GPUs as part of a job:

Some background:

"}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

Some important attention points:

"}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

Use module avail to check for centrally installed software.

The subsections below only cover a couple of installed software packages, more are available.

"}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

Please consult module avail GROMACS for a list of installed versions.

"}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

Please consult module avail Horovod for a list of installed versions.

Horovod supports TensorFlow, Keras, PyTorch and MXNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mympirun. (Horovod also provides its own wrapper horovodrun; it is unclear whether it handles process placement and related aspects correctly.)

At least for simple TensorFlow benchmarks, Horovod appears to be a bit faster than the usual autodetected multi-GPU TensorFlow setup without Horovod, but this comes at the cost of the code modifications needed to use Horovod.

"}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

Please consult module avail PyTorch for a list of installed versions.

"}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

Please consult module avail TensorFlow for a list of installed versions.

Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

"}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
#!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
"}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

Please consult module avail AlphaFold for a list of installed versions.

For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

"}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

In case of questions or problems, please contact the UAntwerpen-HPC via hpc@uantwerpen.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

"}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor), jobs on this cluster should normally start more or less immediately. The trade-off is that the submitted jobs should not be performance-critical. This means that typical workloads for this cluster should be limited to:

"}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

module swap cluster/donphan\n

Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

"}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

Some limits are in place for this cluster:

In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

"}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

"}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

\"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

"}, {"location": "introduction/#what-is-the-uantwerpen-hpc", "title": "What is the UAntwerpen-HPC?", "text": "

The UAntwerpen-HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

The UAntwerpen-HPC relies on parallel-processing technology to offer University of Antwerp researchers an extremely fast solution for all their data processing needs.

The UAntwerpen-HPC consists of:

In technical terms ... in human terms: over 280 nodes and over 11000 cores, or the equivalent of 2750 quad-core PCs; over 500 Terabyte of online storage, or the equivalent of over 60000 DVDs; up to 100 Gbit InfiniBand fiber connections, or enough to transfer 3 DVDs per second.

The UAntwerpen-HPC currently consists of:

Leibniz:

  1. 144 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM, 120 GB local disk

  2. 8 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM, 120 GB local disk

  3. 24 \"hopper\" compute nodes (recovered from the former Hopper cluster) with 2 ten core Intel E5-2680v2 CPUs (Ivy Bridge generation, 2.8 GHz), 256 GB memory, 500 GB local disk

  4. 2 GPGPU nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU, 120 GB local disk

  5. 1 vector computing node with 1 12-core Intel Xeon Gold 6126 (Skylake generation, 2.6 GHz), 96 GB RAM and 2 NEC SX-Aurora Vector Engines type 10B (per card 8 cores @1.4 GHz, 48 GB HBM2), 240 GB local disk

  6. 1 Xeon Phi node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon Phi 7220P PCIe card with 16 GB of RAM, 120 GB local disk

  7. 1 visualisation node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 256 GB RAM and with a NVIDIA P5000 GPU, 120 GB local disk

The nodes are connected using an InfiniBand EDR network except for the \"hopper\" compute nodes that utilize FDR10 InfiniBand.

Vaughan:

  1. 104 compute nodes with 2 32-core AMD Epyc 7452 (2.35 GHz) and 256 GB RAM, 240 GB local disk

The nodes are connected using an InfiniBand HDR100 network.

All the nodes in the UAntwerpen-HPC run under the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a clone of \"RedHat Enterprise Linux\", with cgroups support.

Two tools perform the Job management and job scheduling:

  1. TORQUE: a resource manager (based on PBS);

  2. Moab: job scheduler and management tools.

For maintenance and monitoring, we use:

  1. Ganglia: monitoring software;

  2. Icinga and Nagios: alert manager.

"}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

The HPC infrastructure is not a magic computer that automatically:

  1. runs your PC-applications much faster for bigger problems;

  2. develops your applications;

  3. solves your bugs;

  4. does your thinking;

  5. ...

  6. allows you to play games even faster.

The UAntwerpen-HPC does not replace your desktop computer.

"}, {"location": "introduction/#is-the-uantwerpen-hpc-a-solution-for-my-computational-needs", "title": "Is the UAntwerpen-HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

It is also possible to run programs on the UAntwerpen-HPC that require user interaction (pushing buttons, entering input data, etc.). Although technically possible, the use of the UAntwerpen-HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the UAntwerpen-HPC staff can unveil whether the UAntwerpen-HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

"}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

"}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

The two parallel programming paradigms most used in HPC are:

Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

"}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

It is perfectly possible to also run purely sequential programs on the UAntwerpen-HPC.

Running your sequential programs on the most modern and fastest computers in the UAntwerpen-HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the UAntwerpen-HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.

"}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, CentOS Linux release 7.8.2003 (Core).

For the most common programming languages, a compiler is available on CentOS Linux release 7.8.2003 (Core). Supported and common programming languages on the UAntwerpen-HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

Supported and commonly used compilers are GCC, clang, J2EE and Intel

Commonly used software packages are:

Commonly used libraries are Intel MKL, FFTW, HDF5, PETSc and Intel MPI, OpenMPI. Additional software can be installed \"on demand\". Please contact the UAntwerpen-HPC staff to see whether the UAntwerpen-HPC can handle your specific requirements.

"}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

All nodes in the UAntwerpen-HPC cluster run under CentOS Linux release 7.8.2003 (Core), which is a specific version of RedHat Enterprise Linux. This means that all programs (executables) should be compiled for CentOS Linux release 7.8.2003 (Core).

Users can connect from any computer in the University of Antwerp network to the UAntwerpen-HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the UAntwerpen-HPC.

A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

"}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

A typical workflow looks like:

  1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

  2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

  3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

  4. Create a job script and submit your job (see Running batch jobs)

  5. Get some coffee and be patient:

    1. Your job gets into the queue

    2. Your job gets executed

    3. Your job finishes

  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

"}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

When you think that the UAntwerpen-HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the UAntwerpen-HPC cluster.

Do not hesitate to contact the UAntwerpen-HPC staff for any help.

  1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

"}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

simple_jobscript.sh
#!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submmitted\n\n[commands]\n
"}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

Here's an example of a single-core job script:

single_core.sh
#!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
  1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

  2. A module for Python 3.6 is loaded, see also section Modules.

  3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

  4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

  5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to $VSC_DATA, using a unique filename. For a list of possible storage locations, see subsection Pre-defined user directories.

"}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

Here's an example of a multi-core job script that uses mympirun:

multi_core.sh
#!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

"}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

If you want to run a job but are not sure it will finish before the walltime runs out, and you want to copy data back before that happens, you have to stop the main command before the walltime expires and then copy the data back.

This can be done with the timeout command. This command sets a limit of time a program can run for, and when this limit is exceeded, it kills the program. Here's an example job script using timeout:

timeout.sh
#!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minute,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

example_program.sh
#!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
"}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plaintext. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs make it a useful tool for data analysis, machine learning and educational purposes.

"}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

"}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is finding the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters > >_login Shell Access.

We can see all available versions of the SciPy-bundle module by using module avail SciPy-bundle:

$ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

Not all modules will work for every notebook, we need to use the one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that this module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

$ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

It is also recommended to doublecheck the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

$ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

$ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.

"}, {"location": "known_issues/", "title": "Known issues", "text": "

This page provides details on a couple of known problems, and the workarounds that are available for them.

If you have any questions related to these issues, please contact the UAntwerpen-HPC.

"}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

This error means that an internal problem has occurred in OpenMPI.

"}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

"}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

"}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

A workaround has been implemented in mympirun (version 5.4.0).

Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

module load vsc-mympirun\n

and launch your MPI application using the mympirun command.

For more information, see the mympirun documentation.

"}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
"}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

"}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

There are two important motivations to engage in parallel programming.

  1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

  2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means you have the ability in principle of splitting up your computations into groups and running each group on its own core.

There are multiple ways to achieve parallel programming. The overview below gives a (non-exhaustive) list of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

  1. Raw threads (pthreads, boost::threading, ...). Language bindings: threading libraries are available for all common programming languages. Limitations: threads are limited to shared memory systems; they are more often used on single node systems than on the UAntwerpen-HPC; thread management is hard.

  2. OpenMP. Language bindings: Fortran/C/C++. Limitations: limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelized by simple insertion of compiler directives. Under the hood, threads are used. Hybrid approaches exist which use OpenMP to parallelize the workload on each node and MPI (see below) for communication between nodes.

  3. Lightweight threads with clever scheduling (Intel TBB, Intel Cilk Plus). Language bindings: C/C++. Limitations: limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler, enabling the programmer to focus on the parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes.

  4. MPI. Language bindings: Fortran/C/C++, Python. Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication.

  5. Global Arrays library. Language bindings: C/C++, Python. Mimics a global address space on distributed memory systems by distributing arrays over many nodes and using one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

Tip

You can request more nodes/cores by adding following line to your run script.

#PBS -l nodes=2:ppn=10\n
This queues a job that claims 2 nodes and 10 cores.

Warning

Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

"}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

A multithreaded program can therefore operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

Go to the example directory:

cd ~/examples/Multi-core-jobs-Parallel-Computing\n

Note

If the example directory is not yet present, copy it to your home directory:

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

Study the example first:

T_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 0;\n}\n

Compile it (linking in the thread library), then run and test it on the login node:

$ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

Now, run it on the cluster and check the output:

$ qsub T_hello.pbs\n433253.leibniz\n$ more T_hello.pbs.o433253.leibniz\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

Tip

If you plan to engage in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

"}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.
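
To illustrate how small that change can be, here is a minimal sketch (not part of the official examples) showing the same loop in its serial form and with a single OpenMP directive added; compiled without -fopenmp the directive is simply ignored and the serial behaviour is preserved:

/* Minimal sketch: the same loop, serial and with one OpenMP directive added (illustrative only) */\n#include <stdio.h>\n\n#define N 1000000\n\nint main(void)\n{\nstatic double a[N];\nlong i;\n\n/* serial version */\nfor (i=0; i<N; ++i)\n{\na[i] = 2.0 * i;\n}\n\n/* parallel version: one added compiler directive, the loop body is unchanged */\n#pragma omp parallel for\nfor (i=0; i<N; ++i)\n{\na[i] = 2.0 * i;\n}\n\nprintf(\"a[N-1] = %f\\n\", a[N-1]);\nreturn 0;\n}\n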

Here is the general code structure of an OpenMP program:

#include <omp.h>\nint main()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\nreturn 0;\n}\n

"}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

"}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

Parallelising for loops is really simple (see code below). By default, the loop iteration counter in an OpenMP loop construct (in this case the variable i) is treated as a private variable.

omp1.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

$ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

Now run it on the cluster and check the result again.

$ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
"}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

Using OpenMP you can specify something called a \"critical\" section of code. This is code that is executed by all threads, but only by one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, without having to worry about other threads writing to that global variable at the same time (a collision).

omp2.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

Compile it (enabling OpenMP with the -fopenmp compiler flag), then run and test it on the login node:

$ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

Now run it on the cluster and check the result again.

$ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
"}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (the so-called \"map-reduce\" framework used by Google and others is a popular example). We already used this paradigm in the code example above, where the \"critical code\" directive accomplished the reduction. The map-reduce paradigm is so common that OpenMP has a specific reduction clause that allows you to implement it more easily.

omp3.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

Compile it (enabling OpenMP with the -fopenmp compiler flag), then run and test it on the login node:

$ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

Now run it on the cluster and check the result again.

$ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
"}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

There are a host of other directives you can issue using OpenMP.

Some other clauses of interest are:

  1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

  2. nowait: threads will not wait until everybody is finished

  3. schedule(type, chunk) allows you to specify how loop iterations are handed out to threads in a for loop. There are three main types of scheduling you can specify: static, dynamic and guided (see the sketch after this list)

  4. if: allows you to parallelise only if a certain condition is met

  5. ...\u00a0and a host of others
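
The sketch below (not one of the tutorial examples) shows the schedule and nowait clauses in context; the dynamic schedule and the chunk size of 10 are arbitrary choices for illustration:

/* Illustrative sketch of the schedule and nowait clauses (not a tutorial example) */\n#include <stdio.h>\n#include <omp.h>\n\nint main(void)\n{\nint i;\n\n#pragma omp parallel\n{\n/* hand out iterations in chunks of 10 to whichever thread is free */\n#pragma omp for schedule(dynamic, 10) nowait\nfor (i=0; i<100; ++i)\n{\nprintf(\"Thread %d handles iteration %d\\n\", omp_get_thread_num(), i);\n}\n\n/* because of nowait, threads do not pause here until the others finish the loop */\n\n/* an explicit barrier makes all threads wait before continuing */\n#pragma omp barrier\n}\n\nreturn 0;\n}\n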

Tip

If you plan to engage in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation. 2005.

"}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

Study the MPI program and the PBS file:

mpi_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
mpi_hello.pbs
#!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

and compile it:

$ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

mpiicc is a wrapper around the Intel C compiler icc to compile MPI programs (see the chapter on compilation for details).

Run the parallel program:

$ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc20167 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc20167 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc20167    0 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw------- 1 vsc20167  697 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw-r--r-- 1 vsc20167  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o433253.leibniz\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process knows its own rank and the total number of processes in the world, and has the ability to communicate with the others, either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without recompilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.
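
Besides point-to-point send/receive, MPI also offers collective operations that involve the whole group at once. As a complement to the example above, here is a minimal sketch (not part of the tutorial examples) that uses the collective MPI_Reduce to sum the ranks of all processes onto rank 0; it can be compiled and run in the same way as mpi_hello.c:

/* Minimal sketch of collective communication with MPI_Reduce (illustrative only) */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char *argv[])\n{\nint myid, numprocs, sum;\n\nMPI_Init(&argc, &argv);\nMPI_Comm_size(MPI_COMM_WORLD, &numprocs);\nMPI_Comm_rank(MPI_COMM_WORLD, &myid);\n\n/* every process contributes its own rank; the sum arrives on rank 0 only */\nMPI_Reduce(&myid, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);\n\nif (myid == 0)\n{\nprintf(\"%d: the sum of all %d ranks is %d\\n\", myid, numprocs, sum);\n}\n\nMPI_Finalize();\nreturn 0;\n}\n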

Tip

If you plan to engage in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

"}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

A frequently occurring characteristic of scientific computation is its focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or with (ii) different input files.

These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The user wants to run their job once for each instance of the parameter values.

One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs: such huge numbers of small jobs create a lot of overhead and can slow down the whole cluster. It is better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

The \"Worker framework\" has been developed to address this issue.

It can handle many small jobs determined by:

parameter variations

i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

job arrays

i.e., each individual job gets a unique numeric identifier.

Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

"}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

First go to the right directory:

cd ~/examples/Multi-job-submission/par_sweep\n

Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

$ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

par_sweep/weather
#!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

A job script that would run this as a job for the first parameters (p01) would then look like:

par_sweep/weather_p01.pbs
#!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

To submit the job, the user would use:

 $ qsub weather_p01.pbs\n
However, the user wants to run this program for many parameter instances, e.g., on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, a database, or just by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

$ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.
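
As an illustration, a file with exactly this structure could also be generated with a small shell loop; the sketch below reproduces the values shown above, but any method that produces a valid CSV file works equally well:

#!/bin/bash\n# illustrative sketch: generate data.csv with 100 parameter instances\necho \"temperature, pressure, volume\" > data.csv\nfor i in {1..100}; do\necho \"$((292 + i)), 1.0e5, $((108 - i))\" >> data.csv\ndone\n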

In order to make our PBS generic, the PBS file can be modified as follows:

par_sweep/weather.pbs
#!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

Note that:

  1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

  2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

  3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., 4 hours to be on the safe side.

The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

$ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n433253.leibniz\n

Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

Warning

When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

module swap env/slurm/donphan\n

instead of

module swap cluster/donphan\n
We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

"}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

First go to the right directory:

cd ~/examples/Multi-job-submission/job_array\n

As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

The following bash script would submit these jobs all one by one:

#!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output_$i -i input_$i myprog.pbs\ndone\n

This, as said before, would put a heavy burden on the job scheduler.

Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

The details are

  1. a job is submitted for each number in the range;

  2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

  3. each job has PBS_ARRAYID set to its number which allows the script/program to specialise for that job

The job could have been submitted using:

qsub -t 1-100 my_prog.pbs\n

The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

A typical job script for use with job arrays would look like this:

job_array/job_array.pbs
#!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

$ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file \\#99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory, in the files output_1.dat, output_2.dat, ..., output_100.dat.

job_array/test_set
#!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

job_array/test_set.pbs
#!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

Note that

  1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

  2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

The job is now submitted as follows:

$ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n433253.leibniz\n

The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

$ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n433253.leibniz  test_set.pbs  vsc20167          0 Q\n

And you can now check the generated output files:

$ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
"}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

Often, an embarrassingly parallel computation can be abstracted to three simple steps:

  1. a preparation phase in which the data is split up into smaller, more manageable chunks;

  2. on these chunks, the same algorithm is applied independently (these are the work items); and

  3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

cd ~/examples/Multi-job-submission/map_reduce\n

The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

First study the scripts:

map_reduce/pre.sh
#!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
map_reduce/post.sh
#!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

Then one can submit a MapReduce style job as follows:

$ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n433253.leibniz\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

"}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

The \"Worker Framework\" will be effective when

  1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

  2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

"}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log433253.leibniz, assuming the job's ID is 433253.leibniz. To keep an eye on the progress, one can use:

tail -f run.pbs.log433253.leibniz\n

Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

watch -n 60 wsummarize run.pbs.log433253.leibniz\n

This will summarise the log file every 60 seconds.

"}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

#!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 weather -t $temperature  -p $pressure  -v $volume\n

Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
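
As an illustrative sketch, using a hypothetical extra column named time_limit that is not part of the original example, the CSV file and the corresponding timedrun line could then look like:

temperature, pressure, volume, time_limit\n293, 1.0e5, 107, 00:20:00\n294, 1.0e5, 106, 00:30:00\n...\n\n# and in the job script:\ntimedrun -t $time_limit weather -t $temperature -p $pressure -v $volume\n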

Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

"}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"433253.leibniz\".

wresume -jobid 433253.leibniz\n

This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

wresume -l walltime=1:30:00 -jobid 433253.leibniz\n

Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate, either successfully or by reporting a failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

wresume -jobid 433253.leibniz -retry\n

By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

"}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

$ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
"}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

To check the available versions of worker, use the following command:

$ module avail worker\n
  1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

"}, {"location": "mympirun/", "title": "Mympirun", "text": "

mympirun is a tool that makes it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

"}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

Before using mympirun, we first need to load its module:

module load vsc-mympirun\n

As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

"}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

By default, mympirun starts one process per core on every node you were assigned. So if you were assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

"}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

This is the most commonly used option for controlling the number of processes.

The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

$ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
"}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

There's also --universe, which sets the exact amount of processes started by mympirun; --double, which uses double the amount of processes it normally would; and --multi that does the same as --double, but takes a multiplier (instead of the implied factor 2 with --double).
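
As an illustration of the semantics described above (a sketch only; the numbers assume the earlier example of 2 nodes with 16 cores each, i.e., 32 processes by default):

# exactly 8 processes in total, regardless of the number of nodes and cores\nmympirun --universe 8 ./mpi_hello\n\n# twice the default number of processes (2 x 32 = 64 in this example)\nmympirun --double ./mpi_hello\n\n# --multi takes a multiplier: here 3 x 32 = 96 processes\nmympirun --multi 3 ./mpi_hello\n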

See vsc-mympirun README for a detailed explanation of these options.

"}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

$ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
"}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC UAntwerpen-HPC infrastructure.

"}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

"}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

The best practices outlined here focus specifically on the use of OpenFOAM on the VSC UAntwerpen-HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

Other useful OpenFOAM documentation:

"}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

"}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

$ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

module load OpenFOAM/11-foss-2023a\n
"}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location to this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

source $FOAM_BASH\n
"}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
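
Putting these steps together, preparing the environment in a shell session or job script typically looks as follows (a sketch; the module version shown is just an example, check 'module avail OpenFOAM' for the versions that are actually available):

# pick and load an OpenFOAM module\nmodule load OpenFOAM/11-foss-2023a\n\n# define the additional environment variables required by OpenFOAM\nsource $FOAM_BASH\n\n# optional: utility functions used in the OpenFOAM tutorial cases\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n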

"}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

unset FOAM_SIGFPE\n

Note that this only prevents OpenFOAM from propagating floating point exceptions, which then results in terminating the simulation. However, it does not prevent that illegal operations (like a division by zero) are being executed; if NaN values appear in your results, floating point errors are occurring.

As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

"}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

After running the simulation, some post-processing steps are typically performed:

Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job running the actual simulation, either on the HPC infrastructure or elsewhere, or as a part of the job that runs the OpenFOAM simulation itself.

Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

"}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

"}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different than '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar can not be run in parallel.

"}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

See Basic usage for how to get started with mympirun.

To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).

"}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

number of processor directories = 4 is not equal to the number of processors = 16\n

In this case, the case was decomposed into 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either decompose the domain into as many subdomains as there are cores available to the job, or instruct mympirun to start exactly as many processes as there are subdomains.

See Controlling number of processes to control the number of processes mympirun will start.

Starting fewer processes than there are cores available is interesting if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has a significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
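
As an illustration, a minimal system/decomposeParDict matching the situation above (16 processor cores, scotch decomposition) could look like the sketch below; the keywords follow standard OpenFOAM usage, but check the documentation of the OpenFOAM version you are using:

// illustrative sketch of system/decomposeParDict\nFoamFile\n{\nversion     2.0;\nformat      ascii;\nclass       dictionary;\nobject      decomposeParDict;\n}\n\n// must match the number of processor cores mympirun will use\nnumberOfSubdomains 16;\n\n// scotch (or metis) limits communication overhead by minimising processor boundaries\nmethod scotch;\n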

To visualise the processor domains, use the following command:

mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

and then load the VTK files generated in the VTK folder into ParaView.

"}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).
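
As an illustration, the I/O-related entries in system/controlDict that matter most on a shared filesystem look like the excerpt below (a sketch with arbitrary example values; consult the controlDict documentation linked above for the exact meaning and defaults of each keyword):

// illustrative controlDict excerpt: entries that influence filesystem load\nwriteControl      timeStep;   // when results are written\nwriteInterval     200;        // write only every 200 time steps, not every step\npurgeWrite        5;          // keep at most 5 time directories on disk\nwriteFormat       binary;     // binary output is more compact than ascii\nwriteCompression  off;\nrunTimeModifiable false;      // avoid re-reading dictionaries every time step\n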

For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple of dozen of processor cores.

"}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

See https://cfd.direct/openfoam/user-guide/compiling-applications/.

"}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

OpenFOAM_damBreak.sh
#!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not on available victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
"}, {"location": "program_examples/", "title": "Program examples", "text": "

If you have not done so already, copy our examples to your home directory by running the following command:

 cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

Go to our examples:

cd ~/examples/Program-examples\n

Here, we have put together a number of examples for your convenience. We made an effort to put comments inside the source files, so the source code files are (should be) self-explanatory.

  1. 01_Python

  2. 02_C_C++

  3. 03_Matlab

  4. 04_MPI_C

  5. 05a_OMP_C

  6. 05b_OMP_FORTRAN

  7. 06_NWChem

  8. 07_Wien2k

  9. 08_Gaussian

  10. 09_Fortran

  11. 10_PQS

The two OMP directories above contain the following examples:

C Files Fortran Files Description omp_hello.c omp_hello.f Hello world omp_workshare1.c omp_workshare1.f Loop work-sharing omp_workshare2.c omp_workshare2.f Sections work-sharing omp_reduction.c omp_reduction.f Combined parallel loop reduction omp_orphan.c omp_orphan.f Orphaned parallel loop reduction omp_mm.c omp_mm.f Matrix multiply omp_getEnvInfo.c omp_getEnvInfo.f Get and print environment information omp_bug* omp_bug* Programs with bugs and their solution

Compile by any of the following commands:

Language Commands C: icc -openmp omp_hello.c -o hello pgcc -mp omp_hello.c -o hello gcc -fopenmp omp_hello.c -o hello Fortran: ifort -openmp omp_hello.f -o hello pgf90 -mp omp_hello.f -o hello gfortran -fopenmp omp_hello.f -o hello

Feel free to explore the examples.

"}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

Remember to substitute the usernames, login nodes, file names, ... with your own.

Login

| Description | Command |
|---|---|
| Login | ssh vsc20167@login.hpc.uantwerpen.be |
| Where am I? | hostname |
| Copy to UAntwerpen-HPC | scp foo.txt vsc20167@login.hpc.uantwerpen.be: |
| Copy from UAntwerpen-HPC | scp vsc20167@login.hpc.uantwerpen.be:foo.txt |
| Setup ftp session | sftp vsc20167@login.hpc.uantwerpen.be |

Modules

| Description | Command |
|---|---|
| List all available modules | module avail |
| List loaded modules | module list |
| Load module | module load example |
| Unload module | module unload example |
| Unload all modules | module purge |
| Help on use of module | module help |

Jobs

| Command | Description |
|---|---|
| qsub script.pbs | Submit job with job script script.pbs |
| qstat 12345 | Status of job with ID 12345 |
| showstart 12345 | Possible start time of job with ID 12345 (not available everywhere) |
| checkjob 12345 | Check job with ID 12345 (not available everywhere) |
| qstat -n 12345 | Show compute node of job with ID 12345 |
| qdel 12345 | Delete job with ID 12345 |
| qstat | Status of all your jobs |
| qstat -na | Detailed status of your jobs + a list of nodes they are running on |
| showq | Show all jobs on queue (not available everywhere) |
| qsub -I | Submit interactive job |

Disk quota

| Description | Command |
|---|---|
| Check your disk quota | mmlsquota |
| Check your disk quota nice | show_quota.py |
| Disk usage in current directory (.) | du -h |

Worker Framework

| Description | Command |
|---|---|
| Load worker module | module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/) |
| Submit parameter sweep | wsub -batch weather.pbs -data data.csv |
| Submit job array | wsub -t 1-100 -batch test_set.pbs |
| Submit job array with prolog and epilog | wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100 |
"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

"}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

"}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

$ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

Initially there will be only one RHEL 9 login node. As needed a second one will be added.

When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

"}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

This includes (per user):

For more intensive tasks you can use the interactive and debug clusters through the web portal.

"}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

However, there will be impact on the availability of software that is made available via modules.

Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

This includes all software installations on top of a compiler toolchain that is older than:

(or another toolchain with a year-based version older than 2023a)

The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will soon provide more RHEL 9 nodes on other clusters to test on.
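
For example, from a RHEL 9 login node you could switch your environment to shinx and check whether the software you rely on is already available there (a sketch; the module name and test job script are illustrative):

module swap cluster/shinx\nmodule avail Python/     # check for a recent installation of the software you need\nqsub my_test_job.pbs     # hypothetical short test job to verify your workflow\n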

"}, {"location": "rhel9/#planning", "title": "Planning", "text": "

We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

cluster migration start migration completed on skitty Monday 30 September 2024 joltik October 2024 accelgor November 2024 gallade December 2024 donphan February 2025 doduo (default cluster) February 2025 login nodes switch February 2025

Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to the RHEL 9 login nodes will be done at the same time.

We will keep this page up to date when more specific dates have been planned.

Warning

This planning is subject to change; some clusters may get migrated later than originally planned.

Please check back regularly.

"}, {"location": "rhel9/#questions", "title": "Questions", "text": "

If you have any questions related to the migration to the RHEL 9 operating system, please contact the UAntwerpen-HPC.

"}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

When you connect to the UAntwerpen-HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decide when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the UAntwerpen-HPC the entire time.

The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

"}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

Software installation and maintenance on a UAntwerpen-HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the UAntwerpen-HPC that can easily activate or deactivate the software packages you require for your program execution.

"}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

The program environment on the UAntwerpen-HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

All the software packages that are installed on the UAntwerpen-HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

"}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

In order to administer the active software and their environment variables, the module system has been developed, which:

  1. Activates or deactivates software packages and their dependencies.

  2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

  3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

  4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

  5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

This is all managed with the module command, which is explained in the next sections.

There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

"}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

A large number of software packages are installed on the UAntwerpen-HPC clusters. A list of all currently available software can be obtained by typing:

module available\n

It's also possible to execute module av or module avail; these are shorter to type and do the same thing.

This will give some output such as:

$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or, when you want to check whether some specific software, compiler or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the capital letters in the module name, we searched for the name case-insensitively using the \"-i\" option.

This gives a full list of software packages that can be loaded.

The casing of module names is important: lowercase and uppercase letters matter in module names.

"}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, an MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

E.g., foss/2024a is the first version of the foss toolchain in 2024.

The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.
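
For instance, modules built with the same toolchain version can safely be combined, while mixing toolchain versions is asking for conflicts (the module names below are only illustrative):

# OK: both modules were built with the intel-2016b toolchain\nmodule load Python/2.7.12-intel-2016b\nmodule load examplelib/1.2-intel-2016b\n# risky: also loading something built with, e.g., foss-2015a would mix toolchains\n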

"}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

To \"activate\" a software package, you load the corresponding module file using the module load command:

module load example\n

This will load the most recent version of example.

For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

However, you should specify a particular version to avoid surprises when newer versions are installed:

module load secondexample/2.7-intel-2016b\n

The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

Modules need not be loaded one by one; the two module load commands can be combined as follows:

module load example/1.2.3 secondexample/2.7-intel-2016b\n

This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

"}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

You can also just use the ml command without arguments to list loaded modules.

It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

"}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

$ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

To unload the secondexample module, you can also use ml -secondexample.

Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

"}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

module purge\n

However, on some VSC clusters you may be left with a very empty list of available modules after executing module purge. On those systems, module av will show you a list of modules containing the name of a cluster or a particular feature of a section of the cluster, and loading the appropriate module will restore the module list applicable to that particular system.
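
On such clusters, a typical sequence is to purge everything and then reload the module named after the cluster (a sketch; the exact module name differs per system, so check the output of module av first):

module purge\nmodule av                     # look for a module named after your cluster\nmodule load cluster/donphan   # hypothetical example; load the module matching your cluster\n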

"}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

module load example\n

rather than

module load example/1.2.3\n

Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

Consider the following example modules:

$ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

Let's now generate a version conflict with the example module, and see what happens.

$ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

Note: A module swap command combines the appropriate module unload and module load commands.
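
In other words, the following two approaches should have roughly the same effect (a sketch using the example module from above):

# swap in one go\nmodule swap example/4.5.6\n# ... is roughly equivalent to an explicit unload followed by a load:\nmodule unload example\nmodule load example/4.5.6\n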

"}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

With the module spider command, you can search for modules:

$ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

It's also possible to get detailed information about a specific module:

$ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \nThis module can be loaded directly: module load example/1.2.3\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
"}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

To get a list of all possible commands, type:

module help\n

Or to get more information about one specific module package:

$ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
"}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

In each module command shown below, you can replace module with ml.

First, load all modules you want to include in the collections:

module load example/1.2.3 secondexample/2.7-intel-2016b\n

Now store it in a collection using module save. In this example, the collection is named my-collection.

module save my-collection\n

Later, for example in a jobscript or a new session, you can load all these modules with module restore:

module restore my-collection\n

You can get a list of all your saved collections with the module savelist command:

$ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

To get a list of all modules a collection will load, you can use the module describe command:

$ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

To remove a collection, remove the corresponding file in $HOME/.lmod.d:

rm $HOME/.lmod.d/my-collection\n
"}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

To see how a module would change the environment, you can use the module show command:

$ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

It's also possible to use the ml show command instead: they are equivalent.

Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

"}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

To check how many jobs are running in which queues, you can use the qstat -q command:

$ qstat -q\nQueue            Memory CPU Time Walltime Node  Run Que Lm  State\n---------------- ------ -------- -------- ----  --- --- --  -----\ndefault            --      --       --      --    0   0 --   E R\nq72h               --      --    72:00:00   --    0   0 --   E R\nlong               --      --    72:00:00   --  316  77 --   E R\nshort              --      --    11:59:59   --   21   4 --   E R\nq1h                --      --    01:00:00   --    0   1 --   E R\nq24h               --      --    24:00:00   --    0   0 --   E R\n                                               ----- -----\n                                                337  82\n

Here, there are 316 jobs running on the long queue, and 77 jobs queued. We can also see that the long queue allows a maximum wall time of 72 hours.

"}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

You can also get this information in text form (per cluster separately) with the pbsmon command:

$ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

"}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

As an example, we will run a Perl script, which you will find in the examples subdirectory on the UAntwerpen-HPC. When you received an account on the UAntwerpen-HPC, a subdirectory with examples was automatically generated for you.

Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

cd\ncp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

First go to the directory with the first examples by entering the command:

cd ~/examples/Running-batch-jobs\n

Each time you want to execute a program on the UAntwerpen-HPC you'll need 2 things:

The executable The program to execute from the end-user, together with its peripheral input files, databases and/or command options.

A batch job script, which will define the computer resource requirements of the program and the required additional software packages, and which will start the actual executable. The UAntwerpen-HPC needs to know:

1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n
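
Most of these items map directly onto #PBS directives at the top of a job script; a hedged sketch (all names and values are placeholders):

#!/bin/bash\n#PBS -N my_job                # job name, also used for the default output/error file names\n#PBS -l nodes=1:ppn=4         # type of compute nodes and number of CPU cores\n#PBS -l mem=4gb               # amount of memory\n#PBS -l walltime=02:00:00     # expected duration of the execution (wall time)\n#PBS -o my_job.out            # file for stdout (optional)\n#PBS -e my_job.err            # file for stderr (optional)\ncd $PBS_O_WORKDIR\n./my_executable arg1 arg2     # the executable to start, with its arguments\n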

Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

List and check the contents with:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc20167 609 Sep 11 10:25 fibo.pl\n

In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

  1. The Perl script calculates the first 30 Fibonacci numbers.

  2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

On the command line, you would run this using:

$ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

Remark: Recall that you have now executed the Perl script locally on one of the login-nodes of the UAntwerpen-HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login-nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute-node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

fibo.pbs
#!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

$ qsub fibo.pbs\n433253.leibniz\n

The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"433253.leibniz \"); this is a unique identifier for the job and can be used to monitor and manage your job.

Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

To facilitate this, you can use a pre-defined module collection which you can restore using module restore, see the section on Save and load collections of modules for more information.
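
For example, a job script could restore a previously saved collection instead of repeating all the module load commands (a sketch; my-collection is the collection saved earlier, my_program is a placeholder for your executable):

#!/bin/bash\n#PBS -l walltime=01:00:00\ncd $PBS_O_WORKDIR\n# restore the saved module collection instead of loading modules one by one\nmodule restore my-collection\n./my_program\n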

Your job is now waiting in the queue for a free workernode to start on.

Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

After your job was started, and ended, check the contents of the directory:

$ ls -l\ntotal 768\n-rw-r--r-- 1 vsc20167 vsc20167   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc20167 vsc20167    0 Feb 28 13:33 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 vsc20167 1010 Feb 28 13:33 fibo.pbs.o433253.leibniz\n-rwxrwxr-x 1 vsc20167 vsc20167  302 Feb 28 13:32 fibo.pl\n

Explore the contents of the 2 new files:

$ more fibo.pbs.o433253.leibniz\n$ more fibo.pbs.e433253.leibniz\n

These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('433253.leibniz' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script)

"}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

It is possible to submit jobs from within a job to a cluster different from the one your job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

To submit jobs to the {{ othercluster }} cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/{{ othercluster }} instead of using module swap cluster/{{ othercluster }}. The latter command also activates the software modules that are installed specifically for {{ othercluster }}, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the {{ othercluster }} cluster. The same approach can be used to submit jobs to another cluster, of course.

Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the {{ defaultcluster }} cluster, loading the cluster/{{ defaultcluster }} module corresponds to loading 3 different env/ modules:

| env/ module for {{ defaultcluster }} | Purpose |
|---|---|
| env/slurm/{{ defaultcluster }} | Changes $SLURM_CLUSTERS, which specifies the cluster where jobs are sent to. |
| env/software/{{ defaultcluster }} | Changes $MODULEPATH, which controls what software modules are available for loading. |
| env/vsc/{{ defaultcluster }} | Changes the set of $VSC_ environment variables that are specific to the {{ defaultcluster }} cluster. |

We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand what they are doing exactly, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

We also recommend using a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
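
Putting this together, submitting a job to another cluster and then resetting your environment could look roughly like this (donphan and doduo are just the example clusters used in this section, and the job script name is a placeholder):

# only change where jobs are sent to, without switching the software stack\nmodule swap env/slurm/donphan\nqsub job_for_donphan.pbs\n# afterwards, swap back to the cluster module of the cluster you are working on\nmodule swap cluster/doduo     # example: doduo is the default cluster\n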

"}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

qstat 12345\n

To show an estimated start time for your job (note that this may be very inaccurate; the margin of error on this figure can be bigger than 100%, since it is based on a sample of one). This command is not available on all systems.

showstart 12345\n

This is only a very rough estimate. Jobs may launch sooner than estimated if other jobs end faster than estimated, but may also be delayed if other higher-priority jobs enter the system.

To show the status, but also the resources required by the job, with error messages that may prevent your job from starting:

checkjob 12345\n

To show on which compute nodes your job is running, at least, when it is running:

qstat -n 12345\n

To remove a job from the queue so that it will not run, or to stop a job that is already running.

qdel 12345\n

When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

$ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n433253.leibniz ....     mpi  vsc20167     0    Q short\n

Here:

Job ID the job's unique identifier

Name the name of the job

User the user that owns the job

Time Use the elapsed walltime for the job

Queue the queue the job is in

The state S can be any of the following:

| State | Meaning |
|---|---|
| Q | The job is queued and is waiting to start. |
| R | The job is currently running. |
| E | The job is currently exiting after having run. |
| C | The job is completed after having run. |
| H | The job has a user or system hold on it and will not be eligible to run until the hold is removed. |

User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.

"}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

As we learned above, Moab is the software application that actually decides when to run your job and what resources your job will run on.

You can look at the queue by using the PBS qstat command or the Moab showq command. By default, qstat will display the queue ordered by job ID, whereas showq will display jobs grouped by their state (\"running\", \"idle\", or \"hold\") and then ordered by priority. Therefore, showq is often more useful. Note however that at some VSC sites, these commands show only your own jobs, or may even be disabled so as not to reveal what other users are doing.

The showq command displays information about active (\"running\"), eligible (\"idle\"), blocked (\"hold\"), and/or recently completed jobs. To get a summary:

active jobs: 163\neligible jobs: 133\nblocked jobs: 243\nTotal jobs: 539\n

And to get the full details of all the jobs in the system:

active jobs------------------------\nJOBID      USERNAME    STATE    PROCS   REMAINING            STARTTIME\n428024     vsc20167    Running      8     2:57:32  Mon Sep  2 14:55:05\n\n153 active jobs   1307 of 3360 processors in use by local jobs (38.90%)\n                  153 of 168 nodes active (91.07%)\n\neligible jobs----------------------\nJOBID      USERNAME    STATE    PROCS     WCLIMIT            QUEUETIME\n442604     vsc20167    Idle        48  7:00:00:00  Sun Sep 22 16:39:13\n442605     vsc20167    Idle        48  7:00:00:00  Sun Sep 22 16:46:22\n\n135 eligible jobs\n\nblocked jobs-----------------------\nJOBID      USERNAME    STATE    PROCS     WCLIMIT            QUEUETIME\n441237     vsc20167    Idle         8  3:00:00:00  Thu Sep 19 15:53:10\n442536     vsc20167    UserHold    40  3:00:00:00  Sun Sep 22 00:14:22\n\n252 blocked jobs\n\nTotal jobs: 540\n

There are three categories: active, eligible, and blocked jobs.

Active jobs

are jobs that are running or starting and that consume computer resources. The amount of time remaining (w.r.t.\u00a0walltime, sorted to earliest completion time) and the start time are displayed. This will give you an idea about the foreseen completion time. These jobs could be in a number of states:

Started

attempting to start, performing pre-start tasks

Running

currently executing the user application

Suspended

has been suspended by scheduler or admin (still in place on the allocated resources, not executing)

Cancelling

has been cancelled, in process of cleaning up

Eligible jobs

are jobs that are waiting in the queues and are considered eligible for both scheduling and backfilling. They are all in the idle job state and do not violate any fairness policies or do not have any job holds in place. The requested walltime is displayed, and the list is ordered by job priority.

Blocked jobs

are jobs that are ineligible to be run or queued. These jobs could be in a number of states for the following reasons:

Idle

when the job violates a fairness policy

Userhold

or Systemhold, when there is a user or administrative hold on the job

Batchhold

when the requested resources are not available or the resource manager has repeatedly failed to start the job

Deferred

a temporary hold, applied when the job has been unable to start after a specified number of attempts

Notqueued

when the scheduling daemon is unavailable

"}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

"}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

qsub -l walltime=2:30:00 ...\n

For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
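
A common pattern is to run the main program under the standard timeout command with a limit somewhat shorter than the requested walltime, so that there is time left to copy results back (a sketch; the program and directory names are placeholders):

#!/bin/bash\n#PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# stop the main program after at most 2 hours, leaving ~30 minutes for copying results\ntimeout 2h ./my_simulation\n# copy the (partial) results back before the walltime limit is reached\nmkdir -p $VSC_DATA/my_simulation_results\ncp -a results $VSC_DATA/my_simulation_results/\n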

qsub -l mem=4gb ...\n

The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

qsub -l nodes=5:ppn=2 ...\n

The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

qsub -l nodes=1:westmere\n

The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

These options can either be specified on the command line, e.g.

qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

#!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

Note that the resources requested on the command line will override those specified in the PBS file.
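
For example, even though fibo.pbs contains its own #PBS resource lines, values given on the command line take precedence:

qsub -l nodes=1:ppn=4 -l walltime=4:00:00 fibo.pbs   # overrides the #PBS resource requests in fibo.pbs\n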

"}, {"location": "running_batch_jobs/#node-specific-properties", "title": "Node-specific properties", "text": "

The following table contains some node-specific properties that can be used to make sure the job will run on nodes with a specific CPU or interconnect. Note that these properties may vary over the different VSC sites.

| Property | Explanation |
|---|---|
| ivybridge | only use Intel processors from the Ivy Bridge family (26xx-v2, hopper-only) |
| broadwell | only use Intel processors from the Broadwell family (26xx-v4, leibniz-only) |
| mem128 | only use nodes with 128 GB of RAM (leibniz) |
| mem256 | only use nodes with 256 GB of RAM (hopper and leibniz) |
| tesla, gpu | only use nodes with the NVIDIA P100 GPU (leibniz) |

Since both hopper and leibniz are homogeneous with respect to processor architecture, the CPU architecture properties are not really needed and only defined for compatibility with other VSC clusters.

| Property | Explanation |
|---|---|
| shanghai | only use AMD Shanghai processors (AMD 2378) |
| magnycours | only use AMD Magnycours processors (AMD 6134) |
| interlagos | only use AMD Interlagos processors (AMD 6272) |
| barcelona | only use AMD Shanghai and Magnycours processors |
| amd | only use AMD processors |
| ivybridge | only use Intel Ivy Bridge processors (E5-2680-v2) |
| intel | only use Intel processors |
| gpgpu | only use nodes with General Purpose GPUs (GPGPUs) |
| k20x | only use nodes with NVIDIA Tesla K20x GPGPUs |
| xeonphi | only use nodes with Xeon Phi co-processors |
| phi5110p | only use nodes with Xeon Phi 5110P co-processors |

To get a list of all properties defined for all nodes, enter

::: prompt :::

This list will also contain properties referring to, e.g., network components, rack number, etc.

"}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

When you navigate to that directory and list its contents, you should see them:

$ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc20167  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc20167   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc20167   52 Sep 11 11:03 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 1307 Sep 11 11:03 fibo.pbs.o433253.leibniz\n

In our case, our job has created both an output file ('fibo.pbs.o433253.leibniz') and an error file ('fibo.pbs.e433253.leibniz'), containing info written to stdout and stderr respectively.

Inspect the generated output and error files:

$ cat fibo.pbs.o433253.leibniz\n...\n$ cat fibo.pbs.e433253.leibniz\n...\n
"}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#upon-job-failure", "title": "Upon job failure", "text": "

Whenever a job fails, an e-mail will be sent to the e-mail address that's connected to your VSC account. This is the e-mail address that is linked to the university account, which was used during the registration process.

You can force a job to fail by specifying an unrealistic wall-time for the previous example. Let's give the \"fibo.pbs\" job just one second to complete:

qsub -l walltime=00:00:01 fibo.pbs\n

Now, let's hope that the system did not manage to run the job within one second, and that you will get an e-mail informing you about this error.

PBS Job Id: \nJob Name: fibo.pbs\nExec host: \nAborted by PBS Server\nJob exceeded some resource limit (walltime, mem, etc.). Job was aborted.\nSee Administrator for help\n

"}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

You can instruct the UAntwerpen-HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

#PBS -m b \n#PBS -m e \n#PBS -m a\n

or

#PBS -m abe\n

These options can also be specified on the command line. Try it and see what happens:

qsub -m abe fibo.pbs\n

The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

qsub -m b -M john.smith@example.com fibo.pbs\n

will send an e-mail to john.smith@example.com when the job begins.

"}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

If you submit two jobs expecting that they will be run one after the other (for example because the first generates a file the second needs), there might be a problem, as they might both be run at the same time.

So the following example might go wrong:

$ qsub job1.sh\n$ qsub job2.sh\n

You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

$ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

afterok means \"After OK\", or in other words, after the first job successfully completed.

It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
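
For instance, a clean-up job that must always run after the first one, whatever its outcome, could be submitted like this (the script names are placeholders):

FIRST_ID=$(qsub job1.sh)\n# cleanup.sh runs after job1.sh has finished, regardless of success or failure\nqsub -W depend=afterany:$FIRST_ID cleanup.sh\n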

  1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

"}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line.

Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the UAntwerpen-HPC. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

The syntax for qsub for submitting an interactive PBS job is:

$ qsub -I <... pbs directives ...>\n
"}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

Tip

Find the code in \"~/examples/Running_interactive_jobs\"

First of all, in order to know on which computer you're working, enter:

$ hostname -f\nln2.leibniz.uantwerpen.vsc\n

This means that you're now working on the login node ln2.leibniz.uantwerpen.vsc of the cluster.

The most basic way to start an interactive job is the following:

$ qsub -I\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n

There are two things of note here.

  1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

  2. You'll see that your directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

In order to know on which compute-node you're working, enter again:

$ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n

Note that we are now working on the compute-node called \"r1c02cn3.leibniz.antwerpen.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

This computer name looks strange, but there is some logic to it. It tells the system administrators where to find the computer in the computer room.

The computer \"r1c02cn3\" stands for:

  1. \"r5\" is rack #5.

  2. \"c3\" is enclosure/chassis #3.

  3. \"cn08\" is compute node #08.

With this naming convention, the system administrator can easily find the physical computers when they need to execute some maintenance activities.

Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

$ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

You can exit the interactive session with:

$ exit\n

Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

You can work for 3 hours by:

qsub -I -l walltime=03:00:00\n

If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.

"}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

An X Window server is packaged by default on most Linux distributions. If you have a graphical user interface this generally means that you are using an X Window server.

The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

"}, {"location": "running_interactive_jobs/#connect-with-x-forwarding", "title": "Connect with X-forwarding", "text": "

In order to get the graphical output of your application (which is running on a compute node on the UAntwerpen-HPC) transferred to your personal screen, you will need to reconnect to the UAntwerpen-HPC with X-forwarding enabled, which is done with the \"-X\" option.

First exit and reconnect to the UAntwerpen-HPC with X-forwarding enabled:

$ exit\n$ ssh -X vsc20167@login.hpc.uantwerpen.be\n$ hostname -f\nln2.leibniz.uantwerpen.vsc\n

We first check whether our GUIs on the login node are decently forwarded to your screen on your local machine. An easy way to test it is by running a small X-application on the login node. Type:

$ xclock\n

And you should see a clock appearing on your screen.

You can close your clock and connect further to a compute node with again your X-forwarding enabled:

$ qsub -I -X\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n$ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n$ xclock\n

and you should see your clock again.

"}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

We have developed a little interactive program that shows the communication in 2 directions. It will send information to your local screen, but also asks you to click a button.

Now run the message program:

cd ~/examples/Running_interactive_jobs\n./message.py\n

You should see the following message appearing.

Click any button and see what happens.

-----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
"}, {"location": "running_interactive_jobs/#run-your-interactive-application", "title": "Run your interactive application", "text": "

In this last example, we will show you that you can work on this compute node just as if you were working locally on your desktop. We will run the Fibonacci example of the previous chapter again, but now in full interactive mode in MATLAB.

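The exact commands are not shown here; a minimal sketch of what they could look like (the MATLAB module name and version are assumptions and may differ on your cluster):

cd ~/examples/Running_interactive_jobs\nmodule avail MATLAB\nmodule load MATLAB    # pick one of the versions listed by module avail\n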

And start the MATLAB interactive environment:

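A minimal sketch, assuming the MATLAB module has been loaded; this starts the graphical MATLAB desktop, which requires the X-forwarding set up earlier:

matlab\n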

And start the fibo2.m program in the command window:

>> fibo2\n

You will see the calculations displayed in the command window, as well as the plot appearing in a separate figure window.

You can keep working in this MATLAB GUI, and finally terminate the application by entering exit (or quit) in the command window.


"}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where do your standard output and error messages go, and where can you collect your results?

"}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

First go to the directory:

cd ~/examples/Running_jobs_with_input_output_data\n

Note

If the example directory is not yet present, copy it to your home directory:

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

List and check the contents with:

$ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc20167   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc20167   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file3.py\n

Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

file1.py
#!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

The code of the Python script is self-explanatory:

  1. In step 1, we write some text to the file Hello.txt in the current directory.

  2. In step 2, we write some text to stdout.

  3. In step 3, we write to stderr.

Check the contents of the first job script:

file1a.pbs
#!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

Submit it:

qsub file1a.pbs\n

After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

$ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc20167   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc20167  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc20167  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc20167   91 Sep 13 13:13 file1a.pbs.e433253.leibniz\n-rw------- 1 vsc20167  105 Sep 13 13:13 file1a.pbs.o433253.leibniz\n-rw-rw-r-- 1 vsc20167  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc20167  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file3.py*\n

Some observations:

  1. The file Hello.txt was created in the current directory.

  2. The file file1a.pbs.o433253.leibniz contains all the text that was written to the standard output stream (\"stdout\").

  3. The file file1a.pbs.e433253.leibniz contains all the text that was written to the standard error stream (\"stderr\").

Inspect their contents ...\u00a0and remove the files

$ cat Hello.txt\n$ cat file1a.pbs.o433253.leibniz\n$ cat file1a.pbs.e433253.leibniz\n$ rm Hello.txt file1a.pbs.o433253.leibniz file1a.pbs.e433253.leibniz\n

Tip

Type cat H and press the Tab key, and it will expand into cat Hello.txt.

"}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

Check the contents of the job script and execute it.

file1b.pbs
#!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n

Inspect the contents again ...\u00a0and remove the generated files:

$ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e433253.leibniz\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o433253.leibniz\n$ rm Hello.txt my_serial_job.*\n

Here, the option \"-N\" was used to explicitly assign a name to the job. This overrides the default job name and results in a different name for the stdout and stderr files. This name is also shown in the second column of the output of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.
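
Note that the -N option can also be passed on the qsub command line instead of inside the job script; a minimal example, using the earlier job script that does not contain a #PBS -N line:

qsub -N my_serial_job file1a.pbs\n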

"}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

file1c.pbs
#!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n
"}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

The UAntwerpen-HPC cluster offers its users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

"}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

Three different pre-defined user directories are available, each created for a different purpose. The best place to store your data depends on that purpose, but also on the size of the data and how it will be used.

The following locations are available:

Long-term storage (slow filesystem, intended for smaller files):

$VSC_HOME: For your configuration files and other small files; see the section on your home directory. The default directory is user/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites.

$VSC_DATA: A bigger \"workspace\", for datasets, results, logfiles, etc.; see the section on your data directory. The default directory is data/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites.

Fast temporary storage:

$VSC_SCRATCH_NODE: For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content.

$VSC_SCRATCH: For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Antwerpen/xxx/vsc20167. This directory is cluster- or site-specific: on different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content.

$VSC_SCRATCH_SITE: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space.

$VSC_SCRATCH_GLOBAL: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space.

Since these directories are not necessarily mounted at the same locations on all sites, you should always (try to) use the environment variables that have been created for them.
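
For example, a few quick ways to use these variables from the shell; the input.dat file name is just a hypothetical placeholder:

echo $VSC_HOME $VSC_DATA $VSC_SCRATCH   # show where the variables point on this cluster\ncp $VSC_DATA/input.dat $VSC_SCRATCH/    # stage a (hypothetical) input file onto scratch\n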

We elaborate more on the specific function of these locations in the following sections.

"}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

The operating system also creates a few files and folders here to manage your account. Examples are:

.ssh/: This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!

.bash_profile: When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt.

.bashrc: This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts.

.bash_history: This file contains the commands you typed at your shell prompt, in case you need them again.

Furthermore, we have initially created some files/directories there (tutorial, docs, examples, examples.pbs) that accompany this manual and allow you to easily execute the provided examples.

"}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

"}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

You should remove any data from these systems once your processing has finished. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

Each type of scratch has its own use:

Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

At the time of writing, the cluster scratch space is shared between both clusters at the University of Antwerp. This may change again in the future when the storage gets updated.

Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.
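
To illustrate the typical scratch workflow (stage input data onto scratch, run the job there, and copy the results back to $VSC_DATA afterwards), here is a minimal job script sketch; the program and file names are hypothetical:

#!/bin/bash\n#PBS -l walltime=01:00:00\n\n# stage input data onto the fast scratch file system\nmkdir -p $VSC_SCRATCH/myjob\ncp $VSC_DATA/input.dat $VSC_SCRATCH/myjob/\n\n# run the computation on scratch (my_program is a hypothetical executable)\ncd $VSC_SCRATCH/myjob\n$PBS_O_WORKDIR/my_program input.dat > output.dat\n\n# copy the results back to long-term storage and clean up\ncp output.dat $VSC_DATA/\nrm -rf $VSC_SCRATCH/myjob\n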

"}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds both for the total size of all files and for the total number of files that can be stored. The system works with a soft quota and a hard quota: you can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

The amount of data (called \"Block Limits\") that is currently in use by the user (\"KB\"), the soft limits (\"quota\") and the hard limits (\"limit\") for all 3 file systems are always displayed when a user connects to the UAntwerpen-HPC.

With regards to the file limits, the number of files in use (\"files\"), its soft limit (\"quota\") and its hard limit (\"limit\") for the 3 file-systems are also displayed.

----------------------------------------------------------\nYour quota is:\n\n                Block Limits\nFilesystem         KB      quota      limit    grace\nhome           177920    3145728    3461120     none\ndata         17707776   26214400   28835840     none\nscratch        371520   26214400   28835840     none\n\n                File Limits\nFilesystem      files      quota      limit    grace\nhome              671      20000      25000     none\ndata           103079     100000     150000  expired\nscratch          2214     100000     150000     none\n

Make sure to regularly check these numbers at log-in!

The rules are:

  1. You will only receive a warning when you have reached the soft limit of either quota.

  2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

  3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

We do realise that quota are often perceived as a nuisance by users, especially when you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space, they help to guarantee a fair use of all available resources for all users, and they help to ensure that each folder is used for its intended purpose.

"}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

Tip

Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

  1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

  2. repeat this action 30,000 times;

  3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the UAntwerpen-HPC:

$ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
"}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

Tip

Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

In this exercise, you will

  1. Generate the file \"primes_1.txt\" again as in the previous exercise;

  2. open the file;

  3. read it line by line;

  4. calculate the average of primes in the line;

  5. count the number of primes found per line;

  6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

Check the Python and the PBS file, and submit the job:

$ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
"}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

The available disk space on the UAntwerpen-HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website (https://vscdocumentation.readthedocs.io/en/latest/hardware.html). As explained in the section on predefined quota, this implies that there are also limits to the amount of disk space and the number of files that can be made available to each individual UAntwerpen-HPC user.

The quota of disk space and number of files for each UAntwerpen-HPC user is:

HOME: 3 GB of disk space and 20000 files

DATA: 25 GB of disk space and 100000 files

SCRATCH: 25 GB of disk space and 100000 files

Tip

The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.


"}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

The \"show_quota\" command has been developed to show you the status of your quota in a readable format:

$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

or, on the UAntwerp clusters:

$ module load scripts\n$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

With this command, you can easily follow up on the consumption of your total disk quota, as it is expressed in percentages. Depending on which cluster you are running the script, it may not be able to show the quota on all your folders. E.g., when running on the tier-1 system Muk, the script will not be able to show the quota on $VSC_HOME or $VSC_DATA if your account is a KU\u00a0Leuven, UAntwerpen or VUB account.

Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

$ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

$ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

$ du -s\n5632 .\n$ du -s -h\n5.5M .\n

If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

$ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

$ du -h --max-depth 1 $VSC_HOME\n22M /user/antwerpen/201/vsc20167/dataset01\n36M /user/antwerpen/201/vsc20167/dataset02\n22M /user/antwerpen/201/vsc20167/dataset03\n3.5M /user/antwerpen/201/vsc20167/primes.txt\n24M /user/antwerpen/201/vsc20167/.cache\n

We also want to mention the tree command, as it provides an easy way to see which files consume your available quota. tree is a recursive directory-listing program that produces a depth-indented listing of files.

Try:

$ tree -s -d\n

However, we urge you to only use the du and tree commands when you really need them as they can put a heavy strain on the file system and thus slow down file operations on the cluster for all other users.

"}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.
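
You can check which groups your VSC account currently belongs to with the standard Linux groups command (run it on a login node); it simply lists the names of all groups you are a member of:

groups\n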

To change the group of a directory and its underlying directories and files, you can use:

chgrp -R groupname directory\n
"}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
  1. Get the group name you want to belong to.

  2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

  3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

"}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
  1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

  2. Fill out the group name. This cannot contain spaces.

  3. Put a description of your group in the \"Info\" field.

  4. You will now be a member and moderator of your newly created group.

"}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

"}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

You can get details about the current state of groups on the HPC infrastructure with the following command (where example is the name of the group we want to inspect):

$ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

We can see that the group ID number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

"}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

"}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

This section will explain how to create, activate, use and deactivate Python virtual environments.

"}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

A Python virtual environment can be created with the following command:

python -m venv myenv      # Create a new virtual environment named 'myenv'\n

This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

Warning

When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

"}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

source myenv/bin/activate                    # Activate the virtual environment\n
"}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

After activating the virtual environment, you can install additional Python packages with pip install:

pip install example_package1\npip install example_package2\n

These packages are scoped to the virtual environment: they do not affect the system-wide Python installation and are only available while the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

It is now possible to run Python scripts that use the installed packages in the virtual environment.

Tip

When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

To check if a package is available as a module, use:

module av package_name\n

Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

module show module_name\n

to check which extensions are included in a module (if any).
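
For example, to list the extensions included in a specific SciPy-bundle module (the version shown here is just an example and may differ on your cluster):

module show SciPy-bundle/2023.11-gfbf-2023b\n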

"}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

example.py
import example_package1\nimport example_package2\n...\n
python example.py\n
"}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

When you are done using the virtual environment, you can deactivate it. To do that, run:

deactivate\n
"}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

pytorch_poutyne.py
import torch\nimport poutyne\n\n...\n

We load a PyTorch package as a module and install Poutyne in a virtual environment:

module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

While the virtual environment is activated, we can run the script without any issues:

python pytorch_poutyne.py\n

Deactivate the virtual environment when you are done:

deactivate\n
"}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

module swap cluster/donphan\nqsub -I\n

After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

Naming a virtual environment

When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
"}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

This section will combine the concepts discussed in the previous sections to:

  1. Create a virtual environment on a specific cluster.
  2. Combine packages installed in the virtual environment with modules.
  3. Submit a job script that uses the virtual environment.

The example script that we will run is the following:

pytorch_poutyne.py
import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

First, we create a virtual environment on the donphan cluster:

module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

jobscript.pbs
#!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

Next, we submit the job script:

qsub jobscript.pbs\n

Two files will be created in the directory where the job was submitted: python_job_example.o433253.leibniz and python_job_example.e433253.leibniz, where 433253.leibniz is the id of your job. The .o file contains the output of the job.

"}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

For example, if we create a virtual environment on the skitty cluster,

module swap cluster/skitty\nqsub -I\npython -m venv myenv\n

return to the login node by pressing CTRL+D and try to use the virtual environment:

$ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

we are presented with the illegal instruction error.

"}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.

"}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

There are two main reasons why this error could occur.

  1. You have not loaded the Python module that was used to create the virtual environment.
  2. You loaded or unloaded modules while the virtual environment was activated.
"}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

The following commands illustrate this issue:

$ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
"}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

You must not load or unload modules while a virtual environment is active. Loading and unloading modules modifies the $PATH variable of the current shell. When you activate a virtual environment, it saves the $PATH variable of the shell at that moment. If you then load or unload modules while the virtual environment is active and afterwards deactivate it, $PATH is reset to the saved state, which may still refer to modules that are no longer loaded. Trying to use those modules will lead to errors:

$ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

The solution is to only modify modules when not in a virtual environment.
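
A minimal sketch of the safe ordering (the module version and script name are just examples):

module load Python/3.10.8-GCCcore-12.2.0  # load all modules you need first\nsource myenv/bin/activate                 # then activate the virtual environment\npython my_script.py                       # do your work inside the environment\ndeactivate                                # deactivate before touching modules again\nmodule purge                              # only now is it safe to unload modules\n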

"}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

This documentation only covers aspects of using Singularity on the infrastructure.

"}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to prevent the use of Singularity from negatively impacting other users on the system.

The Singularity image file must be located on one of the scratch filesystems, on the local disk of the worker node you are using, or in /dev/shm. The centrally provided singularity command will refuse to run images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

If these limitations are a problem for you, please contact the UAntwerpen-HPC.
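
For example, running a command inside an image that was copied to your scratch directory is allowed (the image name below is hypothetical), while the same command with an image stored in $VSC_HOME or $VSC_DATA would be refused:

singularity exec $VSC_SCRATCH/containers/myimage.sif cat /etc/os-release\n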

"}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

"}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

Creating new Singularity images or converting Docker images requires admin privileges by default, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can create new Singularity images or convert Docker images yourself.
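
A minimal sketch of building an image from a definition file with --fakeroot; the file names are hypothetical, and the resulting image should be written to a scratch location:

cd $VSC_SCRATCH\nsingularity build --fakeroot myimage.sif myimage.def\n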

When you create Singularity images or convert Docker images, some restrictions apply:

"}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.
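
As an illustration, you could pull an image from Docker Hub and convert it to a local Singularity image file in one step; the image names below are just examples, and the resulting file should end up on a scratch filesystem:

cd $VSC_SCRATCH\nsingularity pull tensorflow.sif docker://tensorflow/tensorflow:latest\n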

"}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

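The exact command is not shown here; a minimal sketch, assuming you simply copy the whole tutorial directory:

cp -r /apps/gent/tutorials/Singularity $VSC_SCRATCH/\ncd $VSC_SCRATCH/Singularity\n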

Create a job script; a sketch of what it could look like is shown below, after the example myscript.sh.

Create an example myscript.sh:

#!/bin/bash\n\n# prime factors\nfactor 1234567\n
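
The original job script is not included here; a minimal sketch of what it could look like (the image file name is hypothetical):

#!/bin/bash\n#PBS -l walltime=00:10:00\n\ncd $VSC_SCRATCH/Singularity\n\n# run our own script inside the container (the image name is an assumption)\nsingularity exec myimage.sif ./myscript.sh\n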

"}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

Copy the testing image from /apps/gent/tutorials to $VSC_SCRATCH:

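The exact command is not shown here; a minimal sketch, where the image file name and its exact location under /apps/gent/tutorials are hypothetical:

cp /apps/gent/tutorials/Singularity/tensorflow.sif $VSC_SCRATCH/\n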

You can download linear_regression.py from the official Tensorflow repository.
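
A minimal sketch of running the script inside the container (using the same hypothetical image name as above):

cd $VSC_SCRATCH\nsingularity exec tensorflow.sif python linear_regression.py\n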

"}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

It is also possible to execute MPI jobs within a container, but a number of requirements apply; most notably, the MPI implementation inside the container must be compatible with the MPI stack used on the cluster.

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH (as shown earlier).


For example, to compile an MPI example:

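The original commands are not shown here; a minimal sketch, under the assumption that the container provides an MPI compiler and that the source file is called mpi_hello.c:

singularity exec $VSC_SCRATCH/Singularity/myimage.sif mpicc mpi_hello.c -o mpi_hello\n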

Example MPI job script:
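
The original job script is not included here; a minimal sketch, under the assumption that the MPI implementation inside the container is compatible with the one loaded on the cluster (the module and image names are hypothetical):

#!/bin/bash\n#PBS -l nodes=2:ppn=4\n#PBS -l walltime=00:15:00\n\ncd $PBS_O_WORKDIR\nmodule load foss/2023a   # hypothetical toolchain module providing MPI\n\nmpirun singularity exec $VSC_SCRATCH/Singularity/myimage.sif ./mpi_hello\n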

"}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

In order to prepare things, make a teaching request by contacting the UAntwerpen-HPC with the following information (explained further below):

In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

Please make these requests well in advance, several weeks before the start of your course/workshop.

"}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

The title of the course or training can be used in e.g. reporting.

The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

When choosing the nickname, try to make it unique, but this is not enforced nor checked.

"}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

"}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also member of this group).

This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

Provide us with a list of all the VSC-ids for the teachers or trainers to identify the moderators.

"}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

"}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

A course group will be automatically created for your course, with all VSC accounts of registered students as member. Typical format gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderator of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

"}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

(Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as member. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

"}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

Every course directory will always contain the folders:

"}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

Optionally, we can also create these folders:

If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

"}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

There are 4 quota settings that you can choose in your teaching request in the case the defaults are not sufficient:

The course data usage is not accounted for any other quota (like VO quota). It is solely dependent on these settings.

"}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

"}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

"}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

"}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

If you would like this for your course, provide more details in your teaching request, including:

We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

"}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore, which is why the UAntwerpen-HPC no longer uses Torque in the backend since 2021, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers did not have to learn other commands to submit and manage jobs.

"}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

"}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

Jobcli is a Python library that was developed by UAntwerpen-HPC to make it possible for the UAntwerpen-HPC to use a Torque frontend and a Slurm backend. In addition to that, it adds some additional options for Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

"}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

Adding --help to a Torque command when using it on the UAntwerpen-HPC will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

For example:

$ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

"}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

Adding --dryrun to a Torque command when using it on the UAntwerpen-HPC will show the user what Slurm commands are generated by that Torque command by jobcli. Using --dryrun will not actually execute the Slurm backend command.

See also the examples below.

"}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

Similarly to --dryrun, adding --debug to a Torque command when using it on the UAntwerpen-HPC will show the user what Slurm commands are generated by that Torque command by jobcli. However in contrast to --dryrun, using --debug will actually run the Slurm backend command.

See also the examples below.

"}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

The following examples illustrate how the --dryrun and --debug options work, using an example job script.

example.sh:

#!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
"}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

Running the following command:

$ qsub --dryrun example.sh -N example\n

will generate this output:

Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc20167/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#!/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque options to Slurm options. For example, the job name is the one we specified with the -N option in the command.

With this dry run, you can see that only the header was changed; the job script itself is not modified at all. Any PBS-related constructs in the job script, like $PBS_JOBID, are retained. Slurm is configured on the UAntwerpen-HPC such that the common PBS_* environment variables are defined in the job environment, next to their Slurm equivalents.

"}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

Similarly to the --dryrun example, we start by running the following command:

$ qsub --debug example.sh -N example\n

which generates this output:

DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
The output once again consists of the generated Slurm job script header, some additional debug information, and the job ID of the job that was submitted.
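
If you want to inspect the job that was submitted this way, you can use the usual Torque commands; a quick sketch using the job ID shown in the output above:

$ qstat 64842138\n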

"}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

Below is a list of the most common and useful directives.

Option System type Description -k All Send \"stdout\" and/or \"stderr\" to your home directory when the job runs #PBS -k o or #PBS -k e or #PBS -koe -l All Precedes a resource request, e.g., processors, wallclock -M All Send e-mail messages to an alternative e-mail address #PBS -M me@mymail.be -m All Send an e-mail when a job begins execution and/or ends or aborts #PBS -m b or #PBS -m be or #PBS -m ba mem Shared Memory Specifies the amount of memory you need for a job. #PBS -l mem=90gb mpiprocs Clusters Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. #PBS -l mpiprocs=4 -N All Give your job a unique name #PBS -N galaxies1234 -ncpus Shared Memory The number of processors to use for a shared memory job. #PBS -l ncpus=4 -r All Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. #PBS -r n or #PBS -r y select Clusters Number of compute nodes to use. Usually combined with the mpiprocs directive #PBS -l select=2 -V All Make sure that the environment in which the job runs is the same as the environment in which it was submitted #PBS -V Walltime All The maximum time a job can run before being stopped. If not used, a default of a few minutes is used. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

TORQUE-related environment variables in batch job scripts.

# Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.
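
For example, a minimal job script sketch that uses a couple of these variables (the program name ./myprog is a hypothetical placeholder); the table below describes the variables themselves:

#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\n# go to the directory from which the job was submitted\ncd $PBS_O_WORKDIR\n\n# use the job ID to give the output file a unique name\n./myprog > myprog.out.${PBS_JOBID}\n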

Variable Description PBS_ENVIRONMENT set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. PBS_JOBID the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat. PBS_JOBNAME the job name supplied by the user PBS_NODEFILE the name of the file that contains the list of the nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc. PBS_QUEUE the name of the queue from which the job is executed PBS_O_HOME value of the HOME variable in the environment in which qsub was executed PBS_O_LANG value of the LANG variable in the environment in which qsub was executed PBS_O_LOGNAME value of the LOGNAME variable in the environment in which qsub was executed PBS_O_PATH value of the PATH variable in the environment in which qsub was executed PBS_O_MAIL value of the MAIL variable in the environment in which qsub was executed PBS_O_SHELL value of the SHELL variable in the environment in which qsub was executed PBS_O_TZ value of the TZ variable in the environment in which qsub was executed PBS_O_HOST the name of the host upon which the qsub command is running PBS_O_QUEUE the name of the original queue to which the job was submitted PBS_O_WORKDIR the absolute path of the current working directory of the qsub command. This is the most useful one; use it in every job script. The first thing to do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts the job in your $HOME directory. PBS_VERSION Version Number of TORQUE, e.g., TORQUE-2.5.1 PBS_MOMPORT active port for mom daemon PBS_TASKNUM number of tasks requested PBS_JOBCOOKIE job cookie PBS_SERVER Server Running TORQUE"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently. More information on this can be found in the subsections below.

"}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.
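
For example, for software that uses OpenMP threading, a job script typically requests a number of cores on a single node and sets the number of threads accordingly. A minimal sketch (the program name ./my_openmp_program is a hypothetical placeholder):

#!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=1:00:00\n\ncd $PBS_O_WORKDIR\n\n# use as many OpenMP threads as cores requested\nexport OMP_NUM_THREADS=8\n./my_openmp_program\n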

Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.

Other reasons why using more cores may not lead to a (significant) speedup include:

More info on running multi-core workloads on the UAntwerpen-HPC can be found here.

"}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

When trying to use multiple (worker) nodes to improve the performance of your workloads, you may not see a (significant) speedup.

Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist some libraries that do this for you.

Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

An example of how you can make beneficial use of multiple nodes can be found here.

You can also use MPI in Python; some useful packages that are also available on the HPC are:

We advise you to maximize core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on one node before you expand to more nodes. In addition, when running MPI software, we strongly advise using our mympirun tool.
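
As a minimal sketch of a multi-node MPI job submitted with mympirun (the module name vsc-mympirun and the program name ./my_mpi_program are assumptions; check module avail for the exact names on your cluster):

#!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=2:00:00\n\ncd $PBS_O_WORKDIR\n\nmodule load vsc-mympirun\n\n# mympirun determines the number of MPI processes from the requested resources\nmympirun ./my_mpi_program\n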

"}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

If you cannot find any information along those lines, the software you are using can probably only use a single core, and thus requesting multiple cores and/or nodes will only result in wasted resources.

"}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

If your job output contains an error message similar to this:

=>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.

"}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

Sometimes a job hangs at some point or it stops writing to the disk. These errors are usually related to quota usage: you may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to the disk again, and then resubmit the jobs.
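
To find out which directories take up the most space, you can for instance use du; a quick sketch, assuming you want to inspect your $VSC_DATA directory:

$ du -sh $VSC_DATA/* | sort -h\n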

"}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

If you have errors that look like:

vsc20167@login.hpc.uantwerpen.be: Permission denied\n

or you are experiencing problems with connecting, here is a list of things to do that should help:

  1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

  2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

  3. Your SSH private key may not be in the default location ($HOME/.ssh/id_rsa). There are several ways to deal with this (using one of these is sufficient):

    1. Use the ssh -i option (see section Connect) OR;
    2. Use ssh-add (see section Using an SSH agent) OR;
    3. Specify the location of the key in $HOME/.ssh/config. You will need to replace the VSC login id in the User field with your own:
      Host Leibniz\n    Hostname login.hpc.uantwerpen.be\n    IdentityFile /path/to/private/key\n    User vsc20167\n
      Now you can connect with ssh Leibniz.
  4. Please double/triple check your VSC login ID. It should look something like vsc20167: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

  5. You previously connected to the UAntwerpen-HPC from another machine, but are now using a different machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

  6. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh. id_rsa.pub is the usual filename of the public key; id_rsa is the usual filename of the private key. (See also section\u00a0Connect)

  7. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

  8. Please do not use someone else's private keys. You must never share your private key; they're called private for a good reason.

If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@uantwerpen.be and include the following information:

Please add -vvv as a flag to ssh like:

ssh -vvv vsc20167@login.hpc.uantwerpen.be\n

and include the output of that command in the message.

"}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \n@     WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!    @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! \nSomeone could be\neavesdropping on you right now (man-in-the-middle attack)! \nIt is also possible that a host key has just been changed. \nThe fingerprint for the ECDSA key sent by the remote host is\nSHA256:1MNKFTfl1T9sm6tTWAo4sn7zyEfiWFLKbk/mlT+7S5s. \nPlease contact your system administrator. \nAdd correct host key in \u00a0~/.ssh/known_hosts to get rid of this message. \nOffending ECDSA key in \u00a0~/.ssh/known_hosts:21\nECDSA host key for login.hpc.uantwerpen.be has changed and you have requested strict checking.\nHost key verification failed.\n

You will need to remove the line it's complaining about (in the example, line 21). To do that, open ~/.ssh/known_hosts in an editor, and remove the line. This results in ssh \"forgetting\" the system you are connecting to.

Alternatively, you can use the command that may be shown by the warning under \"remove with:\"; it should look something like this:

ssh-keygen -f \"~/.ssh/known_hosts\" -R \"login.hpc.uantwerpen.be\"\n

If the command is not shown, take the file name from the \"Offending ECDSA key in\" line and the host name from the \"ECDSA host key for\" line.

After you've done that, you'll need to connect to the UAntwerpen-HPC again. See Warning message when first connecting to new host to verify the fingerprints.

"}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

If you get errors like:

$ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

or

sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

It's probably because you transferred the files from a Windows computer. See the section about dos2unix in the Linux tutorial to fix this error.
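
A quick way to fix the line endings is to convert the job script; a sketch, assuming the dos2unix command is available on the login node:

$ dos2unix fibo.pbs\n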

"}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "
$ ssh vsc20167@login.hpc.uantwerpen.be\nThe authenticity of host login.hpc.uantwerpen.be (<IP-adress>) can't be established. \n<algorithm> key fingerprint is <hash>\nAre you sure you want to continue connecting (yes/no)?\n

Now you can check the authenticity by verifying that the key fingerprint line that is shown (the <algorithm> key fingerprint is <hash> part) matches one of the following lines:

{{ opensshFirstConnect }}\n

If it does, type yes. If it doesn't, please contact support: hpc@uantwerpen.be.

"}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

Note

Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

"}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.

"}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

See Generic resource requirements to set memory and other requirements; see Specifying memory requirements to fine-tune the amount of memory you request.
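
As a minimal sketch, based on the mem resource shown in the TORQUE options overview (adjust the value to what your job actually needs):

#PBS -l nodes=1:ppn=1\n#PBS -l mem=16gb\n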

"}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

All the UAntwerpen-HPC clusters run some variant of the \"RedHat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

vsc20167@ln01[203] $\n

When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen joe Text editor

Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

$ echo This is a test\nThis is a test\n

Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the command \"ls\", by trying either of the following:

$ ls --help \n$ man ls\n$ info ls\n

(You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

"}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the command in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

Another very common scripting language is shell scripting, which is what we will focus on here.

Typically, the scripts in the following examples have one command per line, although it is possible to put multiple commands on one line. A very simple example of a script may be:

echo \"Hello! This is my hostname:\" \nhostname\n

You can type both lines at your shell prompt, and the result will be the following:

$ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

Suppose we want to call this script \"foo\". You open a new file for editing, and name it \"foo\", and edit it with your favourite editor

$ vi foo\n

or use the following commands:

echo \"echo 'Hello! This is my hostname:'\" > foo\necho hostname >> foo\n

The easiest way to run a script is to start the interpreter and pass the script to it as a parameter. In the case of our script, the interpreter may either be \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

$ bash foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

Congratulations, you just created and started your first shell script!

A more advanced way of executing your shell scripts is by making them executable on their own, so without invoking the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to tell it in some way. The easiest way is by using the so-called \"shebang\" notation, explicitly created for this purpose: you put the following line on top of your shell script: \"#!/path/to/your/interpreter\".

You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

$ which bash\n/bin/bash\n

We edit our script and change it with this information:

#!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

chmod +x foo\n

Now you can start your script by simply executing it:

$ ./foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

The same technique can be used for all other scripting languages, like Perl and Python.

Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

"}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg Brings a background job to the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Change the access permissions (mode) of files and directories"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

Through this web portal, you can:

More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

"}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

"}, {"location": "web_portal/#login", "title": "Login", "text": "

When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

"}, {"location": "web_portal/#first-login", "title": "First login", "text": "

The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

Please click \"Authorize\" here.

This request will only be made once; you should not see it again afterwards.

"}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

Once logged in, you should see this start page:

This page includes a menu bar at the top, with buttons on the left providing access to the different features supported by the web portal, as well as a Help menu, your VSC account name, and a Log Out button on the top right. Below it is the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

"}, {"location": "web_portal/#features", "title": "Features", "text": "

We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

"}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

Here you can:

For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

"}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

"}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

A new browser tab will be opened that shows all your current queued and/or running jobs:

You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

Jobs that are still queued or running can be deleted using the red button on the right.

Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

"}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

Don't forget to actually submit your job to the system via the green Submit button!

"}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

"}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

To exit the shell session, type exit followed by Enter and then close the browser tab.

Note that you cannot access a shell session after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

"}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

To create a graphical desktop environment, use one of the desktop on ... node buttons under the Interactive Apps menu item. For example:

You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode, the regular queueing times apply, depending on the requested resources.

Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

"}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

See dedicated page on Jupyter notebooks

"}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

In case of problems with the web portal, it could help to restart the web server running in your VSC account.

You can do this via the Restart Web Server button under the Help menu item:

Of course, this only affects your own web portal session (not those of others).

"}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": ""}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

  1. A graphical remote desktop that works well over low bandwidth connections.

  2. Copy/paste support from client to server and vice-versa.

  3. File sharing from client to server.

  4. Support for sound.

  5. Printer sharing from client to server.

  6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

"}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. That section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

"}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

After installing the X2Go client, just start it. When you launch the client for the first time, it will start the new session dialogue automatically.

There are two ways to connect to the login node:

"}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

This is the easiest way to set up X2Go: a direct connection to the login node.

  1. Include a session name. This will help you to identify the session if you have more than one, you can choose any name (in our example \"HPC login node\").

  2. Set the login hostname (in our case: \"login.hpc.uantwerpen.be\").

  3. Set the Login name. In the example it is \"vsc20167\", but you must change it to your own VSC account.

  4. Set the SSH port (22 by default).

  5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key into the \"Use RSA/DSA key..\" field. In this case:

    1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

  6. Check \"Try autologin\" option.

  7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

    1. [optional]: Set a single application like Terminal instead of XFCE desktop.

  8. [optional]: Change the session icon.

  9. Click the OK button after these changes.

"}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

This option is useful if you want to resume a previous session or if you want to explicitly set the login node to use. In this case you should include a few more options. Use the same setup as Option A, but with these changes:

  1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

  2. Set the login hostname. This is the login node that you ultimately want to connect to (in our case: \"ln2.leibniz.uantwerpen.vsc\").

  3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

    1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

    2. Set Host to \"login.hpc.uantwerpen.be\" within \"Proxy Server\" section as well.

    3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key in the \"RSA/DSA key\" field within \"Proxy Server\", as you did for the server configuration (the \"RSA/DSA key\" field must be set in both sections).

    4. Click the OK button after these changes.

"}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

Just click on any session that you already have to start/resume it. It will take a few seconds to open the session the first time. A session is terminated if you log out from the currently open session or if you click on the \"shutdown\" button in X2Go. If you want to suspend your session so you can continue working with it later, just click on the \"pause\" icon.

X2Go will keep the session open for you (but only if the login node is not rebooted).

"}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session: just open a terminal and execute:

hostname\n

This will give you the full login node name (like \"ln2.leibniz.uantwerpen.vsc\", but the hostname in your situation may be slightly different). You should set the same name to resume the session the next time. Just add this full hostname into the \"login hostname\" field in your X2Go session (see Option B: use the login node as SSH proxy).

"}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), It is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select the session and terminate it. Then finish the session, choose again XFCE session (or whatever you use), then you should have your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

"}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on XDMoD use: https://shieldon.ugent.be/xdmod/user_manual/index.php.

"}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

Loads MNIST datasets and trains a neural network to recognize hand-written digits.

Runtime: ~1 min. on 8 cores (Intel Skylake)

See https://www.tensorflow.org/tutorials/quickstart/beginner

"}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

These skills are important for working with the UAntwerpen-HPC, which operates on RedHat Enterprise Linux. For more information see introduction to HPC.

The guide aims to make you familiar with the Linux command line environment quickly.

The tutorial goes through the following steps:

  1. Getting Started
  2. Navigating
  3. Manipulating files and directories
  4. Uploading files
  5. Beyond the basics

Do not forget Common pitfalls, as this can save you some troubleshooting.

"}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

"}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

First, it's important to make a distinction between two different output channels:

  1. stdout: standard output channel, for regular output

  2. stderr: standard error channel, for errors and warnings

"}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

> writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

$ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

>> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

$ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

"}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

< reads a command's standard input from a file (instead of piped or typed input). So you would use this to simulate typing the contents of a file into a terminal. command < somefile.txt is largely equivalent to cat somefile.txt | command.

One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in that file list when you are done:

$ find . -name '*.txt' > files\n$ xargs grep banana < files\n

"}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

To redirect the stderr output (warnings, messages), you can use 2>, just like >

$ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

"}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

$ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

"}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

$ ls | wc -l\n    42\n

A common pattern is to pipe the output of a command to less so you can examine or search the output:

$ find . | less\n

Or to look through your command history:

$ history | less\n

You can put multiple pipes in the same line. For example, which cp commands have we run?

$ history | grep cp | less\n

"}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

The shell will expand certain things, including:

  1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

  2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

  3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

  4. square brackets can be used to list a number of options for a particular characters; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.

"}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

$ ps -fu $USER\n

To see all the processes:

$ ps -elf\n

To see all the processes in a forest view, use:

$ ps auxf\n

The last two will spit out a lot of data, so get in the habit of piping it to less.

pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.

"}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill will send a signal (SIGTERM by default) to the process to ask it to stop.

$ kill 1234\n$ kill $(pgrep misbehaving_process)\n

Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignores your signal, you can send it a different signal (SIGKILL) which the OS will use to unceremoniously terminate the process:

$ kill -9 1234\n

"}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

To see only your processes, type u and your username after starting top (you can also do this with top -u $USER). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

To exit top, use q (for 'quit').

For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

"}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

$ ulimit -a\n

"}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

$ wc example.txt\n      90     468     3189   example.txt\n

The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

To only count the number of lines, use wc -l:

$ wc -l example.txt\n      90    example.txt\n

"}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

$ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

"}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

cut is used to pull fields out of files or piped streams. It's a useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV (comma-separated values, so -d ',': delimited by ,) file, you can use the following:

$ cut -f 1 -d ',' mydata.csv\n
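
For example, combining grep and cut as described above to pull the second field of the lines that contain banana (the file name fruit.csv is a hypothetical placeholder):

$ grep banana fruit.csv | cut -f 2 -d ','\n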

"}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

$ sed 's/oldtext/newtext/g' myfile.txt\n

By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
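
If you do use -i, GNU sed also lets you keep a backup copy of the original file by adding a suffix; a small sketch:

$ sed -i.bak 's/oldtext/newtext/g' myfile.txt   # original saved as myfile.txt.bak\n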

"}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

$ awk '{print $4}' mydata.dat\n

You can use -F ':' to change the delimiter (F for field separator).

The next example is used to sum numbers from a field:

$ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

"}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do it for you. A script is nothing special, it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

However, there are some rules you need to abide by.

Here is a very detailed guide should you need more information.

"}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can copy-paste this line as you need not worry about it further. It is however very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

#!/bin/sh\n
#!/bin/bash\n
#!/usr/bin/env bash\n
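
Putting this together, a minimal sketch of a complete script, saved under a hypothetical name such as myscript.sh:

#!/bin/bash\n# print a greeting and the name of the machine we are running on\necho \"Hello! This is my hostname:\"\nhostname\n

You can then make it executable and run it:

$ chmod +x myscript.sh\n$ ./myscript.sh\n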

"}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n
Or you only want to do something if a file exists:
if [ -f filename ]\nthen\necho \"it exists\"\nfi\n
Or only if a certain variable is bigger than one:
if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or preceded by a semicolon). It is best to just copy this example and modify it.

In the initial example, we used -d to test if a directory existed. There are several more checks.
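
A few commonly used test operators, shown as a sketch with comments (these are standard test/[ operators):

[ -f path ]        # true if path exists and is a regular file\n[ -d path ]        # true if path exists and is a directory\n[ -r path ]        # true if path is readable\n[ -x path ]        # true if path is executable\n[ \"$A\" = \"$B\" ]    # true if the two strings are equal\n[ \"$X\" -gt 1 ]     # true if the number in $X is greater than 1\n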

Another useful example is to test whether a variable contains a value (i.e., it's not empty):

if [ -z \"$PBS_ARRAYID\" ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

"}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

Let's look at a simple example:

for i in 1 2 3\ndo\necho $i\ndone\n
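
Loops are also handy for processing a set of files; for example, a sketch that prints the line count of every .txt file in the current directory (assuming such files exist):

for f in *.txt\ndo\necho \"Processing $f\"\nwc -l \"$f\"\ndone\n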

"}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

Subcommands are used all the time in shell scripts. They store the output of a command in a variable, which can later be used in a conditional or a loop, for example.

CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

In the above example you can see the two different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is the $() syntax.
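
For example, a small sketch that stores the number of entries in the current directory in a variable and then uses it in a message:

NFILES=$(ls | wc -l)\necho \"There are $NFILES files in $(pwd)\"\n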

"}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

Sometimes things go wrong, and a command or script you ran causes an error. How do you properly deal with these situations?

Firstly a useful thing to know for debugging and testing is that you can run any command like this:

command > output.log 2>&1   # one single output file, both output and errors\n

If you add > output.log 2>&1 at the end of any command, regular output is redirected to the file output.log and error output is then sent to the same place, so both end up combined in a single file named output.log. (Note that the order matters: 2>&1 must come after the > redirection.)

If you want regular and error output separated you can use:

command > output.log 2> output.err  # errors in a separate file\n

This will write regular output to output.log and error output to output.err.

You can then look for the errors with less or search for specific text with grep.

In scripts, you can use:

set -e\n

This tells the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failed command most likely means the rest of the script would fail as well.
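
A minimal sketch of how this is typically used near the top of a script (the input file and the process_data command are hypothetical placeholders):

#!/bin/bash\nset -e\ncp input.dat $VSC_SCRATCH/   # if this copy fails, the script stops here\ncd $VSC_SCRATCH\n./process_data input.dat     # hypothetical command, only runs if the steps above succeeded\n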

"}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds its exit status. A value of zero means the command completed successfully; any other value signifies that something went wrong. An example use case:

command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

"}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

If you want certain commands to be executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

Examples include:

Some recommendations:
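
As a purely illustrative sketch of the kind of lines people often add (a hypothetical alias and environment variable; not an official recommendation):

# hypothetical additions to ~/.bashrc\nalias ll='ls -lh'\nexport MYPROJECT=$VSC_DATA/myproject\n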

"}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

"}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
"}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

The scheduler needs to know about the requirements of the script, for example: how much memory it will use, and how long it will run. These things can be specified inside a script with what we call PBS pragmas.

This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

#PBS -l nodes=1:ppn=1 # single-core\n

For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

#PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

We intend to submit it on the long queue:

#PBS -q long\n

We request a total running time of 48 hours (2 days).

#PBS -l walltime=48:00:00\n
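
The example script does not request a specific amount of memory. On PBS/Torque-style schedulers this can typically be done with a pragma like the following (a sketch; the 4gb value is only an illustration and the exact resource name may differ per site):

#PBS -l mem=4gb\n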

We specify a desired name of our job:

#PBS -N FreeSurfer_per_subject-time-longitudinal\n
This specifies mail options:
#PBS -m abe\n

  1. a means mail is sent when the job is aborted.

  2. b means mail is sent when the job begins.

  3. e means mail is sent when the job ends.

Joins error output with regular output:

#PBS -j oe\n

All of these options can also be specified on the command line, and will override any pragmas present in the script.

"}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
  1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

  2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

  3. How many files and directories are in /tmp?

  4. What's the name of the 5th file/directory in alphabetical order in /tmp?

  5. List all files that start with t in /tmp.

  6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

  7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

"}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

"}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

If you receive an error message which contains something like the following:

No such file or directory\n

It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

Try and figure out the correct location using ls, cd and using the different $VSC_* variables.

"}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

$ cat some file\nNo such file or directory 'some'\n

Spaces are technically permitted, but they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

$ cat some\\ file\n...\n$ cat \"some file\"\n...\n

This is especially error-prone if you are piping results of find:

$ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

This can be worked around using the -print0 flag:

$ find . -type f -print0 | xargs -0 cat\n...\n

But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

"}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

$ rm -r ~/$PROJETC/*\n
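
One defensive sketch uses bash parameter expansion to abort with an error when the variable is unset or empty, instead of silently expanding to nothing (PROJECT is a hypothetical, correctly spelled variable):

$ rm -r ~/\"${PROJECT:?variable is not set}\"/*\n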

"}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

$ #rm -r ~/$POROJETC/*\n
Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

"}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
$ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

$ chmod +x script_name.sh\n

"}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

If you need help about a certain command, you should consult its so-called \"man page\":

$ man command\n

This will open the manual of this command. This manual contains a detailed explanation of all the options the command has. You can exit the manual by pressing 'q'.

Don't be afraid to contact hpc@uantwerpen.be. They are here to help and will do so for even the smallest of problems!

"}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
  1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

  2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

  3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

  4. basic shell usage

  5. Bash for beginners

  6. MOOC

Please don't hesitate to contact hpc@uantwerpen.be in case of questions or problems.

"}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

"}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

To get help:

  1. use the documentation available on the system, through the help, info and man commands (use q to exit).
    help cd \ninfo ls \nman cp \n
  2. use Google

  3. contact hpc@uantwerpen.be in case of problems or questions (even for basic things!)

"}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read it carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@uantwerpen.be.

"}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

You use the shell by executing commands, and hitting <enter>. For example:

$ echo hello \nhello \n

You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

To go through previous commands, use <up> and <down>, rather than retyping them.

"}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

$ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

"}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

"}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

At the prompt we also have access to shell variables, which have both a name and a value.

They can be thought of as placeholders for things we need to remember.

For example, to print the path to your home directory, we can use the shell variable named HOME:

$ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

This prints the value of this variable.

"}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

$ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

$ env | sort | grep VSC\n

But we can also define our own. This is done with the export command (note: by convention, variable names are written in all-caps):

$ export MYVARIABLE=\"value\"\n

It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

If we then do

$ echo $MYVARIABLE\n

this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

"}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

You can change what your prompt looks like by redefining the special-purpose variable $PS1.

For example: to include the current location in your prompt:

$ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

Note that ~ is a short representation of your home directory.

To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

$ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

"}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

This may lead to surprising results, for example:

$ export WORKDIR=/tmp/test \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

To understand what's going on here, see the section on cd below.

The moral here is: be very careful to not use empty variables unintentionally.

Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

The -e option will result in the script getting stopped if any command fails.

The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)

More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

"}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

"}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

Basic information about the system you are logged into can be obtained in a variety of ways.

We limit ourselves to determining the hostname:

$ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

And querying some basic information about the Linux kernel:

$ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

"}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "

The next chapter teaches you how to navigate.

"}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the UAntwerpen-HPC for a list of available locations.

"}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

To figure out where your quota is being spent, the du (disk usage) command can come in useful:

$ du -sh test\n59M test\n
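
To see which subdirectories take up the most space, you can combine du with sort (a sketch; sort -h sorts human-readable sizes and is available in GNU coreutils):

$ du -sh ./* | sort -h\n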

Do not (frequently) run du on directories where large amounts of data are stored, since that will:

  1. take a long time

  2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

"}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

Software is provided through so-called environment modules.

The most commonly used commands are:

  1. module avail: show all available modules

  2. module avail <software name>: show available modules for a specific software name

  3. module list: show list of loaded modules

  4. module load <module name>: load a particular module

More information is available in section Modules.

"}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

Detailed information is available in section submitting your job.

"}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

Hint: python -c \"print(sum(range(1, 101)))\"

"}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

Being able to manage your data is an important part of using the HPC infrastructure. The bread-and-butter commands for doing this are covered here. They might seem annoyingly terse at first, but with practice you will realise that it's very practical for such common commands to be short to type.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

$ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
$ cp source target\n

This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

$ cp -r sourceDirectory target\n

A last more complicated example:

$ cp -a sourceDirectory target\n

Here we used the same cp command, but with the -a option, which tells cp to copy the files recursively while preserving timestamps and permissions.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
$ mkdir directory\n

which will create a directory with the given name inside the current directory.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
$ mv source target\n

mv will move the source path to the destination path. This works for both directories and files.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

$ rm filename\n
rm will remove a file or directory (rm -rf directory will remove a directory and every file inside it). WARNING: removed files are lost forever, there are no backups, so beware when using this command!

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

$ rmdir directory\n

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

  1. User - a particular user (account)

  2. Group - a particular group of users (may be user-specific group with only one member)

  3. Other - other users in the system

The permission types are:

  1. Read - For files, this gives permission to read the contents of a file

  2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

  3. Execute - For files, this gives permission to execute the file as though it were a script. For directories, it allows users to open the directory and look at the contents.

Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

Here, we see that articleTable.csv is a file (beginning the line with -) has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx). So that user can look into the directory and add or remove files. Users in the mygroup can also look into the directory and read the files. But they can't add or remove files (r-x). Finally, other users can read files in the directory, but other users have no permissions to look in the directory at all (---).

Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

$ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
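
For example, to add execute permission only to the shell scripts in a directory tree (a sketch; the *.sh pattern is just an illustration):

$ find . -type f -name \"*.sh\" -exec chmod +x {} \\;\n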

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

$ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

This will give the user otheruser permissions to write to Project_GoldenDragon

Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

See https://linux.die.net/man/1/setfacl for more information.

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

$ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

$ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

$ unzip myfile.zip\n

If we would like to make our own zip archive, we use zip:

$ zip myfiles.zip myfile1 myfile2 myfile3\n

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

$ tar -xf tarfile.tar\n

Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

$ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n
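
Creating a compressed tarball yourself works the other way around, using -c (create) together with -z (gzip); results/ here is just a hypothetical directory name:

$ tar -czf results.tar.gz results/\n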

"}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

# cp, ln: &lt;source(s)&gt; &lt;target&gt;\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: &lt;target&gt; &lt;source(s)&gt;\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

If you pass tar the source files first, the first source file will be interpreted as the archive name and overwritten. You can control the order of arguments of tar if it helps you remember:

$ tar -c source1 source2 source3 -f tarfile.tar\n
"}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
  1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

  2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

  3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

  4. Remove the another/test directory with a single command.

  5. Rename test to test2. Move test2/hostname.txt to your home directory.

  6. Change the permission of test2 so only you can access it.

  7. Create an empty job script named job.sh, and make it executable.

  8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

The next chapter is on uploading files, especially important when using HPC-infrastructure.

"}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories. A very important skill.

"}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

To print the current directory, use pwd or \\$PWD:

$ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

"}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

A very basic and commonly used command is ls, which can be used to list files and directories.

In its basic usage, it just prints the names of files and directories in the current directory. For example:

$ ls\nafile.txt some_directory \n

When provided an argument, it can be used to list the contents of a directory:

$ ls some_directory \none.txt two.txt\n

A couple of commonly used options include:

If you try to use ls on a file that doesn't exist, you will get a clear error message:

$ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
"}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

To change to a different directory, you can use the cd command:

$ cd some_directory\n

To change back to the previous directory you were in, there's a shortcut: cd -

Using cd without an argument results in returning back to your home directory:

$ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

"}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

The file command can be used to inspect what type of file you're dealing with:

$ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
"}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

An absolute filepath starts with / (or a variable whose value starts with /), which is also called the root of the filesystem.

Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

There are two special relative paths worth mentioning: . refers to the current directory, and .. refers to the parent directory (one level up).

You can also use .. when constructing relative paths, for example:

$ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
"}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

Each file and directory has particular permissions set on it, which can be queried using ls -l.

For example:

$ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

The -rw-rw-r-- specifies both the type of file (- for regular files, d for directories; see the first character), and the permissions for user/group/others:

  1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
  2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read and write permissions (but not execute)
  3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
  4. the 3rd part r-- indicates that other users only have read permissions

The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

  1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
  2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

See also the chmod command later in this manual.

"}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

find will crawl a series of directories and list files matching given criteria.

For example, to look for the file named one.txt:

$ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * by adding double quotes, to avoid Bash expanding it (for example into afile.txt) before find ever sees it:

$ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
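
For instance, a sketch that prints the line count of every .txt file that find locates:

$ find . -name \"*.txt\" -exec wc -l {} \\;\n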

"}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "

The next chapter will teach you how to interact with files and directories.

"}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

To transfer files from and to the HPC, see the section about transferring files in the HPC manual.

"}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

For example, you may see an error when submitting a job script that was edited on Windows:

sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

To fix this problem, you should run the dos2unix command on the file:

$ dos2unix filename\n
"}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

As we end up in the home directory when connecting, it would be convenient if we could access our data and scratch storage from there. To facilitate this, we will create symbolic links (they're like \"shortcuts\" on your desktop) in our home directory, pointing to the respective storage locations:

$ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
"}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ is the Control key, so ^O means Ctrl-O. The main commands are:

  1. Open (\"Read\"): ^R

  2. Save (\"Write Out\"): ^O

  3. Exit: ^X

More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

"}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

You will need to run rsync from a computer where it is installed. Installing rsync is the easiest on Linux: it comes pre-installed with a lot of distributions.

For example, to copy a folder with lots of CSV files:

$ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section above).

The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.
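
For example, a sketch that downloads a single large file (the file name is hypothetical) while showing progress and allowing the transfer to be resumed:

$ rsync -zvP vsc40000@login.hpc.ugent.be:data/large_dataset.tar.gz .\n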

To copy files to your local computer, you can also use rsync:

$ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
This will copy the folder bioset and its contents from $VSC_DATA to a local folder named local_folder.

See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

"}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
  1. Download the file /etc/hostname to your local computer.

  2. Upload a file to a subdirectory of your personal $VSC_DATA space.

  3. Create a file named hello.txt and edit it using nano.

Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or, when you want to check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the capital letters in the module name, we searched for the name case-insensitively using the \"-i\" option.

"}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": ""}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or, when you want to check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the capital letters in the module name, we searched for the name case-insensitively using the \"-i\" option.

"}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

Or, when you want to check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the UAntwerpen-HPC:

module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
"}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

(more info soon)

"}]} \ No newline at end of file diff --git a/HPC/Antwerpen/Linux/sitemap.xml.gz b/HPC/Antwerpen/Linux/sitemap.xml.gz index 10097fff545..fbc4b1503ad 100644 Binary files a/HPC/Antwerpen/Linux/sitemap.xml.gz and b/HPC/Antwerpen/Linux/sitemap.xml.gz differ diff --git a/HPC/Antwerpen/Linux/useful_linux_commands/index.html b/HPC/Antwerpen/Linux/useful_linux_commands/index.html index 3d73462afe8..d9e0be40dce 100644 --- a/HPC/Antwerpen/Linux/useful_linux_commands/index.html +++ b/HPC/Antwerpen/Linux/useful_linux_commands/index.html @@ -1284,7 +1284,7 @@

How to get started with shell scr
$ vi foo
 

or use the following commands:

-
echo "echo Hello! This is my hostname:" > foo
+
echo "echo 'Hello! This is my hostname:'" > foo
 echo hostname >> foo
 

The easiest ways to run a script is by starting the interpreter and pass @@ -1309,7 +1309,9 @@

How to get started with shell scr /bin/bash

We edit our script and change it with this information:

-
#!/bin/bash echo \"Hello! This is my hostname:\" hostname
+
#!/bin/bash
+echo "Hello! This is my hostname:"
+hostname
 

Note that the "shebang" must be the first line of your script! Now the operating system knows which program should be started to run the diff --git a/HPC/Antwerpen/Windows/search/search_index.json b/HPC/Antwerpen/Windows/search/search_index.json index 1a3cd10fb23..d6e84ad4230 100644 --- a/HPC/Antwerpen/Windows/search/search_index.json +++ b/HPC/Antwerpen/Windows/search/search_index.json @@ -1 +1 @@ -{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the UAntwerpen-HPC documentation", "text": "

Use the menu on the left to navigate, or use the search box on the top right.

You are viewing documentation intended for people using Windows.

Use the OS dropdown in the top bar to switch to a different operating system.

Quick links

  • Getting Started | Getting Access
  • FAQ | Troubleshooting | Best practices | Known issues

If you find any problems in this documentation, please report them by mail to hpc@uantwerpen.be or open a pull request.

If you still have any questions, you can contact the UAntwerpen-HPC.

"}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": ""}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.

See also: Running batch jobs.

"}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.

"}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

Modules each come with a suffix that describes the toolchain used to install them.

Examples:

  • AlphaFold/2.2.2-foss-2021a

  • tqdm/4.61.2-GCCcore-10.3.0

  • Python/3.9.5-GCCcore-10.3.0

  • matplotlib/3.4.2-foss-2021a

Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

You can use module avail [search_text] to see which versions on which toolchains are available to use.

It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

"}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

When incompatible modules are loaded, you might encounter an error like this:

{{ lmod_error }}\n

You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

See also: How do I choose the job modules?

"}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

The 72 hour walltime limit will not be extended. However, you can work around this barrier:

  • Check that all available resources are being used. See also:
    • How many cores/nodes should I request?.
    • My job is slow.
    • My job isn't using any GPUs.
  • Use a faster cluster.
  • Divide the job into more parallel processes.
  • Divide the job into shorter processes, which you can submit as separate jobs.
  • Use the built-in checkpointing of your software.
"}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

Try requesting a bit more memory than your proportional share, and see if that solves the issue.

See also: Specifying memory requirements.

"}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the amount of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

See also: Running interactive jobs.

"}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

"}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

There are a few possible causes why a job can perform worse than expected.

Is your job using all the available cores you've requested? You can test this by increasing and decreasing the core amount: If the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should i request?

Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example how to do this: The job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.

"}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.

See also: Multi core jobs/Parallel Computing and Mympirun.

"}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

For example, we have a simple script (./hello.sh):

#!/bin/bash \necho \"hello world\"\n

And we run it like mympirun ./hello.sh --output output.txt.

To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

mympirun --output output.txt ./hello.sh\n
"}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

In practice, it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires. New jobs may be submitted by other users that are assigned a higher priority than your job(s). You can use the squeue --start command to get an estimated start time for your jobs in the queue. Keep in mind that this is just an estimate.

"}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

When trying to create files, errors like this can occur:

No space left on device\n

The error \"No space left on device\" can mean two different things:

  • all available storage quota on the file system in question has been used;
  • the inode limit has been reached on that file system.

An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.

If the problem persists, feel free to contact support.

"}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

NO. You are not allowed to share your VSC account with anyone else, it is strictly personal.

See https://pintra.uantwerpen.be/bbcswebdav/xid-23610_1

"}, {"location": "FAQ/#can-i-share-my-data-with-other-uantwerpen-hpc-users", "title": "Can I share my data with other UAntwerpen-HPC users?", "text": "

Yes, you can use the chmod or setfacl commands to change the permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

$ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc20167 mygroup      40 Apr 12 15:00 dataset.txt\n

For more information about chmod or setfacl, see Linux tutorial.

"}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

"}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

Please send an e-mail to hpc@uantwerpen.be that includes:

  • What software you want to install and the required version

  • Detailed installation instructions

  • The purpose for which you want to install the software

If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
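
As a sketch of such a manual installation for a Python package (the Python module version and package name are only examples; check module avail Python to see what is actually available):

module load Python/3.10.4-GCCcore-11.3.0\npython -m venv $VSC_DATA/venvs/myproject\nsource $VSC_DATA/venvs/myproject/bin/activate\npip install mypackage\n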

"}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

MacOS & Linux (on Windows, only the second part is shown):

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

"}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

A Virtual Organisation consists of a number of members and moderators. A moderator can:

  • Manage the VO members (but can't access/remove their data on the system).

  • See how much storage each member has used, and set limits per member.

  • Request additional storage for the VO.

One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VO's (to supervise groups, for example).

See also: Virtual Organisations.

"}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

The egrep command only lets through the entries that match the regular expression [0-9]{3}M|[0-9]G, which corresponds to entries that consume at least 100 MB.

"}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

"}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

A lot of tasks can be performed without sudo, including installing software in your own account.

Installing software

  • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
  • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
"}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

Who can I contact?

  • General questions regarding HPC-UGent and VSC: hpc@ugent.be

  • HPC-UGent Tier-2: hpc@ugent.be

  • VSC Tier-1 compute: compute@vscentrum.be

  • VSC Tier-1 cloud: cloud@vscentrum.be

"}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

"}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

"}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

module load hod\n
"}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

The hod modules are constructed such that they can be used on the UAntwerpen-HPC login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

For example, this will work as expected:

$ module swap cluster/{{ othercluster }}\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

"}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

$ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

By defining these environment variables, you no longer have to specify --hod-module and --workdir yourself when using hod batch or hod create, even though these options are strictly required.

If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
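
For example, a sketch that switches both subcommands to a different parent working directory (the path is just an example):

export HOD_BATCH_WORKDIR=$VSC_SCRATCH/hod_experiments\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/hod_experiments\n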

Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

"}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

After HOD clusters terminate, their local working directory and cluster information is typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

You should occasionally clean this up using hod clean:

$ module list\nCurrently Loaded Modulefiles:\n  1) cluster/{{ defaultcluster }}(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        433253.leibniz         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/433253.leibniz for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/{{ othercluster }}\n\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.{{ othercluster }}.gent.vsc  &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.{{ othercluster }}.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

"}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

If you have any questions, or are experiencing problems using HOD, you have a couple of options:

  • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

  • Contact the UAntwerpen-HPC via hpc@uantwerpen.be

  • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

"}, {"location": "MATLAB/", "title": "MATLAB", "text": "

Note

To run a MATLAB program on the UAntwerpen-HPC you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

Compiling MATLAB programs is only possible on the interactive debug cluster, not on the UAntwerpen-HPC login nodes, where the resource limits w.r.t. memory and the maximum number of processes are too strict.

"}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

"}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

Compiling MATLAB code can only be done from the login nodes, because only the login nodes can access the MATLAB license server; cluster workernodes cannot.

To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

$ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

First, we copy the magicsquare.m example that comes with MATLAB to example.m:

cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

To compile a MATLAB program, use mcc -mv:

mcc -mv example.m\nOpening log file:  /user/antwerpen/201/vsc20167/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/antwerpen/201/vsc20167/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/antwerpen/201/vsc20167/readme.txt\".\nGenerating file \"run_example.sh\".\n
"}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

"}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

export _JAVA_OPTIONS=\"-Xmx64M\"\n

The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it tries to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

Another possible issue is that the heap size is too small. This could result in errors like:

Error: Out of memory\n

A possible solution to this is by setting the maximum heap size to be bigger:

export _JAVA_OPTIONS=\"-Xmx512M\"\n
"}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers explicitly, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

parpool.m
% specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

See also the parpool documentation.

"}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

MATLAB_LOG_DIR=<OUTPUT_DIR>\n

where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

# create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\nexport MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

You should remove the directory at the end of your job script:

rm -rf $MATLAB_LOG_DIR\n
"}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024MB\n

So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

"}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

jobscript.sh
#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
"}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

"}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

First, log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

$ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'ln2.leibniz.uantwerpen.vsc:6 (vsc20167)' desktop is ln2.leibniz.uantwerpen.vsc:6\n\nCreating default startup script /user/antwerpen/201/vsc20167.vnc/xstartup\nCreating default config /user/antwerpen/201/vsc20167.vnc/config\nStarting applications specified in /user/antwerpen/201/vsc20167.vnc/xstartup\nLog file is /user/antwerpen/201/vsc20167.vnc/ln2.leibniz.uantwerpen.vsc:6.log\n

When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

Note down the details in bold: the hostname (in the example: ln2.leibniz.uantwerpen.vsc) and the (partial) port number (in the example: 6).

It's important to remember that VNC sessions are persistent: they survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (similar to the terminal multiplexers screen or tmux). It also means you don't have to start vncserver each time you want to connect.

"}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

You can get a list of running VNC servers on a node with

$ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

This only displays the running VNC servers on the login node you run the command on.

To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

$ cd $HOME\n$ ls .vnc/*.pid\n.vnc/ln2.leibniz.uantwerpen.vsc:6.pid\n.vnc/ln1.leibniz.uantwerpen.vsc:8.pid\n

This shows that there is a VNC server running on ln2.leibniz.uantwerpen.vsc on port 5906 and another one running on ln1.leibniz.uantwerpen.vsc on port 5908 (see also Determining the source/destination port).

"}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

The VNC server runs on a login node (in the example above, on ln2.leibniz.uantwerpen.vsc).

In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

"}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

So, in our running example, both the source and destination ports are 5906.

"}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.uantwerpen.be (see Setting up the SSH tunnel(s)).

If the login node you end up on is a different one than the one where your VNC server is running (i.e., ln1.leibniz.uantwerpen.vsc rather than ln2.leibniz.uantwerpen.vsc in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to ln2.leibniz.uantwerpen.vsc, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

In practice, if you pick a random number between $10000$ and $30000$, you have a good chance that the port will not be used yet.

We will proceed with $12345$ as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than $1025$).
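
Alternatively, a one-liner sketch that directly produces a value in the suggested range (since $RANDOM yields a number between 0 and 32767):

echo $((10000 + RANDOM % 20000))\n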

"}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcuantwerpenbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.uantwerpen.be", "text": "

First, we will set up the SSH tunnel from our workstation to login.hpc.uantwerpen.be.

Use the settings specified in the sections above:

  • source port: the port on which the VNC server is running (see Determining the source/destination port);

  • destination host: localhost;

  • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

To set up this SSH tunnel with PuTTY, enter these settings in the Source port and Destination fields of the SSH tunnel configuration (under Connection > SSH > Tunnels).

With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).
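
If you are connecting from a Linux or macOS workstation rather than with PuTTY, the equivalent OpenSSH command for this first tunnel is sketched below (replace vsc20167 with your own VSC id, 5906 with your destination port, and 12345 with the intermediate port you picked):

ssh -L 5906:localhost:12345 vsc20167@login.hpc.uantwerpen.be\n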

"}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

You can check using the following command (**do not forget to replace 12345 with the value you picked for your intermediate port**):

netstat -an | grep -i listen | grep tcp | grep 12345\n

If you see no matching lines, then the port you picked is still available, and you can continue.

If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

$ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
"}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.uantwerpen.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (ln2.leibniz.uantwerpen.vsc in our running example, see Starting a VNC server).

To do this, run the following command:

$ ssh -L 12345:localhost:5906 ln2.leibniz.uantwerpen.vsc\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (ln2.leibniz.uantwerpen.vsc).

Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

**Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (ln2.leibniz.uantwerpen.vsc) in the command shown above!**

As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

"}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. You can download the latest version by opening the topmost folder whose version number does not contain beta. Then download a file that looks like TurboVNC64-2.1.2.exe (the version number can be different, but the 64 should be in the filename) and execute it.

Now start your VNC client and connect to localhost:5906. **Make sure you replace the port number 5906 with your own destination port** (see Determining the source/destination port).

When prompted for a password, use the password you used to set up the VNC server.

When prompted for default or empty panel, choose default.

If you have an empty panel, you can reset your settings with the following commands:

xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
"}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

The VNC server can be killed by running

vncserver -kill :6\n

where 6 is the port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

"}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).

"}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

All users of Antwerp University Association (AUHA) can request an account on the UAntwerpen-HPC, which is part of the Flemish Supercomputing Centre (VSC).

See HPC policies for more information on who is entitled to an account.

The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

There are two methods for connecting to UAntwerpen-HPC:

  • Using a terminal to connect via SSH.
  • Using the web portal

The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of using the HPC-UGent web portal by reading Using the HPC-UGent web portal.

The UAntwerpen-HPC clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the UAntwerpen-HPC. Access to the UAntwerpen-HPC is granted to anyone who can prove they have access to the corresponding private key on their local computer.

"}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
  • an SSH public/private key pair can be seen as a lock and a key

  • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

  • the SSH private key is like a physical key: you don't hand it out to other people.

  • anyone who has the key (and the optional password) can unlock the door and log in to the account.

  • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). A typical Windows environment does not come with pre-installed software to connect and run command-line executables on a UAntwerpen-HPC. Some tools need to be installed on your Windows machine first, before we can start the actual work.

"}, {"location": "account/#get-putty-a-free-telnetssh-client", "title": "Get PuTTY: A free telnet/SSH client", "text": "

We recommend to use the PuTTY tools package, which is freely available.

You do not need to install PuTTY: you can download the PuTTY and PuTTYgen executables and run them directly. This can be useful in situations where you do not have the required permissions to install software on the computer you are using. Alternatively, an installation package is also available.

You can download PuTTY from the official address: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html. You probably want the 64-bits version. If you can install software on your computer, you can use the \"Package files\", if not, you can download and use putty.exe and puttygen.exe in the \"Alternative binary files\" section.

The PuTTY package consists of several components, but we'll only use two:

  1. PuTTY: the Telnet and SSH client itself (to login, see Open a terminal)

  2. PuTTYgen: an RSA and DSA key generation utility (to generate a key pair, see Generate a public/private key pair)

"}, {"location": "account/#generating-a-publicprivate-key-pair", "title": "Generating a public/private key pair", "text": "

Before requesting a VSC account, you need to generate a pair of ssh keys. You need 2 keys, a public and a private key. You can visualise the public key as a lock to which only you have the key (your private key). You can send a copy of your lock to anyone without any problems, because only you can open it, as long as you keep your private key secure. To generate a public/private key pair, you can use the PuTTYgen key generator.

Start PuTTYgen.exe and follow these steps:

  1. In Parameters (at the bottom of the window), choose \"RSA\" and set the number of bits in the key to 4096.

  2. Click on Generate. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field Public key for pasting into OpenSSH authorized_keys file.

  3. Next, it is advised to fill in the Key comment field to make it easier identifiable afterwards.

  4. Next, you should specify a passphrase in the Key passphrase field and retype it in the Confirm passphrase field. Remember, the passphrase protects the private key against unauthorised use, so it is best to choose one that is not too easy to guess but that you can still remember. Using a passphrase is not required, but we recommend you to use a good passphrase unless you are certain that your computer's hard disk is encrypted with a decent password. (If you are not sure your disk is encrypted, it probably isn't.)

  5. Save both the public and private keys in a folder on your personal computer (We recommend to create and put them in the folder \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\") with the buttons Save public key and Save private key. We recommend using the name \"id_rsa.pub\" for the public key, and \"id_rsa.ppk\" for the private key.

If you use another program to generate a key pair, please remember that they need to be in the OpenSSH format to access the UAntwerpen-HPC clusters.
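
For instance, on Linux or macOS (or with OpenSSH on Windows), a sketch of generating a key pair in the correct format with ssh-keygen (you will be prompted for a passphrase):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa\n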

"}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

It is possible to set up an SSH agent in Windows. This is an optional configuration that helps you keep all your SSH keys (if you have several) stored in the same key ring, so that you do not have to type the SSH key passphrase each time. The SSH agent is also necessary to enable SSH hops with key forwarding from Windows.

Pageant is the SSH authentication agent used on Windows. This agent is available from the PuTTY installation package https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html or as a standalone binary package.

After the installation, just start the Pageant application in Windows; this will start the agent in the background. The agent icon will be visible in the Windows system tray.

At this point the agent does not contain any private key. You should include the private key(s) generated in the previous section Generating a public/private key pair.

  1. Click on Add key

  2. Select the private key file generated in Generating a public/private key pair (\"id_rsa.ppk\" by default).

  3. Enter the same SSH key password used to generate the key. After this step the new key will be included in Pageant to manage the SSH connections.

  4. You can see the SSH key(s) available in the key ring just clicking on View Keys.

  5. You can change PuTTY setup to use the SSH agent. Open PuTTY and check Connection > SSH > Auth > Allow agent forwarding.

Now you can connect to the login nodes as usual. The SSH agent will know which SSH key should be used, and you do not have to type the SSH key passphrase each time; this is handled automatically by Pageant.

It is also possible to use WinSCP with Pageant, see https://winscp.net/eng/docs/ui_pageant for more details.

"}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

Visit https://account.vscentrum.be/

You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

Select \"Universiteit Antwerpen\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

Click Confirm

You will now be taken to the authentication page of your institute.

The site is only accessible from within the University of Antwerp domain, so the page won't load from, e.g., home. However, you can also get external access to the University of Antwerp domain using VPN. We refer to the Pintra pages of the ICT Department for more information.

"}, {"location": "account/#users-of-the-antwerp-university-association-auha", "title": "Users of the Antwerp University Association (AUHA)", "text": "

All users (researchers, academic staff, etc.) from the higher education institutions associated with University of Antwerp can get a VSC account via the University of Antwerp. There is not yet an automated form to request your personal VSC account.

Please e-mail the UAntwerpen-HPC staff to get an account (see Contacts information). You will have to provide a public ssh key generated as described above. Please attach your public key (i.e., the file named id_rsa.pub), which you will normally find in the .ssh subdirectory within your home directory (e.g., /Users/<username>/.ssh/id_rsa.pub).

After you log in using your University of Antwerp login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

This file should have been stored in the directory \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\"

After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

"}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

Within one day, you should receive a Welcome e-mail with your VSC account details.

Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc20167\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

Now, you can start using the UAntwerpen-HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

"}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

  1. Create a new public/private SSH key pair from Putty. Repeat the process described in section\u00a0Generate a public/private key pair.

  2. Go to https://account.vscentrum.be/django/account/edit

  3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

  4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

  5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

"}, {"location": "account/#computation-workflow-on-the-uantwerpen-hpc", "title": "Computation Workflow on the UAntwerpen-HPC", "text": "

A typical Computation workflow will be:

  1. Connect to the UAntwerpen-HPC

  2. Transfer your files to the UAntwerpen-HPC

  3. Compile your code and test it

  4. Create a job script

  5. Submit your job

  6. Wait while

    1. your job gets into the queue

    2. your job gets executed

    3. your job finishes

  7. Move your results

We'll take you through the different tasks one by one in the following chapters.

"}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

"}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

This chapter focuses specifically on the use of AlphaFold on the UAntwerpen-HPC. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

  • AlphaFold website: https://alphafold.com/
  • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
  • AlphaFold FAQ: https://alphafold.com/faq
  • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
  • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
  • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
    • recording available on YouTube
    • slides available here (PDF)
    • see also https://www.vscentrum.be/alphafold
"}, {"location": "alphafold/#using-alphafold-on-uantwerpen-hpc", "title": "Using AlphaFold on UAntwerpen-HPC", "text": "

Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

$ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

To use AlphaFold, you should load a particular module, for example:

module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

Warning

When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

$ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

The directories located there indicate when the data was downloaded, so that this leaves room for providing updated datasets later.

As of writing this documentation the latest version is 20230310.

Info

The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

The AlphaFold installations we provide have been modified a bit to facilitate the usage on UAntwerpen-HPC.

"}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

Use newest version

Do not forget to replace 20230310 with a more up to date version if available.

"}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

AlphaFold provides a script called run_alphafold.py.

A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

For more information about the script and options see this section in the official README.

READ README

It is strongly advised to read the official README provided by DeepMind before continuing.

"}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
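
For example, to let both tools use 8 cores, you could add the following to your job script before running alphafold (a sketch; match these values to the number of cores you requested for your job):

export ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n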

Info

Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

"}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

Using --db_preset=full_dbs, the following runtime data was collected:

  • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
  • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
  • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
  • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

This highlights a couple of important attention points:

  • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
  • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
  • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

With --db_preset=casp14, it is clearly more demanding:

  • On doduo, with 24 cores (1 node): still running after 48h...
  • On joltik, 1 V100 GPU + 8 cores: 4h 48min

This highlights the difference between CPU and GPU performance even more.

"}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

The following example comes from the official Examples section in the Alphafold README. The run command is slightly different (see above: Running AlphaFold).

Do not forget to set up the environment (see above: Setting up the environment).

"}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

>sequence_name\n<SEQUENCE>\n

Then run the following command in the same directory:

alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

See AlphaFold output, for information about the outputs.

Info

For more scenarios see the example section in the official README.

"}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

The following two example job scripts can be used as a starting point for running AlphaFold.

The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

To run the job scripts you need to create a file named T1050.fasta with the following content:

>T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

"}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

Swap to the joltik GPU before submitting it:

module swap cluster/joltik\n
AlphaFold-gpu-joltik.sh
#!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
"}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

Jobscript that runs AlphaFold on CPU using 24 cores on one node.

AlphaFold-cpu-doduo.sh
#!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n

In case of problems or questions, don't hesitate to contact us at hpc@uantwerpen.be.

"}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

This documentation only covers aspects of using Apptainer on the UAntwerpen-HPC infrastructure.

"}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to avoid that the use of Apptainer impacts other users on the system.

The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running containers images provided via a URL (e.g., shub://... or docker://...) will not work.

If these limitations are a problem for you, please let us know via hpc@uantwerpen.be.

"}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

"}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the UAntwerpen-HPC infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of making an Apptainer/Singularity container image:

# prevent Apptainer from using $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use a temp dir on the local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# the specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert the Docker container to an Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move the container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
"}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

"}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

Create a job script like:

#!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

Create an example my_script.sh (the script referenced in the job script above):

#!/bin/bash\n\n# prime factors\nfactor 1234567\n
"}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a TensorFlow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH, then use a job script like the following:

cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
#!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

You can download linear_regression.py from the official Tensorflow repository.

"}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

It is also possible to execute MPI jobs within a container, but the following requirements apply:

  • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

  • Use modules within the container (install the environment-modules or lmod package in your container)

  • Load the required module(s) before apptainer execution.

  • Set the C_INCLUDE_PATH variable in your container if it is required at compile time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

For example, to compile an MPI example inside the container:

module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

Example MPI job script:

#!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
"}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
  1. Before starting, you should always check:

    • Are there any errors in the script?

    • Are the required modules loaded?

    • Is the correct executable used?

  2. Check your compute requirements upfront, and request the correct resources in your batch job script.

    • Number of requested cores

    • Amount of requested memory

    • Requested network type

  3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

  4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

  5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

  6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the submission directory with cd $PBS_O_WORKDIR is the first thing that needs to be done. You will have your default environment, so don't forget to load the software with module load (see the job-script sketch after this list).

  7. In case your job is not running, use "checkjob". It will show why your job is not yet running. Sometimes commands might time out when the scheduler is overloaded.

  8. Submit your job and wait (be patient) ...

  9. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

  10. The runtime is limited by the maximum walltime of the queues.

  11. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

  12. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

  13. And above all, do not hesitate to contact the UAntwerpen-HPC staff at hpc@uantwerpen.be. We're here to help you.
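
To illustrate several of these points together, here is a minimal job-script sketch; the module name, program name and file names are hypothetical and need to be adapted to your own case:

#!/bin/bash\n#PBS -N my-analysis\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=2:00:00\n#PBS -l mem=8gb\n\n# load the software you need (hypothetical module name)\nmodule load foss\n\n# jobs start in your home directory, so move to the submission directory first\ncd $PBS_O_WORKDIR\n\n# use the node-local scratch filesystem for I/O-intensive work\nWORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\ncp input.dat $WORKDIR/\ncd $WORKDIR\n\n# run the (hypothetical) program\n./my_program input.dat > output.log\n\n# copy the results back before the job ends\ncp output.log $PBS_O_WORKDIR/\n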

"}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

All nodes in the UAntwerpen-HPC cluster are running the "CentOS Linux release 7.8.2003 (Core)" Operating system, which is a specific version of RedHat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the UAntwerpen-HPC must first be compiled for CentOS Linux release 7.8.2003 (Core). It also means that you first have to install all the required external software packages on the UAntwerpen-HPC.

Most commonly used compilers are already pre-installed on the UAntwerpen-HPC and can be used straight away. Many popular external software packages that are regularly used in the scientific community are also pre-installed.

"}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-uantwerpen-hpc", "title": "Check the pre-installed software on the UAntwerpen-HPC", "text": "

In order to check all the available modules and their version numbers that are pre-installed on the UAntwerpen-HPC, enter:

$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or, to check whether specific software, a compiler or an application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the capital letters in the module name, we perform a case-insensitive search with the "-i" option.

When your required application is not available on the UAntwerpen-HPC, please contact any UAntwerpen-HPC member. Be aware of potential "license costs"; "open source" software is often preferred.

"}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

To port a software program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., RedHat Enterprise Linux on our UAntwerpen-HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how "portably" you wrote your code.

In the simplest case, the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way that depends upon its detailed hardware, software, and setup: with device drivers for particular devices, using the installed operating system and supporting software components, and using different directories.

In some cases, software usually described as "portable software" is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

Porting your code to the CentOS Linux release 7.8.2003 (Core) platform is the responsibility of the end-user.

"}, {"location": "compiling_your_software/#compiling-and-building-on-the-uantwerpen-hpc", "title": "Compiling and building on the UAntwerpen-HPC", "text": "

Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.
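
As a small sketch of the difference, compiling two hypothetical source files into object code and then building (linking) them into one executable could look like:

# compile each source file into machine code (object files); file names are hypothetical\ngcc -O2 -c main.c -o main.o\ngcc -O2 -c utils.c -o utils.o\n# build: link the object files together into a single executable\ngcc main.o utils.o -o myprogram\n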

All the UAntwerpen-HPC nodes run the same version of the Operating System, i.e. CentOS Linux release 7.8.2003 (Core). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

A typical process looks like:

  1. Copy your software to the login-node of the UAntwerpen-HPC

  2. Start an interactive session on a compute node;

  3. Compile it;

  4. Test it locally;

  5. Generate your job scripts;

  6. Test it on the UAntwerpen-HPC

  7. Run it (in parallel);

We assume you've copied your software to the UAntwerpen-HPC. The next step is to request your private compute node.

$ qsub -I\nqsub: waiting for job 433253.leibniz to start\n
"}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

We now list the directory and explore the contents of the \"hello.c\" program:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include \"stdio.h\"\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\n}\n

The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

We first need to compile this C-file into an executable with the gcc-compiler.

First, check the command line options for \"gcc\" (GNU C compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

$ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc20167 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc20167  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc20167  130 Sep 16 11:39 hello.pbs*\n

A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

$ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

It seems to work, now run it on the UAntwerpen-HPC

qsub hello.pbs\n

"}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

List the directory and explore the contents of the \"mpihello.c\" program:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

mpihello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nmain(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\n}\n

The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

Then, check the command line options for \"mpicc\" (GNU C compiler with MPI extensions), compile the program, and list the contents of the directory again:

mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

A new file \"hello\" has been created. Note that this program has \"execute\" rights.

Let's test this program on the \"login\" node first:

$ ./mpihello\nHello World from Node 0.\n

It seems to work, now run it on the UAntwerpen-HPC.

qsub mpihello.pbs\n
"}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

We will compile this C/MPI -file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

module purge\nmodule load intel\n

Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

mpiicc -o mpihello mpihello.c\nls -l\n

Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

$ ./mpihello\nHello World from Node 0.\n

It seems to work, now run it on the UAntwerpen-HPC.

qsub mpihello.pbs\n

Note: The Antwerp University Association (AUHA) only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Below is an overview for the C, C++ and Fortran compilers.

Language | Sequential Program (GNU) | Sequential Program (Intel) | Parallel Program with MPI (GNU) | Parallel Program with MPI (Intel)
C | gcc | icc | mpicc | mpiicc
C++ | g++ | icpc | mpicxx | mpiicpc
Fortran | gfortran | ifort | mpif90 | mpiifort
"}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

Before you can really start using the UAntwerpen-HPC clusters, there are several things you need to do or know:

  1. You need to log on to the cluster with an SSH client to one of the login nodes, or by using the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

  2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

  3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

  4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

"}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

  • Use a VPN connection to connect to the University of Antwerp network (recommended).

  • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your University of Antwerp account.

    • While this web connection is active, new SSH sessions can be started.

    • Active SSH sessions will remain active even when this web page is closed.

  • Contact your HPC support team (via hpc@uantwerpen.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

ssh_exchange_identification: read: Connection reset by peer\n
"}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

The remaining content in this chapter is primarily focused on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

If you have any issues connecting to the UAntwerpen-HPC after you've followed these steps, see Issues connecting to login node to troubleshoot. When connecting from outside Belgium, you need a VPN client to connect to the network first.

"}, {"location": "connecting/#open-a-terminal", "title": "Open a Terminal", "text": "

You've generated a public/private key pair with PuTTYgen and have an approved account on the VSC clusters. The next step is to setup the connection to (one of) the UAntwerpen-HPC.

In the screenshots, we show the setup for user \"vsc20167\" to the UAntwerpen-HPC cluster via the login node \"login.hpc.uantwerpen.be\".

  1. Start the PuTTY executable putty.exe in your directory C:\\Program Files (x86)\\PuTTY and the configuration screen will pop up. As you will often use the PuTTY tool, we recommend adding a shortcut on your desktop.

  2. Within the category <Session>, in the field <Host Name>, enter the name of the login node of the cluster (i.e., \"login.hpc.uantwerpen.be\") you want to connect to.

  3. In the category Connection > Data, in the field Auto-login username, put in <vsc20167> , which is your VSC username that you have received by e-mail after your request was approved.

  4. In the category Connection > SSH > Auth, in the field Private key file for authentication click on Browse and select the private key (i.e., \"id_rsa.ppk\") that you generated and saved above.

  5. In the category Connection > SSH > X11, click the Enable X11 Forwarding checkbox.

  6. Now go back to <Session>, and fill in \"Leibniz\" in the Saved Sessions field and press Save to store the session information.

  7. Now, pressing Open will open a terminal window and ask for your passphrase.

  8. If this is your first time connecting, you will be asked to verify the authenticity of the login node. Please see section\u00a0Warning message when first connecting to new host on how to do this.

  9. After entering your correct passphrase, you will be connected to the login-node of the UAntwerpen-HPC.

  10. To check, you can now \"Print the Working Directory\" (pwd) and check the name of the computer where you have logged in (hostname):

    $ pwd\n/user/antwerpen/201/vsc20167\n$ hostname -f\nln2.leibniz.uantwerpen.vsc\n
  11. For future PuTTY sessions, just select your saved session (i.e. \"Leibniz\") from the list, Load it and press Open.

Congratulations, you're on the UAntwerpen-HPC infrastructure now! To find out where you have landed you can print the current working directory:

$ pwd\n/user/antwerpen/201/vsc20167\n

Your new private home directory is \"/user/antwerpen/201/vsc20167\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the UAntwerpen-HPC.

$ cd /apps/antwerpen/tutorials\n$ ls\nIntro-HPC/\n

This directory currently contains all training material for the Introduction to the UAntwerpen-HPC. More relevant training material to work with the UAntwerpen-HPC can always be added later in this directory.

You can now explore the contents of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands:

As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

$ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

This directory contains:

  1. This HPC Tutorial (in either a Mac, Linux or Windows version).

  2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

cd examples\n

Tip

Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

Tip

For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

The first action is to copy the contents of the UAntwerpen-HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n
Upon connection, you will get a welcome message containing your last login timestamp and some pointers to information about the system. On Leibniz, the system will also show your disk quota.

Last login: Mon Feb  2 17:58:13 2015 from mylaptop.uantwerpen.be\n\n---------------------------------------------------------------\n\nWelcome to LEIBNIZ !\n\nUseful links:\n  https://vscdocumentation.readthedocs.io\n  https://vscdocumentation.readthedocs.io/en/latest/antwerp/tier2_hardware.html\n  https://www.uantwerpen.be/hpc\n\nQuestions or problems? Do not hesitate and contact us:\n  hpc@uantwerpen.be\n\nHappy computing!\n\n---------------------------------------------------------------\n\nYour quota is:\n\n                   Block Limits\n   Filesystem       used      quota      limit    grace\n   user             740M         3G       3.3G     none\n   data           3.153G        25G      27.5G     none\n   scratch        12.38M        25G      27.5G     none\n   small          20.09M        25G      27.5G     none\n\n                   File Limits\n   Filesystem      files      quota      limit    grace\n   user            14471      20000      25000     none\n   data             5183     100000     150000     none\n   scratch            59     100000     150000     none\n   small            1389     100000     110000     none\n\n---------------------------------------------------------------\n

You can exit the connection at anytime by entering:

$ exit\nlogout\nConnection to login.hpc.uantwerpen.be closed.\n

tip: Setting your Language right

You may encounter a warning message similar to the following one while connecting:

perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
or any other error message complaining about the locale.

This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.

"}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back.

"}, {"location": "connecting/#winscp", "title": "WinSCP", "text": "

To transfer files to and from the cluster, we recommend the use of WinSCP, a graphical file management tool which can transfer files using secure protocols such as SFTP and SCP. WinSCP is freely available from http://www.winscp.net.

To transfer your files using WinSCP,

  1. Open the program

  2. The Login menu is shown automatically (if it is closed, click New Session to open it again). Fill in the necessary fields under Session

    1. Click New Site.

    2. Enter \"login.hpc.uantwerpen.be\" in the Host name field.

    3. Enter your \"vsc-account\" in the User name field.

    4. Select SCP as the file protocol.

    5. Note that the password field remains empty.

    1. Click Advanced....

    2. Click SSH > Authentication.

    3. Select your private key in the field Private key file.

  3. Press the Save button, to save the session under Session > Sites for future access.

  4. Finally, when clicking on Login, you will be asked for your key passphrase.

The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

Make sure the fingerprint in the alert matches one of the following:

- ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- ssh-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

If it does, press Yes, if it doesn't, please contact hpc@uantwerpen.be.

Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after it should be identical.

Now, try out whether you can transfer an arbitrary file from your local machine to the HPC and back.

"}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

See the section on rsync in chapter 5 of the Linux intro manual.

"}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

For instance, if you want to switch to the login node named ln2.leibniz.uantwerpen.vsc, you can use the following command while you are connected to the ln1.leibniz.uantwerpen.vsc login node on the HPC:

ssh ln2.leibniz.uantwerpen.vsc\n
This is also possible the other way around.

If you want to find out which login host you are connected to, you can use the hostname command.

$ hostname\nln2.leibniz.uantwerpen.vsc\n$ ssh ln1.leibniz.uantwerpen.vsc\n\n$ hostname\nln1.leibniz.uantwerpen.vsc\n

Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects; a minimal usage sketch follows the list below. You can find more information on how to use these tools here (or in other online sources):

  • screen
  • tmux
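
A minimal sketch of such a workflow with tmux (the session name is arbitrary):

# start a named session on the login node\ntmux new -s work\n# ... the connection is lost or you log out ...\n# later: log in to the same login node again and re-attach\ntmux attach -t work\n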
"}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high-availability setup, users should add their cron scripts on the same login node to avoid any cron job script duplication.

In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC user account (see section Connecting).

Check whether any cron script is already set on the current login node with:

crontab -l\n

At this point, you can add or edit (with the vi editor) any cron script by running the command:

crontab -e\n
"}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
 15 5 * * * ~/runscript.sh >& ~/job.out\n

where runscript.sh has these lines in this example:

runscript.sh
#!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.
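
As a reminder, the five scheduling fields are minute, hour, day of month, month and day of week. A hypothetical entry that runs a script every Monday at 08:30 would look like:

# minute hour day-of-month month day-of-week command\n30 8 * * 1 ~/weekly_report.sh >& ~/weekly_report.out\n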

Please note that you should log in to the same login node to edit your previously created crontab tasks. If that is not the case, you can always jump from one login node to another with:

ssh gligar07    # or gligar08\n
"}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

"}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

  • applying custom patches to the software that only you or your group are using

  • evaluating new software versions prior to requesting a central software installation

  • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

"}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

Before you use EasyBuild, you need to configure it:

"}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

This is where EasyBuild can find software sources:

EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
  • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

  • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

"}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

This is the directory in which EasyBuild will build software. To have good performance, this needs to be on a fast filesystem.

export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
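
For example, a sketch of pointing the build directory to that in-memory location instead:

# use the in-memory filesystem on the compute node as the EasyBuild build directory\nexport EASYBUILD_BUILDPATH=/dev/shm/$USER\n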

"}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

This is where EasyBuild will install the software (and accompanying modules) to.

For example, to let it use $VSC_DATA/easybuild, use:

export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.
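
A sketch of the corresponding VO-shared install location:

# share the installations with the members of your VO\nexport EASYBUILD_INSTALLPATH=$VSC_DATA_VO/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n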

"}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

module load EasyBuild\n
"}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

$ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

eb example-1.2.1-foss-2024a.eb --robot\n
"}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

To try to install example v1.2.5 with a different compiler toolchain:

eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
"}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

"}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

module use $EASYBUILD_INSTALLPATH/modules/all\n

It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or load modules generated with EasyBuild. See also the section on .bashrc in the "Beyond the basics" chapter of the intro to Linux.
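
A minimal sketch of such a .bashrc snippet, collecting the settings from the examples above:

# EasyBuild configuration (added to ~/.bashrc)\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n# make the modules generated by EasyBuild available for loading\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n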

"}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

As UAntwerpen-HPC system administrators, we often observe that the UAntwerpen-HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

Users often tend to run their jobs without specifying specific PBS Job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can slow down the run time of your application, but also block UAntwerpen-HPC resources for other users.

Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the UAntwerpen-HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The UAntwerpen-HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

This chapter shows you how to measure:

  1. Walltime
  2. Memory usage
  3. CPU usage
  4. Disk (storage) needs
  5. Network bottlenecks

First, we allocate a compute node and move to our relevant directory:

qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
"}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

Test the time command:

$ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

The walltime can be specified in a job script as:

#PBS -l walltime=3:00:00:00\n

or on the command line

qsub -l walltime=3:00:00:00\n

It is recommended to always specify the walltime for a job.

"}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

The \"eat_mem\" application in the HPC examples directory just consumes and then releases memory, for the purpose of this test. It has one parameter, the amount of gigabytes of memory which needs to be allocated.

First compile the program on your machine and then test it for 1 GB:

$ gcc -o eat_mem eat_mem.c\n$ ./eat_mem 1\nConsuming 1 gigabyte of memory.\n
"}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the options \"-m\" to see the results expressed in Mega-Bytes and the \"-t\" option to get totals.

$ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

Important is to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

"}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

The \"Monitor\" tool monitors applications in terms of memory and CPU usage, as well as the size of temporary files. Note that currently only single node jobs are supported, MPI support may be added in a future release.

To start using monitor, first load the appropriate module. Then we study the \"eat_mem.c\" program and compile it:

$ module load monitor\n$ cat eat_mem.c\n$ gcc -o eat_mem eat_mem.c\n

Starting a program to monitor is very straightforward; you just add the \"monitor\" command before the regular command line.

$ monitor ./eat_mem 3\ntime (s) size (kb) %mem %cpu\nConsuming 3 gigabyte of memory.\n5  252900 1.4 0.6\n10  498592 2.9 0.3\n15  743256 4.4 0.3\n20  988948 5.9 0.3\n25  1233612 7.4 0.3\n30  1479304 8.9 0.2\n35  1723968 10.4 0.2\n40  1969660 11.9 0.2\n45  2214324 13.4 0.2\n50  2460016 14.9 0.2\n55  2704680 16.4 0.2\n60  2950372 17.9 0.2\n65  3167280 19.2 0.2\n70  3167280 19.2 0.2\n75  9264  0 0.5\n80  9264  0 0.4\n

Whereby:

  1. The first column shows you the elapsed time in seconds. By default, all values will be displayed every 5\u00a0seconds.
  2. The second column shows you the used memory in kb. We note that the memory slowly increases up to just over 3\u00a0GB (3GB is 3,145,728\u00a0KB), and is released again.
  3. The third column shows the memory utilisation, expressed in percentages of the full available memory. At full memory consumption, 19.2% of the memory was being used by our application. With the free command, we have previously seen that we had a node of 16\u00a0GB in this example. 3\u00a0GB is indeed more or less 19.2% of the full available memory.
  4. The fourth column shows you the CPU utilisation, expressed in percentages of a full CPU load. As there are no computations done in our exercise, the value remains very low (i.e.\u00a00.2%).

Monitor will write the CPU usage and memory consumption of the simulation to standard error.

By default, monitor samples the program's metrics every 5 seconds. Since monitor's output may interfere with that of the program being monitored, it is often convenient to use a log file. The latter can be specified as follows:

$ monitor -l test1.log eat_mem 2\nConsuming 2 gigabyte of memory.\n$ cat test1.log\n

For long-running programs, it may be convenient to limit the output to, e.g., the last minute of the programs' execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

$ monitor -l test2.log -n 12 eat_mem 4\nConsuming 4 gigabyte of memory.\n

Note that this option is only available when monitor writes its metrics to a\u00a0log file, not when standard error is used.

The interval at\u00a0which monitor will show the metrics can be modified by specifying delta, the sample rate:

$ monitor -d 1 ./eat_mem 3\nConsuming 3 gigabyte of memory.\n

Monitor will now print the program's metrics every second. Note that the\u00a0minimum delta value is 1\u00a0second.

Alternative options to monitor the memory consumption are the \"top\" or the \"htop\" command.

top

provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

htop

is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.

"}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

Once you gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

Sequential or single-node applications:

The maximum amount of physical memory used by the job can be specified in a job script as:

#PBS -l mem=4gb\n

or on the command line

qsub -l mem=4gb\n

This setting is ignored if the number of nodes is not\u00a01.

Parallel or multi-node applications:

When you are running a parallel application over multiple cores, you can also specify the memory requirements per processor (pmem). This directive specifies the maximum amount of physical memory used by any process in the job.

For example, if the job would run four processes and each would use up to 2 GB (gigabytes) of memory, then the memory directive would read:

#PBS -l pmem=2gb\n

or on the command line

$ qsub -l pmem=2gb\n

(and of course this would need to be combined with a CPU cores directive such as nodes=1:ppn=4). In this example, you request 8\u00a0GB of memory in total on the node.
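
A sketch of the combined request in a job script:

#PBS -l nodes=1:ppn=4\n#PBS -l pmem=2gb\n# 4 processes x 2 GB per process = 8 GB of memory in total on the node\n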

"}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are decently specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

"}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

The /proc/cpuinfo file stores info about your CPU architecture, like the number of CPUs, threads, cores, information about CPU caches, CPU family, model and much more. So, if you want to detect how many cores are available on a specific machine:

$ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

Or if you want to see it in a more readable format, execute:

$ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
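
If you only need the count, a shorter sketch is to count those lines, or to use the nproc command from coreutils; on the machine of the example above both would report 8:

$ grep -c processor /proc/cpuinfo\n8\n$ nproc\n8\n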

Note

Unless you want information about the login nodes, you'll have to issue these commands on one of the worker nodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

In order to specify the number of nodes and the number of processors per node in your job script, use:

#PBS -l nodes=N:ppn=M\n

or with equivalent parameters on the command line

qsub -l nodes=N:ppn=M\n

This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.
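
For a multi-threaded (e.g., OpenMP) application, a sketch of keeping the requested cores and the used threads in sync; the program name is hypothetical:

#PBS -l nodes=1:ppn=8\n\ncd $PBS_O_WORKDIR\n# use exactly the number of cores requested above (keep in sync with ppn)\nexport OMP_NUM_THREADS=8\n./my_openmp_program\n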

"}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

The previously used \"monitor\" tool also shows the overall CPU-load. The \"eat_cpu\" program performs a multiplication of 2 randomly filled a (1500 \\times 1500) matrices and is just written to consume a lot of \"cpu\".

We first load the monitor modules, study the \"eat_cpu.c\" program and compile it:

$ module load monitor\n$ cat eat_cpu.c\n$ gcc -o eat_cpu eat_cpu.c\n

And then start to monitor the eat_cpu program:

$ monitor -d 1 ./eat_cpu\ntime  (s) size (kb) %mem %cpu\n1  52852  0.3 100\n2  52852  0.3 100\n3  52852  0.3 100\n4  52852  0.3 100\n5  52852  0.3  99\n6  52852  0.3 100\n7  52852  0.3 100\n8  52852  0.3 100\n

We notice that the program keeps its CPU nicely busy at 100%.

Some processes spawn one or more sub-processes. In that case, the metrics shown by monitor are aggregated over the process and all of its sub-processes (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100%.

Some (well, since this is a UAntwerpen-HPC Cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100%. When programs of this type are running on a computer with n cores, the CPU usage can go up to (\\text{n} \\times 100\\%).

This could also be monitored with the htop command:

htop\n
Example output:
  1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the CPU utilisation per processor with monitor and htop.

If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by htop found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

"}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

It is good practice to perform a number of run time stress tests and to check the CPU utilisation of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node are left idle without reason.

But how can you maximise?

  1. Configure your software. (e.g., to exactly use the available amount of processors in a node)
  2. Develop your parallel program in a smart way.
  3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
  4. Correct your request for CPUs in your job script.
"}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

The load averages differ from CPU percentage in two significant ways:

  1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
  2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
"}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

What is the \"optimal load\" rule of thumb?

The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load shall be between 0.7 and 1.0 per processor.

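A quick way to relate the current load to the number of available cores is to combine the nproc command with the contents of /proc/loadavg (its first three fields are the 1-, 5- and 15-minute load averages). The values below are purely illustrative:

$ nproc\n28\n$ cat /proc/loadavg\n27.80 27.65 27.20 2/973 22350\n
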
In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time might be more than one per processor.

The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

  1. When you are running computationally intensive applications, one application per processor will generate the optimal load.
  2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

The optimal number of applications on a machine could be determined empirically by performing a number of stress tests and checking which configuration gives the highest throughput. At the moment, however, there is no way on the UAntwerpen-HPC to dynamically specify the maximum number of applications that should run per core. The UAntwerpen-HPC scheduler will not launch more than one process per core.

How the cores are spread out over CPUs does not matter as far as the load is concerned. Two quad-core CPUs perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. For these purposes, it is all eight cores.

"}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

The uptime command will show us the average load

$ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

$ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
You can also see the load average in the output of the htop command.

"}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node sit idle without reason.

But how can you maximise your resource usage?

  1. Profile your software to improve its performance.
  2. Configure your software (e.g., to use exactly the number of processors available in a node).
  3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
  4. Request a specific type of compute node (e.g., Harpertown, Westmere), each of which has a specific number of cores.
  5. Correct your request for CPUs in your job script.

And then check again.

"}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

Some programs generate intermediate or output files, the size of which may also be a useful metric.

Remember that your available disk space on the UAntwerpen-HPC online storage is limited, and that you have environment variables available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

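To see where these environment variables point on the cluster you are logged in to, you can simply echo them (the exact paths differ per user and per cluster):

$ echo $VSC_DATA\n$ echo $VSC_SCRATCH\n$ echo $VSC_SCRATCH_NODE\n
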
We first load the monitor modules, study the \"eat_disk.c\" program and compile it:

$ module load monitor\n$ cat eat_disk.c\n$ gcc -o eat_disk eat_disk.c\n

The monitor tool provides an option (-f) to display the size of one or more files:

$ monitor -f $VSC_SCRATCH/test.txt ./eat_disk\ntime (s) size (kb) %mem %cpu\n5  1276  0 38.6 168820736\n10  1276  0 24.8 238026752\n15  1276  0 22.8 318767104\n20  1276  0 25 456130560\n25  1276  0 26.9 614465536\n30  1276  0 27.7 760217600\n...\n

Here, the size of the file \"test.txt\" in directory $VSC_SCRATCH will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by \",\".

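For example, to watch both the test.txt file used above and a second, hypothetical output.log file given by a relative path, you would pass a comma-separated list to the -f option (this merely illustrates the syntax described above):

$ monitor -f $VSC_SCRATCH/test.txt,./output.log ./eat_disk\n
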
It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to section How much disk space do I get? on Quotas to check your quota and the tools to find out which files consumed the \"quota\".

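As a quick, rough check (not a replacement for the dedicated quota tools referred to above), standard commands such as du and ls can give you an idea of how much space your files take up; the -a option of ls also reveals hidden files:

$ du -sh $VSC_DATA $VSC_SCRATCH\n$ ls -lah $VSC_SCRATCH\n
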
Several actions can be taken, to avoid storage problems:

  1. Be aware of all the files that are generated by your program. Also check out the hidden files.
  2. Check your quota consumption regularly.
  3. Clean up your files regularly.
  4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, you can move your files in one go to the $VSC_DATA directories.
  5. Make sure your programs clean up their temporary files after execution.
  6. Move your output results to your own computer regularly.
  7. Anyone can request more disk space from the UAntwerpen-HPC staff, but you will have to duly justify your request.
"}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only a few green bars in the htop screen, it is usually an indication that a lot of time is lost on inter-process communication.

Whenever your application uses a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high-bandwidth, low-latency network that enables large parallel jobs to run as efficiently as possible.

The parameter to add in your job script would be:

#PBS -l ib\n

If, for some reason, a user is fine with the gigabit Ethernet network, they can specify:

#PBS -l gbe\n
"}, {"location": "fine_tuning_job_specifications/#some-more-tips-on-the-monitor-tool", "title": "Some more tips on the Monitor tool", "text": ""}, {"location": "fine_tuning_job_specifications/#command-lines-arguments", "title": "Command Lines arguments", "text": "

Many programs, e.g., MATLAB, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

$ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m\n

The use of -- will ensure that monitor does not get confused by MATLAB's -nojvm and -nodisplay options.

"}, {"location": "fine_tuning_job_specifications/#exit-code", "title": "Exit Code", "text": "

Monitor will propagate the exit code of the program it is watching. If the latter ends normally, monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

When monitor terminates in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.

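As a minimal sketch of how this can be combined (the value 200 is just an arbitrary example), you could override monitor's own error code and then check the propagated exit code afterwards:

$ export MONITOR_EXIT_ERROR=200  # 200 is an arbitrary example value\n$ monitor -f $VSC_SCRATCH/test.txt ./eat_disk\n$ echo $?  # exit code of eat_disk, or 200 if monitor itself failed\n
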
"}, {"location": "fine_tuning_job_specifications/#monitoring-a-running-process", "title": "Monitoring a running process", "text": "

It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

$ monitor -p 18749\n

Note that this feature can be (ab)used to monitor specific sub-processes.

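As an alternative to ps, the pgrep command can look up your processes by name. Reusing the eat_cpu program and the example PID from above (the output shown is illustrative):

$ pgrep -u $USER eat_cpu\n18749\n$ monitor -p 18749\n
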
"}, {"location": "getting_started/", "title": "Getting Started", "text": "

Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the UAntwerpen-HPC and submitting your very first job. We'll also walk you through the process step by step using a practical example.

In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

Before proceeding, read the introduction to HPC to gain an understanding of the UAntwerpen-HPC and related terminology.

"}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

To get access to the UAntwerpen-HPC, visit Getting an HPC Account.

If you have not used Linux before, please learn some basics first before continuing. (see Appendix C - Useful Linux Commands)

"}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
  1. Connect to the login nodes
  2. Transfer your files to the UAntwerpen-HPC
  3. Optional: compile your code and test it
  4. Create a job script and submit your job
  5. Wait for job to be executed
  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

"}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

There are two options to connect:

  • Using a terminal to connect via SSH (for power users) (see First Time connection to the UAntwerpen-HPC)
  • Using the web portal

Considering your operating system is Linux, it is recommended to use the web portal.

The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

See shell access when using the web portal, or connection to the UAntwerpen-HPC when using a terminal.

Make sure you can get shell access to the UAntwerpen-HPC before proceeding with the next steps.

Info

If you are having problems, see the connection issues section on the troubleshooting page.

"}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

Now that you can login, it is time to transfer files from your local computer to your home directory on the UAntwerpen-HPC.

Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

The HPC-UGent web portal provides a file browser that allows uploading files. For more information see the file browser section.

Upload both files (run.sh and tensorflow_mnist.py) to your home directory and go back to your shell.

Info

As an alternative, you can use WinSCP (see our section)

When running ls in your session on the UAntwerpen-HPC, you should see the two files listed in your home directory (~):

$ ls ~\nrun.sh tensorflow_mnist.py\n

When you do not see these files, make sure you uploaded the files to your home directory.

"}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

Our job script looks like this:

run.sh

#!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
As you can see this job script will run the Python script named tensorflow_mnist.py.

The jobs you submit are by default executed on cluster/{{ defaultcluster }}. You can swap to another cluster by issuing the following command.

module swap cluster/{{ othercluster }}\n

Tip

When submitting jobs that require only a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

$ qsub run.sh\n433253.leibniz\n

This command returns a job identifier (433253.leibniz) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.

Make sure you understand what the module command does

Note that the module commands only modify environment variables. For instance, running module swap cluster/{{ othercluster }} will update your shell environment so that qsub submits a job to the {{ othercluster }} cluster, but your active shell session is still running on the login node.

It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are executed: they will still be run on the login node you are on.

When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like {{ othercluster }}).

For detailed information about module commands, read the running batch jobs chapter.

"}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

You can get an overview of the active jobs using the qstat command:

$ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:00  Q {{ othercluster }}\n

Eventually, after entering qstat again you should see that your job has started running:

$ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:01  R {{ othercluster }}\n

If you don't see your job in the output of the qstat command anymore, your job has likely completed.

Read this section on how to interpret the output.

"}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

When your job finishes it generates 2 output files:

  • One for normal output messages (stdout output channel).
  • One for warning and error messages (stderr output channel).

By default, these are located in the directory where you issued qsub.

In our example when running ls in the current directory you should see 2 new files:

  • run.sh.o433253.leibniz, containing normal output messages produced by job 433253.leibniz;
  • run.sh.e433253.leibniz, containing errors and warnings produced by job 433253.leibniz.

Info

run.sh.e433253.leibniz should be empty (no errors or warnings).

Use your own job ID

Replace 433253.leibniz with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

When examining the contents of run.sh.o433253.leibniz you will see something like this:

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

Warning

When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

"}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
  • Running interactive jobs
  • Running jobs with input/output data
  • Multi core jobs/Parallel Computing
  • Interactive and debug cluster

For more examples see Program examples and Job script examples

"}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

module swap cluster/joltik\n

To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

module swap cluster/accelgor\n

Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

"}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@uantwerpen.be.

"}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

See https://www.ugent.be/hpc/en/infrastructure.

"}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

There are 2 main ways to ask for GPUs as part of a job:

  • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z form is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want to run with full control or in multi-node cases like MPI jobs. If you do not specify the number of GPUs by just using -l gpus, you get 1 GPU by default. A minimal sketch of both notations follows after this list.

  • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.

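To make the two notations concrete, here is a minimal sketch of the corresponding job script header lines (the numbers are illustrative; adjust them to your needs):

# GPU as a node property, with full control over nodes and cores:\n#PBS -l nodes=1:ppn=8:gpus=1\n\n# GPU as a separate resource request (single node, default number of cores per GPU):\n#PBS -l gpus=1\n
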
Some background:

  • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

  • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs. You can check the GPUs that are visible to your job as sketched below.

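To verify from inside a job which GPUs it has been given, and whether persistence mode is enabled, you can run NVIDIA's standard nvidia-smi tool (this is generic NVIDIA tooling, not something specific to the setup described above):

$ nvidia-smi\n
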
"}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

Some important attention points:

  • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

  • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

  • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e., it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

  • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

"}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

Use module avail to check for centrally installed software.

The subsections below only cover a couple of installed software packages, more are available.

"}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

Please consult module avail GROMACS for a list of installed versions.

"}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

Please consult module avail Horovod for a list of installed versions.

Horovod supports TensorFlow, Keras, PyTorch and MXNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; we are not sure whether it handles placement and other aspects correctly.)

At least for simple TensorFlow benchmarks, Horovod looks to be a bit faster than TensorFlow's usual automatic multi-GPU support, but this comes at the cost of the code modifications needed to use Horovod.

"}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

Please consult module avail PyTorch for a list of installed versions.

"}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

Please consult module avail TensorFlow for a list of installed versions.

Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

"}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
#!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
"}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

Please consult module avail AlphaFold for a list of installed versions.

For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

"}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

In case of questions or problems, please contact the UAntwerpen-HPC via hpc@uantwerpen.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

"}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor) jobs on this cluster should normally start more or less immediately. The tradeoff is that performance must not be an issue for the submitted jobs. This means that typical workloads for this cluster should be limited to:

  • Interactive jobs (see chapter\u00a0Running interactive jobs)

  • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

  • Jobs requiring few resources

  • Debugging programs

  • Testing and debugging job scripts

"}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

module swap cluster/donphan\n

Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

"}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

Some limits are in place for this cluster:

  • each user may have at most 5 jobs in the queue (both running and waiting to run);

  • at most 3 jobs per user can be running at the same time;

  • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

"}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

"}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

\"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer at the frontline of contemporary processing capacity, particularly in terms of calculation speed and available memory.

While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

"}, {"location": "introduction/#what-is-the-uantwerpen-hpc", "title": "What is the UAntwerpen-HPC?", "text": "

The UAntwerpen-HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

The UAntwerpen-HPC relies on parallel-processing technology to offer University of Antwerp researchers an extremely fast solution for all their data processing needs.

The UAntwerpen-HPC consists of:

| In technical terms ... | ... in human terms |
| --- | --- |
| over 280 nodes and over 11000 cores | or the equivalent of 2750 quad-core PCs |
| over 500 Terabyte of online storage | or the equivalent of over 60000 DVDs |
| up to 100 Gbit InfiniBand fiber connections | or allowing to transfer 3 DVDs per second |

The UAntwerpen-HPC currently consists of:

Leibniz:

  1. 144 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM, 120 GB local disk

  2. 8 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM, 120 GB local disk

  3. 24 \"hopper\" compute nodes (recovered from the former Hopper cluster) with 2 ten core Intel E5-2680v2 CPUs (Ivy Bridge generation, 2.8 GHz), 256 GB memory, 500 GB local disk

  4. 2 GPGPU nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU, 120 GB local disk

  5. 1 vector computing node with 1 12-core Intel Xeon Gold 6126 (Skylake generation, 2.6 GHz), 96 GB RAM and 2 NEC SX-Aurora Vector Engines type 10B (per card 8 cores @1.4 GHz, 48 GB HBM2), 240 GB local disk

  6. 1 Xeon Phi node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon Phi 7220P PCIe card with 16 GB of RAM, 120 GB local disk

  7. 1 visualisation node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 256 GB RAM and with a NVIDIA P5000 GPU, 120 GB local disk

The nodes are connected using an InfiniBand EDR network except for the \"hopper\" compute nodes that utilize FDR10 InfiniBand.

Vaughan:

  1. 104 compute nodes with 2 32-core AMD Epyc 7452 (2.35 GHz) and 256 GB RAM, 240 GB local disk

The nodes are connected using an InfiniBand HDR100 network.

All the nodes in the UAntwerpen-HPC run under the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a clone of \"RedHat Enterprise Linux\", with cgroups support.

Two tools perform job management and job scheduling:

  1. TORQUE: a resource manager (based on PBS);

  2. Moab: job scheduler and management tools.

For maintenance and monitoring, we use:

  1. Ganglia: monitoring software;

  2. Icinga and Nagios: alert managers.

"}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

The HPC infrastructure is not a magic computer that automatically:

  1. runs your PC-applications much faster for bigger problems;

  2. develops your applications;

  3. solves your bugs;

  4. does your thinking;

  5. ...

  6. allows you to play games even faster.

The UAntwerpen-HPC does not replace your desktop computer.

"}, {"location": "introduction/#is-the-uantwerpen-hpc-a-solution-for-my-computational-needs", "title": "Is the UAntwerpen-HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

It is also possible to run programs on the UAntwerpen-HPC that require user interaction (pushing buttons, entering input data, etc.). Although technically possible, using the UAntwerpen-HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the UAntwerpen-HPC staff can reveal whether the UAntwerpen-HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

"}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

"}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

The two parallel programming paradigms most used in HPC are:

  • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

  • MPI for distributed memory systems (multiprocessing): on multiple nodes

Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

"}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

It is perfectly possible to also run purely sequential programs on the UAntwerpen-HPC.

Running your sequential programs on the most modern and fastest computers in the UAntwerpen-HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the UAntwerpen-HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.

"}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, CentOS Linux release 7.8.2003 (Core).

For the most common programming languages, a compiler is available on CentOS Linux release 7.8.2003 (Core). Supported and common programming languages on the UAntwerpen-HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

Supported and commonly used compilers are GCC, clang, J2EE and Intel.

Commonly used software packages are:

  • in bioinformatics: beagle, Beast, bowtie, MrBayes, SAMtools

  • in chemistry: ABINIT, CP2K, Gaussian, Gromacs, LAMMPS, NWChem, Quantum Espresso, Siesta, VASP

  • in engineering: COMSOL, OpenFOAM, Telemac

  • in mathematics: JAGS, MATLAB, R

  • for visualisation: Gnuplot, ParaView.

Commonly used libraries are Intel MKL, FFTW, HDF5, PETSc and Intel MPI, OpenMPI. Additional software can be installed \"on demand\". Please contact the UAntwerpen-HPC staff to see whether the UAntwerpen-HPC can handle your specific requirements.

"}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

All nodes in the UAntwerpen-HPC cluster run under CentOS Linux release 7.8.2003 (Core), which is a specific version of RedHat Enterprise Linux. This means that all programs (executables) should be compiled for CentOS Linux release 7.8.2003 (Core).

Users can connect from any computer in the University of Antwerp network to the UAntwerpen-HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the UAntwerpen-HPC.

A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

"}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

A typical workflow looks like:

  1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

  2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

  3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

  4. Create a job script and submit your job (see Running batch jobs)

  5. Get some coffee and be patient:

    1. Your job gets into the queue

    2. Your job gets executed

    3. Your job finishes

  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

"}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

When you think that the UAntwerpen-HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting an HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications, which will help you to transfer and run your programs on the UAntwerpen-HPC cluster.

Do not hesitate to contact the UAntwerpen-HPC staff for any help.

  1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

"}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

  • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the number of cores requested. See also: Job failed: SEGV Segmentation fault

  • -m/-M: the -m option will send emails to your email address registered with VSC. Only if you want emails at some other address should you use the -M option.

  • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

  • To use a situational parameter, remove one '#' at the beginning of the line.

simple_jobscript.sh
#!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submmitted\n\n[commands]\n
"}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

Here's an example of a single-core job script:

single_core.sh
#!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
  1. Using #PBS header lines, we specify the resource requirements for the job; see Appendix B for a list of these options.

  2. A module for Python 3.6 is loaded, see also section Modules.

  3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

  4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

  5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a uniquely named file in $VSC_DATA. For a list of possible storage locations, see subsection Pre-defined user directories.

"}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

Here's an example of a multi-core job script that uses mympirun:

multi_core.sh
#!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

"}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

If you want to run a job but are not sure it will finish before the walltime runs out, and you want to copy data back before that happens, you have to stop the main command before the walltime runs out and then copy the data back.

This can be done with the timeout command. This command sets a limit of time a program can run for, and when this limit is exceeded, it kills the program. Here's an example job script using timeout:

timeout.sh
#!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minute,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

example_program.sh
#!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
"}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plain text. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code execution with text and visual outputs makes it a useful tool for data analysis, machine learning and educational purposes.

"}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

"}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is finding the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

To find the appropriate modules, it is recommended to use the shell within the web portal, under Clusters > >_login Shell Access.

We can see all available versions of the SciPy-bundle module by using module avail SciPy-bundle:

$ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

Not all modules will work for every notebook; we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that the module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

$ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

It is also recommended to doublecheck the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

$ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

$ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.

"}, {"location": "known_issues/", "title": "Known issues", "text": "

This page provides details on a couple of known problems, and the workarounds that are available for them.

If you have any questions related to these issues, please contact the UAntwerpen-HPC.

  • Operation not permitted error for MPI applications
"}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

This error means that an internal problem has occurred in OpenMPI.

"}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

"}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

"}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

A workaround has been implemented in mympirun (version 5.4.0).

Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

module load vsc-mympirun\n

and launch your MPI application using the mympirun command.

For more information, see the mympirun documentation.

"}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
"}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

"}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

There are two important motivations to engage in parallel programming.

  1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

  2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means you have the ability in principle of splitting up your computations into groups and running each group on its own core.

There are multiple different ways to achieve parallel programming. The table below gives a (non-exhaustive) overview of problem independent approaches to parallel programming. In addition there are many problem specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

| Tool | Available language bindings | Limitations |
| --- | --- | --- |
| Raw threads (pthreads, boost::threading, ...) | Threading libraries are available for all common programming languages | Threads are limited to shared memory systems. They are more often used on single node systems rather than for UAntwerpen-HPC. Thread management is hard. |
| OpenMP | Fortran/C/C++ | Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelized by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelize the work load on each node and MPI (see below) for communication between nodes. |
| Lightweight threads with clever scheduling, Intel TBB, Intel Cilk Plus | C/C++ | Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler enabling the programmer to focus on parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the work load on each node and MPI (see below) for communication between nodes. |
| MPI | Fortran/C/C++, Python | Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication routines. |
| Global Arrays library | C/C++, Python | Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and one sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier. |

Tip

You can request more nodes/cores by adding the following line to your run script.

#PBS -l nodes=2:ppn=10\n
This queues a job that claims 2 nodes and 10 cores.

Warning

Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

"}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

A multithreaded program can therefore run faster on computer systems that have multiple CPUs or multiple cores, because its threads naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time so that they process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using mutexes or semaphores) to prevent common data from being modified simultaneously, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
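
To make the mutual exclusion mentioned above concrete, here is a minimal sketch (not part of the course examples; the file name below is just illustrative) that protects a shared counter with a pthread mutex. Without the lock, the concurrent increments could race and the final total could be wrong. It can be compiled like the example further down, with gcc and -lpthread.

mutex_counter.c
/*\n * Illustrative sketch (not part of the course examples):\n * protecting a shared counter with a mutex to avoid a race condition.\n */\n#include <stdio.h>\n#include <pthread.h>\n\n#define NTHREADS 4\n#define NLOOPS 100000\n\nstatic long counter = 0;                                  /* shared data */\nstatic pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects counter */\n\nvoid *increment(void *arg)\n{\nint i;\nfor (i = 0; i < NLOOPS; ++i)\n{\npthread_mutex_lock(&lock);   /* only one thread at a time gets past this point */\n++counter;\npthread_mutex_unlock(&lock);\n}\nreturn NULL;\n}\n\nint main(void)\n{\npthread_t threads[NTHREADS];\nint i;\nfor (i = 0; i < NTHREADS; ++i)\npthread_create(&threads[i], NULL, increment, NULL);\nfor (i = 0; i < NTHREADS; ++i)\npthread_join(threads[i], NULL);\nprintf(\"counter = %ld (expected %d)\\n\", counter, NTHREADS * NLOOPS);\nreturn 0;\n}\n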

Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

Go to the example directory:

cd ~/examples/Multi-core-jobs-Parallel-Computing\n

Note

If the example directory is not yet present, copy it to your home directory:

cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

Study the example first:

T_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\n/* exit with status 0 to indicate success */\nreturn 0;\n}\n

Compile it (linking in the pthread library with -lpthread), then run and test it on the login node:

$ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

Now, run it on the cluster and check the output:

$ qsub T_hello.pbs\n433253.leibniz\n$ more T_hello.pbs.o433253.leibniz\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

Tip

If you plan to engage in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox, 2008.

"}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler when OpenMP is enabled, and ignored otherwise) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

Here is the general code structure of an OpenMP program:

#include <omp.h>\nint main() {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\nreturn 0;\n}\n

"}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

"}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

Parallelising for loops is really simple (see the code below). By default, the loop iteration counter in an OpenMP loop construct (in this case the variable i) is treated as a private variable.

omp1.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

$ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

Now run it on the cluster and check the result again.

$ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
"}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

Using OpenMP you can specify something called a \"critical\" section of code. This is code that is executed by all threads, but by only one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, without having to worry about several threads writing to that global variable at the same time (a collision).

omp2.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

Compile it (enabling OpenMP with the -fopenmp flag), then run and test it on the login node:

$ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

Now run it on the cluster and check the result again.

$ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
"}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). We already used this paradigm in the code example above, where the \"critical code\" directive was used to accomplish it. Because the pattern is so common, OpenMP provides a specific reduction clause that allows you to implement it more easily.

omp3.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

Compile it (enabling OpenMP with the -fopenmp flag), then run and test it on the login node:

$ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

Now run it on the cluster and check the result again.

$ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
"}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

There are a host of other directives you can issue using OpenMP.

Some other clauses of interest are:

  1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

  2. nowait: threads will not wait until everybody is finished

  3. schedule(type, chunk): allows you to specify how the iterations of a for loop are divided among the threads. There are three types of scheduling you can specify (see the sketch after this list)

  4. if: allows you to parallelise only if a certain condition is met

  5. ...\u00a0and a host of others
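
To give an idea of what some of these clauses look like in practice, here is a minimal sketch (not part of the course examples; the file name below is just illustrative). It hands out the iterations of the first loop in chunks of 10 with schedule(dynamic, 10), skips the implicit barrier at the end of that loop with nowait, and then synchronises all threads explicitly with barrier before the second loop uses the results. Compile it like the other OpenMP examples, with gcc -fopenmp.

omp_clauses.c
/*\n * Illustrative sketch (not part of the course examples):\n * the schedule, nowait and barrier clauses in practice.\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(void)\n{\nint i;\nint a[1000], b[1000];\n\n#pragma omp parallel private(i)\n{\n/* iterations are handed out in chunks of 10, as threads become free */\n#pragma omp for schedule(dynamic, 10) nowait\nfor (i = 0; i < 1000; ++i)\n{\na[i] = 2 * i;\n}\n\n/* because of nowait, threads do not wait for each other after the loop,\n   so we synchronise explicitly before reading a[] in the next loop */\n#pragma omp barrier\n\n#pragma omp for schedule(static)\nfor (i = 0; i < 1000; ++i)\n{\nb[i] = a[i] + 1;\n}\n}\n\nprintf(\"b[999] = %d (expected %d)\\n\", b[999], 2 * 999 + 1);\nreturn 0;\n}\n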

Tip

If you plan to engage in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation Series. 2005.

"}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

One context where MPI particularly shines is its ability to take advantage not just of multiple cores on a single machine, but also to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write an MPI program that runs in parallel across any collection of computers, as long as they are networked together.

Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

Study the MPI program and the PBS file:

mpi_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <string.h>\n#include <mpi.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this process's rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
mpi_hello.pbs
#!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

and compile it:

$ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

mpiicc is a wrapper around the Intel C compiler icc to compile MPI programs (see the chapter on compilation for details).

Run the parallel program:

$ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc20167 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc20167 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc20167    0 Sep 16 14:22 mpi_hello.e433253.leibniz\n-rw------- 1 vsc20167  697 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw-r--r-- 1 vsc20167  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o433253.leibniz\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process knows its own rank and the total number of processes in the world, and can communicate with the other processes either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or to N processors, where N is the total number of processors available, or to something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it scales to the runtime configuration without recompilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.
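
The \"Hello World\" example above only uses point-to-point communication (MPI_Send and MPI_Recv). To also illustrate the collective communication mentioned above, here is a minimal sketch (not part of the course examples; the file name below is just illustrative) in which every process contributes its own rank and MPI_Reduce combines all contributions into a single sum on rank 0. It can be compiled with mpiicc and run with mpirun, just like mpi_hello.c.

mpi_reduce_ranks.c
/*\n * Illustrative sketch (not part of the course examples):\n * collective communication with MPI_Reduce.\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char *argv[])\n{\nint numprocs, myid, sum;\nMPI_Init(&argc, &argv);\nMPI_Comm_size(MPI_COMM_WORLD, &numprocs);\nMPI_Comm_rank(MPI_COMM_WORLD, &myid);\n\n/* every process contributes its rank; the sum of all ranks ends up on rank 0 */\nMPI_Reduce(&myid, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);\n\nif (myid == 0)\n{\nprintf(\"Sum of the ranks 0..%d is %d\\n\", numprocs - 1, sum);\n}\n\nMPI_Finalize();\nreturn 0;\n}\n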

Tip

If you plan to engage in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

"}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

A frequently occurring characteristic of scientific computation is its focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or (ii) different input files.

These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The users want to run their job once for each instance of the parameter values.

One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs. Those huge numbers of small jobs create a lot of overhead and can slow down the whole cluster. It would be better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

The \"Worker framework\" has been developed to address this issue.

It can handle many small jobs determined by:

parameter variations

i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

job arrays

i.e., each individual job gets a unique numeric identifier.

Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

"}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

First go to the right directory:

cd ~/examples/Multi-job-submission/par_sweep\n

Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

$ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

par_sweep/weather
#!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

A job script that would run this as a job for the first parameter instance (p01) would then look like:

par_sweep/weather_p01.pbs
#!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

To submit the job, the user would use:

 $ qsub weather_p01.pbs\n
However, the user wants to run this program for many parameter instances, e.g., he wants to run the program on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, exported from an RDBMS, or just written by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

$ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.

In order to make our PBS file generic, it can be modified as follows:

par_sweep/weather.pbs
#!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

Note that:

  1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

  2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

  3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., just over 3 hours; we request 4 hours of walltime to be on the safe side.

The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

$ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n433253.leibniz\n

Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

Warning

When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

module swap env/slurm/donphan\n

instead of

module swap cluster/donphan\n
We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

"}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

First go to the right directory:

cd ~/examples/Multi-job-submission/job_array\n

As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

The following bash script would submit these jobs all one by one:

#!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output_$i -i input_$i myprog.pbs\ndone\n

This, as said before, would put an unnecessary burden on the job scheduler.

Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

The details are

  1. a job is submitted for each number in the range;

  2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

  3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

The job could have been submitted using:

qsub -t 1-100 my_prog.pbs\n

The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

A typical job script for use with job arrays would look like this:

job_array/job_array.pbs
#!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

$ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file \\#99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory, in the files output_1.dat, output_2.dat, ..., output_100.dat.

job_array/test_set
#!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

job_array/test_set.pbs
#!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

Note that

  1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

  2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

The job is now submitted as follows:

$ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n433253.leibniz\n

The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

$ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n433253.leibniz  test_set.pbs  vsc20167          0 Q\n

And you can now check the generated output files:

$ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
"}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

Often, an embarrassingly parallel computation can be abstracted to three simple steps:

  1. a preparation phase in which the data is split up into smaller, more manageable chunks;

  2. on these chunks, the same algorithm is applied independently (these are the work items); and

  3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

cd ~/examples/Multi-job-submission/map_reduce\n

The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

First study the scripts:

map_reduce/pre.sh
#!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
map_reduce/post.sh
#!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

Then one can submit a MapReduce style job as follows:

$ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n433253.leibniz\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

"}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

The \"Worker Framework\" will be effective when

  1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

  2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

"}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log433253.leibniz, assuming the job's ID is 433253.leibniz. To keep an eye on the progress, one can use:

tail -f run.pbs.log433253.leibniz\n

Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

watch -n 60 wsummarize run.pbs.log433253.leibniz\n

This will summarise the log file every 60 seconds.

"}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

#!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 ./weather -t $temperature  -p $pressure  -v $volume\n

Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.

Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

"}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"433253.leibniz\".

wresume -jobid 433253.leibniz\n

This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

wresume -l walltime=1:30:00 -jobid 433253.leibniz\n

Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate yet, either successfully or with a reported failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

wresume -jobid 433253.leibniz -retry\n

By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

"}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

$ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
"}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

To check for the available versions of worker, use the following command:

$ module avail worker\n
  1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

"}, {"location": "mympirun/", "title": "Mympirun", "text": "

mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of plain mpirun.

In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

"}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

Before using mympirun, we first need to load its module:

module load vsc-mympirun\n

As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

"}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the sourcecode of mpi_hello is available in the vsc-mympirun repository).

By default, mympirun starts one process per core on every node you assigned. So if you assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

"}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

This is the most commonly used option for controlling the number of processes.

The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

$ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
"}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

There's also --universe, which sets the exact amount of processes started by mympirun; --double, which uses double the amount of processes it normally would; and --multi that does the same as --double, but takes a multiplier (instead of the implied factor 2 with --double).

See vsc-mympirun README for a detailed explanation of these options.

"}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

$ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
"}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC UAntwerpen-HPC infrastructure.

"}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

  • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

    • see also http://openfoam.com/history/
  • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

    • see also https://openfoam.org/download/history/
  • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

"}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

The best practices outlined here focus specifically on the use of OpenFOAM on the VSC UAntwerpen-HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

  • OpenFOAM websites:

    • https://openfoam.com

    • https://openfoam.org

    • http://wikki.gridcore.se/foam-extend

  • OpenFOAM user guides:

    • https://www.openfoam.com/documentation/user-guide

    • https://cfd.direct/openfoam/user-guide/

  • OpenFOAM C++ source code guide: https://cpp.openfoam.org

  • tutorials: https://wiki.openfoam.com/Tutorials

  • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

Other useful OpenFOAM documentation:

  • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

  • http://www.dicat.unige.it/guerrero/openfoam.html

"}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

"}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

$ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

module load OpenFOAM/11-foss-2023a\n
"}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

source $FOAM_BASH\n
"}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.

"}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

unset FOAM_SIGFPE\n

Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise result in the simulation being terminated. It does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are still occurring.

As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

"}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

  • generate the mesh;

  • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

After running the simulation, some post-processing steps are typically performed:

  • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

  • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job running the actual simulation, either on the HPC infrastructure or elsewhere, or as a part of the job that runs the OpenFOAM simulation itself.

Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

"}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

"}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different than '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar can not be run in parallel.

"}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

See Basic usage for how to get started with mympirun.

To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).

"}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

number of processor directories = 4 is not equal to the number of processors = 16\n

In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

  • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

  • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

See Controlling number of processes to control the number of processes mympirun will start.

This is interesting if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.

To visualise the processor domains, use the following command:

mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

and then load the VTK files generated in the VTK folder into ParaView.

"}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).

  • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc.\u00a0keywords;

  • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

  • consider writing results for only part of the domain (e.g., a line of plane) rather than the entire domain;

  • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid that OpenFOAM re-reads each of the system/*Dict files at every time step;

  • if the results per individual time step are large, consider setting writeCompression to true;

For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple of dozen of processor cores.

"}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

See https://cfd.direct/openfoam/user-guide/compiling-applications/.

"}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

OpenFOAM_damBreak.sh
#!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not on available victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
"}, {"location": "program_examples/", "title": "Program examples", "text": "

If you have not done so already, copy our examples to your home directory by running the following command:

 cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

Go to our examples:

cd ~/examples/Program-examples\n

Here, we have put together a number of examples for your convenience. We made an effort to include comments in the source files, so the source code should be self-explanatory.

  1. 01_Python

  2. 02_C_C++

  3. 03_Matlab

  4. 04_MPI_C

  5. 05a_OMP_C

  6. 05b_OMP_FORTRAN

  7. 06_NWChem

  8. 07_Wien2k

  9. 08_Gaussian

  10. 09_Fortran

  11. 10_PQS

The above 2 OMP directories contain the following examples:

C file / Fortran file: description
  • omp_hello.c / omp_hello.f: Hello world
  • omp_workshare1.c / omp_workshare1.f: Loop work-sharing
  • omp_workshare2.c / omp_workshare2.f: Sections work-sharing
  • omp_reduction.c / omp_reduction.f: Combined parallel loop reduction
  • omp_orphan.c / omp_orphan.f: Orphaned parallel loop reduction
  • omp_mm.c / omp_mm.f: Matrix multiply
  • omp_getEnvInfo.c / omp_getEnvInfo.f: Get and print environment information
  • omp_bug* / omp_bug*: Programs with bugs and their solution

Compile by any of the following commands:

  • C: icc -openmp omp_hello.c -o hello, pgcc -mp omp_hello.c -o hello, or gcc -fopenmp omp_hello.c -o hello
  • Fortran: ifort -openmp omp_hello.f -o hello, pgf90 -mp omp_hello.f -o hello, or gfortran -fopenmp omp_hello.f -o hello

Feel free to explore the examples.

"}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

Remember to substitute the usernames, login nodes, file names, ... for your own.

Login:
  • Log in: ssh vsc20167@login.hpc.uantwerpen.be
  • Where am I?: hostname
  • Copy to UAntwerpen-HPC: scp foo.txt vsc20167@login.hpc.uantwerpen.be:
  • Copy from UAntwerpen-HPC: scp vsc20167@login.hpc.uantwerpen.be:foo.txt
  • Set up an sftp session: sftp vsc20167@login.hpc.uantwerpen.be

Modules:
  • List all available modules: module avail
  • List loaded modules: module list
  • Load module: module load example
  • Unload module: module unload example
  • Unload all modules: module purge
  • Help on use of module: module help

Jobs:
  • qsub script.pbs: submit job with job script script.pbs
  • qstat 12345: status of job with ID 12345
  • showstart 12345: possible start time of job with ID 12345 (not available everywhere)
  • checkjob 12345: check job with ID 12345 (not available everywhere)
  • qstat -n 12345: show compute node of job with ID 12345
  • qdel 12345: delete job with ID 12345
  • qstat: status of all your jobs
  • qstat -na: detailed status of your jobs + a list of nodes they are running on
  • showq: show all jobs on the queue (not available everywhere)
  • qsub -I: submit interactive job

Disk quota:
  • Check your disk quota: mmlsquota
  • Check your disk quota (nicer output): show_quota.py
  • Disk usage in current directory (.): du -h

Worker Framework:
  • Load worker module: module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/)
  • Submit parameter sweep: wsub -batch weather.pbs -data data.csv
  • Submit job array: wsub -t 1-100 -batch test_set.pbs
  • Submit job array with prolog and epilog: wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

"}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

"}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running a different OS than the login node you are on.

For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

$ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

"}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

This includes (per user):

  • max. of 2 CPU cores in use
  • max. 8 GB of memory in use

For more intensive tasks you can use the interactive and debug clusters through the web portal.

"}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

However, there will be impact on the availability of software that is made available via modules.

Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

This includes all software installations on top of a compiler toolchain that is older than:

  • GCC(core)/12.3.0
  • foss/2023a
  • intel/2023a
  • gompi/2023a
  • iimpi/2023a
  • gfbf/2023a

(or another toolchain with a year-based version older than 2023a)

The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will soon provide more RHEL 9 nodes on other clusters to test on.

"}, {"location": "rhel9/#planning", "title": "Planning", "text": "

We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

Migration schedule (cluster: migration period):
  • skitty: Monday 30 September 2024
  • joltik: October 2024
  • accelgor: November 2024
  • gallade: December 2024
  • donphan: February 2025
  • doduo (default cluster): February 2025
  • login nodes switch: February 2025

Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to RHEL 9 login nodes will be done at the same time.

We will keep this page up to date when more specific dates have been planned.

Warning

The planning above is subject to change; some clusters may be migrated later than originally planned.

Please check back regularly.

"}, {"location": "rhel9/#questions", "title": "Questions", "text": "

If you have any questions related to the migration to the RHEL 9 operating system, please contact the UAntwerpen-HPC.

"}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

When you connect to the UAntwerpen-HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decide when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the UAntwerpen-HPC the entire time.

The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

"}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

Software installation and maintenance on a UAntwerpen-HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the UAntwerpen-HPC, which is able to easily activate or deactivate the software packages that you require for your program execution.

"}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

The program environment on the UAntwerpen-HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

All the software packages that are installed on the UAntwerpen-HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

"}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

In order to administer the active software and their environment variables, the module system has been developed, which:

  1. Activates or deactivates software packages and their dependencies.

  2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

  3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

  4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

  5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

This is all managed with the module command, which is explained in the next sections.

There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

"}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

A large number of software packages are installed on the UAntwerpen-HPC clusters. A list of all currently available software can be obtained by typing:

module available\n

It's also possible to execute module av or module avail; these are shorter to type and will do the same thing.

This will give some output such as:

$ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or when you want to check whether some specific software, some compiler or some application (e.g., LAMMPS) is installed on the UAntwerpen-HPC.

$ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the capital letters in the module name, we searched for a case-insensitive match with the \"-i\" option.

This gives a full list of software packages that can be loaded.

The casing of module names is important: lowercase and uppercase letters matter in module names.

"}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, a MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

E.g., foss/2024a is the first version of the foss toolchain in 2024.

The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

"}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

To \"activate\" a software package, you load the corresponding module file using the module load command:

module load example\n

This will load the most recent version of example.

For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographically last version after the /).

However, you should specify a particular version to avoid surprises when newer versions are installed:

module load secondexample/2.7-intel-2016b\n

The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

Modules need not be loaded one by one; the two module load commands can be combined as follows:

module load example/1.2.3 secondexample/2.7-intel-2016b\n

This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

"}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

You can also just use the ml command without arguments to list loaded modules.

It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

"}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

$ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

To unload the secondexample module, you can also use ml -secondexample.

Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

"}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

module purge\n

However, on some VSC clusters you may be left with a very empty list of available modules after executing module purge. On those systems, module av will show you a list of modules containing the name of a cluster or a particular feature of a section of the cluster, and loading the appropriate module will restore the module list applicable to that particular system.

"}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

module load example\n

rather than

module load example/1.2.3\n

Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

Consider the following example modules:

$ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

Let's now generate a version conflict with the example module, and see what happens.

$ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

Note: A module swap command combines the appropriate module unload and module load commands.

"}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

With the module spider command, you can search for modules:

$ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

It's also possible to get detailed information about a specific module:

$ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \nThis module can be loaded directly: module load example/1.2.3\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
"}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

To get a list of all possible commands, type:

module help\n

Or to get more information about one specific module package:

$ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
"}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

In each module command shown below, you can replace module with ml.

First, load all modules you want to include in the collections:

module load example/1.2.3 secondexample/2.7-intel-2016b\n

Now store it in a collection using module save. In this example, the collection is named my-collection.

module save my-collection\n

Later, for example in a jobscript or a new session, you can load all these modules with module restore:

module restore my-collection\n

You can get a list of all your saved collections with the module savelist command:

$ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

To get a list of all modules a collection will load, you can use the module describe command:

$ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

To remove a collection, remove the corresponding file in $HOME/.lmod.d:

rm $HOME/.lmod.d/my-collection\n
"}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

To see how a module would change the environment, you can use the module show command:

$ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets youwork more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

It's also possible to use the ml show command instead: they are equivalent.

Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

"}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

To check how many jobs are running in which queues, you can use the qstat -q command:

$ qstat -q\nQueue            Memory CPU Time Walltime Node  Run Que Lm  State\n---------------- ------ -------- -------- ----  --- --- --  -----\ndefault            --      --       --      --    0   0 --   E R\nq72h               --      --    72:00:00   --    0   0 --   E R\nlong               --      --    72:00:00   --  316  77 --   E R\nshort              --      --    11:59:59   --   21   4 --   E R\nq1h                --      --    01:00:00   --    0   1 --   E R\nq24h               --      --    24:00:00   --    0   0 --   E R\n                                               ----- -----\n                                                337  82\n

Here, there are 316 jobs running on the long queue, and 77 jobs queued. We can also see that the long queue allows a maximum wall time of 72 hours.

"}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

You can also get this information in text form (per cluster separately) with the pbsmon command:

$ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module (see the section on Specifying the cluster on which to run). It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

"}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

As an example, we will run a Perl script, which you will find in the examples subdirectory on the UAntwerpen-HPC. When you received an account on the UAntwerpen-HPC, a subdirectory with examples was automatically generated for you.

Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

cd\ncp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

First go to the directory with the first examples by entering the command:

cd ~/examples/Running-batch-jobs\n

Each time you want to execute a program on the UAntwerpen-HPC you'll need 2 things:

The executable: the program to execute, provided by the end-user, together with its peripheral input files, databases and/or command options.

A batch job script, which will define the computer resource requirements of the program, the required additional software packages and which will start the actual executable. The UAntwerpen-HPC needs to know:

1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

Later on, the UAntwerpen-HPC user will have to define (or adapt) their own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

List and check the contents with:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc20167 609 Sep 11 10:25 fibo.pl\n

In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

  1. The Perl script calculates the first 30 Fibonacci numbers.

  2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

On the command line, you would run this using:

$ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

Remark: Recall that you have now executed the Perl script locally on one of the login nodes of the UAntwerpen-HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

fibo.pbs
#!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

$ qsub fibo.pbs\n433253.leibniz\n

The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"433253.leibniz\"); this is a unique identifier for the job and can be used to monitor and manage your job.

Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

To facilitate this, you can use a pre-defined module collection which you can restore using module restore; see the section on Save and load collections of modules for more information.
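
A minimal sketch of a job script that restores a previously saved module collection (my-collection and my_program are hypothetical names):

#!/bin/bash -l\n#PBS -l walltime=1:00:00\n#PBS -l nodes=1:ppn=1\n# restore the saved module collection instead of listing individual module load commands\nmodule restore my-collection\ncd $PBS_O_WORKDIR\n./my_program\n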

Your job is now waiting in the queue for a free workernode to start on.

Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

After your job was started, and ended, check the contents of the directory:

$ ls -l\ntotal 768\n-rw-r--r-- 1 vsc20167 vsc20167   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc20167 vsc20167    0 Feb 28 13:33 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 vsc20167 1010 Feb 28 13:33 fibo.pbs.o433253.leibniz\n-rwxrwxr-x 1 vsc20167 vsc20167  302 Feb 28 13:32 fibo.pl\n

Explore the contents of the 2 new files:

$ more fibo.pbs.o433253.leibniz\n$ more fibo.pbs.e433253.leibniz\n

These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('433253.leibniz' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script)

"}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

It is possible to submit jobs from within a job to a cluster different from the one your job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

To submit jobs to the {{ othercluster }} cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/{{ othercluster }} instead of using module swap cluster/{{ othercluster }}. The last command also activates the software modules that are installed specifically for {{ othercluster }}, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the {{ othercluster }} cluster. The same approach can be used to submit jobs to another cluster, of course.

Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the {{ defaultcluster }} cluster, loading the cluster/{{ defaultcluster }} module corresponds to loading 3 different env/ modules:

For {{ defaultcluster }}, these env/ modules and their purpose are:
  • env/slurm/{{ defaultcluster }}: changes $SLURM_CLUSTERS, which specifies the cluster where jobs are sent to
  • env/software/{{ defaultcluster }}: changes $MODULEPATH, which controls what software modules are available for loading
  • env/vsc/{{ defaultcluster }}: changes the set of $VSC_ environment variables that are specific to the {{ defaultcluster }} cluster

We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they are doing, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

We also recommend using a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
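
A minimal sketch, assuming you want to send a hypothetical job script job.sh to the donphan cluster:

# send the job to donphan without activating its software stack\nmodule swap env/slurm/donphan\nqsub job.sh\n# afterwards, reset your environment by swapping back to the cluster module you normally use,\n# e.g. module swap cluster/doduo\n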

"}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

qstat 12345\n

To show an estimated start time for your job (note that this may be very inaccurate; the margin of error on this figure can be bigger than 100%, since it is based on a sample in a population of 1). This command is not available on all systems.

::: prompt :::

This is only a very rough estimate. Jobs may launch sooner than estimated if other jobs end faster than estimated, but may also be delayed if other higher-priority jobs enter the system.
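
On systems where it is available, the showstart command listed in the HPC Quick Reference Guide gives such an estimate, e.g.:

showstart 12345\n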

To show the status, but also the resources required by the job, with error messages that may prevent your job from starting:

::: prompt :::
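
Where available, the checkjob command from the HPC Quick Reference Guide provides this information, e.g.:

checkjob 12345\n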

To show on which compute nodes your job is running, at least, when it is running:

qstat -n 12345\n

To remove a job from the queue so that it will not run, or to stop a job that is already running.

qdel 12345\n

When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

$ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n433253.leibniz ....     mpi  vsc20167     0    Q short\n

Here:

Job ID the job's unique identifier

Name the name of the job

User the user that owns the job

Time Use the elapsed walltime for the job

Queue the queue the job is in

The state S can be any of the following:

  • Q: the job is queued and is waiting to start
  • R: the job is currently running
  • E: the job is currently exiting after having run
  • C: the job is completed after having run
  • H: the job has a user or system hold on it and will not be eligible to run until the hold is removed

User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.

"}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

As we learned above, Moab is the software application that actually decides when to run your job and what resources your job will run on.

You can look at the queue by using the PBS command or the Moab command. By default, will display the queue ordered by , whereas will display jobs grouped by their state (\"running\", \"idle\", or \"hold\") and then ordered by priority. Therefore, is often more useful. Note however that at some VSC sites, these commands show only your jobs or may even be disabled so as not to reveal what other users are doing.

The command displays information about active (\"running\"), eligible (\"idle\"), blocked (\"hold\"), and/or recently completed jobs. To get a summary:

::: prompt active jobs: 163 eligible jobs: 133 blocked jobs: 243 Total jobs: 539 :::

And to get the full detail of all the jobs, which are in the system:

::: prompt active jobs------------------------ JOBID USERNAME STATE PROCS REMAINING STARTTIME 428024 vsc20167 Running 8 2:57:32 Mon Sep 2 14:55:05 153 active jobs 1307 of 3360 processors in use by local jobs (38.90 153 of 168 nodes active (91.07

eligible jobs---------------------- JOBID USERNAME STATE PROCS WCLIMIT QUEUETIME 442604 vsc20167 Idle 48 7:00:00:00 Sun Sep 22 16:39:13 442605 vsc20167 Idle 48 7:00:00:00 Sun Sep 22 16:46:22

135 eligible jobs

blocked jobs----------------------- JOBID USERNAME STATE PROCS WCLIMIT QUEUETIME 441237 vsc20167 Idle 8 3:00:00:00 Thu Sep 19 15:53:10 442536 vsc20167 UserHold 40 3:00:00:00 Sun Sep 22 00:14:22 252 blocked jobs Total jobs: 540 :::

There are 3 categories: active, eligible, and blocked jobs.

Active jobs

are jobs that are running or starting and that consume computer resources. The amount of time remaining (w.r.t.\u00a0walltime, sorted to earliest completion time) and the start time are displayed. This will give you an idea about the foreseen completion time. These jobs could be in a number of states:

Started

attempting to start, performing pre-start tasks

Running

currently executing the user application

Suspended

has been suspended by scheduler or admin (still in place on the allocated resources, not executing)

Cancelling

has been cancelled, in process of cleaning up

Eligible jobs

are jobs that are waiting in the queues and are considered eligible for both scheduling and backfilling. They are all in the idle job state and do not violate any fairness policies or do not have any job holds in place. The requested walltime is displayed, and the list is ordered by job priority.

Blocked jobs

are jobs that are ineligible to be run or queued. These jobs could be in a number of states for the following reasons:

Idle

when the job violates a fairness policy

Userhold

or Systemhold, when there is a user or administrative hold on the job, respectively

Batchhold

when the requested resources are not available or the resource manager has repeatedly failed to start the job

Deferred

a temporary hold applied when the job has been unable to start after a specified number of attempts

Notqueued

when the scheduling daemon is unavailable

"}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

"}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

qsub -l walltime=2:30:00 ...\n

For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
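
A minimal sketch of this pattern, using the standard timeout command to stop a hypothetical executable well before the requested walltime so the results can still be copied back (paths and durations are only illustrative):

#!/bin/bash\n#PBS -l walltime=2:00:00\ncd $PBS_O_WORKDIR\n# run the main program for at most 1 hour 50 minutes, leaving time to copy results\ntimeout 110m ./my_simulation\n# copy the (hypothetical) results directory to a safe location before the walltime expires\ncp -r results $VSC_DATA/results_backup\n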

qsub -l mem=4gb ...\n

The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

qsub -l nodes=5:ppn=2 ...\n

The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

qsub -l nodes=1:westmere\n

The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

These options can either be specified on the command line, e.g.

qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

#!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

Note that the resources requested on the command line will override those specified in the PBS file.
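
For example, submitting the fibo.pbs script shown above as follows would override the walltime set inside the script:

qsub -l walltime=4:00:00 fibo.pbs\n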

"}, {"location": "running_batch_jobs/#node-specific-properties", "title": "Node-specific properties", "text": "

The following table contains some node-specific properties that can be used to make sure the job will run on nodes with a specific CPU or interconnect. Note that these properties may vary over the different VSC sites.

  • ivybridge: only use Intel processors from the Ivy Bridge family (26xx-v2, hopper-only)
  • broadwell: only use Intel processors from the Broadwell family (26xx-v4, leibniz-only)
  • mem128: only use nodes with 128 GB of RAM (leibniz)
  • mem256: only use nodes with 256 GB of RAM (hopper and leibniz)
  • tesla, gpu: only use nodes with the NVIDIA P100 GPU (leibniz)

Since both hopper and leibniz are homogeneous with respect to processor architecture, the CPU architecture properties are not really needed and are only defined for compatibility with other VSC clusters.

  • shanghai: only use AMD Shanghai processors (AMD 2378)
  • magnycours: only use AMD Magnycours processors (AMD 6134)
  • interlagos: only use AMD Interlagos processors (AMD 6272)
  • barcelona: only use AMD Shanghai and Magnycours processors
  • amd: only use AMD processors
  • ivybridge: only use Intel Ivy Bridge processors (E5-2680-v2)
  • intel: only use Intel processors
  • gpgpu: only use nodes with General Purpose GPUs (GPGPUs)
  • k20x: only use nodes with NVIDIA Tesla K20x GPGPUs
  • xeonphi: only use nodes with Xeon Phi co-processors
  • phi5110p: only use nodes with Xeon Phi 5110P co-processors

To get a list of all properties defined for all nodes, enter

::: prompt :::

This list will also contain properties referring to, e.g., network components, rack number, etc.

"}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

When you navigate to that directory and list its contents, you should see them:

$ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc20167  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc20167   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc20167   52 Sep 11 11:03 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 1307 Sep 11 11:03 fibo.pbs.o433253.leibniz\n

In our case, our job has created both an output file ('fibo.pbs.o433253.leibniz') and an error file ('fibo.pbs.e433253.leibniz'), containing the info written to stdout and stderr respectively.

Inspect the generated output and error files:

$ cat fibo.pbs.o433253.leibniz\n...\n$ cat fibo.pbs.e433253.leibniz\n...\n
"}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#upon-job-failure", "title": "Upon job failure", "text": "

Whenever a job fails, an e-mail will be sent to the e-mail address that's connected to your VSC account. This is the e-mail address that is linked to the university account, which was used during the registration process.

You can force a job to fail by specifying an unrealistic wall-time for the previous example. Let's give the \"fibo.pbs\" job just one second to complete:

::: prompt :::
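
A minimal sketch of such a submission (the exact command may differ on your system):

qsub -l walltime=00:00:01 fibo.pbs\n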

Now, let's hope that the job could not be completed within one second, so that you will get an e-mail informing you about this error.

::: flattext PBS Job Id: Job Name: fibo.pbs Exec host: Aborted by PBS Server Job exceeded some resource limit (walltime, mem, etc.). Job was aborted. See Administrator for help :::

"}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

You can instruct the UAntwerpen-HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

#PBS -m b \n#PBS -m e \n#PBS -m a\n

or

#PBS -m abe\n

These options can also be specified on the command line. Try it and see what happens:

qsub -m abe fibo.pbs\n

The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

qsub -m b -M john.smith@example.com fibo.pbs\n

will send an e-mail to john.smith@example.com when the job begins.

"}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

If you submit two jobs expecting that they will run one after another (for example because the first generates a file the second needs), there might be a problem as they might both be run at the same time.

So the following example might go wrong:

$ qsub job1.sh\n$ qsub job2.sh\n

You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

$ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

afterok means \"After OK\", or in other words, after the first job successfully completed.

It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
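
For instance, a sketch that always runs a (hypothetical) cleanup job after the first job, whether it succeeded or not:

FIRST_ID=$(qsub job1.sh)\nqsub -W depend=afterany:$FIRST_ID cleanup.sh\n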

  1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

"}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line.

Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the UAntwerpen-HPC. Waiting for user input takes a very long time in the life of a CPU and does not make efficient use of the computing resources.

The syntax for qsub for submitting an interactive PBS job is:

$ qsub -I <... pbs directives ...>\n
"}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

Tip

Find the code in \"~/examples/Running_interactive_jobs\"

First of all, in order to know on which computer you're working, enter:

$ hostname -f\nln2.leibniz.uantwerpen.vsc\n

This means that you're now working on the login node ln2.leibniz.uantwerpen.vsc of the cluster.

The most basic way to start an interactive job is the following:

$ qsub -I\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n

There are two things of note here.

  1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

  2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

In order to know on which compute-node you're working, enter again:

$ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n

Note that we are now working on the compute-node called \"r1c02cn3.leibniz.antwerpen.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

This computer name looks strange, but bears some logic in it. It provides the system administrators with information where to find the computer in the computer room.

The computer \"r1c02cn3\" stands for:

  1. \"r5\" is rack #5.

  2. \"c3\" is enclosure/chassis #3.

  3. \"cn08\" is compute node #08.

With this naming convention, the system administrator can easily find the physical computers when they need to execute some maintenance activities.

Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

$ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

You can exit the interactive session with:

$ exit\n

Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

You can work for 3 hours by:

qsub -I -l walltime=03:00:00\n

If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So make sure to request adequate walltime, and save your data before your walltime is exceeded! When you do not specify a walltime, you get the default walltime of 1 hour.
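
You can also combine the walltime with other resource requests. For example (a sketch), to get an interactive session with 4 cores on a single node for 2 hours:

qsub -I -l walltime=02:00:00 -l nodes=1:ppn=4\n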

"}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

"}, {"location": "running_interactive_jobs/#install-xming", "title": "Install Xming", "text": "

The first task is to install the Xming software.

  1. Download the Xming installer from the following address: http://www.straightrunning.com/XmingNotes/. Either download Xming from the Public Domain Releases (free) or from the Website Releases (after a donation) on the website.

  2. Run the Xming setup program on your Windows desktop.

  3. Keep the proposed default folders for the Xming installation.

  4. When selecting the components that need to be installed, make sure to select \"XLaunch wizard\" and \"Normal PuTTY Link SSH client\".

  5. We suggest creating a Desktop icon for Xming and XLaunch.

  6. Finally, click Install.

And now we can run Xming:

  1. Select XLaunch from the Start Menu or by double-clicking the Desktop icon.

  2. Select Multiple Windows. This will open each application in a separate window.

  3. Select Start no client to make XLaunch wait for other programs (such as PuTTY).

  4. Select Clipboard to share the clipboard.

  5. Finally Save configuration into a file. You can keep the default filename and save it in your Xming installation directory.

  6. Now Xming is running in the background ... and you can launch a graphical application in your PuTTY terminal.

  7. Open a PuTTY terminal and connect to the HPC.

  8. In order to test the X-server, run \"xclock\". \"xclock\" is the standard GUI clock for the X Window System.

xclock\n

You should see the X Window clock application appear on your Windows machine. The \"xclock\" application runs on the login node of the UAntwerpen-HPC, but is displayed on your Windows machine.

You can close your clock and connect further to a compute node, again with X forwarding enabled:

$ qsub -I -X\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n$ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n$ xclock\n

and you should see your clock again.

"}, {"location": "running_interactive_jobs/#ssh-tunnel", "title": "SSH Tunnel", "text": "

In order to work in client/server mode, it is often required to establish an SSH tunnel between your Windows desktop machine and the compute node your job is running on. PuTTY must have been installed on your computer, and you should be able to connect via SSH to the HPC cluster's login node.

Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunnelling.

There are several cases where this is useful:

  1. Running graphical applications on the cluster: The graphical program cannot directly communicate with the X Window server on your local system. In this case, the tunnelling is easy to set up as PuTTY will do it for you if you select the right options on the X11 settings page as explained on the page about text-mode access using PuTTY.

  2. Running a server application on the cluster that a client on the desktop connects to. One example of this scenario is ParaView in remote visualisation mode, with the interactive client on the desktop and the data processing and image rendering on the cluster. This scenario is explained on this page.

  3. Running clients on the cluster and a server on your desktop. In this case, the source port is a port on the cluster and the destination port is on the desktop.

Procedure: A tunnel from a local client to a specific computer node on the cluster

  1. Log in on the login node via PuTTY.

  2. Start the server job, and note the name of the compute node your job is running on (e.g., r1c02cn3.leibniz.antwerpen.vsc), as well as the port the server is listening on (e.g., \"54321\").

  3. Set up the tunnel:

    1. Close your current PuTTY session.

    2. In the \"Category\" pane, expand Connection > SSH, and select the tunnel settings as shown below:

    3. In the Source port field, enter the local port to use (e.g., 5555).

    4. In the Destination field, enter the compute node name and port, separated by a colon (e.g., r1c02cn3.leibniz.antwerpen.vsc:54321 as in the example above; these are the details you noted in the second step).

    5. Click the Add button.

    6. Click the Open button.

    7. The tunnel is now ready to use.
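
On a Linux or macOS client, a comparable tunnel can also be set up with a single ssh command instead of PuTTY (a sketch; replace the login node address, compute node name and ports with your own values):

ssh -L 5555:r1c02cn3.leibniz.antwerpen.vsc:54321 vsc20167@<login node address>\n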

      "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

      We have developed a little interactive program that shows the communication in 2 directions. It will send information to your local screen, but also asks you to click a button.

      Now run the message program:

      cd ~/examples/Running_interactive_jobs\n./message.py\n

      You should see the following message appearing.

      Click any button and see what happens.

      -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
      "}, {"location": "running_interactive_jobs/#run-your-interactive-application", "title": "Run your interactive application", "text": "

      In this last example, we will show that you can work on this compute node just as if you were working locally on your desktop. We will run the Fibonacci example of the previous chapter again, but now in full interactive mode in MATLAB.

      Start the MATLAB interactive environment, and run the fibo2.m program in the MATLAB command window. You will see the calculations being displayed, as well as a nice \"plot\" appearing.

      You can keep working in this MATLAB GUI, and finally terminate the application from the command window again.
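
      As a rough sketch of these steps (the MATLAB module name and version are placeholders; check which versions are installed on the cluster):

      module avail MATLAB            # list the installed MATLAB versions\nmodule load MATLAB/<version>   # load one of the listed versions\nmatlab                         # start the MATLAB GUI (requires X forwarding)\n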

      "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

      You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where your standard output and error messages are written, and where you can collect your results.

      "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

      First go to the directory:

      cd ~/examples/Running_jobs_with_input_output_data\n

      Note

      If the example directory is not yet present, copy it to your home directory:

      cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

      List and check the contents with:

      $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc20167   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc20167   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file3.py\n

      Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

      file1.py
      #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

      The code of the Python script is self-explanatory:

      1. In step 1, we write something to the file Hello.txt in the current directory.

      2. In step 2, we write some text to stdout.

      3. In step 3, we write to stderr.

      Check the contents of the first job script:

      file1a.pbs
      #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

      You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

      Submit it:

      qsub file1a.pbs\n

      After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

      $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc20167   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc20167  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc20167  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc20167   91 Sep 13 13:13 file1a.pbs.e433253.leibniz\n-rw------- 1 vsc20167  105 Sep 13 13:13 file1a.pbs.o433253.leibniz\n-rw-rw-r-- 1 vsc20167  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc20167  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file3.py*\n

      Some observations:

      1. The file Hello.txt was created in the current directory.

      2. The file file1a.pbs.o433253.leibniz contains all the text that was written to the standard output stream (\"stdout\").

      3. The file file1a.pbs.e433253.leibniz contains all the text that was written to the standard error stream (\"stderr\").

      Inspect their contents ... and remove the files:

      $ cat Hello.txt\n$ cat file1a.pbs.o433253.leibniz\n$ cat file1a.pbs.e433253.leibniz\n$ rm Hello.txt file1a.pbs.o433253.leibniz file1a.pbs.e433253.leibniz\n

      Tip

      Type cat H and press the Tab key, and it will expand into cat Hello.txt.

      "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

      Check the contents of the job script and execute it.

      file1b.pbs
      #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

      Inspect the contents again ...\u00a0and remove the generated files:

      $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e433253.leibniz\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o433253.leibniz\n$ rm Hello.txt my_serial_job.*\n

      Here, the option \"-N\" was used to explicitly assign a name to the job. This overrides the default job name and results in different names for the stdout and stderr files. The job name is also shown in the second column of the output of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.

      "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

      You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

      file1c.pbs
      #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
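
      If you prefer a single output file instead of separate stdout and stderr files, PBS can also merge the two streams with the -j directive (a minimal sketch, not one of the provided example files):

      #!/bin/bash\n\n# merge stderr into stdout (-j oe) and write everything to one file\n#PBS -o combined.$PBS_JOBID\n#PBS -j oe\n\ncd $PBS_O_WORKDIR\n./file1.py\n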
      "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

      The UAntwerpen-HPC cluster offers its users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

      "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

      Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

      The following locations are available:

      Long-term storage (slow filesystem, intended for smaller files):

      • $VSC_HOME: For your configuration files and other small files, see the section on your home directory. The default directory is user/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites.

      • $VSC_DATA: A bigger \"workspace\", for datasets, results, logfiles, etc.; see the section on your data directory. The default directory is data/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites.

      Fast temporary storage:

      • $VSC_SCRATCH_NODE: For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content.

      • $VSC_SCRATCH: For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Antwerpen/xxx/vsc20167. This directory is cluster- or site-specific: on different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content.

      • $VSC_SCRATCH_SITE: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space.

      • $VSC_SCRATCH_GLOBAL: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space.

      Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.

      We elaborate more on the specific function of these locations in the following sections.
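
      For example, a job script can stage data through these environment variables instead of using hard-coded paths (a minimal sketch; the file and program names are placeholders):

      #!/bin/bash\n#PBS -l walltime=01:00:00\n\ncd $VSC_SCRATCH                       # work in fast scratch space\ncp $VSC_DATA/input.dat .              # stage the input data\n./my_program input.dat > result.out   # run the placeholder program\ncp result.out $VSC_DATA/              # copy the results back to long-term storage\n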

      "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

      Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

      The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

      The operating system also creates a few files and folders here to manage your account. Examples are:

      • .ssh/ : This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!

      • .bash_profile : When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt.

      • .bashrc : This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts.

      • .bash_history : This file contains the commands you typed at your shell prompt, in case you need them again.

      Furthermore, we have initially created some files/directories there (tutorial, docs, examples, examples.pbs) that accompany this manual and allow you to easily execute the provided examples.

      "}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

      In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

      The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

      "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

      To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

      You should remove any data from these systems once your processing has finished. There are no guarantees about how long your data will be stored on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

      Each type of scratch has its own use:

      Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

      Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes, and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

      At the time of writing, the cluster scratch space is shared between both clusters at the University of Antwerp. This may change again in the future when storage gets updated.

      Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

      Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

      "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

      Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files as well as the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. The user will get warnings as soon as he exceeds the soft quota.

      The amount of data (called \"Block Limits\") that is currently in use by the user (\"KB\"), the soft limits (\"quota\") and the hard limits (\"limit\") for all 3 file-systems are always displayed when a user connects to the cluster.

      With regards to the file limits, the number of files in use (\"files\"), its soft limit (\"quota\") and its hard limit (\"limit\") for the 3 file-systems are also displayed.

      ----------------------------------------------------------\nYour quota is:\n\n                   Block Limits\n   Filesystem        KB      quota      limit    grace\n   home          177920    3145728    3461120     none\n   data        17707776   26214400   28835840     none\n   scratch       371520   26214400   28835840     none\n\n                    File Limits\n   Filesystem     files      quota      limit    grace\n   home             671      20000      25000     none\n   data          103079     100000     150000  expired\n   scratch         2214     100000     150000     none\n----------------------------------------------------------\n

      Make sure to regularly check these numbers at log-in!

      The rules are:

      1. You will only receive a warning when you have reached the soft limit of either quota.

      2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

      3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

      We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. They also help to guarantee a fair use of all available resources for all users, and to ensure that each folder is used for its intended purpose.

      "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

      Tip

      Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

      In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

      1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

      2. repeat this action 30,000 times;

      3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

      Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the UAntwerpen-HPC.

      $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
      "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

      Tip

      Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

      In this exercise, you will

      1. Generate the file \"primes_1.txt\" again as in the previous exercise;

      2. open the file;

      3. read it line by line;

      4. calculate the average of primes in the line;

      5. count the number of primes found per line;

      6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

      Check the Python and the PBS file, and submit the job:

      $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
      "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

      The available disk space on the UAntwerpen-HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website. (https://vscdocumentation.readthedocs.io/en/latest/hardware.html) As explained in the section on predefined quota, this implies that there are also limits to:

      • the amount of disk space; and

      • the number of files

      that can be made available to each individual UAntwerpen-HPC user.

      The quota of disk space and number of files for each UAntwerpen-HPC user is:

      • HOME: 3 GB, 20000 files

      • DATA: 25 GB, 100000 files

      • SCRATCH: 25 GB, 100000 files

      Tip

      The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.


      "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

      The \"show_quota\" command has been developed to show you the status of your quota in a readable format:

      $ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

      or on the UAntwerp clusters

      $ module load scripts\n$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

      With this command, you can easily follow up on the consumption of your total disk quota, as it is expressed in percentages. Depending on which cluster you are running the script, it may not be able to show the quota on all your folders. E.g., when running on the tier-1 system Muk, the script will not be able to show the quota on $VSC_HOME or $VSC_DATA if your account is a KU Leuven, UAntwerpen or VUB account.

      Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

      $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

      This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

      If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

      $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

      If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

      $ du -s\n5632 .\n$ du -s -h\n

      If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

      $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

      Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

      $ du -h --max-depth 1 $VSC_HOME\n22M /user/antwerpen/201/vsc20167/dataset01\n36M /user/antwerpen/201/vsc20167/dataset02\n22M /user/antwerpen/201/vsc20167/dataset03\n3.5M /user/antwerpen/201/vsc20167/primes.txt\n24M /user/antwerpen/201/vsc20167/.cache\n

      We also want to mention the tree command, as it provides an easy way to see which files consume your available quota. Tree is a recursive directory-listing program that produces a depth-indented listing of files.

      Try:

      $ tree -s -d\n

      However, we urge you to only use the du and tree commands when you really need them as they can put a heavy strain on the file system and thus slow down file operations on the cluster for all other users.

      "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

      Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

      Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

      To change the group of a directory and its underlying directories and files, you can use:

      chgrp -R groupname directory\n
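
      To check which groups your account belongs to, and to give a group access to a directory, something like the following can be used (a sketch; the group and directory names are placeholders):

      groups                                     # list the groups your account belongs to\nchgrp -R example_group $VSC_DATA/project   # make example_group the group owner\nchmod -R g+rX $VSC_DATA/project            # give the group read and traversal access\n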
      "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
      1. Get the group name you want to belong to.

      2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

      3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

      "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
      1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

      2. Fill out the group name. This cannot contain spaces.

      3. Put a description of your group in the \"Info\" field.

      4. You will now be a member and moderator of your newly created group.

      "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

      Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

      "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

      You can get details about the current state of groups on the HPC infrastructure with the following command (here, example is the name of the group we want to inspect):

      $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

      We can see that the group ID is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

      "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

      A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

      "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

      This section will explain how to create, activate, use and deactivate Python virtual environments.

      "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

      A Python virtual environment can be created with the following command:

      python -m venv myenv      # Create a new virtual environment named 'myenv'\n

      This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

      Warning

      When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

      "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

      To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

      source myenv/bin/activate                    # Activate the virtual environment\n
      "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

      After activating the virtual environment, you can install additional Python packages with pip install:

      pip install example_package1\npip install example_package2\n

      These packages will be scoped to the virtual environment and will not affect the system-wide Python installation, and are only available when the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

      It is now possible to run Python scripts that use the installed packages in the virtual environment.
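
      To make the environment easier to reproduce, you can record and later reinstall the exact package versions with pip (a minimal sketch):

      pip freeze > requirements.txt     # record the installed package versions\npip install -r requirements.txt   # reinstall them later, e.g. in a fresh virtual environment\n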

      Tip

      When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

      Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

      To check if a package is available as a module, use:

      module av package_name\n

      Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

      module show module_name\n

      to check which extensions are included in a module (if any).
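
      For example (a sketch; the exact version names differ per cluster), to check where numpy comes from:

      module av SciPy-bundle               # list the available SciPy-bundle versions\nmodule show SciPy-bundle/<version>   # the listed extensions include numpy, scipy and pandas\n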

      "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

      Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

      example.py
      import example_package1\nimport example_package2\n...\n
      python example.py\n
      "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

      When you are done using the virtual environment, you can deactivate it. To do that, run:

      deactivate\n
      "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

      You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

      pytorch_poutyne.py
      import torch\nimport poutyne\n\n...\n

      We load a PyTorch package as a module and install Poutyne in a virtual environment:

      module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

      While the virtual environment is activated, we can run the script without any issues:

      python pytorch_poutyne.py\n

      Deactivate the virtual environment when you are done:

      deactivate\n
      "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

      To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

      module swap cluster/donphan\nqsub -I\n

      After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

      Naming a virtual environment

      When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

      python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
      "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

      This section will combine the concepts discussed in the previous sections to:

      1. Create a virtual environment on a specific cluster.
      2. Combine packages installed in the virtual environment with modules.
      3. Submit a job script that uses the virtual environment.

      The example script that we will run is the following:

      pytorch_poutyne.py
      import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

      First, we create a virtual environment on the donphan cluster:

      module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

      Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

      jobscript.pbs
      #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

      Next, we submit the job script:

      qsub jobscript.pbs\n

      Two files will be created in the directory where the job was submitted: python_job_example.o433253.leibniz and python_job_example.e433253.leibniz, where 433253.leibniz is the id of your job. The .o file contains the output of the job.

      "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

      Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

      For example, if we create a virtual environment on the skitty cluster,

      module swap cluster/skitty\nqsub -I\npython -m venv myenv\n

      return to the login node by pressing CTRL+D and try to use the virtual environment:

      $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

      we are presented with the illegal instruction error.

      "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

      When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

      python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

      Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.

      "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

      There are two main reasons why this error could occur.

      1. You have not loaded the Python module that was used to create the virtual environment.
      2. You loaded or unloaded modules while the virtual environment was activated.
      "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

      If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

      The following commands illustrate this issue:

      $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

      Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

      module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
      "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

      You must not load or unload modules while in a virtual environment. Loading and unloading modules modifies the $PATH variable in the current shell. When you activate a virtual environment, it stores the $PATH variable of the shell at that moment. If you then modify $PATH by loading or unloading modules and deactivate the virtual environment, $PATH will be reset to the value that was stored when the environment was activated. Trying to use those modules afterwards will lead to errors:

      $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

      The solution is to only modify modules when not in a virtual environment.
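
      A safe order of operations is therefore to set up all modules first and only then enter the virtual environment (a minimal sketch; the script name is a placeholder):

      module purge                               # adjust modules while outside the venv\nmodule load Python/3.10.8-GCCcore-12.2.0   # load the module used to create the venv\nsource myenv/bin/activate                  # now activate the virtual environment\npython my_script.py\ndeactivate                                 # deactivate before changing modules again\n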

      "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

      Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

      One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

      For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

      This documentation only covers aspects of using Singularity on the infrastructure.

      "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

      Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to avoid that the use of Singularity impacts other users on the system.

      The Singularity image file must be located on one of the scratch filesystems, on the local disk of the worker node you are using, or in /dev/shm. The centrally provided singularity command will refuse to run images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

      In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

      If these limitations are a problem for you, please let us know.
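
      For illustration, an image stored elsewhere can first be copied to an allowed location and then executed from there (a minimal sketch; the image name and location are placeholders):

      cp $VSC_DATA/containers/myimage.sif $VSC_SCRATCH/               # stage the image on scratch\nsingularity exec $VSC_SCRATCH/myimage.sif cat /etc/os-release   # run a command inside the container\n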

      "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

      All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

      "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

      Creating new Singularity images or converting Docker images requires, by default, admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can make new Singularity images or convert Docker images.

      When you create Singularity images or convert Docker images, some restrictions apply:

      • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination (see the sketch below).
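
      A minimal sketch of such a build (the definition file and image names are placeholders):

      cd /tmp                                                     # build in a globally writable location\nsingularity build --fakeroot myimage.sif mydefinition.def   # build the image with --fakeroot\nmv myimage.sif $VSC_SCRATCH/                                # move it to an allowed filesystem\n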
      "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

      For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

      We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

      "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

      Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH.

      Create a job script like:

      Create an example myscript.sh:

      #!/bin/bash\n\n# prime factors\nfactor 1234567\n
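
      The job script referred to above is not reproduced here; as a minimal sketch (the walltime, image location and image name are assumptions), it could look like this:

      #!/bin/bash\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\nsingularity exec $VSC_SCRATCH/myimage.sif ./myscript.sh\n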

      "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

      We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

      Copy the testing image from /apps/gent/tutorials to $VSC_SCRATCH.

      You can download linear_regression.py from the official Tensorflow repository.

      "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

      It is also possible to execute MPI jobs within a container, but the following requirements apply:

      • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

      • Use modules within the container (install the environment-modules or lmod package in your container)

      • Load the required module(s) before singularity execution.

      • Set the C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

      Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH.

      For example, you can compile an MPI example inside the container.

      Example MPI job script:
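
      The original example script is not included here; a minimal sketch (the resource requests, module, image and binary names are assumptions) could look like this:

      #!/bin/bash\n#PBS -l nodes=2:ppn=4\n#PBS -l walltime=00:10:00\n\nmodule load foss/2023a                                           # example: an MPI module matching your container\ncd $PBS_O_WORKDIR\nmpirun singularity exec $VSC_SCRATCH/myimage.sif ./mpi_example\n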

      "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

      The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

      As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

      In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

      In order to prepare things, make a teaching request by contacting the UAntwerpen-HPC with the following information (explained further below):

      • Title and nickname
      • Start and end date for your course or training
      • VSC-ids of all teachers/trainers
      • Participants based on UGent Course Code and/or list of VSC-ids
      • Optional information
        • Additional storage requirements
          • Shared folder
          • Groups folder for collaboration
          • Quota
        • Reservation for resource requirements beyond the interactive cluster
        • Ticket number for specific software needed for your course/training
        • Details for a custom Interactive Application in the webportal

      In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

      Please make these requests well in advance, several weeks before the start of your course/workshop.

      "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

      The title of the course or training can be used in e.g. reporting.

      The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

      When choosing the nickname, try to make it unique, but this is neither enforced nor checked.

      "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

      The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

      The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

      • Course group and subgroups will be deactivated
      • Residual data in the course directories will be archived or deleted
      • Custom Interactive Applications will be disabled
      "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

      A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also member of this group).

      This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

      Provide us with a list of all the VSC-ids of the teachers or trainers, so we can identify the moderators.

      "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

      The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

      "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

      Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

      The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

      Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

      A course group will be automatically created for your course, with all VSC accounts of registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

      "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

      (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as members. Teachers/trainers will be able to add/remove VSC accounts from this course group, but students will have to request a VSC account themselves by following the standard procedure; there is no automation.

      "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

      For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

      This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

      Every course directory will always contain the following folders (a short usage sketch is given after this list):

      • input
        • ideally suited to distribute input data such as common datasets
        • moderators have read/write access
        • group members (students) only have read access
      • members
        • this directory contains a personal folder for every student in your course (members/vsc<01234>)
        • only this specific VSC-id will have read/write access to this folder
        • moderators have read access to this folder
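
      As an illustration, here is a minimal sketch of how this layout is typically used, based on the example course directory /data/gent/courses/2023/cae_e071400 above; the dataset path is hypothetical, and vsc20167 stands for a student's VSC-id:

      # as a moderator: distribute a common dataset via the (read-only for students) input folder\ncp -r ~/datasets/common_dataset /data/gent/courses/2023/cae_e071400/input/\n\n# as the student with VSC-id vsc20167: work in your personal course folder\ncd /data/gent/courses/2023/cae_e071400/members/vsc20167\n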
      "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

      Optionally, we can also create these folders:

      • shared
        • this is a folder for sharing files between any and all group members
        • all group members and moderators have read/write access
        • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
      • groups
        • a number of groups/group_<01> folders are created under the groups folder
        • these folders are suitable if you want to let your students collaborate closely in smaller groups
        • each of these group_<01> folders are owned by a dedicated group
        • teachers are automatically made moderators of these dedicated groups
        • moderators can populate these groups with the VSC-ids of group members in the VSC accountpage, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
        • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

      If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

      • shared: yes
      • subgroups: <number of (sub)groups>
      "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

      There are 4 quota settings that you can choose in your teaching request in the case the defaults are not sufficient:

      • overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
      • member quota (default: 5 GB volume and 10k files) is per student/participant

      Course data usage is not counted towards any other quota (like VO quota); it depends solely on these settings.

      "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

      The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only for the moderators (possibly in the form of an archive zipfile). One year after the end date it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

      "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

      We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

      Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

      Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

      Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

      "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

      In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

      We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

      Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

      "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

      HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

      A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

      If you would like this for your course, provide more details in your teaching request, including:

      • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

      • which cluster you want to use

      • how many nodes/cores/GPUs are needed

      • which software modules you are loading

      • custom code you are launching (e.g. autostart a GUI)

      • required environment variables that you are setting

      • ...

      We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

      A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

      "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

      Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore, and since 2021 the UAntwerpen-HPC no longer uses Torque in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers would not have to learn other commands to submit and manage jobs.

      "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

      Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

      "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

      Jobcli is a Python library that was developed by UAntwerpen-HPC to make it possible for the UAntwerpen-HPC to combine a Torque frontend with a Slurm backend. It also adds some extra options to the Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

      "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

      Adding --help to a Torque command when using it on the UAntwerpen-HPC will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

      For example:

      $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

      "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

      Adding --dryrun to a Torque command when using it on the UAntwerpen-HPC will show which Slurm command jobcli generates for that Torque command. Using --dryrun will not actually execute the Slurm backend command.

      See also the examples below.

      "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

      Similarly to --dryrun, adding --debug to a Torque command when using it on the UAntwerpen-HPC will show which Slurm command jobcli generates for that Torque command. However, in contrast to --dryrun, using --debug will actually run the Slurm backend command.

      See also the examples below.

      "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

      The following examples illustrate the working of the --dryrun and --debug options with an example jobscript.

      example.sh:

      #/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
      "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

      Running the following command:

      $ qsub --dryrun example.sh -N example\n

      will generate this output:

      Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc20167/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
      This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque options into Slurm options. For example, the job name is the one we specified with the -N option in the command.

      With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related structures, like $PBS_JOBID, they are retained. Slurm is configured on the UAntwerpen-HPC such that common PBS_* environment variables are defined in the job environment, next to their Slurm equivalents.

      "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

      Similarly to the --dryrun example, we start by running the following command:

      $ qsub --debug example.sh -N example\n

      which generates this output:

      DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
      The output once again consists of the translated Slurm options, some additional debug information, and the job ID of the job that was submitted.

      "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

      Below is a list of the most common and useful directives.

      Option System type Description -k All Send \"stdout\" and/or \"stderr\" to your home directory when the job runs #PBS -k o or #PBS -k e or #PBS -koe -l All Precedes a resource request, e.g., processors, wallclock -M All Send e-mail messages to an alternative e-mail address #PBS -M me@mymail.be -m All Send an e-mail when a job begins execution and/or ends or aborts #PBS -m b or #PBS -m be or #PBS -m ba mem Shared Memory Specifies the amount of memory you need for a job. #PBS -l mem=90gb mpiprocs Clusters Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. #PBS -l mpiprocs=4 -N All Give your job a unique name #PBS -N galaxies1234 -ncpus Shared Memory The number of processors to use for a shared memory job. #PBS -l ncpus=4 -r All Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. #PBS -r n or #PBS -r y select Clusters Number of compute nodes to use. Usually combined with the mpiprocs directive #PBS -l select=2 -V All Make sure that the environment in which the job runs is the same as the environment in which it was submitted #PBS -V -l walltime All The maximum time a job can run before being stopped. If not used, a default of a few minutes is used. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

      TORQUE-related environment variables in batch job scripts.

      # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

      IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.
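
      For example, a minimal job script could be structured like the sketch below: all #PBS directives come first, followed by the executable commands, which can use the environment variables described in this section (the requested resources and file names are purely illustrative):

      #!/bin/bash\n#PBS -N example_job\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=01:00:00\n\n# go back to the directory from which the job was submitted\ncd $PBS_O_WORKDIR\n\n# tag the output file with the job identifier\necho \"Job $PBS_JOBID ($PBS_JOBNAME) started in $PBS_O_WORKDIR\" > job_info.${PBS_JOBID}\n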

      When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

      Variable Description PBS_ENVIRONMENT set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. PBS_JOBID the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat. PBS_JOBNAME the job name supplied by the user PBS_NODEFILE the name of the file that contains the list of the nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc. PBS_QUEUE the name of the queue from which the job is executed PBS_O_HOME value of the HOME variable in the environment in which qsub was executed PBS_O_LANG value of the LANG variable in the environment in which qsub was executed PBS_O_LOGNAME value of the LOGNAME variable in the environment in which qsub was executed PBS_O_PATH value of the PATH variable in the environment in which qsub was executed PBS_O_MAIL value of the MAIL variable in the environment in which qsub was executed PBS_O_SHELL value of the SHELL variable in the environment in which qsub was executed PBS_O_TZ value of the TZ variable in the environment in which qsub was executed PBS_O_HOST the name of the host upon which the qsub command is running PBS_O_QUEUE the name of the original queue to which the job was submitted PBS_O_WORKDIR the absolute path of the current working directory of the qsub command. This is the most useful one; use it in every job script. The first thing you do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory. PBS_VERSION Version Number of TORQUE, e.g., TORQUE-2.5.1 PBS_MOMPORT active port for mom daemon PBS_TASKNUM number of tasks requested PBS_JOBCOOKIE job cookie PBS_SERVER Server Running TORQUE"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

      Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this can be found in the subsections below.

      "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

      When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

      To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your specific programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

      Even if your software is able to use multiple cores, maybe there is no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the amount of cores step-wise, and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.

      Other reasons why using more cores may not lead to a (significant) speedup include:

      • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the amount of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program in too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

      • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program can not be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload (see the worked example after this list).

      • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, 1 thread/process will need to wait until the other one is finished using that resource. When each thread uses the same resource, it will definitely run slower than if it doesn't need to wait for other threads to finish.

      • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that in Python threads are implemented in a way that multiple threads can not run at the same time, due to the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing instead, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can do, even though they are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

      • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

      • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
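
      To make Amdahl's Law from the list above more concrete, it can be written as a formula for the maximum speedup S(n) on n cores, where p is the fraction of the program that can be parallelized. For the 20-hour example, p = 19/20 = 0.95:

      S(n) = 1 / ((1 - p) + p/n)\n\nS(38) = 1 / (0.05 + 0.95/38) = 1 / 0.075 \u2248 13.3\nS(n) -> 1 / (1 - p) = 20        (for very large n)\n

      So even with 38 cores the speedup stays below 14, and no matter how many cores are used, the runtime never drops below the 1-hour serial part.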

      More info on running multi-core workloads on the UAntwerpen-HPC can be found here.

      "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

      When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

      Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

      Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist some libraries that do this for you.

      Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

      An example of how you can make beneficial use of multiple nodes can be found here.

      You can also use MPI in Python; some useful packages that are also available on the HPC are:

      • mpi4py
      • Boost.MPI

      We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software we strongly advise using our mympirun tool.
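
      As a rough sketch of such a multi-node MPI job, a job script could look like the example below. The module names and versions are placeholders only (check module avail for what is actually installed), and my_mpi_program stands for your own MPI executable:

      #!/bin/bash\n#PBS -l nodes=2:ppn=10\n#PBS -l walltime=04:00:00\n\n# load an MPI-enabled toolchain and the mympirun tool (placeholder versions)\nmodule load foss/2023a\nmodule load vsc-mympirun\n\ncd $PBS_O_WORKDIR\n\n# mympirun determines the number of MPI processes from the requested job resources\nmympirun ./my_mpi_program\n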

      "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

      If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

      If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

      "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

      If your job output contains an error message similar to this:

      =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

      This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.
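
      For example, to request a longer walltime (here 48 hours; adjust this to your own needs), you would add a directive like this to your job script:

      #PBS -l walltime=48:00:00\n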

      "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

      Sometimes a job hangs at some point or stops writing to the disk. These errors are usually related to quota usage. You may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to the disk and then resubmit the jobs.

      "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

      If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

      If you have errors that look like:

      vsc20167@login.hpc.uantwerpen.be: Permission denied\n

      or you are experiencing problems with connecting, here is a list of things to do that should help:

      1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

      2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

      3. Please double/triple check your VSC login ID. It should look something like vsc20167: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

      4. Did you previously connect to the UAntwerpen-HPC from another machine, but are now using a different one? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

      5. Make sure you are using the private key (not the public key) when trying to connect: If you followed the manual, the private key filename should end in .ppk (not in .pub).

      6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

      7. Please do not use someone else's private keys. You must never share your private key; it is called private for a good reason.

      If you are using PuTTY and get this error message:

      server unexpectedly closed network connection\n

      it is possible that the PuTTY version you are using is too old and doesn't support some required (security-related) features.

      Make sure you are using the latest PuTTY version if you are encountering problems connecting (see Get PuTTY). If that doesn't help, please contact hpc@uantwerpen.be.

      If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@uantwerpen.be and include the following information:

      Please create a log file of your SSH session by following the steps in this article and include it in the email.

      "}, {"location": "troubleshooting/#change-putty-private-key-for-a-saved-configuration", "title": "Change PuTTY private key for a saved configuration", "text": "
      1. Open PuTTY

      2. Single click on the saved configuration

      3. Then click Load button

      4. Expand SSH category (on the left panel) clicking on the \"+\" next to SSH

      5. Click on Auth under the SSH category

      6. On the right panel, click Browse button

      7. Then search your private key on your computer (with the extension \".ppk\")

      8. Go back to the top of category, and click Session

      9. On the right panel, click on Save button

      "}, {"location": "troubleshooting/#check-whether-your-private-key-in-putty-matches-the-public-key-on-the-accountpage", "title": "Check whether your private key in PuTTY matches the public key on the accountpage", "text": "

      Follow the instructions in Change PuTTY private key for a saved configuration until item 5, then:

      1. Single click on the textbox containing the path to your private key, then select all text (push Ctrl + a ), then copy the location of the private key (push Ctrl + c)

      2. Open PuTTYgen

      3. Enter menu item \"File\" and select \"Load Private key\"

      4. On the \"Load private key\" popup, click in the textbox next to \"File name:\", then paste the location of your private key (push Ctrl + v), then click Open

      5. Make sure that your Public key from the \"Public key for pasting into OpenSSH authorized_keys file\" textbox is in your \"Public keys\" section on the accountpage https://account.vscentrum.be. (Scroll down to the bottom of \"View Account\" tab, you will find there the \"Public keys\" section)

      "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

      If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

      You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

      - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- ssh-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

      Do not click \"Yes\" until you verified the fingerprint. Do not press \"No\" in any case.

      If the fingerprint matches, click \"Yes\".

      If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@uantwerpen.be.

      Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

      "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

      If you get errors like:

      $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

      or

      sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

      This probably happens because you transferred the files from a Windows computer. See the section about dos2unix in the Linux tutorial to fix this error.
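
      In short, converting the job script to Unix line endings with the dos2unix command and resubmitting it should fix the problem, for example:

      $ dos2unix fibo.pbs\n$ qsub fibo.pbs\n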

      "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "


      The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

      Make sure the fingerprint in the alert matches one of the following:

      - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- ssh-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

      If it does, press Yes, if it doesn't, please contact hpc@uantwerpen.be.

      Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

      "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

      To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

      Note

      Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

      "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

      If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

      Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

      You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.
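
      For example, adding the following line near the top of your job script prints that limit into the job output, which can be helpful when debugging memory-related crashes:

      # print the virtual memory limit (in kilobytes) that applies to this job\nulimit -v\n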

      "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

      See Generic resource requirements to set memory and other requirements, see Specifying memory requirements to finetune the amount of memory you request.

      "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

      All the UAntwerpen-HPC clusters run some variant of the \"RedHat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

      vsc20167@ln01[203] $\n

      When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

      Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen joe Text editor

      Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

      $ echo This is a test\nThis is a test\n

      Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

      More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the item or command \"ls\", by trying either of the following:

      $ ls --help \n$ man ls\n$ info ls\n

      (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

      "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

      In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

      Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

      Another very common scripting language is shell scripting, which is what we will use in the examples below.

      In the following examples, each line contains one command to be executed, although it is also possible to put multiple commands on one line. A very simple example of a script may be:

      echo \"Hello! This is my hostname:\" \nhostname\n

      You can type both lines at your shell prompt, and the result will be the following:

      $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

      Suppose we want to call this script \"foo\". Open a new file named \"foo\" and edit it with your favourite editor:

      $ vi foo\n

      or use the following commands:

      echo \"echo Hello! This is my hostname:\" > foo\necho hostname >> foo\n

      The easiest way to run a script is by starting the interpreter and passing the script as a parameter. In the case of our script, the interpreter may be either \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

      $ bash foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

      Congratulations, you just created and started your first shell script!

      A more advanced way of executing your shell scripts is by making them executable on their own, without invoking the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to specify it. The easiest way is by using the so-called \"shebang\" notation, created explicitly for this purpose: you put the following line at the top of your shell script: \"#!/path/to/your/interpreter\".

      You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

      $ which bash\n/bin/bash\n

      We edit our script and change it with this information:

      #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

      Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

      Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

      chmod +x foo\n

      Now you can start your script by simply executing it:

      $ ./foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

      The same technique can be used for all other scripting languages, like Perl and Python.

      Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

      "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
      at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg Brings a job running in the background to the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Change the permissions (mode) of files and directories"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

      The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

      Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

      To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

      Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

      Through this web portal, you can:

      • browse through the files & directories in your VSC account, and inspect, manage or change them;

      • consult active jobs (across all HPC-UGent Tier-2 clusters);

      • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

      • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

      • open a terminal session directly in your web browser;

      More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

      "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

      All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

      "}, {"location": "web_portal/#login", "title": "Login", "text": "

      When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

      "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

      The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

      Please click \"Authorize\" here.

      This request will only be made once, you should not see this again afterwards.

      "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

      Once logged in, you should see this start page:

      This page includes a menu bar at the top: the buttons on the left provide access to the different features supported by the web portal, while the top right holds a Help menu, your VSC account name, and a Log Out button. The page also shows the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

      If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

      "}, {"location": "web_portal/#features", "title": "Features", "text": "

      We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

      "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

      Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

      The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

      Here you can:

      • Click a directory in the tree view on the left to open it;

      • Use the buttons on the top to:

        • go to a specific subdirectory by typing in the path (via Go To...);

        • open the current directory in a terminal (shell) session (via Open in Terminal);

        • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

        • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

        • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

        • show the owner and permissions in the file listing (via Show Owner/Mode);

      • Double-click a directory in the file listing to open that directory;

      • Select one or more files and/or directories in the file listing, and:

        • use the View button to see the contents (use the button at the top right to close the resulting popup window);

        • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

        • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

        • use the Download button to download the selected files and directories from your VSC account to your local workstation;

        • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

        • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

        • use the Delete button to (permanently!) remove the selected files and directories;

      For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

      "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

      Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

      For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

      "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

      To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

      A new browser tab will be opened that shows all your current queued and/or running jobs:

      You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

      Jobs that are still queued or running can be deleted using the red button on the right.

      Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

      For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

      "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

      To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

      This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

      You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

      Don't forget to actually submit your job to the system via the green Submit button!

      "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

      In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

      "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

      Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

      Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

      To exit the shell session, type exit followed by Enter and then close the browser tab.

      Note that you cannot access a shell session again after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

      "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

      To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

      You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

      Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

      To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

      "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

      See the dedicated page on Jupyter notebooks.

      "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

      In case of problems with the web portal, it could help to restart the web server running in your VSC account.

      You can do this via the Restart Web Server button under the Help menu item:

      Of course, this only affects your own web portal session (not those of others).

      "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
      • ABAQUS for CAE course
      "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

      X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

      1. A graphical remote desktop that works well over low bandwidth connections.

      2. Copy/paste support from client to server and vice-versa.

      3. File sharing from client to server.

      4. Support for sound.

      5. Printer sharing from client to server.

      6. The ability to access single applications, such as a terminal or an internet browser, by specifying the name of the desired executable.

      "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

      X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

      X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. That section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

      "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

      After installing the X2Go client, just start it. When you launch the client for the first time, it will open the new session dialogue automatically.

      There are two ways to connect to the login node:

      • Option A: A direct connection to \"login.hpc.uantwerpen.be\". This is the simpler option: the system will decide which login node to use based on a load-balancing algorithm.

      • Option B: You can use the node \"login.hpc.uantwerpen.be\" as an SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

      "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

      This is the easiest way to set up X2Go: a direct connection to the login node.

      1. Include a session name. This will help you to identify the session if you have more than one; you can choose any name (in our example \"HPC login node\").

      2. Set the login hostname (in our case: \"login.hpc.uantwerpen.be\").

      3. Set the Login name. In the example it is \"vsc20167\", but you must change it to your own VSC account.

      4. Set the SSH port (22 by default).

      5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

        1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

          You should look for your private SSH key generated by puttygen and exported in \"OpenSSH\" format (see Generating a public/private key pair); by default this is \"id_rsa\" (and not the \".ppk\" version). Choose that file and click Open.

      6. Check the \"Try autologin\" option.

      7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

        1. [optional]: Set a single application like the Terminal instead of the XFCE desktop. This option works much better than PuTTY because the X2Go client includes copy-paste support.

      8. [optional]: Change the session icon.

      9. Click the OK button after these changes.

      "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

      This option is useful if you want to resume a previous session or if you want to explicitly set the login node to use. In this case you should include a few more options. Use the same setup as Option A, but with these changes:

      1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

      2. Set the login hostname. This is the login node that you want to use in the end (in our case: \"ln2.leibniz.uantwerpen.vsc\").

      3. Check \"Use Proxy server..\" to enable the proxy. Within the \"Proxy Server\" section, also set these options:

        1. Set the Type to \"SSH\", and enable the \"Same login\", \"Same Password\" and \"SSH agent\" options.

        2. Set Host to \"login.hpc.uantwerpen.be\" within the \"Proxy Server\" section as well.

        3. Skip this step if you are using an SSH agent (see Install X2Go). Otherwise, add your private SSH key in the \"RSA/DSA key\" field within \"Proxy Server\", as you did for the server configuration (the \"RSA/DSA key\" field must be set in both sections).

        4. Click the OK button after these changes.

      "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

      Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. You can terminate a session by logging out from the currently open session, or by clicking the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

      X2Go will keep the session open for you (but only if the login node is not rebooted).

      "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

      If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

      hostname\n

      This will give you the full hostname (like \"ln2.leibniz.uantwerpen.vsc\", but the hostname in your situation may be slightly different). You should set the same name to resume the session the next time. Just enter this full hostname in the \"login hostname\" field of your X2Go session (see Option B: use the login node as SSH proxy).

      "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

      If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select that session and terminate it. Then close the terminal session, choose the XFCE session type again (or whatever you use), and you should get your X2Go session back. Since we have multiple login nodes, you might have to repeat these steps multiple times.

      "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

      The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

      To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

      Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

      After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

      Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

      "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

      TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

      Loads MNIST datasets and trains a neural network to recognize hand-written digits.

      Runtime: ~1 min. on 8 cores (Intel Skylake)

      See https://www.tensorflow.org/tutorials/quickstart/beginner

      "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

      Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

      These skills are important for using the UAntwerpen-HPC, which operates on Red Hat Enterprise Linux. For more information, see the introduction to HPC.

      The guide aims to make you familiar with the Linux command line environment quickly.

      The tutorial goes through the following steps:

      1. Getting Started
      2. Navigating
      3. Manipulating files and directories
      4. Uploading files
      5. Beyond the basics

      Do not forget the Common pitfalls chapter, as it can save you some troubleshooting.

      "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
      • More on the HPC infrastructure.
      • Cron Scripts: run scripts automatically at fixed times, dates, or intervals.
      "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

      Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

      "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

      To redirect input and output to and from files, you can use the redirection operators: >, >>, &>, and <.

      First, it's important to make a distinction between two different output channels:

      1. stdout: standard output channel, for regular output

      2. stderr: standard error channel, for errors and warnings

      "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

      > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

      $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

      >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

      $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

      < feeds the contents of a file to a command's standard input (as if it were piped or typed). So you would use this to simulate typing into a terminal. command < somefile.txt is largely equivalent to cat somefile.txt | command.

      One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in that file list when you are done:

      $ find . -name '*.txt' > files\n$ xargs grep banana < files\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

      To redirect the stderr output (warnings, messages), you can use 2>, just like >

      $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

      To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

      $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

      Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

      $ ls | wc -l\n    42\n

      A common pattern is to pipe the output of a command to less so you can examine or search the output:

      $ find . | less\n

      Or to look through your command history:

      $ history | less\n

      You can put multiple pipes in the same line. For example, which cp commands have we run?

      $ history | grep cp | less\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

      The shell will expand certain things, including:

      1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

      2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

      3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

      4. square brackets can be used to list a number of options for a particular character position; example: ls *.[oe][0-9]. This will list all files starting with any characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So the filename anything.o5 will match, but anything.o52 won't (see the short demo below).
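
      A short demo of these expansions (the filenames, output and username shown here are purely illustrative):

      $ echo \"I am $USER\"\nI am vsc40000\n$ ls t*txt\ntest.txt  tmp.txt\n$ ls *.[oe][0-9]\njob.e5  job.o5\n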

      "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

      ps lists running processes. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

      $ ps -fu $USER\n

      To see all the processes:

      $ ps -elf\n

      To see all the processes in a forest view, use:

      $ ps auxf\n

      The last two will spit out a lot of data, so get in the habit of piping it to less.

      pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

      pgrep will find all the processes whose name matches the pattern and print their process IDs (PIDs). This is useful when combining commands with pipes, as we will see in the next section.
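
      For example, to list the PIDs of all your processes whose name matches \"python\" (the process name and PIDs shown here are purely illustrative):

      $ pgrep -u $USER python\n12345\n67890\n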

      "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

      ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill sends a signal (SIGTERM by default) to the process to ask it to stop.

      $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

      Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignores your signal, you can send it a different signal (SIGKILL) which the OS will use to unceremoniously terminate the process:

      $ kill -9 1234\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

      top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

      To see only your processes, type u and your username after starting top (you can also do this with top -u $USER). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

      There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

      To exit top, use q (for 'quit').

      For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

      "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

      ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

      $ ulimit -a\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

      To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

      $ wc example.txt\n      90     468     3189   example.txt\n

      The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

      To only count the number of lines, use wc -l:

      $ wc -l example.txt\n      90    example.txt\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

      grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

      $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

      grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

      "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

      cut is used to pull fields out of files or piped streams. It's useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV file (comma-separated values, so -d ',': delimited by ','), you can use the following:

      $ cut -f 1 -d ',' mydata.csv\n
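
      As a small sketch of the grep/cut combination described above (assuming a hypothetical mydata.csv with fruit names in the first column and counts in the second), you could first select the matching lines and then pull out a single field:

      $ grep banana mydata.csv | cut -f 2 -d ','\n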

      "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

      sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

      $ sed 's/oldtext/newtext/g' myfile.txt\n

      By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
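
      For example, GNU sed (the sed found on most Linux systems) lets you pass a suffix to -i so that a backup copy of the original file is kept, which is a safer way to edit in place:

      $ sed -i.bak 's/oldtext/newtext/g' myfile.txt  # original kept as myfile.txt.bak\n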

      "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

      awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far beyond the scope of this tutorial, but there are two examples that are worth knowing.

      First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

      $ awk '{print $4}' mydata.dat\n

      You can use -F ':' to change the delimiter (F for field separator).
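
      For example, to print the first field of the colon-separated /etc/passwd file (just an illustration; any colon-delimited file works the same way):

      $ awk -F ':' '{print $1}' /etc/passwd\n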

      The next example is used to sum numbers from a field:

      $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

      The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script that does the same. A script is nothing special, it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

      However, there are some rules you need to abide by.

      Here is a very detailed guide should you need more information.

      "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

      The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can simply copy-paste it; you need not worry about it further. It is however very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

      #!/bin/sh\n
      #!/bin/bash\n
      #!/usr/bin/env bash\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

      Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

      if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n

      Or you only want to do something if a file exists:

      if [ -f filename ]\nthen\necho \"it exists\"\nfi\n

      Or only if a certain variable is bigger than one:
      if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
      Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or preceded by a semicolon). It is best to just copy this example and modify it.

      In the initial example, we used -d to test if a directory existed. There are several more checks.

      Another useful example is to test if a variable contains a value (so it's not empty):

      if [ -z $PBS_ARRAYID ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

      the -z test checks whether the length of the variable's value is zero, i.e. whether the variable is empty.

      "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

      Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

      Let's look at a simple example:

      for i in 1 2 3\ndo\necho $i\ndone\n
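
      Loops are also handy for repeating a command over a set of files. A minimal sketch, assuming some .txt files exist in the current directory:

      for file in *.txt\ndo\necho \"Processing $file\"\ndone\n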

      "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

      Subcommands are used all the time in shell scripts. What they do is capture the output of a command so it can be stored in a variable. This can later be used in a conditional or a loop, for example.

      CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

      In the above example you can see the two different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.

      "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

      Sometimes things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

      Firstly, a useful thing to know for debugging and testing is that you can run any command like this:

      command > output.log 2>&1   # one single output file, both output and errors\n

      If you add > output.log 2>&1 at the end of any command, it will combine stdout and stderr and write them into a single file named output.log.

      If you want regular and error output separated you can use:

      command > output.log 2> output.err  # errors in a separate file\n

      this will write regular output to output.log and error output to output.err.

      You can then look for the errors with less or search for specific text with grep.

      In scripts, you can use:

      set -e\n

      This will tell the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is most convenient, as such a failure most likely causes the rest of the script to fail as well.

      "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

      Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds the exit status of that command. A value other than zero signifies something went wrong. So an example use case:

      command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

      "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

      If you have certain commands executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

      Examples include:

      • modifying your $PS1 (to tweak your shell prompt)

      • printing information about the current/jobs environment (echoing environment variables, etc.)

      • selecting a specific cluster to run on with module swap cluster/...

      Some recommendations:

      • Avoid using module load statements in your $HOME/.bashrc file

      • Don't directly edit your .bashrc file: if there's an error in it, you might not be able to log in again. To prevent that, use another file to test your changes, and only copy them over once you have tested the script (see the sketch below).
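
      A minimal sketch of that workflow (the filename .bashrc_test is just an example):

      $ nano ~/.bashrc_test              # put your changes in a separate file first\n$ source ~/.bashrc_test            # test them in your current shell\n$ cat ~/.bashrc_test >> ~/.bashrc  # only append them once everything works\n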

      "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

      When writing scripts to be submitted to the cluster, there are some tricks you need to keep in mind.

      "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
      #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
      "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

      The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

      This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

      #PBS -l nodes=1:ppn=1 # single-core\n

      For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

      #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

      We intend to submit it on the long queue:

      #PBS -q long\n

      We request a total running time of 48 hours (2 days).

      #PBS -l walltime=48:00:00\n

      We specify a desired name of our job:

      #PBS -N FreeSurfer_per_subject-time-longitudinal\n
      This specifies mail options:
      #PBS -m abe\n

      1. a means mail is sent when the job is aborted.

      2. b means mail is sent when the job begins.

      3. e means mail is sent when the job ends.

      Joins error output with regular output:

      #PBS -j oe\n

      All of these options can also be specified on the command-line and will overwrite any pragmas present in the script.

      "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
      1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

      2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

      3. How many files and directories are in /tmp?

      4. What's the name of the 5th file/directory in alphabetical order in /tmp?

      5. List all files that start with t in /tmp.

      6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

      7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

      "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

      This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

      "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

      If you receive an error message which contains something like the following:

      No such file or directory\n

      It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

      Try and figure out the correct location using ls, cd and using the different $VSC_* variables.

      "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

      Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

      $ cat some file\nNo such file or directory 'some'\n

      Spaces are permitted; however, they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

      $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

      This is especially error-prone if you are piping results of find:

      $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

      This can be worked around using the -print0 flag:

      $ find . -type f -print0 | xargs -0 cat\n...\n

      But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

      "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

      If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

      $ rm -r ~/$PROJETC/*\n

      "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

      A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

      $ #rm -r ~/$POROJETC/*\n
      Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

      "}, {"location": "linux-tutorial/common_pitfalls/#copying-files-with-winscp", "title": "Copying files with WinSCP", "text": "

      After copying files from a Windows machine, a file might look funny when looking at it on the cluster.

      $ cat script.sh\n#!/bin/bash^M\n#PBS -l nodes^M\n...\n

      Or you can get errors like:

      $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

      See section dos2unix to fix these errors with dos2unix.
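
      Typically, fixing such a file boils down to running dos2unix on it, for example:

      $ dos2unix script.sh\n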

      "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
      $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

      Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

      $ chmod +x script_name.sh\n

      "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

      If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

      If you need help about a certain command, you should consult its so-called \"man page\":

      $ man command\n

      This will open the manual of this command. The manual contains a detailed explanation of all the options the command has. Exiting the manual is done by pressing 'q'.

      Don't be afraid to contact hpc@uantwerpen.be. They are here to help and will do so for even the smallest of problems!

      "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
      1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

      2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

      3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

      4. basic shell usage

      5. Bash for beginners

      6. MOOC

      Please don't hesitate to contact hpc@uantwerpen.be in case of questions or problems.

      "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

      To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

      You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

      Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

      "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

      To get help:

      1. use the documentation available on the system, through the help, info and man commands (use q to exit).
        help cd \ninfo ls \nman cp \n
      2. use Google

      3. contact hpc@uantwerpen.be in case of problems or questions (even for basic things!)

      "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

      Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find any possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@uantwerpen.be.

      "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

      The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

      You use the shell by executing commands, and hitting <enter>. For example:

      $ echo hello \nhello \n

      You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

      To go through previous commands, use <up> and <down>, rather than retyping them.

      "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

      A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

      $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

      "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

      If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

      "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

      At the prompt we also have access to shell variables, which have both a name and a value.

      They can be thought of as placeholders for things we need to remember.

      For example, to print the path to your home directory, we can use the shell variable named HOME:

      $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

      This prints the value of this variable.

      "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

      There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

      For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

      $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

      You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

      $ env | sort | grep VSC\n

      But we can also define our own. This is done with the export command (note: variables are always all-caps as a convention):

      $ export MYVARIABLE=\"value\"\n

      It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

      If we then do

      $ echo $MYVARIABLE\n

      this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

      "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

      You can change what your prompt looks like by redefining the special-purpose variable $PS1.

      For example: to include the current location in your prompt:

      $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

      Note that ~ is a short representation of your home directory.

      To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

      $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

      "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

      One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

      This may lead to surprising results, for example:

      $ export WORKDIR=/tmp/test\n$ cd $WROKDIR    # note the typo in the variable name\n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

      To understand what's going on here, see the section on cd below.

      The moral here is: be very careful to not use empty variables unintentionally.

      Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

      The -e option will result in the script getting stopped if any command fails.

      The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)
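
      A minimal sketch of how this looks at the top of a job script (the mistyped variable name $WROKDIR is just an illustration of a typo):

      #!/bin/bash\nset -e -u\n# with -u the script stops here, because $WROKDIR (a typo) is not defined\ncd $WROKDIR\n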

      More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

      "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

      If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

      "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

      Basic information about the system you are logged into can be obtained in a variety of ways.

      We limit ourselves to determining the hostname:

      $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

      And querying some basic information about the Linux kernel:

      $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

      "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
      • Print the full path to your home directory
      • Determine the name of the environment variable to your personal scratch directory
      • What's the name of the system you're logged into? Is it the same for everyone?
      • Figure out how to print the value of a variable without including a newline
      • How do you get help on using the man command?

      The next chapter teaches you how to navigate.

      "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

      Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the UAntwerpen-HPC for a list of available locations.

      "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

      Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

      To figure out where your quota is being spent, the du (disk usage) command can come in useful:

      $ du -sh test\n59M test\n

      Do not (frequently) run du on directories where large amounts of data are stored, since that will:

      1. take a long time

      2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

      "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

      Software is provided through so-called environment modules.

      The most commonly used commands are:

      1. module avail: show all available modules

      2. module avail <software name>: show available modules for a specific software name

      3. module list: show list of loaded modules

      4. module load <module name>: load a particular module
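
      For example, a typical sequence could look like this (the module name and version are just illustrative):

      $ module avail Python\n$ module load Python/3.6.4-intel-2018a\n$ module list\n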

      More information is available in section Modules.

      "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

      To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

      Detailed information is available in section submitting your job.

      "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

      Create and submit a job script that computes the sum of 1-100 using Python, and prints the result to a unique output file in $VSC_SCRATCH.

      Hint: python -c \"print(sum(range(1, 101)))\"

      • How many modules are available for Python version 3.6.4?
      • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
      • Which cluster modules are available?

      • What's the full path to your personal home/data/scratch directories?

      • Determine how large your personal directories are.
      • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
      "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

      Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such frequently used commands be short to type.

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

      To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

      $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

      To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
      $ cp source target\n

      This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

      $ cp -r sourceDirectory target\n

      A last more complicated example:

      $ cp -a sourceDirectory target\n

      Here we used the same cp command, but with the -a option, which tells cp to copy the directory recursively while preserving timestamps and permissions.

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
      $ mkdir directory\n

      which will create a directory with the given name inside the current directory.

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
      $ mv source target\n

      mv will move the source path to the destination path. This works for both directories and files.

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

      Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

      $ rm filename\n
      rm will remove a file (and rm -rf directory will remove a given directory with every file inside it). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

      You can remove directories using rm -r directory; however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally remove the (now empty) directory with:

      $ rmdir directory\n
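
      A short sketch of that safer sequence, assuming the directory only contains regular files:

      $ rm directory/*\n$ rmdir directory\n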

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

      Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

      1. User - a particular user (account)

      2. Group - a particular group of users (may be user-specific group with only one member)

      3. Other - other users in the system

      The permission types are:

      1. Read - For files, this gives permission to read the contents of a file

      2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files in the directory.

      3. Execute - For files, this gives permission to execute the file as though it were a script. For directories, it allows users to open the directory and look at the contents.

      Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

      $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

      Here, we see that articleTable.csv is a file (beginning the line with -) has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

      The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx). So that user can look into the directory and add or remove files. Users in the mygroup can also look into the directory and read the files. But they can't add or remove files (r-x). Finally, other users can read files in the directory, but other users have no permissions to look in the directory at all (---).

      Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

      $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

      The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

      You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

      You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod, since it's unusual for all files in a directory structure to need the same permissions.
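
      A possible sketch of that approach, giving group read permission to the regular files only (adjust the find criteria to your own situation):

      $ find Project_GoldenDragon -type f | xargs chmod g+r\n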

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

      However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

      $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

      This will give the user otheruser permission to write to Project_GoldenDragon.

      Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

      Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

      See https://linux.die.net/man/1/setfacl for more information.

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

      Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

      $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

      Note: if you gzip a file, the original file will be removed, and if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

      $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

      Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

      $ unzip myfile.zip\n

      If we would like to make our own zip archive, we use zip:

      $ zip myfiles.zip myfile1 myfile2 myfile3\n

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

      Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

      You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

      $ tar -xf tarfile.tar\n

      Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

      $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

      Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

      # cp, ln: &lt;source(s)&gt; &lt;target&gt;\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: &lt;target&gt; &lt;source(s)&gt;\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

If you use tar with the source files first, then the first file will be overwritten. You can control the order of arguments of tar if it helps you remember:

      $ tar -c source1 source2 source3 -f tarfile.tar\n
      "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
      1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

      2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

      3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

      4. Remove the another/test directory with a single command.

      5. Rename test to test2. Move test2/hostname.txt to your home directory.

      6. Change the permission of test2 so only you can access it.

      7. Create an empty job script named job.sh, and make it executable.

      8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

The next chapter is on uploading files, which is especially important when using the HPC infrastructure.

      "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories, which is a very important skill.

      "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

      To print the current directory, use pwd or \\$PWD:

      $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

      "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

      A very basic and commonly used command is ls, which can be used to list files and directories.

      In its basic usage, it just prints the names of files and directories in the current directory. For example:

      $ ls\nafile.txt some_directory \n

      When provided an argument, it can be used to list the contents of a directory:

      $ ls some_directory \none.txt two.txt\n

      A couple of commonly used options include:

      • detailed listing using ls -l:

        $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
• printing the size information in human-readable form using the -h flag:

        $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
      • also listing hidden files using the -a flag:

        $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
      • ordering files by the most recent change using -rt:

        $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

      If you try to use ls on a file that doesn't exist, you will get a clear error message:

      $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
      "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

      To change to a different directory, you can use the cd command:

      $ cd some_directory\n

      To change back to the previous directory you were in, there's a shortcut: cd -
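
For example (assuming you start from your home directory):

$ cd /tmp\n$ cd -\n/user/home/gent/vsc400/vsc40000\n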

Using cd without an argument brings you back to your home directory:

      $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

      "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

      The file command can be used to inspect what type of file you're dealing with:

      $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
      "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

An absolute filepath starts with / (or a variable whose value starts with /); this leading / is also called the root of the filesystem.

      Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

      A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

      Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

      There are two special relative paths worth mentioning:

      • . is a shorthand for the current directory
      • .. is a shorthand for the parent of the current directory

      You can also use .. when constructing relative paths, for example:

      $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
      "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

      Each file and directory has particular permissions set on it, which can be queried using ls -l.

      For example:

$ ls -l afile.txt \n-rwxrw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

The string -rwxrw-r-- specifies both the type of file (the first character: - for regular files, d for directories) and the permissions for user/group/others:

      1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
      2. the 1st part rwx indicates that the owner \"vsc40000\" of the file has all the rights
      3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
      4. the 3rd part r-- indicates that other users only have read permissions

The default permission settings for new files/directories are determined by the so-called umask setting; by default they are (as illustrated below):

      1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
      2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)
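
For example, with these defaults a newly created file and directory would look like this (the output shown is indicative):

$ touch newfile.txt\n$ mkdir new_directory\n$ ls -ld newfile.txt new_directory\n-rw-rw-r-- 1 vsc40000 vsc40000   0 Apr 12 14:00 newfile.txt\ndrwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 14:00 new_directory\n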

      See also the chmod command later in this manual.

      "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

find will crawl a series of directories and list files matching given criteria.

      For example, to look for the file named one.txt:

      $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

To look for files using incomplete names, you can use a wildcard *; note that you need to protect the * by adding double quotes, to prevent Bash from expanding it into matching filenames (such as afile.txt) before find sees it:

      $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

      A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
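
For example, a sketch that counts the lines of every .txt file that was found (the wc -l part is just an illustration of such an action):

$ find . -name \"*.txt\" -exec wc -l {} \\;\n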

      "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
      • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
      • When was your home directory created or last changed?
      • Determine the name of the last changed file in /tmp.
      • See how home directories are organised. Can you access the home directory of other users?

      The next chapter will teach you how to interact with files and directories.

      "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

To transfer files from and to the HPC, see the section about transferring files in the HPC manual.

      "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

      After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

      For example, you may see an error when submitting a job script that was edited on Windows:

      sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

      To fix this problem, you should run the dos2unix command on the file:

      $ dos2unix filename\n
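
If you are unsure whether a file still has Windows line endings, the file command will report them (the output is indicative):

$ file filename\nfilename: ASCII text, with CRLF line terminators\n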
      "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

As we end up in the home directory when connecting, it would be convenient if we could access our data and scratch storage from there. To facilitate this, we create symbolic links to them in our home directory. The commands below create two symbolic links (they're like \"shortcuts\" on your desktop and they look like directories in WinSCP) pointing to the respective storage locations:

      $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
      "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ is the Control key, so ^O means Ctrl-O. The main commands are:

      1. Open (\"Read\"): ^R

      2. Save (\"Write Out\"): ^O

      3. Exit: ^X

      More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

      "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

      rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

You will need to run rsync from a computer where it is installed. Installing rsync is easiest on Linux: it comes pre-installed with a lot of distributions.

      For example, to copy a folder with lots of CSV files:

      $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section).

      The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

To copy large files using rsync, you can use the -P flag: it enables both progress reporting and resuming partially transferred files.
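
For example, a sketch reusing the transfer from above, now with progress reporting and resume support enabled:

$ rsync -rzvP testfolder vsc40000@login.hpc.ugent.be:data/\n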

      To copy files to your local computer, you can also use rsync:

      $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
This will copy the folder bioset and its contents from $VSC_DATA to a local folder named local_folder.

      See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

      "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
      1. Download the file /etc/hostname to your local computer.

      2. Upload a file to a subdirectory of your personal $VSC_DATA space.

      3. Create a file named hello.txt and edit it using nano.

Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

      "}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
      $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

You can also check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

      $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not know the exact capitalisation of the module name, we searched for it case-insensitively using the \"-i\" option.

      "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": ""}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
      $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

You can also check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

      $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not know the exact capitalisation of the module name, we searched for it case-insensitively using the \"-i\" option.

      "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
      module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

You can also check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the UAntwerpen-HPC:

      module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
      "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

      (more info soon)

      "}]} \ No newline at end of file +{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the UAntwerpen-HPC documentation", "text": "

      Use the menu on the left to navigate, or use the search box on the top right.

      You are viewing documentation intended for people using Windows.

      Use the OS dropdown in the top bar to switch to a different operating system.

      Quick links

      • Getting Started | Getting Access
      • FAQ | Troubleshooting | Best practices | Known issues

      If you find any problems in this documentation, please report them by mail to hpc@uantwerpen.be or open a pull request.

      If you still have any questions, you can contact the UAntwerpen-HPC.

      "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": ""}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

      An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.

      See also: Running batch jobs.

      "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

      When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

      Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

      Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.

      "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

      Modules each come with a suffix that describes the toolchain used to install them.

      Examples:

      • AlphaFold/2.2.2-foss-2021a

      • tqdm/4.61.2-GCCcore-10.3.0

      • Python/3.9.5-GCCcore-10.3.0

      • matplotlib/3.4.2-foss-2021a

      Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

      The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

      You can use module avail [search_text] to see which versions on which toolchains are available to use.

      It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

      "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

      When incompatible modules are loaded, you might encounter an error like this:

      {{ lmod_error }}\n

You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

      Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

      An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

      See also: How do I choose the job modules?

      "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

      The 72 hour walltime limit will not be extended. However, you can work around this barrier:

      • Check that all available resources are being used. See also:
        • How many cores/nodes should I request?.
        • My job is slow.
        • My job isn't using any GPUs.
      • Use a faster cluster.
      • Divide the job into more parallel processes.
      • Divide the job into shorter processes, which you can submit as separate jobs.
      • Use the built-in checkpointing of your software.
      "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

      Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

      When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

      Try requesting a bit more memory than your proportional share, and see if that solves the issue.
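
A minimal sketch of how extra memory could be requested in a PBS job script; the exact resource name and amounts depend on your cluster, so check the section on specifying memory requirements for the authoritative syntax:

#PBS -l nodes=1:ppn=8\n#PBS -l mem=20gb\n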

      See also: Specifying memory requirements.

      "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the available memory and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

      It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

      See also: Running interactive jobs.

      "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

      Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

      Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

      "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

      There are a few possible causes why a job can perform worse than expected.

Is your job using all the available cores you've requested? You can test this by increasing and decreasing the core count: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

      Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

      Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example how to do this: The job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
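
A minimal sketch of this staging pattern in a job script (the paths and program name are placeholders):

#!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=4:0:0\n\n# stage input data on the fast scratch filesystem\ncp -r $VSC_DATA/my_input $VSC_SCRATCH/\ncd $VSC_SCRATCH/my_input\n\n# run the actual computation (placeholder)\n./my_program\n\n# copy the results back to the data filesystem\ncp -r results $VSC_DATA/my_results\n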

      "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

      Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

      To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
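
A minimal sketch of an MPI job script using mympirun, submitted with qsub (the program name and requested resources are placeholders):

#!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=1:0:0\n\n# load mympirun (plus the module(s) providing your MPI program)\nmodule load vsc-mympirun\n\ncd $PBS_O_WORKDIR\n\n# mympirun sets up the MPI environment based on the job's resources\nmympirun ./my_mpi_program\n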

      See also: Multi core jobs/Parallel Computing and Mympirun.

      "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

      For example, we have a simple script (./hello.sh):

      #!/bin/bash \necho \"hello world\"\n

      And we run it like mympirun ./hello.sh --output output.txt.

      To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

      mympirun --output output.txt ./hello.sh\n
      "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

      In practice, it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires. New jobs may be submitted by other users that are assigned a higher priority than your job(s). You can use the squeue --start command to get an estimated start time for your jobs in the queue. Keep in mind that this is just an estimate.

      "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

      When trying to create files, errors like this can occur:

      No space left on device\n

      The error \"No space left on device\" can mean two different things:

      • all available storage quota on the file system in question has been used;
      • the inode limit has been reached on that file system.

      An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

      Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.

      If the problem persists, feel free to contact support.

      "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

      NO. You are not allowed to share your VSC account with anyone else, it is strictly personal.

      See https://pintra.uantwerpen.be/bbcswebdav/xid-23610_1

      "}, {"location": "FAQ/#can-i-share-my-data-with-other-uantwerpen-hpc-users", "title": "Can I share my data with other UAntwerpen-HPC users?", "text": "

Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

      $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc20167 mygroup      40 Apr 12 15:00 dataset.txt\n
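
If the users you want to share with are members of the file's group (mygroup in the example above), plain chmod is enough; a minimal sketch granting group read access:

$ chmod g+r dataset.txt\n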

      For more information about chmod or setfacl, see Linux tutorial.

      "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

      Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

      "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

      Please send an e-mail to hpc@uantwerpen.be that includes:

      • What software you want to install and the required version

      • Detailed installation instructions

      • The purpose for which you want to install the software

      If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
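
As an illustration, a minimal sketch of creating such a virtual environment on top of a centrally installed Python module (the module version and package name are just examples):

module load Python/3.9.5-GCCcore-10.3.0\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\npip install some_package\n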

      "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

      On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

      MacOS & Linux (on Windows, only the second part is shown):

      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

      Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

      "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

      A Virtual Organisation consists of a number of members and moderators. A moderator can:

      • Manage the VO members (but can't access/remove their data on the system).

      • See how much storage each member has used, and set limits per member.

      • Request additional storage for the VO.

      One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VO's (to supervise groups, for example).

      See also: Virtual Organisations.

      "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

      Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

      du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

      The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

The egrep command will only let through entries that match the regular expression [0-9]{3}M|[0-9]G, i.e., files and directories that consume at least 100 MB (or are in the gigabyte range).

      "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

      By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

      You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

      "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

      When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

      sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

      A lot of tasks can be performed without sudo, including installing software in your own account.

      Installing software

      • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
      • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
      "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

      Who can I contact?

      • General questions regarding HPC-UGent and VSC: hpc@ugent.be

      • HPC-UGent Tier-2: hpc@ugent.be

      • VSC Tier-1 compute: compute@vscentrum.be

      • VSC Tier-1 cloud: cloud@vscentrum.be

      "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

      Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

      "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

      The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

      "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

      Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

      module load hod\n
      "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

      The hod modules are constructed such that they can be used on the UAntwerpen-HPC login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

      For example, this will work as expected:

      $ module swap cluster/{{ othercluster }}\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

      Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

      "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

      The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

      $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

By defining these environment variables, you no longer have to specify --hod-module and --workdir yourself when using hod batch or hod create, even though they are strictly required options.

      If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
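
For example, to use a subdirectory of your scratch space as the parent working directory (a sketch; the directory name is arbitrary):

$ export HOD_BATCH_WORKDIR=$VSC_SCRATCH/hod_workdir\n$ export HOD_CREATE_WORKDIR=$VSC_SCRATCH/hod_workdir\n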

      Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

      "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

After HOD clusters terminate, their local working directory and cluster information are typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

      These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

      You should occasionally clean this up using hod clean:

      $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/{{ defaultcluster }}(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        433253.leibniz         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/433253.leibniz for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/{{ othercluster }}\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.{{ othercluster }}.gent.vsc  &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.{{ othercluster }}.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
      Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

      "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

      If you have any questions, or are experiencing problems using HOD, you have a couple of options:

      • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

      • Contact the UAntwerpen-HPC via hpc@uantwerpen.be

      • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

      "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

      Note

      To run a MATLAB program on the UAntwerpen-HPC you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

Compiling MATLAB programs is only possible on the interactive debug cluster, not on the UAntwerpen-HPC login nodes where resource limits w.r.t. memory and max. number of processes are too strict.

      "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

      The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

      Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

      "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

      Compiling MATLAB code can only be done from the login nodes, because only login nodes can access the MATLAB license server, workernodes on clusters cannot.

      To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

      $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

      After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

      To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

      First, we copy the magicsquare.m example that comes with MATLAB to example.m:

      cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

      To compile a MATLAB program, use mcc -mv:

mcc -mv example.m\nOpening log file:  /user/antwerpen/201/vsc20167/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/antwerpen/201/vsc20167/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/antwerpen/201/vsc20167/readme.txt\".\nGenerating file \"run_example.sh\".\n
      "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

      To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

      It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

      For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

      "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

      If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

      export _JAVA_OPTIONS=\"-Xmx64M\"\n

The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

      Another possible issue is that the heap size is too small. This could result in errors like:

      Error: Out of memory\n

      A possible solution to this is by setting the maximum heap size to be bigger:

      export _JAVA_OPTIONS=\"-Xmx512M\"\n
      "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

      MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

      You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

      parpool.m
      % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

      See also the parpool documentation.

      "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

      MATLAB_LOG_DIR=<OUTPUT_DIR>\n

      where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

# create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\n$ export MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

      You should remove the directory at the end of your job script:

      rm -rf $MATLAB_LOG_DIR\n
      "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

      The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

export MCR_CACHE_ROOT=/tmp/testdirectory \nexport MCR_CACHE_SIZE=1024MB \n

      So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

      "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

      All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

      jobscript.sh
      #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
      "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

      Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

      Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

      "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

First, log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

$ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'ln2.leibniz.uantwerpen.vsc:6 (vsc20167)' desktop is ln2.leibniz.uantwerpen.vsc:6\n\nCreating default startup script /user/antwerpen/201/vsc20167/.vnc/xstartup\nCreating default config /user/antwerpen/201/vsc20167/.vnc/config\nStarting applications specified in /user/antwerpen/201/vsc20167/.vnc/xstartup\nLog file is /user/antwerpen/201/vsc20167/.vnc/ln2.leibniz.uantwerpen.vsc:6.log\n

      When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account you can!

      Note down the details in bold: the hostname (in the example: ln2.leibniz.uantwerpen.vsc) and the (partial) port number (in the example: 6).

      It's important to remember that VNC sessions are permanent. They survive network problems and (unintended) connection loss. This means you can logout and go home without a problem (like the terminal equivalent screen or tmux). This also means you don't have to start vncserver each time you want to connect.

      "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

      You can get a list of running VNC servers on a node with

      $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

      This only displays the running VNC servers on the login node you run the command on.

      To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

      $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/ln2.leibniz.uantwerpen.vsc:6.pid\n.vnc/ln1.leibniz.uantwerpen.vsc:8.pid\n

This shows that there is a VNC server running on ln2.leibniz.uantwerpen.vsc on port 5906 and another one running on ln1.leibniz.uantwerpen.vsc on port 5908 (see also Determining the source/destination port).

      "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

The VNC server runs on a login node (in the example above, on ln2.leibniz.uantwerpen.vsc).

      In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

      Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

      To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

      The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

      "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

      The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

      The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

      So, in our running example, both the source and destination ports are 5906.

      "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

      In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.uantwerpen.be (see Setting up the SSH tunnel(s)).

      If the login node you end up on is a different one than the one where your VNC server is running (i.e., ln1.leibniz.uantwerpen.vsc rather than ln2.leibniz.uantwerpen.vsc in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

      In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

      To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

      Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to ln2.leibniz.uantwerpen.vsc, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

We will proceed with 12345 as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).

      "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcuantwerpenbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.uantwerpen.be", "text": "

First, we will set up the SSH tunnel from our workstation to login.hpc.uantwerpen.be.

      Use the settings specified in the sections above:

      • source port: the port on which the VNC server is running (see Determining the source/destination port);

      • destination host: localhost;

      • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

See the section on setting up SSH tunnels with PuTTY for detailed information on how to configure this tunnel, by entering the settings above in the Source port and Destination fields of the SSH tunnel configuration.

      With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

      Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

      "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

      Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

      netstat -an | grep -i listen | grep tcp | grep 12345\n

      If you see no matching lines, then the port you picked is still available, and you can continue.

      If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

      $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
      "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

      In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.uantwerpen.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (ln2.leibniz.uantwerpen.vsc in our running example, see Starting a VNC server).

      To do this, run the following command:

      $ ssh -L 12345:localhost:5906 ln2.leibniz.uantwerpen.vsc\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

      With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (ln2.leibniz.uantwerpen.vsc).

      Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

      **Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (ln2.leibniz.uantwerpen.vsc) in the command shown above!**

      As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

      "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

      You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. Download the latest version by clicking the top-most folder with a version number that does not contain beta. Then download a file that looks like TurboVNC64-2.1.2.exe (the version number may differ, but make sure the filename contains 64, i.e., the 64-bit version) and execute it.

      Now start your VNC client and connect to localhost:5906. **Make sure you replace the port number 5906 with your own destination port** (see Determining the source/destination port).

      When prompted for a password, use the password you used to set up the VNC server.

      When prompted for default or empty panel, choose default.

      If you have an empty panel, you can reset your settings with the following commands:

      xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
      "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

      The VNC server can be killed by running

      vncserver -kill :6\n

      where :6 is the display number we noted down earlier. If you forgot it, you can get it with vncserver -list (see List running VNC servers).

      "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

      You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).
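
      Put together, resetting the password boils down to the following commands (a minimal sketch; :6 is the display number from our running example, and the vncserver start-up options are described in Starting a VNC server):

      vncserver -kill :6\nrm ~/.vnc/passwd\nvncserver\n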

      "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

      All users of Antwerp University Association (AUHA) can request an account on the UAntwerpen-HPC, which is part of the Flemish Supercomputing Centre (VSC).

      See HPC policies for more information on who is entitled to an account.

      The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

      There are two methods for connecting to UAntwerpen-HPC:

      • Using a terminal to connect via SSH.
      • Using the web portal

      The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

      If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of using the HPC-UGent web portal by reading Using the HPC-UGent web portal.

      The UAntwerpen-HPC clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the UAntwerpen-HPC. Access to the UAntwerpen-HPC is granted to anyone who can prove that they have access to the corresponding private key on their local computer.

      "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
      • an SSH public/private key pair can be seen as a lock and a key

      • the SSH public key is equivalent to a lock: you give it to the VSC and they put it on the door that gives access to your account.

      • the SSH private key is like a physical key: you don't hand it out to other people.

      • anyone who has the key (and the optional password) can unlock the door and log in to the account.

      • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

      Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). A typical Windows environment does not come with pre-installed software to connect and run command-line executables on a UAntwerpen-HPC. Some tools need to be installed on your Windows machine first, before we can start the actual work.

      "}, {"location": "account/#get-putty-a-free-telnetssh-client", "title": "Get PuTTY: A free telnet/SSH client", "text": "

      We recommend using the PuTTY tools package, which is freely available.

      You do not need to install PuTTY: you can download the PuTTY and PuTTYgen executables and run them directly. This can be useful in situations where you do not have the required permissions to install software on the computer you are using. Alternatively, an installation package is also available.

      You can download PuTTY from the official address: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html. You probably want the 64-bit version. If you can install software on your computer, you can use the \"Package files\"; if not, you can download and use putty.exe and puttygen.exe in the \"Alternative binary files\" section.

      The PuTTY package consists of several components, but we'll only use two:

      1. PuTTY: the Telnet and SSH client itself (to login, see Open a terminal)

      2. PuTTYgen: an RSA and DSA key generation utility (to generate a key pair, see Generate a public/private key pair)

      "}, {"location": "account/#generating-a-publicprivate-key-pair", "title": "Generating a public/private key pair", "text": "

      Before requesting a VSC account, you need to generate a pair of ssh keys. You need 2 keys, a public and a private key. You can visualise the public key as a lock to which only you have the key (your private key). You can send a copy of your lock to anyone without any problems, because only you can open it, as long as you keep your private key secure. To generate a public/private key pair, you can use the PuTTYgen key generator.

      Start PuTTYgen.exe and follow these steps:

      1. In Parameters (at the bottom of the window), choose \"RSA\" and set the number of bits in the key to 4096.

      2. Click on Generate. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field Public key for pasting into OpenSSH authorized_keys file.

      3. Next, it is advised to fill in the Key comment field to make the key more easily identifiable afterwards.

      4. Next, you should specify a passphrase in the Key passphrase field and retype it in the Confirm passphrase field. Remember, the passphrase protects the private key against unauthorised use, so it is best to choose one that is not too easy to guess but that you can still remember. Using a passphrase is not required, but we recommend you to use a good passphrase unless you are certain that your computer's hard disk is encrypted with a decent password. (If you are not sure your disk is encrypted, it probably isn't.)

      5. Save both the public and private keys in a folder on your personal computer (we recommend creating the folder \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\" and saving them there) with the buttons Save public key and Save private key. We recommend using the name \"id_rsa.pub\" for the public key, and \"id_rsa.ppk\" for the private key.

      If you use another program to generate a key pair, please remember that they need to be in the OpenSSH format to access the UAntwerpen-HPC clusters.
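
      For example, on Linux or macOS you can generate an equivalent 4096-bit RSA key pair in OpenSSH format with ssh-keygen (a sketch; you will be prompted to choose a passphrase):

      ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa\n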

      "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

      It is possible to set up an SSH agent in Windows. This is an optional configuration that helps you keep all your SSH keys (if you have several) stored in the same key ring, so you do not have to type the SSH key passphrase each time. The SSH agent is also necessary to enable SSH hops with key forwarding from Windows.

      Pageant is the SSH authentication agent used on Windows. It is available from the PuTTY installation package https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html or as a standalone binary.

      After the installation, just start the Pageant application in Windows; this will start the agent in the background. The agent icon will be visible in the Windows system tray.

      At this point the agent does not contain any private key. You should include the private key(s) generated in the previous section Generating a public/private key pair.

      1. Click on Add key

      2. Select the private key file generated in Generating a public/private key pair (\"id_rsa.ppk\" by default).

      3. Enter the SSH key passphrase you used when generating the key. After this step, the new key will be included in Pageant to manage the SSH connections.

      4. You can see the SSH key(s) available in the key ring by clicking on View Keys.

      5. You can change PuTTY setup to use the SSH agent. Open PuTTY and check Connection > SSH > Auth > Allow agent forwarding.

      Now you can connect to the login nodes as usual. The SSH agent will know which SSH key should be used, and you do not have to type the SSH passphrase each time; this is handled automatically by Pageant.
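
      For comparison, the OpenSSH equivalent of Pageant on Linux is ssh-agent; a minimal sketch to start the agent and add your key (assuming it is stored as ~/.ssh/id_rsa):

      eval $(ssh-agent)\nssh-add ~/.ssh/id_rsa\n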

      It is also possible to use WinSCP with Pageant, see https://winscp.net/eng/docs/ui_pageant for more details.

      "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

      Visit https://account.vscentrum.be/

      You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

      Select \"Universiteit Antwerpen\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

      Click Confirm

      You will now be taken to the authentication page of your institute.

      The site is only accessible from within the University of Antwerp domain, so the page won't load from, e.g., home. However, you can also get external access to the University of Antwerp domain using VPN. We refer to the Pintra pages of the ICT Department for more information.

      "}, {"location": "account/#users-of-the-antwerp-university-association-auha", "title": "Users of the Antwerp University Association (AUHA)", "text": "

      All users (researchers, academic staff, etc.) from the higher education institutions associated with University of Antwerp can get a VSC account via the University of Antwerp. There is not yet an automated form to request your personal VSC account.

      Please e-mail the UAntwerpen-HPC staff to get an account (see Contacts information). You will have to provide a public ssh key generated as described above. Please attach your public key (i.e., the file named id_rsa.pub), which you will normally find in the .ssh subdirectory within your home directory (i.e., /Users/<username>/.ssh/id_rsa.pub).

      After you log in using your University of Antwerp login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

      This file should have been stored in the directory \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\"

      After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

      "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

      Within one day, you should receive a Welcome e-mail with your VSC account details.

      Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc20167\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

      Now, you can start using the UAntwerpen-HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

      "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

      In case you are connecting from different computers to the login nodes, it is advised to use a separate SSH public key for each computer. Follow these steps:

      1. Create a new public/private SSH key pair from Putty. Repeat the process described in section\u00a0Generate a public/private key pair.

      2. Go to https://account.vscentrum.be/django/account/edit

      3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

      4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

      5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

      "}, {"location": "account/#computation-workflow-on-the-uantwerpen-hpc", "title": "Computation Workflow on the UAntwerpen-HPC", "text": "

      A typical Computation workflow will be:

      1. Connect to the UAntwerpen-HPC

      2. Transfer your files to the UAntwerpen-HPC

      3. Compile your code and test it

      4. Create a job script

      5. Submit your job

      6. Wait while

        1. your job gets into the queue

        2. your job gets executed

        3. your job finishes

      7. Move your results

      We'll take you through the different tasks one by one in the following chapters.

      "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

      AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

      See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

      "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

      This chapter focuses specifically on the use of AlphaFold on the UAntwerpen-HPC. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

      • AlphaFold website: https://alphafold.com/
      • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
      • AlphaFold FAQ: https://alphafold.com/faq
      • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
      • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
      • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
        • recording available on YouTube
        • slides available here (PDF)
        • see also https://www.vscentrum.be/alphafold
      "}, {"location": "alphafold/#using-alphafold-on-uantwerpen-hpc", "title": "Using AlphaFold on UAntwerpen-HPC", "text": "

      Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

      $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

      To use AlphaFold, you should load a particular module, for example:

      module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

      We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

      Warning

      When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

      Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

      $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

      The directories located there indicate when the data was downloaded, which leaves room for providing updated datasets later.

      As of writing this documentation, the latest version is 20230310.

      Info

      The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

      The AlphaFold installations we provide have been modified a bit to facilitate usage on the UAntwerpen-HPC.

      "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

      The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

      export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

      Use newest version

      Do not forget to replace 20230310 with a more up-to-date version if one is available.

      "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

      AlphaFold provides a script called run_alphafold.py.

      A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

      The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

      Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

      For more information about the script and options see this section in the official README.

      READ README

      It is strongly advised to read the official README provided by DeepMind before continuing.

      "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

      The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

      Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

      Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
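
      For example, to let both tools use 8 cores, add the following to your job script before running alphafold (a sketch; whether this helps depends on your workload, see the note below):

      export ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n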

      Info

      Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

      "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

      The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding jobscripts are available here.

      Using --db_preset=full_dbs, the following runtime data was collected:

      • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
      • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
      • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
      • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

      This highlights a couple of important attention points:

      • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
      • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
      • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

      With --db_preset=casp14, it is clearly more demanding:

      • On doduo, with 24 cores (1 node): still running after 48h...
      • On joltik, 1 V100 GPU + 8 cores: 4h 48min

      This highlights the difference between CPU and GPU performance even more.

      "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

      The following example comes from the official Examples section in the Alphafold README. The run command is slightly different (see above: Running AlphaFold).

      Do not forget to set up the environment (see above: Setting up the environment).

      "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

      Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

      >sequence_name\n<SEQUENCE>\n

      Then run the following command in the same directory:

      alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

      See AlphaFold output, for information about the outputs.

      Info

      For more scenarios see the example section in the official README.

      "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

      The following two example job scripts can be used as a starting point for running AlphaFold.

      The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

      To run the job scripts you need to create a file named T1050.fasta with the following content:

      >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
      source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

      "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

      Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

      Swap to the joltik GPU before submitting it:

      module swap cluster/joltik\n
      AlphaFold-gpu-joltik.sh
      #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
      "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

      Jobscript that runs AlphaFold on CPU using 24 cores on one node.

      AlphaFold-cpu-doduo.sh
      #!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

      In case of problems or questions, don't hesitate to contact us at hpc@uantwerpen.be.

      "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

      Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

      One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

      For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

      This documentation only covers aspects of using Apptainer on the UAntwerpen-HPC infrastructure.

      "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

      Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to avoid the use of Apptainer impacting other users on the system.

      The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

      In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

      If these limitations are a problem for you, please let us know via hpc@uantwerpen.be.

      "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

      All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

      "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

      Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the UAntwerpen-HPC infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

      Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example to make an Apptainer/Singularity container image:

      # avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filessytem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# mv container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
      "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

      For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

      We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

      "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

      Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

      cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

      Create a job script like:

      #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

      Create an example my_script.sh (the script referenced in the job script above):

      #!/bin/bash\n\n# prime factors\nfactor 1234567\n
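
      Make sure the script is executable before submitting the job (a sketch; jobscript.pbs is a placeholder name for the job script you created above):

      chmod +x ~/my_script.sh\nqsub jobscript.pbs\n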
      "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

      We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

      Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

      cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
      #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

      You can download linear_regression.py from the official Tensorflow repository.

      "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

      It is also possible to execute MPI jobs within a container, but the following requirements apply:

      • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

      • Use modules within the container (install the environment-modules or lmod package in your container)

      • Load the required module(s) before apptainer execution.

      • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

      Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

      cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

      For example to compile an MPI example:

      module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

      Example MPI job script:

      #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
      "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
      1. Before starting, you should always check:

        • Are there any errors in the script?

        • Are the required modules loaded?

        • Is the correct executable used?

      2. Check your compute requirements upfront, and request the correct resources in your batch job script.

        • Number of requested cores

        • Amount of requested memory

        • Requested network type

      3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

      4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

      5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

      6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted from with cd $PBS_O_WORKDIR is the first thing that needs to be done. You will have your default environment, so don't forget to load the software with module load (see the sketch after this list).

      7. In case your job is not running, use \"checkjob\". It will show why your job is not yet running. Sometimes commands might time out when the scheduler is overloaded.

      8. Submit your job and wait (be patient) ...

      9. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

      10. The runtime is limited by the maximum walltime of the queues.

      11. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

      12. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

      13. And above all, do not hesitate to contact the UAntwerpen-HPC staff at hpc@uantwerpen.be. We're here to help you.
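
      The points above about cd $PBS_O_WORKDIR, module load and the local scratch come together in a job script along these lines (a minimal sketch; the module name, program and file names are placeholders):

      #!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=1:00:00\n\n# load the software you need (placeholder module)\nmodule load foss\n\n# start from the directory the job was submitted from\ncd $PBS_O_WORKDIR\n\n# stage input data on the fast local scratch of the node (placeholder file names)\ncp input.dat $VSC_SCRATCH_NODE/\ncd $VSC_SCRATCH_NODE\n\n# run the (placeholder) program\n$PBS_O_WORKDIR/my_program input.dat > output.dat\n\n# copy the results back before the job ends\ncp output.dat $PBS_O_WORKDIR/\n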

      "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

      All nodes in the UAntwerpen-HPC cluster are running the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a specific version of RedHat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the UAntwerpen-HPC first must be compiled for CentOS Linux release 7.8.2003 (Core). It also means that you first have to install all the required external software packages on the UAntwerpen-HPC.

      Most commonly used compilers are already pre-installed on the UAntwerpen-HPC and can be used straight away. Also, many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

      "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-uantwerpen-hpc", "title": "Check the pre-installed software on the UAntwerpen-HPC", "text": "

      In order to check all the available modules and their version numbers, which are pre-installed on the UAntwerpen-HPC enter:

      $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

      Or when you want to check whether some specific software, some compiler or some application (e.g., LAMMPS) is installed on the UAntwerpen-HPC.

      $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

      As you may not be aware of the capital letters in the module name, we performed a case-insensitive search with the \"-i\" option.

      When your required application is not available on the UAntwerpen-HPC please contact any UAntwerpen-HPC member. Be aware of potential \"License Costs\". \"Open Source\" software is often preferred.

      "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

      To port a software-program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., RedHat Enterprise Linux on our UAntwerpen-HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

      In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

      In some cases software, usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

      Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

      Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

      Porting your code to the CentOS Linux release 7.8.2003 (Core) platform is the responsibility of the end-user.

      "}, {"location": "compiling_your_software/#compiling-and-building-on-the-uantwerpen-hpc", "title": "Compiling and building on the UAntwerpen-HPC", "text": "

      Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

      All the UAntwerpen-HPC nodes run the same version of the Operating System, i.e. CentOS Linux release 7.8.2003 (Core). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

      A typical process looks like:

      1. Copy your software to the login-node of the UAntwerpen-HPC

      2. Start an interactive session on a compute node;

      3. Compile it;

      4. Test it locally;

      5. Generate your job scripts;

      6. Test it on the UAntwerpen-HPC

      7. Run it (in parallel);

      We assume you've copied your software to the UAntwerpen-HPC. The next step is to request your private compute node.

      $ qsub -I\nqsub: waiting for job 433253.leibniz to start\n
      "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

      Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

      cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

      We now list the directory and explore the contents of the \"hello.c\" program:

      $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

      hello.c
      /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include \"stdio.h\"\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\n}\n

      The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

      We first need to compile this C-file into an executable with the gcc-compiler.

      First, check the command line options for \"gcc\" (GNU C-Compiler), then compile. The -O2 option enables a moderate level of optimization when compiling the code: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

      $ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc20167 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc20167  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc20167  130 Sep 16 11:39 hello.pbs*\n

      A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

      Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

      $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

      It seems to work; now run it on the UAntwerpen-HPC:

      qsub hello.pbs\n

      "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
      cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

      List the directory and explore the contents of the \"mpihello.c\" program:

      $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

      mpihello.c
      /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nmain(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\n}\n

      The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

      Next, check the command line options for \"mpicc\" (the GNU C-Compiler with MPI extensions), then compile and list the contents of the directory again:

      mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

      A new file \"hello\" has been created. Note that this program has \"execute\" rights.

      Let's test this program on the \"login\" node first:

      $ ./mpihello\nHello World from Node 0.\n

      It seems to work; now run it on the UAntwerpen-HPC.

      qsub mpihello.pbs\n
      "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

      We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

      cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

      We will compile this C/MPI file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

      module purge\nmodule load intel\n

      Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

      mpiicc -o mpihello mpihello.c\nls -l\n

      Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

      $ ./mpihello\nHello World from Node 0.\n

      It seems to work; now run it on the UAntwerpen-HPC.

      qsub mpihello.pbs\n

      Note: The Antwerp University Association (AUHA) only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

      Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Hereafter the overview for C, C++ and Fortran compilers.

      |  | Sequential Program (GNU) | Sequential Program (Intel) | Parallel Program with MPI (GNU) | Parallel Program with MPI (Intel) |
      | --- | --- | --- | --- | --- |
      | C | gcc | icc | mpicc | mpiicc |
      | C++ | g++ | icpc | mpicxx | mpiicpc |
      | Fortran | gfortran | ifort | mpif90 | mpiifort |
      "}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

      Before you can really start using the UAntwerpen-HPC clusters, there are several things you need to do or know:

      1. You need to log on to the cluster using an SSH client to one of the login nodes or by using the HPC web portal. This will give you command-line access. A standard web browser like Firefox or Chrome for the web portal will suffice.

      2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

      3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

      4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

      "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

      Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

      VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

      All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

      • Use a VPN connection to connect to the University of Antwerp network (recommended).

      • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your University of Antwerp account.

        • While this web connection is active new SSH sessions can be started.

        • Active SSH sessions will remain active even when this web page is closed.

      • Contact your HPC support team (via hpc@uantwerpen.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

      Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

      ssh_exchange_identification: read: Connection reset by peer\n
      "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

      The remaining content in this chapter is primarily focused for people utilizing a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

      If you have any issues connecting to the UAntwerpen-HPC after you've followed these steps, see Issues connecting to login node to troubleshoot. When connecting from outside Belgium, you need a VPN client to connect to the network first.

      "}, {"location": "connecting/#open-a-terminal", "title": "Open a Terminal", "text": "

      You've generated a public/private key pair with PuTTYgen and have an approved account on the VSC clusters. The next step is to setup the connection to (one of) the UAntwerpen-HPC.

      In the screenshots, we show the setup for user \"vsc20167\" connecting to the UAntwerpen-HPC cluster via the login node \"login.hpc.uantwerpen.be\".

      1. Start the PuTTY executable putty.exe in your directory C:\\Program Files (x86)\\PuTTY and the configuration screen will pop up. As you will often use the PuTTY tool, we recommend adding a shortcut on your desktop.

      2. Within the category <Session>, in the field <Host Name>, enter the name of the login node of the cluster (i.e., \"login.hpc.uantwerpen.be\") you want to connect to.

      3. In the category Connection > Data, in the field Auto-login username, put in <vsc20167> , which is your VSC username that you have received by e-mail after your request was approved.

      4. In the category Connection > SSH > Auth, in the field Private key file for authentication click on Browse and select the private key (i.e., \"id_rsa.ppk\") that you generated and saved above.

      5. In the category Connection > SSH > X11, click the Enable X11 Forwarding checkbox.

      6. Now go back to <Session>, and fill in \"Leibniz\" in the Saved Sessions field and press Save to store the session information.

      7. Now pressing Open will open a terminal window and ask for your passphrase.

      8. If this is your first time connecting, you will be asked to verify the authenticity of the login node. Please see section\u00a0Warning message when first connecting to new host on how to do this.

      9. After entering your correct passphrase, you will be connected to the login-node of the UAntwerpen-HPC.

      10. To check you can now \"Print the Working Directory\" (pwd) and check the name of the computer, where you have logged in (hostname):

        $ pwd\n/user/antwerpen/201/vsc20167\n$ hostname -f\nln2.leibniz.uantwerpen.vsc\n
      11. For future PuTTY sessions, just select your saved session (i.e. \"Leibniz\") from the list, Load it and press Open.

      Congratulations, you're on the UAntwerpen-HPC infrastructure now! To find out where you have landed you can print the current working directory:

      $ pwd\n/user/antwerpen/201/vsc20167\n

      Your new private home directory is \"/user/antwerpen/201/vsc20167\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the UAntwerpen-HPC.

      $ cd /apps/antwerpen/tutorials\n$ ls\nIntro-HPC/\n

      This directory currently contains all training material for the Introduction to the UAntwerpen-HPC. More relevant training material to work with the UAntwerpen-HPC can always be added later in this directory.

      You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands.

      As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

      $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

      This directory contains:

      1. This HPC Tutorial (in either a Mac, Linux or Windows version).

      2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

      cd examples\n

      Tip

      Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

      Tip

      For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

      The first action is to copy the contents of the UAntwerpen-HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

      cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n
      Upon connection, you will get a welcome message containing your last login timestamp and some pointers to information about the system. On Leibniz, the system will also show your disk quota.

      Last login: Mon Feb  2 17:58:13 2015 from mylaptop.uantwerpen.be\n\n---------------------------------------------------------------\n\nWelcome to LEIBNIZ !\n\nUseful links:\n  https://vscdocumentation.readthedocs.io\n  https://vscdocumentation.readthedocs.io/en/latest/antwerp/tier2_hardware.html\n  https://www.uantwerpen.be/hpc\n\nQuestions or problems? Do not hesitate and contact us:\n  hpc@uantwerpen.be\n\nHappy computing!\n\n---------------------------------------------------------------\n\nYour quota is:\n\n                   Block Limits\n   Filesystem       used      quota      limit    grace\n   user             740M         3G       3.3G     none\n   data           3.153G        25G      27.5G     none\n   scratch        12.38M        25G      27.5G     none\n   small          20.09M        25G      27.5G     none\n\n                   File Limits\n   Filesystem      files      quota      limit    grace\n   user            14471      20000      25000     none\n   data             5183     100000     150000     none\n   scratch            59     100000     150000     none\n   small            1389     100000     110000     none\n\n---------------------------------------------------------------\n

      You can exit the connection at anytime by entering:

      $ exit\nlogout\nConnection to login.hpc.uantwerpen.be closed.\n

      tip: Setting your Language right

      You may encounter a warning message similar to the following one when connecting:

      perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
      or any other error message complaining about the locale.

      This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

      LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

      A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.
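
      A minimal sketch of a possible fix, assuming your local machine provides the en_US.UTF-8 locale (check with locale -a): add the following lines to the shell start-up file on your local machine (e.g., ~/.bashrc) and open a new terminal before reconnecting.

      export LC_ALL=en_US.UTF-8\nexport LANG=en_US.UTF-8\n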

      "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

      Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back.

      "}, {"location": "connecting/#winscp", "title": "WinSCP", "text": "

      To transfer files to and from the cluster, we recommend the use of WinSCP, a graphical file management tool which can transfer files using secure protocols such as SFTP and SCP. WinSCP is freely available from http://www.winscp.net.

      To transfer your files using WinSCP,

      1. Open the program

      2. The Login menu is shown automatically (if it is closed, click New Session to open it again). Fill in the necessary fields under Session

        1. Click New Site.

        2. Enter \"login.hpc.uantwerpen.be\" in the Host name field.

        3. Enter your \"vsc-account\" in the User name field.

        4. Select SCP as the file protocol.

        5. Note that the password field remains empty.

        1. Click Advanced....

        2. Click SSH > Authentication.

        3. Select your private key in the field Private key file.

      3. Press the Save button to save the session under Session > Sites for future access.

      4. Finally, when clicking on Login, you will be asked for your key passphrase.

      The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

      Make sure the fingerprint in the alert matches one of the following:

      - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- ssh-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

      If it does, press Yes; if it doesn't, please contact hpc@uantwerpen.be.

      Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

      Now, try out whether you can transfer an arbitrary file from your local machine to the HPC and back.
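
      If you prefer a command line over a graphical client, a minimal sketch using scp from a local terminal with an OpenSSH client (replace vsc20167 with your own VSC account and file.txt with an arbitrary file):

      # local machine -> your home directory on the HPC\nscp file.txt vsc20167@login.hpc.uantwerpen.be:~/\n# HPC -> current local directory\nscp vsc20167@login.hpc.uantwerpen.be:~/file.txt .\n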

      "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

      See the section on rsync in chapter 5 of the Linux intro manual.

      "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

      It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

      For instance, if you want to switch to the login node named ln2.leibniz.uantwerpen.vsc, you can use the following command while you are connected to the ln1.leibniz.uantwerpen.vsc login node on the HPC:

      ssh ln2.leibniz.uantwerpen.vsc\n
      This is also possible the other way around.

      If you want to find out which login host you are connected to, you can use the hostname command.

      $ hostname\nln2.leibniz.uantwerpen.vsc\n$ ssh ln1.leibniz.uantwerpen.vsc\n\n$ hostname\nln1.leibniz.uantwerpen.vsc\n

      Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects (see the short tmux sketch below). You can find more information on how to use these tools here (or in other online sources):

      • screen
      • tmux
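
      As an illustration, a minimal tmux workflow (a sketch; the session name \"work\" is arbitrary):

      tmux new -s work          # start a new named session\n# ... run your commands, then detach with Ctrl-b d ...\ntmux ls                   # after reconnecting, list existing sessions\ntmux attach -t work       # re-attach to the session\n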
      "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

      It is possible to run automated cron scripts as a regular user on the UGent login nodes. Because of the high-availability setup, you should always add your cron scripts on the same login node, to avoid duplicated cron jobs.

      In order to create a new cron script, first log in to an HPC-UGent login node as usual with your vsc user's account (see section Connecting).

      Check whether any cron script is already set up on the current login node with:

      crontab -l\n

      At this point you can add or edit cron scripts (using the vi editor) by running:

      crontab -e\n
      "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
       15 5 * * * ~/runscript.sh >& ~/job.out\n

      where runscript.sh has these lines in this example:

      runscript.sh
      #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

      In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.
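
      For reference, the five scheduling fields in a crontab line are, from left to right: minute (0-59), hour (0-23), day of month (1-31), month (1-12) and day of week (0-7, where both 0 and 7 mean Sunday). The example above can thus be read as:

      # m  h  dom mon dow  command\n 15  5  *   *   *    ~/runscript.sh >& ~/job.out\n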

      Please note that you should log in to the same login node to edit your previously created crontab tasks. If you end up on a different login node, you can always jump from one to another with:

      ssh gligar07    # or gligar08\n
      "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

      You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

      EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

      "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

      For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

      • applying custom patches to the software that only you or your group are using

      • evaluating new software versions prior to requesting a central software installation

      • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

      "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

      Before you use EasyBuild, you need to configure it:

      "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

      This is where EasyBuild can find software sources:

      export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
      • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

      • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

      "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

      This is the directory where EasyBuild will build software. To get good performance, it needs to be on a fast filesystem.

      export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

      On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.

      "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

      This is where EasyBuild will install the software (and accompanying modules) to.

      For example, to let it use $VSC_DATA/easybuild, use:

      export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

      Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

      Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

      To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.

      "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

      Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

      module load EasyBuild\n
      "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

      EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

      $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

      For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

      eb example-1.2.1-foss-2024a.eb --robot\n
      "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

      To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

      To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

      eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

      To try to install example v1.2.5 with a different compiler toolchain:

      eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
      "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

      To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

      "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

      To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

      module use $EASYBUILD_INSTALLPATH/modules/all\n

      It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux.
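
      For example, a minimal sketch of such a .bashrc addition, combining the settings from the configuration sections above (adjust the paths to your own situation):

      # EasyBuild configuration (see the sections above)\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n\n# make modules installed with EasyBuild available for loading\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n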

      "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

      As UAntwerpen-HPC system administrators, we often observe that the UAntwerpen-HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised because a sequential program uses only one core on the node. Or users run I/O-intensive applications on nodes with \"slow\" network connections.

      Users often tend to run their jobs without specifying specific PBS job parameters. As such, their job will automatically use the default parameters, which are not necessarily the optimal ones. This can increase the run time of your application, but also block UAntwerpen-HPC resources for other users.

      Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the UAntwerpen-HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

      There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The UAntwerpen-HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

      Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

      Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

      This chapter shows you how to measure:

      1. Walltime
      2. Memory usage
      3. CPU usage
      4. Disk (storage) needs
      5. Network bottlenecks

      First, we allocate a compute node and move to our relevant directory:

      qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
      "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

      One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

      The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

      Test the time command:

      $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

      It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

      It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

      The walltime can be specified in a job script as:

      #PBS -l walltime=3:00:00:00\n

      or on the command line

      qsub -l walltime=3:00:00:00\n

      It is recommended to always specify the walltime for a job.
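
      As a worked example with hypothetical numbers: suppose time reports a run time of about 2 hours and 10 minutes on the slowest compute node; adding a 20% margin gives roughly 2 hours and 36 minutes, so a reasonable request would be:

      #PBS -l walltime=2:40:00\n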

      "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

      In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

      The \"eat_mem\" application in the HPC examples directory just consumes and then releases memory, for the purpose of this test. It has one parameter, the amount of gigabytes of memory which needs to be allocated.

      First compile the program on your machine and then test it for 1 GB:

      $ gcc -o eat_mem eat_mem.c\n$ ./eat_mem 1\nConsuming 1 gigabyte of memory.\n
      "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

      The first point is to be aware of the available free memory on your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the \"-m\" option to see the results expressed in megabytes and the \"-t\" option to get totals.

      $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

      It is important to note the total amount of memory available on the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

      It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

      "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

      The \"Monitor\" tool monitors applications in terms of memory and CPU usage, as well as the size of temporary files. Note that currently only single node jobs are supported, MPI support may be added in a future release.

      To start using monitor, first load the appropriate module. Then we study the \"eat_mem.c\" program and compile it:

      $ module load monitor\n$ cat eat_mem.c\n$ gcc -o eat_mem eat_mem.c\n

      Starting a program to monitor is very straightforward; you just add the \"monitor\" command before the regular command line.

      $ monitor ./eat_mem 3\ntime (s) size (kb) %mem %cpu\nConsuming 3 gigabyte of memory.\n5  252900 1.4 0.6\n10  498592 2.9 0.3\n15  743256 4.4 0.3\n20  988948 5.9 0.3\n25  1233612 7.4 0.3\n30  1479304 8.9 0.2\n35  1723968 10.4 0.2\n40  1969660 11.9 0.2\n45  2214324 13.4 0.2\n50  2460016 14.9 0.2\n55  2704680 16.4 0.2\n60  2950372 17.9 0.2\n65  3167280 19.2 0.2\n70  3167280 19.2 0.2\n75  9264  0 0.5\n80  9264  0 0.4\n

      Whereby:

      1. The first column shows you the elapsed time in seconds. By default, all values will be displayed every 5\u00a0seconds.
      2. The second column shows you the used memory in kb. We note that the memory slowly increases up to just over 3\u00a0GB (3GB is 3,145,728\u00a0KB), and is released again.
      3. The third column shows the memory utilisation, expressed in percentages of the full available memory. At full memory consumption, 19.2% of the memory was being used by our application. With the free command, we have previously seen that we had a node of 16\u00a0GB in this example. 3\u00a0GB is indeed more or less 19.2% of the full available memory.
      4. The fourth column shows you the CPU utilisation, expressed in percentages of a full CPU load. As there are no computations done in our exercise, the value remains very low (i.e.\u00a00.2%).

      Monitor will write the CPU usage and memory consumption of the simulation to standard error.

      By default, monitor samples the program's metrics every 5 seconds. Since monitor's output may interfere with that of the program being monitored, it is often convenient to use a\u00a0log file. The latter can be specified as follows:

      $ monitor -l test1.log eat_mem 2\nConsuming 2 gigabyte of memory.\n$ cat test1.log\n

      For long-running programs, it may be convenient to limit the output to, e.g., the last minute of the programs' execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

      $ monitor -l test2.log -n 12 eat_mem 4\nConsuming 4 gigabyte of memory.\n

      Note that this option is only available when monitor writes its metrics to a\u00a0log file, not when standard error is used.

      The interval at\u00a0which monitor will show the metrics can be modified by specifying delta, the sample rate:

      $ monitor -d 1 ./eat_mem\nConsuming 3 gigabyte of memory.\n

      Monitor will now print the program's metrics every second. Note that the\u00a0minimum delta value is 1\u00a0second.

      Alternative options to monitor the memory consumption are the \"top\" or the \"htop\" command.

      top

      provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

      htop

      is similar to top, but shows the CPU-utilisation for all the CPUs in the machine and allows to scroll the list vertically and horizontally to see all processes and their full command lines.

      "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

      Once you have gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.
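
      As a worked example: the eat_mem 3 run shown earlier peaked at roughly 3,167,280 kB (about 3 GB); adding a margin of about 10% gives roughly 3.3 GB, which you would round up to a request of 4 GB, as in the directives below.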

      Sequential or single-node applications:

      The maximum amount of physical memory used by the job can be specified in a job script as:

      #PBS -l mem=4gb\n

      or on the command line

      qsub -l mem=4gb\n

      This setting is ignored if the number of nodes is not\u00a01.

      Parallel or multi-node applications:

      When you are running a parallel application over multiple cores, you can also specify the memory requirements per processor (pmem). This directive specifies the maximum amount of physical memory used by any process in the job.

      For example, if the job would run four processes and each would use up to 2 GB (gigabytes) of memory, then the memory directive would read:

      #PBS -l pmem=2gb\n

      or on the command line

      $ qsub -l pmem=2gb\n

      (and of course this would need to be combined with a CPU cores directive such as nodes=1:ppn=4). In this example, you request 8\u00a0GB of memory in total on the node.
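
      Putting both directives together, a minimal job script header for this example could look like the following sketch (the walltime value is an arbitrary illustration):

      #PBS -l nodes=1:ppn=4\n#PBS -l pmem=2gb\n#PBS -l walltime=1:00:00\n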

      "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

      Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are decently specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

      "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

      The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a parallelization strategy in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

      The file /proc/cpuinfo contains information about your CPU architecture: the number of CPUs, threads and cores, the CPU caches, the CPU family and model, and much more. So, if you want to detect how many cores are available on a specific machine:

      $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

      Or if you want to see it in a more readable format, execute:

      $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
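
      A quicker way to simply count the cores is nproc (part of GNU coreutils), or counting the processor lines; on the machine from the example above, both report 8:

      $ nproc\n8\n$ grep -c processor /proc/cpuinfo\n8\n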

      Note

      Unless you want information about the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

      In order to specify the number of nodes and the number of processors per node in your job script, use:

      #PBS -l nodes=N:ppn=M\n

      or with equivalent parameters on the command line

      qsub -l nodes=N:ppn=M\n

      This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

      Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

      "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

      The previously used \"monitor\" tool also shows the overall CPU load. The \"eat_cpu\" program performs a multiplication of two randomly filled (1500 \times 1500) matrices and is just written to consume a lot of \"cpu\".

      We first load the monitor modules, study the \"eat_cpu.c\" program and compile it:

      $ module load monitor\n$ cat eat_cpu.c\n$ gcc -o eat_cpu eat_cpu.c\n

      And then start to monitor the eat_cpu program:

      $ monitor -d 1 ./eat_cpu\ntime  (s) size (kb) %mem %cpu\n1  52852  0.3 100\n2  52852  0.3 100\n3  52852  0.3 100\n4  52852  0.3 100\n5  52852  0.3  99\n6  52852  0.3 100\n7  52852  0.3 100\n8  52852  0.3 100\n

      We notice that the program keeps its CPU nicely busy at 100%.

      Some processes spawn one or more sub-processes. In that case, the metrics shown by monitor are aggregated over the process and all of its sub-processes (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100%.

      Some (well, since this is a UAntwerpen-HPC Cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100%. When programs of this type are running on a computer with n cores, the CPU usage can go up to (\\text{n} \\times 100\\%).

      This could also be monitored with the htop command:

      htop\n
      Example output:
        1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

      The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the CPU utilisation per processor with monitor and htop.

      If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by top found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

      "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

      It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the maximum of the CPU resources that are assigned to you and make sure that no CPUs in your node are left idle without reason.

      But how can you maximise?

      1. Configure your software. (e.g., to exactly use the available amount of processors in a node)
      2. Develop your parallel program in a smart way.
      3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
      4. Correct your request for CPUs in your job script.
      "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

      On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

      The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

      The load averages differ from CPU percentage in two significant ways:

      1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
      2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
      "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

      What is the \"optimal load\" rule of thumb?

      The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load should be between 0.7 and 1.0 per processor.

      In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.
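
      A minimal sketch to relate the current load to the number of cores on a node (assuming the standard nproc, cut and bc utilities are available):

      cores=$(nproc)\nload1=$(cut -d ' ' -f 1 /proc/loadavg)    # 1-minute load average\necho \"1-minute load: $load1 on $cores cores\"\necho \"scale=2; $load1/$cores\" | bc        # load per core; roughly 0.7-1.0 is optimal\n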

      Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time, might be more than one per processor.

      The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

      1. When you are running computational intensive applications, one application per processor will generate the optimal load.
      2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

      The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking the highest throughput. There is, however, currently no way on the UAntwerpen-HPC to dynamically specify the maximum number of applications that run per core. The UAntwerpen-HPC scheduler will not launch more than one process per core.

      How the cores are spread out over CPUs does not matter as far as the load is concerned. Two quad-cores perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. It's all eight cores for these purposes.

      "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

      The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

      The uptime command will show us the average load:

      $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

      Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

      $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
      You can also read it in the htop command.

      "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

      It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the maximum of the CPU resources that are assigned to you and make sure that no CPUs in your node are left idle without reason.

      But how can you maximise?

      1. Profile your software to improve its performance.
      2. Configure your software (e.g., to exactly use the available amount of processors in a node).
      3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
      4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
      5. Correct your request for CPUs in your job script.

      And then check again.

      "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

      Some programs generate intermediate or output files, the size of which may also be a useful metric.

      Remember that your available disk space on the UAntwerpen-HPC online storage is limited, and that you have environment variables available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

      We first load the monitor modules, study the \"eat_disk.c\" program and compile it:

      $ module load monitor\n$ cat eat_disk.c\n$ gcc -o eat_disk eat_disk.c\n

      The monitor tool provides an option (-f) to display the size of one or more files:

      $ monitor -f $VSC_SCRATCH/test.txt ./eat_disk\ntime (s) size (kb) %mem %cpu\n5  1276  0 38.6 168820736\n10  1276  0 24.8 238026752\n15  1276  0 22.8 318767104\n20  1276  0 25 456130560\n25  1276  0 26.9 614465536\n30  1276  0 27.7 760217600\n...\n

      Here, the size of the file \"test.txt\" in directory $VSC_SCRATCH will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by \",\".

      It is important to be aware of the sizes of the file that will be generated, as the available disk space for each user is limited. We refer to section How much disk space do I get? on Quotas to check your quota and tools to find which files consumed the \"quota\".

      Several actions can be taken, to avoid storage problems:

      1. Be aware of all the files that are generated by your program. Also check out the hidden files.
      2. Check your quota consumption regularly.
      3. Clean up your files regularly.
      4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, you can move your files once to the VSC_DATA directories.
      5. Make sure your programs clean up their temporary files after execution.
      6. Move your output results to your own computer regularly.
      7. You can request more disk space from the UAntwerpen-HPC staff, but you will have to duly justify your request.
      "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

      Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is mostly an indication that your processes lose a lot of time with inter-process communication.

      Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend to request nodes with an \"InfiniBand\" network. The InfiniBand is a specialised high bandwidth, low latency network that enables large parallel jobs to run as efficiently as possible.

      The parameter to add in your job script would be:

      #PBS -l ib\n

      If, for some reason, a user is fine with the gigabit Ethernet network, they can specify:

      #PBS -l gbe\n
      "}, {"location": "fine_tuning_job_specifications/#some-more-tips-on-the-monitor-tool", "title": "Some more tips on the Monitor tool", "text": ""}, {"location": "fine_tuning_job_specifications/#command-lines-arguments", "title": "Command Lines arguments", "text": "

      Many programs, e.g., MATLAB, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

      $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m\n

      The use of -- will ensure that monitor does not get confused by MATLAB's -nojvm and -nodisplay options.

      "}, {"location": "fine_tuning_job_specifications/#exit-code", "title": "Exit Code", "text": "

      Monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

      When monitor terminates in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.
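
      A small sketch illustrating both behaviours (the value 200 and the unwritable log path are arbitrary choices for illustration):

      monitor ./eat_mem 2\necho $?    # prints the exit code of eat_mem, propagated by monitor\n\nexport MONITOR_EXIT_ERROR=200\nmonitor -l /nonexistent/test.log ./eat_mem 2\necho $?    # prints 200, since monitor cannot create the log file\n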

      "}, {"location": "fine_tuning_job_specifications/#monitoring-a-running-process", "title": "Monitoring a running process", "text": "

      It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

      $ monitor -p 18749\n

      Note that this feature can be (ab)used to monitor specific sub-processes.

      "}, {"location": "getting_started/", "title": "Getting Started", "text": "

      Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the UAntwerpen-HPC and submitting your very first job. We'll also walk you through the process step by step using a practical example.

      In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

      Before proceeding, read the introduction to HPC to gain an understanding of the UAntwerpen-HPC and related terminology.

      "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

      To get access to the UAntwerpen-HPC, visit Getting an HPC Account.

      If you have not used Linux before, please learn some basics first before continuing. (see Appendix C - Useful Linux Commands)

      "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
      1. Connect to the login nodes
      2. Transfer your files to the UAntwerpen-HPC
      3. Optional: compile your code and test it
      4. Create a job script and submit your job
      5. Wait for job to be executed
      6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

      We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

      "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

      There are two options to connect

      • Using a terminal to connect via SSH (for power users) (see First Time connection to the UAntwerpen-HPC)
      • Using the web portal

      Considering your operating system is Windows, it is recommended to use the web portal.

      The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

      See shell access when using the web portal, or connection to the UAntwerpen-HPC when using a terminal.

      Make sure you can get shell access to the UAntwerpen-HPC before proceeding with the next steps.

      Info

      If you are having problems, see the connection issues section on the troubleshooting page.

      "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

      Now that you can login, it is time to transfer files from your local computer to your home directory on the UAntwerpen-HPC.

      Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

      The HPC-UGent web portal provides a file browser that allows uploading files. For more information see the file browser section.

      Upload both files (run.sh and tensorflow_mnist.py) to your home directory and go back to your shell.

      Info

      As an alternative, you can use WinSCP (see our section)

      When running ls in your session on the UAntwerpen-HPC, you should see the two files listed in your home directory (~):

      $ ls ~\nrun.sh tensorflow_mnist.py\n

      When you do not see these files, make sure you uploaded the files to your home directory.

      "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

      Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

      A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

      Our job script looks like this:

      run.sh

      #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
      As you can see this job script will run the Python script named tensorflow_mnist.py.

      The jobs you submit are by default executed on cluster/{{ defaultcluster }}; you can swap to another cluster by issuing the following command.

      module swap cluster/{{ othercluster }}\n

      Tip

      When submitting jobs with a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

      This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

      $ qsub run.sh\n433253.leibniz\n

      This command returns a job identifier (433253.leibniz) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.

      Make sure you understand what the module command does

      Note that the module commands only modify environment variables. For instance, running module swap cluster/{{ othercluster }} will update your shell environment so that qsub submits a job to the {{ othercluster }} cluster, but our active shell session is still running on the login node.

      It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are being executed: they will still be run on the login node you are on.

      When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like {{ othercluster }}).

      For detailed information about module commands, read the running batch jobs chapter.

      "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

      Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

      You can get an overview of the active jobs using the qstat command:

      $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:00  Q {{ othercluster }}\n

      Eventually, after entering qstat again you should see that your job has started running:

      $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:01  R {{ othercluster }}\n

      If you don't see your job in the output of the qstat command anymore, your job has likely completed.

      Read this section on how to interpret the output.

      "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

      When your job finishes it generates 2 output files:

      • One for normal output messages (stdout output channel).
      • One for warning and error messages (stderr output channel).

      By default, these are located in the directory from which you issued qsub.

      In our example when running ls in the current directory you should see 2 new files:

      • run.sh.o433253.leibniz, containing normal output messages produced by job 433253.leibniz;
      • run.sh.e433253.leibniz, containing errors and warnings produced by job 433253.leibniz.

      Info

      run.sh.e433253.leibniz should be empty (no errors or warnings).

      Use your own job ID

      Replace 433253.leibniz with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

      When examining the contents of run.sh.o433253.leibniz you will see something like this:

      Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

      Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

      Warning

      When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

      For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

      "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
      • Running interactive jobs
      • Running jobs with input/output data
      • Multi core jobs/Parallel Computing
      • Interactive and debug cluster

      For more examples see Program examples and Job script examples

      "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

      To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

      module swap cluster/joltik\n

      To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

      module swap cluster/accelgor\n

      Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

      "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

      To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

      Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@uantwerpen.be.

      "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

      See https://www.ugent.be/hpc/en/infrastructure.

      "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

      There are 2 main ways to ask for GPUs as part of a job:

      • Either as a node property (similar to the number of cores per node specified via ppn), using -l nodes=X:ppn=Y:gpus=Z (where ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z form is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want full control, or in multi-node cases like MPI jobs. If you just use -l gpus without specifying a number, you get 1 GPU by default (see the sketches after this list).

      • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.
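
      To illustrate the two notations described above, a few sketches of resource requests (job.sh is a placeholder job script and the values are only examples):

      # 1 node with 8 cores and 1 GPU on that node\nqsub -l nodes=1:ppn=8:gpus=1 job.sh\n\n# 1 GPU on a single node, with the default number of cores per GPU\nqsub -l gpus=1 job.sh\n\n# 2 GPUs as a separate resource; they are not guaranteed to be on the same node\nqsub --gpus 2 job.sh\n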

      Some background:

      • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

      • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

      "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

      Some important attention points:

      • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

      • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

      • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e., it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

      • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

      "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

      Use module avail to check for centrally installed software.

      The subsections below only cover a couple of installed software packages, more are available.

      "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

      Please consult module avail GROMACS for a list of installed versions.

      "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

      Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

      Please consult module avail Horovod for a list of installed versions.

      Horovod supports TensorFlow, Keras, PyTorch and MXNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; we are not sure whether it handles placement and other aspects correctly.)

      At least for simple TensorFlow benchmarks, Horovod looks a bit faster than the usual auto-detected multi-GPU TensorFlow without Horovod, but it comes at the cost of the code modifications needed to use Horovod.

      "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

      Please consult module avail PyTorch for a list of installed versions.

      "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

      Please consult module avail TensorFlow for a list of installed versions.

      Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

      "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
      #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
      "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

      Please consult module avail AlphaFold for a list of installed versions.

      For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

      "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

      In case of questions or problems, please contact the UAntwerpen-HPC via hpc@uantwerpen.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

      "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

      The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

      This environment should be seen as an extension of, or even a replacement for, the login nodes, rather than as a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

      Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor), jobs on this cluster should normally start more or less immediately. The tradeoff is that the submitted jobs must not be performance-critical. This means that typical workloads for this cluster should be limited to:

      • Interactive jobs (see chapter\u00a0Running interactive jobs)

      • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

      • Jobs requiring few resources

      • Debugging programs

      • Testing and debugging job scripts

      "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

      To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

      module swap cluster/donphan\n

      Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).
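
      For example, submitting a small single-core test job and checking its status could look like this (test_job.sh is a placeholder for your own job script):

      module swap cluster/donphan\nqsub -l nodes=1:ppn=1 -l walltime=1:00:00 test_job.sh\nqstat\n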

      "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

      Some limits are in place for this cluster:

      • each user may have at most 5 jobs in the queue (both running and waiting to run);

      • at most 3 jobs per user can be running at the same time;

      • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

      In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.
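
      As a concrete, purely hypothetical illustration: on a node with 36 physical cores and about 250 GiB of usable memory, an overcommit factor of 6 would allow up to 6 x 36 = 216 requested cores, while the default memory per requested core drops from roughly 7 GiB (250 GiB / 36) to roughly 1.2 GiB (250 GiB / 216).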

      Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

      "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

      Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

      All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

      "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

      \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer at the frontline of contemporary processing capacity -- particularly in terms of speed of calculation and available memory.

      While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

      A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

      The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

      Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

      Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

      "}, {"location": "introduction/#what-is-the-uantwerpen-hpc", "title": "What is the UAntwerpen-HPC?", "text": "

      The UAntwerpen-HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

      The UAntwerpen-HPC relies on parallel-processing technology to offer University of Antwerp researchers an extremely fast solution for all their data processing needs.

      The UAntwerpen-HPC consists of:

      In technical terms ... in human terms:

      • over 280 nodes and over 11000 cores ...\u00a0or the equivalent of 2750 quad-core PCs

      • over 500 Terabyte of online storage ...\u00a0or the equivalent of over 60000 DVDs

      • up to 100 Gbit InfiniBand fiber connections ...\u00a0or allowing to transfer 3 DVDs per second

      The UAntwerpen-HPC currently consists of:

      Leibniz:

      1. 144 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM, 120 GB local disk

      2. 8 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM, 120 GB local disk

      3. 24 \"hopper\" compute nodes (recovered from the former Hopper cluster) with 2 10-core Intel E5-2680v2 CPUs (Ivy Bridge generation, 2.8 GHz), 256 GB memory, 500 GB local disk

      4. 2 GPGPU nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU, 120 GB local disk

      5. 1 vector computing node with 1 12-core Intel Xeon Gold 6126 (Skylake generation, 2.6 GHz), 96 GB RAM and 2 NEC SX-Aurora Vector Engines type 10B (per card 8 cores @1.4 GHz, 48 GB HBM2), 240 GB local disk

      6. 1 Xeon Phi node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon Phi 7220P PCIe card with 16 GB of RAM, 120 GB local disk

      7. 1 visualisation node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 256 GB RAM and with a NVIDIA P5000 GPU, 120 GB local disk

      The nodes are connected using an InfiniBand EDR network except for the \"hopper\" compute nodes that utilize FDR10 InfiniBand.

      Vaughan:

      1. 104 compute nodes with 2 32-core AMD Epyc 7452 (2.35 GHz) and 256 GB RAM, 240 GB local disk

      The nodes are connected using an InfiniBand HDR100 network.

      All the nodes in the UAntwerpen-HPC run under the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a clone of \"RedHat Enterprise Linux\", with cgroups support.

      Two tools perform the Job management and job scheduling:

      1. TORQUE: a resource manager (based on PBS);

      2. Moab: job scheduler and management tools.

      For maintenance and monitoring, we use:

      1. Ganglia: monitoring software;

      2. Icinga and Nagios: alert manager.

      "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

      The HPC infrastructure is not a magic computer that automatically:

      1. runs your PC-applications much faster for bigger problems;

      2. develops your applications;

      3. solves your bugs;

      4. does your thinking;

      5. ...

      6. allows you to play games even faster.

      The UAntwerpen-HPC does not replace your desktop computer.

      "}, {"location": "introduction/#is-the-uantwerpen-hpc-a-solution-for-my-computational-needs", "title": "Is the UAntwerpen-HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

      Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

      It is also possible to run programs that require user interaction (pushing buttons, entering input data, etc.) on the UAntwerpen-HPC. Although technically possible, the use of the UAntwerpen-HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the UAntwerpen-HPC staff can unveil whether the UAntwerpen-HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

      "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

      In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

      Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

      "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

      Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

      Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

      The two parallel programming paradigms most used in HPC are:

      • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

      • MPI for distributed memory systems (multiprocessing): on multiple nodes

      Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

      "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

      Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

      It is perfectly possible to also run purely sequential programs on the UAntwerpen-HPC.

      Running your sequential programs on the most modern and fastest computers in the UAntwerpen-HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the UAntwerpen-HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.

      "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

      You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, CentOS Linux release 7.8.2003 (Core).

      For the most common programming languages, a compiler is available on CentOS Linux release 7.8.2003 (Core). Supported and common programming languages on the UAntwerpen-HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

      Supported and commonly used compilers are GCC, Clang, J2EE and Intel.

      Commonly used software packages are:

      • in bioinformatics: beagle, Beast, bowtie, MrBayes, SAMtools

      • in chemistry: ABINIT, CP2K, Gaussian, Gromacs, LAMMPS, NWChem, Quantum Espresso, Siesta, VASP

      • in engineering: COMSOL, OpenFOAM, Telemac

      • in mathematics: JAGS, MATLAB, R

      • for visualisation: Gnuplot, ParaView.

      Commonly used libraries are Intel MKL, FFTW, HDF5, PETSc and Intel MPI, OpenMPI. Additional software can be installed \"on demand\". Please contact the UAntwerpen-HPC staff to see whether the UAntwerpen-HPC can handle your specific requirements.

      "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

      All nodes in the UAntwerpen-HPC cluster run under CentOS Linux release 7.8.2003 (Core), which is a specific version of RedHat Enterprise Linux. This means that all programs (executables) should be compiled for CentOS Linux release 7.8.2003 (Core).

      Users can connect from any computer in the University of Antwerp network to the UAntwerpen-HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the UAntwerpen-HPC.

      A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

      "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

      A typical workflow looks like:

      1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

      2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

      3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

      4. Create a job script and submit your job (see Running batch jobs)

      5. Get some coffee and be patient:

        1. Your job gets into the queue

        2. Your job gets executed

        3. Your job finishes

      6. Study the results generated by your jobs, either on the cluster or after downloading them locally.
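
      As a rough command-line sketch of these steps (the login host name, the vsc20167 user name and the file names are placeholders; see the referenced chapters for the exact procedure):

      # 1. connect to a login node (run this on your own machine)\nssh vsc20167@login.hpc.uantwerpen.be\n# 2. transfer your input files (run this on your own machine)\nscp input.txt vsc20167@login.hpc.uantwerpen.be:~/myproject/\n# 4. on the cluster: submit the job script and check its status\nqsub myjob.pbs\nqstat\n# 6. afterwards, copy the results back (run this on your own machine)\nscp vsc20167@login.hpc.uantwerpen.be:~/myproject/output.txt .\n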

      "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

      When you think that the UAntwerpen-HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the UAntwerpen-HPC cluster.

      Do not hesitate to contact the UAntwerpen-HPC staff for any help.

      1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

      "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

      This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

      • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

      • -m/-M: the -m option will send emails to the email address registered with your VSC account. Use the -M option only if you want the emails sent to a different address.

      • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

      • To use a situational parameter, remove one '#' at the beginning of the line.

      simple_jobscript.sh
      #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
      "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

      Here's an example of a single-core job script:

      single_core.sh
      #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
      1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

      2. A module for Python 3.6 is loaded, see also section Modules.

      3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

      4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

      5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to $VSC_DATA under a unique filename. For a list of possible storage locations, see subsection Pre-defined user directories.

      "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

      Here's an example of a multi-core job script that uses mympirun:

      multi_core.sh
      #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

      An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

      "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

      If you want to run a job but are not sure it will finish before the walltime runs out, and you want to copy data back before that happens, you have to stop the main command before the walltime expires and then copy the data back.

      This can be done with the timeout command. This command sets a time limit on how long a program may run; when this limit is exceeded, it kills the program. Here's an example job script using timeout:

      timeout.sh
      #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but time out after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

      The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

      example_program.sh
      #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
      "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

      A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plain text. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code execution with text and visual outputs makes it a useful tool for data analysis, machine learning and educational purposes.

      "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

      Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

      After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

      When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

      and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

      This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

      "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

      A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

      To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters > >_login Shell Access.

      We can see all available versions of the SciPy module by using module avail SciPy-bundle:

      $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

      Not all modules will work for every notebook; we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

      Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that the module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

      $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

      The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

      It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

      $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
      This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

      If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

      $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

      Now that we have found the right module for the notebook, add module load <module_name> in the Custom code field when creating the notebook, and you can make use of the packages within that notebook.
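
      For the numpy example above, the Custom code field would then simply contain the module load line for the bundle we verified earlier (use whatever compatible version module avail lists on your cluster):

      module load SciPy-bundle/2023.11-gfbf-2023b\n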

      "}, {"location": "known_issues/", "title": "Known issues", "text": "

      This page provides details on a couple of known problems, and the workarounds that are available for them.

      If you have any questions related to these issues, please contact the UAntwerpen-HPC.

      • Operation not permitted error for MPI applications
      "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

      When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

      Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

      This error means that an internal problem has occurred in OpenMPI.

      "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

      This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

      It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

      "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

      We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

      "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

      A workaround has been implemented in mympirun (version 5.4.0).

      Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

      module load vsc-mympirun\n

      and launch your MPI application using the mympirun command.

      For more information, see the mympirun documentation.
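
      A minimal sketch of a job script that applies this workaround (the application module name MyMPIApp and the binary my_mpi_app are placeholders for your own software):

      #!/bin/bash\n#PBS -l nodes=2:ppn=all\n#PBS -l walltime=1:00:00\n\n# the version-less load picks up the latest vsc-mympirun, which contains the workaround\nmodule load vsc-mympirun\n# load the module providing your MPI application (placeholder name)\nmodule load MyMPIApp\n\ncd $PBS_O_WORKDIR\nmympirun ./my_mpi_app\n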

      "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

      If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

      export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
      "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

      We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

      "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

      There are two important motivations to engage in parallel programming.

      1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

      2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

      On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that you can, in principle, split up your computations into groups and run each group on its own core.

      There are multiple ways to achieve parallel programming. The list below gives a (non-exhaustive) overview of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

      • Raw threads (pthreads, boost::threading, ...) — available language bindings: threading libraries are available for all common programming languages; limitations: threads are limited to shared memory systems, they are more often used on single-node systems than on the UAntwerpen-HPC, and thread management is hard.

      • OpenMP — available language bindings: Fortran/C/C++; limitations: limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelised by simple insertion of compiler directives; under the hood, threads are used. Hybrid approaches exist which use OpenMP to parallelise the workload on each node and MPI (see below) for communication between nodes.

      • Lightweight threads with clever scheduling (Intel TBB, Intel Cilk Plus) — available language bindings: C/C++; limitations: limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler, enabling the programmer to focus on the parallelisation itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes.

      • MPI — available language bindings: Fortran/C/C++, Python; limitations: applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication.

      • Global Arrays library — available language bindings: C/C++, Python; limitations: mimics a global address space on distributed memory systems by distributing arrays over many nodes and using one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

      Tip

      You can request more nodes/cores by adding the following line to your run script.

      #PBS -l nodes=2:ppn=10\n
      This queues a job that claims 2 nodes and 10 cores.

      Warning

      Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

      "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

      Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

      The advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

      Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

      Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

      Go to the example directory:

      cd ~/examples/Multi-core-jobs-Parallel-Computing\n

      Note

      If the example directory is not yet present, copy it to your home directory:

      cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

      Study the example first:

      T_hello.c
      /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 1;\n}\n

      And compile it (whilst including the thread library) and run and test it on the login-node:

      $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

      Now, run it on the cluster and check the output:

      $ qsub T_hello.pbs\n433253.leibniz\n$ more T_hello.pbs.o433253.leibniz\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

      Tip

      If you plan engaging in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

      "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

      OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

      An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

      Here is the general code structure of an OpenMP program:

      #include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

      "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

      By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

      "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

      Parallelising for loops is really simple (see code below). By default, loop iteration counters in OpenMP loop constructs (in this case the i variable) in the for loop are set to private variables.

      omp1.c
      /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

      And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

      $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

      Now run it in the cluster and check the result again.

      $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
      "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

      Using OpenMP you can specify something called a \"critical\" section of code. This is code that is performed by all threads, but is only performed one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, and you don't have to worry about things like other threads writing to that global variable at the same time (a collision).

      omp2.c
      /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

      And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

      $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

      Now run it in the cluster and check the result again.

      $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
      "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

      Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). Indeed we used this paradigm in the code example above, where we used the \"critical code\" directive to accomplish this. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to more easily implement this.

      omp3.c
      /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

      And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

      $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

      Now run it in the cluster and check the result again.

      $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
      "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

      There are a host of other directives you can issue using OpenMP.

      Some other clauses of interest are:

      1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

      2. nowait: threads will not wait until everybody is finished

      3. schedule(type, chunk) allows you to specify how tasks are spawned out to threads in a for loop. There are three types of scheduling you can specify

      4. if: allows you to parallelise only if a certain condition is met

      5. ...\u00a0and a host of others

      Tip

      If you plan engaging in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman Gabriele Jost and Ruud van der Pas Scientific and Engineering Computation. 2005.

      "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

      The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

      In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

      The process numbers 0, 1 and 2 represent the process rank and have greater or lesser significance depending on the processing paradigm. At the minimum, process 0 handles the input/output and determines which other processes are running.

      The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

      One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

      Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

      Study the MPI-programme and the PBS-file:

      mpi_hello.c
      /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
      mpi_hello.pbs
      #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

      and compile it:

      $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

      mpiicc is a wrapper around the Intel C compiler icc to compile MPI programs (see the chapter on compilation for details).

      Run the parallel program:

      $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc20167 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc20167 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc20167    0 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw------- 1 vsc20167  697 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw-r--r-- 1 vsc20167  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o433253.leibniz\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

      The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process has its own rank, the total number of processes in the world, and the ability to communicate between them either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

      MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without compilation for each size variation, although runtime decisions might vary depending on that absolute amount of concurrency available.

      Tip

      If you plan engaging in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheo. Morgan Kaufmann. 1996.

      "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

      A frequently occurring characteristic of scientific computations is their focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

      Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or with (ii) different input files.

      These parameter values can have many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The users want to run their job once for each instance of the parameter values.

      One option could be to launch a lot of separate, individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs; such huge amounts of small jobs create a lot of overhead and can slow down the whole cluster. It is better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

      The \"Worker framework\" has been developed to address this issue.

      It can handle many small jobs determined by:

      parameter variations

      i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

      job arrays

      i.e., each individual job gets a unique numeric identifier.

      Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

      However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

      "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

      First go to the right directory:

      cd ~/examples/Multi-job-submission/par_sweep\n

      Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

      $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

      For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

      par_sweep/weather
      #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

      A job script that would run this as a job for the first parameters (p01) would then look like:

      par_sweep/weather_p01.pbs
      #!/bin/bash\n\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

      When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

      To submit the job, the user would use:

       $ qsub weather_p01.pbs\n
      However, the user wants to run this program for many parameter instances, e.g., for 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, exported from a database, or written by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

      $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

      It has to contain the names of the variables on the first line, followed by the parameter instances, one per line (100 of them in the current example).
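
      As a minimal sketch, such a file could also be generated with a small bash loop (the loop below reproduces the illustrative values shown above):

      #!/bin/bash\n# write the header line with the variable names\necho \"temperature, pressure, volume\" > data.csv\n# append one parameter instance per line (values are illustrative)\nfor i in {0..99}; do\n  echo \"$((293 + i)), 1.0e5, $((107 - i))\" >> data.csv\ndone\n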

      In order to make our PBS script generic, it can be modified as follows:

      par_sweep/weather.pbs
      #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

      Note that:

      1. the parameter values 20, 1.05 and 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, whose names are specified on the first line of the \"data.csv\" file;

      2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

      3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

      The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., 4 hours to be on the safe side.

      The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

      $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 100\n433253.leibniz\n

      Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

      Warning

      When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

      module swap env/slurm/donphan\n

      instead of

      module swap cluster/donphan\n
      We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

      "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

      First go to the right directory:

      cd ~/examples/Multi-job-submission/job_array\n

      As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

      The following bash script would submit these jobs all one by one:

      #!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output_$i -i input_$i myprog.pbs\ndone\n

      As mentioned before, this would put a heavy burden on the job scheduler.

      Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

      Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

      The details are

      1. a job is submitted for each number in the range;

      2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

      3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

      The job could have been submitted using:

      qsub -t 1-100 my_prog.pbs\n

      The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

      To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

      A typical job script for use with job arrays would look like this:

      job_array/job_array.pbs
      #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

      In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

      Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

      $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file \\#99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

      For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file, adding a few lines to each output file along the way. The output computed by our \"test_set\" program will be written to the \"./output\" directory, in files output_1.dat, output_2.dat, ..., output_100.dat.

      job_array/test_set
      #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

      Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

      job_array/test_set.pbs
      #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

      Note that

      1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

      2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

      The job is now submitted as follows:

      $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n433253.leibniz\n

      The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

      Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

      $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n433253.leibniz  test_set.pbs  vsc20167          0 Q\n

      And you can now check the generated output files:

      $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
      "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

      Often, an embarrassingly parallel computation can be abstracted to three simple steps:

      1. a preparation phase in which the data is split up into smaller, more manageable chunks;

      2. on these chunks, the same algorithm is applied independently (these are the work items); and

      3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

      The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

      cd ~/examples/Multi-job-submission/map_reduce\n

      The script \"pre.sh\" prepares the data by creating 100 different input files, and the script \"post.sh\" aggregates (concatenates) the data.

      First study the scripts:

      map_reduce/pre.sh
      #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\n  echo \"This is input file #$i\" >  ./input/input_$i.dat\n  echo \"Parameter #1 = $i\" >>  ./input/input_$i.dat\n  echo \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\n  echo \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\n  echo \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
      map_reduce/post.sh
      #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

      Then one can submit a MapReduce style job as follows:

      $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n433253.leibniz\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

      Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

      "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

      The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute node; it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.
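
      As a sketch (the resource values are purely illustrative), scaling the parameter sweep example to two full nodes only requires adjusting the resource request in the batch template; the wsub command stays exactly the same:

      #!/bin/bash\n\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# submitted as before:\n# module load worker/1.6.12-foss-2021b\n# wsub -batch weather.pbs -data data.csv\n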

      The \"Worker Framework\" will be effective when

      1. work items, i.e., individual computations, are neither too short nor too long (i.e., from a few minutes to a few hours); and,

      2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

      "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

      Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log433253.leibniz, assuming the job's ID is 433253.leibniz. To keep an eye on the progress, one can use:

      tail -f run.pbs.log433253.leibniz\n

      Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

      watch -n 60 wsummarize run.pbs.log433253.leibniz\n

      This will summarise the log file every 60 seconds.

      "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

      Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

      #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 ./weather -t $temperature  -p $pressure  -v $volume\n

      Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
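
      For instance (a minimal sketch; the column name timelimit is hypothetical and only serves as an illustration), the CSV file could be extended with an extra column whose value is passed to timedrun in the job script:

      # first lines of the extended data.csv (illustrative):\n#   temperature, pressure, volume, timelimit\n#   293, 1.0e5, 107, 00:20:00\n#   294, 1.0e5, 106, 00:30:00\n\n# corresponding line in the job script:\ntimedrun -t $timelimit ./weather -t $temperature -p $pressure -v $volume\n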

      Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

      "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

      Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully and which remain to be computed. Suppose the job that did not complete all its work items had ID \"433253.leibniz\".

      wresume -jobid 433253.leibniz\n

      This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

      wresume -l walltime=1:30:00 -jobid 433253.leibniz\n

      Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not yet terminate, either successfully or with a reported failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

      wresume -jobid 433253.leibniz -retry\n

      By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

      "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

      This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

      $ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
      "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

      When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

      To check for the available versions of worker, use the following command:

      $ module avail worker\n
      1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

      "}, {"location": "mympirun/", "title": "Mympirun", "text": "

      mympirun is a tool that makes it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

      In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

      "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

      Before using mympirun, we first need to load its module:

      module load vsc-mympirun\n

      As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

      The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

      For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

      "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

      There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

      By default, mympirun starts one process per core on every node assigned to your job. So if you were assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.
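
      As a sketch of this default behaviour (the resource values are illustrative), a job script that requests two 16-core nodes and runs mympirun without any options will start 32 processes:

      #!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=00:10:00\ncd $PBS_O_WORKDIR\nmodule load vsc-mympirun\nmympirun ./mpi_hello    # starts 2 x 16 = 32 processes by default\n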

      "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

      This is the most commonly used option for controlling the number of processes.

      The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

      $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpi_hello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
      "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

      There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses double the number of processes it normally would; and --multi, which does the same as --double but takes a multiplier (instead of the implied factor 2 of --double).
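
      For illustration (a minimal sketch based on the descriptions above, again using the mpi_hello test program):

      mympirun --universe 8 ./mpi_hello    # start exactly 8 processes\nmympirun --double ./mpi_hello        # start twice the default number of processes\nmympirun --multi 3 ./mpi_hello       # start three times the default number of processes\n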

      See vsc-mympirun README for a detailed explanation of these options.

      "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

      You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

      $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
      "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

      In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC UAntwerpen-HPC infrastructure.

      "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

      There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

      • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

        • see also http://openfoam.com/history/
      • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

        • see also https://openfoam.org/download/history/
      • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

      Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

      "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

      The best practices outlined here focus specifically on the use of OpenFOAM on the VSC UAntwerpen-HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

      • OpenFOAM websites:

        • https://openfoam.com

        • https://openfoam.org

        • http://wikki.gridcore.se/foam-extend

      • OpenFOAM user guides:

        • https://www.openfoam.com/documentation/user-guide

        • https://cfd.direct/openfoam/user-guide/

      • OpenFOAM C++ source code guide: https://cpp.openfoam.org

      • tutorials: https://wiki.openfoam.com/Tutorials

      • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

      Other useful OpenFOAM documentation:

      • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

      • http://www.dicat.unige.it/guerrero/openfoam.html

      "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

      To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

      "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

      First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

      $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

      To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

      To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

      module load OpenFOAM/11-foss-2023a\n
      "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

      OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

      source $FOAM_BASH\n
      "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

      If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

      source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

      Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
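
      Putting these steps together, a minimal sketch of the environment preparation looks as follows (the module version is only an example; check module avail OpenFOAM for the versions that are actually available):

      # pick and load an OpenFOAM module (version is an example)\nmodule load OpenFOAM/11-foss-2023a\n# define the required OpenFOAM environment variables\nsource $FOAM_BASH\n# optionally, make the tutorial helper functions available\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n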

      "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

      If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

      unset FOAM_SIGFPE\n

      Note that this only prevents OpenFOAM from trapping floating-point exceptions, which would otherwise terminate the simulation. It does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating-point errors are still occurring.

      As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

      "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

      The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

      • generate the mesh;

      • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

      After running the simulation, some post-processing steps are typically performed:

      • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

      • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

      Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job that runs the actual simulation (on the HPC infrastructure or elsewhere), or as part of the job that runs the OpenFOAM simulation itself.

      Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

      One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

      For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

      "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

      For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

      "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

      When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

      You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.

      "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

      It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

      See Basic usage for how to get started with mympirun.

      To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

      export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

      Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
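
      For example (interFoam is just an example solver here, see also the job script further below):

      # instead of something like:\n#   mpirun -np 16 interFoam -parallel\n# use:\nmympirun interFoam -parallel\n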

      "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

      To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

      Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

      number of processor directories = 4 is not equal to the number of processors = 16\n

      In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

      • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

      • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar; a quick way to count them is shown below)
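
      A quick way to count the number of subdomains a case has been decomposed into (run from the case directory; the output shown corresponds to the example above, where the case was decomposed into 4 subdomains):

      $ ls -d processor* | wc -l\n4\n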

      See Controlling number of processes to control the number of processes mympirun will start.

      Starting fewer processes than the number of available cores is interesting if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has a significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.

      To visualise the processor domains, use the following command:

      mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

      and then load the VTK files generated in the VTK folder into ParaView.

      "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

      OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

      Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).

      • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc.\u00a0keywords;

      • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

      • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

      • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid that OpenFOAM re-reads each of the system/*Dict files at every time step;

      • if the results per individual time step are large, consider setting writeCompression to true;

      For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

      These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple of dozen of processor cores.

      "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

      See https://cfd.direct/openfoam/user-guide/compiling-applications/.

      "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

      Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

      OpenFOAM_damBreak.sh
      #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not available on victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
      "}, {"location": "program_examples/", "title": "Program examples", "text": "

      If you have not done so already copy our examples to your home directory by running the following command:

       cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

      ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

      Go to our examples:

      cd ~/examples/Program-examples\n

      Here, we have simply put together a number of examples for your convenience. We made an effort to put comments inside the source files, so the source code files are (or should be) self-explanatory.

      1. 01_Python

      2. 02_C_C++

      3. 03_Matlab

      4. 04_MPI_C

      5. 05a_OMP_C

      6. 05b_OMP_FORTRAN

      7. 06_NWChem

      8. 07_Wien2k

      9. 08_Gaussian

      10. 09_Fortran

      11. 10_PQS

      The above 2 OMP directories contain the following examples:

      C Files Fortran Files Description omp_hello.c omp_hello.f Hello world omp_workshare1.c omp_workshare1.f Loop work-sharing omp_workshare2.c omp_workshare2.f Sections work-sharing omp_reduction.c omp_reduction.f Combined parallel loop reduction omp_orphan.c omp_orphan.f Orphaned parallel loop reduction omp_mm.c omp_mm.f Matrix multiply omp_getEnvInfo.c omp_getEnvInfo.f Get and print environment information omp_bug* omp_bug* Programs with bugs and their solution

      Compile by any of the following commands:

      Language Commands C: icc -openmp omp_hello.c -o hello pgcc -mp omp_hello.c -o hello gcc -fopenmp omp_hello.c -o hello Fortran: ifort -openmp omp_hello.f -o hello pgf90 -mp omp_hello.f -o hello gfortran -fopenmp omp_hello.f -o hello
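
      As a minimal sketch (using the GCC variant from the table above; you may first need to load a compiler module, and the thread count of 4 is just an example), compiling and running one of the C examples could look like:

      cd ~/examples/Program-examples/05a_OMP_C\ngcc -fopenmp omp_hello.c -o hello\nexport OMP_NUM_THREADS=4\n./hello\n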

      Feel free to explore the examples.

      "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

      Remember to substitute the usernames, login nodes, file names, ... for your own.

      Login Login ssh vsc20167@login.hpc.uantwerpen.be Where am I? hostname Copy to UAntwerpen-HPC scp foo.txt vsc20167@login.hpc.uantwerpen.be: Copy from UAntwerpen-HPC scp vsc20167@login.hpc.uantwerpen.be:foo.txt Setup ftp session sftp vsc20167@login.hpc.uantwerpen.be Modules List all available modules module avail List loaded modules module list Load module module load example Unload module module unload example Unload all modules module purge Help on use of module module help Command Description qsub script.pbs Submit job with job script script.pbs qstat 12345 Status of job with ID 12345 showstart 12345 Possible start time of job with ID 12345 (not available everywhere) checkjob 12345 Check job with ID 12345 (not available everywhere) qstat -n 12345 Show compute node of job with ID 12345 qdel 12345 Delete job with ID 12345 qstat Status of all your jobs qstat -na Detailed status of your jobs + a list of nodes they are running on showq Show all jobs on queue (not available everywhere) qsub -I Submit Interactive job Disk quota Check your disk quota mmlsquota Check your disk quota nice show_quota.py Disk usage in current directory (.) du -h Worker Framework Load worker module module load worker/1.6.12-foss-2021b Don't forget to specify a version. To list available versions, use module avail worker/ Submit parameter sweep wsub -batch weather.pbs -data data.csv Submit job array wsub -t 1-100 -batch test_set.pbs Submit job array with prolog and epilog wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

      Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

      "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

      Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

      This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

      It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

      "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

      As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

      For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

      $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

      Initially there will be only one RHEL 9 login node. As needed a second one will be added.

      When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

      "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

      To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

      This includes (per user):

      • max. of 2 CPU cores in use
      • max. 8 GB of memory in use

      For more intensive tasks you can use the interactive and debug clusters through the web portal.

      "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

      The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

      However, there will be impact on the availability of software that is made available via modules.

      Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

      This includes all software installations on top of a compiler toolchain that is older than:

      • GCC(core)/12.3.0
      • foss/2023a
      • intel/2023a
      • gompi/2023a
      • iimpi/2023a
      • gfbf/2023a

      (or another toolchain with a year-based version older than 2023a)

      The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

      foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

      If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

      It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will provide more RHEL 9 nodes on other clusters to test on soon.

      "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

      We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

      cluster migration start migration completed on skitty Monday 30 September 2024 joltik October 2024 accelgor November 2024 gallade December 2024 donphan February 2025 doduo (default cluster) February 2025 login nodes switch February 2025

      Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to RHEL 9 login nodes will be done at the same time.

      We will keep this page up to date when more specific dates have been planned.

      Warning

      The planning above is subject to change; some clusters may get migrated later than originally planned.

      Please check back regularly.

      "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

      If you have any questions related to the migration to the RHEL 9 operating system, please contact the UAntwerpen-HPC.

      "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

      In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

      When you connect to the UAntwerpen-HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decides when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the UAntwerpen-HPC the entire time.

      The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

      "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

      Software installation and maintenance on a UAntwerpen-HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the UAntwerpen-HPC, which is able to easily activate or deactivate the software packages that you require for your program execution.

      "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

      The program environment on the UAntwerpen-HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

      All the software packages that are installed on the UAntwerpen-HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

      "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

      In order to administer the active software and their environment variables, the module system has been developed, which:

      1. Activates or deactivates software packages and their dependencies.

      2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

      3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

      4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

      5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

      This is all managed with the module command, which is explained in the next sections.

      There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

      "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

      A large number of software packages are installed on the UAntwerpen-HPC clusters. A list of all currently available software can be obtained by typing:

      module available\n

      It's also possible to execute module av or module avail, these are shorter to type and will do the same thing.

      This will give some output such as:

      $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

      Or, when you want to check whether some specific software, compiler or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

      $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

      As you might not be aware of the capital letters in the module name, we searched case-insensitively using the \"-i\" option.

      This gives a full list of software packages that can be loaded.

      The casing of module names is important: lowercase and uppercase letters matter in module names.

      "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

      The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

      Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, an MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

      E.g., foss/2024a is the first version of the foss toolchain in 2024.

      The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

      "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

      To \"activate\" a software package, you load the corresponding module file using the module load command:

      module load example\n

      This will load the most recent version of example.

      For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

However, you should specify a particular version to avoid surprises when newer versions are installed:

      module load secondexample/2.7-intel-2016b\n

      The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

      Modules need not be loaded one by one; the two module load commands can be combined as follows:

      module load example/1.2.3 secondexample/2.7-intel-2016b\n

      This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

      "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

      Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

      $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

      You can also just use the ml command without arguments to list loaded modules.

      It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

      "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

      To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

$ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

      To unload the secondexample module, you can also use ml -secondexample.

      Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

      "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

      In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

      module purge\n

      However, on some VSC clusters you may be left with a very empty list of available modules after executing module purge. On those systems, module av will show you a list of modules containing the name of a cluster or a particular feature of a section of the cluster, and loading the appropriate module will restore the module list applicable to that particular system.
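
A minimal sketch of that workflow, assuming your system uses such cluster modules (the name cluster/{{ defaultcluster }} is an assumption; check module av for the exact name on your system):

module purge\nmodule load cluster/{{ defaultcluster }}   # assumption: restores the module list for this cluster\n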

      "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

      Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

      Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

      module load example\n

      rather than

      module load example/1.2.3\n

      Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

      Consider the following example modules:

      $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

      Let's now generate a version conflict with the example module, and see what happens.

      $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

      Note: A module swap command combines the appropriate module unload and module load commands.

      "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

      With the module spider command, you can search for modules:

      $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

      It's also possible to get detailed information about a specific module:

      $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \nThis module can be loaded directly: module load example/1.2.3\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
      "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

      To get a list of all possible commands, type:

      module help\n

      Or to get more information about one specific module package:

      $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
      "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

      If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

      In each module command shown below, you can replace module with ml.

      First, load all modules you want to include in the collections:

      module load example/1.2.3 secondexample/2.7-intel-2016b\n

      Now store it in a collection using module save. In this example, the collection is named my-collection.

      module save my-collection\n

      Later, for example in a jobscript or a new session, you can load all these modules with module restore:

      module restore my-collection\n

      You can get a list of all your saved collections with the module savelist command:

      $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

      To get a list of all modules a collection will load, you can use the module describe command:

      $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

      To remove a collection, remove the corresponding file in $HOME/.lmod.d:

      rm $HOME/.lmod.d/my-collection\n
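
A collection is particularly handy in a job script; a minimal sketch (my-collection and my_program are placeholders):

#!/bin/bash -l\n#PBS -l walltime=1:00:00\n# restore the saved module collection before running the program\nmodule restore my-collection\ncd $PBS_O_WORKDIR\n./my_program\n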
      "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

      To see how a module would change the environment, you can use the module show command:

$ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

      It's also possible to use the ml show command instead: they are equivalent.

Here you can see that the Python/2.7.12-intel-2016b module comes with a whole bunch of extensions: numpy, scipy, ...

      You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

      If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

      "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

To check how many jobs are running in which queues, you can use the qstat -q command:

      $ qstat -q\nQueue            Memory CPU Time Walltime Node  Run Que Lm  State\n---------------- ------ -------- -------- ----  --- --- --  -----\ndefault            --      --       --      --    0   0 --   E R\nq72h               --      --    72:00:00   --    0   0 --   E R\nlong               --      --    72:00:00   --  316  77 --   E R\nshort              --      --    11:59:59   --   21   4 --   E R\nq1h                --      --    01:00:00   --    0   1 --   E R\nq24h               --      --    24:00:00   --    0   0 --   E R\n                                               ----- -----\n                                                337  82\n

      Here, there are 316 jobs running on the long queue, and 77 jobs queued. We can also see that the long queue allows a maximum wall time of 72 hours.

      "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

      You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

      You can also get this information in text form (per cluster separately) with the pbsmon command:

      $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

      "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

As an example, we will run a Perl script, which you will find in the examples subdirectory on the UAntwerpen-HPC. When you received an account on the UAntwerpen-HPC, a subdirectory with examples was automatically generated for you.

      Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

      cd\ncp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

      First go to the directory with the first examples by entering the command:

      cd ~/examples/Running-batch-jobs\n

      Each time you want to execute a program on the UAntwerpen-HPC you'll need 2 things:

The executable: the program to execute, together with its peripheral input files, databases and/or command options.

A batch job script, which will define the computer resource requirements of the program and the required additional software packages, and which will start the actual executable. The UAntwerpen-HPC needs to know:

      1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

      List and check the contents with:

      $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc20167 609 Sep 11 10:25 fibo.pl\n

      In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

      1. The Perl script calculates the first 30 Fibonacci numbers.

      2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

      We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

      On the command line, you would run this using:

      $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

Remark: Recall that you have now executed the Perl script locally on one of the login nodes of the UAntwerpen-HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. It is also not considered good practice to \"abuse\" the login nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

      fibo.pbs
      #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

      So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

      This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

      $ qsub fibo.pbs\n433253.leibniz\n

      The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"433253.leibniz \"); this is a unique identifier for the job and can be used to monitor and manage your job.

      Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

To facilitate this, you can use a pre-defined module collection which you can restore using module restore; see the section on Save and load collections of modules for more information.

      Your job is now waiting in the queue for a free workernode to start on.

      Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

      After your job was started, and ended, check the contents of the directory:

      $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc20167 vsc20167   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc20167 vsc20167    0 Feb 28 13:33 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 vsc20167 1010 Feb 28 13:33 fibo.pbs.o433253.leibniz\n-rwxrwxr-x 1 vsc20167 vsc20167  302 Feb 28 13:32 fibo.pl\n

      Explore the contents of the 2 new files:

      $ more fibo.pbs.o433253.leibniz\n$ more fibo.pbs.e433253.leibniz\n

These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('433253.leibniz' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script).

      "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

It is possible to submit jobs from within a job to a cluster different from the one your job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

To submit jobs to the {{ othercluster }} cluster, you can change only what is needed in your session environment by using module swap env/slurm/{{ othercluster }} instead of module swap cluster/{{ othercluster }}. The latter command also activates the software modules that are installed specifically for {{ othercluster }}, which may not be compatible with the system you are working on. By only swapping to env/slurm/{{ othercluster }}, jobs that are submitted will be sent to the {{ othercluster }} cluster. The same approach can be used to submit jobs to another cluster, of course.

      Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the {{ defaultcluster }} cluster, loading the cluster/{{ defaultcluster }} module corresponds to loading 3 different env/ modules:

| env/ module for {{ defaultcluster }} | Purpose |
|---|---|
| env/slurm/{{ defaultcluster }} | Changes $SLURM_CLUSTERS, which specifies the cluster where jobs are sent to. |
| env/software/{{ defaultcluster }} | Changes $MODULEPATH, which controls what software modules are available for loading. |
| env/vsc/{{ defaultcluster }} | Changes the set of $VSC_ environment variables that are specific to the {{ defaultcluster }} cluster. |

We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they do, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

We also recommend using a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
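
Putting these recommendations together, a typical session could look like this (a sketch; job.pbs is a placeholder):

$ module swap env/slurm/{{ othercluster }}\n$ qsub job.pbs\n$ module swap cluster/{{ defaultcluster }}   # reset the environment afterwards\n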

      "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

      Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

      qstat 12345\n

To show an estimated start time for your job (note that this may be very inaccurate; the margin of error on this figure can be bigger than 100%, since it is based on a sample of one). This command is not available on all systems.

      ::: prompt :::

      This is only a very rough estimate. Jobs may launch sooner than estimated if other jobs end faster than estimated, but may also be delayed if other higher-priority jobs enter the system.

      To show the status, but also the resources required by the job, with error messages that may prevent your job from starting:

      ::: prompt :::

      To show on which compute nodes your job is running, at least, when it is running:

      qstat -n 12345\n

To remove a job from the queue so that it will not run, or to stop a job that is already running:

      qdel 12345\n

      When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

      $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n433253.leibniz ....     mpi  vsc20167     0    Q short\n

      Here:

      Job ID the job's unique identifier

      Name the name of the job

      User the user that owns the job

      Time Use the elapsed walltime for the job

      Queue the queue the job is in

      The state S can be any of the following:

| State | Meaning |
|---|---|
| Q | The job is queued and is waiting to start. |
| R | The job is currently running. |
| E | The job is exiting after having run. |
| C | The job is completed after having run. |
| H | The job has a user or system hold on it and will not be eligible to run until the hold is removed. |

      User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.
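
If you placed a hold on a job yourself, you can also release it again; a minimal sketch, assuming the Torque/PBS qhold and qrls commands are available on your cluster:

$ qhold 12345   # put a user hold on job 12345\n$ qrls 12345    # release the user hold again\n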

      "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

      As we learned above, Moab is the software application that actually decides when to run your job and what resources your job will run on.

You can look at the queue by using the PBS qstat command or the Moab showq command. By default, qstat will display the queue ordered by job ID, whereas showq will display jobs grouped by their state (\"running\", \"idle\", or \"hold\") and then ordered by priority. Therefore, showq is often more useful. Note however that at some VSC sites, these commands show only your jobs, or may even be disabled so as not to reveal what other users are doing.

The showq command displays information about active (\"running\"), eligible (\"idle\"), blocked (\"hold\"), and/or recently completed jobs. To get a summary:

active jobs:   163\neligible jobs: 133\nblocked jobs:  243\n\nTotal jobs:    539\n

      And to get the full detail of all the jobs, which are in the system:

active jobs------------------------\nJOBID     USERNAME    STATE    PROCS   REMAINING            STARTTIME\n428024    vsc20167    Running      8     2:57:32   Mon Sep  2 14:55:05\n\n153 active jobs   1307 of 3360 processors in use by local jobs (38.90%)\n                   153 of  168 nodes active                   (91.07%)\n\neligible jobs----------------------\nJOBID     USERNAME    STATE    PROCS     WCLIMIT            QUEUETIME\n442604    vsc20167    Idle        48  7:00:00:00   Sun Sep 22 16:39:13\n442605    vsc20167    Idle        48  7:00:00:00   Sun Sep 22 16:46:22\n\n135 eligible jobs\n\nblocked jobs-----------------------\nJOBID     USERNAME    STATE    PROCS     WCLIMIT            QUEUETIME\n441237    vsc20167    Idle         8  3:00:00:00   Thu Sep 19 15:53:10\n442536    vsc20167    UserHold    40  3:00:00:00   Sun Sep 22 00:14:22\n\n252 blocked jobs\n\nTotal jobs: 540\n

There are 3 categories: the active, eligible, and blocked jobs.

      Active jobs

      are jobs that are running or starting and that consume computer resources. The amount of time remaining (w.r.t.\u00a0walltime, sorted to earliest completion time) and the start time are displayed. This will give you an idea about the foreseen completion time. These jobs could be in a number of states:

      Started

      attempting to start, performing pre-start tasks

      Running

      currently executing the user application

      Suspended

      has been suspended by scheduler or admin (still in place on the allocated resources, not executing)

      Cancelling

      has been cancelled, in process of cleaning up

      Eligible jobs

      are jobs that are waiting in the queues and are considered eligible for both scheduling and backfilling. They are all in the idle job state and do not violate any fairness policies or do not have any job holds in place. The requested walltime is displayed, and the list is ordered by job priority.

      Blocked jobs

      are jobs that are ineligible to be run or queued. These jobs could be in a number of states for the following reasons:

      Idle

      when the job violates a fairness policy

      Userhold

      or systemhold when it is user or administrative hold

      Batchhold

      when the requested resources are not available or the resource manager has repeatedly failed to start the job

      Deferred

a temporary hold, applied when the job has been unable to start after a specified number of attempts

      Notqueued

when the scheduling daemon is unavailable

      "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

      Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

      It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

      "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

      The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

      qsub -l walltime=2:30:00 ...\n

      For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

      If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
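
A minimal sketch of this pattern using the standard timeout command (my_simulation and the results file are placeholders; here roughly 10 minutes of the 2:30:00 walltime are reserved for copying results back):

#!/bin/bash -l\n#PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# stop the main program after at most 2h20 (8400 seconds), leaving time to copy results\ntimeout 8400 ./my_simulation\ncp results.dat $VSC_DATA/\n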

      qsub -l mem=4gb ...\n

      The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

      qsub -l nodes=5:ppn=2 ...\n

      The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

      qsub -l nodes=1:westmere\n

      The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

      These options can either be specified on the command line, e.g.

qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

      or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

      #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

      Note that the resources requested on the command line will override those specified in the PBS file.
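
For example, submitting the modified fibo.pbs as follows would override the 2 GB memory request in the script with 4 GB (a sketch):

qsub -l mem=4gb fibo.pbs\n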

      "}, {"location": "running_batch_jobs/#node-specific-properties", "title": "Node-specific properties", "text": "

      The following table contains some node-specific properties that can be used to make sure the job will run on nodes with a specific CPU or interconnect. Note that these properties may vary over the different VSC sites.

| Property | Description |
|---|---|
| ivybridge | only use Intel processors from the Ivy Bridge family (26xx-v2; hopper only) |
| broadwell | only use Intel processors from the Broadwell family (26xx-v4; leibniz only) |
| mem128 | only use nodes with 128 GB of RAM (leibniz) |
| mem256 | only use nodes with 256 GB of RAM (hopper and leibniz) |
| tesla, gpu | only use nodes with the NVIDIA P100 GPU (leibniz) |

      Since both hopper and leibniz are homogeneous with respect to processor architecture, the CPU architecture properties are not really needed and only defined for compatibility with other VSC clusters.

| Property | Description |
|---|---|
| shanghai | only use AMD Shanghai processors (AMD 2378) |
| magnycours | only use AMD Magnycours processors (AMD 6134) |
| interlagos | only use AMD Interlagos processors (AMD 6272) |
| barcelona | only use AMD Shanghai and Magnycours processors |
| amd | only use AMD processors |
| ivybridge | only use Intel Ivy Bridge processors (E5-2680-v2) |
| intel | only use Intel processors |
| gpgpu | only use nodes with General Purpose GPUs (GPGPUs) |
| k20x | only use nodes with NVIDIA Tesla K20x GPGPUs |
| xeonphi | only use nodes with Xeon Phi co-processors |
| phi5110p | only use nodes with Xeon Phi 5110P co-processors |

      To get a list of all properties defined for all nodes, enter

      ::: prompt :::

      This list will also contain properties referring to, e.g., network components, rack number, etc.

      "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

      At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

      When you navigate to that directory and list its contents, you should see them:

      $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc20167  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc20167   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc20167   52 Sep 11 11:03 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 1307 Sep 11 11:03 fibo.pbs.o433253.leibniz\n

In our case, our job has created both an output file (fibo.pbs.o433253.leibniz) and an error file (fibo.pbs.e433253.leibniz), containing the info written to stdout and stderr respectively.

      Inspect the generated output and error files:

      $ cat fibo.pbs.o433253.leibniz\n...\n$ cat fibo.pbs.e433253.leibniz\n...\n
      "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#upon-job-failure", "title": "Upon job failure", "text": "

      Whenever a job fails, an e-mail will be sent to the e-mail address that's connected to your VSC account. This is the e-mail address that is linked to the university account, which was used during the registration process.

You can force a job to fail by specifying an unrealistic wall-time for the previous example. Let's give the \"fibo.pbs\" job just one second to complete:

      ::: prompt :::

Now, let's hope that the system did not manage to run the job within one second, and you will get an e-mail informing you about this error.

PBS Job Id: \nJob Name:   fibo.pbs\nExec host: \nAborted by PBS Server\nJob exceeded some resource limit (walltime, mem, etc.). Job was aborted.\nSee Administrator for help\n

      "}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

      You can instruct the UAntwerpen-HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

      #PBS -m b \n#PBS -m e \n#PBS -m a\n

      or

      #PBS -m abe\n

      These options can also be specified on the command line. Try it and see what happens:

      qsub -m abe fibo.pbs\n

      The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

      qsub -m b -M john.smith@example.com fibo.pbs\n

      will send an e-mail to john.smith@example.com when the job begins.

      "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

If you submit two jobs expecting that they will run one after another (for example because the first generates a file the second needs), there might be a problem as they might both run at the same time.

      So the following example might go wrong:

      $ qsub job1.sh\n$ qsub job2.sh\n

      You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

      $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

      afterok means \"After OK\", or in other words, after the first job successfully completed.

      It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
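
For example, a small pipeline with a cleanup step that runs regardless of how the earlier jobs ended could look like this (a sketch; the script names are placeholders, and the colon-separated list of job IDs follows the standard Torque/PBS depend syntax):

$ FIRST_ID=$(qsub job1.sh)\n$ SECOND_ID=$(qsub -W depend=afterok:$FIRST_ID job2.sh)\n$ qsub -W depend=afterany:$FIRST_ID:$SECOND_ID cleanup.sh\n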

      1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

      "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

      Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line.

      Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the UAntwerpen-HPC. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

      The syntax for qsub for submitting an interactive PBS job is:

      $ qsub -I <... pbs directives ...>\n
      "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

      Tip

      Find the code in \"~/examples/Running_interactive_jobs\"

      First of all, in order to know on which computer you're working, enter:

      $ hostname -f\nln2.leibniz.uantwerpen.vsc\n

      This means that you're now working on the login node ln2.leibniz.uantwerpen.vsc of the cluster.

      The most basic way to start an interactive job is the following:

      $ qsub -I\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n

      There are two things of note here.

      1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

      In order to know on which compute-node you're working, enter again:

      $ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n

      Note that we are now working on the compute-node called \"r1c02cn3.leibniz.antwerpen.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

This computer name looks strange, but there is some logic to it: it tells the system administrators where to find the machine in the computer room.

      The computer \"r1c02cn3\" stands for:

      1. \"r5\" is rack #5.

      2. \"c3\" is enclosure/chassis #3.

      3. \"cn08\" is compute node #08.

      With this naming convention, the system administrator can easily find the physical computers when they need to execute some maintenance activities.

      Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

      $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

      You can exit the interactive session with:

      $ exit\n

      Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

      You can work for 3 hours by:

      qsub -I -l walltime=03:00:00\n

      If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.

      "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

      To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

      The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

      "}, {"location": "running_interactive_jobs/#install-xming", "title": "Install Xming", "text": "

      The first task is to install the Xming software.

      1. Download the Xming installer from the following address: http://www.straightrunning.com/XmingNotes/. Either download Xming from the Public Domain Releases (free) or from the Website Releases (after a donation) on the website.

      2. Run the Xming setup program on your Windows desktop.

      3. Keep the proposed default folders for the Xming installation.

      4. When selecting the components that need to be installed, make sure to select \"XLaunch wizard\" and \"Normal PuTTY Link SSH client\".

5. We suggest creating a Desktop icon for Xming and XLaunch.

      6. And Install.

      And now we can run Xming:

      1. Select XLaunch from the Start Menu or by double-clicking the Desktop icon.

      2. Select Multiple Windows. This will open each application in a separate window.

      3. Select Start no client to make XLaunch wait for other programs (such as PuTTY).

      4. Select Clipboard to share the clipboard.

      5. Finally Save configuration into a file. You can keep the default filename and save it in your Xming installation directory.

      6. Now Xming is running in the background ... and you can launch a graphical application in your PuTTY terminal.

      7. Open a PuTTY terminal and connect to the HPC.

      8. In order to test the X-server, run \"xclock\". \"xclock\" is the standard GUI clock for the X Window System.

      xclock\n

      You should see the XWindow clock application appearing on your Windows machine. The \"xclock\" application runs on the login-node of the UAntwerpen-HPC, but is displayed on your Windows machine.

      You can close your clock and connect further to a compute node with again your X-forwarding enabled:

      $ qsub -I -X\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n$ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n$ xclock\n

      and you should see your clock again.

      "}, {"location": "running_interactive_jobs/#ssh-tunnel", "title": "SSH Tunnel", "text": "

      In order to work in client/server mode, it is often required to establish an SSH tunnel between your Windows desktop machine and the compute node your job is running on. PuTTY must have been installed on your computer, and you should be able to connect via SSH to the HPC cluster's login node.

      Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunnelling.

      There are several cases where this is useful:

      1. Running graphical applications on the cluster: The graphical program cannot directly communicate with the X Window server on your local system. In this case, the tunnelling is easy to set up as PuTTY will do it for you if you select the right options on the X11 settings page as explained on the page about text-mode access using PuTTY.

      2. Running a server application on the cluster that a client on the desktop connects to. One example of this scenario is ParaView in remote visualisation mode, with the interactive client on the desktop and the data processing and image rendering on the cluster. This scenario is explained on this page.

      3. Running clients on the cluster and a server on your desktop. In this case, the source port is a port on the cluster and the destination port is on the desktop.

      Procedure: A tunnel from a local client to a specific computer node on the cluster

      1. Log in on the login node via PuTTY.

      2. Start the server job, note the compute node's name the job is running on (e.g., r1c02cn3.leibniz.antwerpen.vsc), as well as the port the server is listening on (e.g., \"54321\").

      3. Set up the tunnel:

        1. Close your current PuTTY session.

        2. In the \"Category\" pane, expand Connection>SSh, and select as show below:

        3. In the Source port field, enter the local port to use (e.g., 5555).

4. In the Destination field, enter <hostname>:<port> (e.g., r1c02cn3.leibniz.antwerpen.vsc:54321 as in the example above; these are the details you noted in the second step).

        5. Click the Add button.

6. Click the Open button.

        7. The tunnel is now ready to use.
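
Since you are reading the Linux version of this documentation, note that the same tunnel can also be set up with a plain ssh command instead of PuTTY; a minimal sketch (replace the last argument with the login node address you normally connect to):

ssh -L 5555:r1c02cn3.leibniz.antwerpen.vsc:54321 vsc20167@<login node>\n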

          "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

          We have developed a little interactive program that shows the communication in 2 directions. It will send information to your local screen, but also asks you to click a button.

          Now run the message program:

          cd ~/examples/Running_interactive_jobs\n./message.py\n

          You should see the following message appearing.

          Click any button and see what happens.

          -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
          "}, {"location": "running_interactive_jobs/#run-your-interactive-application", "title": "Run your interactive application", "text": "

          In this last example, we will show you that you can just work on this compute node, just as if you were working locally on your desktop. We will run the Fibonacci example of the previous chapter again, but now in full interactive mode in MATLAB.

          ::: prompt :::

          And start the MATLAB interactive environment:

          ::: prompt :::

          And start the fibo2.m program in the command window:

          ::: prompt fx >> :::

          ::: center :::

          And see the displayed calculations, ...

          ::: center :::

          as well as the nice \"plot\" appearing:

          ::: center :::

          You can work in this MATLAB GUI, and finally terminate the application by entering \"\" in the command window again.

          ::: prompt fx >> :::

          "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files, where your standard output and error messages will go, and where you can collect your results.

          "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

          First go to the directory:

          cd ~/examples/Running_jobs_with_input_output_data\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          ```

          cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/ ```

          List and check the contents with:

          $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc20167   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc20167   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file3.py\n

          Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

          file1.py
          #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

The code of the Python script is self-explanatory:

1. In step 1, we write something to the file Hello.txt in the current directory.

          2. In step 2, we write some text to stdout.

          3. In step 3, we write to stderr.

          Check the contents of the first job script:

          file1a.pbs
#!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

          You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

          Submit it:

          qsub file1a.pbs\n

          After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

          $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc20167   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc20167  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc20167  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc20167   91 Sep 13 13:13 file1a.pbs.e433253.leibniz\n-rw------- 1 vsc20167  105 Sep 13 13:13 file1a.pbs.o433253.leibniz\n-rw-rw-r-- 1 vsc20167  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc20167  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file3.py*\n

          Some observations:

          1. The file Hello.txt was created in the current directory.

          2. The file file1a.pbs.o433253.leibniz contains all the text that was written to the standard output stream (\"stdout\").

          3. The file file1a.pbs.e433253.leibniz contains all the text that was written to the standard error stream (\"stderr\").

          Inspect their contents ...\u00a0and remove the files

          $ cat Hello.txt\n$ cat file1a.pbs.o433253.leibniz\n$ cat file1a.pbs.e433253.leibniz\n$ rm Hello.txt file1a.pbs.o433253.leibniz file1a.pbs.e433253.leibniz\n

          Tip

          Type cat H and press the Tab button (looks like Tab), and it will expand into cat Hello.txt.

          "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

          Check the contents of the job script and execute it.

          file1b.pbs
#!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

          Inspect the contents again ...\u00a0and remove the generated files:

          $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e433253.leibniz\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o433253.leibniz\n$ rm Hello.txt my_serial_job.*\n

          Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrote the JOBNAME variable, and resulted in a different name for the stdout and stderr files. This name is also shown in the second column of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.
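
The same option can also be passed on the qsub command line instead of in the script (a sketch):

qsub -N my_serial_job file1a.pbs\n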

          "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

          You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

          file1c.pbs
#!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
          "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

          The UAntwerpen-HPC cluster offers their users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

          Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

          The following locations are available:

| Variable | Description |
|---|---|
| **Long-term storage** (slow filesystem, intended for smaller files) | |
| $VSC_HOME | For your configuration files and other small files, see the section on your home directory. The default directory is user/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites. |
| $VSC_DATA | A bigger \"workspace\", for datasets, results, logfiles, etc., see the section on your data directory. The default directory is data/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites. |
| **Fast temporary storage** | |
| $VSC_SCRATCH_NODE | For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content. |
| $VSC_SCRATCH | For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Antwerpen/xxx/vsc20167. This directory is cluster- or site-specific: on different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content. |
| $VSC_SCRATCH_SITE | Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space. |
| $VSC_SCRATCH_GLOBAL | Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space. |

          Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
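
As an illustration of using these variables rather than hard-coded paths, a job script could stage data through the scratch space like this (a minimal sketch; the my_project paths and file names are placeholders):

#!/bin/bash -l\n#PBS -l walltime=1:00:00\n# create a job-specific working directory on the shared scratch space\nWORKDIR=$VSC_SCRATCH/$PBS_JOBID\nmkdir -p $WORKDIR\ncd $WORKDIR\n# copy input from the data directory, run, and copy results back\ncp $VSC_DATA/my_project/input.dat .\n$VSC_DATA/my_project/my_program input.dat > results.dat\ncp results.dat $VSC_DATA/my_project/\n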

          We elaborate more on the specific function of these locations in the following sections.

          "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

          Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

          The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

          The operating system also creates a few files and folders here to manage your account. Examples are:

          • .ssh/: this directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!
          • .bash_profile: when you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt.
          • .bashrc: this script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts.
          • .bash_history: this file contains the commands you typed at your shell prompt, in case you need them again.

          Furthermore, we have initially created some files/directories there (tutorial, docs, examples, examples.pbs) that accompany this manual and allow you to easily execute the provided examples.

          "}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

          In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

          The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

          "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

          To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

          You should remove any data from these systems once your processing has finished. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

          Each type of scratch has its own use:

          Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

          Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes, and is therefore often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

          At the time of writing, the cluster scratch space is\nshared between both clusters at the University of Antwerp. This may change again in the future when storage gets updated.\n

          Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

          Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

          Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds both for the total size of all files and for the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

          The amount of data (called \"Block Limits\") that is currently in use by the user (\"KB\"), the soft limits (\"quota\") and the hard limits (\"limit\") for all 3 file systems are always displayed when a user connects to the UAntwerpen-HPC.

          With regards to the file limits, the number of files in use (\"files\"), its soft limit (\"quota\") and its hard limit (\"limit\") for the 3 file-systems are also displayed.

          ----------------------------------------------------------\nYour quota is:\n\nBlock Limits\n   Filesystem        KB      quota      limit    grace\n   home          177920    3145728    3461120     none\n   data        17707776   26214400   28835840     none\n   scratch       371520   26214400   28835840     none\n\nFile Limits\n   Filesystem     files      quota      limit    grace\n   home             671      20000      25000     none\n   data          103079     100000     150000  expired\n   scratch         2214     100000     150000     none\n

          Make sure to regularly check these numbers at log-in!

          The rules are:

          1. You will only receive a warning when you have reached the soft limit of either quota.

          2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

          3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

          We do realise that quota are often perceived as a nuisance by users, especially if you're running low on it. However, it is an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. And they help to guarantee a fair use of all available resources for all users. Quota also help to ensure that each folder is used for its intended purpose.

          "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

          Tip

          Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

          In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

          1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

          2. repeat this action 30,000 times;

          3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the UAntwerpen-HPC.

          $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

          Tip

          Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

          In this exercise, you will

          1. Generate the file \"primes_1.txt\" again as in the previous exercise;

          2. open the file;

          3. read it line by line;

          4. calculate the average of primes in the line;

          5. count the number of primes found per line;

          6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job:

          $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

          The available disk space on the UAntwerpen-HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website (https://vscdocumentation.readthedocs.io/en/latest/hardware.html). As explained in the section on predefined quota, this implies that there are also limits to:

          • the amount of disk space; and

          • the number of files

          that can be made available to each individual UAntwerpen-HPC user.

          The quota of disk space and number of files for each UAntwerpen-HPC user is:

          • HOME: 3 GB, 20000 files
          • DATA: 25 GB, 100000 files
          • SCRATCH: 25 GB, 100000 files

          Tip

          The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.
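
          For example, the following command lists files larger than 100 MB under your scratch directory, so you can decide what to remove (use such scans sparingly, as they put load on the shared file system):

          $ find $VSC_SCRATCH -type f -size +100M -exec ls -lh {} +\n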


          "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

          The \"show_quota\" command has been developed to show you the status of your quota in a readable format:

          $ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

          or on the UAntwerp clusters

          $ module load scripts\n$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

          With this command, you can follow up the consumption of your total disk quota easily, as it is expressed in percentages. Depending on which cluster you are running the script, it may not be able to show the quota on all your folders. E.g., when running on the tier-1 system Muk, the script will not be able to show the quota on $VSC_HOME or $VSC_DATA if your account is a KU\u00a0Leuven, UAntwerpen or VUB account.

          Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

          $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

          This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

          If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

          $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

          If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

          $ du -s\n5632 .\n$ du -s -h\n5.5M .\n

          If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

          $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

          Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

          $ du -h --max-depth 1 $VSC_HOME\n22M /user/antwerpen/201/vsc20167/dataset01\n36M /user/antwerpen/201/vsc20167/dataset02\n22M /user/antwerpen/201/vsc20167/dataset03\n3.5M /user/antwerpen/201/vsc20167/primes.txt\n24M /user/antwerpen/201/vsc20167/.cache\n

          We also want to mention the tree command, as it provides an easy way to see which files consume your available quota. Tree is a recursive directory-listing program that produces a depth-indented listing of files.

          Try:

          $ tree -s -d\n

          However, we urge you to only use the du and tree commands when you really need them as they can put a heavy strain on the file system and thus slow down file operations on the cluster for all other users.

          "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

          Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.
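
          To see which groups your VSC account currently belongs to, you can use the standard groups command (the group names below are made up):

          $ groups\nvsc20167 gexample1 gexample2\n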

          Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

          To change the group of a directory and its underlying directories and files, you can use:

          chgrp -R groupname directory\n
          "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
          1. Get the group name you want to belong to.

          2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

          3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

          "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
          1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

          2. Fill out the group name. This cannot contain spaces.

          3. Put a description of your group in the \"Info\" field.

          4. You will now be a member and moderator of your newly created group.

          "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

          Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

          "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

          You can get details about the current state of groups on the HPC infrastructure with the following command (where example is the name of the group we want to inspect):

          $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

          We can see that the group ID number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

          "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

          A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

          "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

          This section will explain how to create, activate, use and deactivate Python virtual environments.

          "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

          A Python virtual environment can be created with the following command:

          python -m venv myenv      # Create a new virtual environment named 'myenv'\n

          This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.
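
          On the cluster, this directory typically looks something like this (the exact contents can differ slightly depending on the Python version used):

          $ ls myenv\nbin  include  lib  lib64  pyvenv.cfg\n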

          Warning

          When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

          "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

          To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

          source myenv/bin/activate                    # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

          After activating the virtual environment, you can install additional Python packages with pip install:

          pip install example_package1\npip install example_package2\n

          These packages will be scoped to the virtual environment and will not affect the system-wide Python installation, and are only available when the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

          It is now possible to run Python scripts that use the installed packages in the virtual environment.
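
          If you want to be able to recreate the environment later (for example on another cluster), you can record the installed packages in a requirements file; the file name used here is just a convention:

          pip freeze > requirements.txt     # record the exact package versions\npip install -r requirements.txt  # reinstall them in a fresh environment\n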

          Tip

          When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

          Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

          To check if a package is available as a module, use:

          module av package_name\n

          Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

          module show module_name\n

          to check which extensions are included in a module (if any).

          "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

          Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

          example.py
          import example_package1\nimport example_package2\n...\n
          python example.py\n
          "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

          When you are done using the virtual environment, you can deactivate it. To do that, run:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

          You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

          pytorch_poutyne.py
          import torch\nimport poutyne\n\n...\n

          We load a PyTorch package as a module and install Poutyne in a virtual environment:

          module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

          While the virtual environment is activated, we can run the script without any issues:

          python pytorch_poutyne.py\n

          Deactivate the virtual environment when you are done:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

          To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

          module swap cluster/donphan\nqsub -I\n

          After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

          Naming a virtual environment

          When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

          python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
          "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

          This section will combine the concepts discussed in the previous sections to:

          1. Create a virtual environment on a specific cluster.
          2. Combine packages installed in the virtual environment with modules.
          3. Submit a job script that uses the virtual environment.

          The example script that we will run is the following:

          pytorch_poutyne.py
          import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

          First, we create a virtual environment on the donphan cluster:

          module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

          Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

          jobscript.pbs
          #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

          Next, we submit the job script:

          qsub jobscript.pbs\n

          Two files will be created in the directory where the job was submitted: python_job_example.o433253.leibniz and python_job_example.e433253.leibniz, where 433253.leibniz is the id of your job. The .o file contains the output of the job.
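
          You can inspect the output file to verify that the job ran inside the virtual environment; the exact version numbers depend on the loaded module and on the Poutyne release that pip installed:

          $ cat python_job_example.o433253.leibniz\nThe version of PyTorch is: 2.1.2\nThe version of Poutyne is: <poutyne version>\n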

          "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

          Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

          For example, if we create a virtual environment on the skitty cluster,

          module swap cluster/skitty\nqsub -I\n$ python -m venv myenv\n

          return to the login node by pressing CTRL+D and try to use the virtual environment:

          $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

          we are presented with the illegal instruction error. More info on this here

          "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

          When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

          python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

          Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.
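
          A sketch of the safe order of operations, using the donphan cluster from the earlier example:

          module purge                 # start from a clean environment, no modules loaded\nmodule swap cluster/donphan  # select the target cluster\nqsub -I                      # start an interactive job on that cluster\n# ... then create and use the virtual environment as described above\n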

          "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

          There are two main reasons why this error could occur.

          1. You have not loaded the Python module that was used to create the virtual environment.
          2. You loaded or unloaded modules while the virtual environment was activated.
          "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

          If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

          The following commands illustrate this issue:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

          module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

          You must not load or unload modules while a virtual environment is active. Loading and unloading modules modifies the $PATH variable in the current shell. When you activate a virtual environment, it stores the $PATH variable of the shell at that moment. If you modify $PATH by loading or unloading modules while the virtual environment is active, and then deactivate the virtual environment, $PATH will be reset to the value that was stored when the environment was activated. Trying to use those modules will then lead to errors:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          The solution is to only modify modules when not in a virtual environment.

          "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

          Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

          This documentation only covers aspects of using Singularity on the infrastructure.

          "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to prevent the use of Singularity from impacting other users on the system.

          The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know by contacting the UAntwerpen-HPC.

          "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

          Creating new Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the infrastructure. However, if you use the --fakeroot option, you can make new Singularity images or convert Docker images.

          When you create Singularity images or convert Docker images, the following restrictions apply:

          • Due to the nature of the --fakeroot option, we recommend writing your singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination.
          "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

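          A sketch of the copy command; the image file name below is a placeholder, use the actual name of the tutorial image:

          cp /apps/gent/tutorials/Singularity/example_image.sif $VSC_SCRATCH/\n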

          Create a job script like:
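
          For example, a minimal sketch (the image file name is a placeholder; myscript.sh is created in the next step):

          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\n# run our own script inside the container\nsingularity exec $VSC_SCRATCH/example_image.sif bash ./myscript.sh\n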

          Create an example myscript.sh:

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n

          "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself

          Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

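          A sketch of both options; the image file names are placeholders:

          # copy the provided example image to scratch\ncp /apps/gent/tutorials/example_tensorflow.sif $VSC_SCRATCH/\n# or convert the Docker image into a local Singularity image yourself\ncd $VSC_SCRATCH\nsingularity pull tensorflow.sif docker://tensorflow/tensorflow\n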

          You can download linear_regression.py from the official Tensorflow repository.
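
          Once both the image and linear_regression.py are in $VSC_SCRATCH, you could run the script inside the container like this (sketch, using the placeholder image name from above):

          cd $VSC_SCRATCH\nsingularity exec tensorflow.sif python linear_regression.py\n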

          "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before singularity execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH


          For example to compile an MPI example:

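          A sketch of what this could look like, assuming a foss toolchain module provides the MPI compiler wrappers and mpi_example.c is your own source file:

          module load foss/2023a\nmpicc mpi_example.c -o mpi_example\n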

          Example MPI job script:
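
          A minimal sketch of such a job script, with placeholder image and program names, following the usual pattern of calling mpirun on the host and singularity exec inside it:

          #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=00:30:00\n\nmodule load foss/2023a\ncd $PBS_O_WORKDIR\nmpirun singularity exec $VSC_SCRATCH/example_image.sif ./mpi_example\n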

          "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

          The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

          As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

          In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

          In order to prepare things, make a teaching request by contacting the UAntwerpen-HPC with the following information (explained further below):

          • Title and nickname
          • Start and end date for your course or training
          • VSC-ids of all teachers/trainers
          • Participants based on UGent Course Code and/or list of VSC-ids
          • Optional information
            • Additional storage requirements
              • Shared folder
              • Groups folder for collaboration
              • Quota
            • Reservation for resource requirements beyond the interactive cluster
            • Ticket number for specific software needed for your course/training
            • Details for a custom Interactive Application in the webportal

          In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

          Please make these requests well in advance, several weeks before the start of your course/workshop.

          "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

          The title of the course or training can be used in e.g. reporting.

          The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

          When choosing the nickname, try to make it unique, but this is neither enforced nor checked.

          "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

          The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

          The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

          • Course group and subgroups will be deactivated
          • Residual data in the course directories will be archived or deleted
          • Custom Interactive Applications will be disabled
          "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

          A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also member of this group).

          This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

          Provide us with a list of all the VSC-ids for the teachers or trainers to identify the moderators.

          "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

          How the list of students or participants is managed depends on whether this is a UGent course or a training/workshop.

          "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

          Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

          The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

          Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

          A course group will be automatically created for your course, with all VSC accounts of registered students as members. Typical format gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

          "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

          (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as member. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

          "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

          For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

          This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

          Every course directory will always contain the folders:

          • input
            • ideally suited to distribute input data such as common datasets
            • moderators have read/write access
            • group members (students) only have read access
          • members
            • this directory contains a personal folder members/vsc<01234> for every student in your course
            • only this specific VSC-id will have read/write access to this folder
            • moderators have read access to this folder
          "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

          Optionally, we can also create these folders:

          • shared
            • this is a folder for sharing files between any and all group members
            • all group members and moderators have read/write access
            • beware that group members will be able to alter/delete each others files in this folder if they set permissions in specific/non-default ways
          • groups
            • a number of groups/group_<01> folders are created under the groups folder
            • these folders are suitable if you want to let your students collaborate closely in smaller groups
            • each of these group_<01> folders are owned by a dedicated group
            • teachers are automatically made moderators of these dedicated groups
            • moderators can populate these groups with VSC-ids of group members in the VSC accountpage or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
            • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

          If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

          • shared: yes
          • subgroups: <number of (sub)groups>
          "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

          There are 4 quota settings that you can choose in your teaching request, in case the defaults are not sufficient:

          • overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
          • member quota (default: 5 GB volume and 10k files) applies per student/participant

          The course data usage is not counted towards any other quota (like the VO quota). It depends solely on these settings.

          "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

          The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date, it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

          "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

          We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

          Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

          Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

          Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

          "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

          In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

          We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

          Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

          "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

          HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

          A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

          If you would like this for your course, provide more details in your teaching request, including:

          • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

          • which cluster you want to use

          • how many nodes/cores/GPUs are needed

          • which software modules you are loading

          • custom code you are launching (e.g. autostart a GUI)

          • required environment variables that you are setting

          • ...

          We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

          A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

          "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

          Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore, and since 2021 the UAntwerpen-HPC no longer uses it in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers did not have to learn other commands to submit and manage jobs.

          "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

          Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

          "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

          Jobcli is a Python library that was developed by the UAntwerpen-HPC to make it possible to combine a Torque frontend with a Slurm backend. In addition to that, it adds some additional options for Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

          "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

          Adding --help to a Torque command when using it on the UAntwerpen-HPC will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

          For example:

          $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

          "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

          Adding --dryrun to a Torque command when using it on the UAntwerpen-HPC will show which Slurm commands jobcli generates from that Torque command. Using --dryrun will not actually execute the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

          Similarly to --dryrun, adding --debug to a Torque command when using it on the UAntwerpen-HPC will show which Slurm commands jobcli generates from that Torque command. However, in contrast to --dryrun, using --debug will actually run the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

          The following examples illustrate the working of the --dryrun and --debug options with an example jobscript.

          example.sh:

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

          Running the following command:

          $ qsub --dryrun example.sh -N example\n

          will generate this output:

          Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc20167/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#!/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque options to Slurm options. For example, the job name is the one we specified with the -N option in the command.

          With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related structures, like $PBS_JOBID, they are retained. Slurm is configured on the UAntwerpen-HPC such that common PBS_* environment variables are defined in the job environment, next to the Slurm equivalents.

          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

          Similarly to the --dryrun example, we start by running the following command:

          $ qsub --debug example.sh -N example\n

          which generates this output:

          DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
          The output once again consists of the translated Slurm commands, with some additional debug information and the job ID of the job that was submitted.

          "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

          Below is a list of the most common and useful directives.

          Option System type Description -k All Send \"stdout\" and/or \"stderr\" to your home directory when the job runs #PBS -k o or #PBS -k e or #PBS -koe -l All Precedes a resource request, e.g., processors, wallclock -M All Send e-mail messages to an alternative e-mail address #PBS -M me@mymail.be -m All Send an e-mail when a job begins execution and/or ends or aborts #PBS -m b or #PBS -m be or #PBS -m ba mem Shared Memory Specifies the amount of memory you need for a job. #PBS -l mem=90gb mpiprocs Clusters Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. #PBS -l mpiprocs=4 -N All Give your job a unique name #PBS -N galaxies1234 -ncpus Shared Memory The number of processors to use for a shared memory job. #PBS -l ncpus=4 -r All Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. #PBS -r n or #PBS -r y select Clusters Number of compute nodes to use. Usually combined with the mpiprocs directive #PBS -l select=2 -V All Make sure that the environment in which the job runs is the same as the environment in which it was submitted #PBS -V Walltime All The maximum time a job can run before being stopped. If not used, a default of a few minutes is used. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

          TORQUE-related environment variables in batch job scripts.

          # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

          IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

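          For example, in the following sketch all #PBS directives are grouped at the top, before the first executable command (the resource values and script name are only illustrative):

          #!/bin/bash\n#PBS -N my_analysis\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=01:00:00\n#PBS -m be\n\n# first executable line: any #PBS directive below this point would be ignored\ncd $PBS_O_WORKDIR\n./run_analysis.sh    # hypothetical program\n
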
          When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

          Variable Description PBS_ENVIRONMENT set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. PBS_JOBID the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat. PBS_JOBNAME the job name supplied by the user PBS_NODEFILE the name of the file that contains the list of nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc. PBS_QUEUE the name of the queue from which the job is executed PBS_O_HOME value of the HOME variable in the environment in which qsub was executed PBS_O_LANG value of the LANG variable in the environment in which qsub was executed PBS_O_LOGNAME value of the LOGNAME variable in the environment in which qsub was executed PBS_O_PATH value of the PATH variable in the environment in which qsub was executed PBS_O_MAIL value of the MAIL variable in the environment in which qsub was executed PBS_O_SHELL value of the SHELL variable in the environment in which qsub was executed PBS_O_TZ value of the TZ variable in the environment in which qsub was executed PBS_O_HOST the name of the host upon which the qsub command is running PBS_O_QUEUE the name of the original queue to which the job was submitted PBS_O_WORKDIR the absolute path of the current working directory of the qsub command. This is the most useful one. Use it in every job script: the first thing you do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory. PBS_VERSION Version Number of TORQUE, e.g., TORQUE-2.5.1 PBS_MOMPORT active port for mom daemon PBS_TASKNUM number of tasks requested PBS_JOBCOOKIE job cookie PBS_SERVER Server Running TORQUE"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

          Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this can be found in the subsections below.

          "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

          When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

          To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that if your software only uses threads to exploit multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

          Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
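
          A rough sketch of such a scaling test for a multi-threaded (OpenMP) program is shown below; it assumes your program honours the OMP_NUM_THREADS environment variable, and my_program is just a placeholder name:

          for n in 1 2 4 8 16; do\n    export OMP_NUM_THREADS=$n\n    echo \"Using $n cores:\"\n    time ./my_program    # my_program is a placeholder for your own software\ndone\n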

          Other reasons why using more cores may not lead to a (significant) speedup include:

          • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the number of cores will result in a 2x speedup. This is because time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program into too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

          • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program can not be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload (a compact formula for this limit is given right after this list).

          • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, one thread/process has to wait until the other is finished using that resource. When threads all make heavy use of the same resource, the program will definitely run slower than if they didn't need to wait for each other.

          • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is because Python threads are implemented in a way that prevents multiple threads from running at the same time, due to the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can, even though processes are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

          • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

          • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).

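          As a formula (a compact restatement of Amdahl's Law): if a fraction s of the total work is serial and N cores are used, the achievable speedup is at most 1 / (s + (1 - s) / N), which approaches 1/s as N grows. For the example above, s = 1/20, so the speedup can never exceed 20, no matter how many cores are used.
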
          More info on running multi-core workloads on the UAntwerpen-HPC can be found here.

          "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

          When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

          Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

          Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there are libraries that do this for you.

          Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

          An example of how you can make beneficial use of multiple nodes can be found here.

          You can also use MPI in Python; some useful packages that are also available on the HPC are:

          • mpi4py
          • Boost.MPI

          We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software, we strongly advise using our mympirun tool.
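
          As an illustration, a multi-node MPI job script could look like the sketch below; the module names, core count and program name are only assumptions and will differ per cluster and application:

          #!/bin/bash\n#PBS -l nodes=2:ppn=28\n#PBS -l walltime=4:00:00\n\n# illustrative module names: mympirun plus an MPI-capable build of your application\nmodule load vsc-mympirun\nmodule load MyMPIApplication/1.0-foss-2023a\n\ncd $PBS_O_WORKDIR\n\n# mympirun detects the allocated nodes and cores and starts the MPI ranks for you\nmympirun ./my_mpi_program\n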

          "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

          If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.
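
          For example, a quick (and very generic) way to scan a tool's help output for such options is sketched below; my_program is just a placeholder:

          $ ./my_program --help 2>&1 | grep -iE 'thread|core|cpu|mpi|parallel'\n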

          If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

          "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

          If your job output contains an error message similar to this:

          =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

          This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.
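
          For example, to request 12 hours of walltime, add a directive like this to your job script (see the section mentioned above for details):

          #PBS -l walltime=12:00:00\n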

          "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

          Sometimes a job hangs at some point, or it stops writing to disk. These errors are usually related to quota usage: you may have reached your quota limit at some storage endpoint. You should move (or remove) data to a different storage endpoint (or request more quota) to be able to write to disk again, and then resubmit the jobs.
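
          To find out which directories take up the most space at a given storage endpoint, standard tools can help; the sketch below uses $VSC_DATA as an example location, so adjust the path to the endpoint you want to inspect:

          $ du -sh $VSC_DATA/* 2>/dev/null | sort -h | tail -n 10\n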

          "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

          If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

          If you have errors that look like:

          vsc20167@login.hpc.uantwerpen.be: Permission denied\n

          or you are experiencing problems with connecting, here is a list of things to do that should help:

          1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

          2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

          3. Please double/triple check your VSC login ID. It should look something like vsc20167: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

          4. You previously connected to the UAntwerpen-HPC from one machine, but are now using a different machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait 15-20 minutes until the SSH public key(s) you added become active.

          5. Make sure you are using the private key (not the public key) when trying to connect: If you followed the manual, the private key filename should end in .ppk (not in .pub).

          6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

          7. Please do not use someone else's private keys. You must never share your private key; they're called private for a good reason.

          If you are using PuTTY and get this error message:

          server unexpectedly closed network connection\n

          it is possible that the PuTTY version you are using is too old and doesn't support some required (security-related) features.

          Make sure you are using the latest PuTTY version if you are encountering problems connecting (see Get PuTTY). If that doesn't help, please contact hpc@uantwerpen.be.

          If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@uantwerpen.be and include the following information:

          Please create a log file of your SSH session by following the steps in this article and include it in the email.

          "}, {"location": "troubleshooting/#change-putty-private-key-for-a-saved-configuration", "title": "Change PuTTY private key for a saved configuration", "text": "
          1. Open PuTTY

          2. Single click on the saved configuration

          3. Then click Load button

          4. Expand SSH category (on the left panel) clicking on the \"+\" next to SSH

          5. Click on Auth under the SSH category

          6. On the right panel, click Browse button

          7. Then search your private key on your computer (with the extension \".ppk\")

          8. Go back to the top of category, and click Session

          9. On the right panel, click on Save button

          "}, {"location": "troubleshooting/#check-whether-your-private-key-in-putty-matches-the-public-key-on-the-accountpage", "title": "Check whether your private key in PuTTY matches the public key on the accountpage", "text": "

          Follow the instructions in Change PuTTY private key for a saved configuration until item 5, then:

          1. Single click on the textbox containing the path to your private key, then select all text (press Ctrl + a), then copy the location of the private key (press Ctrl + c)

          2. Open PuTTYgen

          3. Enter menu item \"File\" and select \"Load Private key\"

          4. On the \"Load private key\" popup, click in the textbox next to \"File name:\", then paste the location of your private key (press Ctrl + v), then click Open

          5. Make sure that your Public key from the \"Public key for pasting into OpenSSH authorized_keys file\" textbox is in your \"Public keys\" section on the accountpage https://account.vscentrum.be. (Scroll down to the bottom of \"View Account\" tab, you will find there the \"Public keys\" section)

          "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

          If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

          You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

          - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- sha-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

          Do not click \"Yes\" until you verified the fingerprint. Do not press \"No\" in any case.

          If the fingerprint matches, click \"Yes\".

          If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@uantwerpen.be.

          Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

          "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

          If you get errors like:

          $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

          or

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

          It's probably because you transferred the files from a Windows computer. See the section about dos2unix in Linux tutorial to fix this error.

          "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "

          The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

          Make sure the fingerprint in the alert matches one of the following:

          - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- sha-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

          If it does, type yes (or press Yes if you get a graphical dialog); if it doesn't, please contact hpc@uantwerpen.be.

          Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

          "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

          To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

          Note

          Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

          "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

          If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

          Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

          You can check the amount of virtual memory (in KB) that is available to you via the ulimit -v command in your job script.
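
          For example, adding a line like this near the top of your job script prints the limit to the job output (a minimal sketch):

          echo \"Virtual memory limit (KB): $(ulimit -v)\"\n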

          "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

          See Generic resource requirements to set memory and other requirements; see Specifying memory requirements to fine-tune the amount of memory you request.

          "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

          All the UAntwerpen-HPC clusters run some variant of the \"RedHat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

          vsc20167@ln01[203] $\n

          When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

          Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen joe Text editor

          Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

          $ echo This is a test\nThis is a test\n

          Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

          More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the \"ls\" command, by trying any of the following:

          $ ls --help \n$ man ls\n$ info ls\n

          (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

          "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

          In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

          Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the command in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

          Another very common scripting language is shell scripting.

          Typically, the example scripts below have one command per line, although it is possible to put multiple commands on one line. A very simple example of a script may be:

          echo \"Hello! This is my hostname:\" \nhostname\n

          You can type both lines at your shell prompt, and the result will be the following:

          $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

          Suppose we want to call this script \"foo\". You open a new file for editing, name it \"foo\", and edit it with your favourite editor:

          $ vi foo\n

          or use the following commands:

          echo \"echo 'Hello! This is my hostname:'\" > foo\necho hostname >> foo\n

          The easiest way to run a script is to start the interpreter and pass the script as a parameter. In case of our script, the interpreter may either be \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

          $ bash foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

          Congratulations, you just created and started your first shell script!

          A more advanced way of executing your shell scripts is by making them executable on their own, i.e., without invoking the interpreter manually. The system can not automatically detect which interpreter you want, so you need to tell it in some way. The easiest way is by using the so-called \"shebang\" notation, explicitly created for this purpose: you put the following line on top of your shell script: \"#!/path/to/your/interpreter\".

          You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

          $ which bash\n/bin/bash\n

          We edit our script and change it with this information:

          #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

          Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

          Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

          chmod +x foo\n

          Now you can start your script by simply executing it:

          $ ./foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

          The same technique can be used for all other scripting languages, like Perl and Python.

          Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

          "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
          at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg Brings a job running in the background to the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Change the access permissions (mode) of files and directories"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

          The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

          Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

          To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

          Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

          Through this web portal, you can:

          • browse through the files & directories in your VSC account, and inspect, manage or change them;

          • consult active jobs (across all HPC-UGent Tier-2 clusters);

          • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

          • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

          • open a terminal session directly in your web browser;

          More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

          "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

          All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

          "}, {"location": "web_portal/#login", "title": "Login", "text": "

          When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

          "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

          The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

          Please click \"Authorize\" here.

          This request will only be made once; you should not see it again afterwards.

          "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

          Once logged in, you should see this start page:

          This page includes a menu bar at the top, with buttons on the left providing access to the different features supported by the web portal, a Help menu, your VSC account name, and a Log Out button on the top right, as well as the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

          If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

          "}, {"location": "web_portal/#features", "title": "Features", "text": "

          We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

          "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

          Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

          The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

          Here you can:

          • Click a directory in the tree view on the left to open it;

          • Use the buttons on the top to:

            • go to a specific subdirectory by typing in the path (via Go To...);

            • open the current directory in a terminal (shell) session (via Open in Terminal);

            • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

            • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

            • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

            • show the owner and permissions in the file listing (via Show Owner/Mode);

          • Double-click a directory in the file listing to open that directory;

          • Select one or more files and/or directories in the file listing, and:

            • use the View button to see the contents (use the button at the top right to close the resulting popup window);

            • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

            • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

            • use the Download button to download the selected files and directories from your VSC account to your local workstation;

            • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

            • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

            • use the Delete button to (permanently!) remove the selected files and directories;

          For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

          "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

          Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

          For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

          "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

          To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

          A new browser tab will be opened that shows all your current queued and/or running jobs:

          You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

          Jobs that are still queued or running can be deleted using the red button on the right.

          Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

          For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

          "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

          To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

          This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

          You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

          Don't forget to actually submit your job to the system via the green Submit button!

          "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

          In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

          "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

          Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

          Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

          To exit the shell session, type exit followed by Enter and then close the browser tab.

          Note that you can not access a shell session after you closed a browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).
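
          A minimal sketch of how such a tool can be used to keep work running after closing the tab (assuming screen is available on the login node; tmux works similarly, and long_running_task is just a placeholder):

          $ screen -S mywork        # start a named screen session on the login node\n$ ./long_running_task     # hypothetical command; detach with Ctrl-a d\n$ screen -r mywork        # later, from a new shell session: reattach\n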

          "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

          To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

          You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode, the regular queueing times apply, depending on the requested resources.

          Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

          To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

          "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

          See the dedicated page on Jupyter notebooks.

          "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

          In case of problems with the web portal, it could help to restart the web server running in your VSC account.

          You can do this via the Restart Web Server button under the Help menu item:

          Of course, this only affects your own web portal session (not those of others).

          "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
          • ABAQUS for CAE course
          "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

          X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

          1. A graphical remote desktop that works well over low bandwidth connections.

          2. Copy/paste support from client to server and vice-versa.

          3. File sharing from client to server.

          4. Support for sound.

          5. Printer sharing from client to server.

          6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

          "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

          X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

          X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. This section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

          "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

          After the X2Go client installation just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

          There are two ways to connect to the login node:

          • Option A: A direct connection to \"login.hpc.uantwerpen.be\". This is the simpler option; the system will decide which login node to use based on a load-balancing algorithm.

          • Option B: You can use the node \"login.hpc.uantwerpen.be\" as SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

          "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

          This is the easier way to set up X2Go: a direct connection to the login node.

          1. Include a session name. This will help you to identify the session if you have more than one; you can choose any name (in our example \"HPC login node\").

          2. Set the login hostname (In our case: \"login.hpc.uantwerpen.be\")

          3. Set the Login name. In the example it is \"vsc20167\", but you must change it to your own VSC account.

          4. Set the SSH port (22 by default).

          5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

            1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

          You should look for your private SSH key generated by puttygen and exported in \"OpenSSH\" format, as described in Generating a public/private key pair (by default \"id_rsa\", and not the \".ppk\" version). Choose that file and click Open.

          6. Check \"Try autologin\" option.

          7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

            1. [optional]: Set a single application like Terminal instead of XFCE desktop. This option is much better than PuTTY because the X2Go client includes copy-pasting support.

          8. [optional]: Change the session icon.

          9. Click the OK button after these changes.

          "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

          This option is useful if you want to resume a previous session or if you want to set explicitly the login node to use. In this case you should include a few more options. Use the same Option A setup but with these changes:

          1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

          2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"ln2.leibniz.uantwerpen.vsc\")

          3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

            1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

            2. Set Host to \"login.hpc.uantwerpen.be\" within \"Proxy Server\" section as well.

            3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key within \"RSA/DSA key\" field within \"Proxy Server\" as you did for the server configuration (The \"RSA/DSA key\" field must be set in both sections)

            4. Click the OK button after these changes.

          "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

          Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. It is possible to terminate a session if you log out from the current open session or if you click on the \"shutdown\" button from X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

          X2Go will keep the session open for you (but only if the login node is not rebooted).

          "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

          If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

          hostname\n

          This will give you the full hostname of the login node (like \"ln2.leibniz.uantwerpen.vsc\", but the hostname in your situation may be slightly different). You should set this same name to resume the session the next time. Just add this full hostname into the \"login hostname\" field in your X2Go session (see Option B: use the login node as SSH proxy).

          "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

          If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select the session and terminate it. Then finish the session, choose the XFCE session type again (or whatever you use), and you should have your X2Go session back. Since we have multiple login nodes, you might have to repeat these steps multiple times.

          "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

          The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

          To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

          Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

          After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

          Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

          "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

          TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

          Loads MNIST datasets and trains a neural network to recognize hand-written digits.

          Runtime: ~1 min. on 8 cores (Intel Skylake)

          See https://www.tensorflow.org/tutorials/quickstart/beginner

          "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

          Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

          These skills are important to the UAntwerpen-HPC, which operates on RedHat Enterprise Linux. For more information see introduction to HPC.

          The guide aims to make you familiar with the Linux command line environment quickly.

          The tutorial goes through the following steps:

          1. Getting Started
          2. Navigating
          3. Manipulating files and directories
          4. Uploading files
          5. Beyond the basics

          Do not forget Common pitfalls, as this can save you some troubleshooting.

          "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
          • More on the HPC infrastructure.
          • Cron Scripts: run scripts automatically at periodically fixed times, dates, or intervals.
          "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

          Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

          "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

          To redirect output to (or read input from) files, you can use the redirection operators: >, >>, &>, and <.

          First, it's important to make a distinction between two different output channels:

          1. stdout: standard output channel, for regular output

          2. stderr: standard error channel, for errors and warnings

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

          > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

          $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

          >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

          $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

          < reads input for a command from a file instead of from the keyboard. So you would use this to simulate typing into a terminal. command < somefile.txt is largely equivalent to cat somefile.txt | command.

          One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure, you might save a list of all the files you're interested in and then read in that file list when you are done:

          $ find . -name '*.txt' > files\n$ xargs grep banana < files\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

          To redirect the stderr output (warnings, messages), you can use 2>, just like >

          $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

          To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

          $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

          Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

          $ ls | wc -l\n    42\n

          A common pattern is to pipe the output of a command to less so you can examine or search the output:

          $ find . | less\n

          Or to look through your command history:

          $ history | less\n

          You can put multiple pipes in the same line. For example, which cp commands have we run?

          $ history | grep cp | less\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

          The shell will expand certain things, including:

          1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

          2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

          3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

          4. square brackets can be used to list a number of options for a particular character position; example: ls *.[oe][0-9]. This will list all files starting with any characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a single character from '0' to '9' (so any digit) ([0-9]). So the filename anything.o5 will match, but anything.o52 won't.

          "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

          ps lists running processes. By default, it will only show you the processes running in the current shell. To see all of your processes running on the system, use:

          $ ps -fu $USER\n

          To see all the processes:

          $ ps -elf\n

          To see all the processes in a forest view, use:

          $ ps auxf\n

          The last two will spit out a lot of data, so get in the habit of piping it to less.

          pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

          pgrep will find all the processes whose name matches a given pattern and print their process IDs (PIDs). This is useful for feeding PIDs into other commands, as we will see in the next section.
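          For example, to print the PIDs of all your own processes whose name matches \"python\" (the process name and PIDs here are just an illustration):

          $ pgrep -u $USER python\n12345\n12346\n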

          "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

          ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill sends a signal (SIGTERM by default) to the process to ask it to stop.

          $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

          Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignores your signal, you can send it a different signal (SIGKILL), which the OS uses to unceremoniously terminate the process:

          $ kill -9 1234\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

          top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

          To see only your processes, type u and your username after starting top (you can also do this with top -u $USER). The default is to sort the display by %CPU. To change the sort column, use < and > to move it left or right.

          There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

          To exit top, use q (for 'quit').

          For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

          "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

          ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

          $ ulimit -a\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

          To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

          $ wc example.txt\n      90     468     3189   example.txt\n

          The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

          To only count the number of lines, use wc -l:

          $ wc -l example.txt\n      90    example.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

          grep is an important command. It was originally an abbreviation of \"globally search for a regular expression and print\", but it has entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give it a pattern and a list of files.

          $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

          grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

          "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

          cut is used to pull fields out of files or piped streams. It's useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from an (unquoted) CSV file (comma-separated values, so -d ',': delimited by ','), you can use the following:

          $ cut -f 1 -d ',' mydata.csv\n
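          Combining the two as described above (the file name and search string are just an illustration), grep selects the lines containing banana and cut then pulls out their first comma-separated field:

          $ grep banana mydata.csv | cut -f 1 -d ','\n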

          "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

          sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

          $ sed 's/oldtext/newtext/g' myfile.txt\n

          By default, sed will just print the result. If you want to edit the file in place, use -i, but make very sure the result is what you want before you go around destroying your data!

          "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

          awk is a small programming language that goes beyond sed for more advanced stream processing. Going in depth is far out of scope for this tutorial, but there are two examples that are worth knowing.

          First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields, then cut -f 4 -d ' ' will almost certainly give you a headache, as there might be a varying number of spaces between fields. awk handles whitespace splitting much better. So, pulling out the fourth field of a whitespace-delimited file goes as follows:

          $ awk '{print $4}' mydata.dat\n

          You can use -F ':' to change the delimiter (F for field separator).
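          For example, /etc/passwd is a colon-separated file, so the following prints its first field (the usernames):

          $ awk -F ':' '{print $1}' /etc/passwd\n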

          The next example is used to sum numbers from a field:

          $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

          The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do it for you. A script is nothing special; it is just a text file like any other. Any commands you put in it will be executed from top to bottom.

          However, there are some rules you need to abide by.

          Here is a very detailed guide should you need more information.

          "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

          The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit odd, but you can just copy-paste it; you need not worry about it further. It is, however, very important that this is the very first line of the script! These are all valid shebangs, but you should use only one of them:

          #!/bin/sh\n
          #!/bin/bash\n
          #!/usr/bin/env bash\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

          Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

          if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n\nOr if you only want to do something when a file exists:\n\nif [ -f filename ]\nthen\necho \"it exists\"\nfi\n
          Or only if a certain variable is bigger than one:
          if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
          Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or preceded by a semicolon). It is best to just copy this example and modify it.

          In the initial example, we used -d to test whether a directory exists and -f to test whether a file exists. There are several more checks; a few common ones are shown below.
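          A short sketch of some other common tests (the file name is just a placeholder):

          if [ -e somefile ]; then echo \"somefile exists\"; fi\nif [ -f somefile ]; then echo \"somefile is a regular file\"; fi\nif [ -s somefile ]; then echo \"somefile is not empty\"; fi\nif [ -w somefile ]; then echo \"somefile is writable\"; fi\n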

          Another useful example is to test if a variable contains a value (so it's not empty):

          if [ -z \"$PBS_ARRAYID\" ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

          The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

          "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

          Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

          Let's look at a simple example:

          for i in 1 2 3\ndo\necho $i\ndone\n
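          A slightly more practical sketch loops over all .txt files in the current directory and counts the lines in each (the .txt pattern is just an illustration):

          # loop over all files ending in .txt\nfor f in *.txt\ndo\nwc -l \"$f\"\ndone\n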

          "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

          Subcommands (also known as command substitution) are used all the time in shell scripts. They store the output of a command in a variable, so it can later be used in, for example, a conditional or a loop.

          CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

          In the above example you can see the two different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is the $() syntax.
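          As a sketch of how a subcommand can feed a conditional (the threshold of 100 is just an illustration):

          NUMFILES=$(ls | wc -l)  # count the entries in the current directory\nif [ \"$NUMFILES\" -gt 100 ]\nthen\necho \"This directory contains more than 100 entries\"\nfi\n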

          "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

          Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

          Firstly a useful thing to know for debugging and testing is that you can run any command like this:

          command > output.log 2>&1   # one single output file, both output and errors\n

          If you add > output.log 2>&1 at the end of any command, it will combine stdout and stderr and write them to a single file named output.log. (The order matters: the 2>&1 must come after the > redirection.)

          If you want regular and error output separated you can use:

          command > output.log 2> output.err  # errors in a separate file\n

          this will write regular output to output.log and error output to output.err.

          You can then look for the errors with less or search for specific text with grep.

          In scripts, you can use:

          set -e\n

          This tells the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failing command most likely causes the rest of the script to fail as well.
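          A minimal sketch of a script that uses set -e (the file names are just an illustration):

          #!/bin/bash\nset -e\ncp importantfile.txt /tmp/backup/  # if this copy fails...\necho \"copy succeeded\"  # ...this line is never reached\n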

          "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

          Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds its exit code. A value other than zero signifies that something went wrong. An example use case:

          command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

          If you have certain commands executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

          Examples include (a minimal sketch is shown below, after the recommendations):

          • modifying your $PS1 (to tweak your shell prompt)

          • printing information about the current environment or job (echoing environment variables, etc.)

          • selecting a specific cluster to run on with module swap cluster/...

          Some recommendations:

          • Avoid using module load statements in your $HOME/.bashrc file

          • Don't directly edit your .bashrc file: if there's an error in it, you might not be able to log in again. To prevent that, use another file to test your changes, then copy them over once you have tested them.
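          As an illustration, a minimal $HOME/.bashrc could look like this (the prompt and the printed message are just examples, not a recommendation to copy as-is):

          # tweak the shell prompt\nexport PS1='\\w $ '\n# print a short reminder of which system we are on\necho \"Logged in on $HOSTNAME\"\n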

          "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

          When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

          "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
          "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

          The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

          This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

          #PBS -l nodes=1:ppn=1 # single-core\n

          For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

          #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

          We intend to submit it on the long queue:

          #PBS -q long\n

          We request a total running time of 48 hours (2 days).

          #PBS -l walltime=48:00:00\n

          We specify a desired name of our job:

          #PBS -N FreeSurfer_per_subject-time-longitudinal\n
          This specifies mail options:
          #PBS -m abe\n

          1. a means mail is sent when the job is aborted.

          2. b means mail is sent when the job begins.

          3. e means mail is sent when the job ends.

          Joins error output with regular output:

          #PBS -j oe\n

          All of these options can also be specified on the command line, and they will override any pragmas present in the script.

          "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
          1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

          2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

          3. How many files and directories are in /tmp?

          4. What's the name of the 5th file/directory in alphabetical order in /tmp?

          5. List all files that start with t in /tmp.

          6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

          7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

          "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

          This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

          "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

          If you receive an error message which contains something like the following:

          No such file or directory\n

          It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

          Try and figure out the correct location using ls, cd and using the different $VSC_* variables.

          "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

          Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

          $ cat some file\ncat: some: No such file or directory\ncat: file: No such file or directory\n

          Spaces are technically permitted, but they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or put the filename in quotes:

          $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

          This is especially error-prone if you are piping results of find:

          $ find . -type f | xargs cat\ncat: ./some: No such file or directory\ncat: file: No such file or directory\n

          This can be worked around using the -print0 flag:

          $ find . -type f -print0 | xargs -0 cat\n...\n

          But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

          "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

          If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

          $ rm -r ~/$PROJETC/*\n
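          One way to guard against this, sketched here using Bash's ${VAR:?} expansion (an extra safety net, not something used elsewhere in this tutorial; the variable name PROJECT is just an illustration), is to make the shell refuse to run the command when the variable is unset or empty:

          $ rm -r ~/\"${PROJECT:?is not set}\"/*\n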

          "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

          A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

          $ #rm -r ~/$PROJETC/*\n
          Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

          "}, {"location": "linux-tutorial/common_pitfalls/#copying-files-with-winscp", "title": "Copying files with WinSCP", "text": "

          After copying files from a Windows machine, a file might look funny when you look at it on the cluster.

          $ cat script.sh\n#!/bin/bash^M\n#PBS -l nodes^M\n...\n

          Or you can get errors like:

          $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

          See section dos2unix to fix these errors with dos2unix.

          "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
          $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

          Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

          $ chmod +x script_name.sh\n

          "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

          If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

          If you need help about a certain command, you should consult its so-called \"man page\":

          $ man command\n

          This will open the manual of the command. The manual contains a detailed explanation of all the options the command has. Exit the manual by pressing 'q'.

          Don't be afraid to contact hpc@uantwerpen.be. They are here to help and will do so for even the smallest of problems!

          "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
          1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

          2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

          3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

          4. basic shell usage

          5. Bash for beginners

          6. MOOC

          Please don't hesitate to contact hpc@uantwerpen.be in case of questions or problems.

          "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

          To get started with the UAntwerpen-HPC infrastructure, you need to obtain a VSC account, see the HPC manual. Keep in mind that you must keep your private key to yourself!

          You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your private key can use your VSC account!

          Details on connecting to the HPC infrastructure are available in the connecting section of the HPC manual.

          "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

          To get help:

          1. use the documentation available on the system, through the help, info and man commands (use q to exit).
            help cd \ninfo ls \nman cp \n
          2. use Google

          3. contact hpc@uantwerpen.be in case of problems or questions (even for basic things!)

          "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

          Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read it carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@uantwerpen.be.

          "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

          The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

          You use the shell by executing commands, and hitting <enter>. For example:

          $ echo hello \nhello \n

          You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

          To go through previous commands, use <up> and <down>, rather than retyping them.

          "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

          A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

          $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

          "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

          If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

          "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

          At the prompt we also have access to shell variables, which have both a name and a value.

          They can be thought of as placeholders for things we need to remember.

          For example, to print the path to your home directory, we can use the shell variable named HOME:

          $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

          This prints the value of this variable.

          "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

          There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

          For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

          $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

          You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

          $ env | sort | grep VSC\n

          But we can also define our own. This is done with the export command (note: by convention, variable names are written in all-caps):

          $ export MYVARIABLE=\"value\"\n

          It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

          If we then do

          $ echo $MYVARIABLE\n

          this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

          "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

          You can change what your prompt looks like by redefining the special-purpose variable $PS1.

          For example: to include the current location in your prompt:

          $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

          Note that ~ is a short representation of your home directory.

          To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

          $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

          "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

          One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

          This may lead to surprising results, for example:

          $ export WORKDIR=/tmp/test\n$ cd $WROKDIR   # note the typo: WROKDIR is not defined, so this is just \"cd\"\n$ pwd\n/user/home/gent/vsc400/vsc40000\n

          To understand what's going on here, see the section on cd below.

          The moral here is: be very careful to not use empty variables unintentionally.

          Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

          The -e option will result in the script getting stopped if any command fails.

          The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)

          More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

          "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

          If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

          "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

          Basic information about the system you are logged into can be obtained in a variety of ways.

          We limit ourselves to determining the hostname:

          $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

          And querying some basic information about the Linux kernel:

          $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

          "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
          • Print the full path to your home directory
          • Determine the name of the environment variable to your personal scratch directory
          • What's the name of the system you're logged into? Is it the same for everyone?
          • Figure out how to print the value of a variable without including a newline
          • How do you get help on using the man command?

          The next chapter teaches you how to navigate.

          "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

          Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the UAntwerpen-HPC for a list of available locations.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

          Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

          To figure out where your quota is being spent, the du (disk usage) command can come in useful:

          $ du -sh test\n59M test\n

          Do not (frequently) run du on directories where large amounts of data are stored, since that will:

          1. take a long time

          2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

          Software is provided through so-called environment modules.

          The most commonly used commands are:

          1. module avail: show all available modules

          2. module avail <software name>: show available modules for a specific software name

          3. module list: show list of loaded modules

          4. module load <module name>: load a particular module

          More information is available in section Modules.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

          To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

          Detailed information is available in section submitting your job.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

          Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

          Hint: python -c \"print(sum(range(1, 101)))\"

          • How many modules are available for Python version 3.6.4?
          • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
          • Which cluster modules are available?

          • What's the full path to your personal home/data/scratch directories?

          • Determine how large your personal directories are.
          • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

          Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands short to type.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

          To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

          $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

          To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
          $ cp source target\n

          This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

          $ cp -r sourceDirectory target\n

          A last more complicated example:

          $ cp -a sourceDirectory target\n

          Here we used the same cp command, but with the -a (archive) option, which tells cp to copy files recursively while preserving timestamps and permissions.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
          $ mkdir directory\n

          which will create a directory with the given name inside the current directory.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
          $ mv source target\n

          mv will move the source path to the destination path. This works for both directories and files.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

          Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

          $ rm filename\n
          rm will remove a file (rm -rf directory will remove a given directory and every file inside it). WARNING: removed files are lost forever; there are no backups, so beware when using this command!

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

          You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

          $ rmdir directory\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

          Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

          1. User - a particular user (account)

          2. Group - a particular group of users (may be user-specific group with only one member)

          3. Other - other users in the system

          The permission types are:

          1. Read - For files, this gives permission to read the contents of a file

          2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

          3. Execute - For files, this gives permission to execute the file as though it were a script. For directories, it allows users to open the directory and look at the contents.

          Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

          $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          Here, we see that articleTable.csv is a file (beginning the line with -) has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

          The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx). So that user can look into the directory and add or remove files. Users in the mygroup can also look into the directory and read the files. But they can't add or remove files (r-x). Finally, other users can read files in the directory, but other users have no permissions to look in the directory at all (---).

          Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

          $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

          You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

          You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
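          For example, a sketch that gives the group read access to all regular files (and only the files, not the directories) under Project_GoldenDragon:

          $ find Project_GoldenDragon -type f -exec chmod g+r {} \\;\n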

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

          However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

          $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          This will give the user otheruser permission to write to Project_GoldenDragon.

          Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

          Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

          See https://linux.die.net/man/1/setfacl for more information.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

          Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

          $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

          Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

          $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

          Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

          $ unzip myfile.zip\n

          If we would like to make our own zip archive, we use zip:

          $ zip myfiles.zip myfile1 myfile2 myfile3\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

          Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

          You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

          $ tar -xf tarfile.tar\n

          Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

          $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

          Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

          # cp, ln: <source(s)> <target>\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: <target> <source(s)>\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

          If you pass tar the source files first (where it expects the archive name), the first source file will be overwritten. You can control the order of arguments of tar if it helps you remember:

          $ tar -c source1 source2 source3 -f tarfile.tar\n
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
          1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

          2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

          3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

          4. Remove the another/test directory with a single command.

          5. Rename test to test2. Move test2/hostname.txt to your home directory.

          6. Change the permission of test2 so only you can access it.

          7. Create an empty job script named job.sh, and make it executable.

          8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

          The next chapter is on uploading files, which is especially important when using the HPC infrastructure.

          "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

          This chapter serves as a guide to navigating within a Linux shell, giving you the essential techniques to traverse directories, which is a very important skill.

          "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

          To print the current directory, use pwd or $PWD:

          $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

          "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

          A very basic and commonly used command is ls, which can be used to list files and directories.

          In its basic usage, it just prints the names of files and directories in the current directory. For example:

          $ ls\nafile.txt some_directory \n

          When provided an argument, it can be used to list the contents of a directory:

          $ ls some_directory \none.txt two.txt\n

          A couple of commonly used options include:

          • detailed listing using ls -l:

            $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • To print the size information in human-readable form, use the -h flag:

            $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • also listing hidden files using the -a flag:

            $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • ordering files by the most recent change using -rt:

            $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

          If you try to use ls on a file that doesn't exist, you will get a clear error message:

          $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
          "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

          To change to a different directory, you can use the cd command:

          $ cd some_directory\n

          To change back to the previous directory you were in, there's a shortcut: cd -

          Using cd without an argument results in returning back to your home directory:

          $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

          "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

          The file command can be used to inspect what type of file you're dealing with:

          $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
          "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

          An absolute file path starts with / (or a variable whose value starts with /); the / is also called the root of the filesystem.

          Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

          A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

          Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

          There are two special relative paths worth mentioning:

          • . is a shorthand for the current directory
          • .. is a shorthand for the parent of the current directory

          You can also use .. when constructing relative paths, for example:

          $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
          "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

          Each file and directory has particular permissions set on it, which can be queried using ls -l.

          For example:

          $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

          The -rw-rw-r-- specifies both the type of file (- for files, d for directories; see the first character), and the permissions for user/group/others:

          1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
          2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read and write permissions (not execute)
          3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
          4. the 3rd part r-- indicates that other users only have read permissions

          The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

          1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
          2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

          See also the chmod command later in this manual.

          "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

          find will crawl a series of directories and lists files matching given criteria.

          For example, to look for the file named one.txt:

          $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

          To look for files using incomplete names, you can use the wildcard *; note that you need to escape the * by adding double quotes, to prevent Bash from expanding it into existing filenames such as afile.txt:

          $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

          A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).

          "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
          • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
          • When was your home directory created or last changed?
          • Determine the name of the last changed file in /tmp.
          • See how home directories are organised. Can you access the home directory of other users?

          The next chapter will teach you how to interact with files and directories.

          "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

          To transfer files from and to the HPC, see the section about transferring files of the HPC manual

          "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

          After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

          For example, you may see an error when submitting a job script that was edited on Windows:

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

          To fix this problem, you should run the dos2unix command on the file:

          $ dos2unix filename\n
          "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

          As we end up in the home directory when connecting, it would be convenient if we could easily access our data and scratch storage. To facilitate this, we will create symbolic links to them in our home directory. The following commands create two symbolic links (they're like \"shortcuts\" on your desktop and they look like directories in WinSCP) pointing to the respective storage locations:

          $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
          "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

          Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ is the Control key, so ^O means Ctrl-O. The main commands are:

          1. Open (\"Read\"): ^R

          2. Save (\"Write Out\"): ^O

          3. Exit: ^X

          More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

          "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

          rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

          You will need to run rsync from a computer where it is installed. Installing rsync is easiest on Linux: it comes pre-installed with a lot of distributions.

          For example, to copy a folder with lots of CSV files:

          $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

          will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section).

          The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

          To copy large files using rsync, you can use the -P flag: it enables both showing progress and resuming partially transferred files.
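          For example (a sketch; the file name is just an illustration):

          $ rsync -zvP bigdataset.tar.gz vsc40000@login.hpc.ugent.be:data/\n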

          To copy files to your local computer, you can also use rsync:

          $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
          This will copy the folder bioset and its contents from $VSC_DATA to a local folder named local_folder.

          See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

          "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
          1. Download the file /etc/hostname to your local computer.

          2. Upload a file to a subdirectory of your personal $VSC_DATA space.

          3. Create a file named hello.txt and edit it using nano.

          Now you have a basic understanding, see next chapter for some more in depth concepts.

          "}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          You can also check whether some specific software, such as a compiler or an application (e.g., LAMMPS), is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          Since you may not know the exact capitalisation used in the module name, we performed a case-insensitive search with the \"-i\" option.

          "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": ""}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          You can also check whether some specific software, such as a compiler or an application (e.g., LAMMPS), is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          Since you may not know the exact capitalisation used in the module name, we performed a case-insensitive search with the \"-i\" option.

          "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          You can also check whether some specific software, such as a compiler or an application (e.g., MATLAB), is installed on the UAntwerpen-HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
          "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

          (more info soon)

          "}]} \ No newline at end of file diff --git a/HPC/Antwerpen/Windows/sitemap.xml.gz b/HPC/Antwerpen/Windows/sitemap.xml.gz index 9ea6b0f3205..21f0c132cd4 100644 Binary files a/HPC/Antwerpen/Windows/sitemap.xml.gz and b/HPC/Antwerpen/Windows/sitemap.xml.gz differ diff --git a/HPC/Antwerpen/Windows/useful_linux_commands/index.html b/HPC/Antwerpen/Windows/useful_linux_commands/index.html index 7f567fcd7b1..1dc96a66171 100644 --- a/HPC/Antwerpen/Windows/useful_linux_commands/index.html +++ b/HPC/Antwerpen/Windows/useful_linux_commands/index.html @@ -1284,7 +1284,7 @@

          How to get started with shell scr
          $ vi foo
           

          or use the following commands:

          -
          echo "echo Hello! This is my hostname:" > foo
          +
          echo "echo 'Hello! This is my hostname:'" > foo
           echo hostname >> foo
           

          The easiest ways to run a script is by starting the interpreter and pass @@ -1309,7 +1309,9 @@

          How to get started with shell scr /bin/bash

          We edit our script and change it with this information:

          -
          #!/bin/bash echo \"Hello! This is my hostname:\" hostname
          +
          #!/bin/bash
          +echo "Hello! This is my hostname:"
          +hostname
           

          Note that the "shebang" must be the first line of your script! Now the operating system knows which program should be started to run the diff --git a/HPC/Antwerpen/macOS/search/search_index.json b/HPC/Antwerpen/macOS/search/search_index.json index ea7fef08e3e..c40e416f87c 100644 --- a/HPC/Antwerpen/macOS/search/search_index.json +++ b/HPC/Antwerpen/macOS/search/search_index.json @@ -1 +1 @@ -{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the UAntwerpen-HPC documentation", "text": "

          Use the menu on the left to navigate, or use the search box on the top right.

          You are viewing documentation intended for people using macOS.

          Use the OS dropdown in the top bar to switch to a different operating system.

          Quick links

          • Getting Started | Getting Access
          • FAQ | Troubleshooting | Best practices | Known issues

          If you find any problems in this documentation, please report them by mail to hpc@uantwerpen.be or open a pull request.

          If you still have any questions, you can contact the UAntwerpen-HPC.

          "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": ""}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

          An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.
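
          As a minimal sketch of such a scaling test (my_job.sh is a hypothetical job script; the exact resource syntax may differ on your cluster), you could submit the same script with increasing core counts and compare the reported execution times:

          $ qsub -l nodes=1:ppn=4 my_job.sh\n$ qsub -l nodes=1:ppn=8 my_job.sh\n$ qsub -l nodes=1:ppn=16 my_job.sh\n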

          See also: Running batch jobs.

          "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

          When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

          Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

          Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.

          "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

          Modules each come with a suffix that describes the toolchain used to install them.

          Examples:

          • AlphaFold/2.2.2-foss-2021a

          • tqdm/4.61.2-GCCcore-10.3.0

          • Python/3.9.5-GCCcore-10.3.0

          • matplotlib/3.4.2-foss-2021a

          Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

          The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          You can use module avail [search_text] to see which versions on which toolchains are available to use.

          It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

          "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

          When incompatible modules are loaded, you might encounter an error like this:

          {{ lmod_error }}\n

          You should load another foss module version that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

          Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

          An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          See also: How do I choose the job modules?

          "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

          The 72 hour walltime limit will not be extended. However, you can work around this barrier:

          • Check that all available resources are being used. See also:
            • How many cores/nodes should I request?.
            • My job is slow.
            • My job isn't using any GPUs.
          • Use a faster cluster.
          • Divide the job into more parallel processes.
          • Divide the job into shorter processes, which you can submit as separate jobs.
          • Use the built-in checkpointing of your software.
          "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

          Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

          When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

          Try requesting a bit more memory than your proportional share, and see if that solves the issue.
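
          As a sketch of what that could look like in a job script (the 16gb value is only an example; the exact directive to use is covered in the section linked below):

          #PBS -l nodes=1:ppn=8\n#PBS -l mem=16gb\n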

          See also: Specifying memory requirements.

          "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

          When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the amount of memory and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

          It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

          See also: Running interactive jobs.

          "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

          Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

          Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

          "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

          There are a few possible causes why a job can perform worse than expected.

          Is your job using all the cores you've requested? You can test this by increasing and decreasing the core count: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

          Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

          Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are relatively slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: the job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
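
          A minimal sketch of this staging pattern in a job script could look like the following (input.dat, output.dat and myprogram are hypothetical placeholders for your own files and software):

          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n# stage the input from the (slow) data filesystem to the (fast) scratch filesystem\ncp $VSC_DATA/input.dat $VSC_SCRATCH/\ncd $VSC_SCRATCH\n# run the computation on scratch (myprogram is a placeholder for your own software)\nmyprogram input.dat > output.dat\n# copy the results back to the data filesystem\ncp output.dat $VSC_DATA/\n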

          "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

          Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

          To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
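
          A minimal MPI job script following this advice might look like the sketch below (my_mpi_program is a hypothetical executable; also load the modules your program was built with). Submit it with qsub, not sbatch:

          #!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=1:0:0\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\n# mympirun determines the number of MPI processes from the job resources\nmympirun ./my_mpi_program\n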

          See also: Multi core jobs/Parallel Computing and Mympirun.

          "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

          For example, we have a simple script (./hello.sh):

          #!/bin/bash \necho \"hello world\"\n

          And we run it like mympirun ./hello.sh --output output.txt.

          To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

          mympirun --output output.txt ./hello.sh\n
          "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

          In practice, it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires. New jobs may be submitted by other users that are assigned a higher priority than your job(s). You can use the squeue --start command to get an estimated start time for your jobs in the queue. Keep in mind that this is just an estimate.

          "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

          When trying to create files, errors like this can occur:

          No space left on device\n

          The error \"No space left on device\" can mean two different things:

          • all available storage quota on the file system in question has been used;
          • the inode limit has been reached on that file system.

          An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

          Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
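
          For example, to pack a directory with many small files into a single compressed tar-file, and so free up inodes (mydirectory is just an example name; only remove the original after verifying the archive):

          $ tar -czvf mydirectory.tar.gz mydirectory/\n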

          If the problem persists, feel free to contact support.

          "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

          NO. You are not allowed to share your VSC account with anyone else, it is strictly personal.

          See https://pintra.uantwerpen.be/bbcswebdav/xid-23610_1

          "}, {"location": "FAQ/#can-i-share-my-data-with-other-uantwerpen-hpc-users", "title": "Can I share my data with other UAntwerpen-HPC users?", "text": "

          Yes, you can use the chmod or setfacl commands to change the permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

          $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc20167 mygroup      40 Apr 12 15:00 dataset.txt\n

          For more information about chmod or setfacl, see Linux tutorial.

          "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

          Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

          "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

          Please send an e-mail to hpc@uantwerpen.be that includes:

          • What software you want to install and the required version

          • Detailed installation instructions

          • The purpose for which you want to install the software

          If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
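
          As a rough sketch of such a manual installation in a virtual environment (the Python module version and the package name some_package are just examples; follow the linked page for the recommended procedure):

          module load Python/3.9.5-GCCcore-10.3.0\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\npip install some_package\n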

          "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

          On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

          MacOS & Linux (on Windows, only the second part is shown):

          @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

          Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

          "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

          A Virtual Organisation consists of a number of members and moderators. A moderator can:

          • Manage the VO members (but can't access/remove their data on the system).

          • See how much storage each member has used, and set limits per member.

          • Request additional storage for the VO.

          One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

          See also: Virtual Organisations.

          "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

          Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

          du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

          The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

          The egrep command will only let entries that match with the specified regular expression [0-9]{3}M|[0-9]G through, which corresponds with files that consume more than 100 MB.

          "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

          By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

          You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

          "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

          When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

          sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

          A lot of tasks can be performed without sudo, including installing software in your own account.

          Installing software

          • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
          • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
          "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

          Who can I contact?

          • General questions regarding HPC-UGent and VSC: hpc@ugent.be

          • HPC-UGent Tier-2: hpc@ugent.be

          • VSC Tier-1 compute: compute@vscentrum.be

          • VSC Tier-1 cloud: cloud@vscentrum.be

          "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

          Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

          "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

          The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

          "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

          Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

          module load hod\n
          "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

          The hod modules are constructed such that they can be used on the UAntwerpen-HPC login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

          As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

          For example, this will work as expected:

          $ module swap cluster/{{ othercluster }}\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

          Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

          "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

          The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

          $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

          By defining these environment variables, you do not have to specify --hod-module and --workdir yourself when using hod batch or hod create, even though these options are strictly required.

          If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).

          Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

          "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

          After HOD clusters terminate, their local working directory and cluster information are typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

          These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

          You should occasionally clean this up using hod clean:

          $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/{{ defaultcluster }}(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        433253.leibniz         <job-not-found>     <none>\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/433253.leibniz for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/{{ othercluster }}\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.{{ othercluster }}.gent.vsc  <job-not-found>     <none>\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.{{ othercluster }}.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
          Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

          "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

          If you have any questions, or are experiencing problems using HOD, you have a couple of options:

          • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

          • Contact the UAntwerpen-HPC via hpc@uantwerpen.be

          • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

          "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

          Note

          To run a MATLAB program on the UAntwerpen-HPC you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

          Compiling MATLAB programs is only possible on the interactive debug cluster, not on the UAntwerpen-HPC login nodes, where the resource limits w.r.t. memory and the maximum number of processes are too strict.

          "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

          The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

          Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

          "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

          Compiling MATLAB code can only be done from the login nodes, because only the login nodes can access the MATLAB license server; workernodes on the clusters cannot.

          To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

          $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

          After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

          To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

          First, we copy the magicsquare.m example that comes with MATLAB to example.m:

          cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

          To compile a MATLAB program, use mcc -mv:

          mcc -mv example.m\nOpening log file:  /user/antwerpen/201/vsc20167/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/antwerpen/201/vsc20167/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/antwerpen/201/vsc20167/readme.txt\".\nGenerating file \"run\\_example.sh\".\n
          "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

          To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

          It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

          For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

          "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

          If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

          export _JAVA_OPTIONS=\"-Xmx64M\"\n

          The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

          Another possible issue is that the heap size is too small. This could result in errors like:

          Error: Out of memory\n

          A possible solution to this is by setting the maximum heap size to be bigger:

          export _JAVA_OPTIONS=\"-Xmx512M\"\n
          "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

          MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

          The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers explicitly, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

          You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

          parpool.m
          % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

          See also the parpool documentation.

          "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

          Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

          MATLAB_LOG_DIR=<OUTPUT_DIR>\n

          where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

          # create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\n$ export MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

          You should remove the directory at the end of your job script:

          rm -rf $MATLAB_LOG_DIR\n
          "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

          When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

          The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

          export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

          So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

          "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

          All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

          jobscript.sh
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
          "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

          Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

          Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

          "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

          First, log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

          $ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'ln2.leibniz.uantwerpen.vsc:6 (vsc20167)' desktop is ln2.leibniz.uantwerpen.vsc:6\n\nCreating default startup script /user/antwerpen/201/vsc20167.vnc/xstartup\nCreating default config /user/antwerpen/201/vsc20167.vnc/config\nStarting applications specified in /user/antwerpen/201/vsc20167.vnc/xstartup\nLog file is /user/antwerpen/201/vsc20167.vnc/ln2.leibniz.uantwerpen.vsc:6.log\n

          When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account you can!

          Note down the details in bold: the hostname (in the example: ln2.leibniz.uantwerpen.vsc) and the (partial) port number (in the example: 6).

          It's important to remember that VNC sessions are permanent. They survive network problems and (unintended) connection loss. This means you can logout and go home without a problem (like the terminal equivalent screen or tmux). This also means you don't have to start vncserver each time you want to connect.

          "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

          You can get a list of running VNC servers on a node with

          $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

          This only displays the running VNC servers on the login node you run the command on.

          To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

          $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/ln2.leibniz.uantwerpen.vsc:6.pid\n.vnc/ln1.leibniz.uantwerpen.vsc:8.pid\n

          This shows that there is a VNC server running on ln2.leibniz.uantwerpen.vsc on port 5906 and another one running on ln1.leibniz.uantwerpen.vsc on port 5908 (see also Determining the source/destination port).

          "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

          The VNC server runs on a login node (in the example above, on ln2.leibniz.uantwerpen.vsc).

          In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

          Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

          To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

          The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

          "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

          The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

          The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

          So, in our running example, both the source and destination ports are 5906.

          "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

          In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.uantwerpen.be (see Setting up the SSH tunnel(s)).

          If the login node you end up on is a different one than the one where your VNC server is running (i.e., ln1.leibniz.uantwerpen.vsc rather than ln2.leibniz.uantwerpen.vsc in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

          In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

          To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

          Now we have a chicken-and-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to ln2.leibniz.uantwerpen.vsc, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

          In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

          We will proceed with 12345 as the intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).

          "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcuantwerpenbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.uantwerpen.be", "text": "

          First, we will set up the SSH tunnel from our workstation to login.hpc.uantwerpen.be.

          Use the settings specified in the sections above:

          • source port: the port on which the VNC server is running (see Determining the source/destination port);

          • destination host: localhost;

          • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

          Execute the following command to set up the SSH tunnel.

          ssh -L 5906:localhost:12345  vsc20167@login.hpc.uantwerpen.be\n

          Replace the source port 5906, destination port 12345 and user ID vsc20167 with your own!

          With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

          Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

          "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

          Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

          You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

          netstat -an | grep -i listen | grep tcp | grep 12345\n

          If you see no matching lines, then the port you picked is still available, and you can continue.

          If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

          $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
          "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

          In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.uantwerpen.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (ln2.leibniz.uantwerpen.vsc in our running example, see Starting a VNC server).

          To do this, run the following command:

          $ ssh -L 12345:localhost:5906 ln2.leibniz.uantwerpen.vsc\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

          With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (ln2.leibniz.uantwerpen.vsc).

          Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

          Do not forget to change the intermediate port (12345), the destination port (5906), and the hostname of the login node (ln2.leibniz.uantwerpen.vsc) in the command shown above!

          As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

          "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

          You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. You can download the latest version by clicking the top-most folder that has a version number in it that doesn't also have beta in the version. Then download a file ending in TurboVNC64-2.1.2.dmg (the version number can be different) and execute it.

          Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

          When prompted for a password, use the password you used to setup the VNC server.

          When prompted for default or empty panel, choose default.

          If you have an empty panel, you can reset your settings with the following commands:

          xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
          "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

          The VNC server can be killed by running

          vncserver -kill :6\n

          where 6 is the (partial) port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

          "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

          You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).
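
          In terms of commands, that sequence could look like this (replace :6 with your own display number and pick your own geometry):

          $ vncserver -kill :6\n$ rm ~/.vnc/passwd\n$ vncserver -geometry 1920x1080 -localhost\n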

          "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

          All users of Antwerp University Association (AUHA) can request an account on the UAntwerpen-HPC, which is part of the Flemish Supercomputing Centre (VSC).

          See HPC policies for more information on who is entitled to an account.

          The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

          There are two methods for connecting to UAntwerpen-HPC:

          • Using a terminal to connect via SSH.
          • Using the web portal

          The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

          If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of using the HPC-UGent web portal by reading Using the HPC-UGent web portal.

          The UAntwerpen-HPC clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the UAntwerpen-HPC. Access to the UAntwerpen-HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

          "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
          • an SSH public/private key pair can be seen as a lock and a key

          • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

          • the SSH private key is like a physical key: you don't hand it out to other people.

          • anyone who has the key (and the optional password) can unlock the door and log in to the account.

          • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

          Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). To open a Terminal window in macOS, open the Finder and choose

          >> Applications > Utilities > Terminal

          Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on macOS is using the OpenSSH client included with macOS, which you can then also use to log on to the clusters.

          "}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

          Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

          \"Secure\" means that:

          1. the User is authenticated to the System; and

          2. the System is authenticated to the User; and

          3. all data is encrypted during transfer.

          OpenSSH is a FREE implementation of the SSH connectivity protocol. macOS comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

          $ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

          To access the clusters and transfer your files, you will use the following commands:

          1. ssh-keygen: to generate the SSH key pair (public + private key);

          2. ssh: to open a shell on a remote machine;

          3. sftp: a secure equivalent of ftp;

          4. scp: a secure equivalent of the remote copy command rcp.

          "}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

          A key pair might already be present in the default location inside your home directory. Therefore, we first check if a key is available with the \"ls\" (list) command:

          ls ~/.ssh\n

          If a key-pair is already available, you would normally get:

          authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

          Otherwise, the command will show:

          ls: .ssh: No such file or directory\n

          You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

          You will need to generate a new key pair, when:

          1. you don't have a key pair yet

          2. you forgot the passphrase protecting your private key

          3. your private key was compromised

          4. your key pair is too short or not the right type

          For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key, it is private and should stay private. You should not even copy it to one of your other machines, instead, you should create a new public/private key pair for each machine.

          ssh-keygen -t rsa -b 4096\n

          This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

          Without your key pair, you won't be able to apply for a personal VSC account.

          "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

          Most recent Unix derivatives include an SSH agent by default to keep and manage the user's SSH keys. If you use one of these derivatives, you must add the new keys to the SSH agent's keyring to be able to connect to the HPC cluster. If not, the SSH client will display an error message (see Connecting) similar to this:

          Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          This can be fixed using the ssh-add command. You can add your private key's identity to your keyring with:

          ssh-add\n

          Tip

          Without extra options, ssh-add adds the default keys located in the $HOME/.ssh directory, but you can also specify the path to a private key as an argument, for example: ssh-add /path/to/my/id_rsa.

          Check that your key is available from the keyring with:

          ssh-add -l\n

          After these changes, the SSH agent will keep your SSH key, and you can connect to the clusters as usual.

          Tip

          You should execute the ssh-add command again whenever you generate a new SSH key.

          "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

          Visit https://account.vscentrum.be/

          You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

          Select \"Universiteit Antwerpen\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

          Click Confirm

          You will now be taken to the authentication page of your institute.

          The site is only accessible from within the University of Antwerp domain, so the page won't load from, e.g., home. However, you can also get external access to the University of Antwerp domain using VPN. We refer to the Pintra pages of the ICT Department for more information.

          "}, {"location": "account/#users-of-the-antwerp-university-association-auha", "title": "Users of the Antwerp University Association (AUHA)", "text": "

          All users (researchers, academic staff, etc.) from the higher education institutions associated with University of Antwerp can get a VSC account via the University of Antwerp. There is not yet an automated form to request your personal VSC account.

          Please e-mail the UAntwerpen-HPC staff to get an account (see Contacts information). You will have to provide a public SSH key generated as described above. Please attach your public key (i.e., the file named id_rsa.pub), which you will normally find in the .ssh subdirectory within your home directory (i.e., /Users/<username>/.ssh/id_rsa.pub).
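
          To view or copy the contents of your public key before sending it, you can print it in the terminal (adjust the path if you stored your key elsewhere):

          cat ~/.ssh/id_rsa.pub\n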

          After you log in using your University of Antwerp login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you generated earlier. Make sure that your public key is actually accepted for upload; if it is in the wrong format, of the wrong type, or too short, it will be refused.

          This file is normally stored in the directory \"~/.ssh/\".

          Tip

          As \".ssh\" is an invisible directory, the Finder will not show it by default. The easiest way to access the folder, is by pressing Cmd+Shift+G (or Cmd+Shift+.), which will allow you to enter the name of a directory, which you would like to open in Finder. Here, type \"~/.ssh\" and press enter.

          After you have uploaded your public key, you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address, the VSC staff will review your request and, if applicable, approve your account.

          "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

          Within one day, you should receive a Welcome e-mail with your VSC account details.

          Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc20167\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

          Now, you can start using the UAntwerpen-HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

          "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

          If you are connecting to the login nodes from different computers, it is advised to use a separate SSH key pair for each of them. To do so, follow these steps.

          1. Create a new public/private SSH key pair on the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH (a minimal example is shown after this list).

          2. Go to https://account.vscentrum.be/django/account/edit

          3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

          4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

          5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.
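
          As a minimal sketch for step 1 (the file name id_rsa_laptop2 is just an illustrative choice), generate the new key pair under a distinct file name on the new computer:

          ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_laptop2\n

          When connecting with this key, pass it explicitly with ssh -i ~/.ssh/id_rsa_laptop2 vsc20167@login.hpc.uantwerpen.be, or configure it in your SSH configuration file.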

          "}, {"location": "account/#computation-workflow-on-the-uantwerpen-hpc", "title": "Computation Workflow on the UAntwerpen-HPC", "text": "

          A typical Computation workflow will be:

          1. Connect to the UAntwerpen-HPC

          2. Transfer your files to the UAntwerpen-HPC

          3. Compile your code and test it

          4. Create a job script

          5. Submit your job

          6. Wait while

            1. your job gets into the queue

            2. your job gets executed

            3. your job finishes

          7. Move your results

          We'll take you through the different tasks one by one in the following chapters.

          "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

          AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

          See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

          "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

          This chapter focuses specifically on the use of AlphaFold on the UAntwerpen-HPC. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

          • AlphaFold website: https://alphafold.com/
          • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
          • AlphaFold FAQ: https://alphafold.com/faq
          • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
          • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
          • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
            • recording available on YouTube
            • slides available here (PDF)
            • see also https://www.vscentrum.be/alphafold
          "}, {"location": "alphafold/#using-alphafold-on-uantwerpen-hpc", "title": "Using AlphaFold on UAntwerpen-HPC", "text": "

          Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

          $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

          To use AlphaFold, you should load a particular module, for example:

          module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

          We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

          Warning

          When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

          Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

          $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

          The directory names indicate when the data was downloaded, which leaves room for providing updated datasets later.

          As of writing this documentation, the latest version is 20230310.

          Info

          The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

          The AlphaFold installations we provide have been modified a bit to facilitate the usage on UAntwerpen-HPC.

          "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

          The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

          export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

          Use newest version

          Do not forget to replace 20230310 with a more up to date version if available.

          "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

          AlphaFold provides a script called run_alphafold.py.

          A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

          The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

          Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

          For more information about the script and options see this section in the official README.

          READ README

          It is strongly advised to read the official README provided by DeepMind before continuing.

          "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

          The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

          Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

          Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
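
          For example, to request 8 cores for hhblits and 16 cores for jackhmmer, you could add the following lines to your job script (the values shown are purely illustrative):

          export ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=16\n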

          Info

          Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

          "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

          The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

          Using --db_preset=full_dbs, the following runtime data was collected:

          • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
          • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
          • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
          • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

          This highlights a couple of important attention points:

          • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
          • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
          • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

          With --db_preset=casp14, it is clearly more demanding:

          • On doduo, with 24 cores (1 node): still running after 48h...
          • On joltik, 1 V100 GPU + 8 cores: 4h 48min

          This highlights the difference between CPU and GPU performance even more.

          "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

          The following example comes from the official Examples section in the AlphaFold README. The run command is slightly different (see above: Running AlphaFold).

          Do not forget to set up the environment (see above: Setting up the environment).

          "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

          Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

          >sequence_name\n<SEQUENCE>\n

          Then run the following command in the same directory:

          alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

          See AlphaFold output, for information about the outputs.

          Info

          For more scenarios see the example section in the official README.

          "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

          The following two example job scripts can be used as a starting point for running AlphaFold.

          The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

          To run the job scripts you need to create a file named T1050.fasta with the following content:

          >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
          source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

          "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

          Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

          Swap to the joltik GPU before submitting it:

          module swap cluster/joltik\n
          AlphaFold-gpu-joltik.sh
          #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
          "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

          Jobscript that runs AlphaFold on CPU using 24 cores on one node.

          AlphaFold-cpu-doduo.sh
          #!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

          In case of problems or questions, don't hesitate to contact us at hpc@uantwerpen.be.

          "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

          Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

          This documentation only covers aspects of using Apptainer on the UAntwerpen-HPC infrastructure.

          "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to prevent the use of Apptainer from impacting other users on the system.

          The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know via hpc@uantwerpen.be.

          "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.
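
          As a quick check, assuming you have a container image available in $VSC_SCRATCH (for instance the CentOS7_EasyBuild.img image used later in this chapter), you can list the contents of your data directory from inside the container:

          apptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ls $VSC_DATA\n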

          "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

          Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the UAntwerpen-HPC infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

          Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of building an Apptainer/Singularity container image:

          # avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# mv container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
          "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

          Create a job script like:

          #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

          Create an example my_script.sh (the job script above expects it in your home directory):

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n
          "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

          Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
          #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before apptainer execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

          cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

          For example to compile an MPI example:

          module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

          Example MPI job script:

          #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
          "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
          1. Before starting, you should always check:

            • Are there any errors in the script?

            • Are the required modules loaded?

            • Is the correct executable used?

          2. Check your compute requirements upfront, and request the correct resources in your batch job script.

            • Number of requested cores

            • Amount of requested memory

            • Requested network type

          3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

          4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

          5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster, as it does not have to go over the network. A sketch of this staging pattern is shown after this list.

          6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted from with cd $PBS_O_WORKDIR is the first thing that needs to be done. You will have your default environment, so don't forget to load the software with module load.

          7. In case your job is not running, use \"checkjob\". It will show why your job is not yet running. Sometimes commands might time out when the scheduler is overloaded.

          8. Submit your job and wait (be patient) ...

          9. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

          10. The runtime is limited by the maximum walltime of the queues.

          11. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

          12. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

          13. And above all, do not hesitate to contact the UAntwerpen-HPC staff at hpc@uantwerpen.be. We're here to help you.
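
          As a minimal sketch of the local scratch pattern from point 5, a job script can stage its input to $VSC_SCRATCH_NODE, run there, and copy the results back. The program name my_program and the file names input.dat/output.dat are purely illustrative placeholders:

          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:00:00\n\n# start from the directory the job was submitted from\ncd $PBS_O_WORKDIR\nmodule load foss\n\n# stage the executable and input data to the node-local scratch (mapped to /tmp)\nWORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\ncp my_program input.dat $WORKDIR/\n\n# run the computation on the local disk\ncd $WORKDIR\n./my_program input.dat > output.dat\n\n# copy the results back and clean up\ncp output.dat $PBS_O_WORKDIR/\nrm -rf $WORKDIR\n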

          "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

          All nodes in the UAntwerpen-HPC cluster are running the \"CentOS Linux release 7.8.2003 (Core)\" Operating system, which is a specific version of RedHat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the UAntwerpen-HPC first must be compiled for CentOS Linux release 7.8.2003 (Core). It also means that you first have to install all the required external software packages on the UAntwerpen-HPC.

          Most commonly used compilers are already pre-installed on the UAntwerpen-HPC and can be used straight away. Also, many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

          "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-uantwerpen-hpc", "title": "Check the pre-installed software on the UAntwerpen-HPC", "text": "

          In order to check all the available modules and their version numbers, which are pre-installed on the UAntwerpen-HPC, enter:

          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          Or, when you want to check whether some specific software, compiler or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not know the exact capitalisation of the module name, we searched case-insensitively with the \"-i\" option.

          When your required application is not available on the UAntwerpen-HPC, please contact any UAntwerpen-HPC member. Be aware of potential \"License Costs\". \"Open Source\" software is often preferred.

          "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

          To port a software program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., RedHat Enterprise Linux on our UAntwerpen-HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

          In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

          In some cases software, usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

          Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

          Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

          Porting your code to the CentOS Linux release 7.8.2003 (Core) platform is the responsibility of the end-user.

          "}, {"location": "compiling_your_software/#compiling-and-building-on-the-uantwerpen-hpc", "title": "Compiling and building on the UAntwerpen-HPC", "text": "

          Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

          All the UAntwerpen-HPC nodes run the same version of the Operating System, i.e. CentOS Linux release 7.8.2003 (Core). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

          A typical process looks like:

          1. Copy your software to the login-node of the UAntwerpen-HPC

          2. Start an interactive session on a compute node;

          3. Compile it;

          4. Test it locally;

          5. Generate your job scripts;

          6. Test it on the UAntwerpen-HPC

          7. Run it (in parallel);

          We assume you've copied your software to the UAntwerpen-HPC. The next step is to request your private compute node.

          $ qsub -I\nqsub: waiting for job 433253.leibniz to start\n
          "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

          Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

          We now list the directory and explore the contents of the \"hello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

          hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include <stdio.h>\n#include <unistd.h>\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\nreturn 0;\n}\n

          The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

          We first need to compile this C-file into an executable with the gcc-compiler.

          First, check the command line options for \"gcc\" (the GNU C compiler), then compile with the -O2 option and list the contents of the directory again. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time.

          $ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc20167 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc20167  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc20167  130 Sep 16 11:39 hello.pbs*\n

          A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

          Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

          $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

          It seems to work, now run it on the UAntwerpen-HPC

          qsub hello.pbs\n

          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          List the directory and explore the contents of the \"mpihello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

          mpihello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\nreturn 0;\n}\n

          The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

          Next, check the command line options for \"mpicc\" (the MPI compiler wrapper for the GNU C compiler), then compile and list the contents of the directory again:

          mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

          A new file \"hello\" has been created. Note that this program has \"execute\" rights.

          Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the UAntwerpen-HPC.

          qsub mpihello.pbs\n
          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

          We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          We will compile this C/MPI -file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

          module purge\nmodule load intel\n

          Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

          mpiicc -o mpihello mpihello.c\nls -l\n

          Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the UAntwerpen-HPC.

          qsub mpihello.pbs\n

          Note: The Antwerp University Association (AUHA) only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

          Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Below is an overview of the C, C++ and Fortran compilers.

          | Language | Sequential (GNU) | Sequential (Intel) | MPI (GNU) | MPI (Intel) |
          | --- | --- | --- | --- | --- |
          | C | gcc | icc | mpicc | mpiicc |
          | C++ | g++ | icpc | mpicxx | mpiicpc |
          | Fortran | gfortran | ifort | mpif90 | mpiifort |"}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

          Before you can really start using the UAntwerpen-HPC clusters, there are several things you need to do or know:

          1. You need to log on to one of the cluster's login nodes using an SSH client, or by using the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

          2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

          3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

          4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

          "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

          Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

          VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

          All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

          • Use a VPN connection to connect to the University of Antwerp network (recommended).

          • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your University of Antwerp account.

            • While this web connection is active new SSH sessions can be started.

            • Active SSH sessions will remain active even when this web page is closed.

          • Contact your HPC support team (via hpc@uantwerpen.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

          Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

          ssh_exchange_identification: read: Connection reset by peer\n
          "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

          The remaining content in this chapter is primarily aimed at people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

          If you have any issues connecting to the UAntwerpen-HPC after you've followed these steps, see Issues connecting to login node to troubleshoot. When connecting from outside Belgium, you need a VPN client to connect to the network first.

          "}, {"location": "connecting/#connect", "title": "Connect", "text": "

          Open up a terminal and enter the following command to connect to the UAntwerpen-HPC. You can open a terminal by navigating to Applications and then Utilities in the Finder and opening Terminal.app, or by entering Terminal in Spotlight Search.

          ssh vsc20167@login.hpc.uantwerpen.be\n

          Here, user vsc20167 wants to make a connection to the \"Leibniz\" cluster at University of Antwerp via the login node \"login.hpc.uantwerpen.be\", so replace vsc20167 with your own VSC id in the above command.

          The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

          A possible error message you can get if you previously saved your private key somewhere else than the default location ($HOME/.ssh/id_rsa):

          Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          In this case, use the -i option for the ssh command to specify the location of your private key. For example:

          ssh -i /home/example/my_keys vsc20167@login.hpc.uantwerpen.be\n

          Congratulations, you're on the UAntwerpen-HPC infrastructure now! To find out where you have landed you can print the current working directory:

          $ pwd\n/user/antwerpen/201/vsc20167\n

          Your new private home directory is \"/user/antwerpen/201/vsc20167\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the UAntwerpen-HPC.

          $ cd /apps/antwerpen/tutorials\n$ ls\nIntro-HPC/\n

          This directory currently contains all training material for the Introduction to the UAntwerpen-HPC. More relevant training material to work with the UAntwerpen-HPC can always be added later in this directory.

          You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands:

          As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

          $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

          This directory contains:

          1. This HPC Tutorial (in either a Mac, Linux or Windows version).

          2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

          cd examples\n

          Tip

          Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

          Tip

          For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

          The first action is to copy the contents of the UAntwerpen-HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

          cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n
          Upon connection, you will get a welcome message containing your last login timestamp and some pointers to information about the system. On Leibniz, the system will also show your disk quota.

          Last login: Mon Feb  2 17:58:13 2015 from mylaptop.uantwerpen.be\n\n---------------------------------------------------------------\n\nWelcome to LEIBNIZ !\n\nUseful links:\n  https://vscdocumentation.readthedocs.io\n  https://vscdocumentation.readthedocs.io/en/latest/antwerp/tier2_hardware.html\n  https://www.uantwerpen.be/hpc\n\nQuestions or problems? Do not hesitate and contact us:\n  hpc@uantwerpen.be\n\nHappy computing!\n\n---------------------------------------------------------------\n\nYour quota is:\n\n                   Block Limits\n   Filesystem       used      quota      limit    grace\n   user             740M         3G       3.3G     none\n   data           3.153G        25G      27.5G     none\n   scratch        12.38M        25G      27.5G     none\n   small          20.09M        25G      27.5G     none\n\n                   File Limits\n   Filesystem      files      quota      limit    grace\n   user            14471      20000      25000     none\n   data             5183     100000     150000     none\n   scratch            59     100000     150000     none\n   small            1389     100000     110000     none\n\n---------------------------------------------------------------\n

          You can exit the connection at anytime by entering:

          $ exit\nlogout\nConnection to login.hpc.uantwerpen.be closed.\n

          tip: Setting your Language right

          You may encounter a warning message similar to the following one during connecting:

          perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
          or any other error message complaining about the locale.

          This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

          LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

          A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier. Open the .bashrc on your local machine with your favourite editor and add the following lines:

          $ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

          tip: vi

          To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can exit vi and save your changes by entering \"ESC :wq\". To exit vi without saving your changes, enter \"ESC :q!\".

          or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

          echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

          You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

          "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

          Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is by using scp or sftp via the secure OpenSSH protocol. macOS ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          "}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

          Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the UAntwerpen-HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

          It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.

          Open an additional terminal window and check that you're working on your local machine.

          $ hostname\n<local-machine-name>\n

          If you're still using the terminal that is connected to the UAntwerpen-HPC, close the connection by typing \"exit\" in the terminal window.

          For example, we will copy the (local) file \"localfile.txt\" to your home directory on the UAntwerpen-HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc20167\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc20167@login.hpc.uantwerpen.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

          $ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc20167@login.hpc.uantwerpen.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

          Connect to the UAntwerpen-HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

          $ pwd\n/user/antwerpen/201/vsc20167\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

          The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-macOS-Antwerpen.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

          First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

          $ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc20167 Sep 11 09:53 intro-HPC-macOS-Antwerpen.pdf\n

          Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

          $ scp vsc20167@login.hpc.uantwerpen.be:./docs/intro-HPC-macOS-Antwerpen.pdf .\nintro-HPC-macOS-Antwerpen.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

          The file has been copied from the HPC to your local computer.

          It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

          scp -r dataset vsc20167@login.hpc.uantwerpen.be:scratch\n

          If you don't use the -r option to copy a directory, you will run into the following error:

          $ scp dataset vsc20167@login.hpc.uantwerpen.be:scratch\ndataset: not a regular file\n
          "}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

          The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

          The sftp command is the equivalent of the ftp command, with the difference that it uses the secure SSH protocol to connect to the clusters.

          One easy way of starting an sftp session is:

          sftp vsc20167@login.hpc.uantwerpen.be\n

          Typical and popular commands inside an sftp session are:

          | Command | Description |
          | --- | --- |
          | cd ~/examples/fibo | Move to the examples/fibo subdirectory on the remote machine (i.e., the UAntwerpen-HPC). |
          | ls | Get a list of the files in the current directory on the UAntwerpen-HPC. |
          | get fibo.py | Copy the file \"fibo.py\" from the UAntwerpen-HPC. |
          | get tutorial/HPC.pdf | Copy the file \"HPC.pdf\" from the UAntwerpen-HPC, which is in the \"tutorial\" subdirectory. |
          | lcd test | Move to the \"test\" subdirectory on your local machine. |
          | lcd .. | Move up one level in the local directory. |
          | lls | Get local directory listing. |
          | put test.py | Copy the local file test.py to the UAntwerpen-HPC. |
          | put test1.py test2.py | Copy the local file test1.py to the UAntwerpen-HPC and rename it to test2.py. |
          | bye | Quit the sftp session. |
          | mget *.cc | Copy all the remote files with extension \".cc\" to the local directory. |
          | mput *.h | Copy all the local files with extension \".h\" to the UAntwerpen-HPC. |"}, {"location": "connecting/#using-a-gui-cyberduck", "title": "Using a GUI (Cyberduck)", "text": "

          Cyberduck is a graphical alternative to the scp command. It can be installed from https://cyberduck.io.

          This is the one-time setup you will need to do before connecting:

          1. After starting Cyberduck, the Bookmark tab will show up. To add a new bookmark, click on the \"+\" sign on the bottom left of the window. A new window will open.

          2. In the drop-down menu on top, select \"SFTP (SSH File Transfer Protocol)\".

          3. In the \"Server\" field, type in login.hpc.uantwerpen.be. In the \"Username\" field, type in your VSC account id (this looks like vsc20167).

          4. Select the location of your SSH private key in the \"SSH Private Key\" field.

          5. Finally, type in a name for the bookmark in the \"Nickname\" field and close the window by pressing on the red circle in the top left corner of the window.

          To open the connection, click on the \"Bookmarks\" icon (which resembles an open book) and double-click on the bookmark you just created.

          "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

          See the section on rsync in chapter 5 of the Linux intro manual.

          "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

          It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

          For instance, if you want to switch to the login node named ln2.leibniz.uantwerpen.vsc, you can use the following command while you are connected to the ln1.leibniz.uantwerpen.vsc login node on the HPC:

          ssh ln2.leibniz.uantwerpen.vsc\n
          This is also possible the other way around.

          If you want to find out which login host you are connected to, you can use the hostname command.

          $ hostname\nln2.leibniz.uantwerpen.vsc\n$ ssh ln1.leibniz.uantwerpen.vsc\n\n$ hostname\nln1.leibniz.uantwerpen.vsc\n

          Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or on other online sources):

          • screen
          • tmux
          "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

          It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high-availability setup, users should always add their cron scripts on the same login node to avoid duplication of cron jobs.

          In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC account (see section Connecting).

          Check whether any cron script is already set on the current login node with:

          crontab -l\n

          At this point you can add or edit (with the vi editor) any cron script by running the command:

          crontab -e\n
          "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
           15 5 * * * ~/runscript.sh >& ~/job.out\n

          where runscript.sh has these lines in this example:

          runscript.sh
          #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

          In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.
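
          As a reminder of the crontab time fields, the schedule used in the example above breaks down as follows (shown as a comment sketch; adjust the values to your own needs):

          # minute hour day-of-month month day-of-week  command\n# 15     5    *            *     *            ~/runscript.sh >& ~/job.out\n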

          Please note that you should log in to the same login node to edit your previously created crontab tasks. If that is not the case, you can always jump from one login node to another with:

          ssh gligar07    # or gligar08\n
          "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

          You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

          EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

          "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

          For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

          • applying custom patches to the software that only you or your group are using

          • evaluating new software versions prior to requesting a central software installation

          • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

          "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

          Before you use EasyBuild, you need to configure it:

          "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

          This is where EasyBuild can find software sources:

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
          • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

          • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

          "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

          This is the directory where EasyBuild will build software. To have good performance, this needs to be on a fast filesystem.

          export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

          On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
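
          For example, to use that in-memory location as the build directory (a minimal sketch; make sure the build fits in the node's memory):

          export EASYBUILD_BUILDPATH=/dev/shm/$USER\n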

          "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

          This is where EasyBuild will install the software (and accompanying modules) to.

          For example, to let it use $VSC_DATA/easybuild, use:

          export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

          Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

          Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

          To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.
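
          For example, a VO-shared variant of the command above could look like this (a sketch, assuming your VO storage is set up):

          export EASYBUILD_INSTALLPATH=$VSC_DATA_VO/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n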

          "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

          Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception; for most other modules you should specify a version, see Using explicit version numbers) because newer versions might include important bug fixes.

          module load EasyBuild\n
          "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

          EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

          $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

          For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

          eb example-1.2.1-foss-2024a.eb --robot\n
          "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

          To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

          To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

          eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

          To try to install example v1.2.5 with a different compiler toolchain:

          eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
          "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

          To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

          "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

          To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

          module use $EASYBUILD_INSTALLPATH/modules/all\n

          It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux.
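
          A minimal sketch of such a .bashrc snippet, combining the example settings from this chapter (adjust the paths to your own configuration):

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n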

          "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

          As UAntwerpen-HPC system administrators, we often observe that the UAntwerpen-HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

          Users often tend to run their jobs without specifying specific PBS Job parameters. As such, their job will automatically use the default parameters, which are rarely the optimal ones. This can increase the run time of your application and also block UAntwerpen-HPC resources for other users.

          Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the UAntwerpen-HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

          There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The UAntwerpen-HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

          Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

          Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

          This chapter shows you how to measure:

          1. Walltime
          2. Memory usage
          3. CPU usage
          4. Disk (storage) needs
          5. Network bottlenecks

          First, we allocate a compute node and move to our relevant directory:

          qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

          One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

          The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

          Test the time command:

          $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

          It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

          It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

          The walltime can be specified in a job script as:

          #PBS -l walltime=3:00:00:00\n

          or on the command line

          qsub -l walltime=3:00:00:00\n

          It is recommended to always specify the walltime for a job.
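
          For example, if time reports that your application needs about 10 hours on the slowest compute nodes, adding a 20% margin gives a request of 12 hours (a sketch; adapt the numbers to your own measurements):

          #PBS -l walltime=12:00:00\n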

          "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

          In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

          The \"eat_mem\" application in the HPC examples directory just consumes and then releases memory, for the purpose of this test. It has one parameter, the amount of gigabytes of memory which needs to be allocated.

          First compile the program on your machine and then test it for 1 GB:

          $ gcc -o eat_mem eat_mem.c\n$ ./eat_mem 1\nConsuming 1 gigabyte of memory.\n
          "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

          The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the \"-m\" option to see the results expressed in megabytes and the \"-t\" option to get totals.

          $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

          It is important to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

          It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

          "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

          The \"Monitor\" tool monitors applications in terms of memory and CPU usage, as well as the size of temporary files. Note that currently only single node jobs are supported, MPI support may be added in a future release.

          To start using monitor, first load the appropriate module. Then we study the \"eat_mem.c\" program and compile it:

          $ module load monitor\n$ cat eat_mem.c\n$ gcc -o eat_mem eat_mem.c\n

          Starting a program to monitor is very straightforward; you just add the \"monitor\" command before the regular command line.

          $ monitor ./eat_mem 3\ntime (s) size (kb) %mem %cpu\nConsuming 3 gigabyte of memory.\n5  252900 1.4 0.6\n10  498592 2.9 0.3\n15  743256 4.4 0.3\n20  988948 5.9 0.3\n25  1233612 7.4 0.3\n30  1479304 8.9 0.2\n35  1723968 10.4 0.2\n40  1969660 11.9 0.2\n45  2214324 13.4 0.2\n50  2460016 14.9 0.2\n55  2704680 16.4 0.2\n60  2950372 17.9 0.2\n65  3167280 19.2 0.2\n70  3167280 19.2 0.2\n75  9264  0 0.5\n80  9264  0 0.4\n

          Whereby:

          1. The first column shows you the elapsed time in seconds. By default, all values will be displayed every 5\u00a0seconds.
          2. The second column shows you the used memory in kb. We note that the memory slowly increases up to just over 3\u00a0GB (3GB is 3,145,728\u00a0KB), and is released again.
          3. The third column shows the memory utilisation, expressed in percentages of the full available memory. At full memory consumption, 19.2% of the memory was being used by our application. With the free command, we have previously seen that we had a node of 16\u00a0GB in this example. 3\u00a0GB is indeed more or less 19.2% of the full available memory.
          4. The fourth column shows you the CPU utilisation, expressed in percentages of a full CPU load. As there are no computations done in our exercise, the value remains very low (i.e.\u00a00.2%).

          Monitor will write the CPU usage and memory consumption of the simulation to standard error.

          By default, monitor samples the program's metrics every 5 seconds. Since monitor's output may interfere with that of the program to monitor, it is often convenient to use a\u00a0log file. The latter can be specified as follows:

          $ monitor -l test1.log eat_mem 2\nConsuming 2 gigabyte of memory.\n$ cat test1.log\n

          For long-running programs, it may be convenient to limit the output to, e.g., the last minute of the programs' execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

          $ monitor -l test2.log -n 12 eat_mem 4\nConsuming 4 gigabyte of memory.\n

          Note that this option is only available when monitor writes its metrics to a\u00a0log file, not when standard error is used.

          The interval at\u00a0which monitor will show the metrics can be modified by specifying delta, the sample rate:

          $ monitor -d 1 ./eat_mem 3\nConsuming 3 gigabyte of memory.\n

          Monitor will now print the program's metrics every second. Note that the\u00a0minimum delta value is 1\u00a0second.

          Alternative options to monitor the memory consumption are the \"top\" or the \"htop\" command.

          top

          provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

          htop

          is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.

          "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

          Once you gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

          Sequential or single-node applications:

          The maximum amount of physical memory used by the job can be specified in a job script as:

          #PBS -l mem=4gb\n

          or on the command line

          qsub -l mem=4gb\n

          This setting is ignored if the number of nodes is not\u00a01.

          Parallel or multi-node applications:

          When you are running a parallel application over multiple cores, you can also specify the memory requirements per processor (pmem). This directive specifies the maximum amount of physical memory used by any process in the job.

          For example, if the job would run four processes and each would use up to 2 GB (gigabytes) of memory, then the memory directive would read:

          #PBS -l pmem=2gb\n

          or on the command line

          $ qsub -l pmem=2gb\n

          (and of course this would need to be combined with a CPU cores directive such as nodes=1:ppn=4). In this example, you request 8\u00a0GB of memory in total on the node.
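
          A minimal sketch of how these directives can be combined in a job script for this example (4 processes, each using up to 2 GB):

          #PBS -l nodes=1:ppn=4\n#PBS -l pmem=2gb\n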

          "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

          Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are properly specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

          "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

          The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

          The /proc/cpuinfo file stores information about your CPU architecture, such as the number of CPUs, threads and cores, the CPU caches, the CPU family and model, and much more. So, if you want to detect how many cores are available on a specific machine:

          $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

          Or if you want to see it in a more readable format, execute:

          $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n

          Note

          Unless you want information about the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.
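
          If you only need the number of cores, you can simply count the processor entries (a sketch; on the example machine shown above this returns 8):

          $ grep -c processor /proc/cpuinfo\n8\n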

          In order to specify the number of nodes and the number of processors per node in your job script, use:

          #PBS -l nodes=N:ppn=M\n

          or with equivalent parameters on the command line

          qsub -l nodes=N:ppn=M\n

          This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.
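
          For example, to request two full nodes with 8 cores each (16 cores in total) on such a system, you could use (a sketch; adapt the numbers to the cluster you are using):

          #PBS -l nodes=2:ppn=8\n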

          Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

          The previously used \"monitor\" tool also shows the overall CPU-load. The \"eat_cpu\" program performs a multiplication of two randomly filled (1500 \\times 1500) matrices and is just written to consume a lot of \"cpu\".

          We first load the monitor modules, study the \"eat_cpu.c\" program and compile it:

          $ module load monitor\n$ cat eat_cpu.c\n$ gcc -o eat_cpu eat_cpu.c\n

          And then start to monitor the eat_cpu program:

          $ monitor -d 1 ./eat_cpu\ntime  (s) size (kb) %mem %cpu\n1  52852  0.3 100\n2  52852  0.3 100\n3  52852  0.3 100\n4  52852  0.3 100\n5  52852  0.3  99\n6  52852  0.3 100\n7  52852  0.3 100\n8  52852  0.3 100\n

          We notice that the program keeps the CPU nicely busy at 100%.

          Some processes spawn one or more sub-processes. In that case, the metrics shown by monitor are aggregated over the process and all of its sub-processes (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100%.

          Some (well, since this is a UAntwerpen-HPC Cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100%. When programs of this type are running on a computer with n cores, the CPU usage can go up to (\\text{n} \\times 100\\%).

          This could also be monitored with the htop command:

          htop\n
          Example output:
            1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

          The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the CPU utilisation per processor with monitor and htop, for example as sketched below.
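
          A minimal sketch of this exercise, starting the instances in the background from a single terminal instead of using 4 separate terminals:

          $ ./eat_cpu &\n$ ./eat_cpu &\n$ ./eat_cpu &\n$ ./eat_cpu &\n$ htop\n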

          If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by top found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node are left unutilised without reason.

          But how can you maximise?

          1. Configure your software. (e.g., to exactly use the available amount of processors in a node)
          2. Develop your parallel program in a smart way.
          3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
          4. Correct your request for CPUs in your job script.
          "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

          On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

          The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

          The load averages differ from CPU percentage in two significant ways:

          1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
          2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
          "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

          What is the \"optimal load\" rule of thumb?

          The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load shall be between 0.7 and 1.0 per processor.

          In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

          Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time, might be more than one per processor.

          The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

          1. When you are running computational intensive applications, one application per processor will generate the optimal load.
          2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

          The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking which configuration gives the highest throughput. There is, however, currently no way on the UAntwerpen-HPC to dynamically specify the maximum number of applications that should run per core. The UAntwerpen-HPC scheduler will not launch more than one process per core.

          How the cores are spread out over CPUs does not matter as far as the load is concerned: two quad-cores perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. It's all eight cores for these purposes.

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

          The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

          The uptime command will show us the average load

          $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

          Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

          $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
          You can also see it in the output of the htop command.
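
          The same three load averages can also be read directly from /proc/loadavg; the illustrative output below matches the example above (the last two fields are the number of running/total processes and the most recent process ID):

          $ cat /proc/loadavg\n2.60 0.93 0.58 4/323 22350\n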

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node are left unutilised without reason.

          But how can you maximise?

          1. Profile your software to improve its performance.
          2. Configure your software (e.g., to exactly use the available amount of processors in a node).
          3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
          4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
          5. Correct your request for CPUs in your job script.

          And then check again.

          "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

          Some programs generate intermediate or output files, the size of which may also be a useful metric.

          Remember that your available disk space on the UAntwerpen-HPC online storage is limited, and that you have environment variables which point to these directories available (i.e., $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

          We first load the monitor modules, study the \"eat_disk.c\" program and compile it:

          $ module load monitor\n$ cat eat_disk.c\n$ gcc -o eat_disk eat_disk.c\n

          The monitor tool provides an option (-f) to display the size of one or more files:

          $ monitor -f $VSC_SCRATCH/test.txt ./eat_disk\ntime (s) size (kb) %mem %cpu\n5  1276  0 38.6 168820736\n10  1276  0 24.8 238026752\n15  1276  0 22.8 318767104\n20  1276  0 25 456130560\n25  1276  0 26.9 614465536\n30  1276  0 27.7 760217600\n...\n

          Here, the size of the file \"test.txt\" in directory $VSC_SCRATCH will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by \",\".
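
          For example, to monitor two files at once (a sketch; test2.txt is just a hypothetical second output file):

          $ monitor -f $VSC_SCRATCH/test.txt,$VSC_SCRATCH/test2.txt ./eat_disk\n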

          It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to the section How much disk space do I get? to check your quota and for tools to find out which files consumed your quota.
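
          A quick way to see which directories consume the most space is the du command, for example (a sketch; point it at the directory you want to inspect):

          $ du -h --max-depth=1 $VSC_DATA\n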

          Several actions can be taken, to avoid storage problems:

          1. Be aware of all the files that are generated by your program. Also check out the hidden files.
          2. Check your quota consumption regularly.
          3. Clean up your files regularly.
          4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, you can move your files once to the $VSC_DATA directories.
          5. Make sure your programs clean up their temporary files after execution.
          6. Move your output results to your own computer regularly.
          7. Anyone can request more disk space from the UAntwerpen-HPC staff, but you will have to duly justify your request.
          "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

          Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that a lot of time is lost on inter-process communication.

          Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high-bandwidth, low-latency network that enables large parallel jobs to run as efficiently as possible.

          The parameter to add in your job script would be:

          #PBS -l ib\n

          If, for some reason, a user is fine with the gigabit Ethernet network, they can specify:

          #PBS -l gbe\n
          "}, {"location": "fine_tuning_job_specifications/#some-more-tips-on-the-monitor-tool", "title": "Some more tips on the Monitor tool", "text": ""}, {"location": "fine_tuning_job_specifications/#command-lines-arguments", "title": "Command Lines arguments", "text": "

          Many programs, e.g., MATLAB, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

          $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m\n

          The use of -- will ensure that monitor does not get confused by MATLAB's -nojvm and -nodisplay options.

          "}, {"location": "fine_tuning_job_specifications/#exit-code", "title": "Exit Code", "text": "

          Monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

          When monitor terminates in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.
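
          For example, to make monitor report its own errors with exit code 170 instead (a sketch; pick any value that does not clash with the exit codes of your program):

          export MONITOR_EXIT_ERROR=170\n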

          "}, {"location": "fine_tuning_job_specifications/#monitoring-a-running-process", "title": "Monitoring a running process", "text": "

          It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

          $ monitor -p 18749\n
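
          To look up the process ID, you can, for example, search your own processes by name with pgrep (a sketch, assuming the program is called eat_cpu):

          $ pgrep -u $USER eat_cpu\n18749\n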

          Note that this feature can be (ab)used to monitor specific sub-processes.

          "}, {"location": "getting_started/", "title": "Getting Started", "text": "

          Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the UAntwerpen-HPC and submitting your very first job. We'll also walk you through the process step by step using a practical example.

          In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

          Before proceeding, read the introduction to HPC to gain an understanding of the UAntwerpen-HPC and related terminology.

          "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

          To get access to the UAntwerpen-HPC, visit Getting an HPC Account.

          If you have not used Linux before, please learn some basics first before continuing. (see Appendix C - Useful Linux Commands)

          "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
          1. Connect to the login nodes
          2. Transfer your files to the UAntwerpen-HPC
          3. Optional: compile your code and test it
          4. Create a job script and submit your job
          5. Wait for job to be executed
          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

          "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

          There are two options to connect

          • Using a terminal to connect via SSH (for power users) (see First Time connection to the UAntwerpen-HPC)
          • Using the web portal

          Considering your operating system is Linux, it should be easy to make use of the ssh command in a terminal, but the web portal will work too.

          The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

          See shell access when using the web portal, or connection to the UAntwerpen-HPC when using a terminal.

          Make sure you can get shell access to the UAntwerpen-HPC before proceeding with the next steps.

          Info

          If you experience problems, see the connection issues section on the troubleshooting page.

          "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

          Now that you can login, it is time to transfer files from your local computer to your home directory on the UAntwerpen-HPC.

          Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

          On your local machine you can run:

          curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

          Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

          scp tensorflow_mnist.py run.sh vsc20167@login.hpc.uantwerpen.be:~\n

          ssh  vsc20167@login.hpc.uantwerpen.be\n

          Use your own VSC account id

          Replace vsc20167 with your VSC account id (see https://account.vscentrum.be)

          Info

          For more information about transferring files or scp, see transfer files from/to hpc.

          When running ls in your session on the UAntwerpen-HPC, you should see the two files listed in your home directory (~):

          $ ls ~\nrun.sh tensorflow_mnist.py\n

          When you do not see these files, make sure you uploaded the files to your home directory.

          "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

          Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

          A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

          Our job script looks like this:

          run.sh

          #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
          As you can see this job script will run the Python script named tensorflow_mnist.py.

          The jobs you submit are by default executed on cluster/{{ defaultcluster }}; you can swap to another cluster by issuing the following command.

          module swap cluster/{{ othercluster }}\n

          Tip

          When submitting jobs with a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

          $ qsub run.sh\n433253.leibniz\n

          This command returns a job identifier (433253.leibniz) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.

          Make sure you understand what the module command does

          Note that the module commands only modify environment variables. For instance, running module swap cluster/{{ othercluster }} will update your shell environment so that qsub submits a job to the {{ othercluster }} cluster, but your active shell session is still running on the login node.

          It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are being executed: they will still be run on the login node you are on.

          When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like {{ othercluster }}).

          For detailed information about module commands, read the running batch jobs chapter.

          "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

          Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

          You can get an overview of the active jobs using the qstat command:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:00  Q {{ othercluster }}\n

          Eventually, after entering qstat again you should see that your job has started running:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:01  R {{ othercluster }}\n

          If you don't see your job in the output of the qstat command anymore, your job has likely completed.

          Read this section on how to interpret the output.

          "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

          When your job finishes it generates 2 output files:

          • One for normal output messages (stdout output channel).
          • One for warning and error messages (stderr output channel).

          By default, these are located in the directory where you issued qsub.

          In our example when running ls in the current directory you should see 2 new files:

          • run.sh.o433253.leibniz, containing normal output messages produced by job 433253.leibniz;
          • run.sh.e433253.leibniz, containing errors and warnings produced by job 433253.leibniz.

          Info

          run.sh.e433253.leibniz should be empty (no errors or warnings).

          Use your own job ID

          Replace 433253.leibniz with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

          When examining the contents of run.sh.o433253.leibniz you will see something like this:

          Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

          Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

          Warning

          When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

          For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

          "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
          • Running interactive jobs
          • Running jobs with input/output data
          • Multi core jobs/Parallel Computing
          • Interactive and debug cluster

          For more examples see Program examples and Job script examples

          "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

          module swap cluster/joltik\n

          To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

          module swap cluster/accelgor\n

          Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

          "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

          To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

          Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@uantwerpen.be.

          "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

          See https://www.ugent.be/hpc/en/infrastructure.

          "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

          There are 2 main ways to ask for GPUs as part of a job:

          • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want to run with full control or in multinode cases like MPI jobs. If you do not specify the number of GPUs by just using -l gpus, you get 1 GPU by default (see the sketches after this list).

          • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.
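
          A few sketches of what such requests could look like on the command line (job_script.sh is a hypothetical job script; adapt the numbers to your own job):

          # 1 node with 4 cores and 1 GPU on that node\nqsub -l nodes=1:ppn=4:gpus=1 job_script.sh\n# 1 GPU on a single node, with the default number of cores per GPU\nqsub -l gpus=1 job_script.sh\n# 2 GPUs as a resource of their own (not necessarily on the same node)\nqsub --gpus 2 job_script.sh\n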

          Some background:

          • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

          • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

          "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

          Some important attention points:

          • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

          • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

          • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e., also supporting the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

          • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

          "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

          Use module avail to check for centrally installed software.

          The subsections below only cover a couple of installed software packages, more are available.

          "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

          Please consult module avail GROMACS for a list of installed versions.

          "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

          Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

          Please consult module avail Horovod for a list of installed versions.

          Horovod supports TensorFlow, Keras, PyTorch and MxNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; we are not sure whether it handles placement and other aspects correctly.)

          At least for simple TensorFlow benchmarks, it looks like Horovod is a bit faster than the usual autodetect multi-GPU TensorFlow without Horovod, but it comes at the cost of the code modifications needed to use Horovod.

          "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

          Please consult module avail PyTorch for a list of installed versions.

          "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

          Please consult module avail TensorFlow for a list of installed versions.

          Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

          "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
          #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
          "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

          Please consult module avail AlphaFold for a list of installed versions.

          For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

          "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

          In case of questions or problems, please contact the UAntwerpen-HPC via hpc@uantwerpen.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

          "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

          The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

          This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

          Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor) jobs on this cluster should normally start more or less immediately. The tradeoff is that performance must not be an issue for the submitted jobs. This means that typical workloads for this cluster should be limited to:

          • Interactive jobs (see chapter\u00a0Running interactive jobs)

          • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

          • Jobs requiring few resources

          • Debugging programs

          • Testing and debugging job scripts

          "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

          module swap cluster/donphan\n

          Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).
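
          For example, submitting a small debugging job could look as follows; the job script name debug_job.sh and the requested resources are just placeholders that stay within the limits described in the next section.

          module swap cluster/donphan\nqsub -l nodes=1:ppn=2 -l walltime=01:00:00 debug_job.sh\nqstat\n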

          "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

          Some limits are in place for this cluster:

          • each user may have at most 5 jobs in the queue (both running and waiting to run);

          • at most 3 jobs per user can be running at the same time;

          • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

          In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

          Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

          "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

          Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

          All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

          "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

          \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

          While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

          A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

          The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

          Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

          Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

          "}, {"location": "introduction/#what-is-the-uantwerpen-hpc", "title": "What is the UAntwerpen-HPC?", "text": "

The UAntwerpen-HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

          The UAntwerpen-HPC relies on parallel-processing technology to offer University of Antwerp researchers an extremely fast solution for all their data processing needs.

          The UAntwerpen-HPC consists of:

In technical terms ...\u00a0and in human terms:

          • over 280 nodes and over 11000 cores ...\u00a0or the equivalent of 2750 quad-core PCs

          • over 500 Terabyte of online storage ...\u00a0or the equivalent of over 60000 DVDs

          • up to 100 Gbit InfiniBand fiber connections ...\u00a0or allowing to transfer 3 DVDs per second

          The UAntwerpen-HPC currently consists of:

          Leibniz:

          1. 144 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM, 120 GB local disk

          2. 8 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM, 120 GB local disk

          3. 24 \"hopper\" compute nodes (recovered from the former Hopper cluster) with 2 ten core Intel E5-2680v2 CPUs (Ivy Bridge generation, 2.8 GHz), 256 GB memory, 500 GB local disk

          4. 2 GPGPU nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU, 120 GB local disk

          5. 1 vector computing node with 1 12-core Intel Xeon Gold 6126 (Skylake generation, 2.6 GHz), 96 GB RAM and 2 NEC SX-Aurora Vector Engines type 10B (per card 8 cores @1.4 GHz, 48 GB HBM2), 240 GB local disk

          6. 1 Xeon Phi node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon Phi 7220P PCIe card with 16 GB of RAM, 120 GB local disk

          7. 1 visualisation node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 256 GB RAM and with a NVIDIA P5000 GPU, 120 GB local disk

          The nodes are connected using an InfiniBand EDR network except for the \"hopper\" compute nodes that utilize FDR10 InfiniBand.

          Vaughan:

          1. 104 compute nodes with 2 32-core AMD Epyc 7452 (2.35 GHz) and 256 GB RAM, 240 GB local disk

          The nodes are connected using an InfiniBand HDR100 network.

          All the nodes in the UAntwerpen-HPC run under the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a clone of \"RedHat Enterprise Linux\", with cgroups support.

Two tools perform job management and job scheduling:

          1. TORQUE: a resource manager (based on PBS);

          2. Moab: job scheduler and management tools.

          For maintenance and monitoring, we use:

          1. Ganglia: monitoring software;

          2. Icinga and Nagios: alert manager.

          "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

          The HPC infrastructure is not a magic computer that automatically:

          1. runs your PC-applications much faster for bigger problems;

          2. develops your applications;

          3. solves your bugs;

          4. does your thinking;

          5. ...

          6. allows you to play games even faster.

          The UAntwerpen-HPC does not replace your desktop computer.

          "}, {"location": "introduction/#is-the-uantwerpen-hpc-a-solution-for-my-computational-needs", "title": "Is the UAntwerpen-HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

          Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

It is also possible to run programs that require user interaction (pushing buttons, entering input data, etc.) on the UAntwerpen-HPC. Although technically possible, using the UAntwerpen-HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be used optimally in those cases. A more in-depth analysis with the UAntwerpen-HPC staff can reveal whether the UAntwerpen-HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

          "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

          In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

          Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

          "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

Parallel computing is a form of computation in which many calculations are carried out simultaneously. It is based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

          Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

          The two parallel programming paradigms most used in HPC are:

          • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

          • MPI for distributed memory systems (multiprocessing): on multiple nodes

          Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

          "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

          Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

          It is perfectly possible to also run purely sequential programs on the UAntwerpen-HPC.

          Running your sequential programs on the most modern and fastest computers in the UAntwerpen-HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the UAntwerpen-HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.

          "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

          You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, CentOS Linux release 7.8.2003 (Core).

          For the most common programming languages, a compiler is available on CentOS Linux release 7.8.2003 (Core). Supported and common programming languages on the UAntwerpen-HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

Supported and commonly used compilers are GCC, clang, J2EE and Intel.

          Commonly used software packages are:

          • in bioinformatics: beagle, Beast, bowtie, MrBayes, SAMtools

          • in chemistry: ABINIT, CP2K, Gaussian, Gromacs, LAMMPS, NWChem, Quantum Espresso, Siesta, VASP

          • in engineering: COMSOL, OpenFOAM, Telemac

          • in mathematics: JAGS, MATLAB, R

• for visualisation: Gnuplot, ParaView.

          Commonly used libraries are Intel MKL, FFTW, HDF5, PETSc and Intel MPI, OpenMPI. Additional software can be installed \"on demand\". Please contact the UAntwerpen-HPC staff to see whether the UAntwerpen-HPC can handle your specific requirements.

          "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

          All nodes in the UAntwerpen-HPC cluster run under CentOS Linux release 7.8.2003 (Core), which is a specific version of RedHat Enterprise Linux. This means that all programs (executables) should be compiled for CentOS Linux release 7.8.2003 (Core).

          Users can connect from any computer in the University of Antwerp network to the UAntwerpen-HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the UAntwerpen-HPC.

          A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

          "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

          A typical workflow looks like:

          1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

          2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

          3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

          4. Create a job script and submit your job (see Running batch jobs)

          5. Get some coffee and be patient:

            1. Your job gets into the queue

            2. Your job gets executed

            3. Your job finishes

          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

          When you think that the UAntwerpen-HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the UAntwerpen-HPC cluster.

          Do not hesitate to contact the UAntwerpen-HPC staff for any help.

          1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

          "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

          This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

          • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

• -m/-M: the -m option will send emails to the email address registered with your VSC account. Use the -M option only if you want the emails to be sent to a different address.

          • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

          • To use a situational parameter, remove one '#' at the beginning of the line.

          simple_jobscript.sh
#!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
          "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

          Here's an example of a single-core job script:

          single_core.sh
          #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

          2. A module for Python 3.6 is loaded, see also section Modules.

          3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

          4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a unique filename in $VSC_DATA. For a list of possible storage locations, see subsection Pre-defined user directories.

          "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

          Here's an example of a multi-core job script that uses mympirun:

          multi_core.sh
          #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

          An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.
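
          If you want to fetch this file directly on the cluster, something like the following could work; note that the raw-file URL below is an assumption derived from the GitHub link above.

          # hypothetical raw URL corresponding to the GitHub page linked above\ncurl -o mpi_hello.c https://raw.githubusercontent.com/hpcugent/vsc-mympirun/master/testscripts/mpi_helloworld.c\n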

          "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

If you are not sure that your job will finish before it runs out of walltime, but you do want its output data to be copied back in any case, you have to stop the main command before the walltime runs out and copy the data back afterwards.

This can be done with the timeout command. This command sets a time limit on a program: when the limit is exceeded, the program is killed. Here's an example job script using timeout:

          timeout.sh
#!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but time out after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n# be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

          The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

          example_program.sh
          #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
          "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plain text. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code execution with text and visual output makes it a useful tool for data analysis, machine learning and educational purposes.

          "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

          Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

          After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

          and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

          This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

          "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

To find the appropriate modules, it is recommended to use the shell within the web portal, under Clusters > >_login Shell Access.

          We can see all available versions of the SciPy module by using module avail SciPy-bundle:

          $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

Not all modules will work for every notebook: we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that that module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

          $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

          The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

          If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

Now that we have found the right module for the notebook, add module load <module_name> to the Custom code field when creating the notebook, and you will be able to use the packages within that notebook.
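
          For the example above, the Custom code field would then simply contain:

          module load SciPy-bundle/2023.11-gfbf-2023b\n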

          "}, {"location": "known_issues/", "title": "Known issues", "text": "

          This page provides details on a couple of known problems, and the workarounds that are available for them.

          If you have any questions related to these issues, please contact the UAntwerpen-HPC.

          • Operation not permitted error for MPI applications
          "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

          Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

          This error means that an internal problem has occurred in OpenMPI.

          "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

          This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

          It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

          "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

          We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

          "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

A workaround has been implemented in mympirun (version 5.4.0).

          Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

          module load vsc-mympirun\n

          and launch your MPI application using the mympirun command.

          For more information, see the mympirun documentation.

          "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

          If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

          export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
          "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

          We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

          "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

          There are two important motivations to engage in parallel programming.

          1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

          2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that you can, in principle, split up your computations into groups and run each group on its own core.

There are multiple ways to achieve parallel programming. The overview below gives a (non-exhaustive) list of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

• Raw threads (pthreads, boost::threading, ...) -- available for all common programming languages. Threads are limited to shared memory systems. They are more often used on single-node systems than for the UAntwerpen-HPC. Thread management is hard.

          • OpenMP -- Fortran/C/C++. Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelised by simple insertion of compiler directives. Under the hood, threads are used. Hybrid approaches exist which use OpenMP to parallelise the workload on each node and MPI (see below) for communication between nodes.

          • Lightweight threads with clever scheduling (Intel TBB, Intel Cilk Plus) -- C/C++. Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler, enabling the programmer to focus on the parallelisation itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes.

          • MPI -- Fortran/C/C++, Python. Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication.

          • Global Arrays library -- C/C++, Python. Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and using one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

          Tip

You can request more nodes/cores by adding the following line to your job script.

          #PBS -l nodes=2:ppn=10\n
This queues a job that claims 2 nodes and 10 cores per node, i.e., 20 cores in total.

          Warning

          Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

          Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

The advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

          Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

          Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

          Go to the example directory:

          cd ~/examples/Multi-core-jobs-Parallel-Computing\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

          Study the example first:

          T_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 1;\n}\n

          And compile it (whilst including the thread library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n
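
          The job script T_hello.pbs used in the next step is not listed in this section. A minimal sketch of what it could look like, assuming the binary was already compiled as shown above and using indicative resource values:

          #!/bin/bash\n#PBS -N T_hello\n#PBS -l nodes=1:ppn=5       ## one core per thread (the example spawns 5 threads)\n#PBS -l walltime=00:05:00\n\ncd $PBS_O_WORKDIR\n./T_hello\n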

          Now, run it on the cluster and check the output:

          $ qsub T_hello.pbs\n433253.leibniz\n$ more T_hello.pbs.o433253.leibniz\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Tip

          If you plan engaging in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

          OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

          An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

          Here is the general code structure of an OpenMP program:

          #include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

          "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

          By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

          "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

Parallelising for loops is really simple (see code below). By default, loop iteration counters in OpenMP loop constructs (in this case the i variable) are set to private variables.

          omp1.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n
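
          The job script omp1.pbs used in the next step is not listed in this section either. A minimal sketch, assuming a single node with 8 cores and explicitly setting the number of OpenMP threads to the number of requested cores:

          #!/bin/bash\n#PBS -N omp1\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=00:05:00\n\ncd $PBS_O_WORKDIR\n## one OpenMP thread per requested core\nexport OMP_NUM_THREADS=8\n./omp1\n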

          Now run it in the cluster and check the result again.

          $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
          "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

Using OpenMP you can specify something called a \"critical\" section of code. This is code that is performed by all threads, but by only one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, without having to worry about other threads writing to that global variable at the same time (a collision).

          omp2.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). We used this paradigm in the code example above, where the \"critical code\" directive was used to accomplish it. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to implement it more easily.

          omp3.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

          There are a host of other directives you can issue using OpenMP.

          Some other clauses of interest are:

          1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

          2. nowait: threads will not wait until everybody is finished

3. schedule(type, chunk): allows you to specify how tasks are spawned out to threads in a for loop. There are three main types of scheduling you can specify: static, dynamic and guided.

          4. if: allows you to parallelise only if a certain condition is met

          5. ...\u00a0and a host of others

          Tip

          If you plan engaging in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman Gabriele Jost and Ruud van der Pas Scientific and Engineering Computation. 2005.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

          The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

          In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

          The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

          The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

          One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

          Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

          Study the MPI-programme and the PBS-file:

          mpi_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <string.h>\n#include <mpi.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and what this process's rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
          mpi_hello.pbs
          #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

          and compile it:

          $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

mpiicc is a wrapper around the Intel C compiler icc to compile MPI programs (see the chapter on compilation for details).

          Run the parallel program:

          $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc20167 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc20167 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc20167    0 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw------- 1 vsc20167  697 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw-r--r-- 1 vsc20167  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o433253.leibniz\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD = Single Program, Multiple Data) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process has its own rank, the total number of processes in the world, and the ability to communicate between them either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

          MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without compilation for each size variation, although runtime decisions might vary depending on that absolute amount of concurrency available.

          Tip

          If you plan engaging in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheo. Morgan Kaufmann. 1996.

          "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

A frequently occurring characteristic of scientific computation is its focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or with (ii) different input files.

These parameter values can have many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The users want to run their job once for each instance of the parameter values.

One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs. Those huge amounts of small jobs will create a lot of overhead, and can slow down the whole cluster. It would be better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

          The \"Worker framework\" has been developed to address this issue.

          It can handle many small jobs determined by:

          parameter variations

          i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

          job arrays

          i.e., each individual job gets a unique numeric identifier.

          Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

          However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

          "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/par_sweep\n

          Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

          $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

          For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

          par_sweep/weather
          #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

          A job script that would run this as a job for the first parameters (p01) would then look like:

          par_sweep/weather_p01.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

          When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

          To submit the job, the user would use:

           $ qsub weather_p01.pbs\n
          However, the user wants to run this program for many parameter instances, e.g., they want to run the program on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, exported from an RDBMS, or written by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

          $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

          It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.
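
          If you prefer to generate such a file from the command line instead of a spreadsheet, a minimal bash sketch could look as follows (the chosen values simply reproduce the pattern of the example above and are not prescribed by Worker):

          #!/bin/bash\n# minimal sketch: generate data.csv with a header line and 100 parameter instances\necho \"temperature, pressure, volume\" > data.csv\nfor i in {1..100}; do\n  echo \"$((292 + i)), 1.0e5, $((108 - i))\" >> data.csv\ndone\n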

          In order to make our PBS generic, the PBS file can be modified as follows:

          par_sweep/weather.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

          Note that:

          1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

          2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

          3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

          The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., 4 hours to be on the safe side.

          The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

          $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 100\n433253.leibniz\n

          Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

          Warning

          When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

          module swap env/slurm/donphan\n

          instead of

          module swap cluster/donphan\n
          We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

          "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/job_array\n

          As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

          The following bash script would submit these jobs all one by one:

          #!/bin/bash\nfor i in `seq 1 100`; do\n  qsub -o output$i -i input$i myprog.pbs\ndone\n

          This, as said before, could place a heavy burden on the job scheduler.

          Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

          Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

          The details are

          1. a job is submitted for each number in the range;

          2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

          3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

          The job could have been submitted using:

          qsub -t 1-100 myprog.pbs\n

          The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

          To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

          A typical job script for use with job arrays would look like this:

          job_array/job_array.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

          Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

          $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file #99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

          For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory in the files output_1.dat, output_2.dat, ..., output_100.dat.

          job_array/test_set
          #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

          Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

          job_array/test_set.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          Note that

          1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

          2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

          The job is now submitted as follows:

          $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n433253.leibniz\n

          The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

          Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

          $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n433253.leibniz  test_set.pbs  vsc20167          0 Q\n

          You can now check the generated output files:

          $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
          "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

          Often, an embarrassingly parallel computation can be abstracted to three simple steps:

          1. a preparation phase in which the data is split up into smaller, more manageable chunks;

          2. on these chunks, the same algorithm is applied independently (these are the work items); and

          3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

          The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

          cd ~/examples/Multi-job-submission/map_reduce\n

          The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

          First study the scripts:

          map_reduce/pre.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\n  echo \"This is input file #$i\" >  ./input/input_$i.dat\n  echo \"Parameter #1 = $i\" >>  ./input/input_$i.dat\n  echo \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\n  echo \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\n  echo \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
          map_reduce/post.sh
          #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

          Then one can submit a MapReduce style job as follows:

          $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n433253.leibniz\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

          Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.
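
          Since wsub passes regular qsub options on to the queue system (see the -help output further on), one way to account for this is simply to request a bit more walltime; a sketch, where the extra half hour is just an illustration:

          # 100 work items x 15 min / 8 cores is roughly 190 min; add some margin for pre.sh and post.sh\nwsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100 -l walltime=04:30:00\n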

          "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

          The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

          The \"Worker Framework\" will be effective when

          1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

          2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

          "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

          Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log433253.leibniz, assuming the job's ID is 433253.leibniz. To keep an eye on the progress, one can use:

          tail -f run.pbs.log433253.leibniz\n

          Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

          watch -n 60 wsummarize run.pbs.log433253.leibniz\n

          This will summarise the log file every 60 seconds.

          "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

          Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 weather -t $temperature  -p $pressure  -v $volume\n

          Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
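
          A hedged sketch of this approach: add a column to data.csv (here arbitrarily named time_limit; the name is our own choice, not something Worker requires) and refer to it in the job script:

          # data.csv gains an extra column, e.g. (first rows shown):\n#   temperature, pressure, volume, time_limit\n#   293, 1.0e5, 107, 00:20:00\n#   294, 1.0e5, 106, 00:30:00\n# and the job script uses that column instead of a hard-coded limit:\ntimedrun -t $time_limit weather -t $temperature  -p $pressure  -v $volume\n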

          Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

          "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

          Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items completed successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"433253.leibniz\".

          wresume -jobid 433253.leibniz\n

          This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

          wresume -l walltime=1:30:00 -jobid 433253.leibniz\n

          Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate, either successfully or with a reported failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

          wresume -jobid 433253.leibniz -retry\n

          By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

          "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

          This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

          $ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
          "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

          When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

          To check for the available versions of worker, use the following command:

          $ module avail worker\n
          1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

          "}, {"location": "mympirun/", "title": "Mympirun", "text": "

          mympirun is a tool that makes it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

          In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

          "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

          Before using mympirun, we first need to load its module:

          module load vsc-mympirun\n

          As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

          The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

          For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.
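
          Inside a job script, this typically boils down to something like the following sketch (the resource requests and the example program are placeholders, not a recommendation):

          #!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=01:00:00\ncd $PBS_O_WORKDIR\n# deliberately no version specified, see above\nmodule load vsc-mympirun\n# hypothetical MPI program with a single argument\nmympirun ./example 5\n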

          "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

          There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

          By default, mympirun starts one process per core on every node you were assigned. So if you were assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

          "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

          This is the most commonly used option for controlling the number of processes.

          The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

          $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
          "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

          There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses double the number of processes it normally would; and --multi, which does the same as --double but takes a multiplier (instead of the implied factor 2 with --double).
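
          A rough sketch of how these options are used (with the same mpi_hello test program as above):

          $ mympirun --universe 16 ./mpi_hello   # start exactly 16 processes\n$ mympirun --double ./mpi_hello        # start twice as many processes as mympirun normally would\n$ mympirun --multi 3 ./mpi_hello       # start three times as many processes as mympirun normally would\n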

          See vsc-mympirun README for a detailed explanation of these options.

          "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

          You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

          $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
          "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

          In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC UAntwerpen-HPC infrastructure.

          "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

          There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

          • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

            • see also http://openfoam.com/history/
          • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

            • see also https://openfoam.org/download/history/
          • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

          Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

          "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

          The best practices outlined here focus specifically on the use of OpenFOAM on the VSC UAntwerpen-HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

          • OpenFOAM websites:

            • https://openfoam.com

            • https://openfoam.org

            • http://wikki.gridcore.se/foam-extend

          • OpenFOAM user guides:

            • https://www.openfoam.com/documentation/user-guide

            • https://cfd.direct/openfoam/user-guide/

          • OpenFOAM C++ source code guide: https://cpp.openfoam.org

          • tutorials: https://wiki.openfoam.com/Tutorials

          • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

          Other useful OpenFOAM documentation:

          • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

          • http://www.dicat.unige.it/guerrero/openfoam.html

          "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

          To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

          "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

          First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

          $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

          To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

          To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

          module load OpenFOAM/11-foss-2023a\n
          "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

          OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

          source $FOAM_BASH\n
          "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

          If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

          source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

          Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
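
          Putting these steps together, preparing the environment in a shell session or job script typically looks like this (the module version is only an example; check module avail OpenFOAM for the versions that are actually available):

          module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n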

          "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

          If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

          unset FOAM_SIGFPE\n

          Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise terminate the simulation. It does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are still occurring.

          As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

          "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

          The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

          • generate the mesh;

          • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

          After running the simulation, some post-processing steps are typically performed:

          • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

          • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

          Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job running the actual simulation (on the HPC infrastructure or elsewhere), or as part of the job that runs the OpenFOAM simulation itself.

          Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

          One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

          For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

          "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

          For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

          "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

          When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

          You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.

          "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

          It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

          See Basic usage for how to get started with mympirun.

          To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

          export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

          Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
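
          For example (interFoam is just an illustration here, any parallel OpenFOAM solver works the same way):

          # an OpenFOAM guide may suggest something like:\n#   mpirun -np 4 interFoam -parallel\n# on the HPC infrastructure, use mympirun instead and let it determine the number of processes:\nmympirun interFoam -parallel\n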

          "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

          To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

          Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

          number of processor directories = 4 is not equal to the number of processors = 16\n

          In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

          • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

          • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

          See Controlling number of processes to control the number of processes mympirun will start.

          This is interesting if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
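
          A quick sanity check before starting the simulation is to compare the number of processor directories created by decomposePar with the number of cores assigned to your job; a sketch for a TORQUE job:

          # number of subdomains created by decomposePar\nls -d processor* | wc -l\n# number of cores assigned to the job (one line per core in the node file)\nwc -l < $PBS_NODEFILE\n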

          To visualise the processor domains, use the following command:

          mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

          and then load the VTK files generated in the VTK folder into ParaView.

          "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

          OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

          Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).

          • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc. keywords;

          • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

          • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

          • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid that OpenFOAM re-reads each of the system/*Dict files at every time step;

          • if the results per individual time step are large, consider setting writeCompression to true;

          For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

          These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple dozen processor cores.

          "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

          See https://cfd.direct/openfoam/user-guide/compiling-applications/.

          "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

          Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

          OpenFOAM_damBreak.sh
          #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not on available victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
          "}, {"location": "program_examples/", "title": "Program examples", "text": "

          If you have not done so already, copy our examples to your home directory by running the following command:

           cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

          ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

          Go to our examples:

          cd ~/examples/Program-examples\n

          Here, we have just put together a number of examples for your convenience. We made an effort to put comments inside the source files, so the source code files are (should be) self-explanatory.

          1. 01_Python

          2. 02_C_C++

          3. 03_Matlab

          4. 04_MPI_C

          5. 05a_OMP_C

          6. 05b_OMP_FORTRAN

          7. 06_NWChem

          8. 07_Wien2k

          9. 08_Gaussian

          10. 09_Fortran

          11. 10_PQS

          The above 2 OMP directories contain the following examples:

          C Files Fortran Files Description omp_hello.c omp_hello.f Hello world omp_workshare1.c omp_workshare1.f Loop work-sharing omp_workshare2.c omp_workshare2.f Sections work-sharing omp_reduction.c omp_reduction.f Combined parallel loop reduction omp_orphan.c omp_orphan.f Orphaned parallel loop reduction omp_mm.c omp_mm.f Matrix multiply omp_getEnvInfo.c omp_getEnvInfo.f Get and print environment information omp_bug* omp_bug* Programs with bugs and their solution

          Compile by any of the following commands:

          Language Commands C: icc -openmp omp_hello.c -o hello pgcc -mp omp_hello.c -o hello gcc -fopenmp omp_hello.c -o hello Fortran: ifort -openmp omp_hello.f -o hello pgf90 -mp omp_hello.f -o hello gfortran -fopenmp omp_hello.f -o hello

          Feel free to explore the examples.

          "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

          Remember to substitute the usernames, login nodes, file names, ... for your own.

          Login Login ssh vsc20167@login.hpc.uantwerpen.be Where am I? hostname Copy to UAntwerpen-HPC scp foo.txt vsc20167@login.hpc.uantwerpen.be: Copy from UAntwerpen-HPC scp vsc20167@login.hpc.uantwerpen.be:foo.txt Setup ftp session sftp vsc20167@login.hpc.uantwerpen.be Modules List all available modules Module avail List loaded modules module list Load module module load example Unload module module unload example Unload all modules module purge Help on use of module module help Command Description qsub script.pbs Submit job with job script script.pbs qstat 12345 Status of job with ID 12345 showstart 12345 Possible start time of job with ID 12345 (not available everywhere) checkjob 12345 Check job with ID 12345 (not available everywhere) qstat -n 12345 Show compute node of job with ID 12345 qdel 12345 Delete job with ID 12345 qstat Status of all your jobs qstat -na Detailed status of your jobs + a list of nodes they are running on showq Show all jobs on queue (not available everywhere) qsub -I Submit Interactive job Disk quota Check your disk quota mmlsquota Check your disk quota nice show_quota.py Disk usage in current directory (.) du -h Worker Framework Load worker module module load worker/1.6.12-foss-2021b Don't forget to specify a version. To list available versions, use module avail worker/ Submit parameter sweep wsub -batch weather.pbs -data data.csv Submit job array wsub -t 1-100 -batch test_set.pbs Submit job array with prolog and epilog wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operation system (Tier-2)", "text": "

          Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

          "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

          Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

          This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

          It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

          "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

          As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running a different OS than the login node you are on.

          For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

          $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

          Initially there will be only one RHEL 9 login node. As needed, a second one will be added.

          When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

          "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

          To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

          This includes (per user):

          • max. of 2 CPU cores in use
          • max. 8 GB of memory in use

          For more intensive tasks you can use the interactive and debug clusters through the web portal.

          "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

          The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

          However, there will be impact on the availability of software that is made available via modules.

          Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

          This includes all software installations on top of a compiler toolchain that is older than:

          • GCC(core)/12.3.0
          • foss/2023a
          • intel/2023a
          • gompi/2023a
          • iimpi/2023a
          • gfbf/2023a

          (or another toolchain with a year-based version older than 2023a)

          The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

          foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

          If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

          It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will provide more RHEL 9 nodes on other clusters to test on soon.
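
          For example, to verify from a login node that the software you rely on is available on shinx (OpenFOAM is just an example here):

          module swap cluster/shinx\nmodule avail OpenFOAM\n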

          "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

          We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

          cluster migration start migration completed on skitty Monday 30 September 2024 joltik October 2024 accelgor November 2024 gallade December 2024 donphan February 2025 doduo (default cluster) February 2025 login nodes switch February 2025

          Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to RHEL 9 login nodes will be done at the same time.

          We will keep this page up to date when more specific dates have been planned.

          Warning

          This planning is subject to change; some clusters may get migrated later than originally planned.

          Please check back regularly.

          "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

          If you have any questions related to the migration to the RHEL 9 operating system, please contact the UAntwerpen-HPC.

          "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

          In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

          When you connect to the UAntwerpen-HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which together decide when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the UAntwerpen-HPC the entire time.

          The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

          "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

          Software installation and maintenance on a UAntwerpen-HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the UAntwerpen-HPC, which is able to easily activate or deactivate the software packages that you require for your program execution.

          "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

          The program environment on the UAntwerpen-HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

          All the software packages that are installed on the UAntwerpen-HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

          "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

          In order to administer the active software and their environment variables, the module system has been developed, which:

          1. Activates or deactivates software packages and their dependencies.

          2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

          3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

          4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

          5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

          This is all managed with the module command, which is explained in the next sections.

          There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

          "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

          A large number of software packages are installed on the UAntwerpen-HPC clusters. A list of all currently available software can be obtained by typing:

          module available\n

          It's also possible to execute module av or module avail, these are shorter to type and will do the same thing.

          This will give some output such as:

          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          Or, when you want to check whether some specific software, compiler or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not be aware of the capital letters in the module name, we searched for a case-insensitive name with the \"-i\" option.

          This gives a full list of software packages that can be loaded.

          The casing of module names is important: lowercase and uppercase letters matter in module names.

          "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

          The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

          Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, a MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

          E.g., foss/2024a is the first version of the foss toolchain in 2024.

          The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.
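To see which installed modules were built with a particular toolchain, you can filter the module list on the toolchain suffix, for example using the intel-2016b toolchain mentioned above:

module av 2>&1 | grep -e \"intel-2016b\"\n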

          "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

          To \"activate\" a software package, you load the corresponding module file using the module load command:

          module load example\n

          This will load the most recent version of example.

For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographically last version after the /).

However, you should specify a particular version to avoid surprises when newer versions are installed:

          module load secondexample/2.7-intel-2016b\n

          The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

          Modules need not be loaded one by one; the two module load commands can be combined as follows:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

This will load the two modules as well as their dependencies (unless there are conflicts between the two modules).

          "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

          Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

          $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          You can also just use the ml command without arguments to list loaded modules.

          It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

          "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

          To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

$ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

          To unload the secondexample module, you can also use ml -secondexample.

          Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

          "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

          In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

          module purge\n

          However, on some VSC clusters you may be left with a very empty list of available modules after executing module purge. On those systems, module av will show you a list of modules containing the name of a cluster or a particular feature of a section of the cluster, and loading the appropriate module will restore the module list applicable to that particular system.
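On such a system, the sequence could look like this (a sketch using the donphan cluster module that appears later in this document; use the cluster module that matches the system you are working on):

module purge\nmodule load cluster/donphan\n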

          "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

          Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

          Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

          module load example\n

          rather than

          module load example/1.2.3\n

          Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

          Consider the following example modules:

          $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

          Let's now generate a version conflict with the example module, and see what happens.

          $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

          Note: A module swap command combines the appropriate module unload and module load commands.
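In the example above, the swap is therefore roughly equivalent to the following two commands:

module unload example\nmodule load example/4.5.6\n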

          "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

          With the module spider command, you can search for modules:

          $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

          It's also possible to get detailed information about a specific module:

          $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \nThis module can be loaded directly: module load example/1.2.3\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
          "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

          To get a list of all possible commands, type:

          module help\n

          Or to get more information about one specific module package:

          $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
          "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

          If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

          In each module command shown below, you can replace module with ml.

          First, load all modules you want to include in the collections:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          Now store it in a collection using module save. In this example, the collection is named my-collection.

          module save my-collection\n

          Later, for example in a jobscript or a new session, you can load all these modules with module restore:

          module restore my-collection\n

          You can get a list of all your saved collections with the module savelist command:

          $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

          To get a list of all modules a collection will load, you can use the module describe command:

          $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          To remove a collection, remove the corresponding file in $HOME/.lmod.d:

          rm $HOME/.lmod.d/my-collection\n
          "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

          To see how a module would change the environment, you can use the module show command:

$ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

          It's also possible to use the ml show command instead: they are equivalent.

Here you can see that the Python/2.7.12-intel-2016b module comes with a whole bunch of extensions: numpy, scipy, ...

          You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

          If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

          "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

To check how many jobs are running in which queues, you can use the qstat -q command:

          $ qstat -q\nQueue            Memory CPU Time Walltime Node  Run Que Lm  State\n---------------- ------ -------- -------- ----  --- --- --  -----\ndefault            --      --       --      --    0   0 --   E R\nq72h               --      --    72:00:00   --    0   0 --   E R\nlong               --      --    72:00:00   --  316  77 --   E R\nshort              --      --    11:59:59   --   21   4 --   E R\nq1h                --      --    01:00:00   --    0   1 --   E R\nq24h               --      --    24:00:00   --    0   0 --   E R\n                                               ----- -----\n                                                337  82\n

          Here, there are 316 jobs running on the long queue, and 77 jobs queued. We can also see that the long queue allows a maximum wall time of 72 hours.

          "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

          You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

          You can also get this information in text form (per cluster separately) with the pbsmon command:

          $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module (see the section on Specifying the cluster on which to run). It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

          "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

As an example, we will run a Perl script, which you will find in the examples subdirectory on the UAntwerpen-HPC. When you received an account on the UAntwerpen-HPC, a subdirectory with examples was automatically generated for you.

          Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

          cd\ncp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

          First go to the directory with the first examples by entering the command:

          cd ~/examples/Running-batch-jobs\n

          Each time you want to execute a program on the UAntwerpen-HPC you'll need 2 things:

The executable: the program to be executed, together with its input files, databases and/or command-line options.

A batch job script, which defines the computer resource requirements of the program and the required additional software packages, and which starts the actual executable. The UAntwerpen-HPC needs to know:

          1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n
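Expressed as #PBS directives, such a job script header could look like the following (a minimal sketch; the resource values, job name, file names and program name are placeholders, and the directives are explained one by one further on in this chapter):

#!/bin/bash -l\n# resources: 1 node, 1 core, 2 GB of memory, 1 hour of walltime\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\n#PBS -l walltime=01:00:00\n# name of the job and of the output/error files\n#PBS -N my_job\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n# the executable and its arguments\ncd $PBS_O_WORKDIR\n./my_program arg1 arg2\n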

Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

          List and check the contents with:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc20167 609 Sep 11 10:25 fibo.pl\n

          In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

          1. The Perl script calculates the first 30 Fibonacci numbers.

          2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

          We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

          On the command line, you would run this using:

          $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

Remark: Recall that you have now executed the Perl script locally on one of the login nodes of the UAntwerpen-HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

          fibo.pbs
          #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

          $ qsub fibo.pbs\n433253.leibniz\n

          The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"433253.leibniz \"); this is a unique identifier for the job and can be used to monitor and manage your job.

          Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

To facilitate this, you can use a pre-defined module collection, which you can restore using module restore; see the section on Save and load collections of modules for more information.
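For example, a job script could restore the my-collection collection saved in that section before running the program (a minimal sketch):

#!/bin/bash -l\n#PBS -l walltime=01:00:00\n# restore the saved module collection so the required modules are loaded\nmodule restore my-collection\ncd $PBS_O_WORKDIR\n./fibo.pl\n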

Your job is now waiting in the queue for a free worker node to start on.

          Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

          After your job was started, and ended, check the contents of the directory:

          $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc20167 vsc20167   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc20167 vsc20167    0 Feb 28 13:33 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 vsc20167 1010 Feb 28 13:33 fibo.pbs.o433253.leibniz\n-rwxrwxr-x 1 vsc20167 vsc20167  302 Feb 28 13:32 fibo.pl\n

          Explore the contents of the 2 new files:

          $ more fibo.pbs.o433253.leibniz\n$ more fibo.pbs.e433253.leibniz\n

These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) or \".e\" (error), respectively, and the job number ('433253.leibniz' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script).

          "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

It is possible to submit jobs from within a job to a cluster different from the one the job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

To submit jobs to the {{ othercluster }} cluster, you can change only what is needed in your session environment by using module swap env/slurm/{{ othercluster }} instead of module swap cluster/{{ othercluster }}. The latter command would also activate the software modules that are installed specifically for {{ othercluster }}, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the {{ othercluster }} cluster. The same approach can be used to submit jobs to another cluster, of course.

          Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the {{ defaultcluster }} cluster, loading the cluster/{{ defaultcluster }} module corresponds to loading 3 different env/ modules:

Each of these env/ modules for {{ defaultcluster }} has its own purpose:
env/slurm/{{ defaultcluster }}: changes $SLURM_CLUSTERS, which specifies the cluster where jobs are sent to.
env/software/{{ defaultcluster }}: changes $MODULEPATH, which controls what software modules are available for loading.
env/vsc/{{ defaultcluster }}: changes the set of $VSC_ environment variables that are specific to the {{ defaultcluster }} cluster.

We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they are doing, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

We also recommend running a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
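Putting this together, submitting a job to the donphan cluster from another cluster could look like this (a sketch; donphan and leibniz are simply the cluster names used elsewhere in this document, and myjob.pbs is a placeholder, so substitute the names that apply to your situation):

# send jobs to the donphan cluster without changing the software environment\nmodule swap env/slurm/donphan\nqsub myjob.pbs\n# reset the environment afterwards\nmodule swap cluster/leibniz\n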

          "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

          Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

          qstat 12345\n

To show an estimated start time for your job (note that this may be very inaccurate; the margin of error on this figure can be bigger than 100%, since it is essentially based on a sample of one). This command is not available on all systems.

          ::: prompt :::

          This is only a very rough estimate. Jobs may launch sooner than estimated if other jobs end faster than estimated, but may also be delayed if other higher-priority jobs enter the system.

          To show the status, but also the resources required by the job, with error messages that may prevent your job from starting:

          ::: prompt :::

          To show on which compute nodes your job is running, at least, when it is running:

          qstat -n 12345\n

To remove a job from the queue so that it will not run, or to stop a job that is already running:

          qdel 12345\n

          When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

          $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n433253.leibniz ....     mpi  vsc20167     0    Q short\n

          Here:

Job ID: the job's unique identifier

Name: the name of the job

User: the user that owns the job

Time Use: the elapsed walltime for the job

Queue: the queue the job is in

          The state S can be any of the following:

Q: The job is queued and is waiting to start.
R: The job is currently running.
E: The job is exiting after having run.
C: The job is completed after having run.
H: The job has a user or system hold on it and will not be eligible to run until the hold is removed.

          User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.

          "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

          As we learned above, Moab is the software application that actually decides when to run your job and what resources your job will run on.

You can look at the queue by using the PBS command or the Moab command. By default, the PBS command simply lists the jobs in the queue, whereas the Moab command displays jobs grouped by their state (\"running\", \"idle\", or \"hold\") and then ordered by priority; the latter is therefore often more useful. Note however that at some VSC sites, these commands show only your jobs, or may even be disabled so as not to reveal what other users are doing.

The Moab command displays information about active (\"running\"), eligible (\"idle\"), blocked (\"hold\"), and/or recently completed jobs. To get a summary:

active jobs: 163\neligible jobs: 133\nblocked jobs: 243\n\nTotal jobs: 539\n

          And to get the full detail of all the jobs, which are in the system:

active jobs------------------------\nJOBID    USERNAME  STATE    PROCS  REMAINING            STARTTIME\n428024   vsc20167  Running      8    2:57:32  Mon Sep  2 14:55:05\n\n153 active jobs    1307 of 3360 processors in use by local jobs (38.90%)\n                    153 of 168 nodes active                     (91.07%)\n\neligible jobs----------------------\nJOBID    USERNAME  STATE    PROCS     WCLIMIT            QUEUETIME\n442604   vsc20167  Idle        48  7:00:00:00  Sun Sep 22 16:39:13\n442605   vsc20167  Idle        48  7:00:00:00  Sun Sep 22 16:46:22\n\n135 eligible jobs\n\nblocked jobs-----------------------\nJOBID    USERNAME  STATE    PROCS     WCLIMIT            QUEUETIME\n441237   vsc20167  Idle         8  3:00:00:00  Thu Sep 19 15:53:10\n442536   vsc20167  UserHold    40  3:00:00:00  Sun Sep 22 00:14:22\n\n252 blocked jobs\n\nTotal jobs: 540\n

There are 3 categories: active, eligible, and blocked jobs.

          Active jobs

          are jobs that are running or starting and that consume computer resources. The amount of time remaining (w.r.t.\u00a0walltime, sorted to earliest completion time) and the start time are displayed. This will give you an idea about the foreseen completion time. These jobs could be in a number of states:

          Started

          attempting to start, performing pre-start tasks

          Running

          currently executing the user application

          Suspended

          has been suspended by scheduler or admin (still in place on the allocated resources, not executing)

          Cancelling

          has been cancelled, in process of cleaning up

          Eligible jobs

          are jobs that are waiting in the queues and are considered eligible for both scheduling and backfilling. They are all in the idle job state and do not violate any fairness policies or do not have any job holds in place. The requested walltime is displayed, and the list is ordered by job priority.

          Blocked jobs

          are jobs that are ineligible to be run or queued. These jobs could be in a number of states for the following reasons:

          Idle

          when the job violates a fairness policy

Userhold or Systemhold

when a user or administrative hold is in place

          Batchhold

          when the requested resources are not available or the resource manager has repeatedly failed to start the job

          Deferred

a temporary hold that is applied when the job has been unable to start after a specified number of attempts

          Notqueued

when the scheduling daemon is unavailable

          "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

          Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

          It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

          "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

          The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

          qsub -l walltime=2:30:00 ...\n

          For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

          If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
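One common way to do this is with the timeout command: give the main program slightly less time than the requested walltime, so that there is still time to copy results back (a minimal sketch; the program name, output file and 10-minute margin are placeholders):

#!/bin/bash -l\n#PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# stop the main program after 2 hours and 20 minutes, leaving 10 minutes of margin\ntimeout 140m ./my_program > results.out\n# copy the (partial) results to permanent storage before the walltime runs out\ncp results.out $VSC_DATA/\n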

          qsub -l mem=4gb ...\n

          The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

          qsub -l nodes=5:ppn=2 ...\n

          The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

          qsub -l nodes=1:westmere\n

          The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

          These options can either be specified on the command line, e.g.

qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

          or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          Note that the resources requested on the command line will override those specified in the PBS file.
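For example, submitting the modified \"fibo.pbs\" with an extra -l option overrides the value specified in the script; here the job would get 4 GB of memory rather than the 2 GB requested in the #PBS directive:

qsub -l mem=4gb fibo.pbs\n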

          "}, {"location": "running_batch_jobs/#node-specific-properties", "title": "Node-specific properties", "text": "

          The following table contains some node-specific properties that can be used to make sure the job will run on nodes with a specific CPU or interconnect. Note that these properties may vary over the different VSC sites.

ivybridge: only use Intel processors from the Ivy Bridge family (26xx-v2; hopper only)
broadwell: only use Intel processors from the Broadwell family (26xx-v4; leibniz only)
mem128: only use nodes with 128 GB of RAM (leibniz)
mem256: only use nodes with 256 GB of RAM (hopper and leibniz)
tesla, gpu: only use nodes with the NVIDIA P100 GPU (leibniz)

          Since both hopper and leibniz are homogeneous with respect to processor architecture, the CPU architecture properties are not really needed and only defined for compatibility with other VSC clusters.

shanghai: only use AMD Shanghai processors (AMD 2378)
magnycours: only use AMD Magnycours processors (AMD 6134)
interlagos: only use AMD Interlagos processors (AMD 6272)
barcelona: only use AMD Shanghai and Magnycours processors
amd: only use AMD processors
ivybridge: only use Intel Ivy Bridge processors (E5-2680-v2)
intel: only use Intel processors
gpgpu: only use nodes with General Purpose GPUs (GPGPUs)
k20x: only use nodes with NVIDIA Tesla K20x GPGPUs
xeonphi: only use nodes with Xeon Phi co-processors
phi5110p: only use nodes with Xeon Phi 5110P co-processors
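To use one of these properties, append it to the node specification of your resource request, e.g. (a sketch; the ppn value is just a placeholder):

qsub -l nodes=1:ppn=1:broadwell fibo.pbs\n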

          To get a list of all properties defined for all nodes, enter

          ::: prompt :::

          This list will also contain properties referring to, e.g., network components, rack number, etc.

          "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

          At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

          When you navigate to that directory and list its contents, you should see them:

          $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc20167  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc20167   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc20167   52 Sep 11 11:03 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 1307 Sep 11 11:03 fibo.pbs.o433253.leibniz\n

In our case, our job has created both an output file ('fibo.pbs.o433253.leibniz') and an error file ('fibo.pbs.e433253.leibniz'), containing info written to stdout and stderr respectively.

          Inspect the generated output and error files:

          $ cat fibo.pbs.o433253.leibniz\n...\n$ cat fibo.pbs.e433253.leibniz\n...\n
          "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#upon-job-failure", "title": "Upon job failure", "text": "

          Whenever a job fails, an e-mail will be sent to the e-mail address that's connected to your VSC account. This is the e-mail address that is linked to the university account, which was used during the registration process.

You can force a job to fail by specifying an unrealistic wall-time for the previous example. Let's give the \"fibo.pbs\" job just one second to complete:

          ::: prompt :::
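Using the walltime option introduced earlier, this could be done along the following lines (a sketch):

qsub -l walltime=00:00:01 fibo.pbs\n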

Now, let's hope that the system did not manage to run the job within one second, and you will get an e-mail informing you about this error.

PBS Job Id: \nJob Name:   fibo.pbs\nExec host: \nAborted by PBS Server \nJob exceeded some resource limit (walltime, mem, etc.). Job was aborted.\nSee Administrator for help\n

          "}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

          You can instruct the UAntwerpen-HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

          #PBS -m b \n#PBS -m e \n#PBS -m a\n

          or

          #PBS -m abe\n

          These options can also be specified on the command line. Try it and see what happens:

          qsub -m abe fibo.pbs\n

          The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

          qsub -m b -M john.smith@example.com fibo.pbs\n

          will send an e-mail to john.smith@example.com when the job begins.

          "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

If you submit two jobs that are expected to run one after another (for example because the first generates a file the second needs), there might be a problem, as they might both run at the same time.

          So the following example might go wrong:

          $ qsub job1.sh\n$ qsub job2.sh\n

          You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

          afterok means \"After OK\", or in other words, after the first job successfully completed.

          It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
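For example, to run a cleanup job regardless of how the first job ended, you could submit it as follows (a sketch; cleanup.sh is a hypothetical script name, and $FIRST_ID is the job ID captured in the example above):

qsub -W depend=afterany:$FIRST_ID cleanup.sh\n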

          1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

          "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

          Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line instead.

          Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the UAntwerpen-HPC. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

          The syntax for qsub for submitting an interactive PBS job is:

          $ qsub -I <... pbs directives ...>\n
          "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

          Tip

          Find the code in \"~/examples/Running_interactive_jobs\"

          First of all, in order to know on which computer you're working, enter:

          $ hostname -f\nln2.leibniz.uantwerpen.vsc\n

          This means that you're now working on the login node ln2.leibniz.uantwerpen.vsc of the cluster.

          The most basic way to start an interactive job is the following:

          $ qsub -I\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n

          There are two things of note here.

          1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

          In order to know on which compute-node you're working, enter again:

          $ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n

          Note that we are now working on the compute-node called \"r1c02cn3.leibniz.antwerpen.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

This computer name may look strange, but there is some logic to it: it provides the system administrators with information about where to find the computer in the computer room.

          The computer \"r1c02cn3\" stands for:

          1. \"r5\" is rack #5.

          2. \"c3\" is enclosure/chassis #3.

          3. \"cn08\" is compute node #08.

          With this naming convention, the system administrator can easily find the physical computers when they need to execute some maintenance activities.

          Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

          $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

          You can exit the interactive session with:

          $ exit\n

          Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

          You can work for 3 hours by:

          qsub -I -l walltime=03:00:00\n

          If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.

          "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

          To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

          The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

          Download the latest version of the XQuartz package on: http://xquartz.macosforge.org/landing/ and install the XQuartz.pkg package.

The installer will take you through the installation procedure; just keep clicking Continue on the various screens that pop up until the installation completes successfully.

          A reboot is required before XQuartz will correctly open graphical applications.
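Once a local X server is running, the usual approach is to connect to the cluster with X forwarding enabled and to pass the -X option to qsub, so that graphical output of the interactive job is forwarded to your screen (a sketch; replace the login node address with the one you normally use):

# on your local machine: connect with X forwarding enabled\nssh -X vsc20167@<login-node>\n# on the cluster: request an interactive job with X forwarding\nqsub -I -X\n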

          "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

          We have developed a little interactive program that shows the communication in 2 directions. It will send information to your local screen, but also asks you to click a button.

          Now run the message program:

          cd ~/examples/Running_interactive_jobs\n./message.py\n

          You should see the following message appearing.

          Click any button and see what happens.

          -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
          "}, {"location": "running_interactive_jobs/#run-your-interactive-application", "title": "Run your interactive application", "text": "

          In this last example, we will show you that you can just work on this compute node, just as if you were working locally on your desktop. We will run the Fibonacci example of the previous chapter again, but now in full interactive mode in MATLAB.

          ::: prompt :::

          And start the MATLAB interactive environment:

          ::: prompt :::

          And start the fibo2.m program in the command window:

          ::: prompt fx >> :::

          ::: center :::

          And see the displayed calculations, ...

          ::: center :::

          as well as the nice \"plot\" appearing:

          ::: center :::

          You can work in this MATLAB GUI, and finally terminate the application by entering \"\" in the command window again.

          ::: prompt fx >> :::

          "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where your standard output and error messages will go, and where you can collect your results.

          "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

          First go to the directory:

          cd ~/examples/Running_jobs_with_input_output_data\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          ```

          cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/ ```

          List and check the contents with:

          $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc20167   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc20167   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file3.py\n

          Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

          file1.py
          #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

The code of the Python script is self-explanatory:

1. In step 1, we write something to the file Hello.txt in the current directory.

          2. In step 2, we write some text to stdout.

          3. In step 3, we write to stderr.

          Check the contents of the first job script:

          file1a.pbs
          #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR  # the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

          You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

          Submit it:

          qsub file1a.pbs\n

          After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

          $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc20167   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc20167  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc20167  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc20167   91 Sep 13 13:13 file1a.pbs.e433253.leibniz\n-rw------- 1 vsc20167  105 Sep 13 13:13 file1a.pbs.o433253.leibniz\n-rw-rw-r-- 1 vsc20167  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc20167  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file3.py*\n

          Some observations:

          1. The file Hello.txt was created in the current directory.

          2. The file file1a.pbs.o433253.leibniz contains all the text that was written to the standard output stream (\"stdout\").

          3. The file file1a.pbs.e433253.leibniz contains all the text that was written to the standard error stream (\"stderr\").

          Inspect their contents ...\u00a0and remove the files

          $ cat Hello.txt\n$ cat file1a.pbs.o433253.leibniz\n$ cat file1a.pbs.e433253.leibniz\n$ rm Hello.txt file1a.pbs.o433253.leibniz file1a.pbs.e433253.leibniz\n

          Tip

          Type cat H and press the Tab button (looks like Tab), and it will expand into cat Hello.txt.

          "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

          Check the contents of the job script and execute it.

          file1b.pbs
#!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n

          Inspect the contents again ...\u00a0and remove the generated files:

          $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e433253.leibniz\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o433253.leibniz\n$ rm Hello.txt my_serial_job.*\n

          Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrote the JOBNAME variable, and resulted in a different name for the stdout and stderr files. This name is also shown in the second column of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.

          "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

          You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

          file1c.pbs
#!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n
          "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

The UAntwerpen-HPC cluster offers its users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

          Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

          The following locations are available:

Long-term storage (slow filesystem, intended for smaller files):

$VSC_HOME: For your configuration files and other small files; see the section on your home directory. The default directory is user/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites.

$VSC_DATA: A bigger \"workspace\", for datasets, results, logfiles, etc.; see the section on your data directory. The default directory is data/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites.

Fast temporary storage:

$VSC_SCRATCH_NODE: For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content.

$VSC_SCRATCH: For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Antwerpen/xxx/vsc20167. This directory is cluster- or site-specific: on different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content.

$VSC_SCRATCH_SITE: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space.

$VSC_SCRATCH_GLOBAL: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space.

          Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.

          We elaborate more on the specific function of these locations in the following sections.
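As an illustration, a job script could combine these locations as follows: stage input from the data directory, do the heavy I/O on the scratch system, and copy the results back afterwards (a minimal sketch; input.dat and my_program are placeholders):

#!/bin/bash -l\n#PBS -l walltime=01:00:00\n# create a job-specific working directory on the fast scratch file system\nexport WORKDIR=$VSC_SCRATCH/job_$PBS_JOBID\nmkdir -p $WORKDIR\ncd $WORKDIR\n# stage the input data from the long-term data volume\ncp $VSC_DATA/input.dat .\n# run the program from the directory the job was submitted from\n$PBS_O_WORKDIR/my_program input.dat > results.out\n# copy the results back to the data volume and clean up the scratch space\ncp results.out $VSC_DATA/\ncd\nrm -rf $WORKDIR\n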

          "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

          Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

          The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

          The operating system also creates a few files and folders here to manage your account. Examples are:

.ssh/: This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!

.bash_profile: When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt.

.bashrc: This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts.

.bash_history: This file contains the commands you typed at your shell prompt, in case you need them again.

          Furthermore, we have initially created some files/directories there (tutorial, docs, examples, examples.pbs) that accompany this manual and allow you to easily execute the provided examples.

          "}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

          In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

          The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

          "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

          To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

          You should remove any data from these systems once your processing has finished. There are no guarantees about how long your data will be kept on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain in place forever, and may change them if this seems necessary for the healthy operation of the cluster.

          Each type of scratch has its own use:

          Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

          Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

          At the time of writing, the cluster scratch space is shared between both clusters at the University of Antwerp. This may change again in the future when storage gets updated.
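
          For example, a minimal way to copy results from the cluster scratch to your data directory after a job has finished, and to clean up the scratch copy afterwards, could look like this (the directory name job_results is just an assumption for this sketch):

          cp -r $VSC_SCRATCH/job_results $VSC_DATA/   # copy results to long-term storage\nrm -rf $VSC_SCRATCH/job_results             # remove the scratch copy afterwards\n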

          Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

          Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

          Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files and the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

          The amount of data (called \"Block Limits\") that is currently in use by the user (\"KB\"), the soft limits (\"quota\") and the hard limits (\"limit\") for all 3 file systems are always displayed when a user connects to the UAntwerpen-HPC.

          With regards to the file limits, the number of files in use (\"files\"), its soft limit (\"quota\") and its hard limit (\"limit\") for the 3 file-systems are also displayed.

          Your quota is:

          Block Limits\nFilesystem        KB      quota      limit    grace\nhome          177920    3145728    3461120     none\ndata        17707776   26214400   28835840     none\nscratch       371520   26214400   28835840     none\n\nFile Limits\nFilesystem     files      quota      limit    grace\nhome             671      20000      25000     none\ndata          103079     100000     150000  expired\nscratch         2214     100000     150000     none\n

          Make sure to regularly check these numbers at log-in!

          The rules are:

          1. You will only receive a warning when you have reached the soft limit of either quota.

          2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

          3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

          We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. They also help to guarantee a fair use of all available resources for all users, and to ensure that each folder is used for its intended purpose.

          "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

          Tip

          Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

          In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

          1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

          2. repeat this action 30,000 times;

          3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the UAntwerpen-HPC.

          $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

          Tip

          Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

          In this exercise, you will

          1. Generate the file \"primes_1.txt\" again as in the previous exercise;

          2. open the file;

          3. read it line by line;

          4. calculate the average of primes in the line;

          5. count the number of primes found per line;

          6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job:

          $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

          The available disk space on the UAntwerpen-HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website (https://vscdocumentation.readthedocs.io/en/latest/hardware.html). As explained in the section on predefined quota, this implies that there are also limits to:

          • the amount of disk space; and

          • the number of files

          that can be made available to each individual UAntwerpen-HPC user.

          The quota of disk space and number of files for each UAntwerpen-HPC user is:

          HOME: 3 GB volume and 20000 files. DATA: 25 GB volume and 100000 files. SCRATCH: 25 GB volume and 100000 files.

          Tip

          The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.
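
          As a possible starting point (the file pattern, age threshold and directory are assumptions; adapt them to your own data), you could first identify the largest items and then remove old log files:

          du -ah --max-depth 1 $VSC_DATA | sort -rh | head -20    # show the largest items in your data directory\nfind $VSC_DATA -name '*.log' -mtime +30 -delete         # remove log files older than 30 days\n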

          Tip

          "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

          The \"show_quota\" command has been developed to show you the status of your quota in a readable format:

          $ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

          or on the UAntwerp clusters

          $ module load scripts\n$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

          With this command, you can easily follow up on the consumption of your total disk quota, as it is expressed in percentages. Depending on which cluster you are running the script, it may not be able to show the quota on all your folders. E.g., when running on the tier-1 system Muk, the script will not be able to show the quota on $VSC_HOME or $VSC_DATA if your account is a KU\u00a0Leuven, UAntwerpen or VUB account.

          Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

          $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632\n

          This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

          If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

          $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

          If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

          $ du -s\n5632 .\n$ du -s -h\n5.5M .\n

          If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

          $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

          Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

          $ du -h --max-depth 1 $VSC_HOME\n22M /user/antwerpen/201/vsc20167/dataset01\n36M /user/antwerpen/201/vsc20167/dataset02\n22M /user/antwerpen/201/vsc20167/dataset03\n3.5M /user/antwerpen/201/vsc20167/primes.txt\n24M /user/antwerpen/201/vsc20167/.cache\n

          We also want to mention the tree command, as it provides an easy way to see which files consume your available quota. Tree is a recursive directory-listing program that produces a depth-indented listing of files.

          Try:

          $ tree -s -d\n

          However, we urge you to only use the du and tree commands when you really need them as they can put a heavy strain on the file system and thus slow down file operations on the cluster for all other users.

          "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

          Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

          Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

          To change the group of a directory and its underlying directories and files, you can use:

          chgrp -R groupname directory\n
          "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
          1. Get the group name you want to belong to.

          2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

          3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

          "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
          1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

          2. Fill out the group name. This cannot contain spaces.

          3. Put a description of your group in the \"Info\" field.

          4. You will now be a member and moderator of your newly created group.

          "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

          Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

          "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

          You can get details about the current state of groups on the HPC infrastructure with the following command (example is the name of the group we want to inspect):

          $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

          We can see that the group ID is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

          "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

          A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

          "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

          This section will explain how to create, activate, use and deactivate Python virtual environments.

          "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

          A Python virtual environment can be created with the following command:

          python -m venv myenv      # Create a new virtual environment named 'myenv'\n

          This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.
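
          On a typical Linux system, the resulting layout looks roughly like this (the exact contents depend on the Python version used):

          $ ls myenv\nbin  include  lib  lib64  pyvenv.cfg\n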

          Warning

          When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

          "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

          To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

          source myenv/bin/activate                    # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

          After activating the virtual environment, you can install additional Python packages with pip install:

          pip install example_package1\npip install example_package2\n

          These packages are scoped to the virtual environment: they will not affect the system-wide Python installation, and they are only available while the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.
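
          To verify that the virtual environment is actually being used, you can check which Python interpreter is active; the path shown below is just a placeholder, but it should point inside the virtual environment:

          $ which python\n/path/to/myenv/bin/python\n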

          It is now possible to run Python scripts that use the installed packages in the virtual environment.

          Tip

          When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

          Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

          To check if a package is available as a module, use:

          module av package_name\n

          Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

          module show module_name\n

          to check which extensions are included in a module (if any).

          "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

          Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

          example.py
          import example_package1\nimport example_package2\n...\n
          python example.py\n
          "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

          When you are done using the virtual environment, you can deactivate it. To do that, run:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

          You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

          pytorch_poutyne.py
          import torch\nimport poutyne\n\n...\n

          We load a PyTorch package as a module and install Poutyne in a virtual environment:

          module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

          While the virtual environment is activated, we can run the script without any issues:

          python pytorch_poutyne.py\n

          Deactivate the virtual environment when you are done:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

          To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

          module swap cluster/donphan\nqsub -I\n

          After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

          Naming a virtual environment

          When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

          python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
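
          When you later activate the environment, use the same cluster-specific name:

          source myenv_${VSC_INSTITUTE_CLUSTER}/bin/activate\n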
          "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

          This section will combine the concepts discussed in the previous sections to:

          1. Create a virtual environment on a specific cluster.
          2. Combine packages installed in the virtual environment with modules.
          3. Submit a job script that uses the virtual environment.

          The example script that we will run is the following:

          pytorch_poutyne.py
          import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

          First, we create a virtual environment on the donphan cluster:

          module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

          Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

          jobscript.pbs
          #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

          Next, we submit the job script:

          qsub jobscript.pbs\n

          Two files will be created in the directory where the job was submitted: python_job_example.o433253.leibniz and python_job_example.e433253.leibniz, where 433253.leibniz is the id of your job. The .o file contains the output of the job.

          "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

          Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

          For example, if we create a virtual environment on the skitty cluster,

          $ module swap cluster/skitty\n$ qsub -I\n$ python -m venv myenv\n

          return to the login node by pressing CTRL+D and try to use the virtual environment:

          $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

          we are presented with the illegal instruction error. More info on this here

          "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

          When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

          python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

          Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.
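
          In practice, this means checking and clearing your loaded modules before requesting the interactive job, along these lines:

          module list     # check which modules are currently loaded\nmodule purge    # unload them before starting the interactive job\n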

          "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

          There are two main reasons why this error could occur.

          1. You have not loaded the Python module that was used to create the virtual environment.
          2. You loaded or unloaded modules while the virtual environment was activated.
          "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

          If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

          The following commands illustrate this issue:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

          module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

          You must not load or unload modules while in a virtual environment. Loading and unloading modules modifies the $PATH variable in the current shell. When activating a virtual environment, it will store the $PATH variable of the shell at that moment. If you modify the $PATH variable while in a virtual environment by loading or unloading modules, and deactivate the virtual environment, the $PATH variable will be reset to the one stored in the virtual environment. Trying to use those modules will lead to errors:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          The solution is to only modify modules when not in a virtual environment.

          "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

          Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

          This documentation only covers aspects of using Singularity on the infrastructure.

          "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to prevent the use of Singularity from impacting other users on the system.

          The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.
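
          As an illustration (the image name example.sif is an assumption), you would first place the image on an allowed filesystem and then run it from there:

          cp example.sif $VSC_SCRATCH/\nsingularity exec $VSC_SCRATCH/example.sif cat /etc/os-release\n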

          If these limitations are a problem for you, please let us know by contacting the UAntwerpen-HPC.

          "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

          Creating new Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can create new Singularity images or convert Docker images.

          When you create Singularity images or convert Docker images, some restrictions apply:

          • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination; see the sketch below.
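
          A possible workflow, assuming a (hypothetical) definition file myimage.def, could look like this:

          singularity build --fakeroot /tmp/myimage.sif myimage.def   # build in a globally writable location\nmv /tmp/myimage.sif $VSC_SCRATCH/                           # then move the image to its destination\n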
          "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          ::: prompt :::

          Create a job script like:
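
          As a sketch only (the container image name example_container.sif is an assumption; use the image you copied to $VSC_SCRATCH), such a job script could run the myscript.sh created in the next step inside the container:

          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\n# assumes both the image and myscript.sh were copied to $VSC_SCRATCH\ncd $VSC_SCRATCH\nsingularity exec ./example_container.sif bash ./myscript.sh\n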

          Create an example myscript.sh:

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n

          "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

          Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

          ::: prompt :::

          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before singularity execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

          ::: prompt :::

          For example to compile an MPI example:

          ::: prompt :::
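
          As a sketch only (the image name mpi_container.sif and source file mpi_hello.c are assumptions, and we assume the container provides mpicc):

          singularity exec $VSC_SCRATCH/mpi_container.sif mpicc mpi_hello.c -o mpi_hello\n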

          Example MPI job script:
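
          As a sketch only (container image, executable name, MPI module and resource numbers are assumptions; adapt them to your situation):

          #!/bin/bash\n#PBS -l nodes=2:ppn=4\n#PBS -l walltime=00:30:00\n\nmodule load intel/2023a      # assumption: an MPI implementation compatible with the one inside the container\ncd $VSC_SCRATCH\nmpirun singularity exec ./mpi_container.sif ./mpi_hello\n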

          "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

          The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

          As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

          In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

          In order to prepare things, make a teaching request by contacting the UAntwerpen-HPC with the following information (explained further below):

          • Title and nickname
          • Start and end date for your course or training
          • VSC-ids of all teachers/trainers
          • Participants based on UGent Course Code and/or list of VSC-ids
          • Optional information
            • Additional storage requirements
              • Shared folder
              • Groups folder for collaboration
              • Quota
            • Reservation for resource requirements beyond the interactive cluster
            • Ticket number for specific software needed for your course/training
            • Details for a custom Interactive Application in the webportal

          In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

          Please make these requests well in advance, several weeks before the start of your course/workshop.

          "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

          The title of the course or training can be used in e.g. reporting.

          The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

          When choosing the nickname, try to make it unique, but this is not enforced nor checked.

          "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

          The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

          The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

          • Course group and subgroups will be deactivated
          • Residual data in the course directories will be archived or deleted
          • Custom Interactive Applications will be disabled
          "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

          A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also member of this group).

          This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

          Provide us with a list of the VSC-ids of all teachers or trainers, so we can identify the moderators.

          "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

          The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

          "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

          Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

          The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

          Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

          A course group will be automatically created for your course, with all VSC accounts of registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

          "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

          (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as member. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

          "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

          For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

          This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

          Every course directory will always contain the folders:

          • input
            • ideally suited to distribute input data such as common datasets
            • moderators have read/write access
            • group members (students) only have read access
          • members
            • this directory contains a personal folder for every student in your course members/vsc<01234>
            • only this specific VSC-id will have read/write access to this folder
            • moderators have read access to this folder
          "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

          Optionally, we can also create these folders:

          • shared
            • this is a folder for sharing files between any and all group members
            • all group members and moderators have read/write access
          • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
          • groups
            • a number of groups/group_<01> folders are created under the groups folder
            • these folders are suitable if you want to let your students collaborate closely in smaller groups
          • each of these group_<01> folders is owned by a dedicated group
            • teachers are automatically made moderators of these dedicated groups
          • moderators can populate these groups with the VSC-ids of group members on the VSC account page, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
            • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

          If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

          • shared: yes
          • subgroups: <number of (sub)groups>
          "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

          There are 4 quota settings that you can choose in your teaching request in the case the defaults are not sufficient:

          • overall quota (defaults 10 GB volume and 20k files) are for the moderators and can be used for e.g. the input folder.
          • member quota (defaults 5 GB volume and 10k files) are per student/participant

          The course data usage is not counted towards any other quota (like VO quota). It depends solely on these settings.

          "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

          The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only for the moderators (possibly in the form of an archive zipfile). One year after the end date, it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

          "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

          We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

          Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

          Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

          Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

          "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

          In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

          We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

          Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

          "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

          HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

          A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

          If you would like this for your course, provide more details in your teaching request, including:

          • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

          • which cluster you want to use

          • how many nodes/cores/GPUs are needed

          • which software modules you are loading

          • custom code you are launching (e.g. autostart a GUI)

          • required environment variables that you are setting

          • ...

          We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

          A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

          "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

          Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore, so since 2021 the UAntwerpen-HPC no longer uses Torque in the backend, in favor of Slurm. However, the Torque user interface, which consists of commands like qsub and qstat, was kept so that researchers would not have to learn new commands to submit and manage jobs.

          "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

          Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

          "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

          Jobcli is a Python library that was developed by UAntwerpen-HPC to make it possible for the UAntwerpen-HPC to use a Torque frontend and a Slurm backend. In addition to that, it adds some additional options for Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

          "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

          Adding --help to a Torque command when using it on the UAntwerpen-HPC will output an extensive overview of all supported options for that command, both the original ones from Torque and the ones added by jobcli, with a short description for each one.

          For example:

          $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

          "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

          Adding --dryrun to a Torque command when using it on the UAntwerpen-HPC will show the user what Slurm commands are generated by that Torque command by jobcli. Using --dryrun will not actually execute the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

          Similarly to --dryrun, adding --debug to a Torque command when using it on the UAntwerpen-HPC will show the user what Slurm commands are generated by that Torque command by jobcli. However in contrast to --dryrun, using --debug will actually run the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

          The following examples illustrate the working of the --dryrun and --debug options with an example jobscript.

          example.sh:

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

          Running the following command:

          $ qsub --dryrun example.sh -N example\n

          will generate this output:

          Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc20167/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#!/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque commands to Slurm commands. For example, the job name is the one we specified with the -N option in the command.

          With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related structures, like $PBS_JOBID, they are retained. Slurm is configured on the UAntwerpen-HPC in such a way that common PBS_* environment variables are defined in the job environment, next to their Slurm equivalents.

          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

          Similarly to the --dryrun example, we start by running the following command:

          $ qsub --debug example.sh -N example\n

          which generates this output:

          DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
          The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

          "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

          Below is a list of the most common and useful directives.

          Option System type Description -k All Send \"stdout\" and/or \"stderr\" to your home directory when the job runs #PBS -k o or #PBS -k e or #PBS -k oe -l All Precedes a resource request, e.g., processors, wallclock -M All Send e-mail messages to an alternative e-mail address #PBS -M me@mymail.be -m All Send an e-mail when a job begins execution and/or ends or aborts #PBS -m b or #PBS -m be or #PBS -m ba mem Shared Memory Specifies the amount of memory you need for a job. #PBS -l mem=90gb mpiprocs Clusters Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. #PBS -l mpiprocs=4 -N All Give your job a unique name #PBS -N galaxies1234 -ncpus Shared Memory The number of processors to use for a shared memory job. #PBS -l ncpus=4 -r All Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. #PBS -r n or #PBS -r y select Clusters Number of compute nodes to use. Usually combined with the mpiprocs directive #PBS -l select=2 -V All Make sure that the environment in which the job runs is the same as the environment in which it was submitted #PBS -V walltime All The maximum time a job can run before being stopped. If not specified, a default of a few minutes is used. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

          TORQUE-related environment variables in batch job scripts.

          # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

          IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.
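
          As an illustration, here is a minimal job script sketch that combines several of the directives from the table above; the resource values and the program name (./my_program) are placeholders that you should adapt to your own workload:

          #!/bin/bash\n#PBS -N example_job\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=02:00:00\n#PBS -l mem=8gb\n#PBS -m be\n#PBS -M me@mymail.be\n# All #PBS directives above come before the first executable command\ncd $PBS_O_WORKDIR\n./my_program\n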

          When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

          Variable Description PBS_ENVIRONMENT set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. PBS_JOBID the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat. PBS_JOBNAME the job name supplied by the user PBS_NODEFILE the name of the file that contains the list of the nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count the nodes, etc. PBS_QUEUE the name of the queue from which the job is executed PBS_O_HOME value of the HOME variable in the environment in which qsub was executed PBS_O_LANG value of the LANG variable in the environment in which qsub was executed PBS_O_LOGNAME value of the LOGNAME variable in the environment in which qsub was executed PBS_O_PATH value of the PATH variable in the environment in which qsub was executed PBS_O_MAIL value of the MAIL variable in the environment in which qsub was executed PBS_O_SHELL value of the SHELL variable in the environment in which qsub was executed PBS_O_TZ value of the TZ variable in the environment in which qsub was executed PBS_O_HOST the name of the host upon which the qsub command is running PBS_O_QUEUE the name of the original queue to which the job was submitted PBS_O_WORKDIR the absolute path of the current working directory of the qsub command. This is the most useful one; use it in every job script. The first thing to do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory. PBS_VERSION Version Number of TORQUE, e.g., TORQUE-2.5.1 PBS_MOMPORT active port for mom daemon PBS_TASKNUM number of tasks requested PBS_JOBCOOKIE job cookie PBS_SERVER Server Running TORQUE"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

          Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this can be found in the subsections below.

          "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

          When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

          To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

          Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
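
          A minimal sketch of such a scaling test, assuming an OpenMP-capable program (./my_program is a placeholder) whose thread count is controlled via OMP_NUM_THREADS:

          for n in 1 2 4 8 16; do\n    export OMP_NUM_THREADS=$n\n    echo \"Running with $n threads:\"\n    time ./my_program\ndone\n

          If the runtime barely improves from one step to the next, you have found the point where requesting more cores is no longer useful.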

          Other reasons why using more cores may not lead to a (significant) speedup include:

          • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the amount of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program in too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

          • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program can not be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload. (A short worked version of this example follows this list.)

          • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, one thread/process has to wait until the other one is finished using that resource. When every thread needs the same resource, the program will definitely run slower than if it did not need to wait for other threads to finish.

          • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that Python threads are implemented in a way that prevents multiple threads from running at the same time, due to the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing instead, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can, even though processes are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

          • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

          • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
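
          To make the Amdahl's Law example above concrete: with a parallelizable fraction p = 19/20 = 0.95, the theoretical speedup on n cores is:

          speedup(n) = 1 / ((1 - p) + p / n)\nlimit for large n: 1 / (1 - p) = 1 / 0.05 = 20\n

          So no matter how many cores are used, this program can never run more than 20 times faster; it always needs at least 1 hour.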

          More info on running multi-core workloads on the UAntwerpen-HPC can be found here.

          "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

          When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

          Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

          Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist some libraries that do this for you.

          Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

          An example of how you can make beneficial use of multiple nodes can be found here.

          You can also use MPI in Python, some useful packages that are also available on the HPC are:

          • mpi4py
          • Boost.MPI

          We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on one node before you expand to more nodes. In addition, when running MPI software we strongly advise using our mympirun tool.
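
          A minimal sketch of a multi-node MPI job script using mympirun; the module name, core counts and program name (./my_mpi_program) are assumptions that you should replace with the ones that apply to your situation:

          #!/bin/bash\n#PBS -l nodes=2:ppn=28\n#PBS -l walltime=04:00:00\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\n# mympirun derives the number of MPI processes from the requested resources\nmympirun ./my_mpi_program\n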

          "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

          If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.
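
          As a quick first check (assuming the program has a --help option; ./my_program is a placeholder), you can often spot such options in the help output:

          $ ./my_program --help | grep -iE 'thread|core|process|mpi'\n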

          If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

          "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

          If your job output contains an error message similar to this:

          =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

          This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.
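
          For example, to give a job up to 48 hours of walltime, you could add the directive #PBS -l walltime=48:00:00 to the job script, or pass it on the command line (job.sh is a placeholder):

          $ qsub -l walltime=48:00:00 job.sh\n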

          "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

          Sometimes a job hangs at some point or stops writing to disk. These errors are usually related to quota usage. You may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to disk, and then resubmit the jobs.
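
          To find out which directories are taking up the most space at a storage endpoint, a quick check such as the following can help ($VSC_DATA is used here as an example endpoint):

          $ du -sh $VSC_DATA/* | sort -h | tail -n 10\n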

          "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

          If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

          If you have errors that look like:

          vsc20167@login.hpc.uantwerpen.be: Permission denied\n

          or you are experiencing problems with connecting, here is a list of things to do that should help:

          1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

          2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

          3. Please double/triple check your VSC login ID. It should look something like vsc20167: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

          4. Did you previously connect to the UAntwerpen-HPC from one machine, but are you now using another machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

          5. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh; see the example after this list. id_rsa.pub is the usual filename of the public key, id_rsa is the usual filename of the private key. (See also section\u00a0Connect)

          6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

          7. Please do not use someone else's private keys. You must never share your private key; they're called private for a good reason.
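
          For example, to explicitly point ssh to a private key stored in a non-default location (the filename id_rsa_vsc below is purely illustrative), as mentioned in item 5 above:

          $ ssh -i ~/.ssh/id_rsa_vsc vsc20167@login.hpc.uantwerpen.be\n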

          If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@uantwerpen.be and include the following information:

          Please add -vvv as a flag to ssh like:

          ssh -vvv vsc20167@login.hpc.uantwerpen.be\n

          and include the output of that command in the message.

          "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

          If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

          You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

          - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- sha-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

          Do not click \"Yes\" until you have verified the fingerprint. Do not press \"No\" in any case.

          If the fingerprint matches, click \"Yes\".

          If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@uantwerpen.be.

          Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

          "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

          If you get errors like:

          $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

          or

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

          It's probably because you transferred the files from a Windows computer. See the section about dos2unix in Linux tutorial to fix this error.

          "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "


          The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

          Make sure the fingerprint in the alert matches one of the following:

          - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- sha-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

          If it does, press Yes; if it doesn't, please contact hpc@uantwerpen.be.

          Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

          "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

          To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

          Note

          Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

          "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

          If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

          Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

          You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.

          "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

          See Generic resource requirements to set memory and other requirements, see Specifying memory requirements to finetune the amount of memory you request.
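
          For example, to request 16 GB of memory for a job, you could add a directive like the following to your job script (the exact amount depends on your workload):

          #PBS -l mem=16gb\n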

          "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

          All the UAntwerpen-HPC clusters run some variant of the \"RedHat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

          vsc20167@ln01[203] $\n

          When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

          Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen joe Text editor

          Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

          $ echo This is a test\nThis is a test\n

          Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

          More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the item or command \"ls\", by trying either of the following:

          $ ls --help \n$ man ls\n$ info ls\n

          (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

          "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

          In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

          Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script and executes them. There are many kinds of scripting languages, including Perl and Python.

          Another very common scripting language is shell scripting.

          In the following examples, each line typically contains one command to be executed, although it is possible to put multiple commands on one line. A very simple example of a script may be:

          echo \"Hello! This is my hostname:\" \nhostname\n

          You can type both lines at your shell prompt, and the result will be the following:

          $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

          Suppose we want to call this script \"foo\". Open a new file for editing, name it \"foo\", and edit it with your favourite editor:

          $ vi foo\n

          or use the following commands:

          echo \"echo Hello! This is my hostname:\" > foo\necho hostname >> foo\n

          The easiest way to run a script is by starting the interpreter and passing the script as a parameter. In the case of our script, the interpreter may either be \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

          $ bash foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

          Congratulations, you just created and started your first shell script!

          A more advanced way of executing your shell scripts is by making them executable on their own, so without invoking the interpreter manually. The system can not automatically detect which interpreter you want, so you need to tell it in some way. The easiest way is by using the so-called \"shebang\" notation, created explicitly for this purpose: you put the following line at the top of your shell script: \"#!/path/to/your/interpreter\".

          You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

          $ which bash\n/bin/bash\n

          We edit our script and change it with this information:

          #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

          Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

          Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

          chmod +x foo\n

          Now you can start your script by simply executing it:

          $ ./foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

          The same technique can be used for all other scripting languages, like Perl and Python.

          Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

          "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg A process running in the background will be processed in the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Modify properties for users"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

          The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

          Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

          To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

          Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

          Through this web portal, you can:

          • browse through the files & directories in your VSC account, and inspect, manage or change them;

          • consult active jobs (across all HPC-UGent Tier-2 clusters);

          • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

          • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

          • open a terminal session directly in your web browser;

          More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

          "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

          All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

          "}, {"location": "web_portal/#login", "title": "Login", "text": "

          When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

          "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

          The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

          Please click \"Authorize\" here.

          This request will only be made once, you should not see this again afterwards.

          "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

          Once logged in, you should see this start page:

          This page includes a menu bar at the top, with buttons on the left providing access to the different features supported by the web portal, as well as a Help menu, your VSC account name, and a Log Out button on the top right, and the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

          If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

          "}, {"location": "web_portal/#features", "title": "Features", "text": "

          We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

          "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

          Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

          The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

          Here you can:

          • Click a directory in the tree view on the left to open it;

          • Use the buttons on the top to:

            • go to a specific subdirectory by typing in the path (via Go To...);

            • open the current directory in a terminal (shell) session (via Open in Terminal);

            • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

            • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

            • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

            • show the owner and permissions in the file listing (via Show Owner/Mode);

          • Double-click a directory in the file listing to open that directory;

          • Select one or more files and/or directories in the file listing, and:

            • use the View button to see the contents (use the button at the top right to close the resulting popup window);

            • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

            • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

            • use the Download button to download the selected files and directories from your VSC account to your local workstation;

            • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

            • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

            • use the Delete button to (permanently!) remove the selected files and directories;

          For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

          "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

          Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

          For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

          "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

          To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

          A new browser tab will be opened that shows all your current queued and/or running jobs:

          You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

          Jobs that are still queued or running can be deleted using the red button on the right.

          Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

          For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

          "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

          To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

          This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

          You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

          Don't forget to actually submit your job to the system via the green Submit button!

          "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

          In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

          "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

          Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

          Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

          To exit the shell session, type exit followed by Enter and then close the browser tab.

          Note that you can not access a shell session after you closed a browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

          "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

          To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

          You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

          Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

          To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

          "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

          See dedicated page on Jupyter notebooks

          "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

          In case of problems with the web portal, it could help to restart the web server running in your VSC account.

          You can do this via the Restart Web Server button under the Help menu item:

          Of course, this only affects your own web portal session (not those of others).

          "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
          • ABAQUS for CAE course
          "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

          X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

          1. A graphical remote desktop that works well over low bandwidth connections.

          2. Copy/paste support from client to server and vice-versa.

          3. File sharing from client to server.

          4. Support for sound.

          5. Printer sharing from client to server.

          6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

          "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

          X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

          X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. This section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

          "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

          After installing the X2Go client, just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

          There are two ways to connect to the login node:

          • Option A: A direct connection to \"login.hpc.uantwerpen.be\". This is the simpler option, the system will decide which login node to use based on a load-balancing algorithm.

          • Option B: You can use the node \"login.hpc.uantwerpen.be\" as SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

          "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

          This is the easier way to set up X2Go: a direct connection to the login node.

          1. Include a session name. This will help you to identify the session if you have more than one, you can choose any name (in our example \"HPC login node\").

          2. Set the login hostname (In our case: \"login.hpc.uantwerpen.be\")

          3. Set the Login name. In the example this is \"vsc20167\", but you must change it to your own VSC account.

          4. Set the SSH port (22 by default).

          5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

            1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

            2. You should look for your private SSH key generated in Generating a public/private key pair. This file has been stored in the directory \"~/.ssh/\" (by default \"id_rsa\"). \".ssh\" is an invisible directory; the Finder will not show it by default. The easiest way to access the folder is by pressing cmd+shift+g, which will allow you to enter the name of the directory you would like to open in Finder. Here, type \"~/.ssh\" and press enter. Choose that file and click on open.

          6. Check \"Try autologin\" option.

          7. Choose XFCE as the Session type. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

            1. [optional]: Set a single application like Terminal instead of XFCE desktop.

          8. [optional]: Change the session icon.

          9. Click the OK button after these changes.

          "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

          This option is useful if you want to resume a previous session or if you want to set explicitly the login node to use. In this case you should include a few more options. Use the same Option A setup but with these changes:

          1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

          2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"ln2.leibniz.uantwerpen.vsc\")

          3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

            1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

            2. Set Host to \"login.hpc.uantwerpen.be\" within \"Proxy Server\" section as well.

            3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key within \"RSA/DSA key\" field within \"Proxy Server\" as you did for the server configuration (The \"RSA/DSA key\" field must be set in both sections)

            4. Click the OK button after these changes.

          "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

          Just click on any session that you already have to start/resume it. It will take a few seconds to open the session the first time. A session is terminated if you log out from the current open session or if you click on the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

          X2Go will keep the session open for you (but only if the login node is not rebooted).

          "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

          If you want to re-connect to the same login node, or resume a previous session, you need to know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

          hostname\n

          This will give you the full hostname (like \"ln2.leibniz.uantwerpen.vsc\", but the hostname in your situation may be slightly different). You should set the same name to resume the session the next time. Just add this full hostname in the \"login hostname\" field of your X2Go session (see Option B: use the login node as SSH proxy).

          "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

          If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select the session and terminate it. Then close that session, choose the XFCE session type again (or whatever you use), and you should have your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

          "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

          The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

          To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

          Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

          After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

          Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking the provided documentation for information on XDMoD use: https://shieldon.ugent.be/xdmod/user_manual/index.php.

          "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

          TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

          Loads MNIST datasets and trains a neural network to recognize hand-written digits.

          Runtime: ~1 min. on 8 cores (Intel Skylake)

          See https://www.tensorflow.org/tutorials/quickstart/beginner

          "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

          Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

          These skills are important to the UAntwerpen-HPC, which operates on RedHat Enterprise Linux. For more information see introduction to HPC.

          The guide aims to make you familiar with the Linux command line environment quickly.

          The tutorial goes through the following steps:

          1. Getting Started
          2. Navigating
          3. Manipulating files and directories
          4. Uploading files
          5. Beyond the basics

          Do not forget Common pitfalls, as this can save you some troubleshooting.

          "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
          • More on the HPC infrastructure.
          • Cron Scripts: run scripts automatically at periodically fixed times, dates, or intervals.
          "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

          Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

          "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

          To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

          First, it's important to make a distinction between two different output channels:

          1. stdout: standard output channel, for regular output

          2. stderr: standard error channel, for errors and warnings

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

          > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

          $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

          >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

          $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

          < feeds the contents of a file to a command's standard input (as if it were piped or typed input). So you would use this to simulate typing into a terminal. < somefile.txt is largely equivalent to cat somefile.txt |.

          One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in that file list when you are done:

          $ find . -name '*.txt' > files\n$ xargs grep banana < files\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

          To redirect the stderr output (warnings, messages), you can use 2>, just like >

          $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

          To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

          $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

          Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

          $ ls | wc -l\n    42\n

          A common pattern is to pipe the output of a command to less so you can examine or search the output:

          $ find . | less\n

          Or to look through your command history:

          $ history | less\n

          You can put multiple pipes in the same line. For example, which cp commands have we run?

          $ history | grep cp | less\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

          The shell will expand certain things, including:

          1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

          2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

          3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

          4. square brackets can be used to list a number of options for a particular character; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.

          "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

          ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

          $ ps -fu $USER\n

          To see all the processes:

          $ ps -elf\n

          To see all the processes in a forest view, use:

          $ ps auxf\n

          The last two will spit out a lot of data, so get in the habit of piping it to less.

          pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

          pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.

          "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

          ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill will send a signal (SIGTERM by default) to the process to ask it to stop.

          $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

          Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignores your signal, you can send it a different signal (SIGKILL) which the OS will use to unceremoniously terminate the process:

          $ kill -9 1234\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

          top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

          To see only your processes, type u and your username after starting top, (you can also do this with top -u $USER ). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

          There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

          To exit top, use q (for 'quit').

          For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

          "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

          ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

          $ ulimit -a\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

          To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

          $ wc example.txt\n      90     468     3189   example.txt\n

          The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

          To only count the number of lines, use wc -l:

          $ wc -l example.txt\n      90    example.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

          grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

          $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

          grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

          "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

          cut is used to pull fields out of files or piped streams. It's a useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV (comma-separated values, so -d ',': delimited by ,) file, you can use the following:

          $ cut -f 1 -d ',' mydata.csv\n
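          And combining it with grep as described above (a sketch, assuming fruit_bowl1.txt from the grep examples is an unquoted CSV file with a count in its second column):

          $ grep banana fruit_bowl1.txt | cut -f 2 -d ','\n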

          "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

          sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

          $ sed 's/oldtext/newtext/g' myfile.txt\n

          By default, sed will just print the results. If you want to edit the file in place, use -i, but check that the results are what you want before you go around destroying your data!
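          A cautious workflow could look like this (myfile.txt as before; the -i.bak variant, a GNU sed feature, keeps a backup copy):

          $ sed 's/oldtext/newtext/g' myfile.txt | less    # preview the changes first\n$ sed -i.bak 's/oldtext/newtext/g' myfile.txt    # edit in place, keeping myfile.txt.bak as a backup\n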

          "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

          awk is a small programming language that can do much more advanced stream editing than sed. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

          First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

          $ awk '{print $4}' mydata.dat\n

          You can use -F ':' to change the delimiter (F for field separator).
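          For example, /etc/passwd is a colon-separated file whose first field is the username, so you can list all usernames with:

          $ awk -F ':' '{print $1}' /etc/passwd\n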

          The next example is used to sum numbers from a field:

          $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

          The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do it for you. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

          However, there are some rules you need to abide by.

          Here is a very detailed guide should you need more information.

          "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

          The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can simply copy-paste it and not worry about it further. It is, however, very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

          #!/bin/sh\n
          #!/bin/bash\n
          #!/usr/bin/env bash\n
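          Putting this together, a minimal sketch of a complete script (the filename hello.sh is just an example):

          #!/bin/bash\n# save as hello.sh, make executable with chmod +x hello.sh, run with ./hello.sh\necho \"Hello, I am $USER\"\necho \"Today is $(date)\"\n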

          "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

          Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

          if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n

          Or you only want to do something if a file exists:

          if [ -f filename ]\nthen\necho \"it exists\"\nfi\n

          Or only if a certain variable is bigger than one:

          if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
          Several pitfalls exist with this syntax: you need spaces surrounding the brackets, and then must start a new statement (put it on its own line, or after a semicolon). It is best to just copy this example and modify it.

          In the initial example, we used -d to test if a directory existed. There are several more checks.
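          For illustration, a few of the other commonly used checks (somefile is a hypothetical filename; see man test for the full list):

          if [ -e somefile ]       # true if somefile exists (file, directory, ...)\nthen\necho \"somefile exists\"\nfi\nif [ -r somefile ] && [ -s somefile ]   # readable and not empty\nthen\necho \"somefile is readable and not empty\"\nfi\n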

          Another useful example, is to test if a variable contains a value (so it's not empty):

          if [ -z $PBS_ARRAYID ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

          The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

          "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

          Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

          Let's look at a simple example:

          for i in 1 2 3\ndo\necho $i\ndone\n
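          A slightly more practical sketch (assuming the current directory contains .txt files):

          for f in *.txt\ndo\necho \"Processing $f\"\nwc -l \"$f\"\ndone\n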

          "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

          Subcommands (also known as command substitution) are used all the time in shell scripts. They store the output of a command in a variable, so it can later be used in a conditional or a loop, for example.

          CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

          In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
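          A small sketch combining a subcommand with the conditionals seen earlier (the threshold of 100 files is arbitrary):

          NUMFILES=$(ls | wc -l)\nif [ $NUMFILES -gt 100 ]\nthen\necho \"There are $NUMFILES files in $(pwd), consider cleaning up\"\nfi\n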

          "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

          Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

          Firstly a useful thing to know for debugging and testing is that you can run any command like this:

          command > output.log 2>&1   # one single output file, both output and errors\n

          If you add > output.log 2>&1 at the end of any command, stdout and stderr will be combined and written to a single file named output.log. Note that the order matters: the redirection to the file must come before 2>&1.

          If you want regular and error output separated you can use:

          command > output.log 2> output.err  # errors in a separate file\n

          this will write regular output to output.log and error output to output.err.

          You can then look for the errors with less or search for specific text with grep.

          In scripts, you can use:

          set -e\n

          This tells the shell to stop executing subsequent commands as soon as any command in the script fails. This is usually what you want, since a failed command typically causes the rest of the script to fail as well.
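          A minimal sketch of how this plays out (inputfile and process_data are hypothetical names):

          #!/bin/bash\nset -e\ncp inputfile $VSC_SCRATCH/      # if this copy fails, the script stops here\ncd $VSC_SCRATCH\n./process_data inputfile        # never reached if the copy failed\n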

          "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

          Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds its exit status. A value of zero means the command completed successfully; any other value signifies that something went wrong. An example use case:

          command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

          If you want certain commands to be executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

          Examples include:

          • modifying your $PS1 (to tweak your shell prompt)

          • printing information about the current session or job environment (echoing environment variables, etc.)

          • selecting a specific cluster to run on with module swap cluster/...

          Some recommendations:

          • Avoid using module load statements in your $HOME/.bashrc file

          • Don't edit your .bashrc file directly: if it contains an error, you might not be able to log in again. To prevent that, use another file to test your changes, and only copy them over once you have verified that they work (see the sketch below).
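          As a minimal sketch of what such additions could look like (the prompt and the message are just examples):

          # added at the end of $HOME/.bashrc\nexport PS1=\"\\w $ \"        # show the current directory in the prompt\necho \"Welcome $USER, scratch is at $VSC_SCRATCH\"\n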

          "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

          When writing scripts to be submitted to the cluster, there are some tricks you need to keep in mind.

          "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
          "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

          The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

          This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

          #PBS -l nodes=1:ppn=1 # single-core\n

          For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

          #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

          We intend to submit it on the long queue:

          #PBS -q long\n

          We request a total running time of 48 hours (2 days).

          #PBS -l walltime=48:00:00\n

          We specify a desired name of our job:

          #PBS -N FreeSurfer_per_subject-time-longitudinal\n
          This specifies mail options:
          #PBS -m abe\n

          1. a means mail is sent when the job is aborted.

          2. b means mail is sent when the job begins.

          3. e means mail is sent when the job ends.

          Joins error output with regular output:

          #PBS -j oe\n

          All of these options can also be specified on the command line, and will override any pragmas present in the script.

          "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
          1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

          2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

          3. How many files and directories are in /tmp?

          4. What's the name of the 5th file/directory in alphabetical order in /tmp?

          5. List all files that start with t in /tmp.

          6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

          7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

          "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

          This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

          "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

          If you receive an error message which contains something like the following:

          No such file or directory\n

          It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

          Try and figure out the correct location using ls, cd and using the different $VSC_* variables.

          "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

          Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

          $ cat some file\nNo such file or directory 'some'\n

          Spaces in filenames are permitted, but they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

          $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

          This is especially error-prone if you are piping results of find:

          $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

          This can be worked around using the -print0 flag:

          $ find . -type f -print0 | xargs -0 cat\n...\n

          But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

          "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

          If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

          $ rm -r ~/$PROJETC/*\n
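          One way to protect yourself against this (a sketch; PROJECT is just an example variable name) is the ${...:?} form of parameter expansion, which makes the shell abort the command with an error message instead of silently substituting an empty string:

          $ rm -r ~/${PROJECT:?PROJECT is empty or unset}/*\n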

          "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

          A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

          $ #rm -r ~/$POROJETC/*\n
          Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

          "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
          $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

          Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

          $ chmod +x script_name.sh\n

          "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

          If you stumble upon an error, don't panic! Read the error output; it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

          If you need help about a certain command, you should consult its so-called \"man page\":

          $ man command\n

          This will open the manual of that command, which contains a detailed explanation of all its options. Exit the manual by pressing 'q'.

          Don't be afraid to contact hpc@uantwerpen.be. They are here to help and will do so for even the smallest of problems!

          "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
          1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

          2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

          3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

          4. basic shell usage

          5. Bash for beginners

          6. MOOC

          Please don't hesitate to contact hpc@uantwerpen.be in case of questions or problems.

          "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

          To get started with the HPC infrastructure, you need to obtain a VSC account; see the HPC manual. Keep in mind that you must keep your private key to yourself!

          You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

          Details on connecting to the HPC infrastructure are available in the connecting section of the HPC manual.

          "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

          To get help:

          1. use the documentation available on the system, through the help, info and man commands (use q to exit).
            help cd \ninfo ls \nman cp \n
          2. use Google

          3. contact hpc@uantwerpen.be in case of problems or questions (even for basic things!)

          "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

          Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@uantwerpen.be.

          "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

          The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

          You use the shell by executing commands, and hitting <enter>. For example:

          $ echo hello \nhello \n

          You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

          To go through previous commands, use <up> and <down>, rather than retyping them.

          "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

          A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

          $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

          "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

          If for any reason you want to stop a command from executing (for example, because it is taking too long or you want to rerun it with different arguments), press Ctrl-C.

          "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

          At the prompt we also have access to shell variables, which have both a name and a value.

          They can be thought of as placeholders for things we need to remember.

          For example, to print the path to your home directory, we can use the shell variable named HOME:

          $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

          This prints the value of this variable.

          "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

          There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

          For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

          $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

          You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

          $ env | sort | grep VSC\n

          But we can also define our own. This is done with the export command (note: variable names are conventionally written in all caps):

          $ export MYVARIABLE=\"value\"\n

          It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

          If we then do

          $ echo $MYVARIABLE\n

          this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

          "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

          You can change what your prompt looks like by redefining the special-purpose variable $PS1.

          For example: to include the current location in your prompt:

          $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

          Note that ~ is a short representation of your home directory.

          To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

          $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

          "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

          One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

          This may lead to surprising results, for example:

          $ export WORKDIR=/tmp/test \n$ cd $WROKDIR\n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

          To understand what's going on here, see the section on cd below.

          The moral here is: be very careful to not use empty variables unintentionally.

          Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

          The -e option will result in the script getting stopped if any command fails.

          The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)

          More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

          "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

          If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

          "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

          Basic information about the system you are logged into can be obtained in a variety of ways.

          We limit ourselves to determining the hostname:

          $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

          And querying some basic information about the Linux kernel:

          $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

          "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
          • Print the full path to your home directory
          • Determine the name of the environment variable to your personal scratch directory
          • What's the name of the system you're logged into? Is it the same for everyone?
          • Figure out how to print the value of a variable without including a newline
          • How do you get help on using the man command?

          The next chapter teaches you how to navigate.

          "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

          Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the UAntwerpen-HPC for a list of available locations.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

          Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

          To figure out where your quota is being spent, the du (disk usage) command can come in useful:

          $ du -sh test\n59M test\n

          Do not (frequently) run du on directories where large amounts of data are stored, since that will:

          1. take a long time

          2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

          Software is provided through so-called environment modules.

          The most commonly used commands are:

          1. module avail: show all available modules

          2. module avail <software name>: show available modules for a specific software name

          3. module list: show list of loaded modules

          4. module load <module name>: load a particular module

          More information is available in section Modules.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

          To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

          Detailed information is available in section submitting your job.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

          Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

          Hint: python -c \"print(sum(range(1, 101)))\"

          • How many modules are available for Python version 3.6.4?
          • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
          • Which cluster modules are available?

          • What's the full path to your personal home/data/scratch directories?

          • Determine how large your personal directories are.
          • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

          Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such commonly used commands be so short to type.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

          To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

          $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

          To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
          $ cp source target\n

          This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

          $ cp -r sourceDirectory target\n

          A final, slightly more advanced example:

          $ cp -a sourceDirectory target\n

          Here we used the same cp command, but with the -a (archive) option, which tells cp to copy directories recursively while preserving timestamps and permissions.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
          $ mkdir directory\n

          which will create a directory with the given name inside the current directory.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
          $ mv source target\n

          mv will move the source path to the destination path. This works for both directories and files.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

          Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

          $ rm filename\n
          rm will remove a file (use rm -r to remove a directory; rm -rf directory will remove a directory and every file inside it without asking for confirmation). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

          You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

          $ rmdir directory\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

          Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

          1. User - a particular user (account)

          2. Group - a particular group of users (may be user-specific group with only one member)

          3. Other - other users in the system

          The permission types are:

          1. Read - For files, this gives permission to read the contents of a file

          2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

          3. Execute - For files, this gives permission to execute a file as though it were a program or script. For directories, it allows users to open the directory and look at the contents.

          Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

          $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          Here, we see that articleTable.csv is a file (the line begins with -) that has read and write permissions for the user vsc40000 (rw-), and read permission for the group mygroup as well as for all other users (r-- and r--).

          The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx), so that user can look into the directory and add or remove files. Users in mygroup can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users can read the file articleTable.csv, but they have no permissions on the directory at all (---), so they cannot look inside it.

          Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the permissions on the directory so that people in the group can write to it:

          $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

          You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

          You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
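          A sketch of that approach, reusing the directory from the example above (the *.csv pattern is arbitrary):

          $ find Project_GoldenDragon -type f -name \"*.csv\" -exec chmod g+r {} +\n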

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

          However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

          $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          This will give the user otheruser permission to write to Project_GoldenDragon.

          Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

          Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

          See https://linux.die.net/man/1/setfacl for more information.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

          Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

          $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

          Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

          $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

          Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

          $ unzip myfile.zip\n

          If we would like to make our own zip archive, we use zip:

          $ zip myfiles.zip myfile1 myfile2 myfile3\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

          Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

          You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

          $ tar -xf tarfile.tar\n

          Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

          $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

          Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

          # cp, ln: <source(s)> <target>\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: <target> <source(s)>\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

          If you pass tar the source files first, the first source file will be overwritten, because it is interpreted as the archive name. You can control the order of arguments of tar if it helps you remember:

          $ tar -c source1 source2 source3 -f tarfile.tar\n
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
          1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

          2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

          3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

          4. Remove the another/test directory with a single command.

          5. Rename test to test2. Move test2/hostname.txt to your home directory.

          6. Change the permission of test2 so only you can access it.

          7. Create an empty job script named job.sh, and make it executable.

          8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

          The next chapter is on uploading files, which is especially important when using the HPC infrastructure.

          "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

          This chapter serves as a guide to navigating within a Linux shell, giving you the essential techniques to traverse directories, which is a very important skill.

          "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

          To print the current directory, use pwd or $PWD:

          $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

          "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

          A very basic and commonly used command is ls, which can be used to list files and directories.

          In its basic usage, it just prints the names of files and directories in the current directory. For example:

          $ ls\nafile.txt some_directory \n

          When provided an argument, it can be used to list the contents of a directory:

          $ ls some_directory \none.txt two.txt\n

          A couple of commonly used options include:

          • detailed listing using ls -l:

            $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • To print the size information in human-readable form, use the -h flag:

            $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • also listing hidden files using the -a flag:

            $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • ordering files by the most recent change using -rt:

            $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

          If you try to use ls on a file that doesn't exist, you will get a clear error message:

          $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
          "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

          To change to a different directory, you can use the cd command:

          $ cd some_directory\n

          To change back to the previous directory you were in, there's a shortcut: cd -

          Using cd without an argument results in returning back to your home directory:

          $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

          "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

          The file command can be used to inspect what type of file you're dealing with:

          $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
          "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

          An absolute filepath starts with / (or a variable whose value starts with /), which is also called the root of the filesystem.

          Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

          A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

          Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

          There are two special relative paths worth mentioning:

          • . is a shorthand for the current directory
          • .. is a shorthand for the parent of the current directory

          You can also use .. when constructing relative paths, for example:

          $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
          "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

          Each file and directory has particular permissions set on it, which can be queried using ls -l.

          For example:

          $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

          The -rw-rw-r-- specifies both the type of file (- for files, d for directories (see the first character)), and the permissions for user/group/others:

          1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
          2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read and write permissions (not execute)
          3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
          4. the 3rd part r-- indicates that other users only have read permissions

          The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

          1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
          2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

          See also the chmod command later in this manual.

          "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

          find will crawl a series of directories and list files matching given criteria.

          For example, to look for the file named one.txt:

          $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

          To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * by adding double quotes, to avoid Bash expanding it (e.g., into afile.txt) before find sees it:

          $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

          A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
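          For example (a sketch; counting the lines in each file that was found):

          $ find . -name \"*.txt\" -exec wc -l {} \\;\n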

          "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
          • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
          • When was your home directory created or last changed?
          • Determine the name of the last changed file in /tmp.
          • See how home directories are organised. Can you access the home directory of other users?

          The next chapter will teach you how to interact with files and directories.

          "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

          To transfer files from and to the HPC, see the section about transferring files in the HPC manual.

          "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

          After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

          For example, you may see an error when submitting a job script that was edited on Windows:

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

          To fix this problem, you should run the dos2unix command on the file:

          $ dos2unix filename\n
          "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

          As we end up in the home directory when connecting, it would be convenient if we could access our data and VO storage. To facilitate this we will create symbolic links (they're like \"shortcuts\" on your desktop) in our home directory, pointing to the respective storage locations:

          $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
          "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

          Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ is the Control key, so ^O means Ctrl-O. The main commands are:

          1. Open (\"Read\"): ^R

          2. Save (\"Write Out\"): ^O

          3. Exit: ^X

          More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

          "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

          rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

          You will need to run rsync from a computer where it is installed. Installing rsync is the easiest on Linux: it comes pre-installed with a lot of distributions.

          For example, to copy a folder with lots of CSV files:

          $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

          will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section above).

          The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

          To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.

          To copy files to your local computer, you can also use rsync:

          $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
          This will copy the folder bioset and its contents on $VSC_DATA to a local folder named local_folder.

          See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

          "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
          1. Download the file /etc/hostname to your local computer.

          2. Upload a file to a subdirectory of your personal $VSC_DATA space.

          3. Create a file named hello.txt and edit it using nano.

          Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

          "}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          You can also check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not be aware of the capital letters in the module name, we performed a case-insensitive search using the \"-i\" option.

          "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": ""}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          You can also check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not be aware of the capital letters in the module name, we performed a case-insensitive search using the \"-i\" option.

          "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          You can also check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the UAntwerpen-HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
          "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

          (more info soon)

          "}]} \ No newline at end of file +{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the UAntwerpen-HPC documentation", "text": "

          Use the menu on the left to navigate, or use the search box on the top right.

          You are viewing documentation intended for people using macOS.

          Use the OS dropdown in the top bar to switch to a different operating system.

          Quick links

          • Getting Started | Getting Access
          • FAQ | Troubleshooting | Best practices | Known issues

          If you find any problems in this documentation, please report them by mail to hpc@uantwerpen.be or open a pull request.

          If you still have any questions, you can contact the UAntwerpen-HPC.

          "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": ""}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

          An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.

          See also: Running batch jobs.

          "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

          When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

          Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

          Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.

          "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

          Modules each come with a suffix that describes the toolchain used to install them.

          Examples:

          • AlphaFold/2.2.2-foss-2021a

          • tqdm/4.61.2-GCCcore-10.3.0

          • Python/3.9.5-GCCcore-10.3.0

          • matplotlib/3.4.2-foss-2021a

          Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

          The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          You can use module avail [search_text] to see which versions on which toolchains are available to use.

          It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

          "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

          When incompatible modules are loaded, you might encounter an error like this:

          {{ lmod_error }}\n

          You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

          Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

          An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          See also: How do I choose the job modules?

          "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

          The 72 hour walltime limit will not be extended. However, you can work around this barrier:

          • Check that all available resources are being used. See also:
            • How many cores/nodes should I request?.
            • My job is slow.
            • My job isn't using any GPUs.
          • Use a faster cluster.
          • Divide the job into more parallel processes.
          • Divide the job into shorter processes, which you can submit as separate jobs (see the sketch after this list).
          • Use the built-in checkpointing of your software.
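
          As a minimal sketch of splitting work into consecutive shorter jobs, you can chain them with PBS-style job dependencies, assuming the scheduler accepts the -W depend option; part1.sh and part2.sh are placeholder job scripts. The second job only starts once the first one has completed successfully.

          $ FIRST=$(qsub part1.sh)\n$ qsub -W depend=afterok:$FIRST part2.sh\n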
          "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

          Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

          When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

          Try requesting a bit more memory than your proportional share, and see if that solves the issue.
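
          As a minimal sketch (the exact options are described in Specifying memory requirements), requesting 16 GB of memory for a 4-core job could look like this in the job script header:

          #PBS -l nodes=1:ppn=4\n#PBS -l mem=16gb\n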

          See also: Specifying memory requirements.

          "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

          When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory limit and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

          It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

          See also: Running interactive jobs.

          "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

          Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

          Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.
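
          As a minimal sketch, submitting to a GPU cluster could look like the following, where joltik is used as an example GPU cluster and gpu_job.sh is a placeholder job script that contains the #PBS -l nodes=1:ppn=24:gpus=2 line shown above:

          $ module swap cluster/joltik\n$ qsub gpu_job.sh\n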

          "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

          There are a few possible reasons why a job can perform worse than expected.

          Is your job using all the available cores you've requested? You can test this by increasing and decreasing the number of cores: If the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

          Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

          Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: The job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
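
          A minimal sketch of this staging pattern in a job script, assuming your input lives under $VSC_DATA and my_program is a placeholder for your own executable:

          # stage the input data on the fast scratch filesystem\ncp -r $VSC_DATA/input $VSC_SCRATCH/input\ncd $VSC_SCRATCH\n\n# run the computation on the scratch copy\n./my_program input results\n\n# copy the results back to the data filesystem\ncp -r $VSC_SCRATCH/results $VSC_DATA/\n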

          "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

          Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

          To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
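
          A minimal sketch of such an MPI job script, where my_mpi_program is a placeholder for your own executable:

          #!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=1:00:00\n\ncd $PBS_O_WORKDIR\n\n# mympirun is provided by the vsc-mympirun module\nmodule load vsc-mympirun\n\n# mympirun works out the number of MPI processes from the job resources\nmympirun ./my_mpi_program\n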

          See also: Multi core jobs/Parallel Computing and Mympirun.

          "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

          For example, we have a simple script (./hello.sh):

          #!/bin/bash \necho \"hello world\"\n

          And we run it like mympirun ./hello.sh --output output.txt.

          To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

          mympirun --output output.txt ./hello.sh\n
          "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

          In practice, it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires. New jobs may be submitted by other users that are assigned a higher priority than your job(s). You can use the squeue --start command to get an estimated start time for your jobs in the queue. Keep in mind that this is just an estimate.

          "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

          When trying to create files, errors like this can occur:

          No space left on device\n

          The error \"No space left on device\" can mean two different things:

          • all available storage quota on the file system in question has been used;
          • the inode limit has been reached on that file system.

          An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

          Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
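
          For example, a directory containing many small files (mydata is a placeholder name) can be packed into a single compressed tar file, and removed only after verifying that the archive is readable, which frees the corresponding inodes:

          $ tar -czf mydata.tar.gz mydata/\n$ tar -tzf mydata.tar.gz > /dev/null && rm -rf mydata/\n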

          If the problem persists, feel free to contact support.

          "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

          NO. You are not allowed to share your VSC account with anyone else; it is strictly personal.

          See https://pintra.uantwerpen.be/bbcswebdav/xid-23610_1

          "}, {"location": "FAQ/#can-i-share-my-data-with-other-uantwerpen-hpc-users", "title": "Can I share my data with other UAntwerpen-HPC users?", "text": "

          Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

          $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc20167 mygroup      40 Apr 12 15:00 dataset.txt\n
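
          If the other user is a member of one of your groups, plain chmod can be used instead of an ACL; note that this grants read access to the entire group rather than to a single user (mygroup and dataset.txt are the names from the example above):

          $ chgrp mygroup dataset.txt\n$ chmod g+r dataset.txt\n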

          For more information about chmod or setfacl, see Linux tutorial.

          "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

          Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

          "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

          Please send an e-mail to hpc@uantwerpen.be that includes:

          • What software you want to install and the required version

          • Detailed installation instructions

          • The purpose for which you want to install the software

          If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
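
          A minimal sketch of such a manual installation in a virtual environment, using the Python module from the examples above and a hypothetical package name mypackage:

          module load Python/3.9.5-GCCcore-10.3.0\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\npip install mypackage\n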

          "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

          On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

          MacOS & Linux (on Windows, only the second part is shown):

          @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

          Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

          "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

          A Virtual Organisation consists of a number of members and moderators. A moderator can:

          • Manage the VO members (but can't access/remove their data on the system).

          • See how much storage each member has used, and set limits per member.

          • Request additional storage for the VO.

          One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VO's (to supervise groups, for example).

          See also: Virtual Organisations.

          "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

          Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

          du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

          The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

          The egrep command will only let entries that match with the specified regular expression [0-9]{3}M|[0-9]G through, which corresponds with files that consume more than 100 MB.
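
          If you prefer a full overview sorted by size instead, you can drop the filter and sort the output (sort -h understands the human-readable sizes that du prints):

          du -h --max-depth 1 $VSC_HOME | sort -h\n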

          "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

          By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

          You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

          "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

          When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

          sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

          A lot of tasks can be performed without sudo, including installing software in your own account.

          Installing software

          • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
          • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
          "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

          Who can I contact?

          • General questions regarding HPC-UGent and VSC: hpc@ugent.be

          • HPC-UGent Tier-2: hpc@ugent.be

          • VSC Tier-1 compute: compute@vscentrum.be

          • VSC Tier-1 cloud: cloud@vscentrum.be

          "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

          Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

          "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

          The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

          "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

          Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

          module load hod\n
          "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

          The hod modules are constructed such that they can be used on the UAntwerpen-HPC login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

          As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

          For example, this will work as expected:

          $ module swap cluster/{{ othercluster }}\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

          Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

          "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

          The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

          $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

          By defining these environment variables, you no longer have to specify --hod-module and --workdir when using hod batch or hod create, even though these options are strictly required.

          If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
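
          For example, to use a subdirectory of your VO scratch directory as the parent working directory for HOD (hod is just an example directory name; use $VSC_SCRATCH instead if you are not in a VO):

          export HOD_BATCH_WORKDIR=$VSC_SCRATCH_VO/hod\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH_VO/hod\n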

          Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

          "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

          After HOD clusters terminate, their local working directory and cluster information are typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

          These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

          You should occasionally clean this up using hod clean:

          $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/{{ defaultcluster }}(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        433253.leibniz         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/433253.leibniz for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/{{ othercluster }}\n\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.{{ othercluster }}.gent.vsc  &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.{{ othercluster }}.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
          Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

          "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

          If you have any questions, or are experiencing problems using HOD, you have a couple of options:

          • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

          • Contact the UAntwerpen-HPC via hpc@uantwerpen.be

          • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

          "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

          Note

          To run a MATLAB program on the UAntwerpen-HPC you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

          Compiling MATLAB programs is only possible on the interactive debug cluster, not on the UAntwerpen-HPC login nodes where resource limits w.r.t. memory and the maximum number of processes are too strict.

          "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

          The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

          Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

          "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

          Compiling MATLAB code can only be done on the interactive debug cluster, since that is the only place where the MATLAB license server can be reached: the other worker nodes cannot contact it, and the login nodes have resource limits that are too strict (see the note above).

          To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

          $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

          After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

          To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

          First, we copy the magicsquare.m example that comes with MATLAB to example.m:

          cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

          To compile a MATLAB program, use mcc -mv:

          mcc -mv example.m\nOpening log file:  /user/antwerpen/201/vsc20167/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/antwerpen/201/vsc20167/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/antwerpen/201/vsc20167/readme.txt\".\nGenerating file \"run_example.sh\".\n
          "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

          To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

          It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

          For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

          "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

          If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

          export _JAVA_OPTIONS=\"-Xmx64M\"\n

          The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

          Another possible issue is that the heap size is too small. This could result in errors like:

          Error: Out of memory\n

          A possible solution to this is by setting the maximum heap size to be bigger:

          export _JAVA_OPTIONS=\"-Xmx512M\"\n
          "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

          MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

          The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

          You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

          parpool.m
          % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

          See also the parpool documentation.

          "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

          Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

          MATLAB_LOG_DIR=<OUTPUT_DIR>\n

          where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

          # create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\n$ export MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

          You should remove the directory at the end of your job script:

          rm -rf $MATLAB_LOG_DIR\n
          "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

          When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

          The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

          export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

          So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

          "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

          All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

          jobscript.sh
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
          "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

          Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

          Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

          "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

          First log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

          $ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'ln2.leibniz.uantwerpen.vsc:6 (vsc20167)' desktop is ln2.leibniz.uantwerpen.vsc:6\n\nCreating default startup script /user/antwerpen/201/vsc20167/.vnc/xstartup\nCreating default config /user/antwerpen/201/vsc20167/.vnc/config\nStarting applications specified in /user/antwerpen/201/vsc20167/.vnc/xstartup\nLog file is /user/antwerpen/201/vsc20167/.vnc/ln2.leibniz.uantwerpen.vsc:6.log\n

          When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

          Note down the details in bold: the hostname (in the example: ln2.leibniz.uantwerpen.vsc) and the (partial) port number (in the example: 6).

          It's important to remember that VNC sessions are persistent. They survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (similar to the terminal multiplexers screen or tmux). This also means you don't have to start vncserver each time you want to connect.

          "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

          You can get a list of running VNC servers on a node with

          $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

          This only displays the running VNC servers on the login node you run the command on.

          To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

          $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/ln2.leibniz.uantwerpen.vsc:6.pid\n.vnc/ln1.leibniz.uantwerpen.vsc:8.pid\n

          This shows that there is a VNC server running on ln2.leibniz.uantwerpen.vsc on port 5906 and another one running on ln1.leibniz.uantwerpen.vsc on port 5908 (see also Determining the source/destination port).

          "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

          The VNC server runs on a login node (in the example above, on ln2.leibniz.uantwerpen.vsc).

          In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

          Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

          To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

          The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

          "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

          The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

          The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

          So, in our running example, both the source and destination ports are 5906.

          "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

          In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.uantwerpen.be (see Setting up the SSH tunnel(s)).

          If the login node you end up on is a different one than the one where your VNC server is running (i.e., ln1.leibniz.uantwerpen.vsc rather than ln2.leibniz.uantwerpen.vsc in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

          In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

          To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

          Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to ln2.leibniz.uantwerpen.vsc, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

          In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

          We will proceed with 12345 as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).
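
          If shuf (part of GNU coreutils) is available on the login node, you can also let it pick a random port in that range for you:

          shuf -i 10000-30000 -n 1\n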

          "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcuantwerpenbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.uantwerpen.be", "text": "

          First, we will set up the SSH tunnel from our workstation to login.hpc.uantwerpen.be.

          Use the settings specified in the sections above:

          • source port: the port on which the VNC server is running (see Determining the source/destination port);

          • destination host: localhost;

          • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

          Execute the following command to set up the SSH tunnel.

          ssh -L 5906:localhost:12345  vsc20167@login.hpc.uantwerpen.be\n

          Replace the source port 5906, destination port 12345 and user ID vsc20167 with your own!

          With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

          Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

          "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

          Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

          You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

          netstat -an | grep -i listen | grep tcp | grep 12345\n

          If you see no matching lines, then the port you picked is still available, and you can continue.

          If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

          $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
          "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

          In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.uantwerpen.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (ln2.leibniz.uantwerpen.vsc in our running example, see Starting a VNC server).

          To do this, run the following command:

          $ ssh -L 12345:localhost:5906 ln2.leibniz.uantwerpen.vsc\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

          With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (ln2.leibniz.uantwerpen.vsc).

          Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

          Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (ln2.leibniz.uantwerpen.vsc) in the command shown above!

          As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

          "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

          You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. You can download the latest version by clicking the top-most folder that has a version number in its name (and does not contain beta). Then download a file ending in TurboVNC64-2.1.2.dmg (the version number can be different) and execute it.

          Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

          When prompted for a password, use the password you used to setup the VNC server.

          When prompted for default or empty panel, choose default.

          If you have an empty panel, you can reset your settings with the following commands:

          xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
          "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

          The VNC server can be killed by running

          vncserver -kill :6\n

          where 6 is the port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

          "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

          You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).

          "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

          All users of Antwerp University Association (AUHA) can request an account on the UAntwerpen-HPC, which is part of the Flemish Supercomputing Centre (VSC).

          See HPC policies for more information on who is entitled to an account.

          The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

          There are two methods for connecting to UAntwerpen-HPC:

          • Using a terminal to connect via SSH.
          • Using the web portal

          The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

          If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of utilizing the HPC-UGent web portal by reading Using the HPC-UGent web portal.

          The UAntwerpen-HPC clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the UAntwerpen-HPC. Access to the UAntwerpen-HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

          "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
          • an SSH public/private key pair can be seen as a lock and a key

          • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

          • the SSH private key is like a physical key: you don't hand it out to other people.

          • anyone who has the key (and the optional password) can unlock the door and log in to the account.

          • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

          Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). To open a Terminal window in macOS, open the Finder and choose

          >> Applications > Utilities > Terminal

          Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on macOS is using the OpenSSH client included with macOS, which you can then also use to log on to the clusters.

          "}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

          Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

          \"Secure\" means that:

          1. the User is authenticated to the System; and

          2. the System is authenticated to the User; and

          3. all data is encrypted during transfer.

          OpenSSH is a FREE implementation of the SSH connectivity protocol. macOS comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

          $ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

          To access the clusters and transfer your files, you will use the following commands:

          1. ssh-keygen: to generate the SSH key pair (public + private key);

          2. ssh: to open a shell on a remote machine;

          3. sftp: a secure equivalent of ftp;

          4. scp: a secure equivalent of the remote copy command rcp.

          "}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

          A key pair might already be present in the default location inside your home directory. Therefore, we first check if a key is available with the \"list short\" (\"ls\") command:

          ls ~/.ssh\n

          If a key-pair is already available, you would normally get:

          authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

          Otherwise, the command will show:

          ls: .ssh: No such file or directory\n

          You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

          You will need to generate a new key pair, when:

          1. you don't have a key pair yet

          2. you forgot the passphrase protecting your private key

          3. your private key was compromised

          4. your key pair is too short or not the right type

          For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key, it is private and should stay private. You should not even copy it to one of your other machines, instead, you should create a new public/private key pair for each machine.

          ssh-keygen -t rsa -b 4096\n

          This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

          Without your key pair, you won't be able to apply for a personal VSC account.

          "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

          Most recent Unix derivatives include by default an SSH agent to keep and manage the user SSH keys. If you use one of these derivatives you must include the new keys into the SSH manager keyring to be able to connect to the HPC cluster. If not, SSH client will display an error message (see Connecting) similar to this:

          Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          This could be fixed using the ssh-add command. You can include the new private keys' identities in your keyring with:

          ssh-add\n

          Tip

          Without extra options ssh-add adds any key located at $HOME/.ssh directory, but you can specify the private key location path as argument, as example: ssh-add /path/to/my/id_rsa.

          Check that your key is available from the keyring with:

          ssh-add -l\n

          After these changes the key agent will keep your SSH key to connect to the clusters as usual.

          Tip

          You should execute ssh-add command again if you generate a new SSH key.

          "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

          Visit https://account.vscentrum.be/

          You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

          Select \"Universiteit Antwerpen\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

          Click Confirm

          You will now be taken to the authentication page of your institute.

          The site is only accessible from within the University of Antwerp domain, so the page won't load from, e.g., home. However, you can also get external access to the University of Antwerp domain using VPN. We refer to the Pintra pages of the ICT Department for more information.

          "}, {"location": "account/#users-of-the-antwerp-university-association-auha", "title": "Users of the Antwerp University Association (AUHA)", "text": "

          All users (researchers, academic staff, etc.) from the higher education institutions associated with University of Antwerp can get a VSC account via the University of Antwerp. There is not yet an automated form to request your personal VSC account.

          Please e-mail the UAntwerpen-HPC staff to get an account (see Contacts information). You will have to provide a public ssh key generated as described above. Please attach your public key (i.e., the file named id_rsa.pub), which you will normally find in your .ssh subdirectory within your HOME directory (i.e., /Users/<username>/.ssh/id_rsa.pub).
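
          To display the contents of your public key so you can verify which file to attach (or copy it into the e-mail), you can print it in a terminal:

          cat ~/.ssh/id_rsa.pub\n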

          After you log in using your University of Antwerp login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

          This file has been stored in the directory \"~/.ssh/\".

          Tip

          As \".ssh\" is an invisible directory, the Finder will not show it by default. The easiest way to access the folder, is by pressing Cmd+Shift+G (or Cmd+Shift+.), which will allow you to enter the name of a directory, which you would like to open in Finder. Here, type \"~/.ssh\" and press enter.

          After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

          "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

          Within one day, you should receive a Welcome e-mail with your VSC account details.

          Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc20167\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

          Now, you can start using the UAntwerpen-HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

          "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

          In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

          1. Create a new public/private SSH key pair from the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH.

          2. Go to https://account.vscentrum.be/django/account/edit

          3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

          4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

          5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

          "}, {"location": "account/#computation-workflow-on-the-uantwerpen-hpc", "title": "Computation Workflow on the UAntwerpen-HPC", "text": "

          A typical Computation workflow will be:

          1. Connect to the UAntwerpen-HPC

          2. Transfer your files to the UAntwerpen-HPC

          3. Compile your code and test it

          4. Create a job script

          5. Submit your job

          6. Wait while

            1. your job gets into the queue

            2. your job gets executed

            3. your job finishes

          7. Move your results

          We'll take you through the different tasks one by one in the following chapters.

          "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

          AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

          See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

          "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

          This chapter focuses specifically on the use of AlphaFold on the UAntwerpen-HPC. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

          • AlphaFold website: https://alphafold.com/
          • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
          • AlphaFold FAQ: https://alphafold.com/faq
          • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
          • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
          • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
            • recording available on YouTube
            • slides available here (PDF)
            • see also https://www.vscentrum.be/alphafold
          "}, {"location": "alphafold/#using-alphafold-on-uantwerpen-hpc", "title": "Using AlphaFold on UAntwerpen-HPC", "text": "

          Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

          $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

          To use AlphaFold, you should load a particular module, for example:

          module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

          We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

          Warning

          When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

          Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

          $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

          The directories there are named after the date on which the data was downloaded, which leaves room for providing updated datasets later.

          At the time of writing, the latest version is 20230310.

          Info

          The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

          The AlphaFold installations we provide have been modified slightly to facilitate their use on the UAntwerpen-HPC.

          "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

          The location of the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

          export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

          Use newest version

          Do not forget to replace 20230310 with a more up-to-date version if one is available.
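
          If you prefer not to hardcode the version, a small sketch like the following picks the most recent dataset directory automatically (this assumes the directory names remain sortable YYYYMMDD dates):

          # pick the most recently dated AlphaFold dataset directory (assumes YYYYMMDD names)\nexport ALPHAFOLD_DATA_DIR=$(ls -d /arcanine/scratch/gent/apps/AlphaFold/[0-9]* | sort | tail -n 1)\necho \"Using AlphaFold data in $ALPHAFOLD_DATA_DIR\"\n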

          "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

          AlphaFold provides a script called run_alphafold.py.

          A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

          The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

          Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

          For more information about the script and options see this section in the official README.

          READ README

          It is strongly advised to read the official README provided by DeepMind before continuing.

          "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

          The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

          Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

          Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
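
          For example, to let both tools use 8 cores, add the following to your job script (a minimal sketch; as the note below explains, using more cores does not always pay off):

          # override the default core counts for hhblits (4) and jackhmmer (8)\nexport ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n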

          Info

          Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

          "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

          The timings below were obtained by running the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

          Using --db_preset=full_dbs, the following runtime data was collected:

          • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
          • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
          • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
          • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

          This highlights a couple of important attention points:

          • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
          • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
          • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

          With --db_preset=casp14, it is clearly more demanding:

          • On doduo, with 24 cores (1 node): still running after 48h...
          • On joltik, 1 V100 GPU + 8 cores: 4h 48min

          This highlights the difference between CPU and GPU performance even more.

          "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

          The following example comes from the official Examples section in the AlphaFold README. The run command is slightly different (see above: Running AlphaFold).

          Do not forget to set up the environment (see above: Setting up the environment).

          "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

          Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

          >sequence_name\n<SEQUENCE>\n

          Then run the following command in the same directory:

          alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

          See AlphaFold output for information about the outputs.

          Info

          For more scenarios see the example section in the official README.

          "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

          The following two example job scripts can be used as a starting point for running AlphaFold.

          The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

          To run the job scripts you need to create a file named T1050.fasta with the following content:

          >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
          source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

          "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

          Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

          Swap to the joltik GPU cluster before submitting it:

          module swap cluster/joltik\n
          AlphaFold-gpu-joltik.sh
          #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
          "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

          Jobscript that runs AlphaFold on CPU using 24 cores on one node.

          AlphaFold-cpu-doduo.sh
          #!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

          In case of problems or questions, don't hesitate to contact us at hpc@uantwerpen.be.

          "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

          Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

          This documentation only covers aspects of using Apptainer on the UAntwerpen-HPC infrastructure.

          "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to prevent the use of Apptainer from impacting other users on the system.

          The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know via hpc@uantwerpen.be.

          "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

          Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the UAntwerpen-HPC infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

          Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, such as the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of how to make an Apptainer/Singularity container image:

          # avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
          "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

          Create a job script like:

          #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

          Create an example my_script.sh (the script that the job above executes):

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n
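
          Make the script executable and submit the job (a sketch; apptainer_job.pbs is just a placeholder name for the job script shown above):

          chmod +x ~/my_script.sh\n# 'apptainer_job.pbs' is whatever file name you gave the job script above\nqsub apptainer_job.pbs\n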
          "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a TensorFlow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

          Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
          #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages); see the example definition file after this list

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before apptainer execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)
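
          As an illustration of the first two requirements, a container definition file could install the required packages as follows (a minimal sketch for a Debian-based image; exact package names may differ for other distributions):

          Bootstrap: docker\nFrom: debian:8\n\n%post\n    apt-get update\n    # Mellanox IB libraries and environment modules, as listed above\n    apt-get install -y infiniband-diags libmlx5-1 libmlx4-1 environment-modules\n

          Such an image can then be built with apptainer build --fakeroot, as shown earlier in this chapter.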

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

          cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

          For example to compile an MPI example:

          module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

          Example MPI job script:

          #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
          "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
          1. Before starting, you should always check:

            • Are there any errors in the script?

            • Are the required modules loaded?

            • Is the correct executable used?

          2. Check your computer requirements upfront, and request the correct resources in your batch job script.

            • Number of requested cores

            • Amount of requested memory

            • Requested network type

          3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

          4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

          5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

          6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted from with cd $PBS_O_WORKDIR is the first thing that needs to be done. You will have your default environment, so don't forget to load the software with module load. A minimal job script illustrating this is sketched after this list.

          7. In case your job is not running, use "checkjob". It will show why your job is not yet running. Note that commands might sometimes time out when the scheduler is overloaded.

          8. Submit your job and wait (be patient) ...

          9. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

          10. The runtime is limited by the maximum walltime of the queues.

          11. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

          12. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

          13. And above all, do not hesitate to contact the UAntwerpen-HPC staff at hpc@uantwerpen.be. We're here to help you.
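
          To illustrate points 2 and 6, a minimal job script typically looks like the sketch below (the module name, resources and program are placeholders that you must adapt to your own case):

          #!/bin/bash\n# request 4 cores on 1 node, 1 hour of walltime and 4 GB of memory\n#PBS -N my_job\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=01:00:00\n#PBS -l mem=4gb\n\n# the job starts in $VSC_HOME, so move to the directory the job was submitted from\ncd $PBS_O_WORKDIR\n\n# load the software you need (placeholder module name)\nmodule load SomeProgram/1.0-foss-2024a\n\n./my_program\n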

          "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

          All nodes in the UAntwerpen-HPC cluster are running the "CentOS Linux release 7.8.2003 (Core)" Operating system, which is a specific version of RedHat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the UAntwerpen-HPC must first be compiled for CentOS Linux release 7.8.2003 (Core). It also means that you first have to install all the required external software packages on the UAntwerpen-HPC.

          Most commonly used compilers are already pre-installed on the UAntwerpen-HPC and can be used straight away. Many popular external software packages that are regularly used in the scientific community are also pre-installed.

          "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-uantwerpen-hpc", "title": "Check the pre-installed software on the UAntwerpen-HPC", "text": "

          To check all the available modules and their version numbers that are pre-installed on the UAntwerpen-HPC, enter:

          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          Or, when you want to check whether a specific software package, compiler or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not be aware of the capital letters in the module name, we searched case-insensitively using the "-i" option.

          When your required application is not available on the UAntwerpen-HPC please contact any UAntwerpen-HPC member. Be aware of potential \"License Costs\". \"Open Source\" software is often preferred.

          "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

          To port a software-program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., RedHat Enterprise Linux on our UAntwerpen-HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

          In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

          In some cases software, usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

          Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

          Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

          Porting your code to the CentOS Linux release 7.8.2003 (Core) platform is the responsibility of the end-user.

          "}, {"location": "compiling_your_software/#compiling-and-building-on-the-uantwerpen-hpc", "title": "Compiling and building on the UAntwerpen-HPC", "text": "

          Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

          All the UAntwerpen-HPC nodes run the same version of the Operating System, i.e. CentOS Linux release 7.8.2003 (Core). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

          A typical process looks like:

          1. Copy your software to the login-node of the UAntwerpen-HPC

          2. Start an interactive session on a compute node;

          3. Compile it;

          4. Test it locally;

          5. Generate your job scripts;

          6. Test it on the UAntwerpen-HPC

          7. Run it (in parallel);

          We assume you've copied your software to the UAntwerpen-HPC. The next step is to request your private compute node.

          $ qsub -I\nqsub: waiting for job 433253.leibniz to start\n
          "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

          Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

          We now list the directory and explore the contents of the \"hello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

          hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include \"stdio.h\"\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\n}\n

          The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

          We first need to compile this C-file into an executable with the gcc-compiler.

          First, check the command line options for "gcc" (GNU C compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

          $ gcc -help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc20167 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc20167  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc20167  130 Sep 16 11:39 hello.pbs*\n

          A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

          Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

          $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

          It seems to work, now run it on the UAntwerpen-HPC

          qsub hello.pbs\n
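
          The hello.pbs job script in the examples directory wraps this executable; it looks roughly like the sketch below (the actual file may differ slightly):

          #!/bin/bash\n#PBS -l walltime=00:15:00\n\ncd $PBS_O_WORKDIR\n./hello\n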

          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          List the directory and explore the contents of the \"mpihello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc20167 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc20167 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc20167 304 Sep 16 13:55 mpihello.pbs\n

          mpihello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nmain(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\n}\n

          The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

          Check the command line options for "mpicc" (the MPI compiler wrapper for the GNU C compiler), then compile and list the contents of the directory again:

          mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

          A new file \"hello\" has been created. Note that this program has \"execute\" rights.

          Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the UAntwerpen-HPC.

          qsub mpihello.pbs\n
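
          The mpihello.pbs job script requests multiple cores and starts the program through an MPI launcher, roughly along these lines (a sketch; the actual file may request different resources or use a different launcher):

          #!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=00:05:00\n\ncd $PBS_O_WORKDIR\n# load the same toolchain module that was used to compile mpihello (e.g. foss or intel)\nmodule load foss\nmpirun ./mpihello\n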
          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

          We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          We will compile this C/MPI -file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

          module purge\nmodule load intel\n

          Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

          mpiicc -o mpihello mpihello.c\nls -l\n

          Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the UAntwerpen-HPC.

          qsub mpihello.pbs\n

          Note: The Antwerp University Association (AUHA) only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

          Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Hereafter the overview for C, C++ and Fortran compilers.

          Language | Sequential (GNU) | Sequential (Intel) | Parallel MPI (GNU) | Parallel MPI (Intel)
          C | gcc | icc | mpicc | mpiicc
          C++ | g++ | icpc | mpicxx | mpiicpc
          Fortran | gfortran | ifort | mpif90 | mpiifort
          "}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

          Before you can really start using the UAntwerpen-HPC clusters, there are several things you need to do or know:

          1. You need to log on to the cluster using an SSH client to one of the login nodes or by using the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

          2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

          3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

          4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

          "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

          Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

          VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

          All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

          • Use a VPN connection to connect to the University of Antwerp network (recommended).

          • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your University of Antwerp account.

            • While this web connection is active new SSH sessions can be started.

            • Active SSH sessions will remain active even when this web page is closed.

          • Contact your HPC support team (via hpc@uantwerpen.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

          Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

          ssh_exchange_identification: read: Connection reset by peer\n
          "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

          The remaining content in this chapter is primarily aimed at people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

          If you have any issues connecting to the UAntwerpen-HPC after you've followed these steps, see Issues connecting to login node to troubleshoot. When connecting from outside Belgium, you need a VPN client to connect to the network first.

          "}, {"location": "connecting/#connect", "title": "Connect", "text": "

          Open up a terminal and enter the following command to connect to the UAntwerpen-HPC. You can open a terminal by navigating to Applications and then Utilities in the Finder and opening Terminal.app, or by entering Terminal in Spotlight Search.

          ssh vsc20167@login.hpc.uantwerpen.be\n

          Here, user vsc20167 wants to make a connection to the \"Leibniz\" cluster at University of Antwerp via the login node \"login.hpc.uantwerpen.be\", so replace vsc20167 with your own VSC id in the above command.

          The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

          A possible error message you can get if you previously saved your private key somewhere else than the default location ($HOME/.ssh/id_rsa):

          Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          In this case, use the -i option for the ssh command to specify the location of your private key. For example:

          ssh -i /home/example/my_keys vsc20167@login.hpc.uantwerpen.be\n

          Congratulations, you're on the UAntwerpen-HPC infrastructure now! To find out where you have landed you can print the current working directory:

          $ pwd\n/user/antwerpen/201/vsc20167\n

          Your new private home directory is \"/user/antwerpen/201/vsc20167\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the UAntwerpen-HPC.

          $ cd /apps/antwerpen/tutorials\n$ ls\nIntro-HPC/\n

          This directory currently contains all training material for the Introduction to the UAntwerpen-HPC. More relevant training material to work with the UAntwerpen-HPC can always be added later in this directory.

          You can now explore the content of this directory with the "ls -l" (list long) and the "cd" (change directory) commands.

          As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

          $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

          This directory contains:

          1. This HPC Tutorial (in either a Mac, Linux or Windows version).

          2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

          cd examples\n

          Tip

          Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

          Tip

          For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

          The first action is to copy the contents of the UAntwerpen-HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

          cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n
          Upon connection, you will get a welcome message containing your last login timestamp and some pointers to information about the system. On Leibniz, the system will also show your disk quota.

          Last login: Mon Feb  2 17:58:13 2015 from mylaptop.uantwerpen.be\n\n---------------------------------------------------------------\n\nWelcome to LEIBNIZ !\n\nUseful links:\n  https://vscdocumentation.readthedocs.io\n  https://vscdocumentation.readthedocs.io/en/latest/antwerp/tier2_hardware.html\n  https://www.uantwerpen.be/hpc\n\nQuestions or problems? Do not hesitate and contact us:\n  hpc@uantwerpen.be\n\nHappy computing!\n\n---------------------------------------------------------------\n\nYour quota is:\n\n                   Block Limits\n   Filesystem       used      quota      limit    grace\n   user             740M         3G       3.3G     none\n   data           3.153G        25G      27.5G     none\n   scratch        12.38M        25G      27.5G     none\n   small          20.09M        25G      27.5G     none\n\n                   File Limits\n   Filesystem      files      quota      limit    grace\n   user            14471      20000      25000     none\n   data             5183     100000     150000     none\n   scratch            59     100000     150000     none\n   small            1389     100000     110000     none\n\n---------------------------------------------------------------\n

          You can exit the connection at anytime by entering:

          $ exit\nlogout\nConnection to login.hpc.uantwerpen.be closed.\n

          tip: Setting your Language right

          You may encounter a warning message similar to the following one during connecting:

          perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
          or any other error message complaining about the locale.

          This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

          LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

          A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier. Open the .bashrc on your local machine with your favourite editor and add the following lines:

          $ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

          tip: vi

          To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can easily exit vi by entering: \"ESC :wq\" To exit vi without saving your changes, enter \"ESC:q!\"

          or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

          echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

          You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

          "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

          Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is by using scp or sftp via the secure OpenSSH protocol. macOS ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          "}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

          Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the UAntwerpen-HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

          It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.

          Open an additional terminal window and check that you're working on your local machine.

          $ hostname\n<local-machine-name>\n

          If you're still using the terminal that is connected to the UAntwerpen-HPC, close the connection by typing \"exit\" in the terminal window.

          For example, we will copy the (local) file \"localfile.txt\" to your home directory on the UAntwerpen-HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc20167\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc20167@login.hpc.uantwerpen.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

          $ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc20167@login.hpc.uantwerpen.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

          Connect to the UAntwerpen-HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

          $ pwd\n/user/antwerpen/201/vsc20167\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

          The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-macOS-Antwerpen.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

          First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

          $ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc20167 Sep 11 09:53 intro-HPC-macOS-Antwerpen.pdf\n

          Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

          $ scp vsc20167@login.hpc.uantwerpen.be:./docs/intro-HPC-macOS-Antwerpen.pdf .\nintro-HPC-macOS-Antwerpen.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

          The file has been copied from the HPC to your local computer.

          It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

          scp -r dataset vsc20167@login.hpc.uantwerpen.be:scratch\n

          If you don't use the -r option to copy a directory, you will run into the following error:

          $ scp dataset vsc20167@login.hpc.uantwerpen.be:scratch\ndataset: not a regular file\n
          "}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

          The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

          The sftp is an equivalent of the ftp command, with the difference that it uses the secure ssh protocol to connect to the clusters.

          One easy way of starting a sftp session is

          sftp vsc20167@login.hpc.uantwerpen.be\n

          Typical and popular commands inside an sftp session are:

          Command | Meaning
          cd ~/examples/fibo | Move to the examples/fibo subdirectory on the remote machine (i.e., the UAntwerpen-HPC).
          ls | Get a list of the files in the current directory on the UAntwerpen-HPC.
          get fibo.py | Copy the file \"fibo.py\" from the UAntwerpen-HPC.
          get tutorial/HPC.pdf | Copy the file \"HPC.pdf\" from the UAntwerpen-HPC, which is in the \"tutorial\" subdirectory.
          lcd test | Move to the \"test\" subdirectory on your local machine.
          lcd .. | Move up one level in the local directory.
          lls | Get a local directory listing.
          put test.py | Copy the local file test.py to the UAntwerpen-HPC.
          put test1.py test2.py | Copy the local file test1.py to the UAntwerpen-HPC and rename it to test2.py.
          bye | Quit the sftp session.
          mget *.cc | Copy all the remote files with extension \".cc\" to the local directory.
          mput *.h | Copy all the local files with extension \".h\" to the UAntwerpen-HPC.
          "}, {"location": "connecting/#using-a-gui-cyberduck", "title": "Using a GUI (Cyberduck)", "text": "

          Cyberduck is a graphical alternative to the scp command. It can be installed from https://cyberduck.io.

          This is the one-time setup you will need to do before connecting:

          1. After starting Cyberduck, the Bookmark tab will show up. To add a new bookmark, click on the \"+\" sign on the bottom left of the window. A new window will open.

          2. In the drop-down menu on top, select \"SFTP (SSH File Transfer Protocol)\".

          3. In the \"Server\" field, type in login.hpc.uantwerpen.be. In the \"Username\" field, type in your VSC account id (this looks like vsc20167).

          4. Select the location of your SSH private key in the \"SSH Private Key\" field.

          5. Finally, type in a name for the bookmark in the \"Nickname\" field and close the window by pressing on the red circle in the top left corner of the window.

          To open the connection, click on the \"Bookmarks\" icon (which resembles an open book) and double-click on the bookmark you just created.

          "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

          See the section on rsync in chapter 5 of the Linux intro manual.

          "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

          It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

          For instance, if you want to switch to the login node named ln2.leibniz.uantwerpen.vsc, you can use the following command while you are connected to the ln1.leibniz.uantwerpen.vsc login node on the HPC:

          ssh ln2.leibniz.uantwerpen.vsc\n
          This is also possible the other way around.

          If you want to find out which login host you are connected to, you can use the hostname command.

          $ hostname\nln2.leibniz.uantwerpen.vsc\n$ ssh ln1.leibniz.uantwerpen.vsc\n\n$ hostname\nln1.leibniz.uantwerpen.vsc\n

          Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects (see the short example after this list). You can find more information on how to use these tools here (or on other online sources):

          • screen
          • tmux
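
          For example, a basic tmux workflow on a login node looks like this (a short sketch):

          # start a named tmux session on the login node\ntmux new -s work\n# ... run commands, then detach with Ctrl-b followed by d ...\n# later, after reconnecting to the same login node, re-attach with:\ntmux attach -t work\n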
          "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

          It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high-availability setup, users should always add their cron scripts on the same login node to avoid duplicate cron jobs.

          To create a new cron script, first log in to an HPC-UGent login node as usual with your VSC account (see section Connecting).

          Check if any cron script is already set in the current login node with:

          crontab -l\n

          At this point you can add or edit (with the vi editor) any cron script by running the command:

          crontab -e\n
          "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
           15 5 * * * ~/runscript.sh >& ~/job.out\n

          where runscript.sh has these lines in this example:

          runscript.sh
          #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

          In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.

          Please note that you should log in to the same login node to edit your previously created crontab entries. If you end up on a different one, you can always jump from one login node to another with:

          ssh gligar07    # or gligar08\n
          "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

          You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

          EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

          "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

          For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

          • applying custom patches to the software that only you or your group are using

          • evaluating new software versions prior to requesting a central software installation

          • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

          "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

          Before you use EasyBuild, you need to configure it:

          "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

          This is where EasyBuild can find software sources:

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
          • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

          • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

          "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

          This directory is where EasyBuild will build software in. To have good performance, this needs to be on a fast filesystem.

          export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

          On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
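
          For example, to build in memory on a workernode (a one-line sketch):

          export EASYBUILD_BUILDPATH=/dev/shm/$USER\n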

          "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

          This is where EasyBuild will install the software (and accompanying modules) to.

          For example, to let it use $VSC_DATA/easybuild, use:

          export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

          Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

          Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

          To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.

          "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

          Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

          module load EasyBuild\n
          "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

          EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

          $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

          For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

          eb example-1.2.1-foss-2024a.eb --robot\n
          "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

          To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

          To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

          eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

          To try to install example v1.2.5 with a different compiler toolchain:

          eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
          "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

          To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

          "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

          To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

          module use $EASYBUILD_INSTALLPATH/modules/all\n

          It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux.
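
          A minimal sketch of what such a .bashrc snippet could look like, assuming the configuration used in the examples above (adapt the paths to your own setup):

          # EasyBuild configuration (example values, adapt to your own setup)\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n# make the modules installed with EasyBuild available for loading\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n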

          "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

          As UAntwerpen-HPC system administrators, we often observe that the UAntwerpen-HPC resources are not used optimally (or wisely). For example, we regularly notice that several cores on a compute node sit idle because a sequential program uses only one core on the node, or that users run I/O-intensive applications on nodes with \"slow\" network connections.

          Users often run their jobs without specifying PBS job parameters. Such a job automatically uses the default parameters, which are rarely the optimal ones. This can slow down your application and also block UAntwerpen-HPC resources for other users.

          Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the UAntwerpen-HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

          There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The UAntwerpen-HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

          Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

          Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

          This chapter shows you how to measure:

          1. Walltime
          2. Memory usage
          3. CPU usage
          4. Disk (storage) needs
          5. Network bottlenecks

          First, we allocate a compute node and move to our relevant directory:

          qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

          One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

          The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

          Test the time command:

          $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

          It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

          It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

          The walltime can be specified in a job script as:

          #PBS -l walltime=3:00:00:00\n

          or on the command line

          qsub -l walltime=3:00:00:00\n

          It is recommended to always specify the walltime for a job.
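
          As a worked example (with a hypothetical measured run time): if your application takes about 2 hours and 30 minutes on the slowest compute node, adding a 20% margin gives 3 hours, so you would request:

          #PBS -l walltime=3:00:00\n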

          "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

          In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

          The \"eat_mem\" application in the HPC examples directory just consumes and then releases memory, for the purpose of this test. It has one parameter, the amount of gigabytes of memory which needs to be allocated.

          First compile the program on your machine and then test it for 1 GB:

          $ gcc -o eat_mem eat_mem.c\n$ ./eat_mem 1\nConsuming 1 gigabyte of memory.\n
          "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

          The first step is to be aware of the amount of free memory available on your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We use the \"-m\" option to express the results in megabytes and the \"-t\" option to get totals.

          $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

          It is important to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amounts of used and free memory (i.e., 4.7 GB used and another 11.2 GB free here).

          It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

          "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

          The \"Monitor\" tool monitors applications in terms of memory and CPU usage, as well as the size of temporary files. Note that currently only single node jobs are supported, MPI support may be added in a future release.

          To start using monitor, first load the appropriate module. Then we study the \"eat_mem.c\" program and compile it:

          $ module load monitor\n$ cat eat_mem.c\n$ gcc -o eat_mem eat_mem.c\n

          Starting a program to monitor is very straightforward; you just add the \"monitor\" command before the regular command line.

          $ monitor ./eat_mem 3\ntime (s) size (kb) %mem %cpu\nConsuming 3 gigabyte of memory.\n5  252900 1.4 0.6\n10  498592 2.9 0.3\n15  743256 4.4 0.3\n20  988948 5.9 0.3\n25  1233612 7.4 0.3\n30  1479304 8.9 0.2\n35  1723968 10.4 0.2\n40  1969660 11.9 0.2\n45  2214324 13.4 0.2\n50  2460016 14.9 0.2\n55  2704680 16.4 0.2\n60  2950372 17.9 0.2\n65  3167280 19.2 0.2\n70  3167280 19.2 0.2\n75  9264  0 0.5\n80  9264  0 0.4\n

          Whereby:

          1. The first column shows you the elapsed time in seconds. By default, all values will be displayed every 5\u00a0seconds.
          2. The second column shows you the used memory in kb. We note that the memory slowly increases up to just over 3\u00a0GB (3GB is 3,145,728\u00a0KB), and is released again.
          3. The third column shows the memory utilisation, expressed in percentages of the full available memory. At full memory consumption, 19.2% of the memory was being used by our application. With the free command, we have previously seen that we had a node of 16\u00a0GB in this example. 3\u00a0GB is indeed more or less 19.2% of the full available memory.
          4. The fourth column shows you the CPU utilisation, expressed in percentages of a full CPU load. As there are no computations done in our exercise, the value remains very low (i.e.\u00a00.2%).

          Monitor will write the CPU usage and memory consumption of the simulation to standard error.

          By default, monitor samples the program's metrics every 5 seconds. Since monitor's output may interfere with that of the program being monitored, it is often convenient to use a\u00a0log file. It can be specified as follows:

          $ monitor -l test1.log eat_mem 2\nConsuming 2 gigabyte of memory.\n$ cat test1.log\n

          For long-running programs, it may be convenient to limit the output to, e.g., the last minute of the program's execution. Since monitor provides metrics every 5 seconds, this implies we want to limit the output to the last 12 values to cover a minute:

          $ monitor -l test2.log -n 12 eat_mem 4\nConsuming 4 gigabyte of memory.\n

          Note that this option is only available when monitor writes its metrics to a\u00a0log file, not when standard error is used.

          The interval at\u00a0which monitor will show the metrics can be modified by specifying delta, the sample rate:

          $ monitor -d 1 ./eat_mem 3\nConsuming 3 gigabyte of memory.\n

          Monitor will now print the program's metrics every second. Note that the\u00a0minimum delta value is 1\u00a0second.

          Alternative options to monitor the memory consumption are the \"top\" or the \"htop\" command.

          top

          provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

          htop

          is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.

          "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

          Once you have a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to add a margin of about 10%.

          Sequential or single-node applications:

          The maximum amount of physical memory used by the job can be specified in a job script as:

          #PBS -l mem=4gb\n

          or on the command line

          qsub -l mem=4gb\n

          This setting is ignored if the number of nodes is not\u00a01.

          Parallel or multi-node applications:

          When you are running a parallel application over multiple cores, you can also specify the memory requirements per processor (pmem). This directive specifies the maximum amount of physical memory used by any process in the job.

          For example, if the job would run four processes and each would use up to 2 GB (gigabytes) of memory, then the memory directive would read:

          #PBS -l pmem=2gb\n

          or on the command line

          $ qsub -l pmem=2gb\n

          (and of course this would need to be combined with a CPU cores directive such as nodes=1:ppn=4). In this example, you request 8\u00a0GB of memory in total on the node.
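
          Put together, a minimal sketch of the relevant job script directives for this example could look as follows (the same resource requests can also be combined on the qsub command line):

          #PBS -l nodes=1:ppn=4\n#PBS -l pmem=2gb\n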

          "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

          Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required number of cores and nodes has been properly specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

          "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

          The number of cores and nodes that a user should request depends fully on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

          The /proc/cpuinfo file stores info about your CPU architecture, such as the number of CPUs, threads and cores, information about CPU caches, the CPU family and model, and much more. So, if you want to detect how many cores are available on a specific machine:

          $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

          Or if you want to see it in a more readable format, execute:

          $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
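
          If you just want the number of cores as a single value, counting the matching lines is a convenient shorthand (on most Linux systems, the nproc command gives the same result):

          $ grep -c processor /proc/cpuinfo\n8\n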

          Note

          Unless you want information about the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

          In order to specify the number of nodes and the number of processors per node in your job script, use:

          #PBS -l nodes=N:ppn=M\n

          or with equivalent parameters on the command line

          qsub -l nodes=N:ppn=M\n

          This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

          Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.
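
          As an illustration, a minimal sketch of a job script for a multi-threaded (OpenMP) application that uses all requested cores on a single node; my_openmp_app is a hypothetical executable:

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=1:00:00\n\ncd $PBS_O_WORKDIR\n# use one thread per core assigned to the job ($PBS_NODEFILE lists one line per core)\nexport OMP_NUM_THREADS=$(wc -l < $PBS_NODEFILE)\n./my_openmp_app\n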

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

          The previously used \"monitor\" tool also shows the overall CPU-load. The \"eat_cpu\" program performs a multiplication of 2 randomly filled a (1500 \\times 1500) matrices and is just written to consume a lot of \"cpu\".

          We first load the monitor modules, study the \"eat_cpu.c\" program and compile it:

          $ module load monitor\n$ cat eat_cpu.c\n$ gcc -o eat_cpu eat_cpu.c\n

          And then start to monitor the eat_cpu program:

          $ monitor -d 1 ./eat_cpu\ntime  (s) size (kb) %mem %cpu\n1  52852  0.3 100\n2  52852  0.3 100\n3  52852  0.3 100\n4  52852  0.3 100\n5  52852  0.3  99\n6  52852  0.3 100\n7  52852  0.3 100\n8  52852  0.3 100\n

          We notice that the program keeps its CPU nicely busy at 100%.

          Some processes spawn one or more sub-processes. In that case, the metrics shown by monitor are aggregated over the process and all of its sub-processes (recursively). The reported CPU usage is the sum of all these processes, and can thus exceed 100%.

          Some (well, since this is a UAntwerpen-HPC Cluster, we hope most) programs use more than one core to perform their computations. Hence, it should not come as a surprise that the CPU usage is reported as larger than 100%. When programs of this type are running on a computer with n cores, the CPU usage can go up to (\\text{n} \\times 100\\%).

          This could also be monitored with the htop command:

          htop\n
          Example output:
            1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

          The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the CPU utilisation per processor with monitor and htop.
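
          Instead of using 4 separate terminals, you could also start the 4 instances in the background from a single shell (a sketch, assuming eat_cpu is in the current directory):

          # start 4 instances in the background, inspect with htop, then clean them up\nfor i in {1..4}; do ./eat_cpu & done\nkill %1 %2 %3 %4\n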

          If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by htop found your process active on the CPU. The rest of the time, your application was waiting. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node sit idle without reason.

          But how can you maximise?

          1. Configure your software (e.g., to use exactly the available number of processors in a node).
          2. Develop your parallel program in a smart way.
          3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
          4. Correct your request for CPUs in your job script.
          "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

          On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

          The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

          The load averages differ from CPU percentage in two significant ways:

          1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
          2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
          "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

          What is the \"optimal load\" rule of thumb?

          The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load should be between 0.7 and 1.0 per processor.

          In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

          Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time, might be more than one per processor.

          The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

          1. When you are running computational intensive applications, one application per processor will generate the optimal load.
          2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

          The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking which configuration gives the highest throughput. There is, however, currently no way on the UAntwerpen-HPC to dynamically specify the maximum number of applications that run per core. The UAntwerpen-HPC scheduler will not launch more than one process per core.

          How the cores are spread out over CPUs does not matter as far as the load is concerned. Two quad-cores perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. For these purposes, it is all eight cores.

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

          The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

          The uptime command shows us the average load:

          $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

          Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

          $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
          You can also read the load averages in the htop output.
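
          The raw values can also be read directly from /proc/loadavg (illustrative output shown; the first three numbers are the 1-, 5- and 15-minute load averages):

          $ cat /proc/loadavg\n2.60 0.93 0.58 4/323 22350\n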

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the UAntwerpen-HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node sit idle without reason.

          But how can you maximise?

          1. Profile your software to improve its performance.
          2. Configure your software (e.g., to use exactly the available number of processors in a node).
          3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
          4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
          5. Correct your request for CPUs in your job script.

          And then check again.

          "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

          Some programs generate intermediate or output files, the size of which may also be a useful metric.

          Remember that your available disk space on the UAntwerpen-HPC online storage is limited, and that environment variables pointing to these directories are available (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

          We first load the monitor modules, study the \"eat_disk.c\" program and compile it:

          $ module load monitor\n$ cat eat_disk.c\n$ gcc -o eat_disk eat_disk.c\n

          The monitor tool provides an option (-f) to display the size of one or more files:

          $ monitor -f $VSC_SCRATCH/test.txt ./eat_disk\ntime (s) size (kb) %mem %cpu\n5  1276  0 38.6 168820736\n10  1276  0 24.8 238026752\n15  1276  0 22.8 318767104\n20  1276  0 25 456130560\n25  1276  0 26.9 614465536\n30  1276  0 27.7 760217600\n...\n

          Here, the size of the file \"test.txt\" in directory $VSC_SCRATCH will be monitored. Files can be specified by absolute as well as relative path, and multiple files are separated by \",\".
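
          For example, to monitor two output files at once (a sketch; out1.dat and out2.dat are hypothetical file names):

          monitor -f $VSC_SCRATCH/out1.dat,$VSC_SCRATCH/out2.dat ./eat_disk\n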

          It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to the section How much disk space do I get? on quotas for checking your quota and for tools to find out which files consume your \"quota\".
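
          One possible way to find out which directories consume most of your quota is to use du (shown here for $VSC_DATA; this can take a while on large directory trees):

          $ du -h --max-depth=1 $VSC_DATA | sort -h\n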

          Several actions can be taken, to avoid storage problems:

          1. Be aware of all the files that are generated by your program. Also check out the hidden files.
          2. Check your quota consumption regularly.
          3. Clean up your files regularly.
          4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, move your files to the $VSC_DATA directories (see the sketch after this list).
          5. Make sure your programs clean up their temporary files after execution.
          6. Move your output results to your own computer regularly.
          7. Anyone can request more disk space from the UAntwerpen-HPC staff, but you will have to duly justify your request.
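
          A minimal sketch of the staging approach mentioned in point 4, assuming a hypothetical I/O-intensive program my_io_app and an input file input.dat stored in $VSC_DATA:

          #!/bin/bash\n#PBS -l walltime=2:00:00\n\n# stage the program and input data to the fast node-local scratch directory\ncp $VSC_DATA/my_io_app $VSC_DATA/input.dat $VSC_SCRATCH_NODE/\ncd $VSC_SCRATCH_NODE\n\n# do the heavy reading/writing locally\n./my_io_app input.dat > output.dat\n\n# copy the results back once, at the end of the job\ncp output.dat $VSC_DATA/\n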
          "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

          Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that they lose a lot of time on inter-process communication.

          Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high-bandwidth, low-latency network that enables large parallel jobs to run as efficiently as possible.

          The parameter to add in your job script would be:

          #PBS -l ib\n

          If, for some reason, a user is fine with the gigabit Ethernet network, they can specify:

          #PBS -l gbe\n
          "}, {"location": "fine_tuning_job_specifications/#some-more-tips-on-the-monitor-tool", "title": "Some more tips on the Monitor tool", "text": ""}, {"location": "fine_tuning_job_specifications/#command-lines-arguments", "title": "Command Lines arguments", "text": "

          Many programs, e.g., MATLAB, take command line options. To make sure these do not interfere with those of monitor and vice versa, the program can for instance be started in the following way:

          $ monitor -delta 60 -- matlab -nojvm -nodisplay computation.m\n

          The use of -- will ensure that monitor does not get confused by MATLAB's -nojvm and -nodisplay options.

          "}, {"location": "fine_tuning_job_specifications/#exit-code", "title": "Exit Code", "text": "

          Monitor will propagate the exit code of the program it is watching. Suppose the latter ends normally, then monitor's exit code will be 0. On the other hand, when the program terminates abnormally with a non-zero exit code, e.g., 3, then this will be monitor's exit code as well.

          When monitor terminates in an abnormal state, for instance if it can't create the log file, its exit code will be 65. If this interferes with an exit code of the program to be monitored, it can be modified by setting the environment variable MONITOR_EXIT_ERROR to a more suitable value.
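
          For example, to check the propagated exit code after a monitored run, and to remap monitor's own error exit code if the default of 65 clashes with your program's exit codes (the value 99 is just an arbitrary example):

          # check the exit code of the monitored program (metric output of monitor omitted here)\nmonitor ./eat_mem 1\necho $?\n# remap monitor's own error exit code if needed\nexport MONITOR_EXIT_ERROR=99\n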

          "}, {"location": "fine_tuning_job_specifications/#monitoring-a-running-process", "title": "Monitoring a running process", "text": "

          It is also possible to \"attach\" monitor to a program or process that is already running. One simply determines the relevant process ID using the ps command, e.g., 18749, and starts monitor:

          $ monitor -p 18749\n

          Note that this feature can be (ab)used to monitor specific sub-processes.
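
          If you do not know the process ID by heart, you can look it up with ps or pgrep first (a sketch; this assumes exactly one matching process named eat_cpu is running under your account):

          monitor -p $(pgrep -u $USER eat_cpu)\n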

          "}, {"location": "getting_started/", "title": "Getting Started", "text": "

          Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the UAntwerpen-HPC and submitting your very first job. We'll also walk you through the process step by step using a practical example.

          In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

          Before proceeding, read the introduction to HPC to gain an understanding of the UAntwerpen-HPC and related terminology.

          "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

          To get access to the UAntwerpen-HPC, visit Getting an HPC Account.

          If you have not used Linux before, please learn some basics first before continuing. (see Appendix C - Useful Linux Commands)

          "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
          1. Connect to the login nodes
          2. Transfer your files to the UAntwerpen-HPC
          3. Optional: compile your code and test it
          4. Create a job script and submit your job
          5. Wait for job to be executed
          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

          "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

          There are two options to connect

          • Using a terminal to connect via SSH (for power users) (see First Time connection to the UAntwerpen-HPC)
          • Using the web portal

          Considering your operating system is Linux, it should be easy to make use of the ssh command in a terminal, but the web portal will work too.

          The web portal offers a convenient way to upload files and gain shell access to the UAntwerpen-HPC from a standard web browser (no software installation or configuration required).

          See shell access when using the web portal, or connection to the UAntwerpen-HPC when using a terminal.

          Make sure you can get shell access to the UAntwerpen-HPC before proceeding with the next steps.

          Info

          If you run into problems, see the connection issues section on the troubleshooting page.

          "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

          Now that you can login, it is time to transfer files from your local computer to your home directory on the UAntwerpen-HPC.

          Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

          On your local machine you can run:

          curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

          Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

          scp tensorflow_mnist.py run.sh vsc20167@login.hpc.uantwerpen.be:~\n

          ssh  vsc20167@login.hpc.uantwerpen.be\n

          Use your own VSC account id

          Replace vsc20167 with your VSC account id (see https://account.vscentrum.be)

          Info

          For more information about transferring files or scp, see transfer files from/to HPC.

          When running ls in your session on the UAntwerpen-HPC, you should see the two files listed in your home directory (~):

          $ ls ~\nrun.sh tensorflow_mnist.py\n

          When you do not see these files, make sure you uploaded the files to your home directory.

          "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

          Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

          A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

          Our job script looks like this:

          run.sh

          #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
          As you can see, this job script will run the Python script named tensorflow_mnist.py.

          The jobs you submit are by default executed on cluster/{{ defaultcluster }}. You can swap to another cluster by issuing the following command:

          module swap cluster/{{ othercluster }}\n

          Tip

          When submitting jobs that require only a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

          $ qsub run.sh\n433253.leibniz\n

          This command returns a job identifier (433253.leibniz) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.

          Make sure you understand what the module command does

          Note that the module commands only modify environment variables. For instance, running module swap cluster/{{ othercluster }} will update your shell environment so that qsub submits a job to the {{ othercluster }} cluster, but our active shell session is still running on the login node.

          It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are executed: they will still run on the login node you are on.

          When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like {{ othercluster }}).

          For detailed information about module commands, read the running batch jobs chapter.

          "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

          Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

          You can get an overview of the active jobs using the qstat command:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:00  Q {{ othercluster }}\n

          Eventually, after entering qstat again you should see that your job has started running:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n433253.leibniz     run.sh           vsc20167        0:00:01  R {{ othercluster }}\n

          If you don't see your job in the output of the qstat command anymore, your job has likely completed.

          Read this section on how to interpret the output.

          "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

          When your job finishes it generates 2 output files:

          • One for normal output messages (stdout output channel).
          • One for warning and error messages (stderr output channel).

          By default, they are located in the directory where you issued qsub.

          In our example when running ls in the current directory you should see 2 new files:

          • run.sh.o433253.leibniz, containing normal output messages produced by job 433253.leibniz;
          • run.sh.e433253.leibniz, containing errors and warnings produced by job 433253.leibniz.

          Info

          run.sh.e433253.leibniz should be empty (no errors or warnings).

          Use your own job ID

          Replace 433253.leibniz with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

          When examining the contents of run.sh.o433253.leibniz you will see something like this:

          Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

          Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

          Warning

          When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

          For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

          "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
          • Running interactive jobs
          • Running jobs with input/output data
          • Multi core jobs/Parallel Computing
          • Interactive and debug cluster

          For more examples see Program examples and Job script examples

          "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

          module swap cluster/joltik\n

          To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

          module swap cluster/accelgor\n

          Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

          "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

          To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

          Note that, due to a bug in Slurm, you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@uantwerpen.be.

          "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

          See https://www.ugent.be/hpc/en/infrastructure.

          "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

          There are 2 main ways to ask for GPUs as part of a job:

          • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want to run with full control or in multinode cases like MPI jobs. If you do not specify the number of GPUs by just using -l gpus, you get by default 1 GPU.

          • As a resource of its own, via --gpus X. In this case, however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this (see the example after this list).
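
          For example, a hypothetical interactive job requesting a single GPU and a quarter of the cores of a GPU node, using the node-property notation, could look like this (adjust the walltime to your needs):

          qsub -I -l nodes=1:ppn=quarter:gpus=1 -l walltime=2:0:0\n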

          Some background:

          • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

          • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

          "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

          Some important attention points:

          • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

          • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

          • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e. it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

          • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

          "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

          Use module avail to check for centrally installed software.

          The subsections below only cover a couple of installed software packages, more are available.

          "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

          Please consult module avail GROMACS for a list of installed versions.

          "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

          Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

          Please consult module avail Horovod for a list of installed versions.

          Horovod supports TensorFlow, Keras, PyTorch and MxNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; we are not sure whether it handles placement and other aspects correctly.)

          At least for simple TensorFlow benchmarks, Horovod appears to be a bit faster than the usual auto-detected multi-GPU TensorFlow without Horovod, but it comes at the cost of the code modifications needed to use Horovod.

          "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

          Please consult module avail PyTorch for a list of installed versions.

          "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

          Please consult module avail TensorFlow for a list of installed versions.

          Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

          "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
          #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
          "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

          Please consult module avail AlphaFold for a list of installed versions.

          For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

          "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

          In case of questions or problems, please contact the UAntwerpen-HPC via hpc@uantwerpen.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

          "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

          The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

          This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

          Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor), jobs on this cluster should normally start more or less immediately. The trade-off is that the submitted jobs should not be performance-critical. This means that typical workloads for this cluster should be limited to:

          • Interactive jobs (see chapter\u00a0Running interactive jobs)

          • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

          • Jobs requiring few resources

          • Debugging programs

          • Testing and debugging job scripts

          "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

          module swap cluster/donphan\n

          Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

          "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

          Some limits are in place for this cluster:

          • each user may have at most 5 jobs in the queue (both running and waiting to run);

          • at most 3 jobs per user can be running at the same time;

          • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

          In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

          Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

          "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

          Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

          All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

          "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

          \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

          While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

          A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

          The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

          Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

          Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

          "}, {"location": "introduction/#what-is-the-uantwerpen-hpc", "title": "What is the UAntwerpen-HPC?", "text": "

          The UAntwerpen-HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

          The UAntwerpen-HPC relies on parallel-processing technology to offer University of Antwerp researchers an extremely fast solution for all their data processing needs.

          The UAntwerpen-HPC consists of:

          In technical terms ... in human terms:

          • over 280 nodes and over 11000 cores ...\u00a0or the equivalent of 2750 quad-core PCs

          • over 500 Terabyte of online storage ...\u00a0or the equivalent of over 60000 DVDs

          • up to 100 Gbit InfiniBand fiber connections ...\u00a0or allowing to transfer 3 DVDs per second

          The UAntwerpen-HPC currently consists of:

          Leibniz:

          1. 144 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM, 120 GB local disk

          2. 8 compute nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM, 120 GB local disk

          3. 24 \"hopper\" compute nodes (recovered from the former Hopper cluster) with 2 ten core Intel E5-2680v2 CPUs (Ivy Bridge generation, 2.8 GHz), 256 GB memory, 500 GB local disk

          4. 2 GPGPU nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU, 120 GB local disk

          5. 1 vector computing node with 1 12-core Intel Xeon Gold 6126 (Skylake generation, 2.6 GHz), 96 GB RAM and 2 NEC SX-Aurora Vector Engines type 10B (per card 8 cores @1.4 GHz, 48 GB HBM2), 240 GB local disk

          6. 1 Xeon Phi node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon Phi 7220P PCIe card with 16 GB of RAM, 120 GB local disk

          7. 1 visualisation node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 256 GB RAM and with a NVIDIA P5000 GPU, 120 GB local disk

          The nodes are connected using an InfiniBand EDR network except for the \"hopper\" compute nodes that utilize FDR10 InfiniBand.

          Vaughan:

          1. 104 compute nodes with 2 32-core AMD Epyc 7452 (2.35 GHz) and 256 GB RAM, 240 GB local disk

          The nodes are connected using an InfiniBand HDR100 network.

          All the nodes in the UAntwerpen-HPC run under the \"CentOS Linux release 7.8.2003 (Core)\" operating system, which is a clone of \"RedHat Enterprise Linux\", with cgroups support.

          Two tools perform the Job management and job scheduling:

          1. TORQUE: a resource manager (based on PBS);

          2. Moab: job scheduler and management tools.

          For maintenance and monitoring, we use:

          1. Ganglia: monitoring software;

          2. Icinga and Nagios: alert managers.

          "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

          The HPC infrastructure is not a magic computer that automatically:

          1. runs your PC-applications much faster for bigger problems;

          2. develops your applications;

          3. solves your bugs;

          4. does your thinking;

          5. ...

          6. allows you to play games even faster.

          The UAntwerpen-HPC does not replace your desktop computer.

          "}, {"location": "introduction/#is-the-uantwerpen-hpc-a-solution-for-my-computational-needs", "title": "Is the UAntwerpen-HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

          Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

          It is also possible to run programs on the UAntwerpen-HPC that require user interaction (pushing buttons, entering input data, etc.). Although technically possible, using the UAntwerpen-HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the UAntwerpen-HPC staff can unveil whether the UAntwerpen-HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.
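
          If you do need an interactive session, it can be requested directly from a login node. The sketch below uses the standard TORQUE qsub -I option (see the chapter Running interactive jobs); the requested resources are only illustrative.

          $ qsub -I -l nodes=1:ppn=1 -l walltime=01:00:00\n

          Once the job starts you get a shell prompt on a compute node; leaving that shell ends the interactive session and releases the resources.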

          "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

          In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

          Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

          "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

          Parallel computing is a form of computation in which many calculations are carried out simultaneously. It is based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

          Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

          The two parallel programming paradigms most used in HPC are:

          • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

          • MPI for distributed memory systems (multiprocessing): on multiple nodes

          Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

          "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

          Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

          It is perfectly possible to also run purely sequential programs on the UAntwerpen-HPC.

          Running your sequential programs on the most modern and fastest computers in the UAntwerpen-HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the UAntwerpen-HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.
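
          As a minimal sketch of this idea, the job script below starts a few instances of a hypothetical sequential program myprog in the background, each with its own (placeholder) parameter and output file, and waits until they have all finished. For large numbers of instances, the Worker framework described in the Multi-job submission chapter is the recommended approach.

          #!/bin/bash\n#PBS -N param_instances       ## job name (placeholder)\n#PBS -l nodes=1:ppn=4         ## 1 node, 4 cores: one core per instance\n#PBS -l walltime=01:00:00     ## illustrative value\ncd $PBS_O_WORKDIR\n# start 4 independent instances of a sequential program, one per core\nfor p in 10 20 30 40; do\n./myprog -p ${p} > result_${p}.txt &\ndone\n# wait until all background instances have finished before the job ends\nwait\n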

          "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

          You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, CentOS Linux release 7.8.2003 (Core).

          For the most common programming languages, a compiler is available on CentOS Linux release 7.8.2003 (Core). Supported and common programming languages on the UAntwerpen-HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

          Supported and commonly used compilers are GCC, Clang and the Intel compilers.

          Commonly used software packages are:

          • in bioinformatics: beagle, Beast, bowtie, MrBayes, SAMtools

          • in chemistry: ABINIT, CP2K, Gaussian, Gromacs, LAMMPS, NWChem, Quantum Espresso, Siesta, VASP

          • in engineering: COMSOL, OpenFOAM, Telemac

          • in mathematics: JAGS, MATLAB, R

          • for visualization: Gnuplot, ParaView.

          Commonly used libraries are Intel MKL, FFTW, HDF5, PETSc and Intel MPI, OpenMPI. Additional software can be installed \"on demand\". Please contact the UAntwerpen-HPC staff to see whether the UAntwerpen-HPC can handle your specific requirements.
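
          To check whether a specific package or library from the lists above is already installed, you can query the module system from a login node. MATLAB and FFTW are used here purely as examples; the module names and versions available on the cluster will differ.

          $ module avail MATLAB\n$ module spider FFTW\n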

          "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

          All nodes in the UAntwerpen-HPC cluster run under CentOS Linux release 7.8.2003 (Core), which is a specific version of RedHat Enterprise Linux. This means that all programs (executables) should be compiled for CentOS Linux release 7.8.2003 (Core).

          Users can connect from any computer in the University of Antwerp network to the UAntwerpen-HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the UAntwerpen-HPC.

          A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

          "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

          A typical workflow looks like this (a sketch of the corresponding shell commands is given below the list):

          1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

          2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

          3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

          4. Create a job script and submit your job (see Running batch jobs)

          5. Get some coffee and be patient:

            1. Your job gets into the queue

            2. Your job gets executed

            3. Your job finishes

          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.
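
          Expressed as shell commands, a minimal sketch of this workflow could look as follows; the login node address, file names and job script name are placeholders, steps 3 (compiling) and 5 (waiting) are not shown, and the chapters referenced above describe each step in detail.

          # 1. connect to a login node with SSH (replace <login-node> with the address from the connection chapter)\nssh vsc20167@<login-node>\n# 2. transfer your input files to the cluster (run this on your own computer)\nscp input.txt vsc20167@<login-node>:\n# 4. submit your job script and follow its status in the queue\nqsub jobscript.pbs\nqstat\n# 6. copy the results back to your own computer once the job has finished\nscp vsc20167@<login-node>:output.txt .\n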

          "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

          When you think that the UAntwerpen-HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the UAntwerpen-HPC cluster.

          Do not hesitate to contact the UAntwerpen-HPC staff for any help.

          1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

          "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

          This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

          • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

          • -m/-M: the -m option will send emails to the email address registered with your VSC account. Use the -M option only if you want the emails to be sent to a different address.

          • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

          • To use a situational parameter, remove one '#' at the beginning of the line.

          simple_jobscript.sh
          #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
          "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

          Here's an example of a single-core job script:

          single_core.sh
          #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
          1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

          2. A module for Python 3.6 is loaded, see also section Modules.

          3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

          4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

          5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a unique file in $VSC_DATA (using $PBS_JOBID in the filename). For a list of possible storage locations, see subsection Pre-defined user directories.

          "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

          Here's an example of a multi-core job script that uses mympirun:

          multi_core.sh
          #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

          An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

          "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

          If you are not sure that your job will finish before its walltime runs out, and you want to make sure your output data is copied back in time, you have to stop the main command before the walltime is exhausted and then copy the data back.

          This can be done with the timeout command. This command sets a limit of time a program can run for, and when this limit is exceeded, it kills the program. Here's an example job script using timeout:

          timeout.sh
          #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

          The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

          example_program.sh
          #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
          "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

          A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plaintext. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs make it a useful tool for data analysis, machine learning and educational purposes.

          "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

          Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

          After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

          When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

          and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

          This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

          "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

          A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is finding the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

          To find the appropriate modules, it is recommended to use the shell within the web portal, under Clusters > >_login Shell Access.

          We can see all available versions of the SciPy module by using module avail SciPy-bundle:

          $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

          Not all modules will work for every notebook: we need to use the one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

          Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that the module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

          $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

          The toolchain that is used can then be found, for example, in the line load(\"GCC/13.2.0\"), and the included Python packages are listed under the line Included extensions.

          It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
          This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

          If we use a different SciPy-bundle module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (for more info on these errors, see here).

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

          Now that we have found the right module for the notebook, add module load <module_name> to the Custom code field when creating the notebook, and you can make use of the packages within that notebook.
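
          For the numpy example above, the Custom code field would then contain a line like the following (the module version must match the notebook's toolchain, as explained earlier):

          module load SciPy-bundle/2023.11-gfbf-2023b\n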

          "}, {"location": "known_issues/", "title": "Known issues", "text": "

          This page provides details on a couple of known problems, and the workarounds that are available for them.

          If you have any questions related to these issues, please contact the UAntwerpen-HPC.

          • Operation not permitted error for MPI applications
          "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

          When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

          Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

          This error means that an internal problem has occurred in OpenMPI.

          "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

          This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

          It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

          "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

          We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

          "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

          A workaround has been implemented in mympirun (version 5.4.0).

          Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

          module load vsc-mympirun\n

          and launch your MPI application using the mympirun command.

          For more information, see the mympirun documentation.
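
          Put together, a job script applying this workaround could look like the sketch below; the resource values, the [module] placeholders and the program name my_mpi_program are illustrative only.

          #!/bin/bash\n#PBS -l nodes=2:ppn=all      ## resource values are only illustrative\n#PBS -l walltime=01:00:00\ncd $PBS_O_WORKDIR\n# load the modules your application needs (placeholder), plus the latest vsc-mympirun\nmodule load [module]\nmodule load vsc-mympirun      ## version-less on purpose, to get the latest version\n# launch the MPI application through mympirun so that the workaround is applied\nmympirun ./my_mpi_program\n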

          "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

          If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

          export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
          "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

          We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

          "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

          There are two important motivations to engage in parallel programming.

          1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

          2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

          On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that you can, in principle, split up your computations into groups and run each group on its own core.

          There are multiple ways to achieve parallel programming. The overview below gives a (non-exhaustive) list of problem-independent approaches to parallel programming, with the available language bindings and the main limitations of each. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

          • Raw threads (pthreads, boost::threading, ...): threading libraries are available for all common programming languages. Limitations: threads are limited to shared memory systems. They are more often used on single node systems rather than for UAntwerpen-HPC. Thread management is hard.

          • OpenMP: available for Fortran/C/C++. Limitations: limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelized by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelize the work load on each node and MPI (see below) for communication between nodes.

          • Lightweight threads with clever scheduling (Intel TBB, Intel Cilk Plus): available for C/C++. Limitations: limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler, enabling the programmer to focus on parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the work load on each node and MPI (see below) for communication between nodes.

          • MPI: available for Fortran/C/C++ and Python. Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication.

          • Global Arrays library: available for C/C++ and Python. Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and using one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

          Tip

          You can request more nodes/cores by adding the following line to your run script.

          #PBS -l nodes=2:ppn=10\n
          This queues a job that claims 2 nodes with 10 cores per node (20 cores in total).

          Warning

          Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

          Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

          The advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

          Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

          Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

          Go to the example directory:

          cd ~/examples/Multi-core-jobs-Parallel-Computing\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

          Study the example first:

          T_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 1;\n}\n

          And compile it (whilst including the thread library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Now, run it on the cluster and check the output:

          $ qsub T_hello.pbs\n433253.leibniz\n$ more T_hello.pbs.o433253.leibniz\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Tip

          If you plan to engage in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

          OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

          An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

          Here is the general code structure of an OpenMP program:

          #include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

          "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

          By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

          "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

          Parallelising for loops is really simple (see code below). By default, loop iteration counters in OpenMP loop constructs (in this case the i variable) in the for loop are set to private variables.

          omp1.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

          Now run it in the cluster and check the result again.

          $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
          "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

          Using OpenMP you can specify something called a \"critical\" section of code. This is code that is performed by all threads, but is only performed one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, and you don't have to worry about things like other threads writing to that global variable at the same time (a collision).

          omp2.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

          Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). Indeed we used this paradigm in the code example above, where we used the \"critical code\" directive to accomplish this. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to more easily implement this.

          omp3.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

          There are a host of other directives you can issue using OpenMP.

          Some other clauses of interest are:

          1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

          2. nowait: threads will not wait until everybody is finished

          3. schedule(type, chunk) allows you to specify how tasks are spawned out to threads in a for loop. There are three types of scheduling you can specify: static, dynamic and guided.

          4. if: allows you to parallelise only if a certain condition is met

          5. ...\u00a0and a host of others

          Tip

          If you plan to engage in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation. 2005.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

          The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

          In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

          The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

          The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

          One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

          Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

          Study the MPI-programme and the PBS-file:

          mpi_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
          mpi_hello.pbs
          #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

          and compile it:

          $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

          mpiicc is a wrapper of the Intel C++ compiler icc to compile MPI programs (see the chapter on compilation for details).

          Run the parallel program:

          $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc20167 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc20167 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc20167    0 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw------- 1 vsc20167  697 Sep 16 14:22 mpi_hello.o433253.leibniz\n-rw-r--r-- 1 vsc20167  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o433253.leibniz\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

          The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple different executables to be started in the same MPI job. Each process has its own rank, knows the total number of processes in the world, and has the ability to communicate with the others, either with point-to-point (send/receive) communication or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

          MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without compilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.

          Tip

          If you plan to engage in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

          "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

          A frequently occurring characteristic of scientific computations is their focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A Parameter Sweep runs a job a specified number of times, as if we sweep the parameter values through a user defined range.

          Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or (ii) different input files.

          These parameter values can have many forms: we can think about a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The users want to run their job once for each instance of the parameter values.

          One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs. Those huge amounts of small jobs will create a lot of overhead, and can slow down the whole cluster. It would be better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but is not supported by Moab, the current scheduler.

          The \"Worker framework\" has been developed to address this issue.

          It can handle many small jobs determined by:

          parameter variations

          i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

          job arrays

          i.e., each individual job got a unique numeric identifier.

          Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

          However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

          "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/par_sweep\n

          Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

          $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

          For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

          par_sweep/weather
          #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

          A job script that would run this as a job for the first parameters (p01) would then look like:

          par_sweep/weather_p01.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

          When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

          To submit the job, the user would use:

           $ qsub weather_p01.pbs\n
          However, the user wants to run this program for many parameter instances, e.g., on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma separated value file (.csv), which can be generated using a spreadsheet program such as Microsoft Excel, an RDBMS, or just by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

          $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

          It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.

          In order to make our PBS file generic, it can be modified as follows:

          par_sweep/weather.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

          Note that:

          1. the parameter values 20, 1.05, 4.3 have been replaced by variables $temperature, $pressure and $volume respectively, which were being specified on the first line of the \"data.csv\" file;

          2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

          3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

          The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., 4 hours to be on the safe side.

          The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

          $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n433253.leibniz\n

          Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

          Warning

          When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

          module swap env/slurm/donphan\n

          instead of

          module swap cluster/donphan\n
          We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

          "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/job_array\n

          As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

          The following bash script would submit these jobs all one by one:

          #!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output$i -i input$i myprog.pbs\ndone\n

          This, as said before, could be disturbing for the job scheduler.

          Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

          Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

          The details are:

          1. a job is submitted for each number in the range;

          2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

          3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

          The job could have been submitted using:

          qsub -t 1-100 my_prog.pbs\n

          The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

          To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

          A typical job script for use with job arrays would look like this:

          job_array/job_array.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

          Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

          $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file \\#99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

          For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory in output_1.dat, output_2.dat, ..., output_100.dat files.

          job_array/test_set
          #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

          Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

          job_array/test_set.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          Note that

          1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

          2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

          The job is now submitted as follows:

          $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n433253.leibniz\n

          The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

          Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

          $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n433253.leibniz  test_set.pbs  vsc20167          0 Q\n

          And you can now check the generated output files:

          $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
          "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

          Often, an embarrassingly parallel computation can be abstracted to three simple steps:

          1. a preparation phase in which the data is split up into smaller, more manageable chunks;

          2. on these chunks, the same algorithm is applied independently (these are the work items); and

          3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

          The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

          cd ~/examples/Multi-job-submission/map_reduce\n

          The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

          First study the scripts:

          map_reduce/pre.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\n  echo \"This is input file #$i\" >  ./input/input_$i.dat\n  echo \"Parameter #1 = $i\" >>  ./input/input_$i.dat\n  echo \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\n  echo \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\n  echo \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
          map_reduce/post.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

          Then one can submit a MapReduce style job as follows:

          $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n433253.leibniz\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

          Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

          "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

          The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute node; it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

          The \"Worker Framework\" will be effective when

          1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

          2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

          "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

          Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log433253.leibniz, assuming the job's ID is 433253.leibniz. To keep an eye on the progress, one can use:

          tail -f run.pbs.log433253.leibniz\n

          Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

          watch -n 60 wsummarize run.pbs.log433253.leibniz\n

          This will summarise the log file every 60 seconds.

          "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

          Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 weather -t $temperature  -p $pressure  -v $volume\n

          Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
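
          For example, since the columns of the wsub data file are substituted into the batch template, a per-item limit could look like the sketch below (the extra column name, timelimit, is purely illustrative and not part of the original example):

          # data.csv (illustrative extract):\n#   temperature,pressure,volume,timelimit\n#   293,1.0e05,107,00:20:00\n#   313,1.0e05,107,00:45:00\ntimedrun -t $timelimit weather -t $temperature  -p $pressure  -v $volume\n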

          Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

          "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

          Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"433253.leibniz\".

          wresume -jobid 433253.leibniz\n

          This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

          wresume -l walltime=1:30:00 -jobid 433253.leibniz\n

          Work items may fail to complete successfully for a variety of reasons, e.g., a missing data file, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate, either successfully or with a reported failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

          wresume -jobid 433253.leibniz -retry\n

          By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

          "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

          This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

          $ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
          "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

          When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

          To check for the available versions of worker, use the following command:

          $ module avail worker\n
          1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

          "}, {"location": "mympirun/", "title": "Mympirun", "text": "

          mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

          In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

          "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

          Before using mympirun, we first need to load its module:

          module load vsc-mympirun\n

          As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

          The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

          For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

          "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

          There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

          By default, mympirun starts one process per core on every node you assigned. So if you assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.
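
          As a minimal sketch of this default behaviour (the resource request and program name are just examples):

          #PBS -l nodes=2:ppn=16\nmodule load vsc-mympirun\nmympirun ./mpi_hello    # by default starts 2 x 16 = 32 processes\n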

          "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

          This is the most commonly used option for controlling the number of processes.

          The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

          $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpi_hello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
          "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

          There's also --universe, which sets the exact number of processes started by mympirun; --double, which starts twice the number of processes it normally would; and --multi, which does the same as --double but takes an arbitrary multiplier (instead of the implied factor 2 with --double).
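
          A few illustrative invocations of these options (using the same mpi_hello example; the numbers are arbitrary):

          mympirun --universe 8 ./mpi_hello    # start exactly 8 processes in total\nmympirun --double ./mpi_hello        # start twice the default number of processes\nmympirun --multi 3 ./mpi_hello       # start 3 times the default number of processes\n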

          See vsc-mympirun README for a detailed explanation of these options.

          "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

          You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

          $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
          "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

          In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC UAntwerpen-HPC infrastructure.

          "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

          There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

          • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

            • see also http://openfoam.com/history/
          • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

            • see also https://openfoam.org/download/history/
          • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

          Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

          "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

          The best practices outlined here focus specifically on the use of OpenFOAM on the VSC UAntwerpen-HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

          • OpenFOAM websites:

            • https://openfoam.com

            • https://openfoam.org

            • http://wikki.gridcore.se/foam-extend

          • OpenFOAM user guides:

            • https://www.openfoam.com/documentation/user-guide

            • https://cfd.direct/openfoam/user-guide/

          • OpenFOAM C++ source code guide: https://cpp.openfoam.org

          • tutorials: https://wiki.openfoam.com/Tutorials

          • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

          Other useful OpenFOAM documentation:

          • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

          • http://www.dicat.unige.it/guerrero/openfoam.html

          "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

          To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

          "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

          First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

          $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

          To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

          To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

          module load OpenFOAM/11-foss-2023a\n
          "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

          OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

          source $FOAM_BASH\n
          "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

          If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

          source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

          Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
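
          Putting these steps together, a typical sequence looks like this (the module version is only an example; pick one that is available on your cluster):

          module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n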

          "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

          If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

          unset FOAM_SIGFPE\n

          Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise result in terminating the simulation. However, it does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are occurring.

          As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

          "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

          The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

          • generate the mesh;

          • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

          After running the simulation, some post-processing steps are typically performed:

          • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

          • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

          Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job running the actual simulation, either on the HPC infrastructure or elsewhere, or as a part of the job that runs the OpenFOAM simulation itself.

          Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

          One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

          For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

          "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

          For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

          "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

          When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

          You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.

          "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

          It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

          See Basic usage for how to get started with mympirun.

          To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

          export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

          Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
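
          For example (simpleFoam is used here purely as an illustration of a solver command):

          # instead of something like: mpirun -np 16 simpleFoam -parallel\n# simply use:\nmympirun simpleFoam -parallel\n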

          "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

          To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

          Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

          number of processor directories = 4 is not equal to the number of processors = 16\n

          In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

          • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

          • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

          See Controlling number of processes to control the number of processes mympirun will start.

          This is interesting if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
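
          As an illustration, the relevant entries in system/decomposeParDict could look like the sketch below (header omitted; the values are placeholders that must match the number of cores your job requests):

          numberOfSubdomains  16;\n\nmethod              scotch;\n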

          To visualise the processor domains, use the following command:

          mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

          and then load the VTK files generated in the VTK folder into ParaView.

          "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

          OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

          Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).

          • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc.\u00a0keywords;

          • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

          • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

          • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid that OpenFOAM re-reads each of the system/*Dict files at every time step;

          • if the results per individual time step are large, consider setting writeCompression to true;
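
          As an illustration of these guidelines, the relevant system/controlDict entries could look like the following sketch (the values are placeholders to be tuned for your own case):

          writeControl      timeStep;\nwriteInterval     100;       // write results every 100 time steps\npurgeWrite        2;         // only keep the results of the last 2 writes\nwriteCompression  on;        // consider this if individual time steps are large\nrunTimeModifiable false;     // avoid re-reading system/*Dict files at every time step\n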

          For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

          These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple of dozen processor cores.

          "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

          See https://cfd.direct/openfoam/user-guide/compiling-applications/.

          "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

          Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

          OpenFOAM_damBreak.sh
          #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not on available victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
          "}, {"location": "program_examples/", "title": "Program examples", "text": "

          If you have not done so already, copy our examples to your home directory by running the following command:

           cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

          ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

          Go to our examples:

          cd ~/examples/Program-examples\n

          Here, we have put together a number of examples for your convenience. We made an effort to add comments inside the source files, so the source code files are (should be) self-explanatory.

          1. 01_Python

          2. 02_C_C++

          3. 03_Matlab

          4. 04_MPI_C

          5. 05a_OMP_C

          6. 05b_OMP_FORTRAN

          7. 06_NWChem

          8. 07_Wien2k

          9. 08_Gaussian

          10. 09_Fortran

          11. 10_PQS

          The above 2 OMP directories contain the following examples:

          Each example is provided as a C file and a Fortran file:

          • omp_hello.c / omp_hello.f: Hello world

          • omp_workshare1.c / omp_workshare1.f: Loop work-sharing

          • omp_workshare2.c / omp_workshare2.f: Sections work-sharing

          • omp_reduction.c / omp_reduction.f: Combined parallel loop reduction

          • omp_orphan.c / omp_orphan.f: Orphaned parallel loop reduction

          • omp_mm.c / omp_mm.f: Matrix multiply

          • omp_getEnvInfo.c / omp_getEnvInfo.f: Get and print environment information

          • omp_bug* / omp_bug*: Programs with bugs and their solution

          Compile by any of the following commands:

          C:\nicc -openmp omp_hello.c -o hello\npgcc -mp omp_hello.c -o hello\ngcc -fopenmp omp_hello.c -o hello\n\nFortran:\nifort -openmp omp_hello.f -o hello\npgf90 -mp omp_hello.f -o hello\ngfortran -fopenmp omp_hello.f -o hello\n
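
          Once compiled, the number of OpenMP threads can be controlled with the OMP_NUM_THREADS environment variable; a generic example (not tied to any particular cluster or example above):

          export OMP_NUM_THREADS=4\n./hello\n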

          Feel free to explore the examples.

          "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

          Remember to substitute the usernames, login nodes, file names, ... with your own.

          Login

          • Login: ssh vsc20167@login.hpc.uantwerpen.be

          • Where am I?: hostname

          • Copy to UAntwerpen-HPC: scp foo.txt vsc20167@login.hpc.uantwerpen.be:

          • Copy from UAntwerpen-HPC: scp vsc20167@login.hpc.uantwerpen.be:foo.txt

          • Setup ftp session: sftp vsc20167@login.hpc.uantwerpen.be

          Modules

          • List all available modules: module avail

          • List loaded modules: module list

          • Load module: module load example

          • Unload module: module unload example

          • Unload all modules: module purge

          • Help on use of module: module help

          Jobs

          • qsub script.pbs: Submit job with job script script.pbs

          • qstat 12345: Status of job with ID 12345

          • showstart 12345: Possible start time of job with ID 12345 (not available everywhere)

          • checkjob 12345: Check job with ID 12345 (not available everywhere)

          • qstat -n 12345: Show compute node of job with ID 12345

          • qdel 12345: Delete job with ID 12345

          • qstat: Status of all your jobs

          • qstat -na: Detailed status of your jobs + a list of nodes they are running on

          • showq: Show all jobs on queue (not available everywhere)

          • qsub -I: Submit interactive job

          Disk quota

          • Check your disk quota: mmlsquota

          • Check your disk quota nicely: show_quota.py

          • Disk usage in current directory (.): du -h

          Worker Framework

          • Load worker module: module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/)

          • Submit parameter sweep: wsub -batch weather.pbs -data data.csv

          • Submit job array: wsub -t 1-100 -batch test_set.pbs

          • Submit job array with prolog and epilog: wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

          Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

          "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

          Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

          This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

          It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

          "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

          As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

          For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

          $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

          Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

          When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

          "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

          To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

          This includes (per user):

          • max. of 2 CPU cores in use
          • max. 8 GB of memory in use

          For more intensive tasks you can use the interactive and debug clusters through the web portal.

          "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

          The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

          However, there will be impact on the availability of software that is made available via modules.

          Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

          This includes all software installations on top of a compiler toolchain that is older than:

          • GCC(core)/12.3.0
          • foss/2023a
          • intel/2023a
          • gompi/2023a
          • iimpi/2023a
          • gfbf/2023a

          (or another toolchain with a year-based version older than 2023a)

          The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

          foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

          If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

          It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will soon provide more RHEL 9 nodes on other clusters to test on.

          "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

          We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

          Cluster and planned migration start:

          • skitty: Monday 30 September 2024

          • joltik: October 2024

          • accelgor: November 2024

          • gallade: December 2024

          • donphan: February 2025

          • doduo (default cluster): February 2025

          • login nodes switch: February 2025

          Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to the RHEL 9 login nodes will be done at the same time.

          We will keep this page up to date when more specific dates have been planned.

          Warning

          The planning above is subject to change; some clusters may get migrated later than originally planned.

          Please check back regularly.

          "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

          If you have any questions related to the migration to the RHEL 9 operating system, please contact the UAntwerpen-HPC.

          "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

          In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

          When you connect to the UAntwerpen-HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decide when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the UAntwerpen-HPC the entire time.

          The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

          "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

          Software installation and maintenance on a UAntwerpen-HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the UAntwerpen-HPC, which is able to easily activate or deactivate the software packages that you require for your program execution.

          "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

          The program environment on the UAntwerpen-HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

          All the software packages that are installed on the UAntwerpen-HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

          "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

          In order to administer the active software and their environment variables, the module system has been developed, which:

          1. Activates or deactivates software packages and their dependencies.

          2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

          3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

          4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

          5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

          This is all managed with the module command, which is explained in the next sections.

          There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.
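
          For example, the following pairs are equivalent (the module name is illustrative):

          ml example/1.2.3    # same as: module load example/1.2.3\nml av               # same as: module avail\n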

          "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

          A large number of software packages are installed on the UAntwerpen-HPC clusters. A list of all currently available software can be obtained by typing:

          module available\n

          It's also possible to execute module av or module avail; these are shorter to type and will do the same thing.

          This will give some output such as:

          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          Or, when you want to check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the UAntwerpen-HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not be aware of the exact capitalisation of the module name, we performed a case-insensitive search using the \"-i\" option.

          This gives a full list of software packages that can be loaded.

          The casing of module names is important: lowercase and uppercase letters matter in module names.

          "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

          The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

          Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, an MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

          E.g., foss/2024a is the first version of the foss toolchain in 2024.

          The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.
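
          For example, the two (illustrative) modules below can safely be combined because they share the intel-2016b toolchain suffix, whereas mixing different toolchain versions in one job should be avoided:

          module load Python/2.7.12-intel-2016b examplelib/1.2-intel-2016b\n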

          "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

          To \"activate\" a software package, you load the corresponding module file using the module load command:

          module load example\n

          This will load the most recent version of example.

          For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

          However, you should specify a particular version to avoid surprises when newer versions are installed:

          module load secondexample/2.7-intel-2016b\n

          The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

          Modules need not be loaded one by one; the two module load commands can be combined as follows:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

          "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

          Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

          $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          You can also just use the ml command without arguments to list loaded modules.

          It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

          "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

          To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

          $ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

          To unload the secondexample module, you can also use ml -secondexample.

          Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

          "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

          In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

          module purge\n

          However, on some VSC clusters you may be left with a very empty list of available modules after executing module purge. On those systems, module av will show you a list of modules containing the name of a cluster or a particular feature of a section of the cluster, and loading the appropriate module will restore the module list applicable to that particular system.

          "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

          Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

          Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

          module load example\n

          rather than

          module load example/1.2.3\n

          Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

          Consider the following example modules:

          $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

          Let's now generate a version conflict with the example module, and see what happens.

          $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

          Note: A module swap command combines the appropriate module unload and module load commands.

          "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

          With the module spider command, you can search for modules:

          $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

          It's also possible to get detailed information about a specific module:

          $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \nThis module can be loaded directly: module load example/1.2.3\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
          "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

          To get a list of all possible commands, type:

          module help\n

          Or to get more information about one specific module package:

          $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
          "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

          If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

          In each module command shown below, you can replace module with ml.

          First, load all modules you want to include in the collection:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          Now store it in a collection using module save. In this example, the collection is named my-collection.

          module save my-collection\n

          Later, for example in a jobscript or a new session, you can load all these modules with module restore:

          module restore my-collection\n

          You can get a list of all your saved collections with the module savelist command:

          $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

          To get a list of all modules a collection will load, you can use the module describe command:

          $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          To remove a collection, remove the corresponding file in $HOME/.lmod.d:

          rm $HOME/.lmod.d/my-collection\n
          "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

          To see how a module would change the environment, you can use the module show command:

          $ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

          It's also possible to use the ml show command instead: they are equivalent.

          Here you can see that the Python/2.7.12-intel-2016b module comes with a whole bunch of extensions: numpy, scipy, ...

          You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

          If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

          "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

          To check how many jobs are running in which queues, you can use the qstat -q command:

          $ qstat -q\nQueue            Memory CPU Time Walltime Node  Run Que Lm  State\n---------------- ------ -------- -------- ----  --- --- --  -----\ndefault            --      --       --      --    0   0 --   E R\nq72h               --      --    72:00:00   --    0   0 --   E R\nlong               --      --    72:00:00   --  316  77 --   E R\nshort              --      --    11:59:59   --   21   4 --   E R\nq1h                --      --    01:00:00   --    0   1 --   E R\nq24h               --      --    24:00:00   --    0   0 --   E R\n                                               ----- -----\n                                                337  82\n

          Here, there are 316 jobs running on the long queue, and 77 jobs queued. We can also see that the long queue allows a maximum wall time of 72 hours.

          "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

          You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

          You can also get this information in text form (per cluster separately) with the pbsmon command:

          $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

          pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

          "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

          Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

          As an example, we will run a Perl script, which you will find in the examples subdirectory on the UAntwerpen-HPC. When you received an account on the UAntwerpen-HPC, a subdirectory with examples was automatically generated for you.

          Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

          cd\ncp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/\n

          First go to the directory with the first examples by entering the command:

          cd ~/examples/Running-batch-jobs\n

          Each time you want to execute a program on the UAntwerpen-HPC you'll need 2 things:

          The executable: the end-user's program to execute, together with its peripheral input files, databases and/or command options.

          A batch job script, which defines the computer resource requirements of the program and the required additional software packages, and which starts the actual executable. The UAntwerpen-HPC needs to know:

          1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

          Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

          List and check the contents with:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc20167 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc20167 609 Sep 11 10:25 fibo.pl\n

          In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

          1. The Perl script calculates the first 30 Fibonacci numbers.

          2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

          We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

          On the command line, you would run this using:

          $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

          Remark: Recall that you have now executed the Perl script locally on one of the login nodes of the UAntwerpen-HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

          The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

          fibo.pbs
          #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

          $ qsub fibo.pbs\n433253.leibniz\n

          The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"433253.leibniz \"); this is a unique identifier for the job and can be used to monitor and manage your job.

          Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

          To facilitate this, you can use a pre-defined module collection which you can restore using module restore; see the section on Save and load collections of modules for more information.
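
          As an illustration (assuming the my-collection collection from the section on module collections, and resource requests that you adapt to your own job), such a job script could look like this:

          #!/bin/bash -l\n#PBS -l walltime=01:00:00\n# load the required software inside the job script itself\nmodule restore my-collection\n# or, equivalently, load the modules explicitly:\n# module load example/1.2.3 secondexample/2.7-intel-2016b\ncd $PBS_O_WORKDIR\n./fibo.pl\n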

          Your job is now waiting in the queue for a free worker node to start on.

          Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

          After your job was started, and ended, check the contents of the directory:

          $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc20167 vsc20167   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc20167 vsc20167    0 Feb 28 13:33 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 vsc20167 1010 Feb 28 13:33 fibo.pbs.o433253.leibniz\n-rwxrwxr-x 1 vsc20167 vsc20167  302 Feb 28 13:32 fibo.pl\n

          Explore the contents of the 2 new files:

          $ more fibo.pbs.o433253.leibniz\n$ more fibo.pbs.e433253.leibniz\n

          These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) or \".e\" (error), respectively, and the job number ('433253.leibniz' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script).

          "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

          It is possible to submit jobs from within a job to a cluster different from the one the job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

          To submit jobs to the {{ othercluster }} cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/{{ othercluster }} instead of using module swap cluster/{{ othercluster }}. The last command also activates the software modules that are installed specifically for {{ othercluster }}, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the {{ othercluster }} cluster. The same approach can be used to submit jobs to another cluster, of course.

          Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the {{ defaultcluster }} cluster, loading the cluster/{{ defaultcluster }} module corresponds to loading 3 different env/ modules:

          env/ module for {{ defaultcluster }} Purpose env/slurm/{{ defaultcluster }} Changes $SLURM_CLUSTERS which specifies the cluster where jobs are sent to. env/software/{{ defaultcluster }} Changes $MODULEPATH, which controls what software modules are available for loading. env/vsc/{{ defaultcluster }} Changes the set of $VSC_ environment variables that are specific to the {{ defaultcluster }} cluster

          We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand what they are doing exactly, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

          We also recommend using a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
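
          Putting this together, a typical session could look roughly as follows (myjob.pbs is just a placeholder for your own job script):

          # only change where jobs are sent to, keep the current software stack\nmodule swap env/slurm/{{ othercluster }}\nqsub myjob.pbs\n# afterwards, swap back to a cluster module to reset your environment\nmodule swap cluster/{{ defaultcluster }}\n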

          "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

          Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

          qstat 12345\n

          To show an estimated start time for your job (note that this estimate can be very inaccurate; the margin of error may exceed 100%). This command is not available on all systems.

          ::: prompt :::

          This is only a very rough estimate. Jobs may launch sooner than estimated if other jobs end faster than estimated, but may also be delayed if other higher-priority jobs enter the system.

          To show the status, but also the resources required by the job, with error messages that may prevent your job from starting:

          ::: prompt :::

          To show on which compute nodes your job is running, at least, when it is running:

          qstat -n 12345\n

          To remove a job from the queue so that it will not run, or to stop a job that is already running:

          qdel 12345\n

          When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

          $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n433253.leibniz ....     mpi  vsc20167     0    Q short\n

          Here:

          Job ID the job's unique identifier

          Name the name of the job

          User the user that owns the job

          Time Use the elapsed walltime for the job

          Queue the queue the job is in

          The state S can be any of the following:

          State Meaning Q The job is queued and is waiting to start. R The job is currently running. E The job is exiting after having run. C The job is completed after having run. H The job has a user or system hold on it and will not be eligible to run until the hold is removed.

          User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.

          "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

          As we learned above, Moab is the software application that actually decides when to run your job and what resources your job will run on.

          You can look at the queue by using the PBS command or the Moab command. By default, will display the queue ordered by , whereas will display jobs grouped by their state (\"running\", \"idle\", or \"hold\") then ordered by priority. Therefore, is often more useful. Note however that at some VSC-sites, these commands show only your jobs or may be even disabled to not reveal what other users are doing.

          The command displays information about active (\"running\"), eligible (\"idle\"), blocked (\"hold\"), and/or recently completed jobs. To get a summary:

          ::: prompt active jobs: 163 eligible jobs: 133 blocked jobs: 243 Total jobs: 539 :::

          And to get the full detail of all the jobs, which are in the system:

          ::: prompt active jobs------------------------ JOBID USERNAME STATE PROCS REMAINING STARTTIME 428024 vsc20167 Running 8 2:57:32 Mon Sep 2 14:55:05 153 active jobs 1307 of 3360 processors in use by local jobs (38.90 153 of 168 nodes active (91.07

          eligible jobs---------------------- JOBID USERNAME STATE PROCS WCLIMIT QUEUETIME 442604 vsc20167 Idle 48 7:00:00:00 Sun Sep 22 16:39:13 442605 vsc20167 Idle 48 7:00:00:00 Sun Sep 22 16:46:22

          135 eligible jobs

          blocked jobs----------------------- JOBID USERNAME STATE PROCS WCLIMIT QUEUETIME 441237 vsc20167 Idle 8 3:00:00:00 Thu Sep 19 15:53:10 442536 vsc20167 UserHold 40 3:00:00:00 Sun Sep 22 00:14:22 252 blocked jobs Total jobs: 540 :::

          There are 3 categories: the active, eligible, and blocked jobs.

          Active jobs

          are jobs that are running or starting and that consume computer resources. The amount of time remaining (w.r.t.\u00a0walltime, sorted to earliest completion time) and the start time are displayed. This will give you an idea about the foreseen completion time. These jobs could be in a number of states:

          Started

          attempting to start, performing pre-start tasks

          Running

          currently executing the user application

          Suspended

          has been suspended by scheduler or admin (still in place on the allocated resources, not executing)

          Cancelling

          has been cancelled, in process of cleaning up

          Eligible jobs

          are jobs that are waiting in the queues and are considered eligible for both scheduling and backfilling. They are all in the idle job state, do not violate any fairness policies, and do not have any job holds in place. The requested walltime is displayed, and the list is ordered by job priority.

          Blocked jobs

          are jobs that are ineligible to be run or queued. These jobs could be in a number of states for the following reasons:

          Idle

          when the job violates a fairness policy

          Userhold

          or Systemhold, when there is a user or administrative hold in place

          Batchhold

          when the requested resources are not available or the resource manager has repeatedly failed to start the job

          Deferred

          a temporary hold applied when the job has been unable to start after a specified number of attempts

          Notqueued

          when the scheduling daemon is unavailable

          "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

          Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

          It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

          "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

          The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

          qsub -l walltime=2:30:00 ...\n

          For the simplest cases, only the maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

          If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
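
          A minimal sketch of this pattern uses the standard timeout command; the program and file names below are placeholders:

          #!/bin/bash -l\n#PBS -l walltime=02:30:00\ncd $PBS_O_WORKDIR\n# stop the main program after 2h20m (8400 seconds), leaving ~10 minutes\n# of walltime to copy results back (my_program and results.txt are examples)\ntimeout 8400 ./my_program\ncp results.txt $VSC_DATA/\n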

          qsub -l mem=4gb ...\n

          The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

          qsub -l nodes=5:ppn=2 ...\n

          The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

          qsub -l nodes=1:westmere\n

          The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

          These options can either be specified on the command line, e.g.

          qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

          or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          Note that the resources requested on the command line will override those specified in the PBS file.
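
          For example, submitting the modified \"fibo.pbs\" with an extra -l option on the command line uses that value instead of the one in the script:

          # the 4gb requested here overrides the \"#PBS -l mem=2gb\" line in fibo.pbs\nqsub -l mem=4gb fibo.pbs\n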

          "}, {"location": "running_batch_jobs/#node-specific-properties", "title": "Node-specific properties", "text": "

          The following table contains some node-specific properties that can be used to make sure the job will run on nodes with a specific CPU or interconnect. Note that these properties may vary over the different VSC sites.

          ivybridge only use Intel processors from the Ivy Bridge family (26xx-v2, hopper-only) broadwell only use Intel processors from the Broadwell family (26xx-v4, leibniz-only) mem128 only use nodes with 128GB of RAM (leibniz) mem256 only use nodes with 256GB of RAM (hopper and leibniz) tesla, gpu only use nodes with the NVIDIA P100 GPU (leibniz)

          Since both hopper and leibniz are homogeneous with respect to processor architecture, the CPU architecture properties are not really needed and only defined for compatibility with other VSC clusters.

          shanghai only use AMD Shanghai processors (AMD 2378) magnycours only use AMD Magnycours processors (AMD 6134) interlagos only use AMD Interlagos processors (AMD 6272) barcelona only use AMD Shanghai and Magnycours processors amd only use AMD processors ivybridge only use Intel Ivy Bridge processors (E5-2680-v2) intel only use Intel processors gpgpu only use nodes with General Purpose GPUs (GPGPUs) k20x only use nodes with NVIDIA Tesla K20x GPGPUs xeonphi only use nodes with Xeon Phi co-processors phi5110p only use nodes with Xeon Phi 5110P co-processors

          To get a list of all properties defined for all nodes, enter

          ::: prompt :::

          This list will also contain properties referring to, e.g., network components, rack number, etc.

          "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

          At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

          When you navigate to that directory and list its contents, you should see them:

          $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc20167  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc20167   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc20167   52 Sep 11 11:03 fibo.pbs.e433253.leibniz\n-rw------- 1 vsc20167 1307 Sep 11 11:03 fibo.pbs.o433253.leibniz\n

          In our case, our job has created both an output file ('fibo.pbs.o433253.leibniz') and an error file ('fibo.pbs.e433253.leibniz'), containing the info written to stdout and stderr respectively.

          Inspect the generated output and error files:

          $ cat fibo.pbs.o433253.leibniz\n...\n$ cat fibo.pbs.e433253.leibniz\n...\n
          "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#upon-job-failure", "title": "Upon job failure", "text": "

          Whenever a job fails, an e-mail will be sent to the e-mail address that's connected to your VSC account. This is the e-mail address that is linked to the university account, which was used during the registration process.

          You can force a job to fail by specifying an unrealistic wall-time for the previous example. Let's give the \"fibo.pbs\" job just one second to complete:

          ::: prompt :::

          Now, let's hope that the UAntwerpen-HPC did not manage to run the job within one second, and you will get an e-mail informing you about this error.

          ::: flattext PBS Job Id: Job Name: fibo.pbs Exec host: Aborted by PBS Server Job exceeded some resource limit (walltime, mem, etc.). Job was aborted. See Administrator for help :::

          "}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

          You can instruct the UAntwerpen-HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

          #PBS -m b \n#PBS -m e \n#PBS -m a\n

          or

          #PBS -m abe\n

          These options can also be specified on the command line. Try it and see what happens:

          qsub -m abe fibo.pbs\n

          The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

          qsub -m b -M john.smith@example.com fibo.pbs\n

          will send an e-mail to john.smith@example.com when the job begins.

          "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

          If you submit two jobs expecting that they will be run one after another (for example because the first generates a file the second needs), there might be a problem, as they might both be run at the same time.

          So the following example might go wrong:

          $ qsub job1.sh\n$ qsub job2.sh\n

          You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

          afterok means \"After OK\", or in other words, after the first job successfully completed.

          It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
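
          For instance, to start the second job once the first has finished regardless of its exit status, the example above becomes:

          FIRST_ID=$(qsub job1.sh)\n# afterany: job2.sh starts once job1.sh has finished, whether it succeeded or not\nqsub -W depend=afterany:$FIRST_ID job2.sh\n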

          1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

          "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

          Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

          Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line instead.

          Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the UAntwerpen-HPC. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

          The syntax for qsub for submitting an interactive PBS job is:

          $ qsub -I <... pbs directives ...>\n
          "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

          Tip

          Find the code in \"~/examples/Running_interactive_jobs\"

          First of all, in order to know on which computer you're working, enter:

          $ hostname -f\nln2.leibniz.uantwerpen.vsc\n

          This means that you're now working on the login node ln2.leibniz.uantwerpen.vsc of the cluster.

          The most basic way to start an interactive job is the following:

          $ qsub -I\nqsub: waiting for job 433253.leibniz to start\nqsub: job 433253.leibniz ready\n

          There are two things of note here.

          1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

          2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

          In order to know on which compute-node you're working, enter again:

          $ hostname -f\nr1c02cn3.leibniz.antwerpen.vsc\n

          Note that we are now working on the compute-node called \"r1c02cn3.leibniz.antwerpen.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

          This computer name may look strange, but there is some logic behind it. It tells the system administrators where to find the computer in the computer room.

          The computer \"r1c02cn3\" stands for:

          1. \"r5\" is rack #5.

          2. \"c3\" is enclosure/chassis #3.

          3. \"cn08\" is compute node #08.

          With this naming convention, the system administrator can easily find the physical computers when they need to execute some maintenance activities.

          Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

          $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

          You can exit the interactive session with:

          $ exit\n

          Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

          You can work for 3 hours by:

          qsub -I -l walltime=03:00:00\n

          If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.
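
          The -I flag can be combined with the generic resource requirements described in the section on specifying job requirements. For example, the following (illustrative) command requests an interactive session of 3 hours on 4 cores of a single node:

          qsub -I -l walltime=03:00:00 -l nodes=1:ppn=4\n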

          "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

          To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

          The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

          Download the latest version of the XQuartz package on: http://xquartz.macosforge.org/landing/ and install the XQuartz.pkg package.

          The installer will take you through the installation procedure; just keep clicking Continue on the various screens that pop up until the installation has completed successfully.

          A reboot is required before XQuartz will correctly open graphical applications.

          "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

          We have developed a little interactive program that shows communication in two directions. It will send information to your local screen, but also asks you to click a button.

          Now run the message program:

          cd ~/examples/Running_interactive_jobs\n./message.py\n

          You should see the following message appearing.

          Click any button and see what happens.

          -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
          "}, {"location": "running_interactive_jobs/#run-your-interactive-application", "title": "Run your interactive application", "text": "

          In this last example, we will show you that you can just work on this compute node, just as if you were working locally on your desktop. We will run the Fibonacci example of the previous chapter again, but now in full interactive mode in MATLAB.

          ::: prompt :::

          And start the MATLAB interactive environment:

          ::: prompt :::

          And start the fibo2.m program in the command window:

          ::: prompt fx >> :::

          ::: center :::

          And see the displayed calculations, ...

          ::: center :::

          as well as the nice \"plot\" appearing:

          ::: center :::

          You can work in this MATLAB GUI, and finally terminate the application by entering \"\" in the command window again.

          ::: prompt fx >> :::

          "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

          You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files, where your standard output and error messages will go, and where you can collect your results.

          "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

          First go to the directory:

          cd ~/examples/Running_jobs_with_input_output_data\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          ```

          cp -r /apps/antwerpen/tutorials/Intro-HPC/examples ~/ ```

          List and check the contents with:

          $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc20167   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc20167   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc20167   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc20167  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167  2393 Sep 13 10:40 file3.py\n

          Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

          file1.py
          #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

          The code of the Python script is self-explanatory:

          1. In step 1, we write something to the file Hello.txt in the current directory.

          2. In step 2, we write some text to stdout.

          3. In step 3, we write to stderr.

          Check the contents of the first job script:

          file1a.pbs
          #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

          You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

          Submit it:

          qsub file1a.pbs\n

          After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

          $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc20167   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc20167  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc20167  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc20167   91 Sep 13 13:13 file1a.pbs.e433253.leibniz\n-rw------- 1 vsc20167  105 Sep 13 13:13 file1a.pbs.o433253.leibniz\n-rw-rw-r-- 1 vsc20167  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc20167  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc20167 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc20167 2393 Sep 13 10:40 file3.py*\n

          Some observations:

          1. The file Hello.txt was created in the current directory.

          2. The file file1a.pbs.o433253.leibniz contains all the text that was written to the standard output stream (\"stdout\").

          3. The file file1a.pbs.e433253.leibniz contains all the text that was written to the standard error stream (\"stderr\").

          Inspect their contents ...\u00a0and remove the files

          $ cat Hello.txt\n$ cat file1a.pbs.o433253.leibniz\n$ cat file1a.pbs.e433253.leibniz\n$ rm Hello.txt file1a.pbs.o433253.leibniz file1a.pbs.e433253.leibniz\n

          Tip

          Type cat H and press the Tab key, and it will expand into cat Hello.txt.

          "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

          Check the contents of the job script and execute it.

          file1b.pbs
          #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n

          Inspect the contents again ...\u00a0and remove the generated files:

          $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e433253.leibniz\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o433253.leibniz\n$ rm Hello.txt my_serial_job.*\n

          Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrote the JOBNAME variable, and resulted in a different name for the stdout and stderr files. This name is also shown in the second column of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.
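
          Since PBS directives can also be passed on the command line, the same effect can be obtained without editing the job script, for example:

          # equivalent to the \"#PBS -N my_serial_job\" line in file1b.pbs\nqsub -N my_serial_job file1a.pbs\n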

          "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

          You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

          file1c.pbs
          #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n
          "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

          The UAntwerpen-HPC cluster offers their users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

          Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

          The following locations are available:

          Variable Description Long-term storage slow filesystem, intended for smaller files $VSC_HOME For your configuration files and other small files, see the section on your home directory. The default directory is user/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites. $VSC_DATA A bigger \"workspace\", for datasets, results, logfiles, etc. see the section on your data directory. The default directory is data/Antwerpen/xxx/vsc20167. The same file system is accessible from all sites. Fast temporary storage $VSC_SCRATCH_NODE For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content. $VSC_SCRATCH For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Antwerpen/xxx/vsc20167. This directory is cluster- or site-specific: On different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content. $VSC_SCRATCH_SITE Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space. $VSC_SCRATCH_GLOBAL Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space.

          Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
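
          For example, in a job script you would refer to these locations through the environment variables rather than through absolute paths (the file and program names below are placeholders):

          # copy input from the data directory, work in scratch, store results back\ncp $VSC_DATA/input.dat $VSC_SCRATCH/\ncd $VSC_SCRATCH\n./my_program input.dat > output.dat\ncp output.dat $VSC_DATA/\n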

          We elaborate more on the specific function of these locations in the following sections.

          "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

          Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

          The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

          The operating system also creates a few files and folders here to manage your account. Examples are:

          File or Directory Description .ssh/ This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing! .bash_profile When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt. .bashrc This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts. .bash_history This file contains the commands you typed at your shell prompt, in case you need them again.

          Furthermore, we have initially created some files/directories there (tutorial, docs, examples, examples.pbs) that accompany this manual and allow you to easily execute the provided examples.

          "}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

          In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

          The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

          "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

          To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

          You should remove any data from these systems after your processing has finished. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.
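
          For instance, at the end of a job script (or afterwards from a login node) you could clean up with something like the following; the file name is just an example:

          # remove temporary results from scratch once they are no longer needed\nrm $VSC_SCRATCH/primes_1.txt\n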

          Each type of scratch has its own use:

          Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

          Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has its own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with fast connection to all the cluster nodes and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

          At the time of writing, the cluster scratch space is\nshared between both clusters at the University of Antwerp. This may change again in the future when storage gets updated.\n

          Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

          Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

          Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files as well as the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. The user will get warnings as soon as he exceeds the soft quota.

          The amount of data (called \"Block Limits\") that is currently in use by the user (\"KB\"), the soft limits (\"quota\") and the hard limits (\"limit\") for all 3 file systems are always displayed when a user connects to the UAntwerpen-HPC.

          With regards to the file limits, the number of files in use (\"files\"), its soft limit (\"quota\") and its hard limit (\"limit\") for the 3 file-systems are also displayed.

          ::: prompt ---------------------------------------------------------- Your quota is:

          Block Limits Filesystem KB quota limit grace home 177920 3145728 3461120 none data 17707776 26214400 28835840 none scratch 371520 26214400 28835840 none

          File Limits Filesystem files quota limit grace home 671 20000 25000 none data 103079 100000 150000 expired scratch 2214 100000 150000 none

          :::

          Make sure to regularly check these numbers at log-in!

          The rules are:

          1. You will only receive a warning when you have reached the soft limit of either quota.

          2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

          3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

          We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. And they help to guarantee a fair use of all available resources for all users. Quota also help to ensure that each folder is used for its intended purpose.

          "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

          Tip

          Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

          In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

          1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

          2. repeat this action 30,000 times;

          3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job: Remember that this is already a more serious (disk-I/O and computational intensive) job, which takes approximately 3 minutes on the UAntwerpen-HPC.

          $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

          Tip

          Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

          In this exercise, you will

          1. generate the file \"primes_1.txt\" again as in the previous exercise;

          2. open the file;

          3. read it line by line;

          4. calculate the average of the primes on each line;

          5. count the number of primes found per line;

          6. write the results to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job:

          $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

          The available disk space on the UAntwerpen-HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website (https://vscdocumentation.readthedocs.io/en/latest/hardware.html). As explained in the section on predefined quota, this implies that there are also limits to:

          • the amount of disk space; and

          • the number of files

          that can be made available to each individual UAntwerpen-HPC user.

          The quota of disk space and number of files for each UAntwerpen-HPC user is:

          • HOME: 3 GB and 20000 files

          • DATA: 25 GB and 100000 files

          • SCRATCH: 25 GB and 100000 files

          Tip

          The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.
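
          As a minimal sketch of how to track down cleanup candidates (the size threshold, file pattern and age below are just example values), the find command can list large files or old log files in a given directory:

          $ find $VSC_SCRATCH -type f -size +100M\n$ find $VSC_DATA -type f -name \"*.log\" -mtime +30\n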


          "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

          The \"show_quota\" command has been developed to show you the status of your quota in a readable format:

          $ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

          or on the UAntwerp clusters

          $ module load scripts\n$ show_quota\nVSC_DATA:    used 81MB (0%)  quota 25600MB\nVSC_HOME:    used 33MB (1%)  quota 3072MB\nVSC_SCRATCH:   used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_GLOBAL: used 28MB (0%)  quota 25600MB\nVSC_SCRATCH_SITE:   used 28MB (0%)  quota 25600MB\n

          With this command, you can easily follow up on the consumption of your total disk quota, as it is expressed in percentages. Depending on which cluster you are running the script, it may not be able to show the quota on all your folders. E.g., when running on the tier-1 system Muk, the script will not be able to show the quota on $VSC_HOME or $VSC_DATA if your account is a KU\u00a0Leuven, UAntwerpen or VUB account.

          Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

          $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

          This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

          If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

          $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

          If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

          $ du -s\n5632 .\n$ du -s -h\n5.5M .\n

          If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

          $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

          Finally, if you want to know the size of the data in some other directory (e.g., your data directory) rather than in your current directory, you just pass that directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

          $ du -h --max-depth 1 $VSC_HOME\n22M /user/antwerpen/201/vsc20167/dataset01\n36M /user/antwerpen/201/vsc20167/dataset02\n22M /user/antwerpen/201/vsc20167/dataset03\n3.5M /user/antwerpen/201/vsc20167/primes.txt\n24M /user/antwerpen/201/vsc20167/.cache\n

          We also want to mention the tree command, as it provides an easy way to see which files consume your available quota. Tree is a recursive directory-listing program that produces a depth-indented listing of files.

          Try:

          $ tree -s -d\n

          However, we urge you to only use the du and tree commands when you really need them as they can put a heavy strain on the file system and thus slow down file operations on the cluster for all other users.

          "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

          Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

          Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

          To change the group of a directory and its underlying directories and files, you can use:

          chgrp -R groupname directory\n
          "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
          1. Get the group name you want to belong to.

          2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

          3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

          "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
          1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

          2. Fill out the group name. This cannot contain spaces.

          3. Put a description of your group in the \"Info\" field.

          4. You will now be a member and moderator of your newly created group.

          "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

          Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

          "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

          You can get details about the current state of groups on the HPC infrastructure with the following command (here, example is the name of the group we want to inspect):

          $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

          We can see that the group's ID number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

          "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

          A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

          "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

          This section will explain how to create, activate, use and deactivate Python virtual environments.

          "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

          A Python virtual environment can be created with the following command:

          python -m venv myenv      # Create a new virtual environment named 'myenv'\n

          This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

          Warning

          When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

          "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

          To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

          source myenv/bin/activate                    # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

          After activating the virtual environment, you can install additional Python packages with pip install:

          pip install example_package1\npip install example_package2\n

          These packages are scoped to the virtual environment: they will not affect the system-wide Python installation and are only available while the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

          It is now possible to run Python scripts that use the installed packages in the virtual environment.

          Tip

          When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

          Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

          To check if a package is available as a module, use:

          module av package_name\n

          Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

          module show module_name\n

          to check which extensions are included in a module (if any).
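
          For example, to check whether numpy is included as an extension (the module version used here is just the one that appears elsewhere in this documentation, and the exact output format of module show may vary), you could use:

          $ module show SciPy-bundle/2023.11-gfbf-2023b 2>&1 | grep -i numpy\n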

          "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

          Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

          example.py
          import example_package1\nimport example_package2\n...\n
          python example.py\n
          "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

          When you are done using the virtual environment, you can deactivate it. To do that, run:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

          You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

          pytorch_poutyne.py
          import torch\nimport poutyne\n\n...\n

          We load a PyTorch package as a module and install Poutyne in a virtual environment:

          module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

          While the virtual environment is activated, we can run the script without any issues:

          python pytorch_poutyne.py\n

          Deactivate the virtual environment when you are done:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

          To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

          module swap cluster/donphan\nqsub -I\n

          After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

          Naming a virtual environment

          When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

          python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
          "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

          This section will combine the concepts discussed in the previous sections to:

          1. Create a virtual environment on a specific cluster.
          2. Combine packages installed in the virtual environment with modules.
          3. Submit a job script that uses the virtual environment.

          The example script that we will run is the following:

          pytorch_poutyne.py
          import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

          First, we create a virtual environment on the donphan cluster:

          module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

          Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

          jobscript.pbs
          #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

          Next, we submit the job script:

          qsub jobscript.pbs\n

          Two files will be created in the directory where the job was submitted: python_job_example.o433253.leibniz and python_job_example.e433253.leibniz, where 433253.leibniz is the id of your job. The .o file contains the output of the job.

          "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

          Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

          For example, if we create a virtual environment on the skitty cluster,

          $ module swap cluster/skitty\n$ qsub -I\n$ python -m venv myenv\n

          return to the login node by pressing CTRL+D and try to use the virtual environment:

          $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

          we are presented with the illegal instruction error. More info on this here

          "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

          When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

          python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

          Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.
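
          A quick sanity check (a sketch; the reported version will differ per cluster) is to print the glibc version on a node of each cluster involved and compare them:

          $ ldd --version | head -1\n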

          "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

          There are two main reasons why this error could occur.

          1. You have not loaded the Python module that was used to create the virtual environment.
          2. You loaded or unloaded modules while the virtual environment was activated.
          "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

          If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

          The following commands illustrate this issue:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

          module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

          You must not load or unload modules while in a virtual environment, because loading and unloading modules modifies the $PATH variable of the current shell. When you activate a virtual environment, it stores the $PATH variable of the shell at that moment. If you then modify $PATH by loading or unloading modules and afterwards deactivate the virtual environment, $PATH will be reset to the value stored in the virtual environment. Trying to use those modules will then lead to errors:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          The solution is to only modify modules when not in a virtual environment.

          "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

          Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

          This documentation only covers aspects of using Singularity on the infrastructure.

          "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to avoid that the use of Singularity impacts other users on the system.

          The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.
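
          A minimal sketch of a compliant workflow, assuming you already have an image file (the name container.sif and its original location are hypothetical): copy it to your scratch directory first and run it from there:

          $ cp /path/to/container.sif $VSC_SCRATCH/\n$ singularity exec $VSC_SCRATCH/container.sif cat /etc/os-release\n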

          If these limitations are a problem for you, please let us know via .

          "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

          Creating new Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the infrastructure. However, if you use the --fakeroot option, you can make new Singularity images or convert Docker images.

          When you create Singularity images or convert Docker images, some restrictions apply:

          • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, such as the /tmp or /local directories. Once the image is created, you should move it to your desired destination (see the sketch below).
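
          A minimal sketch of this workflow, assuming you have a definition file named myimage.def (a hypothetical name): build the image in a globally writable location and then move it to your scratch directory:

          $ singularity build --fakeroot /tmp/myimage.sif myimage.def\n$ mv /tmp/myimage.sif $VSC_SCRATCH/\n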
          "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

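          A sketch of this copy step, assuming the tutorial images use the .img extension (adjust the pattern to the actual image file):

          $ cp /apps/gent/tutorials/Singularity/*.img $VSC_SCRATCH/\n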

          Create a job script like:
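
          A minimal sketch of such a job script, assuming the image was copied to $VSC_SCRATCH as container.img (a hypothetical name) and that myscript.sh (created below) is located in the directory from which you submit the job:

          #!/bin/bash\n#PBS -N singularity_example\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\n\n# Run our own script inside the container (the image name is an assumption)\nsingularity exec $VSC_SCRATCH/container.img bash ./myscript.sh\n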

          Create an example myscript.sh:

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n

          "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself

          Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

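          A sketch of the copy step (the image file name is an assumption; adjust it to the actual Tensorflow image):

          $ cp /apps/gent/tutorials/Tensorflow.img $VSC_SCRATCH/\n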

          You can download linear_regression.py from the official Tensorflow repository.
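
          Once the image and the script are in place, running it could look like this (the image file name is an assumption):

          $ singularity exec $VSC_SCRATCH/Tensorflow.img python linear_regression.py\n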

          "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before singularity execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

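          Again a sketch of the copy step, with the image file name pattern as an assumption:

          $ cp /apps/gent/tutorials/Singularity/*.img $VSC_SCRATCH/\n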

          For example, to compile an MPI example:

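          A sketch of what the compilation could look like; the image and source file names are assumptions, and this assumes an MPI compiler wrapper (mpicc) is reachable inside the container, with the required modules loaded beforehand and C_INCLUDE_PATH set as described above:

          $ singularity exec $VSC_SCRATCH/mpi_container.img mpicc mpi_hello.c -o mpi_hello\n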

          Example MPI job script:
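
          Since the original example is not reproduced here, below is only a heavily simplified sketch; the module name, image name, executable and resource requests are all assumptions, and the exact MPI launch method depends on the MPI library being used:

          #!/bin/bash\n#PBS -N mpi_container_example\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=01:00:00\n\n# Load the required MPI module on the host before singularity execution (module name is an example)\nmodule load foss/2023a\n\ncd $PBS_O_WORKDIR\n\n# Launch one MPI process per requested core; each process runs inside the container\nmpirun singularity exec $VSC_SCRATCH/mpi_container.img ./mpi_hello\n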

          "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

          The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

          As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

          In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

          In order to prepare things, make a teaching request by contacting the UAntwerpen-HPC with the following information (explained further below):

          • Title and nickname
          • Start and end date for your course or training
          • VSC-ids of all teachers/trainers
          • Participants based on UGent Course Code and/or list of VSC-ids
          • Optional information
            • Additional storage requirements
              • Shared folder
              • Groups folder for collaboration
              • Quota
            • Reservation for resource requirements beyond the interactive cluster
            • Ticket number for specific software needed for your course/training
            • Details for a custom Interactive Application in the webportal

          In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

          Please make these requests well in advance, several weeks before the start of your course/workshop.

          "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

          The title of the course or training can be used in e.g. reporting.

          The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

          When choosing the nickname, try to make it unique, but this is neither enforced nor checked.

          "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

          The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

          The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

          • Course group and subgroups will be deactivated
          • Residual data in the course directories will be archived or deleted
          • Custom Interactive Applications will be disabled
          "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

          A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also member of this group).

          This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

          Provide us with a list of all the VSC-ids of the teachers or trainers to identify the moderators.

          "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

          The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

          "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

          Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

          The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

          Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

          A course group will be automatically created for your course, with all VSC accounts of registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

          "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

          (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as members. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

          "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

          For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

          This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

          Every course directory will always contain the folders:

          • input
            • ideally suited to distribute input data such as common datasets
            • moderators have read/write access
            • group members (students) only have read access
          • members
          • this directory contains a personal folder for every student in your course (members/vsc<01234>)
            • only this specific VSC-id will have read/write access to this folder
            • moderators have read access to this folder
          "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

          Optionally, we can also create these folders:

          • shared
            • this is a folder for sharing files between any and all group members
            • all group members and moderators have read/write access
          • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
          • groups
            • a number of groups/group_<01> folders are created under the groups folder
            • these folders are suitable if you want to let your students collaborate closely in smaller groups
            • each of these group_<01> folders are owned by a dedicated group
            • teachers are automatically made moderators of these dedicated groups
          • moderators can populate these groups with VSC-ids of group members in the VSC accountpage, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
            • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

          If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

          • shared: yes
          • subgroups: <number of (sub)groups>
          "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

          There are 4 quota settings that you can choose in your teaching request, in case the defaults are not sufficient:

          • overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
          • member quota (default: 5 GB volume and 10k files) applies per student/participant

          Course data usage is not counted towards any other quota (like VO quota); it depends solely on these settings.

          "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

          The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date, it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

          "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

          We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

          Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

          Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

          Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

          "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

          In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

          We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

          Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

          "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

          HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

          A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

          If you would like this for your course, provide more details in your teaching request, including:

          • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

          • which cluster you want to use

          • how many nodes/cores/GPUs are needed

          • which software modules you are loading

          • custom code you are launching (e.g. autostart a GUI)

          • required environment variables that you are setting

          • ...

          We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

          A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

          "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

          Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is no longer widely used, so since 2021 the UAntwerpen-HPC no longer uses it in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers would not have to learn new commands to submit and manage jobs.

          "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

          Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user-friendly than Torque/PBS.

          "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

          Jobcli is a Python library that was developed by the UAntwerpen-HPC to make it possible to offer a Torque frontend on top of a Slurm backend. In addition, it adds some extra options to the Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

          "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

          Adding --help to a Torque command when using it on the UAntwerpen-HPC will output an extensive overview of all options supported for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

          For example:

          $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

          "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

          Adding --dryrun to a Torque command when using it on the UAntwerpen-HPC will show you which Slurm command jobcli generates for that Torque command. Using --dryrun will not actually execute the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

          Similarly to --dryrun, adding --debug to a Torque command when using it on the UAntwerpen-HPC will show you which Slurm command jobcli generates for that Torque command. However, in contrast to --dryrun, using --debug will actually run the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

          The following examples illustrate how the --dryrun and --debug options work, using an example job script.

          example.sh:

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

          Running the following command:

          $ qsub --dryrun example.sh -N example\n

          will generate this output:

          Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc20167/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque directives into Slurm directives. For example, the job name is the one we specified with the -N option in the command.

          With this dry run, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related constructs, like $PBS_JOBID, they are retained. Slurm is configured on the UAntwerpen-HPC such that common PBS_* environment variables are defined in the job environment, next to their Slurm equivalents.

          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

          Similarly to the --dryrun example, we start by running the following command:

          $ qsub --debug example.sh -N example\n

          which generates this output:

          DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc20167\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc20167/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc20167/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
          The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

          "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

          Below is a list of the most common and useful directives.

          • -k (All): Send \"stdout\" and/or \"stderr\" to your home directory when the job runs. Example: #PBS -k o, #PBS -k e or #PBS -k oe
          • -l (All): Precedes a resource request, e.g., processors, wallclock.
          • -M (All): Send e-mail messages to an alternative e-mail address. Example: #PBS -M me@mymail.be
          • -m (All): Send an e-mail when a job begins execution and/or ends or aborts. Example: #PBS -m b, #PBS -m be or #PBS -m ba
          • mem (Shared Memory): Specifies the amount of memory you need for a job. Example: #PBS -l mem=90gb
          • mpiprocs (Clusters): Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. Example: #PBS -l mpiprocs=4
          • -N (All): Give your job a unique name. Example: #PBS -N galaxies1234
          • ncpus (Shared Memory): The number of processors to use for a shared memory job. Example: #PBS -l ncpus=4
          • -r (All): Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. Example: #PBS -r n or #PBS -r y
          • select (Clusters): Number of compute nodes to use. Usually combined with the mpiprocs directive. Example: #PBS -l select=2
          • -V (All): Make sure that the environment in which the job runs is the same as the environment in which it was submitted. Example: #PBS -V
          • walltime (All): The maximum time a job can run before being stopped. If not used, a default of a few minutes applies. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS. Example: #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

          TORQUE-related environment variables in batch job scripts.

          # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

          IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

          When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

          • PBS_ENVIRONMENT: set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job.
          • PBS_JOBID: the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat.
          • PBS_JOBNAME: the job name supplied by the user.
          • PBS_NODEFILE: the name of the file that contains the list of nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc.
          • PBS_QUEUE: the name of the queue from which the job is executed.
          • PBS_O_HOME: value of the HOME variable in the environment in which qsub was executed.
          • PBS_O_LANG: value of the LANG variable in the environment in which qsub was executed.
          • PBS_O_LOGNAME: value of the LOGNAME variable in the environment in which qsub was executed.
          • PBS_O_PATH: value of the PATH variable in the environment in which qsub was executed.
          • PBS_O_MAIL: value of the MAIL variable in the environment in which qsub was executed.
          • PBS_O_SHELL: value of the SHELL variable in the environment in which qsub was executed.
          • PBS_O_TZ: value of the TZ variable in the environment in which qsub was executed.
          • PBS_O_HOST: the name of the host on which the qsub command is running.
          • PBS_O_QUEUE: the name of the original queue to which the job was submitted.
          • PBS_O_WORKDIR: the absolute path of the current working directory of the qsub command. This is the most useful one; use it in every job script. The first thing to do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory.
          • PBS_VERSION: version number of TORQUE, e.g., TORQUE-2.5.1.
          • PBS_MOMPORT: active port for the MOM daemon.
          • PBS_TASKNUM: number of tasks requested.
          • PBS_JOBCOOKIE: job cookie.
          • PBS_SERVER: server running TORQUE."}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

          Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can actually be used and how efficiently they can be used. More information on this can be found in the subsections below.

          "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

          When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

          To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your specific programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

          Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.

          Other reasons why using more cores may not lead to a (significant) speedup include:

          • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the number of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program into too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

          • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program cannot be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload. A short worked version of the formula is given after this list.

          • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, 1 thread/process will need to wait until the other one is finished using that resource. When all threads need the same resource, each of them will run slower than it would if it did not have to wait for the others.

          • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of a software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that in Python threads are implemented in a way that multiple threads can not run at the same time, due to the global interpreter lock (GIL). Instead of using multi-threading in Python to speedup a CPU bound program, you should use multi-processing instead, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU bound programs a lot more in Python than threads can do, even though they are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

          • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

          • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
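
          To make the Amdahl's Law example above concrete, here is a small illustrative calculation (using awk, which is introduced later in this documentation): with a parallel fraction p = 0.95 (19 of the 20 hours), the theoretical speedup on n cores is 1/((1-p) + p/n), and it flattens out quickly:

          $ awk 'BEGIN { n=16; p=0.95; print 1/((1-p) + p/n) }'\n9.14286\n$ awk 'BEGIN { n=1000; p=0.95; print 1/((1-p) + p/n) }'\n19.6271\n

          Even with 1000 cores the speedup stays below 20, because the one-hour serial part dominates.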

          More info on running multi-core workloads on the UAntwerpen-HPC can be found here.

          "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

          When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

          Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

          Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there are libraries that do this for you.

          Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

          An example of how you can make beneficial use of multiple nodes can be found here.

          You can also use MPI in Python, some useful packages that are also available on the HPC are:

          • mpi4py
          • Boost.MPI

          We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software we strongly advise using our mympirun tool.
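
          As an illustration only (the module names, resource numbers and program name below are placeholders, not actual recommendations from this documentation), a multi-node MPI job script typically looks something like this:

          #!/bin/bash\n#PBS -l nodes=2:ppn=28\n#PBS -l walltime=01:00:00\n# load the mympirun wrapper and the MPI-enabled software you need\n# (module names are placeholders; check module avail for the real ones)\nmodule load vsc-mympirun\nmodule load MyMPIApplication\ncd $PBS_O_WORKDIR\n# mympirun detects the allocated nodes and cores from the job environment,\n# so you normally do not have to pass -np or a machine file yourself\nmympirun ./my_mpi_program\n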

          "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

          If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

          If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.
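
          One rough way to check is to search the software's help output or manual for options related to threads, cores or MPI (some_program is a placeholder for your own application):

          $ some_program --help 2>&1 | grep -i -E 'thread|core|parallel|mpi'\n$ man some_program | grep -i -E 'thread|core|parallel|mpi'\n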

          "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

          If your job output contains an error message similar to this:

          =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

          This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.
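
          For reference, the walltime is requested via the walltime resource, either in the job script or on the qsub command line (the 2 hours below is just an example value):

          #PBS -l walltime=02:00:00\n

          or equivalently:

          $ qsub -l walltime=02:00:00 fibo.pbs\n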

          "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

          Sometimes a job hangs at some point or it stops writing to the disk. These errors are usually related to quota usage. You may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to the disk and then resubmit the jobs.
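
          To figure out where your quota is going, the standard Linux tools from the quick reference guide can help; for example (the directory is just an illustration):

          $ du -sh $VSC_DATA/* | sort -h    # largest directories are listed last\n

          You can then move or remove the largest directories before resubmitting your jobs.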

          "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

          If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

          If you have errors that look like:

          vsc20167@login.hpc.uantwerpen.be: Permission denied\n

          or you are experiencing problems with connecting, here is a list of things to do that should help:

          1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

          2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

          3. Please double/triple check your VSC login ID. It should look something like vsc20167: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

          4. You previously connected to the UAntwerpen-HPC from another machine, but are now using a different machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

          5. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh. id_rsa.pub is the usual filename of the public key, id_rsa is the usual filename of the private key. (See also section\u00a0Connect, and the example below this list.)

          6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

          7. Please do not use someone else's private keys. You must never share your private key, they're called private for a good reason.
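
          Regarding item 5 above: you can point ssh to a specific private key with the -i option (the key filename here is just an example):

          $ ssh -i ~/.ssh/id_rsa_vsc vsc20167@login.hpc.uantwerpen.be\n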

          If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@uantwerpen.be and include the following information:

          Please add -vvv as a flag to ssh like:

          ssh -vvv vsc20167@login.hpc.uantwerpen.be\n

          and include the output of that command in the message.

          "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

          If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

          You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

          - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- ssh-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

          Do not click \"Yes\" until you verified the fingerprint. Do not press \"No\" in any case.

          If the fingerprint matches, click \"Yes\".

          If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@uantwerpen.be.

          Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

          "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

          If you get errors like:

          $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

          or

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

          It's probably because you transferred the files from a Windows computer. See the section about dos2unix in Linux tutorial to fix this error.

          "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "

          The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

          Make sure the fingerprint in the alert matches one of the following:

          - ssh-rsa 2048 5a:40:9d:2a:f4:b7:6c:87:0d:87:30:07:9d:ea:80:11\n- ssh-rsa 2048 SHA256:W8HRHTZpPd2GIhoNU2xj2oUKhcFr2bjIzZsKzY+1PCA\n- ssh-ecdsa 256 f9:4f:19:9b:fb:40:c5:6c:6f:b9:64:2e:33:0a:8d:26\n- ssh-ecdsa 256 SHA256:DlsP+jFrTqSdr9VquUpDj17Uy99zFdFN/LxVhaQQzbo\n- ssh-ed25519 255 df:0c:61:b9:26:51:0f:b4:ca:43:ac:f6:ee:d2:a1:29\n- ssh-ed25519 255 SHA256:+vlrkJui34B4iumxGVHd447K3W8wzgE1n1h2/Ic0WlE\n

          If it does, press Yes, if it doesn't, please contact hpc@uantwerpen.be.

          Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

          "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

          To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

          Note

          Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

          "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

          If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

          Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

          You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.
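
          For example (the value shown below is only an illustration; 4194304 kilobytes corresponds to 4 GB):

          $ ulimit -v\n4194304\n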

          "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

          See Generic resource requirements to set memory and other requirements, see Specifying memory requirements to finetune the amount of memory you request.
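
          As a minimal illustration (the exact resource names and sensible values for this system are described in the sections linked above; the numbers here are placeholders), a memory request in a Torque/PBS job script can look like this:

          #PBS -l nodes=1:ppn=4\n#PBS -l mem=16gb\n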

          "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

          All the UAntwerpen-HPC clusters run some variant of the \"RedHat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

          vsc20167@ln01[203] $\n

          When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

          Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen joe Text editor

          Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

          $ echo This is a test\nThis is a test\n

          Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

          More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the item or command \"ls\", by trying either of the following:

          $ ls --help \n$ man ls\n$ info ls\n

          (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

          "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

          In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

          Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

          Another very common scripting language is shell scripting, which is what we will use in the examples below.

          In the following examples, each line typically contains one command to be executed, although it is possible to put multiple commands on one line. A very simple example of a script may be:

          echo \"Hello! This is my hostname:\" \nhostname\n

          You can type both lines at your shell prompt, and the result will be the following:

          $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\nln2.leibniz.uantwerpen.vsc\n

          Suppose we want to call this script \"foo\". Open a new file for editing, name it \"foo\", and edit it with your favourite editor:

          $ vi foo\n

          or use the following commands:

          echo \"echo 'Hello! This is my hostname:'\" > foo\necho hostname >> foo\n

          The easiest way to run a script is to start the interpreter and pass the script as a parameter. In the case of our script, the interpreter may either be \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

          $ bash foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

          Congratulations, you just created and started your first shell script!

          A more advanced way of executing your shell scripts is by making them executable on their own, without invoking the interpreter manually. The system can not automatically detect which interpreter you want, so you need to tell it in some way. The easiest way is by using the so-called \"shebang\" notation, explicitly created for this purpose: you put the following line on top of your shell script \"#!/path/to/your/interpreter\".

          You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

          $ which bash\n/bin/bash\n

          We edit our script and change it with this information:

          #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

          Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

          Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

          chmod +x foo\n

          Now you can start your script by simply executing it:

          $ ./foo\nHello! This is my hostname:\nln2.leibniz.uantwerpen.vsc\n

          The same technique can be used for all other scripting languages, like Perl and Python.

          Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

          "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
          at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg Brings a background or suspended job to the foreground jobs Lists the jobs being run in the background kill Terminates a job or process; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Change the access permissions of files and directories"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

          The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

          Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

          To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

          Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

          Through this web portal, you can:

          • browse through the files & directories in your VSC account, and inspect, manage or change them;

          • consult active jobs (across all HPC-UGent Tier-2 clusters);

          • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

          • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

          • open a terminal session directly in your web browser;

          More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

          "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

          All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

          "}, {"location": "web_portal/#login", "title": "Login", "text": "

          When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

          "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

          The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

          Please click \"Authorize\" here.

          This request will only be made once, you should not see this again afterwards.

          "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

          Once logged in, you should see this start page:

          This page includes a menu bar at the top. The buttons on the left provide access to the different features supported by the web portal; on the top right you find a Help menu, your VSC account name, and a Log Out button. The page also shows the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

          If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

          "}, {"location": "web_portal/#features", "title": "Features", "text": "

          We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

          "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

          Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

          The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

          Here you can:

          • Click a directory in the tree view on the left to open it;

          • Use the buttons on the top to:

            • go to a specific subdirectory by typing in the path (via Go To...);

            • open the current directory in a terminal (shell) session (via Open in Terminal);

            • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

            • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

            • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

            • show the owner and permissions in the file listing (via Show Owner/Mode);

          • Double-click a directory in the file listing to open that directory;

          • Select one or more files and/or directories in the file listing, and:

            • use the View button to see the contents (use the button at the top right to close the resulting popup window);

            • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

            • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

            • use the Download button to download the selected files and directories from your VSC account to your local workstation;

            • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

            • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

            • use the Delete button to (permanently!) remove the selected files and directories;

          For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

          "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

          Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

          For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

          "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

          To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

          A new browser tab will be opened that shows all your current queued and/or running jobs:

          You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

          Jobs that are still queued or running can be deleted using the red button on the right.

          Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

          For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

          "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

          To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

          This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

          You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

          Don't forget to actually submit your job to the system via the green Submit button!

          "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

          In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

          "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

          Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

          Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

          To exit the shell session, type exit followed by Enter and then close the browser tab.

          Note that you can not access a shell session after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

          "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

          To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

          You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

          Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

          To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

          "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

          See dedicated page on Jupyter notebooks

          "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

          In case of problems with the web portal, it could help to restart the web server running in your VSC account.

          You can do this via the Restart Web Server button under the Help menu item:

          Of course, this only affects your own web portal session (not those of others).

          "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
          • ABAQUS for CAE course
          "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

          X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

          1. A graphical remote desktop that works well over low bandwidth connections.

          2. Copy/paste support from client to server and vice-versa.

          3. File sharing from client to server.

          4. Support for sound.

          5. Printer sharing from client to server.

          6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

          "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

          X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

          X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. This section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

          "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

          After the X2Go client installation just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

          There are two ways to connect to the login node:

          • Option A: A direct connection to \"login.hpc.uantwerpen.be\". This is the simpler option, the system will decide which login node to use based on a load-balancing algorithm.

          • Option B: You can use the node \"login.hpc.uantwerpen.be\" as SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

          "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

          This is the easier way to setup X2Go, a direct connection to the login node.

          1. Include a session name. This will help you to identify the session if you have more than one, you can choose any name (in our example \"HPC login node\").

          2. Set the login hostname (In our case: \"login.hpc.uantwerpen.be\")

          3. Set the Login name. In the example it is \"vsc20167\", but you must change it to your own VSC account name.

          4. Set the SSH port (22 by default).

          5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

            1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

            2. You should look for your private SSH key generated in Generating a public/private key pair. This file has been stored in the directory \"~/.ssh/\" (by default \"id_rsa\"). \".ssh\" is a hidden directory, so the Finder will not show it by default. The easiest way to access the folder is by pressing cmd+shift+g, which will allow you to enter the name of the directory you would like to open in Finder. Here, type \"~/.ssh\" and press enter. Choose that file and click on open.

          6. Check \"Try autologin\" option.

          7. Choose Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

            1. [optional]: Set a single application like Terminal instead of XFCE desktop.

          8. [optional]: Change the session icon.

          9. Click the OK button after these changes.

          "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

          This option is useful if you want to resume a previous session or if you want to set explicitly the login node to use. In this case you should include a few more options. Use the same Option A setup but with these changes:

          1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

          2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"ln2.leibniz.uantwerpen.vsc\")

          3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

            1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

            2. Set Host to \"login.hpc.uantwerpen.be\" within \"Proxy Server\" section as well.

            3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key within \"RSA/DSA key\" field within \"Proxy Server\" as you did for the server configuration (The \"RSA/DSA key\" field must be set in both sections)

            4. Click the OK button after these changes.

          "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

          Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. It is possible to terminate a session if you log out from the currently open session or if you click on the \"shutdown\" button from X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

          X2Go will keep the session open for you (but only if the login node is not rebooted).

          "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

          If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

          hostname\n

          This will give you the full hostname of the login node (like \"ln2.leibniz.uantwerpen.vsc\", but the hostname in your situation may be slightly different). You should use the same name to resume the session the next time. Just add this full hostname into the \"login hostname\" section in your X2Go session (see Option B: use the login node as SSH proxy).

          "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

          If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL) and start the X2Go session. A window will pop up, and you should see that a session is running. Select that session and terminate it. Then close the session, choose the XFCE session type again (or whatever you use), and you should get your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

          "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

          The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

          To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

          Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

          After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

          Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

          "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

          TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

          Loads MNIST datasets and trains a neural network to recognize hand-written digits.

          Runtime: ~1 min. on 8 cores (Intel Skylake)

          See https://www.tensorflow.org/tutorials/quickstart/beginner

          "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

          Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

          These skills are important to the UAntwerpen-HPC, which operates on RedHat Enterprise Linux. For more information see introduction to HPC.

          The guide aims to make you familiar with the Linux command line environment quickly.

          The tutorial goes through the following steps:

          1. Getting Started
          2. Navigating
          3. Manipulating files and directories
          4. Uploading files
          5. Beyond the basics

          Do not forget Common pitfalls, as this can save you some troubleshooting.

          "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
          • More on the HPC infrastructure.
          • Cron Scripts: run scripts automatically at periodically fixed times, dates, or intervals.
          "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

          Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

          "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

          To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

          First, it's important to make a distinction between two different output channels:

          1. stdout: standard output channel, for regular output

          2. stderr: standard error channel, for errors and warnings

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

          > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

          $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

          >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

          $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

          < feeds the contents of a file to a command's standard input, as if you had typed it in. So you would use this to simulate typing into a terminal. < somefile.txt is largely equivalent to cat somefile.txt |.

          One common use might be to take the results of a long-running command and store the results in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in that file list when you need it:

          $ find . -name '*.txt' > files\n$ xargs grep banana < files\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

          To redirect the stderr output (warnings, messages), you can use 2>, just like >

          $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

          To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

          $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

          Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

          $ ls | wc -l\n    42\n

          A common pattern is to pipe the output of a command to less so you can examine or search the output:

          $ find . | less\n

          Or to look through your command history:

          $ history | less\n

          You can put multiple pipes in the same line. For example, which cp commands have we run?

          $ history | grep cp | less\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

          The shell will expand certain things, including:

          1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

          2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

          3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

          4. square brackets can be used to list a number of options for a particular character position; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.

          "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

          ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

          $ ps -fu $USER\n

          To see all the processes:

          $ ps -elf\n

          To see all the processes in a forest view, use:

          $ ps auxf\n

          The last two will spit out a lot of data, so get in the habit of piping it to less.

          pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

          pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.

          "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

          ps isn't very useful unless you can manipulate the processes. We do this using the kill command. Kill will send a message (SIGTERM by default) to the process to ask it to stop.

          $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

          Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignored your signal, you can send it a different message (SIGKILL) which the OS will use to unceremoniously terminate the process:

          $ kill -9 1234\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

          top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

          To see only your processes, type u and your username after starting top, (you can also do this with top -u $USER ). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

          There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

          To exit top, use q (for 'quit').

          For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

          "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

          ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

          $ ulimit -a\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

          To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

          $ wc example.txt\n      90     468     3189   example.txt\n

          The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

          To only count the number of lines, use wc -l:

          $ wc -l example.txt\n      90    example.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

          grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

          $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

          grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

          "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

          cut is used to pull fields out of files or piped streams. It's a useful glue when you mix it with grep because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from an (unquoted) CSV (comma-separated values, so -d ',': delimited by ,) file, you can use the following:

          $ cut -f 1 -d ',' mydata.csv\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

          sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

          $ sed 's/oldtext/newtext/g' myfile.txt\n

          By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
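
          If you do use -i, it is safer to let sed keep a backup of the original file; GNU sed (the default on these Linux systems) accepts a backup suffix right after -i:

          $ sed -i.bak 's/oldtext/newtext/g' myfile.txt   # original is kept as myfile.txt.bak\n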

          "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

          awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

          First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

          $ awk '{print $4}' mydata.dat\n

          You can use -F ':' to change the delimiter (F for field separator).

          The next example is used to sum numbers from a field:

          $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

          The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script that does the same. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

          However, there are some rules you need to abide by.

          Here is a very detailed guide should you need more information.

          "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

          The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can copy-paste this line as you need not worry about it further. It is however very important this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

          #!/bin/sh\n
          #!/bin/bash\n
          #!/usr/bin/env bash\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

          Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

          if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n

          Or you only want to do something if a file exists:

          if [ -f filename ]\nthen\necho \"it exists\"\nfi\n

          Or only if a certain variable is bigger than one:

          if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n

          Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then has to start on a new line (or be preceded by a semicolon). It is best to just copy this example and modify it.

          In the initial example, we used -d to test if a directory existed. There are several more checks.
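
          A few other commonly used checks, written here on a single line using semicolons (see man test for the full list):

          if [ -f file ]; then echo \"regular file exists\"; fi\nif [ -e path ]; then echo \"path exists (file or directory)\"; fi\nif [ -r file ]; then echo \"file is readable\"; fi\nif [ -w file ]; then echo \"file is writable\"; fi\n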

          Another useful example is to test if a variable contains a value (so it's not empty):

          if [ -z \"$PBS_ARRAYID\" ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

          The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

          "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

          Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

          Let's look at a simple example:

          for i in 1 2 3\ndo\necho $i\ndone\n
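
          Loops are not limited to fixed lists of values; a very common pattern is looping over files (a minimal sketch, the file names are placeholders):

          for f in *.txt\ndo\necho \"Processing $f\"\nwc -l \"$f\"\ndone\n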

          "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

          Subcommands are used all the time in shell scripts. What they do is store the output of a command in a variable, so it can later be used in a conditional or a loop, for example.

          CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

          In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
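
          For example, combining a subcommand with a conditional from the sections above (a small sketch):

          NFILES=$(ls | wc -l)\nif [ $NFILES -gt 100 ]\nthen\necho \"This directory contains $NFILES files\"\nfi\n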

          "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

          Sometimes things go wrong: a command or script you ran causes an error. How do you properly deal with these situations?

          First, a useful thing to know for debugging and testing is that you can run any command like this:

          command > output.log 2>&1   # one single output file, both output and errors\n

          If you add > output.log 2>&1 at the end of any command, stdout and stderr will be combined and written to a single file named output.log. Note that the order matters: the 2>&1 part must come after the redirection to the file.

          If you want regular and error output separated you can use:

          command > output.log 2> output.err  # errors in a separate file\n

          this will write regular output to output.log and error output to output.err.

          You can then look for the errors with less or search for specific text with grep.
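
          For example, assuming the output.err file from above:

          $ less output.err\n$ grep -i \"error\" output.err\n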

          In scripts, you can use:

          set -e\n

          This will tell the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failing command will most likely cause the rest of the script to fail as well.
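
          A minimal sketch of a script that uses this (the input.dat file and workdir directory are hypothetical):

          #!/bin/bash\nset -e\ncp input.dat workdir/   # if this copy fails, the script stops here\ncd workdir\ngzip input.dat\n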

          "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

          Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds its exit status. A value other than zero signifies that something went wrong. An example use case:

          command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n
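
          A shorter idiom with the same effect, which you will often encounter, uses the || operator (\"run the second command only if the first one fails\"):

          command_with_possible_error || echo \"something went wrong\"\n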

          "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

          If you want certain commands to be executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

          Examples include:

          • modifying your $PS1 (to tweak your shell prompt)

          • printing information about the current session/job environment (echoing environment variables, etc.)

          • selecting a specific cluster to run on with module swap cluster/...

          Some recommendations:

          • Avoid using module load statements in your $HOME/.bashrc file

          • Don't directly edit your .bashrc file: if there's an error in it, you might not be able to log in again. To prevent that, make your changes in another file first, test them, and only copy them over to .bashrc once they work (see the sketch below).
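
          As a small sketch of what such (tested!) additions could look like; the prompt format and the message are just examples:

          export PS1=\"\\u@\\h \\w $ \"               # tweak the shell prompt\necho \"Logged in on $HOSTNAME as $USER\"   # print some session info\n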

          "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

          When writing scripts to be submitted to the cluster, there are some tricks you need to keep in mind.

          "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
          "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

          The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

          This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

          #PBS -l nodes=1:ppn=1 # single-core\n

          For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

          #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

          We intend to submit it on the long queue:

          #PBS -q long\n

          We request a total running time of 48 hours (2 days).

          #PBS -l walltime=48:00:00\n

          We specify a desired name of our job:

          #PBS -N FreeSurfer_per_subject-time-longitudinal\n
          This specifies mail options:
          #PBS -m abe\n

          1. a means mail is sent when the job is aborted.

          2. b means mail is sent when the job begins.

          3. e means mail is sent when the job ends.

          Joins error output with regular output:

          #PBS -j oe\n

          All of these options can also be specified on the command line and will override any pragmas present in the script.
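
          For example, to override the walltime requested in the script when submitting it (jobscript.sh is a placeholder name):

          $ qsub -l walltime=1:00:00 jobscript.sh\n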

          "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
          1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

          2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

          3. How many files and directories are in /tmp?

          4. What's the name of the 5th file/directory in alphabetical order in /tmp?

          5. List all files that start with t in /tmp.

          6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

          7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

          "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

          This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

          "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

          If you receive an error message which contains something like the following:

          No such file or directory\n

          It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

          Try and figure out the correct location using ls, cd and using the different $VSC_* variables.
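
          For example, to check where you currently are and what is available there:

          $ pwd\n$ ls\n$ echo $VSC_DATA\n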

          "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

          Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

          $ cat some file\nNo such file or directory 'some'\n

          Spaces are permitted; however, they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

          $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

          This is especially error-prone if you are piping results of find:

          $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

          This can be worked around using the -print0 flag:

          $ find . -type f -print0 | xargs -0 cat\n...\n

          But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

          "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

          If you use a command like rm -r with environment variables, you need to be careful to make sure that the environment variable exists. If you mistype an environment variable, it will resolve to an empty string. This means the following command resolves to rm -r ~/*, which will remove every file in your home directory!

          $ rm -r ~/$PROJETC/*\n
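
          A defensive sketch using bash's ${VAR:?} expansion (with a hypothetical, correctly spelled PROJECT variable), which aborts the command instead of silently expanding to an empty string:

          $ rm -r ~/\"${PROJECT:?is not set}\"/*\n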

          "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

          A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

          $ #rm -r ~/$POROJETC/*\n
          Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

          "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
          $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

          Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

          $ chmod +x script.sh\n
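
          You can then verify that the permissions were updated:

          $ ls -l script.sh\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n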

          "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

          If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

          If you need help about a certain command, you should consult its so-called \"man page\":

          $ man command\n

          This will open the manual of this command. The manual contains a detailed explanation of all the options the command has. Exiting the manual is done by pressing 'q'.

          Don't be afraid to contact hpc@uantwerpen.be. They are here to help and will do so for even the smallest of problems!

          "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
          1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

          2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

          3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

          4. basic shell usage

          5. Bash for beginners

          6. MOOC

          Please don't hesitate to contact hpc@uantwerpen.be in case of questions or problems.

          "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

          To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

          You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

          Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

          "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

          To get help:

          1. use the documentation available on the system, through the help, info and man commands (use q to exit).
            help cd \ninfo ls \nman cp \n
          2. use Google

          3. contact hpc@uantwerpen.be in case of problems or questions (even for basic things!)

          "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

          Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@uantwerpen.be.

          "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

          The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

          You use the shell by executing commands, and hitting <enter>. For example:

          $ echo hello \nhello \n

          You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

          To go through previous commands, use <up> and <down>, rather than retyping them.

          "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

          A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

          $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

          "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

          If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

          "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

          At the prompt we also have access to shell variables, which have both a name and a value.

          They can be thought of as placeholders for things we need to remember.

          For example, to print the path to your home directory, we can use the shell variable named HOME:

          $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

          This prints the value of this variable.

          "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

          There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

          For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

          $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

          You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

          $ env | sort | grep VSC\n

          But we can also define our own. This is done with the export command (note: by convention, variable names are written in all-caps):

          $ export MYVARIABLE=\"value\"\n

          It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

          If we then do

          $ echo $MYVARIABLE\n

          this will output value. Note that the quotes are not included; they were only used when defining the variable, to escape potential spaces in the value.

          "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

          You can change what your prompt looks like by redefining the special-purpose variable $PS1.

          For example: to include the current location in your prompt:

          $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

          Note that ~ is a short representation of your home directory.

          To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

          $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

          "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

          One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

          This may lead to surprising results, for example:

          $ export WORKDIR=/tmp/test \n$ cd $WROKDIR \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

          To understand what's going on here, see the section on cd below.

          The moral here is: be very careful to not use empty variables unintentionally.

          Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

          The -e option will result in the script getting stopped if any command fails.

          The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)
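
          A minimal sketch of how this looks at the top of a job script (the WORKDIR usage is just an example):

          #!/bin/bash\nset -e -u\necho \"Using work directory: $WORKDIR\"   # with -u, the script stops here if WORKDIR was never defined\n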

          More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

          "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

          If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

          "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

          Basic information about the system you are logged into can be obtained in a variety of ways.

          We limit ourselves to determining the hostname:

          $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

          And querying some basic information about the Linux kernel:

          $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

          "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
          • Print the full path to your home directory
          • Determine the name of the environment variable to your personal scratch directory
          • What's the name of the system you're logged into? Is it the same for everyone?
          • Figure out how to print the value of a variable without including a newline
          • How do you get help on using the man command?

          The next chapter teaches you how to navigate.

          "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

          Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the UAntwerpen-HPC for a list of available locations.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

          Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

          To figure out where your quota is being spent, the du (\"disk usage\") command can come in useful:

          $ du -sh test\n59M test\n

          Do not (frequently) run du on directories where large amounts of data are stored, since that will:

          1. take a long time

          2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

          Software is provided through so-called environment modules.

          The most commonly used commands are:

          1. module avail: show all available modules

          2. module avail <software name>: show available modules for a specific software name

          3. module list: show list of loaded modules

          4. module load <module name>: load a particular module

          More information is available in section Modules.
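
          A typical sequence could look like this (the Python version shown is only an example; check which versions are actually available first):

          $ module avail Python\n$ module load Python/3.6.4-intel-2018a\n$ module list\n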

          "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

          To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

          Detailed information is available in section submitting your job.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

          Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

          Hint: python -c \"print(sum(range(1, 101)))\"

          • How many modules are available for Python version 3.6.4?
          • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
          • Which cluster modules are available?

          • What's the full path to your personal home/data/scratch directories?

          • Determine how large your personal directories are.
          • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

          Being able to manage your data is an important part of using the HPC infrastructure. The bread-and-butter commands for doing this are mentioned here. They might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands be short to type.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

          To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

          $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

          To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
          $ cp source target\n

          This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

          $ cp -r sourceDirectory target\n

          A last more complicated example:

          $ cp -a sourceDirectory target\n

          Here we used the same cp command, but with the -a option, which tells cp to copy the files recursively while preserving timestamps, permissions and other attributes (archive mode).

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
          $ mkdir directory\n

          which will create a directory with the given name inside the current directory.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
          $ mv source target\n

          mv will move the source path to the destination path. This works for both directories and files.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

          Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

          $ rm filename\n
          rm will remove a file (rm -rf directory will remove a given directory and every file inside it). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

          You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

          $ rmdir directory\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

          Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

          1. User - a particular user (account)

          2. Group - a particular group of users (may be user-specific group with only one member)

          3. Other - other users in the system

          The permission types are:

          1. Read - For files, this gives permission to read the contents of a file

          2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

          3. Execute - For files, this gives permission to execute the file as though it were a script. For directories, it allows users to open the directory and look at the contents.

          Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

          $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          Here, we see that articleTable.csv is a file (the line begins with -) that has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

          The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx), so that user can look into the directory and add or remove files. Users in the mygroup group can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users have no permissions to look in the directory at all (---).

          Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

          $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

          You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

          You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
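
          One way to do this (a sketch, only adjusting the regular files in the Project_GoldenDragon directory used above):

          $ find Project_GoldenDragon -type f -exec chmod g+w {} \\;\n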

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

          However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

          $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          This will give the user otheruser permissions to write to Project_GoldenDragon

          Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

          Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

          See https://linux.die.net/man/1/setfacl for more information.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

          Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

          $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

          Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

          $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

          Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

          $ unzip myfile.zip\n

          If we would like to make our own zip archive, we use zip:

          $ zip myfiles.zip myfile1 myfile2 myfile3\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

          Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

          You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

          $ tar -xf tarfile.tar\n

          Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

          $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

          Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

          # cp, ln: &lt;source(s)&gt; &lt;target&gt;\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: &lt;target&gt; &lt;source(s)&gt;\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

          If you use tar with the source files first then the first file will be overwritten. You can control the order of arguments of tar if it helps you remember:

          $ tar -c source1 source2 source3 -f tarfile.tar\n
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
          1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

          2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

          3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

          4. Remove the another/test directory with a single command.

          5. Rename test to test2. Move test2/hostname.txt to your home directory.

          6. Change the permission of test2 so only you can access it.

          7. Create an empty job script named job.sh, and make it executable.

          8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

          The next chapter is on uploading files, especially important when using HPC-infrastructure.

          "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

          This chapter serves as a guide to navigating within a Linux shell, giving you essential techniques to traverse directories, which is a very important skill.

          "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

          To print the current directory, use pwd or $PWD:

          $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

          "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

          A very basic and commonly used command is ls, which can be used to list files and directories.

          In its basic usage, it just prints the names of files and directories in the current directory. For example:

          $ ls\nafile.txt some_directory \n

          When provided an argument, it can be used to list the contents of a directory:

          $ ls some_directory \none.txt two.txt\n

          A couple of commonly used options include:

          • detailed listing using ls -l:

            $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • To print the size information in human-readable form, use the -h flag:

            $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • also listing hidden files using the -a flag:

            $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • ordering files by the most recent change using -rt:

            $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

          If you try to use ls on a file that doesn't exist, you will get a clear error message:

          $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
          "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

          To change to a different directory, you can use the cd command:

          $ cd some_directory\n

          To change back to the previous directory you were in, there's a shortcut: cd -

          Using cd without an argument results in returning back to your home directory:

          $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

          "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

          The file command can be used to inspect what type of file you're dealing with:

          $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
          "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

          An absolute filepath starts with / (or a variable whose value starts with /), which is also called the root of the filesystem.

          Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

          A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

          Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

          There are two special relative paths worth mentioning:

          • . is a shorthand for the current directory
          • .. is a shorthand for the parent of the current directory

          You can also use .. when constructing relative paths, for example:

          $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
          "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

          Each file and directory has particular permissions set on it, which can be queried using ls -l.

          For example:

          $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

          The -rw-rw-r-- specifies both the type of file (- for files, d for directories; see the first character), and the permissions for user/group/others:

          1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
          2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read and write (but not execute) permissions
          3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
          4. the 3rd part r-- indicates that other users only have read permissions

          The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

          1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
          2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

          See also the chmod command later in this manual.

          "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

          find will crawl a series of directories and lists files matching given criteria.

          For example, to look for the file named one.txt:

          $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

          To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * by putting it in double quotes, to avoid Bash expanding it (e.g. into afile.txt) before find sees it:

          $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

          A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
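
          For instance, a small sketch that counts the lines of every .txt file that is found:

          $ find . -name \"*.txt\" -exec wc -l {} \\;\n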

          "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
          • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
          • When was your home directory created or last changed?
          • Determine the name of the last changed file in /tmp.
          • See how home directories are organised. Can you access the home directory of other users?

          The next chapter will teach you how to interact with files and directories.

          "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

          To transfer files from and to the HPC, see the section about transferring files of the HPC manual

          "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

          After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

          For example, you may see an error when submitting a job script that was edited on Windows:

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

          To fix this problem, you should run the dos2unix command on the file:

          $ dos2unix filename\n
          "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

          As we end up in the home directory when connecting, it would be convenient if we could easily access our data and scratch storage from there. To facilitate this, we will create symlinks to them in our home directory. This will create two symbolic links (they're like \"shortcuts\" on your desktop) pointing to the respective storage locations:

          $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
          "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

          Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ stands for the Control key, so ^O means Ctrl-O. The main commands are:

          1. Open (\"Read\"): ^R

          2. Save (\"Write Out\"): ^O

          3. Exit: ^X

          More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

          "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

          rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

          You will need to run rsync from a computer where it is installed. Installing rsync is the easiest on Linux: it comes pre-installed with a lot of distributions.

          For example, to copy a folder with lots of CSV files:

          $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

          will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section).

          The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

          To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.

          To copy files to your local computer, you can also use rsync:

          $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
          This will copy the folder bioset and its contents on $VSC_DATA to a local folder named local_folder.

          See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

          "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
          1. Download the file /etc/hostname to your local computer.

          2. Upload a file to a subdirectory of your personal $VSC_DATA space.

          3. Create a file named hello.txt and edit it using nano.

          Now you have a basic understanding, see next chapter for some more in depth concepts.

          "}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          Or when you want to check whether some specific software, some compiler or some application (e.g., LAMMPS) is installed on the UAntwerpen-HPC.

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not be aware of the capital letters in the module name, we searched for the name case-insensitively using the \"-i\" option.

          "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": ""}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          Or when you want to check whether some specific software, some compiler or some application (e.g., LAMMPS) is installed on the UAntwerpen-HPC.

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          As you may not be aware of the capital letters in the module name, we searched for the name case-insensitively using the \"-i\" option.

          "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          Or when you want to check whether some specific software, some compiler or some application (e.g., MATLAB) is installed on the UAntwerpen-HPC.

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
          "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

          (more info soon)

          "}]} \ No newline at end of file diff --git a/HPC/Antwerpen/macOS/sitemap.xml.gz b/HPC/Antwerpen/macOS/sitemap.xml.gz index 08869d73c70..3f3b09f9049 100644 Binary files a/HPC/Antwerpen/macOS/sitemap.xml.gz and b/HPC/Antwerpen/macOS/sitemap.xml.gz differ diff --git a/HPC/Antwerpen/macOS/useful_linux_commands/index.html b/HPC/Antwerpen/macOS/useful_linux_commands/index.html index 96f145dab3a..2d3534b062e 100644 --- a/HPC/Antwerpen/macOS/useful_linux_commands/index.html +++ b/HPC/Antwerpen/macOS/useful_linux_commands/index.html @@ -1284,7 +1284,7 @@

          How to get started with shell scr
          $ vi foo
           

          or use the following commands:

          -
          echo "echo Hello! This is my hostname:" > foo
          +
          echo "echo 'Hello! This is my hostname:'" > foo
           echo hostname >> foo
           

          The easiest ways to run a script is by starting the interpreter and pass @@ -1309,7 +1309,9 @@

          How to get started with shell scr /bin/bash

          We edit our script and change it with this information:

          -
          #!/bin/bash echo \"Hello! This is my hostname:\" hostname
          +
          #!/bin/bash
          +echo "Hello! This is my hostname:"
          +hostname
           

          Note that the "shebang" must be the first line of your script! Now the operating system knows which program should be started to run the diff --git a/HPC/Antwerpen/sitemap.xml.gz b/HPC/Antwerpen/sitemap.xml.gz index 9e76ef8d6be..7054d92c46e 100644 Binary files a/HPC/Antwerpen/sitemap.xml.gz and b/HPC/Antwerpen/sitemap.xml.gz differ diff --git a/HPC/Gent/Linux/search/search_index.json b/HPC/Gent/Linux/search/search_index.json index 5bb384cf6b6..11e5ffc29a3 100644 --- a/HPC/Gent/Linux/search/search_index.json +++ b/HPC/Gent/Linux/search/search_index.json @@ -1 +1 @@ -{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the HPC-UGent documentation", "text": "

          Use the menu on the left to navigate, or use the search box on the top right.

          You are viewing documentation intended for people using Linux.

          Use the OS dropdown in the top bar to switch to a different operating system.

          Quick links

          • Getting Started | Getting Access
          • Recording of HPC-UGent intro
          • Linux Tutorial
          • Hardware overview
          • Migration of cluster and login nodes to RHEL9 (starting Sept'24)
          • FAQ | Troubleshooting | Best practices | Known issues

          If you find any problems in this documentation, please report them by mail to hpc@ugent.be or open a pull request.

          If you still have any questions, you can contact the HPC-UGent team.

          "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": "

          New users should consult the Introduction to HPC to get started, which is a great resource for learning the basics, troubleshooting, and looking up specifics.

          If you want to use software that's not yet installed on the HPC, send us a software installation request.

          Overview of HPC-UGent Tier-2 infrastructure

          "}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

          An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.

          See also: Running batch jobs.

          "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

          When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

          Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

          Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.

          If the package or library you want is not available, send us a software installation request.

          "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

          Modules each come with a suffix that describes the toolchain used to install them.

          Examples:

          • AlphaFold/2.2.2-foss-2021a

          • tqdm/4.61.2-GCCcore-10.3.0

          • Python/3.9.5-GCCcore-10.3.0

          • matplotlib/3.4.2-foss-2021a

          Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

          The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          You can use module avail [search_text] to see which versions on which toolchains are available to use.

          If you need something that's not available yet, you can request it through a software installation request.

          It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

          "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

          When incompatible modules are loaded, you might encounter an error like this:

          Lmod has detected the following error: A different version of the 'GCC' module\nis already loaded (see output of 'ml').\n

          You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

          Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

          An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          See also: How do I choose the job modules?

          "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

          The 72 hour walltime limit will not be extended. However, you can work around this barrier:

          • Check that all available resources are being used. See also:
            • How many cores/nodes should I request?.
            • My job is slow.
            • My job isn't using any GPUs.
          • Use a faster cluster.
          • Divide the job into more parallel processes.
          • Divide the job into shorter processes, which you can submit as separate jobs.
          • Use the built-in checkpointing of your software.
          "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

          Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

          When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

          Try requesting a bit more memory than your proportional share, and see if that solves the issue.

          See also: Specifying memory requirements.

          "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

          When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the amount of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

          It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

          See also: Running interactive jobs.

          "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

          Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

          Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

          See also: HPC-UGent GPU clusters.

          "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

          There are a few possible causes why a job can perform worse than expected.

          Is your job using all the available cores you've requested? You can test this by increasing and decreasing the number of cores: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

          Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

          Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: the job can copy the input to the scratch directory, then execute the computations, and finally copy the output back to the data directory. Using the home and data directories is especially problematic when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
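
          A minimal job script sketch of this staging pattern (the program and file names are placeholders for illustration):

          #!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n# stage input data from the (slow) data directory to the fast scratch filesystem\ncp $VSC_DATA/input.dat $VSC_SCRATCH/\ncd $VSC_SCRATCH\n# run the computation on the fast filesystem\n./my_program input.dat > output.dat\n# copy the results back to the data directory\ncp output.dat $VSC_DATA/\n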

          "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

          Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

          To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
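
          A minimal MPI job script sketch that combines these points (the executable name ./mpi_program is a placeholder); submit it with qsub, e.g. qsub mpi_job.sh:

          #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=1:00:00\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\n# mympirun picks up the requested resources and starts the MPI processes accordingly\nmympirun ./mpi_program\n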

          See also: Multi core jobs/Parallel Computing and Mympirun.

          "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

          For example, we have a simple script (./hello.sh):

          #!/bin/bash \necho \"hello world\"\n

          And we run it like mympirun ./hello.sh --output output.txt.

          To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

          mympirun --output output.txt ./hello.sh\n
          "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

          See the explanation about how jobs get prioritized in When will my job start.

          "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

          When trying to create files, errors like this can occur:

          No space left on device\n

          The error \"No space left on device\" can mean two different things:

          • all available storage quota on the file system in question has been used;
          • the inode limit has been reached on that file system.

          An inode can be seen as a \"file slot\", meaning that when the limit is reached, no additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

          Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
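
          For example, to pack a directory with many small files into a single compressed tar file (mydata is a placeholder for your own directory; only remove the original directory after verifying the archive):

          tar -czf mydata.tar.gz mydata/\ntar -tzf mydata.tar.gz | head    # verify the archive contents\n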

          If the problem persists, feel free to contact support.

          "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

          NO. You are not allowed to share your VSC account with anyone else, it is strictly personal.

          See https://helpdesk.ugent.be/account/en/regels.php.

          If you want to share data, there are alternatives (like a shared directories in VO space, see Virtual organisations).

          "}, {"location": "FAQ/#can-i-share-my-data-with-other-hpc-users", "title": "Can I share my data with other HPC users?", "text": "

          Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

          $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc40000 mygroup      40 Apr 12 15:00 dataset.txt\n
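
          Alternatively, a chmod-based sketch: if you and the other user are both in the same group (mygroup is a hypothetical group name), you can grant read access to that whole group instead. Keep in mind that the other user also needs access to the directories leading up to the file.

          chgrp mygroup dataset.txt\nchmod g+r dataset.txt\n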

          For more information about chmod or setfacl, see Linux tutorial.

          "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

          Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

          "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

          Please fill out the details about the software and why you need it in this form: https://www.ugent.be/hpc/en/support/software-installation-request. When submitting the form, a mail will be sent to hpc@ugent.be containing all the provided information. The HPC team will look into your request as soon as possible and contact you when the installation is done or if further information is required.

          If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
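
          A minimal sketch of installing a Python package in a virtual environment (the Python module version and the package name mypackage are placeholders; pick a Python module that matches the toolchain of the other modules you use):

          # pick a specific Python version (see 'module avail Python')\nmodule load Python/3.10.4-GCCcore-11.3.0\n# create and activate a virtual environment, e.g. in $VSC_DATA\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\n# install the package inside the virtual environment\npip install mypackage\n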

          "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

          On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

          MacOS & Linux (on Windows, only the second part is shown):

          @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

          Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

          "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

          A Virtual Organisation consists of a number of members and moderators. A moderator can:

          • Manage the VO members (but can't access/remove their data on the system).

          • See how much storage each member has used, and set limits per member.

          • Request additional storage for the VO.

          One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

          See also: Virtual Organisations.

          "}, {"location": "FAQ/#my-ugent-shared-drives-dont-show-up", "title": "My UGent shared drives don't show up", "text": "

          After mounting the UGent shared drives with kinit your_email@ugent.be, you might not see an entry with your username when listing ls /UGent. This is normal: try ls /UGent/your_username or cd /UGent/your_username, and you should be able to access the drives. Be sure to use your UGent username and not your VSC username here.

          See also: Your UGent home drive and shares.

          "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

          Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

          du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

          With --max-depth 1, the du command returns the size of every file and subdirectory directly under the $VSC_HOME directory. This output is then piped into egrep to filter out the lines that matter most.

          The egrep command only lets through the entries that match the regular expression [0-9]{3}M|[0-9]G, i.e., files and directories that consume 100 MB or more.
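
          An alternative sketch that also covers hidden files and directories and sorts the entries by size (largest last):

          du -sh $VSC_HOME/* $VSC_HOME/.[!.]* 2>/dev/null | sort -h | tail -n 10\n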

          "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

          By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

          You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

          "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

          When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

          sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

          A lot of tasks can be performed without sudo, including installing software in your own account.

          Installing software

          • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
          • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
          "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

          Who can I contact?

          • General questions regarding HPC-UGent and VSC: hpc@ugent.be

          • HPC-UGent Tier-2: hpc@ugent.be

          • VSC Tier-1 compute: compute@vscentrum.be

          • VSC Tier-1 cloud: cloud@vscentrum.be

          "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

          Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

          "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

          The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

          "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

          Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

          module load hod\n
          "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

          The hod modules are constructed such that they can be used on the HPC-UGent infrastructure login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

          As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

          For example, this will work as expected:

          $ module swap cluster/donphan\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

          Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

          "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

          The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

          $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

          By defining these environment variables, you do not have to specify --hod-module and --workdir yourself when using hod batch or hod create, even though these options are strictly required.

          If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
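
          For example, a minimal sketch of pointing HOD to a different parent working directory (the path is just an illustration):

          export HOD_BATCH_WORKDIR=$VSC_SCRATCH/my_hod_workdir\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/my_hod_workdir\n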

          Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

          "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

          After HOD clusters terminate, their local working directory and cluster information is typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

          These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

          You should occasionally clean this up using hod clean:

          $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/doduo(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        123456         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/123456 for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/donphan\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.donphan.gent.vsc &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.donphan.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
          Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

          "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

          If you have any questions, or are experiencing problems using HOD, you have a couple of options:

          • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

          • Contact the HPC-UGent team via hpc@ugent.be

          • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

          "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

          Note

          To run a MATLAB program on the HPC-UGent infrastructure you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

          Compiling MATLAB programs is only possible on the interactive debug cluster, not on the HPC-UGent login nodes where resource limits w.r.t. memory and max. number of processes are too strict.

          "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

          The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

          Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

          Only a limited number of MATLAB sessions can be active at the same time because there are only a limited number of MATLAB research licenses available on the UGent MATLAB license server. If each job needed a license, licenses would quickly run out.

          "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

          Compiling MATLAB code can only be done from the login nodes, because only login nodes can access the MATLAB license server, workernodes on clusters cannot.

          To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

          $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

          After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

          To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

          First, we copy the magicsquare.m example that comes with MATLAB to example.m:

          cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

          To compile a MATLAB program, use mcc -mv:

          mcc -mv example.m\nOpening log file:  /user/home/gent/vsc400/vsc40000/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/home/gent/vsc400/vsc40000/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/home/gent/vsc400/vsc40000/readme.txt\".\nGenerating file \"run_example.sh\".\n
          "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

          To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

          It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

          For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

          "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

          If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

          export _JAVA_OPTIONS=\"-Xmx64M\"\n

          The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

          Another possible issue is that the heap size is too small. This could result in errors like:

          Error: Out of memory\n

          A possible solution to this is by setting the maximum heap size to be bigger:

          export _JAVA_OPTIONS=\"-Xmx512M\"\n
          "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

          MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

          The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers explicitly, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

          You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

          parpool.m
          % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

          See also the parpool documentation.

          "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

          Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

          MATLAB_LOG_DIR=<OUTPUT_DIR>\n

          where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

          # create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\n$ export MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

          You should remove the directory at the end of your job script:

          rm -rf $MATLAB_LOG_DIR\n
          "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

          When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

          The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

          export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

          So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

          "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

          All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

          jobscript.sh
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
          "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

          VNC is still available at the UGent site, but we encourage our users to replace VNC with the X2Go client. Please see Graphical applications with X2Go for more information.

          Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

          Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

          "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

          First log in to a login node (see First time connection to the HPC infrastructure), then start vncserver with:

          $ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'gligar07.gastly.os:6 (vsc40000)' desktop is gligar07.gastly.os:6\n\nCreating default startup script /user/home/gent/vsc400/vsc40000.vnc/xstartup\nCreating default config /user/home/gent/vsc400/vsc40000.vnc/config\nStarting applications specified in /user/home/gent/vsc400/vsc40000.vnc/xstartup\nLog file is /user/home/gent/vsc400/vsc40000.vnc/gligar07.gastly.os:6.log\n

          When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

          Note down the details in bold: the hostname (in the example: gligar07.gastly.os) and the (partial) port number (in the example: 6).

          It's important to remember that VNC sessions are persistent. They survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (like the terminal equivalents screen or tmux). This also means you don't have to start vncserver each time you want to connect.

          "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

          You can get a list of running VNC servers on a node with

          $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

          This only displays the running VNC servers on the login node you run the command on.

          To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

          $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/gligar07.gastly.os:6.pid\n.vnc/gligar08.gastly.os:8.pid\n

          This shows that there is a VNC server running on gligar07.gastly.os on port 5906 and another one running on gligar08.gastly.os on port 5908 (see also Determining the source/destination port).

          "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

          The VNC server runs on a login node (in the example above, on gligar07.gastly.os).

          In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

          Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

          To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

          The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

          "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

          The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

          The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

          So, in our running example, both the source and destination ports are 5906.

          "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

          In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.ugent.be (see Setting up the SSH tunnel(s)).

          If the login node you end up on is a different one than the one where your VNC server is running (i.e., gligar08.gastly.os rather than gligar07.gastly.os in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

          In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

          To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

          Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to gligar07.gastly.os, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

          In practice, if you pick a random number between $10000$ and $30000$, you have a good chance that the port will not be used yet.

          We will proceed with $12345$ as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than $1025$).

          "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcugentbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.ugent.be", "text": "

          First, we will set up the SSH tunnel from our workstation to login.hpc.ugent.be.

          Use the settings specified in the sections above:

          • source port: the port on which the VNC server is running (see Determining the source/destination port);

          • destination host: localhost;

          • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

          Execute the following command to set up the SSH tunnel.

          ssh -L 5906:localhost:12345  vsc40000@login.hpc.ugent.be\n

          Replace the source port 5906, destination port 12345 and user ID vsc40000 with your own!

          With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

          Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

          "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

          Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

          You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

          netstat -an | grep -i listen | grep tcp | grep 12345\n

          If you see no matching lines, then the port you picked is still available, and you can continue.

          If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

          $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
          "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

          In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.ugent.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (gligar07.gastly.os in our running example, see Starting a VNC server).

          To do this, run the following command:

          $ ssh -L 12345:localhost:5906 gligar07.gastly.os\n$ hostname\ngligar07.gastly.os\n

          With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (gligar07.gastly.os).

          Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

          Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (gligar07.gastly.os) in the command shown above!

          As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

          "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

          Download and setup a VNC client. A good choice is tigervnc. You can start it with the vncviewer command.

          Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

          When prompted for a password, use the password you used to set up the VNC server.

          When prompted for default or empty panel, choose default.

          If you have an empty panel, you can reset your settings with the following commands:

          xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
          "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

          The VNC server can be killed by running

          vncserver -kill :6\n

          where 6 is the (partial) port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

          "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

          You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).

          "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

          All users of AUGent can request an account on the HPC, which is part of the Flemish Supercomputing Centre (VSC).

          See HPC policies for more information on who is entitled to an account.

          The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

          There are two methods for connecting to HPC-UGent infrastructure:

          • Using a terminal to connect via SSH.
          • Using the web portal

          The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

          If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of using the HPC-UGent web portal by reading Using the HPC-UGent web portal.

          The HPC-UGent infrastructure clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the HPC. Access to the HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

          "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
          • an SSH public/private key pair can be seen as a lock and a key

          • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

          • the SSH private key is like a physical key: you don't hand it out to other people.

          • anyone who has the key (and the optional password) can unlock the door and log in to the account.

          • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

          Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). Launch a terminal from your desktop's application menu and you will see the bash shell. There are other shells, but most Linux distributions use bash by default.

          "}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

          Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

          \"Secure\" means that:

          1. the User is authenticated to the System; and

          2. the System is authenticated to the User; and

          3. all data is encrypted during transfer.

          OpenSSH is a FREE implementation of the SSH connectivity protocol. Linux comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

          $ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

          To access the clusters and transfer your files, you will use the following commands:

          1. ssh-keygen: to generate the SSH key pair (public + private key);

          2. ssh: to open a shell on a remote machine;

          3. sftp: a secure equivalent of ftp;

          4. scp: a secure equivalent of the remote copy command rcp.

          "}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

          A key pair might already be present in the default location inside your home directory. Therefore, we first check if a key is available with the \"list short\" (\"ls\") command:

          ls ~/.ssh\n

          If a key-pair is already available, you would normally get:

          authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

          Otherwise, the command will show:

          ls: .ssh: No such file or directory\n

          You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

          You will need to generate a new key pair, when:

          1. you don't have a key pair yet

          2. you forgot the passphrase protecting your private key

          3. your private key was compromised

          4. your key pair is too short or not the right type

          For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key, it is private and should stay private. You should not even copy it to one of your other machines, instead, you should create a new public/private key pair for each machine.

          ssh-keygen -t rsa -b 4096\n

          This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

          Without your key pair, you won't be able to apply for a personal VSC account.

          "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

          Most recent Unix derivatives include an SSH agent by default (\"gnome-keyring-daemon\" in most cases) to keep and manage the user's SSH keys. If you use one of these derivatives, you must add the new keys to the SSH agent's keyring to be able to connect to the HPC cluster. If not, the SSH client will display an error message (see Connecting) similar to this:

          Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          This could be fixed using the ssh-add command. You can include the new private keys' identities in your keyring with:

          ssh-add\n

          Tip

          Without extra options, ssh-add adds any key located in the $HOME/.ssh directory, but you can also specify the path to a private key as an argument, for example: ssh-add /path/to/my/id_rsa.

          Check that your key is available from the keyring with:

          ssh-add -l\n

          After these changes the key agent will keep your SSH key to connect to the clusters as usual.

          Tip

          You should execute ssh-add command again if you generate a new SSH key.

          Visit https://wiki.gnome.org/Projects/GnomeKeyring/Ssh for more information.

          "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

          Visit https://account.vscentrum.be/

          You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

          Select \"UGent\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

          Click Confirm

          You will now be taken to the authentication page of your institute.

          You will now have to log in with CAS using your UGent account.

          You either have a login name of maximum 8 characters, or a (non-UGent) email address if you are an external user. In case of problems with your UGent password, please visit: https://password.ugent.be/. After logging in, you may be requested to share your information. Click \"Yes, continue\".

          After you log in using your UGent login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

          This file has been stored in the directory \"~/.ssh/\".

          After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

          "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

          Within one day, you should receive a Welcome e-mail with your VSC account details.

          Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc40000\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

          Now, you can start using the HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

          "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

          In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

          1. Create a new public/private SSH key pair from the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH.

          2. Go to https://account.vscentrum.be/django/account/edit

          3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

          4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

          5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

          "}, {"location": "account/#computation-workflow-on-the-hpc", "title": "Computation Workflow on the HPC", "text": "

          A typical Computation workflow will be:

          1. Connect to the HPC

          2. Transfer your files to the HPC

          3. Compile your code and test it

          4. Create a job script

          5. Submit your job

          6. Wait while

            1. your job gets into the queue

            2. your job gets executed

            3. your job finishes

          7. Move your results

          We'll take you through the different tasks one by one in the following chapters.

          "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

          AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

          See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

          "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

          This chapter focuses specifically on the use of AlphaFold on the HPC-UGent infrastructure. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

          • AlphaFold website: https://alphafold.com/
          • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
          • AlphaFold FAQ: https://alphafold.com/faq
          • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
          • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
          • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
            • recording available on YouTube
            • slides available here (PDF)
            • see also https://www.vscentrum.be/alphafold
          "}, {"location": "alphafold/#using-alphafold-on-hpc-ugent-infrastructure", "title": "Using AlphaFold on HPC-UGent infrastructure", "text": "

          Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

          $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

          To use AlphaFold, you should load a particular module, for example:

          module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

          We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

          Warning

          When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

          Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

          $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

          The directories located there indicate when the data was downloaded, so that this leaves room for providing updated datasets later.

          As of writing this documentation the latest version is 20230310.

          Info

          The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

          The AlphaFold installations we provide have been modified a bit to facilitate the usage on HPC-UGent infrastructure.

          "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

          The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

          export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

          Use newest version

          Do not forget to replace 20230310 with a more up to date version if available.

          "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

          AlphaFold provides a script called run_alphafold.py

          A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

          The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

          Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

          For more information about the script and options see this section in the official README.

          READ README

          It is strongly advised to read the official README provided by DeepMind before continuing.

          "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

          The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

          Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

          Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
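
          For example, a minimal sketch of how this could look in a job script (the value 8 is only an illustration; pick a core count that matches your resource request):

          export ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n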

          Info

          Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

          "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

          The timings below were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

          Using --db_preset=full_dbs, the following runtime data was collected:

          • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
          • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
          • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
          • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

          This highlights a couple of important attention points:

          • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
          • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
          • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

          With --db_preset=casp14, it is clearly more demanding:

          • On doduo, with 24 cores (1 node): still running after 48h...
          • On joltik, 1 V100 GPU + 8 cores: 4h 48min

          This highlights the difference between CPU and GPU performance even more.

          "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

          The following example comes from the official Examples section in the AlphaFold README. The run command is slightly different (see above: Running AlphaFold).

          Do not forget to set up the environment (see above: Setting up the environment).

          "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

          Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

          >sequence_name\n<SEQUENCE>\n

          Then run the following command in the same directory:

          alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

          See AlphaFold output for information about the outputs.

          Info

          For more scenarios see the example section in the official README.

          "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

          The following two example job scripts can be used as a starting point for running AlphaFold.

          The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

          To run the job scripts you need to create a file named T1050.fasta with the following content:

          >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
          source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

          "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

          Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

          Swap to the joltik GPU cluster before submitting it:

          module swap cluster/joltik\n
          AlphaFold-gpu-joltik.sh
          #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
          "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

          Job script that runs AlphaFold on CPU using 24 cores on one node.

          AlphaFold-cpu-doduo.sh
          #!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

          In case of problems or questions, don't hesitate to contact us at hpc@ugent.be.

          "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

          Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

          This documentation only covers aspects of using Apptainer on the HPC-UGent infrastructure.

          "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to prevent Apptainer usage from impacting other users on the system.

          The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the worker node you are using, or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.
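
          As an illustration (myimage.sif is a hypothetical image name): running an image stored on the scratch filesystem works, while the same image stored in $VSC_HOME will be refused:

          # OK: image located on the scratch filesystem\napptainer exec $VSC_SCRATCH/myimage.sif hostname\n# refused: image located on $VSC_HOME\napptainer exec $VSC_HOME/myimage.sif hostname\n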

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know via hpc@ugent.be.

          "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.
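
          For example, a container (here with the hypothetical image name myimage.sif) can directly list or read data from $VSC_DATA:

          apptainer exec $VSC_SCRATCH/myimage.sif ls $VSC_DATA\n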

          "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

          Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the HPC-UGent infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

          Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of making an Apptainer/Singularity container image:

          # prevent Apptainer from using $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
          "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

          We strongly recommend using Docker Hub; see https://hub.docker.com/ for more information.

          "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

          Create a job script like:

          #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

          Create an example my_script.sh (the job script above expects it in your home directory):

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n
          "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
          #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before apptainer execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

          cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

          For example, to compile an MPI example:

          module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

          Example MPI job script:

          #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
          "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
          1. Before starting, you should always check:

            • Are there any errors in the script?

            • Are the required modules loaded?

            • Is the correct executable used?

          2. Check your computer requirements upfront, and request the correct resources in your batch job script.

            • Number of requested cores

            • Amount of requested memory

            • Requested network type

          3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

          4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

          5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

          6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted from with cd $PBS_O_WORKDIR is the first thing that needs to be done. You will have your default environment, so don't forget to load the software with module load (see the minimal job script sketch after this list).

          7. Submit your job and wait (be patient) ...

          8. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

          9. The runtime is limited by the maximum walltime of the queues.

          10. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

          11. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

          12. And above all, do not hesitate to contact the HPC staff at hpc@ugent.be. We're here to help you.
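
          To tie several of these points together, here is a minimal job script sketch; the resource values, module and executable names are placeholders that you should adapt to your own application:

          #!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l mem=4gb\n#PBS -l walltime=12:00:00\n\n# the job starts in $VSC_HOME, so move to the directory you submitted from\ncd $PBS_O_WORKDIR\n\n# load the software you need (placeholder module)\nmodule load foss\n\n# run your application (placeholder executable)\n./my_program\n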

          "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

          All nodes in the HPC cluster are running the \"RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty)\" Operating system, which is a specific version of Red Hat Enterprise Linux. This means that all the software programs (executable) that the end-user wants to run on the HPC first must be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). It also means that you first have to install all the required external software packages on the HPC.

          Most commonly used compilers are already pre-installed on the HPC and can be used straight away. Also, many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

          "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-hpc", "title": "Check the pre-installed software on the HPC", "text": "

          In order to check all the available modules and their version numbers, which are pre-installed on the HPC, enter:

          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          Or, when you want to check whether some specific software, compiler or application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

          When your required application is not available on the HPC, please contact any HPC member. Be aware of potential \"License Costs\". \"Open Source\" software is often preferred.

          "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

          To port a software-program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., Red Hat Enterprise Linux on our HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

          In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

          In some cases, software usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

          Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

          Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

          Porting your code to the RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty) platform is the responsibility of the end-user.

          "}, {"location": "compiling_your_software/#compiling-and-building-on-the-hpc", "title": "Compiling and building on the HPC", "text": "

          Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

          All the HPC nodes run the same version of the Operating System, i.e. RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

          A typical process looks like:

          1. Copy your software to the login-node of the HPC

          2. Start an interactive session on a compute node;

          3. Compile it;

          4. Test it locally;

          5. Generate your job scripts;

          6. Test it on the HPC

          7. Run it (in parallel);

          We assume you've copied your software to the HPC. The next step is to request your private compute node.

          $ qsub -I\nqsub: waiting for job 123456 to start\n
          "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

          Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

          We now list the directory and explore the contents of the \"hello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

          hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include <stdio.h>\n#include <unistd.h>  /* needed for sleep() */\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\nreturn 0;\n}\n

          The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

          We first need to compile this C-file into an executable with the gcc-compiler.

          First, check the command line options for \"gcc\" (GNU C-Compiler), then we compile. the O2 option enables a moderate level of optimization when compiling the code. It instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

          $ gcc -help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc40000 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc40000  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc40000  130 Sep 16 11:39 hello.pbs*\n

          A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

          Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

          $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

          It seems to work, now run it on the HPC

          qsub hello.pbs\n

          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          List the directory and explore the contents of the \"mpihello.c\" program:

          $ ls -l\ntotal 512\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

          mpihello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char **argv)\n{\nint node, i;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\nreturn 0;\n}\n

          The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

          Then, check the command line options for \"mpicc\" (GNU C-Compiler with MPI extensions), then we compile and list the contents of the directory again:

          mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

          A new file \"hello\" has been created. Note that this program has \"execute\" rights.

          Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the HPC.

          qsub mpihello.pbs\n
          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

          We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          We will compile this C/MPI -file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

          module purge\nmodule load intel\n

          Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

          mpiicc -o mpihello mpihello.c\nls -l\n

          Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the HPC.

          qsub mpihello.pbs\n

          Note: The AUGent only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

          Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Hereafter the overview for C, C++ and Fortran compilers.

          C: gcc (GNU, sequential), icc (Intel, sequential), mpicc (GNU, MPI), mpiicc (Intel, MPI). C++: g++ (GNU, sequential), icpc (Intel, sequential), mpicxx (GNU, MPI), mpiicpc (Intel, MPI). Fortran: gfortran (GNU, sequential), ifort (Intel, sequential), mpif90 (GNU, MPI), mpiifort (Intel, MPI)."}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

          Before you can really start using the HPC clusters, there are several things you need to do or know:

          1. You need to log on to the cluster using an SSH client to one of the login nodes or by using the HPC web portal. This will give you command-line access. A standard web browser like Firefox or Chrome for the web portal will suffice.

          2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

          3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

          4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

          "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

          Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

          VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

          All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

          • Use a VPN connection to connect to the UGent network (recommended). See https://helpdesk.ugent.be/vpn/en/ for more information.

          • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your UGent account.

            • While this web connection is active new SSH sessions can be started.

            • Active SSH sessions will remain active even when this web page is closed.

          • Contact your HPC support team (via hpc@ugent.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

          Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

          ssh_exchange_identification: read: Connection reset by peer\n
          "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

          The remaining content in this chapter is primarily focused on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

          If you have any issues connecting to the HPC after you've followed these steps, see Issues connecting to login node to troubleshoot.

          "}, {"location": "connecting/#connect", "title": "Connect", "text": "

          Open up a terminal and enter the following command to connect to the HPC.

          ssh vsc40000@login.hpc.ugent.be\n

          Here, user vsc40000 wants to make a connection to the \"hpcugent\" cluster at UGent via the login node \"login.hpc.ugent.be\", so replace vsc40000 with your own VSC id in the above command.

          The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

          A possible error message you can get if you previously saved your private key somewhere else than the default location ($HOME/.ssh/id_rsa):

          Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          In this case, use the -i option for the ssh command to specify the location of your private key. For example:

          ssh -i /home/example/my_keys vsc40000@login.hpc.ugent.be\n

          Congratulations, you're on the HPC infrastructure now! To find out where you have landed you can print the current working directory:

          $ pwd\n/user/home/gent/vsc400/vsc40000\n

          Your new private home directory is \"/user/home/gent/vsc400/vsc40000\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the HPC.

          $ cd /apps/gent/tutorials\n$ ls\nIntro-HPC/\n

          This directory currently contains all training material for the Introduction to the HPC. More relevant training material to work with the HPC can always be added later in this directory.

          You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands:

          As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

          $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

          This directory contains:

          1. This HPC Tutorial (in either a Mac, Linux or Windows version).

          2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

          cd examples\n

          Tip

          Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

          Tip

          For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

          The first action is to copy the contents of the HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          Go to your home directory, check your own private examples directory, ... and start working.

          cd\nls -l\n

          Upon connecting you will see a login message containing your last login time stamp and a basic overview of the current cluster utilisation.

          Last login: Thu Mar 18 13:15:09 2021 from gligarha02.gastly.os\n\n STEVIN HPC-UGent infrastructure status on Mon, 19 Feb 2024 10:00:01\n      cluster         - full - free -  part - total - running - queued\n                        nodes  nodes   free   nodes   jobs      jobs\n -------------------------------------------------------------------------\n           skitty          39      0     26      68      1839     5588\n           joltik           6      0      1      10        29       18\n            doduo          22      0     75     128      1397    11933\n         accelgor           4      3      2       9        18        1\n          donphan           0      0     16      16        16       13\n          gallade           2      0      5      16        19      136\n\n\nFor a full view of the current loads and queues see:\nhttps://hpc.ugent.be/clusterstate/\nUpdates on current system status and planned maintenance can be found on https://www.ugent.be/hpc/en/infrastructure/status\n

          You can exit the connection at anytime by entering:

          $ exit\nlogout\nConnection to login.hpc.ugent.be closed.\n

          tip: Setting your Language right

          You may encounter a warning message similar to the following one during connecting:

          perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
          or any other error message complaining about the locale.

          This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

          LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

          A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.

          Note

          If you try to set a non-supported locale, then it will be automatically set to the default. Currently the default is en_US.UTF-8 or en_US, depending on whether your original (non-supported) locale was UTF-8 or not.

          Open the .bashrc on your local machine with your favourite editor and add the following lines:

          $ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

          tip: vi

          To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can exit vi by entering \"ESC :wq\". To exit vi without saving your changes, enter \"ESC :q!\".

          or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

          echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

          You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

          "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

          Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is by using an scp or sftp via the secure OpenSSH protocol. Linux ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          "}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

          Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

          It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.

          Open an additional terminal window and check that you're working on your local machine.

          $ hostname\n<local-machine-name>\n

          If you're still using the terminal that is connected to the HPC, close the connection by typing \"exit\" in the terminal window.

          For example, we will copy the (local) file \"localfile.txt\" to your home directory on the HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc40000\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc40000@login.hpc.ugent.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

          $ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc40000@login.hpc.ugent.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

          Connect to the HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

          $ pwd\n/user/home/gent/vsc400/vsc40000\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

          The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-Linux-Gent.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

          First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

          $ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc40000 Sep 11 09:53 intro-HPC-Linux-Gent.pdf\n

          Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

          $ scp vsc40000@login.hpc.ugent.be:./docs/intro-HPC-Linux-Gent.pdf .\nintro-HPC-Linux-Gent.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

          The file has been copied from the HPC to your local computer.

          It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

          scp -r dataset vsc40000@login.hpc.ugent.be:scratch\n

          If you don't use the -r option to copy a directory, you will run into the following error:

          $ scp dataset vsc40000@login.hpc.ugent.be:scratch\ndataset: not a regular file\n
          "}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

          The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

          sftp is the equivalent of the ftp command, with the difference that it uses the secure SSH protocol to connect to the clusters.

          One easy way of starting an sftp session is:

          sftp vsc40000@login.hpc.ugent.be\n

          Typical and popular commands inside an sftp session are:

          cd ~/examples/fibo: Move to the examples/fibo subdirectory on the remote machine (i.e., the HPC). ls: Get a list of the files in the current directory on the HPC. get fibo.py: Copy the file \"fibo.py\" from the HPC. get tutorial/HPC.pdf: Copy the file \"HPC.pdf\" from the HPC, which is in the \"tutorial\" subdirectory. lcd test: Move to the \"test\" subdirectory on your local machine. lcd ..: Move up one level in the local directory. lls: Get a local directory listing. put test.py: Copy the local file test.py to the HPC. put test1.py test2.py: Copy the local file test1.py to the HPC and rename it to test2.py. bye: Quit the sftp session. mget *.cc: Copy all remote files with extension \".cc\" to the local directory. mput *.h: Copy all local files with extension \".h\" to the HPC."}, {"location": "connecting/#using-a-gui", "title": "Using a GUI", "text": "

          If you prefer a GUI to transfer files back and forth to the HPC, you can use your file browser. Open your file browser and press Ctrl+l

          This should open up an address bar where you can enter a URL. Alternatively, look for the \"connect to server\" option in your file browser's menu.

          Enter: sftp://vsc40000@login.hpc.ugent.be/ and press enter.

          You should now be able to browse files on the HPC in your file browser.

          "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

          See the section on rsync in chapter 5 of the Linux intro manual.

          "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

          It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

          For instance, if you want to switch to the login node named gligar07.gastly.os, you can use the following command while you are connected to the gligar08.gastly.os login node on the HPC:

          ssh gligar07.gastly.os\n
          This is also possible the other way around.

          If you want to find out which login host you are connected to, you can use the hostname command.

          $ hostname\ngligar07.gastly.os\n$ ssh gligar08.gastly.os\n\n$ hostname\ngligar08.gastly.os\n

          Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or in other online sources):

          • screen
          • tmux
          "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

          It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high availability setup, users should add their cron scripts on the same login node to avoid any cron job script duplication.

          In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC user account (see section Connecting).

          Check if any cron script is already set on the current login node with:

          crontab -l\n

          At this point you can add or edit (with the vi editor) any cron script by running the command:

          crontab -e\n
          "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
           15 5 * * * ~/runscript.sh >& ~/job.out\n

          where runscript.sh has these lines in this example:

          runscript.sh
          #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

          In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.

          Please note that you should log in to the same login node to edit your previously created crontab tasks. If that is not the case, you can always jump from one login node to another with:

          ssh gligar07    # or gligar08\n
          "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

          You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

          EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

          "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

          For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

          • applying custom patches to the software that only you or your group are using

          • evaluating new software versions prior to requesting a central software installation

          • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

          "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

          Before you use EasyBuild, you need to configure it:

          "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

          This is where EasyBuild can find software sources:

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
          • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

          • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

          "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

          This is the directory where EasyBuild will build software. To get good performance, this needs to be on a fast filesystem.

          export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

          On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
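
          For example, to build in memory on a compute node:

          export EASYBUILD_BUILDPATH=/dev/shm/$USER\n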

          "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

          This is where EasyBuild will install the software (and accompanying modules) to.

          For example, to let it use $VSC_DATA/easybuild, use:

          export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

          Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

          Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

          To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.
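
          For example, a sketch of a VO-shared install location, with $VSC_DATA_VO substituted as described above:

          export EASYBUILD_INSTALLPATH=$VSC_DATA_VO/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n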

          "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

          Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

          module load EasyBuild\n
          "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

          EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

          $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

          For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

          eb example-1.2.1-foss-2024a.eb --robot\n
          "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

          To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

          To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

          eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

          To try to install example v1.2.5 with a different compiler toolchain:

          eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
          "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

          To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

          "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

          To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

          module use $EASYBUILD_INSTALLPATH/modules/all\n

          It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or you want to load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux
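
          Putting this together, a minimal sketch of what such lines in your .bashrc could look like, reusing the example paths from the configuration sections above (adapt them to your own setup):

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n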

          "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

          As HPC system administrators, we often observe that the HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

          Users often tend to run their jobs without specifying specific PBS job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can increase the run time of your application, and also block HPC resources for other users.

          Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

          There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

          Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

          Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

          This chapter shows you how to measure:

          1. Walltime
          2. Memory usage
          3. CPU usage
          4. Disk (storage) needs
          5. Network bottlenecks

          First, we allocate a compute node and move to our relevant directory:

          qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

          One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

          The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

          Test the time command:

          $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

          It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

          It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

          The walltime can be specified in a job script as:

          #PBS -l walltime=3:00:00:00\n

          or on the command line

          qsub -l walltime=3:00:00:00\n

          It is recommended to always specify the walltime for a job.

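          As a worked example (with purely illustrative numbers): if time reports a real time of roughly 150 minutes for a representative run, adding a margin of 20% gives about 180 minutes, so you could request:

          #PBS -l walltime=3:00:00\n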
          "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

          In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the compute node on which that application should run. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

          "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

          The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the \"-m\" option to see the results expressed in megabytes and the \"-t\" option to get totals.

          $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

          It is important to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

          It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

          On the UGent clusters, there is no swap space available for jobs: you can only use physical memory, even though \"free\" will show swap.

          "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

          To monitor the memory consumption of a running application, you can use the \"top\" or the \"htop\" command.

          top

          provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

          htop

          is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.

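          For example, to limit the view to your own processes and sort by memory usage, you could run the following (a minimal sketch using standard top/htop options):

          $ top -u $USER -o %MEM\n$ htop -u $USER\n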
          "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

          Once you have a good idea of the overall memory consumption of your application, you can specify it in your job script. It is wise to allow a margin of about 10%.

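          One way to measure the peak memory usage of a complete run is the verbose mode of the GNU time command (note that this is /usr/bin/time, not the shell built-in, and ./my_application is a placeholder for your own executable); its output contains a \"Maximum resident set size (kbytes)\" line:

          $ /usr/bin/time -v ./my_application\n
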
          The maximum amount of physical memory used by the job per node can be specified in a job script as:

          #PBS -l mem=4gb\n

          or on the command line

          qsub -l mem=4gb\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

          Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required number of cores and nodes has been specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

          "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

          The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

          The /proc/cpuinfo file stores information about your CPU architecture, such as the number of CPUs, threads and cores, the CPU caches, the CPU family and model, and much more. So, if you want to detect how many cores are available on a specific machine:

          $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

          Or if you want to see it in a more readable format, execute:

          $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n

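          If you only need the number of cores, you can count the processor entries directly or use the nproc command (a quick sketch; the output shown corresponds to the 8-core example above):

          $ grep -c processor /proc/cpuinfo\n8\n$ nproc\n8\n
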
          Note

          Unless you want information about the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

          In order to specify the number of nodes and the number of processors per node in your job script, use:

          #PBS -l nodes=N:ppn=M\n

          or with equivalent parameters on the command line

          qsub -l nodes=N:ppn=M\n

          This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

          You can also use this statement in your job script:

          #PBS -l nodes=N:ppn=all\n

          to request all cores of a node, or

          #PBS -l nodes=N:ppn=half\n

          to request half of them.

          Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

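          Putting the parameters from this chapter together, a resource request in a job script could look like this (the values are purely illustrative and should be replaced by your own measurements):

          #PBS -l walltime=3:00:00\n#PBS -l mem=4gb\n#PBS -l nodes=1:ppn=8\n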
          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

          This could also be monitored with the htop command:

          htop\n
          Example output:
            1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

          The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the CPU utilisation per processor with monitor and htop.

          If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by htop found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

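          If you prefer a one-shot, scriptable check over an interactive tool, ps can report the current CPU and memory percentages of a specific process (a minimal sketch; replace 22350 with the PID of your own process, as listed by htop):

          $ ps -o pid,pcpu,pmem,etime,comm -p 22350\n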
          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node sit idle without reason.

          But how can you maximise this?

          1. Configure your software (e.g., to use exactly the number of processors available in a node).
          2. Develop your parallel program in a smart way.
          3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
          4. Correct your request for CPUs in your job script.
          "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

          On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

          The system load is the number of applications running or waiting to run on the compute node. In a system with, for example, four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled onto a CPU.

          The load averages differ from CPU percentage in two significant ways:

          1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
          2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
          "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

          What is the \"optimal load\" rule of thumb?

          The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load should be between 0.7 and 1.0 per processor.

          In general, the intuitive idea of load averages is that the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

          Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time might be more than one per processor.

          The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

          1. When you are running computational intensive applications, one application per processor will generate the optimal load.
          2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

          The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking which configuration gives the highest throughput. There is, however, currently no way on the HPC to dynamically specify the maximum number of applications that run per core. The HPC scheduler will not launch more than one process per core.

          How the cores are spread out over CPUs does not matter as far as the load is concerned. Two quad-core processors perform similarly to four dual-core processors, which in turn perform similarly to eight single-core processors. It's all eight cores for these purposes.

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

          The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

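          The same three load averages can also be read directly from the /proc/loadavg file (its first three fields are the one-, five- and fifteen-minute averages):

          $ cat /proc/loadavg\n
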
          The uptime command shows us the average load:

          $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

          Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

          $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
          You can also read the load average in the htop output.

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that no CPUs in your node sit idle without reason.

          But how can you maximise this?

          1. Profile your software to improve its performance.
          2. Configure your software (e.g., to use exactly the number of processors available in a node).
          3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
          4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
          5. Correct your request for CPUs in your job script.

          And then check again.

          "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

          Some programs generate intermediate or output files, the size of which may also be a useful metric.

          Remember that your available disk space on the HPC online storage is limited, and that you have environment variables available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

          It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to section How much disk space do I get? on Quotas to check your quota and tools to find which files consumed the \"quota\".

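          To keep an eye on the size of the files a running job generates, the du and ls commands are useful; a minimal sketch (the directory name is just an example):

          $ du -sh $VSC_SCRATCH/my_job_output\n$ ls -lh $VSC_SCRATCH/my_job_output\n
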
          Several actions can be taken to avoid storage problems:

          1. Be aware of all the files that are generated by your program. Also check out the hidden files.
          2. Check your quota consumption regularly.
          3. Clean up your files regularly.
          4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, move your files to the $VSC_DATA directories in one go.
          5. Make sure your programs clean up their temporary files after execution.
          6. Move your output results to your own computer regularly.
          7. Anyone can request more disk space from the HPC staff, but you will have to duly justify your request.
          "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

          Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that your processes lose a lot of time on inter-process communication.

          Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high-bandwidth, low-latency network that enables large parallel jobs to run as efficiently as possible.

          The parameter to add in your job script would be:

          #PBS -l ib\n

          If, for some other reason, a user is fine with the gigabit Ethernet network, they can specify:

          #PBS -l gbe\n
          "}, {"location": "getting_started/", "title": "Getting Started", "text": "

          Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the HPC-UGent infrastructure and submitting your very first job. We'll also walk you through the process step by step using a practical example.

          In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

          Before proceeding, read the introduction to HPC to gain an understanding of the HPC-UGent infrastructure and related terminology.

          "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

          To get access to the HPC-UGent infrastructure, visit Getting an HPC Account.

          If you have not used Linux before, now would be a good time to follow our Linux Tutorial.

          "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
          1. Connect to the login nodes
          2. Transfer your files to the HPC-UGent infrastructure
          3. Optional: compile your code and test it
          4. Create a job script and submit your job
          5. Wait for job to be executed
          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

          "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

          There are two options to connect:

          • Using a terminal to connect via SSH (for power users) (see First Time connection to the HPC-UGent infrastructure)
          • Using the web portal

          Considering your operating system is Linux, it is recommended to make use of the ssh command in a terminal to get the most flexibility.

          Assuming you have already generated SSH keys in the previous step (Getting Access), and that they are in a default location, you should now be able to login by running the following command:

          ssh vsc40000@login.hpc.ugent.be\n

          Use your own VSC account id

          Replace vsc40000 with your VSC account id (see https://account.vscentrum.be)

          Tip

          You can also still use the web portal (see shell access on web portal)

          Info

          When having problems see the connection issues section on the troubleshooting page.

          "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

          Now that you can login, it is time to transfer files from your local computer to your home directory on the HPC-UGent infrastructure.

          Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

          On your local machine you can run:

          curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

          Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

          scp tensorflow_mnist.py run.sh vsc40000@login.hpc.ugent.be:~\n

          ssh vsc40000@login.hpc.ugent.be\n

          Use your own VSC account id

          Replace vsc40000 with your VSC account id (see https://account.vscentrum.be)

          Info

          For more information about transferring files or scp, see transfer files from/to HPC.

          When running ls in your session on the HPC-UGent infrastructure, you should see the two files listed in your home directory (~):

          $ ls ~\nrun.sh tensorflow_mnist.py\n

          If you do not see these files, make sure you uploaded them to your home directory.

          "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

          Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

          A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

          Our job script looks like this:

          run.sh

          #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
          As you can see, this job script will run the Python script named tensorflow_mnist.py.

          The jobs you submit are by default executed on cluster/doduo; you can swap to another cluster by issuing the following command.

          module swap cluster/donphan\n

          Tip

          When submitting jobs with a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

          To get a list of all clusters and their hardware, see https://www.ugent.be/hpc/en/infrastructure.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

          $ qsub run.sh\n123456\n

          This command returns a job identifier (123456) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.

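          If you want to cancel a submitted job before or while it runs, you can delete it with the qdel command, using the job identifier returned by qsub (123456 is the example identifier from above):

          qdel 123456\n
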
          Make sure you understand what the module command does

          Note that the module commands only modify environment variables. For instance, running module swap cluster/donphan will update your shell environment so that qsub submits a job to the donphan cluster, but your active shell session is still running on the login node.

          It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are being executed: they will still be run on the login node you are on.

          When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like donphan).

          For detailed information about module commands, read the running batch jobs chapter.

          "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

          Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

          You can get an overview of the active jobs using the qstat command:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:00  Q donphan\n

          Eventually, after entering qstat again you should see that your job has started running:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:01  R donphan\n

          If you don't see your job in the output of the qstat command anymore, your job has likely completed.

          Read this section on how to interpret the output.

          "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

          When your job finishes, it generates two output files:

          • One for normal output messages (stdout output channel).
          • One for warning and error messages (stderr output channel).

          By default, these are located in the directory where you issued qsub.

          Info

          For more information about the stdout and stderr output channels, see this section.

          In our example when running ls in the current directory you should see 2 new files:

          • run.sh.o123456, containing normal output messages produced by job 123456;
          • run.sh.e123456, containing errors and warnings produced by job 123456.

          Info

          run.sh.e123456 should be empty (no errors or warnings).

          Use your own job ID

          Replace 123456 with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

          When examining the contents of run.sh.o123456 you will see something like this:

          Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

          Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

          Warning

          When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

          For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

          "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
          • Running interactive jobs
          • Running jobs with input/output data
          • Multi core jobs/Parallel Computing
          • Interactive and debug cluster

          For more examples see Program examples and Job script examples

          "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

          module swap cluster/joltik\n

          To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

          module swap cluster/accelgor\n

          Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

          "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

          To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

          Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@ugent.be.

          "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

          See https://www.ugent.be/hpc/en/infrastructure.

          "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

          There are 2 main ways to ask for GPUs as part of a job:

          • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z form is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want to run with full control or in multinode cases like MPI jobs. If you just use -l gpus without specifying the number of GPUs, you get 1 GPU by default. (A minimal example of the node-property notation is sketched after this list.)

          • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.

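          As a minimal sketch of the node-property notation described above (the resource values are illustrative only):

          #PBS -l walltime=1:00:00\n#PBS -l nodes=1:ppn=8:gpus=1\n
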
          Some background:

          • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

          • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

          "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

          Some important attention points:

          • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

          • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

          • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e., it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

          • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

          "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

          Use module avail to check for centrally installed software.

          The subsections below only cover a couple of installed software packages, more are available.

          "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

          Please consult module avail GROMACS for a list of installed versions.

          "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

          Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

          Please consult module avail Horovod for a list of installed versions.

          Horovod supports TensorFlow, Keras, PyTorch and MXNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; it is unclear whether it handles placement and other aspects correctly.)

          At least for simple TensorFlow benchmarks, Horovod appears to be a bit faster than TensorFlow's usual multi-GPU autodetection without Horovod, but this comes at the cost of the code modifications required to use Horovod.

          "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

          Please consult module avail PyTorch for a list of installed versions.

          "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

          Please consult module avail TensorFlow for a list of installed versions.

          Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

          "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
          #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
          "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

          Please consult module avail AlphaFold for a list of installed versions.

          For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

          "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

          In case of questions or problems, please contact the HPC-UGent team via hpc@ugent.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

          "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

          The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

          This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

          Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor) jobs on this cluster should normally start more or less immediately. The tradeoff is that performance must not be an issue for the submitted jobs. This means that typical workloads for this cluster should be limited to:

          • Interactive jobs (see chapter\u00a0Running interactive jobs)

          • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

          • Jobs requiring few resources

          • Debugging programs

          • Testing and debugging job scripts

          "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

          module swap cluster/donphan\n

          Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

          "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

          Some limits are in place for this cluster:

          • each user may have at most 5 jobs in the queue (both running and waiting to run);

          • at most 3 jobs per user can be running at the same time;

          • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

          In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

          Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

          "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

          Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

          All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

          "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

          \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

          While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

          A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

          The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

          Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

          Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

          "}, {"location": "introduction/#what-is-the-hpc-ugent-infrastructure", "title": "What is the HPC-UGent infrastructure?", "text": "

          The HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

          The HPC-UGent infrastructure relies on parallel-processing technology to offer UGent researchers an extremely fast solution for all their data processing needs.

          The HPC currently consists of:

          a set of different compute clusters. For an up to date list of all clusters and their hardware, see https://vscdocumentation.readthedocs.io/en/latest/gent/tier2_hardware.html.

          Job management and job scheduling are performed by Slurm with a Torque frontend. We advise users to adhere to Torque commands mentioned in this document.

          "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

          The HPC infrastructure is not a magic computer that automatically:

          1. runs your PC-applications much faster for bigger problems;

          2. develops your applications;

          3. solves your bugs;

          4. does your thinking;

          5. ...

          6. allows you to play games even faster.

          The HPC does not replace your desktop computer.

          "}, {"location": "introduction/#is-the-hpc-a-solution-for-my-computational-needs", "title": "Is the HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

          Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

          It is also possible to run programs at the HPC, which require user interaction. (pushing buttons, entering input data, etc.). Although technically possible, the use of the HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the HPC staff can unveil whether the HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

          "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

          In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

          Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

          "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

          Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

          Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

          The two parallel programming paradigms most used in HPC are:

          • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

          • MPI for distributed memory systems (multiprocessing): on multiple nodes

          Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

          "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

          Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

          It is perfectly possible to also run purely sequential programs on the HPC.

          Running your sequential programs on the most modern and fastest computers in the HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.

          "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

          You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

          For the most common programming languages, a compiler is available on RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). Supported and common programming languages on the HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

          Supported and commonly used compilers are GCC and Intel.

          Additional software can be installed \"on demand\". Please contact the HPC staff to see whether the HPC can handle your specific requirements.

          "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

          All nodes in the HPC cluster run under RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty), which is a specific version of Red Hat Enterprise Linux. This means that all programs (executables) should be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

          Users can connect from any computer in the UGent network to the HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the HPC.

          A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

          "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

          A typical workflow looks like:

          1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

          2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

          3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

          4. Create a job script and submit your job (see Running batch jobs)

          5. Get some coffee and be patient:

            1. Your job gets into the queue

            2. Your job gets executed

            3. Your job finishes

          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

          When you think that the HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the HPC cluster.

          Do not hesitate to contact the HPC staff for any help.

          1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

          "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

          This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

          • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

          • -m/-M: the -m option will send emails to the email address registered with your VSC account. Only if you want emails at some other address should you use the -M option.

          • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

          • To use a situational parameter, remove one '#' at the beginning of the line.

          simple_jobscript.sh
          #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
          "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

          Here's an example of a single-core job script:

          single_core.sh
          #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
          1. Using #PBS header lines, we specify the resource requirements for the job; see Appendix B for a list of these options.

          2. A module for Python 3.6 is loaded, see also section Modules.

          3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

          4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

          5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a unique file in $VSC_DATA. For a list of possible storage locations, see subsection Pre-defined user directories.

          "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

          Here's an example of a multi-core job script that uses mympirun:

          multi_core.sh
          #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

          An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

          "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

          If you want to run a job but are not sure it will finish before the job runs out of walltime, and you want to copy data back before that happens, you have to stop the main command before the walltime runs out and then copy the data back.

          This can be done with the timeout command. This command sets a limit on the time a program can run for, and when this limit is exceeded, it kills the program. Here's an example job script using timeout:

          timeout.sh
          #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n# be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

          The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

          example_program.sh
          #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
          "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

          A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plaintext. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs make it a useful tool for data analysis, machine learning and educational purposes.

          "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

          Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

          After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

          When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

          and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

          This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

          "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

          A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

          To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters>>_login Shell Access.

          We can see all available versions of the SciPy-bundle module by using module avail SciPy-bundle:

          $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

          Not all modules will work for every notebook: we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.
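
          For instance, to see which toolchain variants of that notebook version are installed, you could (as a quick check) list them explicitly; the version 7.2.0 below is just the one from the example above and may differ on your cluster:

          $ module avail JupyterNotebook/7.2.0\n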

          Module names include the toolchain that was used to install the module (for example, gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that the module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, running module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

          $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

          The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.
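
          To get a quick look at those included Python packages, one option is to filter the module information for that line. This is only a sketch for the SciPy-bundle module used in this example; depending on the module, the list may only appear in the output of module help rather than module show:

          $ module show SciPy-bundle/2023.11-gfbf-2023b | grep -A1 \"Included extensions\"\n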

          It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
          This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

          If we use a different SciPy-bundle module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (for more info on these errors, see here).

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

          Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.
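
          For the example above, the Custom code field would then simply contain the following line (using the module version that matched our notebook; adjust it to whatever you found on your cluster):

          module load SciPy-bundle/2023.11-gfbf-2023b\n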

          "}, {"location": "known_issues/", "title": "Known issues", "text": "

          This page provides details on a couple of known problems, and the workarounds that are available for them.

          If you have any questions related to these issues, please contact the HPC-UGent team.

          • Operation not permitted error for MPI applications
          "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

          When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

          Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

          This error means that an internal problem has occurred in OpenMPI.

          "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

          This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

          It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

          "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

          We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

          "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

          A workaround has been implemented in mympirun (version 5.4.0).

          Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

          module load vsc-mympirun\n

          and launch your MPI application using the mympirun command.
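
          Put together, a minimal job script could look like the sketch below; the resource requests and the program name ./your_mpi_program are placeholders, not part of the official example:

          #!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=1:00:00\n\ncd $PBS_O_WORKDIR\n\n# always use the latest (version-less) vsc-mympirun module\nmodule load vsc-mympirun\n# load the module that provides your MPI application here\n\nmympirun ./your_mpi_program\n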

          For more information, see the mympirun documentation.

          "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

          If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

          export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
          "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

          We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

          "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

          There are two important motivations to engage in parallel programming.

          1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

          2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

          On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that, in principle, you can split up your computations into groups and run each group on its own core.

          There are multiple different ways to achieve parallel programming. The table below gives a (non-exhaustive) overview of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

          Tool | Available language bindings | Limitations
          --- | --- | ---
          Raw threads (pthreads, boost::threading, ...) | Threading libraries are available for all common programming languages | Threads are limited to shared memory systems. They are more often used on single node systems rather than for HPC. Thread management is hard.
          OpenMP | Fortran/C/C++ | Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelized by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelize the work load on each node and MPI (see below) for communication between nodes.
          Lightweight threads with clever scheduling, Intel TBB, Intel Cilk Plus | C/C++ | Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler enabling the programmer to focus on parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the work load on each node and MPI (see below) for communication between nodes.
          MPI | Fortran/C/C++, Python | Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication routines.
          Global Arrays library | C/C++, Python | Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and one sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

          Tip

          You can request more nodes/cores by adding following line to your run script.

          #PBS -l nodes=2:ppn=10\n
          This queues a job that claims 2 nodes with 10 cores per node (so 20 cores in total).
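
          Alternatively, the same resources can be requested on the qsub command line instead of inside the script; job_script.pbs is just a placeholder name here:

          qsub -l nodes=2:ppn=10 job_script.pbs\n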

          Warning

          Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

          Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

          The advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

          Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

          Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

          Go to the example directory:

          cd ~/examples/Multi-core-jobs-Parallel-Computing\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          Study the example first:

          T_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 1;\n}\n

          And compile it (whilst including the thread library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Now, run it on the cluster and check the output:

          $ qsub T_hello.pbs\n123456\n$ more T_hello.pbs.o123456\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Tip

          If you plan on engaging in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

          OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

          An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

          Here is the general code structure of an OpenMP program:

          #include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

          "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

          By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

          "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

          Parallelising for loops is really simple (see code below). By default, loop iteration counters in OpenMP loop constructs (in this case the i variable) in the for loop are set to private variables.

          omp1.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

          Now run it in the cluster and check the result again.

          $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
          "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

          Using OpenMP you can specify something called a \"critical\" section of code. This is code that is performed by all threads, but only by one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, without having to worry about other threads writing to that global variable at the same time (a collision).

          omp2.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

          Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). We already used this paradigm in the code example above, where the \"critical code\" directive was used to accomplish it. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to implement it more easily.

          omp3.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

          There are a host of other directives you can issue using OpenMP.

          Some other clauses of interest are:

          1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

          2. nowait: threads will not wait until everybody is finished

          3. schedule(type, chunk) allows you to specify how tasks are handed out to threads in a for loop. There are three types of scheduling you can specify.

          4. if: allows you to parallelise only if a certain condition is met

          5. ...\u00a0and a host of others

          Tip

          If you plan on engaging in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation. 2005.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

          The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

          In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

          The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

          The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

          One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

          Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

          Study the MPI-programme and the PBS-file:

          mpi_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
          mpi_hello.pbs
          #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

          and compile it:

          $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

          mpiicc is a wrapper of the Intel C++ compiler icc to compile MPI programs (see the chapter on compilation for details).
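
          If you prefer the foss toolchain instead, a similar compilation would look roughly like the sketch below; mpicc is the Open MPI compiler wrapper for C code:

          $ module load foss\n$ mpicc -o mpi_hello mpi_hello.c\n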

          Run the parallel program:

          $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc40000 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc40000 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc40000    0 Sep 16 14:22 mpi_hello.o123456\n-rw------- 1 vsc40000  697 Sep 16 14:22 mpi_hello.o123456\n-rw-r--r-- 1 vsc40000  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o123456\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

          The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process has its own rank and knows the total number of processes in the world, and it can communicate with the others either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

          MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without recompilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.

          Tip

          mpirun does not always do the optimal core pinning and requires a few extra arguments to be the most efficient possible on a given system. At Ghent we have a wrapper around mpirun called mympirun. See the chapter on Mympirun for more information.

          You will generally just start an MPI program on the cluster by using mympirun instead of mpirun -n <nr of cores> <--other settings> <--other optimisations>.
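
          For example, instead of working out the number of processes and tuning flags for mpirun yourself, running the mpi_hello example typically reduces to:

          $ module load vsc-mympirun\n$ mympirun ./mpi_hello\n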

          Tip

          If you plan on engaging in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

          "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

          A frequently occurring characteristic of scientific computation is its focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A Parameter Sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

          Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or (ii) different input files.

          These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The users want to run their job once for each instance of the parameter values.

          One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs: such huge numbers of small jobs create a lot of overhead and can slow down the whole cluster. It would be better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

          The \"Worker framework\" has been developed to address this issue.

          It can handle many small jobs determined by:

          parameter variations

          i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

          job arrays

          i.e., each individual job gets a unique numeric identifier.

          Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

          However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

          "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/par_sweep\n

          Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

          $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

          For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

          par_sweep/weather
          #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

          A job script that would run this as a job for the first parameters (p01) would then look like:

          par_sweep/weather_p01.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

          When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

          To submit the job, the user would use:

           $ qsub weather_p01.pbs\n
          However, the user wants to run this program for many parameter instances, e.g., on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, an RDBMS, or just by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

          $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

          It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.

          In order to make our PBS file generic, it can be modified as follows:

          par_sweep/weather.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

          Note that:

          1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

          2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

          3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

          The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., 4 hours to be on the safe side.

          The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

          $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n123456\n

          Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

          Warning

          When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

          module swap env/slurm/donphan\n

          instead of

          module swap cluster/donphan\n
          We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.
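
          As an illustration, the full sequence for the donphan debug cluster would look roughly like the sketch below; the worker module version is the one used earlier in this chapter and may differ per cluster:

          module swap env/slurm/donphan\nmodule load worker/1.6.12-foss-2021b\nwsub -batch weather.pbs -data data.csv\nmodule swap cluster/donphan\n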

          "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/job_array\n

          As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

          The following bash script would submit these jobs all one by one:

          #!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output_$i -i input_$i myprog.pbs\ndone\n

          This, as said before, would put an unnecessary burden on the job scheduler.

          Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

          Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

          The details are

          1. a job is submitted for each number in the range;

          2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

          3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

          The job could have been submitted using:

          qsub -t 1-100 my_prog.pbs\n

          The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

          To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

          A typical job script for use with job arrays would look like this:

          job_array/job_array.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

          Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

          $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file \\#99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

          For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory in the files output_1.dat, output_2.dat, ..., output_100.dat.

          job_array/test_set
          #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

          Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

          job_array/test_set.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          Note that

          1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

          2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

          The job is now submitted as follows:

          $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n123456\n

          The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

          Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

          $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n123456  test_set.pbs  vsc40000          0 Q\n

          And you can now check the generated output files:

          $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
          "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

          Often, an embarrassingly parallel computation can be abstracted to three simple steps:

          1. a preparation phase in which the data is split up into smaller, more manageable chunks;

          2. on these chunks, the same algorithm is applied independently (these are the work items); and

          3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

          The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

          cd ~/examples/Multi-job-submission/map_reduce\n

          The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

          First study the scripts:

          map_reduce/pre.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
          map_reduce/post.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

          Then one can submit a MapReduce style job as follows:

          $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n123456\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

          Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

          "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

          The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

          The \"Worker Framework\" will be effective when

          1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

          2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

          "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

          Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log123456, assuming the job's ID is 123456. To keep an eye on the progress, one can use:

          tail -f run.pbs.log123456\n

          Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

          watch -n 60 wsummarize run.pbs.log123456\n

          This will summarise the log file every 60 seconds.

          "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

          Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 weather -t $temperature  -p $pressure  -v $volume\n

          Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
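
          For example, a hypothetical extra column (here called maxtime, which is not part of the original data.csv) could hold the limit for each work item and be passed on to timedrun:

          $ more data.csv\ntemperature, pressure, volume, maxtime\n293, 1.0e5, 107, 00:20:00\n294, 1.0e5, 106, 00:30:00\n...\n

          The corresponding line in the job script would then become timedrun -t $maxtime weather -t $temperature -p $pressure -v $volume.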

          Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

          "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

          Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"123456\".

          wresume -jobid 123456\n

          This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

          wresume -l walltime=1:30:00 -jobid 123456\n

          Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate, either successfully or by reporting a failure. It is also possible to retry work items that failed (preferably after the glitch that caused them to fail has been fixed).

          wresume -jobid 123456 -retry\n

          By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

          "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

          This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

          $ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
          "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

          When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

          To check for the available versions of worker, use the following command:

          $ module avail worker\n
          1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

          "}, {"location": "mympirun/", "title": "Mympirun", "text": "

          mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

          In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

          "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

          Before using mympirun, we first need to load its module:

          module load vsc-mympirun\n

          As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

          The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

          For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

          "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

          There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the sourcecode of mpi_hello is available in the vsc-mympirun repository).

          By default, mympirun starts one process per core on every node you assigned. So if you assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

          "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

          This is the most commonly used option for controlling the number of processes.

          The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

          $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
          "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

          There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses double the number of processes it normally would; and --multi, which does the same as --double but takes a multiplier (instead of the implied factor 2 with --double).
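
          For example, with the same two 16-core nodes as above, these options could be used as in the sketch below (see the README for the exact semantics):

          $ mympirun --universe 16 ./mpi_hello   # start exactly 16 processes\n$ mympirun --double ./mpi_hello        # start 64 instead of 32 processes\n$ mympirun --multi 3 ./mpi_hello       # start 3 x 32 = 96 processes\n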

          See vsc-mympirun README for a detailed explanation of these options.

          "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

          You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

          $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
          "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

          In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC HPC infrastructure.

          "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

          There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

          • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

            • see also http://openfoam.com/history/
          • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

            • see also https://openfoam.org/download/history/
          • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

          Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

          "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

          The best practices outlined here focus specifically on the use of OpenFOAM on the VSC HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

          • OpenFOAM websites:

            • https://openfoam.com

            • https://openfoam.org

            • http://wikki.gridcore.se/foam-extend

          • OpenFOAM user guides:

            • https://www.openfoam.com/documentation/user-guide

            • https://cfd.direct/openfoam/user-guide/

          • OpenFOAM C++ source code guide: https://cpp.openfoam.org

          • tutorials: https://wiki.openfoam.com/Tutorials

          • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

          Other useful OpenFOAM documentation:

          • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

          • http://www.dicat.unige.it/guerrero/openfoam.html

          "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

          To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

          "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

          First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

          $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

          To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

          To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

          module load OpenFOAM/11-foss-2023a\n
          "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

          OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

          source $FOAM_BASH\n
          "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

          If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

          source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

          Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
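
          Putting this together, preparing your environment in a shell session or job script could look like the following sketch (the module version is only an example; check module avail OpenFOAM for the versions that are actually available):

          module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n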

          "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

          If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

          unset FOAM_SIGFPE\n

          Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise terminate the simulation. It does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are still occurring.

          As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

          "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

          The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

          • generate the mesh;

          • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

          After running the simulation, some post-processing steps are typically performed:

          • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

          • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

          Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job that runs the actual simulation (on the HPC infrastructure or elsewhere), or as part of the job that runs the OpenFOAM simulation itself.

          Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

          One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

          For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This can be useful to avoid the overhead of downloading the results locally.
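
          As an illustration, the main commands of a parallel OpenFOAM workflow could look like the sketch below (interFoam is just an example solver; setting up the case itself is not shown):

          blockMesh                      # pre-processing: generate the mesh\ndecomposePar                   # pre-processing: decompose the domain into subdomains\nmympirun interFoam -parallel   # run the actual simulation in parallel\nreconstructPar                 # post-processing: reassemble the decomposed domain\n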

          "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

          For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

          "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

          When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

          You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities, like blockMesh, decomposePar and reconstructPar, cannot be run in parallel.

          "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

          It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

          See Basic usage for how to get started with mympirun.

          To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

          export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

          Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
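
          Putting this together, a parallel OpenFOAM run with mympirun could look like this sketch (interFoam is just an example solver):

          export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\nmympirun interFoam -parallel\n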

          "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

          To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

          Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

          number of processor directories = 4 is not equal to the number of processors = 16\n

          In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

          • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

          • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

          See Controlling number of processes to control the number of processes mympirun will start.

          Controlling the number of processes this way can be useful if you require more memory per core than is available by default (fewer processes per node means more memory per process). Note that the decomposition method being used (which is specified in system/decomposeParDict) has a significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
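
          For illustration, the relevant entries of system/decomposeParDict for a 4-subdomain scotch decomposition could look like the sketch below (the values are examples, and the standard FoamFile header is omitted):

          numberOfSubdomains  4;\nmethod              scotch;   // scotch or metis limit communication overhead\n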

          To visualise the processor domains, use the following command:

          mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

          and then load the VTK files generated in the VTK folder into ParaView.

          "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

          OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

          Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict); an illustrative controlDict excerpt is shown after this list.

          • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc. keywords;

          • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

          • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

          • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid OpenFOAM re-reading each of the system/*Dict files at every time step;

          • if the results per individual time step are large, consider setting writeCompression to true;
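
          The controlDict excerpt below illustrates these guidelines; it is a sketch with arbitrary example values that should be tuned for your own simulation:

          writeControl       timeStep;\nwriteInterval      100;     // write results every 100 time steps, not every step\npurgeWrite         2;       // only retain results for the last 2 write times\nrunTimeModifiable  false;   // do not re-read system/*Dict files at every time step\nwriteCompression   true;    // compress written results if individual time steps are large\n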

          For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

          For large parallel OpenFOAM simulations on the UGent Tier-2 clusters, consider using the alternative shared scratch filesystem $VSC_SCRATCH_ARCANINE (see Pre-defined user directories).

          These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple dozen processor cores.

          "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

          See https://cfd.direct/openfoam/user-guide/compiling-applications/.

          "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

          Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

          OpenFOAM_damBreak.sh
          #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not available on victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
          "}, {"location": "program_examples/", "title": "Program examples", "text": "

          If you have not done so already, copy our examples to your home directory by running the following command:

           cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

          Go to our examples:

          cd ~/examples/Program-examples\n

          Here, we have put together a number of examples for your convenience. We made an effort to put comments inside the source files, so the source code files are (should be) self-explanatory.

          1. 01_Python

          2. 02_C_C++

          3. 03_Matlab

          4. 04_MPI_C

          5. 05a_OMP_C

          6. 05b_OMP_FORTRAN

          7. 06_NWChem

          8. 07_Wien2k

          9. 08_Gaussian

          10. 09_Fortran

          11. 10_PQS

          The two OMP directories above (05a_OMP_C and 05b_OMP_FORTRAN) contain the following examples:

          C file / Fortran file: Description

          • omp_hello.c / omp_hello.f: Hello world
          • omp_workshare1.c / omp_workshare1.f: Loop work-sharing
          • omp_workshare2.c / omp_workshare2.f: Sections work-sharing
          • omp_reduction.c / omp_reduction.f: Combined parallel loop reduction
          • omp_orphan.c / omp_orphan.f: Orphaned parallel loop reduction
          • omp_mm.c / omp_mm.f: Matrix multiply
          • omp_getEnvInfo.c / omp_getEnvInfo.f: Get and print environment information
          • omp_bug* / omp_bug*: Programs with bugs and their solution

          Compile by any of the following commands:

          • C: icc -openmp omp_hello.c -o hello, pgcc -mp omp_hello.c -o hello, or gcc -fopenmp omp_hello.c -o hello

          • Fortran: ifort -openmp omp_hello.f -o hello, pgf90 -mp omp_hello.f -o hello, or gfortran -fopenmp omp_hello.f -o hello

          Feel free to explore the examples.

          "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

          Remember to substitute the usernames, login nodes, file names, ... for your own.

          Login

          • Log in: ssh vsc40000@login.hpc.ugent.be
          • Where am I?: hostname
          • Copy to HPC: scp foo.txt vsc40000@login.hpc.ugent.be:
          • Copy from HPC: scp vsc40000@login.hpc.ugent.be:foo.txt
          • Set up sftp session: sftp vsc40000@login.hpc.ugent.be

          Modules

          • List all available modules: module avail
          • List loaded modules: module list
          • Load module: module load example
          • Unload module: module unload example
          • Unload all modules: module purge
          • Help on use of module: module help

          Jobs

          • Submit job with job script script.pbs: qsub script.pbs
          • Status of job with ID 12345: qstat 12345
          • Show compute node of job with ID 12345: qstat -n 12345
          • Delete job with ID 12345: qdel 12345
          • Status of all your jobs: qstat
          • Detailed status of your jobs + a list of nodes they are running on: qstat -na
          • Submit interactive job: qsub -I

          Disk quota

          • Check your disk quota: see https://account.vscentrum.be
          • Disk usage in current directory (.): du -h

          Worker Framework

          • Load worker module: module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/)
          • Submit parameter sweep: wsub -batch weather.pbs -data data.csv
          • Submit job array: wsub -t 1-100 -batch test_set.pbs
          • Submit job array with prolog and epilog: wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

          Starting in September 2024, we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

          "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

          Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

          This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

          It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

          "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

          As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

          For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

          $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

          Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

          When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. Once they are no longer needed, the RHEL 8 login nodes will be shut down.

          "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

          To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

          This includes (per user):

          • max. of 2 CPU cores in use
          • max. 8 GB of memory in use

          For more intensive tasks you can use the interactive and debug clusters through the web portal.

          "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

          The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

          However, there will be impact on the availability of software that is made available via modules.

          Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

          This includes all software installations on top of a compiler toolchain that is older than:

          • GCC(core)/12.3.0
          • foss/2023a
          • intel/2023a
          • gompi/2023a
          • iimpi/2023a
          • gfbf/2023a

          (or another toolchain with a year-based version older than 2023a)

          The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

          foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

          If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

          It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to verify that it still works. We will soon provide more RHEL 9 nodes on other clusters to test on.

          "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

          We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

          Planned migration schedule per cluster (migration start / completed on):

          • skitty: Monday 30 September 2024
          • joltik: October 2024
          • accelgor: November 2024
          • gallade: December 2024
          • donphan: February 2025
          • doduo (default cluster): February 2025
          • login nodes switch: February 2025

          Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to the RHEL 9 login nodes will be done at the same time.

          We will keep this page up to date when more specific dates have been planned.

          Warning

          This planning is subject to change; some clusters may get migrated later than originally planned.

          Please check back regularly.

          "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

          If you have any questions related to the migration to the RHEL 9 operating system, please contact the HPC-UGent team.

          "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

          In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

          When you connect to the HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decide when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly, and doing so is only allowed on the nodes where you have a job running. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the HPC the entire time.

          The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

          "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

          Software installation and maintenance on an HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the HPC which is able to easily activate or deactivate the software packages that you require for your program execution.

          "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

          The program environment on the HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

          All the software packages that are installed on the HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

          "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

          In order to administer the active software and their environment variables, the module system has been developed, which:

          1. Activates or deactivates software packages and their dependencies.

          2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

          3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

          4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

          5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

          This is all managed with the module command, which is explained in the next sections.

          There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

          "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

          A large number of software packages are installed on the HPC clusters. A list of all currently available software can be obtained by typing:

          module available\n

          It's also possible to execute module av or module avail; these are shorter to type and will do the same thing.

          This will give some output such as:

          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          You can also check whether some specific software, a compiler, or an application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

          This gives a full list of software packages that can be loaded.

          The casing of module names is important: lowercase and uppercase letters matter in module names.

          "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

          The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

          Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, a MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

          E.g., foss/2024a is the first version of the foss toolchain in 2024.

          The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

          "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

          To \"activate\" a software package, you load the corresponding module file using the module load command:

          module load example\n

          This will load the most recent version of example.

          For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

          However, you should specify a particular version to avoid surprises when newer versions are installed:

          module load secondexample/2.7-intel-2016b\n

          The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

          Modules need not be loaded one by one; the two module load commands can be combined as follows:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

          "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

          Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

          $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          You can also just use the ml command without arguments to list loaded modules.

          It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

          "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

          To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

          $ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

          To unload the secondexample module, you can also use ml -secondexample.

          Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

          "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

          In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

          module purge\n
          This is always safe: the cluster module (the module that specifies which cluster jobs will get submitted to) will not be unloaded (because it's a so-called \"sticky\" module).

          "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

          Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

          Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

          module load example\n

          rather than

          module load example/1.2.3\n

          Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

          Consider the following example modules:

          $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

          Let's now generate a version conflict with the example module, and see what happens.

          $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

          Note: A module swap command combines the appropriate module unload and module load commands.

          "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

          With the module spider command, you can search for modules:

          $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

          It's also possible to get detailed information about a specific module:

          $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \n\n    You will need to load all module(s) on any one of the lines below before the \"example/1.2.3\" module is available to load.\n\n        cluster/accelgor\n        cluster/doduo \n        cluster/donphan\n        cluster/gallade\n        cluster/joltik \n        cluster/skitty\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
          "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

          To get a list of all possible commands, type:

          module help\n

          Or to get more information about one specific module package:

          $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
          "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

          If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

          In each module command shown below, you can replace module with ml.

          First, load all modules you want to include in the collections:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          Now store it in a collection using module save. In this example, the collection is named my-collection.

          module save my-collection\n

          Later, for example in a jobscript or a new session, you can load all these modules with module restore:

          module restore my-collection\n

          You can get a list of all your saved collections with the module savelist command:

          $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

          To get a list of all modules a collection will load, you can use the module describe command:

          $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          To remove a collection, remove the corresponding file in $HOME/.lmod.d:

          rm $HOME/.lmod.d/my-collection\n
          "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

          To see how a module would change the environment, you can use the module show command:

          $ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

          It's also possible to use the ml show command instead: they are equivalent.

          Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

          You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

          If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

          "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

          To check the general system state, check https://www.ugent.be/hpc/en/infrastructure/status. This has information about scheduled downtime, status of the system, ...

          "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

          You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

          You can also get this information in text form (per cluster separately) with the pbsmon command:

          $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

          pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

          "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

          Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

          As an example, we will run a Perl script, which you will find in the examples subdirectory on the HPC. When you received an account on the HPC, a subdirectory with examples was automatically generated for you.

          Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

          cd\ncp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          First go to the directory with the first examples by entering the command:

          cd ~/examples/Running-batch-jobs\n

          Each time you want to execute a program on the HPC you'll need 2 things:

          The executable: the program to be executed, provided by the end-user, together with its peripheral input files, databases and/or command options.

          A batch job script, which defines the computer resource requirements of the program and the required additional software packages, and which starts the actual executable. The HPC needs to know:

          1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

          Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

          List and check the contents with:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc40000 609 Sep 11 10:25 fibo.pl\n

          In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

          1. The Perl script calculates the first 30 Fibonacci numbers.

          2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

          We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

          On the command line, you would run this using:

          $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

          Remark: Recall that you have now executed the Perl script locally on one of the login nodes of the HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example, since these jobs require very little computing power.

          The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

          fibo.pbs
          #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the job, such as the expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.
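
          For example, instead of putting #PBS directives in the job script, the resource requirements could also be passed on the qsub command line; the values shown are just an illustration:

          qsub -l walltime=1:00:00 -l nodes=1:ppn=1 fibo.pbs\n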

          This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

          $ qsub fibo.pbs\n123456\n

          The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"123456 \"); this is a unique identifier for the job and can be used to monitor and manage your job.

          Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

          To facilitate this, you can use a pre-defined module collection, which you can restore using module restore; see the section on Save and load collections of modules for more information.
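
          As an illustration, a job script that loads its own modules (or restores a saved collection) could look like the sketch below; the module names, my-collection and my_program are placeholders:

          #!/bin/bash -l\n#PBS -l walltime=1:00:00\n#PBS -l nodes=1:ppn=1\nmodule load example/1.2.3 secondexample/2.7-intel-2016b\n# or, equivalently: module restore my-collection\ncd $PBS_O_WORKDIR\n./my_program\n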

          Your job is now waiting in the queue for a free workernode to start on.

          Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

          After your job was started, and ended, check the contents of the directory:

          $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc40000 vsc40000   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc40000 vsc40000    0 Feb 28 13:33 fibo.pbs.e123456\n-rw------- 1 vsc40000 vsc40000 1010 Feb 28 13:33 fibo.pbs.o123456\n-rwxrwxr-x 1 vsc40000 vsc40000  302 Feb 28 13:32 fibo.pl\n

          Explore the contents of the 2 new files:

          $ more fibo.pbs.o123456\n$ more fibo.pbs.e123456\n

          These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('123456' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script).

          "}, {"location": "running_batch_jobs/#when-will-my-job-start", "title": "When will my job start?", "text": "

          In practice, it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires, and new jobs may be submitted by other users that are assigned a higher priority than your job(s).

          The HPC-UGent infrastructure clusters use a fair-share scheduling policy (see HPC Policies). There is no guarantee on when a job will start, since it depends on a number of factors. One of these factors is the priority of the job, which is determined by:

          • Historical use: the aim is to balance usage over users, so infrequent (in terms of total compute time used) users get a higher priority

          • Requested resources (amount of cores, walltime, memory, ...). The more resources you request, the more likely it is the job(s) will have to wait for a while until those resources become available.

          • Time waiting in queue: queued jobs get a higher priority over time.

          • User limits: this avoids having a single user use the entire cluster. This means that each user can only use a part of the cluster.

          • Whether or not you are a member of a Virtual Organisation (VO).

            Each VO gets assigned a fair share target, which has a big impact on the job priority. This is done to let the job scheduler balance usage across different research groups.

            If you are not a member of a specific VO, you are sharing a fair share target with all other users who are not in a specific VO (which implies being in the (hidden) default VO). This can have a (strong) negative impact on the priority of your jobs compared to the jobs of users who are in a specific VO.

            See Virtual Organisations for more information on how to join a VO, or request the creation of a new VO if there is none yet for your research group.

          Some other factors are how busy the cluster is, how many workernodes are active, the resources (e.g., number of cores, memory) provided by each workernode, ...

          It might be beneficial to request less resources (e.g., not requesting all cores in a workernode), since the scheduler often finds a \"gap\" to fit the job into more easily.
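
          For example, a job could request only part of a workernode with a resource specification like the following (the values are illustrative):

          #PBS -l nodes=1:ppn=4\n#PBS -l walltime=2:00:00\n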

          Sometimes it happens that a couple of nodes are free while a job still does not start. Empty nodes are not necessarily available for your job(s). Just imagine that an N-node job (with a higher priority than your waiting job(s)) should run. It is quite unlikely that N nodes would become empty at the same moment to accommodate this job, so while fewer than N nodes are empty, they are kept free (and appear empty) for that job. The moment the Nth node becomes empty, the waiting N-node job will consume these N free nodes.

          "}, {"location": "running_batch_jobs/#specifying-the-cluster-on-which-to-run", "title": "Specifying the cluster on which to run", "text": "

          To use other clusters, you can swap the cluster module. This is a special module that changes what modules are available for you, and what cluster your jobs will be queued in.

          By default you are working on doduo. To switch to, e.g., donphan you need to redefine the environment so you get access to all modules installed on the donphan cluster, and to be able to submit jobs to the donphan scheduler so your jobs will start on donphan instead of the default doduo cluster.

          module swap cluster/donphan\n

          Note: the donphan modules may not work directly on the login nodes, because the login nodes do not have the same architecture as the donphan cluster. They do have the same architecture as the doduo cluster, which is why software works on the login nodes by default. See the section on Running software that is incompatible with host for why this is and how to fix it.

          To list the available cluster modules, you can use the module avail cluster/ command:

          $ module avail cluster/\n--------------------------------------- /etc/modulefiles/vsc ----------------------------------------\n   cluster/accelgor (S)    cluster/doduo   (S,L)    cluster/gallade (S)    cluster/skitty  (S)\n   cluster/default         cluster/donphan (S)      cluster/joltik  (S)\n\n  Where:\n   S:  Module is Sticky, requires --force to unload or purge\n   L:  Module is loaded\n   D:  Default Module\n\nIf you need software that is not listed, \nrequest it via https://www.ugent.be/hpc/en/support/software-installation-request\n

          As indicated in the output above, each cluster module is a so-called sticky module, i.e., it will not be unloaded when module purge (see the section on purging modules) is used.
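
          For example (a minimal sketch based on the sticky-module behaviour shown in the output above):

          module purge            # unloads all modules except sticky ones such as cluster/doduo\nmodule --force purge    # also unloads sticky modules, including the cluster module\n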

          The output of the various commands interacting with jobs (qsub, qstat, ...) all depends on which cluster module is loaded.

          "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

          It is possible to submit jobs from a job to a cluster different than the one your job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

          To submit jobs to the donphan cluster, you can change only what is needed in your session environment by using module swap env/slurm/donphan instead of module swap cluster/donphan. The latter command also activates the software modules that are installed specifically for donphan, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the donphan cluster, while your software environment stays unchanged. The same approach can be used to submit jobs to any other cluster, of course.

          Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the doduo cluster, loading the cluster/doduo module corresponds to loading 3 different env/ modules:

          env/ module for doduo Purpose env/slurm/doduo Changes $SLURM_CLUSTERS which specifies the cluster where jobs are sent to. env/software/doduo Changes $MODULEPATH, which controls what software modules are available for loading. env/vsc/doduo Changes the set of $VSC_ environment variables that are specific to the doduo cluster

          We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they are doing, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

          We also recommend running a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
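
          A minimal sketch of this workflow (job.pbs is just a placeholder for your own job script; doduo is assumed to be the cluster you started from):

          module swap env/slurm/donphan   # only change where jobs are sent to\nqsub job.pbs                    # this job is queued on the donphan cluster\nmodule swap cluster/doduo       # reset the environment to a sane state\n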

          "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

          Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

          qstat 12345\n

          To show on which compute nodes your job is running (only while it is running):

          qstat -n 12345\n

          To remove a job from the queue so that it will not run, or to stop a job that is already running:

          qdel 12345\n

          When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

          $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n123456 ....     mpi  vsc40000     0    Q short\n

          Here:

          Job ID the job's unique identifier

          Name the name of the job

          User the user that owns the job

          Time Use the elapsed walltime for the job

          Queue the queue the job is in

          The state S can be any of the following:

          State Meaning Q The job is queued and is waiting to start. R The job is currently running. E The job is currently exiting after having run. C The job is completed after having run. H The job has a user or system hold on it and will not be eligible to run until the hold is removed.

          A user hold means that the user can remove the hold themselves. A system hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to find out why this is the case.
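
          If you want to hold or release one of your own jobs yourself, the standard Torque/PBS commands for this are qhold and qrls; their availability may depend on the cluster, so treat this as a sketch:

          qhold 12345   # put a user hold on job 12345\nqrls 12345    # release the user hold again\n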

          "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

          There is currently (since May 2019) no way to get an overall view of the state of the cluster queues for the HPC-UGent infrastructure, due to changes to the cluster resource management software (and also because a general overview is mostly meaningless, since it doesn't give any indication of the resources requested by the queued jobs).

          "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

          If you do not give more information about your job when submitting it with qsub, default values will be assumed, and these are almost never appropriate for real jobs.

          It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

          "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

          The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

          qsub -l walltime=2:30:00 ...\n

          For the simplest cases, only the maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but will use a default value (one hour on most clusters).

          The maximum walltime for HPC-UGent clusters is 72 hours.

          If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
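
          A minimal sketch of this pattern inside a job script (the 2-hour timeout, main_program and the file names are assumptions; leave enough margin before the requested walltime):

          #PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# stop the main program after at most 2 hours, leaving time to copy results\ntimeout 2h ./main_program\ncp output.dat $VSC_DATA/\n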

          qsub -l mem=4gb ...\n

          The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

          The default memory reserved for a job on any given HPC-UGent cluster is the \"usable memory per node\" divided by the \"number of cores in a node\", multiplied by the requested number of processor cores (ppn). If you do not define the memory for a job, either as a command line option or as a memory directive in the job script, this default memory is used. Please note that using the default memory is recommended. For the \"usable memory per node\" and the \"number of cores in a node\" please consult https://www.ugent.be/hpc/en/infrastructure.
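
          As a worked example with made-up numbers (consult the infrastructure page for the real values of your cluster): on a node with 240 GiB of usable memory and 96 cores, a job requesting 8 cores would by default be assigned:

          240 GiB / 96 cores x 8 cores (ppn=8) = 20 GiB of memory\n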

          qsub -l nodes=5:ppn=2 ...\n

          The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

          qsub -l nodes=1:westmere\n

          The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

          These options can either be specified on the command line, e.g.

          qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

          or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          Note that the resources requested on the command line will override those specified in the PBS file.
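
          For example, submitting the job script above with an extra -l option on the command line overrides the memory requested in the #PBS directive:

          # the 4gb requested here takes precedence over the \"#PBS -l mem=2gb\" line in fibo.pbs\nqsub -l mem=4gb fibo.pbs\n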

          "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

          At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

          When you navigate to that directory and list its contents, you should see them:

          $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc40000  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc40000   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc40000   52 Sep 11 11:03 fibo.pbs.e123456\n-rw------- 1 vsc40000 1307 Sep 11 11:03 fibo.pbs.o123456\n

          In our case, our job has created both an output file ('fibo.pbs.o123456') and an error file ('fibo.pbs.e123456'), containing the info written to stdout and stderr respectively.

          Inspect the generated output and error files:

          $ cat fibo.pbs.o123456\n...\n$ cat fibo.pbs.e123456\n...\n
          "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

          You can instruct the HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

          #PBS -m b \n#PBS -m e \n#PBS -m a\n

          or

          #PBS -m abe\n

          These options can also be specified on the command line. Try it and see what happens:

          qsub -m abe fibo.pbs\n

          The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

          qsub -m b -M john.smith@example.com fibo.pbs\n

          will send an e-mail to john.smith@example.com when the job begins.

          "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

          If you submit two jobs that you expect to run one after another (for example because the first generates a file the second needs), there might be a problem, as they might both be run at the same time.

          So the following example might go wrong:

          $ qsub job1.sh\n$ qsub job2.sh\n

          You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

          afterok means \"After OK\", or in other words, after the first job successfully completed.

          It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
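
          For example, a sketch that runs a (hypothetical) cleanup.sh regardless of whether the first job succeeded or failed:

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterany:$FIRST_ID cleanup.sh\n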

          1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

          "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

          Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

          Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line instead.

          Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the HPC-UGent infrastructure. Waiting for user input takes a very long time in the life of a CPU and does not make efficient use of the computing resources.

          The syntax for qsub for submitting an interactive PBS job is:

          $ qsub -I <... pbs directives ...>\n
          "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

          Tip

          Find the code in \"~/examples/Running_interactive_jobs\"

          First of all, in order to know on which computer you're working, enter:

          $ hostname -f\ngligar07.gastly.os\n

          This means that you're now working on the login node gligar07.gastly.os of the cluster.

          The most basic way to start an interactive job is the following:

          $ qsub -I\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n

          There are two things of note here.

          1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

          2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

          In order to know on which compute-node you're working, enter again:

          $ hostname -f\nnode3501.doduo.gent.vsc\n

          Note that we are now working on the compute-node called \"node3501.doduo.gent.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

          Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

          $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

          You can exit the interactive session with:

          $ exit\n

          Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

          You can work for 3 hours by:

          qsub -I -l walltime=03:00:00\n

          If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide an adequate walltime and to save your data before your (wall)time is up! When you do not specify a walltime, you get a default walltime of 1 hour.

          "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

          To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

          An X Window server is packaged by default on most Linux distributions. If you have a graphical user interface this generally means that you are using an X Window server.

          The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

          "}, {"location": "running_interactive_jobs/#connect-with-x-forwarding", "title": "Connect with X-forwarding", "text": "

          In order to get the graphical output of your application (which is running on a compute node on the HPC) transferred to your personal screen, you will need to reconnect to the HPC with X-forwarding enabled, which is done with the \"-X\" option.

          First exit and reconnect to the HPC with X-forwarding enabled:

          $ exit\n$ ssh -X vsc40000@login.hpc.ugent.be\n$ hostname -f\ngligar07.gastly.os\n

          First, we check whether the GUIs on the login node are correctly forwarded to the screen of your local machine. An easy way to test this is by running a small X application on the login node. Type:

          $ xclock\n

          And you should see a clock appearing on your screen.

          You can close your clock and connect further to a compute node with again your X-forwarding enabled:

          $ qsub -I -X\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n$ hostname -f\nnode3501.doduo.gent.vsc\n$ xclock\n

          and you should see your clock again.

          "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

          We have developed a little interactive program that demonstrates communication in both directions: it sends information to your local screen, but also asks you to click a button.

          Now run the message program:

          cd ~/examples/Running_interactive_jobs\n./message.py\n

          You should see the following message appearing.

          Click any button and see what happens.

          -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
          "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

          You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where your standard output and error messages will go, and where you can collect your results.

          "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

          First go to the directory:

          cd ~/examples/Running_jobs_with_input_output_data\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          List and check the contents with:

          $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc40000   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc40000   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file3.py\n

          Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

          file1.py
          #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

          The code of the Python script is self-explanatory:

          1. In step 1, we write something to the file Hello.txt in the current directory.

          2. In step 2, we write some text to stdout.

          3. In step 3, we write to stderr.

          Check the contents of the first job script:

          file1a.pbs
          #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

          You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

          Submit it:

          qsub file1a.pbs\n

          After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

          $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc40000   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc40000  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc40000  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc40000   91 Sep 13 13:13 file1a.pbs.e123456\n-rw------- 1 vsc40000  105 Sep 13 13:13 file1a.pbs.o123456\n-rw-rw-r-- 1 vsc40000  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc40000  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file3.py*\n

          Some observations:

          1. The file Hello.txt was created in the current directory.

          2. The file file1a.pbs.o123456 contains all the text that was written to the standard output stream (\"stdout\").

          3. The file file1a.pbs.e123456 contains all the text that was written to the standard error stream (\"stderr\").

          Inspect their contents ...\u00a0and remove the files

          $ cat Hello.txt\n$ cat file1a.pbs.o123456\n$ cat file1a.pbs.e123456\n$ rm Hello.txt file1a.pbs.o123456 file1a.pbs.e123456\n

          Tip

          Type cat H and press the Tab key, and it will expand into cat Hello.txt.

          "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

          Check the contents of the job script and execute it.

          file1b.pbs
          #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

          Inspect the contents again ...\u00a0and remove the generated files:

          $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e123456\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o123456\n$ rm Hello.txt my_serial_job.*\n

          Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrote the JOBNAME variable, and resulted in a different name for the stdout and stderr files. This name is also shown in the second column of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.

          "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

          You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

          file1c.pbs
          #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
          "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

          The HPC cluster offers its users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

          Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

          The following locations are available:

          Variable Description Long-term storage slow filesystem, intended for smaller files $VSC_HOME For your configuration files and other small files, see the section on your home directory. The default directory is user/Gent/xxx/vsc40000. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites. $VSC_DATA A bigger \"workspace\", for datasets, results, logfiles, etc. see the section on your data directory. The default directory is data/Gent/xxx/vsc40000. The same file system is accessible from all sites. Fast temporary storage $VSC_SCRATCH_NODE For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content. $VSC_SCRATCH For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Gent/xxx/vsc40000. This directory is cluster- or site-specific: On different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content. $VSC_SCRATCH_SITE Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space. $VSC_SCRATCH_GLOBAL Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space. $VSC_SCRATCH_CLUSTER The scratch filesystem closest to the cluster. $VSC_SCRATCH_ARCANINE A separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads.

          Since these directories are not necessarily mounted at the same locations on all sites, you should always (try to) use the environment variables that have been created.
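
          For example, a job script becomes more portable when it refers to these environment variables instead of hard-coded paths (the file and program names below are just placeholders):

          cp $VSC_DATA/input.dat $VSC_SCRATCH/    # stage input data to fast scratch\ncd $VSC_SCRATCH\n./my_program input.dat > results.txt\ncp results.txt $VSC_DATA/               # copy results back to long-term storage\n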

          We elaborate more on the specific function of these locations in the following sections.

          Note: $VSC_SCRATCH_KYUKON and $VSC_SCRATCH are the same directories (\"kyukon\" is the name of the storage cluster where the default shared scratch filesystem is hosted).

          For documentation about VO directories, see the section on VO directories.

          "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

          Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

          The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

          The operating system also creates a few files and folders here to manage your account. Examples are:

          File or Directory Description .ssh/ This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing! .bash_profile When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt. .bashrc This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts. .bash_history This file contains the commands you typed at your shell prompt, in case you need them again."}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

          In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

          The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

          If you are running out of quota on your $VSC_DATA filesystem you can join an existing VO, or request a new VO. See the section about virtual organisations on how to do this.

          "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

          To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

          You should remove any data from these systems after your processing has finished. There are no guarantees about how long your data will be stored on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

          Each type of scratch has its own use:

          Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

          Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes, and therefore it is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

          Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

          Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

          "}, {"location": "running_jobs_with_input_output_data/#your-ugent-home-drive-and-shares", "title": "Your UGent home drive and shares", "text": "

          In order to access data on your UGent share(s), you need to stage in the data and stage it out afterwards. On the login nodes, it is possible to access your UGent home drive and shares. To allow this you need a Kerberos ticket. This requires that you first authenticate yourself with your UGent username and password by running:

          $ kinit yourugentusername@UGENT.BE\nPassword for yourugentusername@UGENT.BE:\n

          Now you should be able to access your files by running

          $ ls /UGent/yourugentusername\nhome shares www\n

          Please note the shares will only be mounted when you access this folder. You should specify your complete username - tab completion will not work.

          If you want to use the UGent shares for longer than 24 hours, you should request a ticket that can be renewed for up to a week by running

          kinit yourugentusername@UGENT.BE -r 7\n

          You can verify your authentication ticket and expiry dates yourself by running klist

          $ klist\n...\nValid starting     Expires            Service principal\n14/07/20 15:19:13  15/07/20 01:19:13  krbtgt/UGENT.BE@UGENT.BE\n    renew until 21/07/20 15:19:13\n

          Your ticket is valid for 10 hours, but you can renew it before it expires.

          To renew your tickets, simply run

          kinit -R\n

          If you want your ticket to be renewed automatically up to the maximum expiry date, you can run

          krenew -b -K 60\n

          Each hour the process will check if your ticket should be renewed.

          We strongly advise you to disable access to your shares once it is no longer needed:

          kdestroy\n

          If you get an error \"Unknown credential cache type while getting default ccache\" (or similar) and you use conda, then please deactivate conda before you use the commands in this chapter.

          conda deactivate\n
          "}, {"location": "running_jobs_with_input_output_data/#ugent-shares-with-globus", "title": "UGent shares with globus", "text": "

          In order to access your UGent home and shares inside the globus endpoint, you first have to generate authentication credentials on the endpoint. To do that, you have to ssh to the globus endpoint from a login node. You will be prompted for your UGent username and password to authenticate:

          $ ssh globus\nUGent username:ugentusername\nPassword for ugentusername@UGENT.BE:\nShares are available in globus endpoint at /UGent/ugentusername/\nOverview of valid tickets:\nTicket cache: KEYRING:persistent:xxxxxxx:xxxxxxx\nDefault principal: ugentusername@UGENT.BE\n\nValid starting     Expires            Service principal\n29/07/20 15:56:43  30/07/20 01:56:43  krbtgt/UGENT.BE@UGENT.BE\n    renew until 05/08/20 15:56:40\nTickets will be automatically renewed for 1 week\nConnection to globus01 closed.\n

          Your shares will then be available at /UGent/ugentusername/ under the globus VSC tier2 endpoint. Tickets will be renewed automatically for 1 week, after which you'll need to run this again. We advise you to disable access to your shares within globus once access is no longer needed:

          $ ssh globus01 destroy\nSuccesfully destroyed session\n
          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

          Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files as well as the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

          To see a list of your current quota, visit the VSC accountpage: https://account.vscentrum.be. VO moderators can see a list of VO quota usage per member of their VO via https://account.vscentrum.be/django/vo/.

          The rules are:

          1. You will only receive a warning when you have reached the soft limit of either quota.

          2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

          3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

          We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. And they help to guarantee a fair use of all available resources for all users. Quota also help to ensure that each folder is used for its intended purpose.

          "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

          Tip

          Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

          In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

          1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

          2. repeat this action 30,000 times;

          3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the HPC.

          $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

          Tip

          Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

          In this exercise, you will

          1. Generate the file \"primes_1.txt\" again as in the previous exercise;

          2. open the file;

          3. read it line by line;

          4. calculate the average of primes in the line;

          5. count the number of primes found per line;

          6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job:

          $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
          "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

          The available disk space on the HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website. (https://vscdocumentation.readthedocs.io/en/latest/hardware.html) As explained in the section on predefined quota, this implies that there are also limits to:

          • the amount of disk space; and

          • the number of files

          that can be made available to each individual HPC user.

          The quota of disk space and number of files for each HPC user is:

          Volume Max. disk space Max. # Files HOME 3 GB 20000 DATA 25 GB 100000 SCRATCH 25 GB 100000

          Tip

          The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.

          Tip

          If you obtained your VSC account via UGent, you can get (significantly) more storage quota in the DATA and SCRATCH volumes by joining a Virtual Organisation (VO), see the section on virtual organisations for more information. In case of questions, contact hpc@ugent.be.

          "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

          You can consult your current storage quota usage on the HPC-UGent infrastructure shared filesystems via the VSC accountpage, see the \"Usage\" section at https://account.vscentrum.be .

          VO moderators can inspect storage quota for all VO members via https://account.vscentrum.be/django/vo/.

          To check your storage usage on the local scratch filesystems on VSC sites other than UGent, you can use the \"show_quota\" command (when logged into the login nodes of that VSC site).

          Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

          $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

          This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

          If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

          $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

          If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

          $ du -s\n5632 .\n$ du -s -h\n

          If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

          $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

          Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

          $ du -h --max-depth 1 $VSC_HOME\n22M /user/home/gent/vsc400/vsc40000/dataset01\n36M /user/home/gent/vsc400/vsc40000/dataset02\n22M /user/home/gent/vsc400/vsc40000/dataset03\n3.5M /user/home/gent/vsc400/vsc40000/primes.txt\n24M /user/home/gent/vsc400/vsc40000/.cache\n
          "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

          Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

          Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

          To change the group of a directory and its underlying directories and files, you can use:

          chgrp -R groupname directory\n
          "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
          1. Get the group name you want to belong to.

          2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

          3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

          "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
          1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

          2. Fill out the group name. This cannot contain spaces.

          3. Put a description of your group in the \"Info\" field.

          4. You will now be a member and moderator of your newly created group.

          "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

          Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

          "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

          You can get details about the current state of groups on the HPC infrastructure with the following command (where example is the name of the group we want to inspect):

          $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

          We can see that the group's id number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

          "}, {"location": "running_jobs_with_input_output_data/#virtual-organisations", "title": "Virtual Organisations", "text": "

          A Virtual Organisation (VO) is a special type of group. You can only be a member of one single VO at a time (or not be in a VO at all). Being in a VO allows for larger storage quota to be obtained (but these requests should be well-motivated).

          "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-vo", "title": "Joining an existing VO", "text": "
          1. Get the VO id of the research group you belong to (this id is formed by the letters gvo, followed by 5 digits).

          2. Go to https://account.vscentrum.be/django/vo/join and fill in the section named \"Join VO\". You will be asked to fill in the VO id and a message for the moderator of the VO, where you identify yourself. This should look something like in the image below.

          3. After clicking the submit button, a message will be sent to the moderator of the VO, who will either approve or deny the request.

          "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-vo", "title": "Creating a new VO", "text": "
          1. Go to https://account.vscentrum.be/django/vo/new and scroll down to the section \"Request new VO\". This should look something like in the image below.

          2. Fill in why you want to request a VO.

          3. Fill out both the internal and public VO name. These cannot contain spaces, and should be 8-10 characters long. For example, genome25 is a valid VO name.

          4. Fill out the rest of the form and press submit. This will send a message to the HPC administrators, who will then either approve or deny the request.

          5. If the request is approved, you will now be a member and moderator of your newly created VO.

          "}, {"location": "running_jobs_with_input_output_data/#requesting-more-storage-space", "title": "Requesting more storage space", "text": "

          If you're a moderator of a VO, you can request additional quota for the VO and its members.

          1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Request additional quota\". See the image below to see how this looks.

          2. Fill out how much additional storage you want. In the screenshot below, we're asking for 500 GiB extra space for VSC_DATA, and for 1 TiB extra space on VSC_SCRATCH_KYUKON.

          3. Add a comment explaining why you need additional storage space and submit the form.

          4. An HPC administrator will review your request and approve or deny it.

          "}, {"location": "running_jobs_with_input_output_data/#setting-per-member-vo-quota", "title": "Setting per-member VO quota", "text": "

          VO moderators can tweak how much of the VO quota each member can use. By default, this is set to 50% for each user, but the moderator can change this: it is possible to give a particular user more than half of the VO quota (for example 80%), or significantly less (for example 10%).

          Note that the total percentage can be above 100%: the percentages the moderator allocates per user are the maximum percentages of storage users can use.

          1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Manage per-member quota share\". See the image below to see how this looks.

          2. Fill out how much percent of the space you want each user to be able to use. Note that the total can be above 100%. In the screenshot below, there are four users. Alice and Bob can use up to 50% of the space, Carl can use up to 75% of the space, and Dave can only use 10% of the space. So in total, 185% of the space has been assigned, but of course only 100% can actually be used.

          "}, {"location": "running_jobs_with_input_output_data/#vo-directories", "title": "VO directories", "text": "

          When you're a member of a VO, there will be some additional directories on each of the shared filesystems available:

          VO scratch ($VSC_SCRATCH_VO): A directory on the shared scratch filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_SCRATCH directory (see the section on your scratch space).

          VO data ($VSC_DATA_VO): A directory on the shared data filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_DATA directory (see the section on your data directory).

          If you put _USER after each of these variable names, you can see your personal folder in these filesystems. For example: $VSC_DATA_VO_USER is your personal folder in your VO data filesystem (this is equivalent to $VSC_DATA_VO/$USER), and analogous for $VSC_SCRATCH_VO_USER.
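
          For example, to check where these directories are located and to copy a result file (a placeholder name) to your personal folder in the VO data filesystem:

          echo $VSC_DATA_VO $VSC_DATA_VO_USER\ncp results.txt $VSC_DATA_VO_USER/\n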

          "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

          A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

          "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

          This section will explain how to create, activate, use and deactivate Python virtual environments.

          "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

          A Python virtual environment can be created with the following command:

          python -m venv myenv      # Create a new virtual environment named 'myenv'\n

          This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

          Warning

          When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

          "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

          To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

          source myenv/bin/activate                    # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

          After activating the virtual environment, you can install additional Python packages with pip install:

          pip install example_package1\npip install example_package2\n

          These packages will be scoped to the virtual environment; they will not affect the system-wide Python installation and are only available while the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

          It is now possible to run Python scripts that use the installed packages in the virtual environment.
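
          You can quickly verify that the interpreter and packages from the virtual environment are being used (the exact output depends on your setup):

          which python     # should point to .../myenv/bin/python\npip list         # shows the packages installed in the virtual environment\n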

          Tip

          When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

          Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

          To check if a package is available as a module, use:

          module av package_name\n

          Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

          module show module_name\n

          to check which extensions are included in a module (if any).
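          For example, after loading the SciPy-bundle module used elsewhere in this documentation, you can quickly verify that numpy is provided by the module rather than by pip (a sketch; the module version must match one that is actually installed):

          module load SciPy-bundle/2023.11-gfbf-2023b\npython -c \"import numpy; print(numpy.__version__)\"\n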

          "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

          Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

          example.py
          import example_package1\nimport example_package2\n...\n
          python example.py\n
          "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

          When you are done using the virtual environment, you can deactivate it. To do that, run:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

          You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

          pytorch_poutyne.py
          import torch\nimport poutyne\n\n...\n

          We load a PyTorch package as a module and install Poutyne in a virtual environment:

          module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

          While the virtual environment is activated, we can run the script without any issues:

          python pytorch_poutyne.py\n

          Deactivate the virtual environment when you are done:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

          To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

          module swap cluster/donphan\nqsub -I\n

          After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

          Naming a virtual environment

          When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

          python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
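          Jobs (or interactive sessions) can then activate the environment that matches the cluster they run on. A minimal sketch, assuming such an environment was created on every cluster you use:

          source myenv_${VSC_INSTITUTE_CLUSTER}/bin/activate    # e.g. resolves to myenv_donphan/bin/activate on donphan\n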
          "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

          This section will combine the concepts discussed in the previous sections to:

          1. Create a virtual environment on a specific cluster.
          2. Combine packages installed in the virtual environment with modules.
          3. Submit a job script that uses the virtual environment.

          The example script that we will run is the following:

          pytorch_poutyne.py
          import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

          First, we create a virtual environment on the donphan cluster:

          module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

          Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

          jobscript.pbs
          #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processor per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

          Next, we submit the job script:

          qsub jobscript.pbs\n

          Two files will be created in the directory where the job was submitted: python_job_example.o123456 and python_job_example.e123456, where 123456 is the id of your job. The .o file contains the output of the job.

          "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

          Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in a virtual environment created on cluster A might not work with the CPU architecture of cluster B.

          For example, if we create a virtual environment on the skitty cluster,

          $ module swap cluster/skitty\n$ qsub -I\n$ python -m venv myenv\n

          return to the login node by pressing CTRL+D and try to use the virtual environment:

          $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

          we are presented with an illegal instruction error. More info on this can be found here.

          "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

          When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

          python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

          Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.

          "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

          There are two main reasons why this error could occur.

          1. You have not loaded the Python module that was used to create the virtual environment.
          2. You loaded or unloaded modules while the virtual environment was activated.
          "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

          If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

          The following commands illustrate this issue:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

          module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

          You must not load or unload modules while a virtual environment is active. Loading and unloading modules modifies the $PATH variable of the current shell. When you activate a virtual environment, it stores the $PATH variable as it is at that moment. If you then load or unload modules (which modifies $PATH) and afterwards deactivate the virtual environment, $PATH is reset to the stored value, which may still reference modules that are no longer loaded. Trying to use those modules will lead to errors:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          The solution is to only modify modules when not in a virtual environment.

          "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

          Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

          This documentation only covers aspects of using Singularity on the infrastructure.

          "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to avoid that the use of Singularity impacts other users on the system.

          The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know.
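          As a minimal sketch of a workflow that respects these restrictions (myimage.sif is a hypothetical image file you already have):

          cp myimage.sif $VSC_SCRATCH/                                      # put the image on a scratch filesystem\nsingularity exec $VSC_SCRATCH/myimage.sif cat /etc/os-release     # run a command inside the container\n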

          "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

          Creating new Singularity images or converting Docker images by default requires admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can create new Singularity images or convert Docker images without them.

          When you create Singularity images or convert Docker images, some restrictions apply:

          • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination (see the sketch below).
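          A minimal sketch of such a build, assuming you have a Singularity definition file myimage.def (the file name is hypothetical):

          singularity build --fakeroot /tmp/myimage.sif myimage.def    # build in a globally writable location\nmv /tmp/myimage.sif $VSC_SCRATCH/                             # then move the image to its destination\n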
          "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          ::: prompt :::

          Create a job script like:

          Create an example myscript.sh:

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n

          "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

          Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

          ::: prompt :::

          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before singularity execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

          ::: prompt :::

          For example, to compile an MPI example:

          ::: prompt :::

          Example MPI job script:

          "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

          The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

          As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

          In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

          In order to prepare things, make a teaching request by contacting the HPC-UGent team with the following information (explained further below):

          • Title and nickname
          • Start and end date for your course or training
          • VSC-ids of all teachers/trainers
          • Participants based on UGent Course Code and/or list of VSC-ids
          • Optional information
            • Additional storage requirements
              • Shared folder
              • Groups folder for collaboration
              • Quota
            • Reservation for resource requirements beyond the interactive cluster
            • Ticket number for specific software needed for your course/training
            • Details for a custom Interactive Application in the webportal

          In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

          Please make these requests well in advance, several weeks before the start of your course/workshop.

          "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

          The title of the course or training can be used in e.g. reporting.

          The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

          When choosing the nickname, try to make it unique, but this is not enforced nor checked.

          "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

          The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

          The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

          • Course group and subgroups will be deactivated
          • Residual data in the course directories will be archived or deleted
          • Custom Interactive Applications will be disabled
          "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

          A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also members of this group).

          This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

          Provide us with a list of all the VSC-ids for the teachers or trainers to identify the moderators.

          "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

          The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

          "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

          Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

          The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

          Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

          A course group will be automatically created for your course, with all VSC accounts of registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

          "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

          (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as members. Teachers/trainers will be able to add/remove VSC accounts from this course group, but students will have to follow the procedure to request a VSC account themselves. There will be no automation.

          "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

          For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

          This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

          Every course directory will always contain the folders:

          • input
            • ideally suited to distribute input data such as common datasets
            • moderators have read/write access
            • group members (students) only have read access
          • members
            • this directory contains a personal folder members/vsc<01234> for every student in your course
            • only this specific VSC-id will have read/write access to this folder
            • moderators have read access to this folder
          "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

          Optionally, we can also create these folders:

          • shared
            • this is a folder for sharing files between any and all group members
            • all group members and moderators have read/write access
            • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
          • groups
            • a number of groups/group_<01> folders are created under the groups folder
            • these folders are suitable if you want to let your students collaborate closely in smaller groups
            • each of these group_<01> folders are owned by a dedicated group
            • teachers are automatically made moderators of these dedicated groups
            • moderators can populate these groups with VSC-ids of group members on the VSC account page, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
            • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

          If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

          • shared: yes
          • subgroups: <number of (sub)groups>
          "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

          There are 4 quota settings that you can adjust in your teaching request in case the defaults are not sufficient:

          • overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
          • member quota (default: 5 GB volume and 10k files) applies per student/participant

          The course data usage is not counted towards any other quota (like VO quota); it depends solely on these settings.

          "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

          The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

          "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

          We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

          Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

          Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

          Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

          "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

          In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

          We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

          Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

          "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

          HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

          A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

          If you would like this for your course, provide more details in your teaching request, including:

          • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

          • which cluster you want to use

          • how many nodes/cores/GPUs are needed

          • which software modules you are loading

          • custom code you are launching (e.g. autostart a GUI)

          • required environment variables that you are setting

          • ...

          We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

          A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

          "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

          Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore: since 2021, the HPC-UGent infrastructure no longer uses Torque in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers would not have to learn a different set of commands to submit and manage jobs.

          "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

          Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

          "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

          Jobcli is a Python library that was developed by the HPC-UGent team to make it possible for the HPC-UGent infrastructure to use a Torque frontend and a Slurm backend. In addition to that, it adds some additional options for Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

          "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

          Adding --help to a Torque command when using it on the HPC-UGent infrastructure will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

          For example:

          $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

          "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

          Adding --dryrun to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. Using --dryrun will not actually execute the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

          Similarly to --dryrun, adding --debug to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. However in contrast to --dryrun, using --debug will actually run the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

          The following examples illustrate the working of the --dryrun and --debug options with an example jobscript.

          example.sh:

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

          Running the following command:

          $ qsub --dryrun example.sh -N example\n

          will generate this output:

          Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc40000/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#!/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque directives into Slurm directives. For example, the job name is the one we specified with the -N option in the command.

          With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related constructs, like $PBS_JOBID, they are retained. Slurm is configured on the HPC-UGent infrastructure such that common PBS_* environment variables are defined in the job environment, alongside their Slurm equivalents.

          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

          Similarly to the --dryrun example, we start by running the following command:

          $ qsub --debug example.sh -N example\n

          which generates this output:

          DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
          The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

          "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

          Below is a list of the most common and useful directives.

          • -k (All): Send \"stdout\" and/or \"stderr\" to your home directory when the job runs. Example: #PBS -k o, #PBS -k e or #PBS -koe
          • -l (All): Precedes a resource request, e.g., processors, wallclock.
          • -M (All): Send e-mail messages to an alternative e-mail address. Example: #PBS -M me@mymail.be
          • -m (All): Send an e-mail when a job begins execution and/or ends or aborts. Example: #PBS -m b, #PBS -m be or #PBS -m ba
          • mem (Shared Memory): Specifies the amount of memory you need for a job. Example: #PBS -l mem=90gb
          • mpiprocs (Clusters): Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. Example: #PBS -l mpiprocs=4
          • -N (All): Give your job a unique name. Example: #PBS -N galaxies1234
          • ncpus (Shared Memory): The number of processors to use for a shared memory job. Example: #PBS -l ncpus=4
          • -r (All): Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. Example: #PBS -r n or #PBS -r y
          • select (Clusters): Number of compute nodes to use. Usually combined with the mpiprocs directive. Example: #PBS -l select=2
          • -V (All): Make sure that the environment in which the job runs is the same as the environment in which it was submitted. Example: #PBS -V
          • walltime (All): The maximum time a job can run before being stopped. If not used, a default of a few minutes applies. Use this flag to prevent misbehaving jobs from running for hundreds of hours. Format is HH:MM:SS. Example: #PBS -l walltime=12:00:00
          "}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

          TORQUE-related environment variables in batch job scripts.

          # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

          IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.
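          As a minimal illustration of this rule (and of some of the variables listed above), a sketch of a job script with all #PBS directives placed before the first executable line:

          #!/bin/bash\n#PBS -N example_job                  ## all #PBS directives come first\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR                    # first executable line; any #PBS directive below this point is ignored\necho \"Job $PBS_JOBID ($PBS_JOBNAME) started in $PBS_O_WORKDIR\"\n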

          When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

          • PBS_ENVIRONMENT: set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job.
          • PBS_JOBID: the job identifier assigned to the job by the batch system. This is the same number you see when you run qstat.
          • PBS_JOBNAME: the job name supplied by the user.
          • PBS_NODEFILE: the name of the file that contains the list of nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc.
          • PBS_QUEUE: the name of the queue from which the job is executed.
          • PBS_O_HOME: value of the HOME variable in the environment in which qsub was executed.
          • PBS_O_LANG: value of the LANG variable in the environment in which qsub was executed.
          • PBS_O_LOGNAME: value of the LOGNAME variable in the environment in which qsub was executed.
          • PBS_O_PATH: value of the PATH variable in the environment in which qsub was executed.
          • PBS_O_MAIL: value of the MAIL variable in the environment in which qsub was executed.
          • PBS_O_SHELL: value of the SHELL variable in the environment in which qsub was executed.
          • PBS_O_TZ: value of the TZ variable in the environment in which qsub was executed.
          • PBS_O_HOST: the name of the host upon which the qsub command is running.
          • PBS_O_QUEUE: the name of the original queue to which the job was submitted.
          • PBS_O_WORKDIR: the absolute path of the current working directory of the qsub command. This is the most useful one: use it in every job script. The first thing you do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory.
          • PBS_VERSION: version number of TORQUE, e.g., TORQUE-2.5.1
          • PBS_MOMPORT: active port for the mom daemon.
          • PBS_TASKNUM: number of tasks requested.
          • PBS_JOBCOOKIE: job cookie.
          • PBS_SERVER: server running TORQUE.
          "}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

          Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this in the subsections below.

          "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

          When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

          To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when your software only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

          Even if your software is able to use multiple cores, maybe there is no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
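          A minimal sketch of such a step-wise scaling test, submitting the same (hypothetical) jobscript.pbs with an increasing core count; note that your software must also be told to actually use that many cores (for OpenMP programs this is typically done via OMP_NUM_THREADS):

          for ppn in 2 4 8 16; do\n    qsub -l nodes=1:ppn=${ppn} -N scaling_test_${ppn} jobscript.pbs\ndone\n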

          Other reasons why using more cores may not lead to a (significant) speedup include:

          • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the amount of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program in too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

          • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program can not be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload. A minimal formulation of Amdahl's Law is given right after this list.

          • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, 1 thread/process will need to wait until the other one is finished using that resource. When each thread uses the same resource, it will definitely run slower than if it doesn't need to wait for other threads to finish.

          • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that in Python, threads are implemented in a way that prevents multiple threads from running at the same time, because of the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing instead, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can, even though processes are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

          • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

          • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
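          As a side note on Amdahl's Law mentioned above: with $p$ the fraction of the runtime that can be parallelized and $N$ the number of cores, the theoretical speedup is

          $$ S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}. $$

          For the 20-hour example above, $p = 19/20$, so the speedup can never exceed $1 / (1 - 19/20) = 20$, i.e. the execution time can never drop below $20\,\mathrm{h} / 20 = 1$ hour, no matter how many cores are used.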

          More info on running multi-core workloads on the HPC-UGent infrastructure can be found here.

          "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

          When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

          Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

          Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist some libraries that do this for you.

          Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

          An example of how you can make beneficial use of multiple nodes can be found here.

          You can also use MPI in Python, some useful packages that are also available on the HPC are:

          • mpi4py
          • Boost.MPI

          We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software we strongly advise using our mympirun tool.
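          A minimal usage sketch of mympirun (assuming it is provided via a vsc-mympirun module and that mpi_program is your own MPI executable):

          module load vsc-mympirun\nmympirun ./mpi_program    # mympirun picks up the number of processes from your job's resources\n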

          "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

          If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

          If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

          "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

          If you get from your job output an error message similar to this:

          =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

          This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.

          "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

          Sometimes a job hangs at some point or stops writing to the disk. These errors are usually related to quota usage. You may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to the disk, and then resubmit the jobs.

          Another option is to request extra quota for your VO from the VO moderator(s). See the sections on Pre-defined user directories and Pre-defined quotas for more information about quotas and how to use the storage endpoints in an efficient way.

          "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

          If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

          If you have errors that look like:

          vsc40000@login.hpc.ugent.be: Permission denied\n

          or you are experiencing problems with connecting, here is a list of things to do that should help:

          1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

          2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

          3. Your SSH private key may not be in the default location ($HOME/.ssh/id_rsa). There are several ways to deal with this (using one of these is sufficient):

            1. Use the ssh -i option (see section Connect) OR;
            2. Use ssh-add (see section Using an SSH agent) OR;
            3. Specify the location of the key in $HOME/.ssh/config. You will need to replace the VSC login id in the User field with your own:
              Host hpcugent\n    Hostname login.hpc.ugent.be\n    IdentityFile /path/to/private/key\n    User vsc40000\n
              Now you can connect with ssh hpcugent.
          4. Please double/triple check your VSC login ID. It should look something like vsc40000: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

          5. Did you previously connect to the HPC from another machine, and are you now using a new machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

          6. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh. id_rsa.pub is the usual filename of the public key, id_rsa is the usual filename of the private key. (See also section\u00a0Connect)

          7. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

          8. Please do not use someone else's private keys. You must never share your private key, they're called private for a good reason.

          If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@ugent.be and include the following information:

          Please add -vvv as a flag to ssh like:

          ssh -vvv vsc40000@login.hpc.ugent.be\n

          and include the output of that command in the message.

          "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

          If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

          @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \n@     WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!    @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! \nSomeone could be\neavesdropping on you right now (man-in-the-middle attack)! \nIt is also possible that a host key has just been changed. \nThe fingerprint for the ECDSA key sent by the remote host is\nSHA256:1MNKFTfl1T9sm6tTWAo4sn7zyEfiWFLKbk/mlT+7S5s. \nPlease contact your system administrator. \nAdd correct host key in \u00a0~/.ssh/known_hosts to get rid of this message. \nOffending ECDSA key in \u00a0~/.ssh/known_hosts:21\nECDSA host key for login.hpc.ugent.be has changed and you have requested strict checking.\nHost key verification failed.\n

          You will need to remove the line it's complaining about (in the example, line 21). To do that, open ~/.ssh/known_hosts in an editor, and remove the line. This results in ssh \"forgetting\" the system you are connecting to.

          Alternatively, you can use the command that might be shown by the warning (after remove with:); it should look something like this:

          ssh-keygen -f \"~/.ssh/known_hosts\" -R \"login.hpc.ugent.be\"\n

          If the command is not shown, take the file from the \"Offending ECDSA key in\", and the host name from \"ECDSA host key for\" lines.

          After you've done that, you'll need to connect to the HPC again. See Warning message when first connecting to new host to verify the fingerprints.

          "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

          If you get errors like:

          $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

          or

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

          It's probably because you transferred the files from a Windows computer. See the section about dos2unix in Linux tutorial to fix this error.

          "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "
          $ ssh vsc40000@login.hpc.ugent.be\nThe authenticity of host login.hpc.ugent.be (<IP-adress>) can't be established. \n<algorithm> key fingerprint is <hash>\nAre you sure you want to continue connecting (yes/no)?\n

          You can check the authenticity by verifying that the key fingerprint line shown in the message (<algorithm> key fingerprint is <hash>) matches one of the following lines:

          RSA key fingerprint is 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\nRSA key fingerprint is SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\nECDSA key fingerprint is e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\nECDSA key fingerprint is SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\nED25519 key fingerprint is 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\nED25519 key fingerprint is SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n

          If it does, type yes. If it doesn't, please contact support: hpc@ugent.be.

          "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

          To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

          Note

          Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

          "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

          If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

          Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

          You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.
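
          For example, a minimal check you could add at the start of your job script (the reported value is illustrative and depends on the memory your job requested):

          ulimit -v    # prints the virtual memory limit in kilobytes, or 'unlimited'\n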

          "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

          See Generic resource requirements to set memory and other requirements; see Specifying memory requirements to fine-tune the amount of memory you request.

          "}, {"location": "troubleshooting/#module-conflicts", "title": "Module conflicts", "text": "

          Modules that are loaded together must use the same toolchain version or common dependencies. In the following example, we try to load a module that uses the intel-2018a toolchain together with one that uses the intel-2017a toolchain:

          $ module load Python/2.7.14-intel-2018a\n$ module load  HMMER/3.1b2-intel-2017a\nLmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). \nYou should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. \nUse 'ml avail HMMER' to get an overview of the available versions.\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be \nWhile processing the following module(s):\n\n    Module fullname          Module Filename\n    ---------------          ---------------\n    HMMER/3.1b2-intel-2017a  /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua\n

          This resulted in an error because we tried to load two modules with different versions of the intel toolchain.

          To fix this, check if there are other versions of the modules you want to load that have the same version of common dependencies. You can list all versions of a module with module avail: for HMMER, this command is module avail HMMER.
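
          For example, a possible fix is to load versions that were built with the same toolchain. The HMMER version below is hypothetical; pick one that module avail HMMER actually lists:

          $ module purge\n$ module load Python/2.7.14-intel-2018a\n$ module load HMMER/3.2.1-intel-2018a\n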

          As a rule of thumb, toolchains in the same row are compatible with each other:

          GCCcore-13.2.0  GCC-13.2.0                gfbf-2023b/gompi-2023b  foss-2023b\nGCCcore-13.2.0  intel-compilers-2023.2.1  iimkl-2023b/iimpi-2023b  intel-2023b\nGCCcore-12.3.0  GCC-12.3.0                gfbf-2023a/gompi-2023a  foss-2023a\nGCCcore-12.3.0  intel-compilers-2023.1.0  iimkl-2023a/iimpi-2023a  intel-2023a\nGCCcore-12.2.0  GCC-12.2.0                gfbf-2022b/gompi-2022b  foss-2022b\nGCCcore-12.2.0  intel-compilers-2022.2.1  iimkl-2022b/iimpi-2022b  intel-2022b\nGCCcore-11.3.0  GCC-11.3.0                gfbf-2022a/gompi-2022a  foss-2022a\nGCCcore-11.3.0  intel-compilers-2022.1.0  iimkl-2022a/iimpi-2022a  intel-2022a\nGCCcore-11.2.0  GCC-11.2.0                gfbf-2021b/gompi-2021b  foss-2021b\nGCCcore-11.2.0  intel-compilers-2021.4.0  iimkl-2021b/iimpi-2021b  intel-2021b\nGCCcore-10.3.0  GCC-10.3.0                gfbf-2021a/gompi-2021a  foss-2021a\nGCCcore-10.3.0  intel-compilers-2021.2.0  iimkl-2021a/iimpi-2021a  intel-2021a\nGCCcore-10.2.0  GCC-10.2.0                gfbf-2020b/gompi-2020b  foss-2020b\nGCCcore-10.2.0  iccifort-2020.4.304       iimkl-2020b/iimpi-2020b  intel-2020b\n

          Example

          We could load the following modules together:

          ml XGBoost/1.7.2-foss-2022a\nml scikit-learn/1.1.2-foss-2022a\nml cURL/7.83.0-GCCcore-11.3.0\nml JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0\n

          Another common error is:

          $ module load cluster/donphan\nLmod has detected the following error: A different version of the 'cluster' module is already loaded (see output of 'ml').\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be\n

          This is because there can only be one cluster module active at a time. The correct command is module swap cluster/donphan. See also Specifying the cluster on which to run.

          "}, {"location": "troubleshooting/#illegal-instruction-error", "title": "Illegal instruction error", "text": ""}, {"location": "troubleshooting/#running-software-that-is-incompatible-with-host", "title": "Running software that is incompatible with host", "text": "

          When running software provided through modules (see Modules), you may run into errors like:

          $ module swap cluster/donphan\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n\n$ module load Python/3.10.8-GCCcore-12.2.0\n$ python\nPlease verify that both the operating system and the processor support\nIntel(R) MOVBE, F16C, FMA, BMI, LZCNT and AVX2 instructions.\n

          or errors like:

          $ python\nIllegal instruction\n

          When we swap to a different cluster, the available modules change so they work for that cluster. That means that if the cluster and the login nodes have a different CPU architecture, software loaded using modules might not work.

          If you want to test software on the login nodes, make sure the cluster/doduo module is loaded (with module swap cluster/doduo, see Specifying the cluster on which to run), since the login nodes and the doduo cluster have the same CPU architecture.

          If modules are already loaded, and then we swap to a different cluster, all our modules will get reloaded. This means that all current modules will be unloaded and then loaded again, so they'll work on the newly loaded cluster. Here's an example of what that would look like:

          $ module load Python/3.10.8-GCCcore-12.2.0\n$ module swap cluster/donphan\n\nDue to MODULEPATH changes, the following have been reloaded:\n  1) GCCcore/12.2.0                   8) binutils/2.39-GCCcore-12.2.0\n  2) GMP/6.2.1-GCCcore-12.2.0         9) bzip2/1.0.8-GCCcore-12.2.0\n  3) OpenSSL/1.1                     10) libffi/3.4.4-GCCcore-12.2.0\n  4) Python/3.10.8-GCCcore-12.2.0    11) libreadline/8.2-GCCcore-12.2.0\n  5) SQLite/3.39.4-GCCcore-12.2.0    12) ncurses/6.3-GCCcore-12.2.0\n  6) Tcl/8.6.12-GCCcore-12.2.0       13) zlib/1.2.12-GCCcore-12.2.0\n  7) XZ/5.2.7-GCCcore-12.2.0\n\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n

          This might result in the same problems as mentioned above. When swapping to a different cluster, you can run module purge to unload all modules to avoid problems (see Purging all modules).
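
          A minimal sketch of that workflow (the module names are just examples):

          $ module purge                               # unload all modules first\n$ module swap cluster/donphan                # swap to the target cluster\n$ module load Python/3.10.8-GCCcore-12.2.0   # load what your job needs\n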

          "}, {"location": "troubleshooting/#multi-job-submissions-on-a-non-default-cluster", "title": "Multi-job submissions on a non-default cluster", "text": "

          When using a tool that is made available via modules to submit jobs, for example Worker, you may run into the following error when targeting a non-default cluster:

          $  wsub\n/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction     (core dumped) ${PERL} ${DIR}/../lib/wsub.pl \"$@\"\n

          When executing the module swap cluster command, you are not only changing your session environment to submit to that specific cluster, but also to use the part of the central software stack that is specific to that cluster. In the case of the Worker example above, the latter implies that you are running the wsub command on top of a Perl installation that is optimized specifically for the CPUs of the workernodes of that cluster, which may not be compatible with the CPUs of the login nodes, triggering the Illegal instruction error.

          The cluster modules are split up into several env/* \"submodules\" to help deal with this problem. For example, by using module swap env/slurm/donphan instead of module swap cluster/donphan (starting from the default environment, the doduo cluster), you can update your environment to submit jobs to donphan, while still using the software installations that are specific to the doduo cluster (which are compatible with the login nodes since the doduo cluster workernodes have the same CPUs). The same goes for the other clusters as well of course.

          Tip

          To submit a Worker job to a specific cluster, like the donphan interactive cluster for instance, use:

          $ module swap env/slurm/donphan \n
          instead of
          $ module swap cluster/donphan \n

          We recommend using a module swap cluster command after submitting the jobs.

          This is to \"reset\" your environment to a sane state, since having only a different env/slurm module loaded can also lead to surprises if you're not paying close attention.
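
          Putting the tip together, a possible workflow could look like this (the wsub arguments are purely illustrative):

          $ module swap env/slurm/donphan       # submit to donphan, keep the doduo software stack\n$ wsub -batch run.pbs -data data.csv  # submit the Worker job\n$ module swap cluster/doduo           # reset your environment afterwards\n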

          "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

          All the HPC clusters run some variant of the \"Red Hat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

          vsc40000@ln01[203] $\n

          When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

          Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen nano Text editor

          Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

          $ echo This is a test\nThis is a test\n

          Note the \"$\" sign in front of the first line: it should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

          More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the command \"ls\", by trying any of the following:

          $ ls --help \n$ man ls\n$ info ls\n

          (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

          "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

          In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

          Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

          Another very common scripting language is shell scripting, which is what we will use here.

          In the following examples, each line typically contains a single command to be executed, although it is possible to put multiple commands on one line. A very simple example of a script may be:

          echo \"Hello! This is my hostname:\" \nhostname\n

          You can type both lines at your shell prompt, and the result will be the following:

          $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\ngligar07.gastly.os\n

          Suppose we want to call this script \"foo\". You open a new file for editing, name it \"foo\", and edit it with your favourite editor:

          nano foo\n

          or use the following commands:

          echo \"echo Hello! This is my hostname:\" > foo\necho hostname >> foo\n

          The easiest way to run a script is to start the interpreter and pass the script to it as a parameter. In the case of our script, the interpreter may be either \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

          $ bash foo\nHello! This is my hostname:\ngligar07.gastly.os\n

          Congratulations, you just created and started your first shell script!

          A more advanced way of executing your shell scripts is by making them executable on their own, so that you don't have to invoke the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to tell it in some way. The easiest way is by using the so-called \"shebang\" notation, explicitly created for this purpose: you put the following line at the top of your shell script: \"#!/path/to/your/interpreter\".

          You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

          $ which bash\n/bin/bash\n

          We edit our script and change it with this information:

          #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

          Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

          Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

          chmod +x foo\n

          Now you can start your script by simply executing it:

          $ ./foo\nHello! This is my hostname:\ngligar07.gastly.os\n

          The same technique can be used for all other scripting languages, like Perl and Python.

          Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

          "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg A process running in the background will be processed in the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Modify properties for users"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

          The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

          Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

          To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

          Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

          Through this web portal, you can:

          • browse through the files & directories in your VSC account, and inspect, manage or change them;

          • consult active jobs (across all HPC-UGent Tier-2 clusters);

          • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

          • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

          • open a terminal session directly in your web browser;

          More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

          "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

          All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

          "}, {"location": "web_portal/#login", "title": "Login", "text": "

          When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

          "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

          The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

          Please click \"Authorize\" here.

          This request will only be made once; you should not see it again afterwards.

          "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

          Once logged in, you should see this start page:

          This page includes a menu bar at the top. The buttons on the left provide access to the different features supported by the web portal, while the top right holds a Help menu, your VSC account name, and a Log Out button. The page also shows the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

          If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

          "}, {"location": "web_portal/#features", "title": "Features", "text": "

          We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

          "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

          Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

          The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

          Here you can:

          • Click a directory in the tree view on the left to open it;

          • Use the buttons on the top to:

            • go to a specific subdirectory by typing in the path (via Go To...);

            • open the current directory in a terminal (shell) session (via Open in Terminal);

            • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

            • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

            • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

            • show the owner and permissions in the file listing (via Show Owner/Mode);

          • Double-click a directory in the file listing to open that directory;

          • Select one or more files and/or directories in the file listing, and:

            • use the View button to see the contents (use the button at the top right to close the resulting popup window);

            • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

            • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

            • use the Download button to download the selected files and directories from your VSC account to your local workstation;

            • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

            • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

            • use the Delete button to (permanently!) remove the selected files and directories;

          For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

          "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

          Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

          For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

          "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

          To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

          A new browser tab will be opened that shows all your current queued and/or running jobs:

          You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

          Jobs that are still queued or running can be deleted using the red button on the right.

          Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

          For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

          "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

          To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

          This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

          You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

          Don't forget to actually submit your job to the system via the green Submit button!

          "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

          In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

          "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

          Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

          Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

          To exit the shell session, type exit followed by Enter and then close the browser tab.

          Note that you can not access a shell session after you closed a browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

          "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

          To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

          You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

          Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

          To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

          "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

          See dedicated page on Jupyter notebooks

          "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

          In case of problems with the web portal, it could help to restart the web server running in your VSC account.

          You can do this via the Restart Web Server button under the Help menu item:

          Of course, this only affects your own web portal session (not those of others).

          "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
          • ABAQUS for CAE course
          "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

          X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

          1. A graphical remote desktop that works well over low bandwidth connections.

          2. Copy/paste support from client to server and vice-versa.

          3. File sharing from client to server.

          4. Support for sound.

          5. Printer sharing from client to server.

          6. The ability to access single applications by specifying the name of the desired executable, such as a terminal or an internet browser.

          "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

          X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

          X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. That section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

          "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

          After installing the X2Go client, just start it. When you launch the client for the first time, it will start the new session dialogue automatically.

          There are two ways to connect to the login node:

          • Option A: A direct connection to \"login.hpc.ugent.be\". This is the simpler option; the system will decide which login node to use based on a load-balancing algorithm.

          • Option B: You can use the node \"login.hpc.ugent.be\" as SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

          "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

          This is the easier way to set up X2Go: a direct connection to the login node.

          1. Include a session name. This will help you to identify the session if you have more than one; you can choose any name (in our example \"HPC login node\").

          2. Set the login hostname (In our case: \"login.hpc.ugent.be\")

          3. Set the Login name. In the example this is \"vsc40000\", but you must change it to your own VSC account.

          4. Set the SSH port (22 by default).

          5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

            1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

          6. Check the \"Try autologin\" option.

          7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

            1. [optional]: Set a single application like Terminal instead of XFCE desktop.

          8. [optional]: Change the session icon.

          9. Click the OK button after these changes.

          "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

          This option is useful if you want to resume a previous session or if you want to set explicitly the login node to use. In this case you should include a few more options. Use the same Option A setup but with these changes:

          1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

          2. Set the login hostname. This is the login node that you ultimately want to connect to (in our case: \"gligar07.gastly.os\").

          3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

            1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

            2. Set Host to \"login.hpc.ugent.be\" within \"Proxy Server\" section as well.

          3. Skip this step if you are using an SSH agent (see Install X2Go). Otherwise, add your private SSH key in the \"RSA/DSA key\" field within \"Proxy Server\", as you did for the server configuration (the \"RSA/DSA key\" field must be set in both sections).

            4. Click the OK button after these changes.

          "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

          Just click on any existing session to start or resume it. It will take a few seconds to open the session the first time. You can terminate a session by logging out from the current open session or by clicking on the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

          X2Go will keep the session open for you (but only if the login node is not rebooted).

          "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

          If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

          hostname\n

          This will give you the full hostname (like \"gligar07.gastly.os\", although the hostname in your situation may be slightly different). Use the same hostname to resume the session the next time. Just add this full hostname in the \"login hostname\" field of your X2Go session (see Option B: use the login node as SSH proxy).

          "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

          If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL) and start the X2Go session. A window will pop up, and you should see that a session is running. Select that session and terminate it. Then close the session, choose the XFCE session type again (or whatever you were using), and you should have your X2Go session back. Since we have multiple login nodes, you might have to repeat these steps multiple times.

          "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

          The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

          To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

          Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

          After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

          Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

          "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

          TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

          Loads MNIST datasets and trains a neural network to recognize hand-written digits.

          Runtime: ~1 min. on 8 cores (Intel Skylake)

          See https://www.tensorflow.org/tutorials/quickstart/beginner

          "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

          Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

          These skills are important for working on the HPC-UGent infrastructure, which operates on Red Hat Enterprise Linux. For more information see introduction to HPC.

          The guide aims to make you familiar with the Linux command line environment quickly.

          The tutorial goes through the following steps:

          1. Getting Started
          2. Navigating
          3. Manipulating files and directories
          4. Uploading files
          5. Beyond the basics

          Do not forget Common pitfalls, as this can save you some troubleshooting.

          "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
          • More on the HPC infrastructure.
          • Cron Scripts: run scripts automatically at periodically fixed times, dates, or intervals.
          "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

          Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

          "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

          To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

          First, it's important to make a distinction between two different output channels:

          1. stdout: standard output channel, for regular output

          2. stderr: standard error channel, for errors and warnings

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

          > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

          $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

          >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

          $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

          < feeds the contents of a file to a command's standard input (as if it were piped or typed in). So you would use this to simulate typing into a terminal. < somefile.txt is largely equivalent to cat somefile.txt |.

          One common use might be to take the results of a long-running command and store the results in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in the file list when you are done:

          $ find . -name '*.txt' > files\n$ xargs grep banana < files\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

          To redirect the stderr output (warnings, messages), you can use 2>, just like >

          $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

          To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

          $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

          Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

          $ ls | wc -l\n    42\n

          A common pattern is to pipe the output of a command to less so you can examine or search the output:

          $ find . | less\n

          Or to look through your command history:

          $ history | less\n

          You can put multiple pipes in the same line. For example, which cp commands have we run?

          $ history | grep cp | less\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

          The shell will expand certain things, including:

          1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

          2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

          3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

          4. square brackets can be used to list a number of options for a particular character; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.
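
          A quick way to see the last expansion in action (the file names are made up for this demonstration):

          $ touch job.o5 job.e5 job.o52\n$ ls *.[oe][0-9]\njob.e5  job.o5\n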

          "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

          ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

          $ ps -fu $USER\n

          To see all the processes:

          $ ps -elf\n

          To see all the processes in a forest view, use:

          $ ps auxf\n

          The last two will spit out a lot of data, so get in the habit of piping it to less.

          pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

          pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.
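
          For example (the process name and PIDs are illustrative):

          $ pgrep -u $USER python\n12345\n12346\n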

          "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

          ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill will send a message (SIGTERM by default) to the process to ask it to stop.

          $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

          Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignored your signal, you can send it a different message (SIGKILL) which the OS will use to unceremoniously terminate the process:

          $ kill -9 1234\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

          top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

          To see only your processes, type u and your username after starting top, (you can also do this with top -u $USER ). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

          There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

          To exit top, use q (for 'quit').

          For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

          "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

          ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

          $ ulimit -a\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

          To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

          $ wc example.txt\n      90     468     3189   example.txt\n

          The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

          To only count the number of lines, use wc -l:

          $ wc -l example.txt\n      90    example.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

          grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

          $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

          grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

          "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

          cut is used to pull fields out of files or piped streams. It's a useful glue when you mix it with grep because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV (comma-separated values, so -d ',': delimited by ,) file, you can use the following:

          $ cut -f 1 -d ',' mydata.csv\n
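
          For example, combining grep and cut as described above (the file name and column number are illustrative):

          $ grep banana fruit_sales.csv | cut -f 2 -d ','\n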

          "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

          sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

          $ sed 's/oldtext/newtext/g' myfile.txt\n

          By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
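
          For example, a minimal in-place edit that keeps a backup of the original (the .bak suffix is just a convention; this relies on GNU sed, as found on the clusters):

          $ sed -i.bak 's/oldtext/newtext/g' myfile.txt    # edits myfile.txt, original saved as myfile.txt.bak\n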

          "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

          awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

          First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

          $ awk '{print $4}' mydata.dat\n

          You can use -F ':' to change the delimiter (F for field separator).

          The next example is used to sum numbers from a field:

          $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

          The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do the same. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

          However, there are some rules you need to abide by.

          Here is a very detailed guide should you need more information.

          "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

          The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can copy-paste this line as you need not worry about it further. It is however very important this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

          #!/bin/sh\n
          #!/bin/bash\n
          #!/usr/bin/env bash\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

          Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

          if [ -d directory ] && [ -f file ]\nthen\n    mv file directory\nfi\n\nOr you only want to do something if a file exists:\n\nif [ -f filename ]\nthen\n    echo \"it exists\"\nfi\n
          Or only if a certain variable is bigger than one:
          if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
          Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or preceded by a semicolon). It is best to just copy this example and modify it.

          In the initial example, we used -d to test if a directory existed. There are several more checks.

          Another useful example is to test if a variable contains a value (so it's not empty):

          if [ -z $PBS_ARRAYID ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

          the -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

          "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

          Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

          Let's look at a simple example:

          for i in 1 2 3\ndo\necho $i\ndone\n
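
          A more typical use is to loop over files; a minimal sketch (the *.txt pattern is just an example):

          for FILE in *.txt\ndo\n    echo \"Processing $FILE\"\ndone\n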

          "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

          Subcommands are used all the time in shell scripts. What they do is store the output of a command in a variable, which can later be used in a conditional or a loop, for example.

          CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

          In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
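
          For example, you could combine a subcommand with other commands (a minimal sketch):

          NUMFILES=$(ls | wc -l)\necho \"There are $NUMFILES files in $(pwd)\"\n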

          "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

          Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

          Firstly a useful thing to know for debugging and testing is that you can run any command like this:

          command > output.log 2>&1   # one single output file, both output and errors\n

          If you add > output.log 2>&1 at the end of any command, it will combine stdout and stderr, writing both into a single file named output.log.

          If you want regular and error output separated you can use:

          command > output.log 2> output.err  # errors in a separate file\n

          this will write regular output to output.log and error output to output.err.

          You can then look for the errors with less or search for specific text with grep.

          In scripts, you can use:

          set -e\n

          This tells the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is convenient, since a failing command will most likely cause the rest of the script to fail as well.
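
          A minimal sketch of the effect (the directory name is made up):

          #!/bin/bash\nset -e\ncd /no/such/directory    # this command fails ...\necho \"never printed\"     # ... so the script stops before reaching this line\n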

          "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

          Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, a special variable $? is set to the exit code of that command. A value other than zero signifies that something went wrong. So an example use case:

          command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

          If you have certain commands executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

          Examples include:

          • modifying your $PS1 (to tweak your shell prompt)

          • printing information about the current/jobs environment (echoing environment variables, etc.)

          • selecting a specific cluster to run on with module swap cluster/...
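
          A minimal sketch of what such additions to your $HOME/.bashrc could look like (the contents are purely illustrative):

          # example additions to $HOME/.bashrc\nexport EDITOR=nano                             # set your preferred editor\necho \"Welcome $USER, you are on $(hostname)\"   # print some information at login\n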

          Some recommendations:

          • Avoid using module load statements in your $HOME/.bashrc file

          • Don't directly edit your .bashrc file: if there's an error in your .bashrc file, you might not be able to log in again. To prevent that, use another file to test your changes, then copy them over once you have tested them.

          "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

          When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

          "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
          "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

          The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

          #PBS -l nodes=1:ppn=1 # single-core\n

          For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

          #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

We intend to submit it to the long queue:

          #PBS -q long\n

          We request a total running time of 48 hours (2 days).

          #PBS -l walltime=48:00:00\n

          We specify a desired name of our job:

          #PBS -N FreeSurfer_per_subject-time-longitudinal\n
          This specifies mail options:
          #PBS -m abe\n

          1. a means mail is sent when the job is aborted.

          2. b means mail is sent when the job begins.

          3. e means mail is sent when the job ends.

          Joins error output with regular output:

          #PBS -j oe\n

All of these options can also be specified on the command line and will override any pragmas present in the script.
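
For example, the following hypothetical qsub invocation (the script name jobscript.sh and the job name short_test are just placeholders) would override the walltime and job name set by the pragmas in the script:

$ qsub -l walltime=1:00:00 -N short_test jobscript.sh\n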

          "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
          1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

          2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

          3. How many files and directories are in /tmp?

          4. What's the name of the 5th file/directory in alphabetical order in /tmp?

          5. List all files that start with t in /tmp.

          6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

          7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

          "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

          This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

          "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

          If you receive an error message which contains something like the following:

          No such file or directory\n

          It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

Try to figure out the correct location using ls, cd and the different $VSC_* variables.

          "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

          Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

          $ cat some file\nNo such file or directory 'some'\n

Spaces are permitted; however, they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

          $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

          This is especially error-prone if you are piping results of find:

          $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

          This can be worked around using the -print0 flag:

          $ find . -type f -print0 | xargs -0 cat\n...\n

          But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

          "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

          If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

          $ rm -r ~/$PROJETC/*\n
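
One way to guard against this is Bash's ${VARIABLE:?} parameter expansion, which makes the command fail with an error message instead of silently substituting an empty value (a sketch; PROJECT is just an example variable name):

$ rm -r ~/${PROJECT:?PROJECT is not set}/*\n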

          "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

          A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

          $ #rm -r ~/$POROJETC/*\n
          Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

          "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
          $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

          Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

          $ chmod +x script_name.sh\n

          "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

          If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

          If you need help about a certain command, you should consult its so-called \"man page\":

          $ man command\n

This will open the manual of this command. This manual contains a detailed explanation of all the options the command has. Exiting the manual is done by pressing 'q'.

          Don't be afraid to contact hpc@ugent.be. They are here to help and will do so for even the smallest of problems!

          "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
          1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

          2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

          3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

          4. basic shell usage

          5. Bash for beginners

          6. MOOC

Please don't hesitate to contact us in case of questions or problems.

          "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

          To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

          You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

          Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

          "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

          To get help:

          1. use the documentation available on the system, through the help, info and man commands (use q to exit).
            help cd \ninfo ls \nman cp \n
          2. use Google

          3. contact hpc@ugent.be in case of problems or questions (even for basic things!)

          "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something in 15 minutes, don't hesitate to mail hpc@ugent.be.

          "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

          The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

          You use the shell by executing commands, and hitting <enter>. For example:

          $ echo hello \nhello \n

          You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

          To go through previous commands, use <up> and <down>, rather than retyping them.

          "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

          A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

          $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

          "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

          If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

          "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

          At the prompt we also have access to shell variables, which have both a name and a value.

          They can be thought of as placeholders for things we need to remember.

          For example, to print the path to your home directory, we can use the shell variable named HOME:

          $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

          This prints the value of this variable.

          "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

          There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

          For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

          $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

          You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

          $ env | sort | grep VSC\n

But we can also define our own. This is done with the export command (note: variable names are always written in all caps as a convention):

          $ export MYVARIABLE=\"value\"\n

          It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

          If we then do

          $ echo $MYVARIABLE\n

          this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

          "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

          You can change what your prompt looks like by redefining the special-purpose variable $PS1.

          For example: to include the current location in your prompt:

          $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

Note that ~ is a short representation of your home directory.

To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

          $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

          "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

          One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

          This may lead to surprising results, for example:

$ export WORKDIR=/tmp/test \n$ cd $WORKIDR    # note the typo in the variable name, which makes it expand to an empty string\n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

          To understand what's going on here, see the section on cd below.

          The moral here is: be very careful to not use empty variables unintentionally.

          Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

          The -e option will result in the script getting stopped if any command fails.

          The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)

          More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

          "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

          If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

          "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

          Basic information about the system you are logged into can be obtained in a variety of ways.

          We limit ourselves to determining the hostname:

          $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

          And querying some basic information about the Linux kernel:

          $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

          "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
          • Print the full path to your home directory
          • Determine the name of the environment variable to your personal scratch directory
• What's the name of the system you're logged into? Is it the same for everyone?
          • Figure out how to print the value of a variable without including a newline
          • How do you get help on using the man command?

The next chapter teaches you how to navigate.

          "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

          Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the HPC for a list of available locations.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#vo-storage", "title": "VO storage", "text": "

          If you are a member of a (non-default) virtual organisation (VO), see section Virtual Organisations, you have access to additional directories (with more quota) on the data and scratch filesystems, which you can share with other members in the VO.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

          Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

To figure out where your quota is being spent, the du (disk usage) command can come in useful:

          $ du -sh test\n59M test\n

          Do not (frequently) run du on directories where large amounts of data are stored, since that will:

          1. take a long time

          2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.
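
If you do need to find out which subdirectory is taking up most of your quota, a one-off, depth-limited du keeps the output manageable (a sketch using GNU du's --max-depth option; replace $VSC_DATA with whichever directory you want to inspect):

$ du -h --max-depth=1 $VSC_DATA | sort -h\n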

          "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

          Software is provided through so-called environment modules.

          The most commonly used commands are:

          1. module avail: show all available modules

          2. module avail <software name>: show available modules for a specific software name

          3. module list: show list of loaded modules

          4. module load <module name>: load a particular module

          More information is available in section Modules.
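
A typical sequence could look like the following sketch (pick a version that module avail actually lists; the Python module shown here is the one used in the exercises below):

$ module avail Python\n$ module load Python/3.6.4-intel-2018a\n$ module list\n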

          "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

          Detailed information is available in section submitting your job.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

          Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

          Hint: python -c \"print(sum(range(1, 101)))\"

          • How many modules are available for Python version 3.6.4?
          • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
          • Which cluster modules are available?

          • What's the full path to your personal home/data/scratch directories?

          • Determine how large your personal directories are.
          • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

          Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands short to type.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

          To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

          $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

          To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
          $ cp source target\n

          This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

          $ cp -r sourceDirectory target\n

          A last more complicated example:

          $ cp -a sourceDirectory target\n

          Here we used the same cp command, but instead we gave it the -a option which tells cp to copy all the files and keep timestamps and permissions.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
          $ mkdir directory\n

          which will create a directory with the given name inside the current directory.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
          $ mv source target\n

mv will move the source path to the destination path. This works for both directories and files.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

          Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

          $ rm filename\n
rm will remove a file or directory. (rm -rf directory will remove a given directory and every file inside it). WARNING: removed files will be lost forever, there are no backups, so beware when using this command!

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

          You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

          $ rmdir directory\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

          Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

          1. User - a particular user (account)

          2. Group - a particular group of users (may be user-specific group with only one member)

          3. Other - other users in the system

          The permission types are:

          1. Read - For files, this gives permission to read the contents of a file

          2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

3. Execute - For files, this gives permission to execute the file as though it were a script. For directories, it allows users to open the directory and look at the contents.

          Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

          $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

Here, we see that articleTable.csv is a file (the line begins with -) that has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx). So that user can look into the directory and add or remove files. Users in the group mygroup can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users have no permissions to look in the directory at all (---).

          Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

          $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

          You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

          You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

          However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

          $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

This will give the user otheruser permission to write to Project_GoldenDragon.

          Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.
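
To revoke the extra permissions again later, the ACL entry can be removed with the -x option (a sketch, reusing otheruser from the example above):

$ setfacl -x u:otheruser Project_GoldenDragon\n$ getfacl Project_GoldenDragon\n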

          Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

          See https://linux.die.net/man/1/setfacl for more information.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

          Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

          $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

Note: if you gzip a file, the original file will be removed. If you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

          $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

          Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

          $ unzip myfile.zip\n

          If we would like to make our own zip archive, we use zip:

          $ zip myfiles.zip myfile1 myfile2 myfile3\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

          Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

          You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

          $ tar -xf tarfile.tar\n

          Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

          $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n
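
If you first want to see what is inside a tarball without unpacking it, you can list its contents with the -t option (a sketch, reusing the file name from the example above):

$ tar -tzf tarfile.tar.gz\n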

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

          Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

          # cp, ln: &lt;source(s)&gt; &lt;target&gt;\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: &lt;target&gt; &lt;source(s)&gt;\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

          If you use tar with the source files first then the first file will be overwritten. You can control the order of arguments of tar if it helps you remember:

          $ tar -c source1 source2 source3 -f tarfile.tar\n
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
          1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

          2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

          3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

          4. Remove the another/test directory with a single command.

          5. Rename test to test2. Move test2/hostname.txt to your home directory.

          6. Change the permission of test2 so only you can access it.

          7. Create an empty job script named job.sh, and make it executable.

          8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

          The next chapter is on uploading files, especially important when using HPC-infrastructure.

          "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories, a very important skill.

          "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

To print the current directory, use pwd or $PWD:

          $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

          "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

          A very basic and commonly used command is ls, which can be used to list files and directories.

          In its basic usage, it just prints the names of files and directories in the current directory. For example:

          $ ls\nafile.txt some_directory \n

          When provided an argument, it can be used to list the contents of a directory:

          $ ls some_directory \none.txt two.txt\n

          A couple of commonly used options include:

          • detailed listing using ls -l:

            $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • To print the size information in human-readable form, use the -h flag:

            $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • also listing hidden files using the -a flag:

            $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • ordering files by the most recent change using -rt:

            $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

          If you try to use ls on a file that doesn't exist, you will get a clear error message:

          $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
          "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

          To change to a different directory, you can use the cd command:

          $ cd some_directory\n

          To change back to the previous directory you were in, there's a shortcut: cd -

          Using cd without an argument results in returning back to your home directory:

          $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

          "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

          The file command can be used to inspect what type of file you're dealing with:

          $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
          "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

An absolute filepath starts with / (or a variable whose value starts with /); this / is also called the root of the filesystem.

          Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

          A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

          Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

          There are two special relative paths worth mentioning:

          • . is a shorthand for the current directory
          • .. is a shorthand for the parent of the current directory

          You can also use .. when constructing relative paths, for example:

          $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
          "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

          Each file and directory has particular permissions set on it, which can be queried using ls -l.

          For example:

          $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

The -rw-rw-r-- specifies both the type of file (- for files, d for directories (see the first character)), and the permissions for user/group/others:

          1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read/write permissions (not execute)
          3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
          4. the 3rd part r-- indicates that other users only have read permissions

          The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

          1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
          2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

          See also the chmod command later in this manual.

          "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

find will crawl a series of directories and list files matching the given criteria.

          For example, to look for the file named one.txt:

          $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * by adding double quotes, to avoid Bash expanding it into afile.txt:

          $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

          A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
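
As a small sketch of this, the following prints a detailed listing of every .txt file that is found ({} is replaced by the found paths):

$ find . -name \"*.txt\" -exec ls -lh {} +\n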

          "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
          • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
          • When was your home directory created or last changed?
          • Determine the name of the last changed file in /tmp.
          • See how home directories are organised. Can you access the home directory of other users?

          The next chapter will teach you how to interact with files and directories.

          "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

          To transfer files from and to the HPC, see the section about transferring files of the HPC manual

          "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

          After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

          For example, you may see an error when submitting a job script that was edited on Windows:

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

          To fix this problem, you should run the dos2unix command on the file:

          $ dos2unix filename\n
          "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

As we end up in the home directory when connecting, it would be convenient if we could access our data and VO storage. To facilitate this, we will create symlinks to them in our home directory. The following creates symbolic links (they're like \"shortcuts\" on your desktop) pointing to the respective storage locations:

          $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
          "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ is the Control key, so ^O means Ctrl-O. The main commands are:

          1. Open (\"Read\"): ^R

          2. Save (\"Write Out\"): ^O

          3. Exit: ^X

          More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

          "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

          rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

          You will need to run rsync from a computer where it is installed. Installing rsync is the easiest on Linux: it comes pre-installed with a lot of distributions.

          For example, to copy a folder with lots of CSV files:

          $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section above).

          The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

          To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.
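
For example, the upload command shown earlier could be extended with -P (the same sketch as above, with the extra flag added):

$ rsync -rzvP testfolder vsc40000@login.hpc.ugent.be:data/\n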

          To copy files to your local computer, you can also use rsync:

          $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
This will copy the folder bioset and its contents from $VSC_DATA to a local folder named local_folder.

          See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

          "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
          1. Download the file /etc/hostname to your local computer.

          2. Upload a file to a subdirectory of your personal $VSC_DATA space.

          3. Create a file named hello.txt and edit it using nano.

Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

          "}, {"location": "2023/donphan-gallade/", "title": "New Tier-2 clusters: donphan and gallade", "text": "

          In April 2023, two new clusters were added to the HPC-UGent Tier-2 infrastructure: donphan and gallade.

          This page provides some important information regarding these clusters, and how they differ from the clusters they are replacing (slaking and kirlia, respectively).

          If you have any questions on using donphan or gallade, you can contact the HPC-UGent team.

          For software installation requests, please use the request form.

          "}, {"location": "2023/donphan-gallade/#donphan-debuginteractive-cluster", "title": "donphan: debug/interactive cluster", "text": "

          donphan is the new debug/interactive cluster.

          It replaces slaking, which will be retired on Monday 22 May 2023.

          It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the HPC-UGent web portal, etc.

          This cluster consists of 12 workernodes, each with:

          • 2x 18-core Intel Xeon Gold 6240 (Cascade Lake @ 2.6 GHz) processor;
          • one shared NVIDIA Ampere A2 GPU (16GB GPU memory)
          • ~738 GiB of RAM memory;
          • 1.6TB NVME local disk;
          • HDR-100 InfiniBand interconnect;
          • RHEL8 as operating system;

          To start using this cluster from a terminal session, first run:

          module swap cluster/donphan\n

          You can also start (interactive) sessions on donphan using the HPC-UGent web portal.

          "}, {"location": "2023/donphan-gallade/#differences-compared-to-slaking", "title": "Differences compared to slaking", "text": ""}, {"location": "2023/donphan-gallade/#cpus", "title": "CPUs", "text": "

          The most important difference between donphan and slaking workernodes is in the CPUs: while slaking workernodes featured Intel Haswell CPUs, which support SSE*, AVX, and AVX2 vector instructions, donphan features Intel Cascade Lake CPUs, which also support AVX-512 instructions, on top of SSE*, AVX, and AVX2.

          Although software that was built on a slaking workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) should still run on a donphan workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions.

          "}, {"location": "2023/donphan-gallade/#cluster-size", "title": "Cluster size", "text": "

          The donphan cluster is significantly bigger than slaking, both in terms of number of workernodes and number of cores per workernode, and hence the potential performance impact of oversubscribed cores (see below) is less likely to occur in practice.

          "}, {"location": "2023/donphan-gallade/#user-limits-and-oversubscription-on-donphan", "title": "User limits and oversubscription on donphan", "text": "

          By imposing strict user limits and using oversubscription on this cluster, we ensure that anyone can get a job running without having to wait in the queue, albeit with limited resources.

The user limits for donphan include:

• max. 5 jobs in queue;
• max. 3 jobs running;
• max. of 8 cores in total for running jobs;
• max. 27GB of memory in total for running jobs;

The job scheduler is configured to allow oversubscription of the available cores, which means that jobs will continue to start even if all cores are already occupied by running jobs. While this prevents waiting time in the queue, it does imply that performance will degrade when all cores are occupied and additional jobs continue to start running.

          "}, {"location": "2023/donphan-gallade/#shared-gpu-on-donphan-workernodes", "title": "Shared GPU on donphan workernodes", "text": "

          Each donphan workernode includes a single NVIDIA A2 GPU that can be used for light compute workloads, and to accelerate certain graphical tasks.

          This GPU is shared across all jobs running on the workernode, and does not need to be requested explicitly (it is always available, similar to the local disk of the workernode).

          Warning

          Due to the shared nature of this GPU, you should assume that any data that is loaded in the GPU memory could potentially be accessed by other users, even after your processes have completed.

          There are no strong security guarantees regarding data protection when using this shared GPU!

          "}, {"location": "2023/donphan-gallade/#gallade-large-memory-cluster", "title": "gallade: large-memory cluster", "text": "

          gallade is the new large-memory cluster.

          It replaces kirlia, which will be retired on Monday 22 May 2023.

          This cluster consists of 12 workernodes, each with:

          • 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) processor;
          • ~940 GiB of RAM memory;
          • 1.5TB NVME local disk;
          • HDR-100 InfiniBand interconnect;
          • RHEL8 as operating system;

          To start using this cluster from a terminal session, first run:

          module swap cluster/gallade\n

          You can also start (interactive) sessions on gallade using the HPC-UGent web portal.

          "}, {"location": "2023/donphan-gallade/#differences-compared-to-kirlia", "title": "Differences compared to kirlia", "text": ""}, {"location": "2023/donphan-gallade/#cpus_1", "title": "CPUs", "text": "

          The most important difference between gallade and kirlia workernodes is in the CPUs: while kirlia workernodes featured Intel Cascade Lake CPUs, which support vector AVX-512 instructions (next to SSE*, AVX, and AVX2), gallade features AMD Milan-X CPUs, which implement the Zen3 microarchitecture and hence do not support AVX-512 instructions (but do support SSE*, AVX, and AVX2).

          As a result, software that was built on a kirlia workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) may not work anymore on a gallade workernode, and will produce Illegal instruction errors.

Therefore, you may need to recompile software in order to use it on gallade. Even if software built on kirlia does still run on gallade, it is strongly recommended to recompile it anyway, since there may be significant performance benefits.

          "}, {"location": "2023/donphan-gallade/#memory-per-core", "title": "Memory per core", "text": "

Although gallade workernodes have significantly more RAM (~940 GiB) than kirlia workernodes had (~738 GiB), the average amount of memory per core is significantly lower on gallade than it was on kirlia, because a gallade workernode has 128 cores (so ~7.3 GiB per core on average), while a kirlia workernode had only 36 cores (so ~20.5 GiB per core on average).

It is important to take this aspect into account when submitting jobs to gallade, especially when requesting all cores via ppn=all. You may need to explicitly request more memory (see also here).

          "}, {"location": "2023/shinx/", "title": "New Tier-2 cluster: shinx", "text": "

          In October 2023, a new pilot cluster was added to the HPC-UGent Tier-2 infrastructure: shinx.

          This page provides some important information regarding this cluster, and how it differs from the clusters it is replacing (swalot and victini).

          If you have any questions on using shinx, you can contact the HPC-UGent team.

          For software installation requests, please use the request form.

          "}, {"location": "2023/shinx/#shinx-generic-cpu-cluster", "title": "shinx: generic CPU cluster", "text": "

          shinx is a new CPU-only cluster.

It replaces swalot, which was retired on Wednesday 01 November 2023, and victini, which was retired on Monday 05 February 2024.

          It is primarily for regular CPU compute use.

          This cluster consists of 48 workernodes, each with:

          • 2x 96-core AMD EPYC 9654 (Genoa @ 2.4 GHz) processor;
          • ~360 GiB of RAM memory;
          • 400GB local disk;
          • NDR-200 InfiniBand interconnect;
          • RHEL9 as operating system;

          To start using this cluster from a terminal session, first run:

          module swap cluster/shinx\n

          You can also start (interactive) sessions on shinx using the HPC-UGent web portal.

          "}, {"location": "2023/shinx/#differences-compared-to-swalot-and-victini", "title": "Differences compared to swalot and victini.", "text": ""}, {"location": "2023/shinx/#cpus", "title": "CPUs", "text": "

          The most important difference between shinx and swalot/victini workernodes is in the CPUs: while swalot and victini workernodes featured Intel CPUs, shinx workernodes have AMD Genoa CPUs.

          Although software that was built on a swalot or victini workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing on swalot).

          "}, {"location": "2023/shinx/#cluster-size", "title": "Cluster size", "text": "

          The shinx cluster is significantly bigger than swalot and victini in number of cores, and number of cores per workernode, but not in number of workernodes. In particular, requesting all cores via ppn=all might be something to reconsider.

The amount of available memory per core is 1.9 GiB, which is lower than on the swalot nodes (6.2 GiB per core) and the victini nodes (2.5 GiB per core).

          "}, {"location": "2023/shinx/#comparison-with-doduo", "title": "Comparison with doduo", "text": "

          As doduo is the current largest CPU cluster of the UGent Tier-2 infrastructure, and it is also based on AMD EPYC CPUs, we would like to point out that, roughly speaking, one shinx node is equal to 2 doduo nodes.

          Although software that was built on a doduo workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing from doduo).

          "}, {"location": "2023/shinx/#other-remarks", "title": "Other remarks", "text": "
          • Possible issues with thread pinning: we have seen, especially on Tier-1 dodrio cluster, that in certain cases thread pinning is invoked where it is not expected. Typical symptom is that all the processes that are started are pinned to a single core. Always report this issue when it occurs. You can try yourself to mitigate this by setting export OMP_PROC_BIND=false, but always report it so we can keep track of this problem. It is not recommended to always set this workaround, only for the specific tools that are affected.
          "}, {"location": "2023/shinx/#shinx-pilot-phase-23102023-15072024", "title": "Shinx pilot phase (23/10/2023-15/07/2024)", "text": "

          As usual with any pilot phase, you need to be member of the gpilot group, and to start using this cluster run:

          module swap cluster/.shinx\n

Because the delivery time of the infiniband network is very long, we only expect to have all the material by the end of February 2024. However, all the workernodes will already be delivered in the week of 20 October 2023.

          As such, we will have an extended pilot phase in 3 stages:

          "}, {"location": "2023/shinx/#stage-0-23102023-17112023", "title": "Stage 0: 23/10/2023-17/11/2023", "text": "
          • Minimal cluster to test software and nodes

            • Only 2 or 3 nodes available
            • FDR or EDR infiniband network
            • EL8 OS
          • Retirement of swalot cluster (as of 01 November 2023)

          • Racking of stage 1 nodes
          "}, {"location": "2023/shinx/#stage-1-01122023-01032024", "title": "Stage 1: 01/12/2023-01/03/2024", "text": "
          • 2/3 cluster size

            • 32 nodes (with max job size of 16 nodes)
            • EDR Infiniband
            • EL8 OS
• Retirement of victini (as of 05 February 2024)

          • Racking of last 16 nodes
          • Installation of NDR/NDR-200 infiniband network
          "}, {"location": "2023/shinx/#stage-2-19042024-15072024", "title": "Stage 2 (19/04/2024-15/07/2024)", "text": "
          • Full size cluster

            • 48 nodes (no job size limit)
            • NDR-200 Infiniband (single switch Infiniband topology)
            • EL9 OS
          • We expect to plan a full Tier-2 downtime in May 2024 to cleanup, refactor and renew the core networks (ethernet and infiniband) and some core services. It makes no sense to put shinx in production before that period, and the testing of the EL9 operating system will also take some time.

          "}, {"location": "2023/shinx/#stage-3-15072024-", "title": "Stage 3 (15/07/2024 - )", "text": "
          • Cluster in production using EL9 (starting with 9.4). Any user can now submit jobs.
          "}, {"location": "2023/shinx/#using-doduo-software", "title": "Using doduo software", "text": "

For benchmarking and/or compatibility testing, you can try to use the doduo software stack by adding the following line to the job script before the actual software is loaded:

          module swap env/software/doduo\n

          We mainly expect problems with this in stage 2 of the pilot phase (and in later production phase), due to the change in OS.

          "}, {"location": "available_software/", "title": "Available software (via modules)", "text": "

          This table gives an overview of all the available software on the different clusters.

          "}, {"location": "available_software/detail/ABAQUS/", "title": "ABAQUS", "text": ""}, {"location": "available_software/detail/ABAQUS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABAQUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABAQUS, load one of these modules using a module load command like:

          module load ABAQUS/2023\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

| | accelgor | doduo | donphan | gallade | joltik | skitty |
|---|---|---|---|---|---|---|
| ABAQUS/2023 | x | x | x | x | x | x |
| ABAQUS/2022-hotfix-2214 | - | x | x | - | x | x |
| ABAQUS/2022 | - | x | x | - | x | x |
| ABAQUS/2021-hotfix-2132 | - | x | x | - | x | x |
"}, {"location": "available_software/detail/ABINIT/", "title": "ABINIT", "text": ""}, {"location": "available_software/detail/ABINIT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABINIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABINIT, load one of these modules using a module load command like:

          module load ABINIT/9.10.3-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABINIT/9.10.3-intel-2022a - - x - x x ABINIT/9.4.1-intel-2020b - x x x x x ABINIT/9.2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/ABRA2/", "title": "ABRA2", "text": ""}, {"location": "available_software/detail/ABRA2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ABRA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABRA2, load one of these modules using a module load command like:

          module load ABRA2/2.23-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABRA2/2.23-GCC-10.2.0 - x x x x x ABRA2/2.23-GCC-9.3.0 - x x - x x ABRA2/2.22-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/ABRicate/", "title": "ABRicate", "text": ""}, {"location": "available_software/detail/ABRicate/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ABRicate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABRicate, load one of these modules using a module load command like:

          module load ABRicate/0.9.9-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABRicate/0.9.9-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ABySS/", "title": "ABySS", "text": ""}, {"location": "available_software/detail/ABySS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ABySS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABySS, load one of these modules using a module load command like:

          module load ABySS/2.3.7-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABySS/2.3.7-foss-2023a x x x x x x ABySS/2.1.5-foss-2019b - x x - x x"}, {"location": "available_software/detail/ACTC/", "title": "ACTC", "text": ""}, {"location": "available_software/detail/ACTC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ACTC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ACTC, load one of these modules using a module load command like:

          module load ACTC/1.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ACTC/1.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ADMIXTURE/", "title": "ADMIXTURE", "text": ""}, {"location": "available_software/detail/ADMIXTURE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ADMIXTURE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ADMIXTURE, load one of these modules using a module load command like:

          module load ADMIXTURE/1.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ADMIXTURE/1.3.0 - x x - x x"}, {"location": "available_software/detail/AICSImageIO/", "title": "AICSImageIO", "text": ""}, {"location": "available_software/detail/AICSImageIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AICSImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AICSImageIO, load one of these modules using a module load command like:

          module load AICSImageIO/4.14.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AICSImageIO/4.14.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/AMAPVox/", "title": "AMAPVox", "text": ""}, {"location": "available_software/detail/AMAPVox/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AMAPVox installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMAPVox, load one of these modules using a module load command like:

          module load AMAPVox/1.9.4-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMAPVox/1.9.4-Java-11 x x x - x x"}, {"location": "available_software/detail/AMICA/", "title": "AMICA", "text": ""}, {"location": "available_software/detail/AMICA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AMICA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMICA, load one of these modules using a module load command like:

          module load AMICA/2024.1.19-intel-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMICA/2024.1.19-intel-2023a x x x x x x"}, {"location": "available_software/detail/AMOS/", "title": "AMOS", "text": ""}, {"location": "available_software/detail/AMOS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AMOS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMOS, load one of these modules using a module load command like:

          module load AMOS/3.1.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMOS/3.1.0-foss-2023a x x x x x x AMOS/3.1.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/AMPtk/", "title": "AMPtk", "text": ""}, {"location": "available_software/detail/AMPtk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AMPtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMPtk, load one of these modules using a module load command like:

          module load AMPtk/1.5.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMPtk/1.5.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/ANTLR/", "title": "ANTLR", "text": ""}, {"location": "available_software/detail/ANTLR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ANTLR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ANTLR, load one of these modules using a module load command like:

          module load ANTLR/2.7.7-GCCcore-10.3.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ANTLR/2.7.7-GCCcore-10.3.0-Java-11 - x x - x x ANTLR/2.7.7-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ANTs/", "title": "ANTs", "text": ""}, {"location": "available_software/detail/ANTs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ANTs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ANTs, load one of these modules using a module load command like:

          module load ANTs/2.3.2-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ANTs/2.3.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/APR-util/", "title": "APR-util", "text": ""}, {"location": "available_software/detail/APR-util/#available-modules", "title": "Available modules", "text": "

          The overview below shows which APR-util installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using APR-util, load one of these modules using a module load command like:

          module load APR-util/1.6.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty APR-util/1.6.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/APR/", "title": "APR", "text": ""}, {"location": "available_software/detail/APR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which APR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using APR, load one of these modules using a module load command like:

          module load APR/1.7.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty APR/1.7.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ARAGORN/", "title": "ARAGORN", "text": ""}, {"location": "available_software/detail/ARAGORN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ARAGORN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ARAGORN, load one of these modules using a module load command like:

          module load ARAGORN/1.2.41-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ARAGORN/1.2.41-foss-2021b x x x - x x ARAGORN/1.2.38-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/ASCAT/", "title": "ASCAT", "text": ""}, {"location": "available_software/detail/ASCAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ASCAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ASCAT, load one of these modules using a module load command like:

          module load ASCAT/3.1.2-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ASCAT/3.1.2-foss-2022b-R-4.2.2 x x x x x x ASCAT/3.1.2-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ASE/", "title": "ASE", "text": ""}, {"location": "available_software/detail/ASE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ASE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ASE, load one of these modules using a module load command like:

          module load ASE/3.22.1-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ASE/3.22.1-intel-2022a x x x x x x ASE/3.22.1-intel-2021b x x x - x x ASE/3.22.1-gomkl-2021a x x x x x x ASE/3.22.1-foss-2022a x x x x x x ASE/3.22.1-foss-2021b x x x - x x ASE/3.21.1-fosscuda-2020b - - - - x - ASE/3.21.1-foss-2020b - - x x x - ASE/3.20.1-intel-2020a-Python-3.8.2 x x x x x x ASE/3.20.1-fosscuda-2020b - - - - x - ASE/3.20.1-foss-2020b - x x x x x ASE/3.19.0-intel-2019b-Python-3.7.4 - x x - x x ASE/3.19.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ATK/", "title": "ATK", "text": ""}, {"location": "available_software/detail/ATK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ATK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ATK, load one of these modules using a module load command like:

          module load ATK/2.38.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ATK/2.38.0-GCCcore-12.3.0 x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x ATK/2.38.0-GCCcore-11.3.0 x x x x x x ATK/2.36.0-GCCcore-11.2.0 x x x x x x ATK/2.36.0-GCCcore-10.3.0 x x x - x x ATK/2.36.0-GCCcore-10.2.0 x x x x x x ATK/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/AUGUSTUS/", "title": "AUGUSTUS", "text": ""}, {"location": "available_software/detail/AUGUSTUS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AUGUSTUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AUGUSTUS, load one of these modules using a module load command like:

          module load AUGUSTUS/3.4.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AUGUSTUS/3.4.0-foss-2021b x x x x x x AUGUSTUS/3.4.0-foss-2020b x x x x x x AUGUSTUS/3.3.3-intel-2019b - x x - x x AUGUSTUS/3.3.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/Abseil/", "title": "Abseil", "text": ""}, {"location": "available_software/detail/Abseil/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Abseil installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Abseil, load one of these modules using a module load command like:

          module load Abseil/20230125.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Abseil/20230125.3-GCCcore-12.3.0 x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/AdapterRemoval/", "title": "AdapterRemoval", "text": ""}, {"location": "available_software/detail/AdapterRemoval/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AdapterRemoval installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AdapterRemoval, load one of these modules using a module load command like:

          module load AdapterRemoval/2.3.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AdapterRemoval/2.3.3-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/Albumentations/", "title": "Albumentations", "text": ""}, {"location": "available_software/detail/Albumentations/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Albumentations installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Albumentations, load one of these modules using a module load command like:

          module load Albumentations/1.1.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Albumentations/1.1.0-foss-2021b x x x - x x Albumentations/1.1.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/AlphaFold/", "title": "AlphaFold", "text": ""}, {"location": "available_software/detail/AlphaFold/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AlphaFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AlphaFold, load one of these modules using a module load command like:

          module load AlphaFold/2.3.4-foss-2022a-ColabFold\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AlphaFold/2.3.4-foss-2022a-ColabFold - - x - x - AlphaFold/2.3.4-foss-2022a-CUDA-11.7.0-ColabFold x - - - x - AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0 x - - - x - AlphaFold/2.3.1-foss-2022a x x x x x x AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1 x - - - x - AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.2.2-foss-2021a - x x - x x AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.1.2-foss-2021a - x x - x x AlphaFold/2.1.1-fosscuda-2020b x - - - x - AlphaFold/2.0.0-fosscuda-2020b x - - - x - AlphaFold/2.0.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/AlphaPulldown/", "title": "AlphaPulldown", "text": ""}, {"location": "available_software/detail/AlphaPulldown/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AlphaPulldown installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AlphaPulldown, load one of these modules using a module load command like:

          module load AlphaPulldown/0.30.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AlphaPulldown/0.30.7-foss-2022a - - x - x - AlphaPulldown/0.30.4-fosscuda-2020b x - - - x - AlphaPulldown/0.30.4-foss-2020b x x x x x x"}, {"location": "available_software/detail/Altair-EDEM/", "title": "Altair-EDEM", "text": ""}, {"location": "available_software/detail/Altair-EDEM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Altair-EDEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Altair-EDEM, load one of these modules using a module load command like:

          module load Altair-EDEM/2021.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Altair-EDEM/2021.2 - x x - x -"}, {"location": "available_software/detail/Amber/", "title": "Amber", "text": ""}, {"location": "available_software/detail/Amber/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Amber installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Amber, load one of these modules using a module load command like:

          module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/AmberMini/", "title": "AmberMini", "text": ""}, {"location": "available_software/detail/AmberMini/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AmberMini installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AmberMini, load one of these modules using a module load command like:

          module load AmberMini/16.16.0-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AmberMini/16.16.0-intel-2020a - x x - x x"}, {"location": "available_software/detail/AmberTools/", "title": "AmberTools", "text": ""}, {"location": "available_software/detail/AmberTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AmberTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AmberTools, load one of these modules using a module load command like:

          module load AmberTools/20-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AmberTools/20-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Anaconda3/", "title": "Anaconda3", "text": ""}, {"location": "available_software/detail/Anaconda3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Anaconda3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Anaconda3, load one of these modules using a module load command like:

          module load Anaconda3/2023.03-1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Anaconda3/2023.03-1 x x x x x x Anaconda3/2020.11 - x x - x - Anaconda3/2020.07 - x - - - - Anaconda3/2020.02 - x x - x -"}, {"location": "available_software/detail/Annocript/", "title": "Annocript", "text": ""}, {"location": "available_software/detail/Annocript/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Annocript installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Annocript, load one of these modules using a module load command like:

          module load Annocript/2.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Annocript/2.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ArchR/", "title": "ArchR", "text": ""}, {"location": "available_software/detail/ArchR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ArchR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ArchR, load one of these modules using a module load command like:

          module load ArchR/1.0.2-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ArchR/1.0.2-foss-2023a-R-4.3.2 x x x x x x ArchR/1.0.1-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Archive-Zip/", "title": "Archive-Zip", "text": ""}, {"location": "available_software/detail/Archive-Zip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Archive-Zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Archive-Zip, load one of these modules using a module load command like:

          module load Archive-Zip/1.68-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Archive-Zip/1.68-GCCcore-11.3.0 x x x - x x Archive-Zip/1.68-GCCcore-11.2.0 x x x - x x Archive-Zip/1.68-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Arlequin/", "title": "Arlequin", "text": ""}, {"location": "available_software/detail/Arlequin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Arlequin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Arlequin, load one of these modules using a module load command like:

          module load Arlequin/3.5.2.2-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Arlequin/3.5.2.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Armadillo/", "title": "Armadillo", "text": ""}, {"location": "available_software/detail/Armadillo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Armadillo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Armadillo, load one of these modules using a module load command like:

          module load Armadillo/12.6.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Armadillo/12.6.2-foss-2023a x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/Arrow/", "title": "Arrow", "text": ""}, {"location": "available_software/detail/Arrow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Arrow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Arrow, load one of these modules using a module load command like:

          module load Arrow/14.0.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Arrow/14.0.1-gfbf-2023a x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x Arrow/8.0.0-foss-2022a x x x x x x Arrow/6.0.0-foss-2021b x x x x x x Arrow/6.0.0-foss-2021a - x x - x x Arrow/0.17.1-intel-2020b - x x - x x Arrow/0.17.1-intel-2020a-Python-3.8.2 - x x - x x Arrow/0.17.1-fosscuda-2020b - - - - x - Arrow/0.17.1-foss-2020a-Python-3.8.2 - x x - x x Arrow/0.16.0-intel-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/ArviZ/", "title": "ArviZ", "text": ""}, {"location": "available_software/detail/ArviZ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ArviZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ArviZ, load one of these modules using a module load command like:

          module load ArviZ/0.16.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ArviZ/0.16.1-foss-2023a x x x x x x ArviZ/0.12.1-foss-2021a x x x x x x ArviZ/0.11.4-intel-2021b x x x - x x ArviZ/0.11.1-intel-2020b - x x - x x ArviZ/0.7.0-intel-2019b-Python-3.7.4 - x x - x x ArviZ/0.7.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Aspera-CLI/", "title": "Aspera-CLI", "text": ""}, {"location": "available_software/detail/Aspera-CLI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Aspera-CLI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Aspera-CLI, load one of these modules using a module load command like:

          module load Aspera-CLI/3.9.6.1467.159c5b1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Aspera-CLI/3.9.6.1467.159c5b1 - x x - x -"}, {"location": "available_software/detail/AutoDock-Vina/", "title": "AutoDock-Vina", "text": ""}, {"location": "available_software/detail/AutoDock-Vina/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AutoDock-Vina installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AutoDock-Vina, load one of these modules using a module load command like:

          module load AutoDock-Vina/1.2.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AutoDock-Vina/1.2.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/AutoGeneS/", "title": "AutoGeneS", "text": ""}, {"location": "available_software/detail/AutoGeneS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AutoGeneS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AutoGeneS, load one of these modules using a module load command like:

          module load AutoGeneS/1.0.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AutoGeneS/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/AutoMap/", "title": "AutoMap", "text": ""}, {"location": "available_software/detail/AutoMap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which AutoMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AutoMap, load one of these modules using a module load command like:

          module load AutoMap/1.0-foss-2019b-20200324\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AutoMap/1.0-foss-2019b-20200324 - x x - x x"}, {"location": "available_software/detail/Autoconf/", "title": "Autoconf", "text": ""}, {"location": "available_software/detail/Autoconf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Autoconf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Autoconf, load one of these modules using a module load command like:

          module load Autoconf/2.71-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Autoconf/2.71-GCCcore-13.2.0 x x x x x x Autoconf/2.71-GCCcore-12.3.0 x x x x x x Autoconf/2.71-GCCcore-12.2.0 x x x x x x Autoconf/2.71-GCCcore-11.3.0 x x x x x x Autoconf/2.71-GCCcore-11.2.0 x x x x x x Autoconf/2.71-GCCcore-10.3.0 x x x x x x Autoconf/2.71 x x x x x x Autoconf/2.69-GCCcore-10.2.0 x x x x x x Autoconf/2.69-GCCcore-9.3.0 x x x x x x Autoconf/2.69-GCCcore-8.3.0 x x x x x x Autoconf/2.69-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Automake/", "title": "Automake", "text": ""}, {"location": "available_software/detail/Automake/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Automake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Automake, load one of these modules using a module load command like:

          module load Automake/1.16.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Automake/1.16.5-GCCcore-13.2.0 x x x x x x Automake/1.16.5-GCCcore-12.3.0 x x x x x x Automake/1.16.5-GCCcore-12.2.0 x x x x x x Automake/1.16.5-GCCcore-11.3.0 x x x x x x Automake/1.16.5 x x x x x x Automake/1.16.4-GCCcore-11.2.0 x x x x x x Automake/1.16.3-GCCcore-10.3.0 x x x x x x Automake/1.16.2-GCCcore-10.2.0 x x x x x x Automake/1.16.1-GCCcore-9.3.0 x x x x x x Automake/1.16.1-GCCcore-8.3.0 x x x x x x Automake/1.16.1-GCCcore-8.2.0 - x - - - - Automake/1.15.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Autotools/", "title": "Autotools", "text": ""}, {"location": "available_software/detail/Autotools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Autotools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Autotools, load one of these modules using a module load command like:

          module load Autotools/20220317-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Autotools/20220317-GCCcore-13.2.0 x x x x x x Autotools/20220317-GCCcore-12.3.0 x x x x x x Autotools/20220317-GCCcore-12.2.0 x x x x x x Autotools/20220317-GCCcore-11.3.0 x x x x x x Autotools/20220317 x x x x x x Autotools/20210726-GCCcore-11.2.0 x x x x x x Autotools/20210128-GCCcore-10.3.0 x x x x x x Autotools/20200321-GCCcore-10.2.0 x x x x x x Autotools/20180311-GCCcore-9.3.0 x x x x x x Autotools/20180311-GCCcore-8.3.0 x x x x x x Autotools/20180311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Avogadro2/", "title": "Avogadro2", "text": ""}, {"location": "available_software/detail/Avogadro2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Avogadro2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Avogadro2, load one of these modules using a module load command like:

          module load Avogadro2/1.97.0-linux-x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Avogadro2/1.97.0-linux-x86_64 x x x - x x"}, {"location": "available_software/detail/BAMSurgeon/", "title": "BAMSurgeon", "text": ""}, {"location": "available_software/detail/BAMSurgeon/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BAMSurgeon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BAMSurgeon, load one of these modules using a module load command like:

          module load BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16 - x x - x -"}, {"location": "available_software/detail/BBMap/", "title": "BBMap", "text": ""}, {"location": "available_software/detail/BBMap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BBMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BBMap, load one of these modules using a module load command like:

          module load BBMap/39.01-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BBMap/39.01-GCC-12.2.0 x x x x x x BBMap/38.98-GCC-11.2.0 x x x - x x BBMap/38.87-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/BCFtools/", "title": "BCFtools", "text": ""}, {"location": "available_software/detail/BCFtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BCFtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BCFtools, load one of these modules using a module load command like:

          module load BCFtools/1.18-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BCFtools/1.18-GCC-12.3.0 x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x BCFtools/1.15.1-GCC-11.3.0 x x x x x x BCFtools/1.14-GCC-11.2.0 x x x x x x BCFtools/1.12-GCC-10.3.0 x x x - x x BCFtools/1.12-GCC-10.2.0 - x x - x - BCFtools/1.11-GCC-10.2.0 x x x x x x BCFtools/1.10.2-iccifort-2019.5.281 - x x - x x BCFtools/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BDBag/", "title": "BDBag", "text": ""}, {"location": "available_software/detail/BDBag/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BDBag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BDBag, load one of these modules using a module load command like:

          module load BDBag/1.6.3-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BDBag/1.6.3-intel-2021b x x x - x x"}, {"location": "available_software/detail/BEDOPS/", "title": "BEDOPS", "text": ""}, {"location": "available_software/detail/BEDOPS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BEDOPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BEDOPS, load one of these modules using a module load command like:

          module load BEDOPS/2.4.41-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BEDOPS/2.4.41-foss-2021b x x x x x x"}, {"location": "available_software/detail/BEDTools/", "title": "BEDTools", "text": ""}, {"location": "available_software/detail/BEDTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BEDTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BEDTools, load one of these modules using a module load command like:

          module load BEDTools/2.31.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BEDTools/2.31.0-GCC-12.3.0 x x x x x x BEDTools/2.30.0-GCC-12.2.0 x x x x x x BEDTools/2.30.0-GCC-11.3.0 x x x x x x BEDTools/2.30.0-GCC-11.2.0 x x x x x x BEDTools/2.30.0-GCC-10.2.0 - x x x x x BEDTools/2.29.2-GCC-9.3.0 - x x - x x BEDTools/2.29.2-GCC-8.3.0 - x x - x x BEDTools/2.19.1-GCC-8.3.0 - - - - - x"}, {"location": "available_software/detail/BLAST%2B/", "title": "BLAST+", "text": ""}, {"location": "available_software/detail/BLAST%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BLAST+ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BLAST+, load one of these modules using a module load command like:

          module load BLAST+/2.14.1-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BLAST+/2.14.1-gompi-2023a x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x BLAST+/2.13.0-gompi-2022a x x x x x x BLAST+/2.12.0-gompi-2021b x x x x x x BLAST+/2.11.0-gompi-2021a - x x x x x BLAST+/2.11.0-gompi-2020b x x x x x x BLAST+/2.10.1-iimpi-2020a - x x - x x BLAST+/2.10.1-gompi-2020a - x x - x x BLAST+/2.9.0-iimpi-2019b - x x - x x BLAST+/2.9.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/BLAT/", "title": "BLAT", "text": ""}, {"location": "available_software/detail/BLAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BLAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BLAT, load one of these modules using a module load command like:

          module load BLAT/3.7-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BLAT/3.7-GCC-11.3.0 x x x x x x BLAT/3.5-GCC-9.3.0 - x x - x - BLAT/3.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BLIS/", "title": "BLIS", "text": ""}, {"location": "available_software/detail/BLIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BLIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BLIS, load one of these modules using a module load command like:

          module load BLIS/0.9.0-GCC-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BLIS/0.9.0-GCC-13.2.0 x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x BLIS/0.9.0-GCC-11.3.0 x x x x x x BLIS/0.8.1-GCC-11.2.0 x x x x x x BLIS/0.8.1-GCC-10.3.0 x x x x x x BLIS/0.8.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/BRAKER/", "title": "BRAKER", "text": ""}, {"location": "available_software/detail/BRAKER/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BRAKER installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BRAKER, load one of these modules using a module load command like:

          module load BRAKER/2.1.6-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BRAKER/2.1.6-foss-2021b x x x x x x BRAKER/2.1.6-foss-2020b x x x - x x BRAKER/2.1.5-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BSMAPz/", "title": "BSMAPz", "text": ""}, {"location": "available_software/detail/BSMAPz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BSMAPz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BSMAPz, load one of these modules using a module load command like:

          module load BSMAPz/1.1.1-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BSMAPz/1.1.1-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/BSseeker2/", "title": "BSseeker2", "text": ""}, {"location": "available_software/detail/BSseeker2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BSseeker2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BSseeker2, load one of these modules using a module load command like:

          module load BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16 - x - - - - BSseeker2/2.1.8-GCC-8.3.0-Python-2.7.16 - x - - - -"}, {"location": "available_software/detail/BUSCO/", "title": "BUSCO", "text": ""}, {"location": "available_software/detail/BUSCO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BUSCO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BUSCO, load one of these modules using a module load command like:

          module load BUSCO/5.4.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BUSCO/5.4.3-foss-2021b x x x - x x BUSCO/5.1.2-foss-2020b - x x x x - BUSCO/4.1.2-foss-2020b - x x - x x BUSCO/4.0.6-foss-2020b - x x x x x BUSCO/4.0.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BUStools/", "title": "BUStools", "text": ""}, {"location": "available_software/detail/BUStools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BUStools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BUStools, load one of these modules using a module load command like:

          module load BUStools/0.43.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BUStools/0.43.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/BWA/", "title": "BWA", "text": ""}, {"location": "available_software/detail/BWA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BWA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BWA, load one of these modules using a module load command like:

          module load BWA/0.7.17-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BWA/0.7.17-iccifort-2019.5.281 - x - - - - BWA/0.7.17-GCCcore-12.3.0 x x x x x x BWA/0.7.17-GCCcore-12.2.0 x x x x x x BWA/0.7.17-GCCcore-11.3.0 x x x x x x BWA/0.7.17-GCCcore-11.2.0 x x x x x x BWA/0.7.17-GCC-10.2.0 - x x x x x BWA/0.7.17-GCC-9.3.0 - x x - x x BWA/0.7.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BamTools/", "title": "BamTools", "text": ""}, {"location": "available_software/detail/BamTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BamTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BamTools, load one of these modules using a module load command like:

          module load BamTools/2.5.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BamTools/2.5.2-GCC-12.3.0 x x x x x x BamTools/2.5.2-GCC-12.2.0 x x x x x x BamTools/2.5.2-GCC-11.3.0 x x x x x x BamTools/2.5.2-GCC-11.2.0 x x x x x x BamTools/2.5.1-iccifort-2019.5.281 - x x - x x BamTools/2.5.1-GCC-10.2.0 x x x x x x BamTools/2.5.1-GCC-9.3.0 - x x - x x BamTools/2.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bambi/", "title": "Bambi", "text": ""}, {"location": "available_software/detail/Bambi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bambi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Bambi, load one of these modules using a module load command like:

          module load Bambi/0.7.1-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bambi/0.7.1-intel-2021b x x x - x x"}, {"location": "available_software/detail/Bandage/", "title": "Bandage", "text": ""}, {"location": "available_software/detail/Bandage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bandage installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Bandage, load one of these modules using a module load command like:

          module load Bandage/0.9.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bandage/0.9.0-GCCcore-11.2.0 x x x - x x Bandage/0.8.1_Centos - x x x x x"}, {"location": "available_software/detail/BatMeth2/", "title": "BatMeth2", "text": ""}, {"location": "available_software/detail/BatMeth2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BatMeth2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BatMeth2, load one of these modules using a module load command like:

          module load BatMeth2/2.1-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BatMeth2/2.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/BayeScEnv/", "title": "BayeScEnv", "text": ""}, {"location": "available_software/detail/BayeScEnv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayeScEnv installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BayeScEnv, load one of these modules using a module load command like:

          module load BayeScEnv/1.1-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayeScEnv/1.1-iccifort-2019.5.281 - x - - - - BayeScEnv/1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/BayeScan/", "title": "BayeScan", "text": ""}, {"location": "available_software/detail/BayeScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayeScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BayeScan, load one of these modules using a module load command like:

          module load BayeScan/2.1-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayeScan/2.1-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/BayesAss3-SNPs/", "title": "BayesAss3-SNPs", "text": ""}, {"location": "available_software/detail/BayesAss3-SNPs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayesAss3-SNPs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BayesAss3-SNPs, load one of these modules using a module load command like:

          module load BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/BayesPrism/", "title": "BayesPrism", "text": ""}, {"location": "available_software/detail/BayesPrism/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayesPrism installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BayesPrism, load one of these modules using a module load command like:

          module load BayesPrism/2.0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayesPrism/2.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Bazel/", "title": "Bazel", "text": ""}, {"location": "available_software/detail/Bazel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bazel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Bazel, load one of these modules using a module load command like:

          module load Bazel/6.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bazel/6.3.1-GCCcore-12.3.0 x x x x x x Bazel/6.3.1-GCCcore-12.2.0 x x x x x x Bazel/5.1.1-GCCcore-11.3.0 x x x x x x Bazel/4.2.2-GCCcore-11.2.0 - - - x - - Bazel/3.7.2-GCCcore-11.2.0 x x x x x x Bazel/3.7.2-GCCcore-10.3.0 x x x x x x Bazel/3.7.2-GCCcore-10.2.0 x x x x x x Bazel/3.6.0-GCCcore-9.3.0 - x x - x x Bazel/3.4.1-GCCcore-8.3.0 - - x - x x Bazel/2.0.0-GCCcore-10.2.0 - x x x x x Bazel/2.0.0-GCCcore-8.3.0 - x x - x x Bazel/0.29.1-GCCcore-8.3.0 - x x - x x Bazel/0.26.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Beast/", "title": "Beast", "text": ""}, {"location": "available_software/detail/Beast/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Beast installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Beast, load one of these modules using a module load command like:

          module load Beast/2.7.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Beast/2.7.3-GCC-11.3.0 x x x x x x Beast/2.6.4-GCC-10.2.0 - x x - x - Beast/1.10.5pre1-GCC-11.3.0 x x x - x x Beast/1.10.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/BeautifulSoup/", "title": "BeautifulSoup", "text": ""}, {"location": "available_software/detail/BeautifulSoup/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BeautifulSoup, load one of these modules using a module load command like:

          module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x BeautifulSoup/4.11.1-GCCcore-12.2.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.3.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.2.0 x x x - x x BeautifulSoup/4.10.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/BerkeleyGW/", "title": "BerkeleyGW", "text": ""}, {"location": "available_software/detail/BerkeleyGW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BerkeleyGW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BerkeleyGW, load one of these modules using a module load command like:

          module load BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4 - x x - x x BerkeleyGW/2.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BiG-SCAPE/", "title": "BiG-SCAPE", "text": ""}, {"location": "available_software/detail/BiG-SCAPE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BiG-SCAPE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BiG-SCAPE, load one of these modules using a module load command like:

          module load BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BigDFT/", "title": "BigDFT", "text": ""}, {"location": "available_software/detail/BigDFT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BigDFT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BigDFT, load one of these modules using a module load command like:

          module load BigDFT/1.9.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BigDFT/1.9.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/BinSanity/", "title": "BinSanity", "text": ""}, {"location": "available_software/detail/BinSanity/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BinSanity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using BinSanity, load one of these modules using a module load command like:

          module load BinSanity/0.3.5-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BinSanity/0.3.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Bio-DB-HTS/", "title": "Bio-DB-HTS", "text": ""}, {"location": "available_software/detail/Bio-DB-HTS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bio-DB-HTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bio-DB-HTS, load one of these modules using a module load command like:

          module load Bio-DB-HTS/3.01-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bio-DB-HTS/3.01-GCC-11.3.0 x x x - x x Bio-DB-HTS/3.01-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Bio-EUtilities/", "title": "Bio-EUtilities", "text": ""}, {"location": "available_software/detail/Bio-EUtilities/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bio-EUtilities installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bio-EUtilities, load one of these modules using a module load command like:

          module load Bio-EUtilities/1.76-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bio-EUtilities/1.76-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bio-SearchIO-hmmer/", "title": "Bio-SearchIO-hmmer", "text": ""}, {"location": "available_software/detail/Bio-SearchIO-hmmer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bio-SearchIO-hmmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

          module load Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/BioPerl/", "title": "BioPerl", "text": ""}, {"location": "available_software/detail/BioPerl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BioPerl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using BioPerl, load one of these modules using a module load command like:

          module load BioPerl/1.7.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BioPerl/1.7.8-GCCcore-11.3.0 x x x x x x BioPerl/1.7.8-GCCcore-11.2.0 x x x x x x BioPerl/1.7.8-GCCcore-10.2.0 - x x x x x BioPerl/1.7.7-GCCcore-9.3.0 - x x - x x BioPerl/1.7.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Biopython/", "title": "Biopython", "text": ""}, {"location": "available_software/detail/Biopython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Biopython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Biopython, load one of these modules using a module load command like:

          module load Biopython/1.83-foss-2023a\n
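
          As a quick, hedged check (not part of the generated overview) that Python picks up the Biopython module just loaded, one could run:

          module load Biopython/1.83-foss-2023a
          python -c "import Bio; print(Bio.__version__)"    # Biopython installs the 'Bio' package; this should print 1.83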

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Biopython/1.83-foss-2023a x x x x x x Biopython/1.81-foss-2022b x x x x x x Biopython/1.79-foss-2022a x x x x x x Biopython/1.79-foss-2021b x x x x x x Biopython/1.79-foss-2021a x x x x x x Biopython/1.78-intel-2020b - x x - x x Biopython/1.78-intel-2020a-Python-3.8.2 - x x - x x Biopython/1.78-fosscuda-2020b x - - - x - Biopython/1.78-foss-2020b x x x x x x Biopython/1.78-foss-2020a-Python-3.8.2 - x x - x x Biopython/1.76-foss-2021b-Python-2.7.18 x x x x x x Biopython/1.76-foss-2020b-Python-2.7.18 - x x x x x Biopython/1.75-intel-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Bismark/", "title": "Bismark", "text": ""}, {"location": "available_software/detail/Bismark/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bismark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bismark, load one of these modules using a module load command like:

          module load Bismark/0.23.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bismark/0.23.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/Bison/", "title": "Bison", "text": ""}, {"location": "available_software/detail/Bison/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bison installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bison, load one of these modules using a module load command like:

          module load Bison/3.8.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bison/3.8.2-GCCcore-13.2.0 x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x Bison/3.8.2-GCCcore-11.3.0 x x x x x x Bison/3.8.2 x x x x x x Bison/3.7.6-GCCcore-11.2.0 x x x x x x Bison/3.7.6-GCCcore-10.3.0 x x x x x x Bison/3.7.6 x x x - x - Bison/3.7.1-GCCcore-10.2.0 x x x x x x Bison/3.7.1 x x x - x - Bison/3.5.3-GCCcore-9.3.0 x x x x x x Bison/3.5.3 x x x - x - Bison/3.3.2-GCCcore-8.3.0 x x x x x x Bison/3.3.2 x x x x x x Bison/3.0.5-GCCcore-8.2.0 - x - - - - Bison/3.0.5 - x - - - x Bison/3.0.4 x x x x x x"}, {"location": "available_software/detail/Blender/", "title": "Blender", "text": ""}, {"location": "available_software/detail/Blender/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Blender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Blender, load one of these modules using a module load command like:

          module load Blender/3.5.0-linux-x86_64-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Blender/3.5.0-linux-x86_64-CUDA-11.7.0 x x x x x x Blender/3.3.1-linux-x86_64-CUDA-11.7.0 x - - - x - Blender/3.3.1-linux-x86_64 x x x - x x Blender/2.81-intel-2019b-Python-3.7.4 - x x - x x Blender/2.81-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Block/", "title": "Block", "text": ""}, {"location": "available_software/detail/Block/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Block installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Block, load one of these modules using a module load command like:

          module load Block/1.5.3-20200525-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Block/1.5.3-20200525-foss-2022b x x x x x x Block/1.5.3-20200525-foss-2022a - x x x x x"}, {"location": "available_software/detail/Blosc/", "title": "Blosc", "text": ""}, {"location": "available_software/detail/Blosc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Blosc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Blosc, load one of these modules using a module load command like:

          module load Blosc/1.21.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Blosc/1.21.3-GCCcore-11.3.0 x x x x x x Blosc/1.21.1-GCCcore-11.2.0 x x x x x x Blosc/1.21.0-GCCcore-10.3.0 x x x x x x Blosc/1.21.0-GCCcore-10.2.0 - x x x x x Blosc/1.17.1-GCCcore-9.3.0 x x x x x x Blosc/1.17.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Blosc2/", "title": "Blosc2", "text": ""}, {"location": "available_software/detail/Blosc2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Blosc2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Blosc2, load one of these modules using a module load command like:

          module load Blosc2/2.6.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Blosc2/2.6.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Bonito/", "title": "Bonito", "text": ""}, {"location": "available_software/detail/Bonito/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bonito installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bonito, load one of these modules using a module load command like:

          module load Bonito/0.4.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bonito/0.4.0-fosscuda-2020b - - - - x - Bonito/0.3.8-fosscuda-2020b - - - - x - Bonito/0.1.0-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/Bonnie%2B%2B/", "title": "Bonnie++", "text": ""}, {"location": "available_software/detail/Bonnie%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bonnie++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bonnie++, load one of these modules using a module load command like:

          module load Bonnie++/2.00a-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bonnie++/2.00a-GCC-10.3.0 - x - - - -"}, {"location": "available_software/detail/Boost.MPI/", "title": "Boost.MPI", "text": ""}, {"location": "available_software/detail/Boost.MPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Boost.MPI, load one of these modules using a module load command like:

          module load Boost.MPI/1.81.0-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost.MPI/1.81.0-gompi-2022b x x x x x x Boost.MPI/1.79.0-gompi-2022a - x x x x x Boost.MPI/1.77.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Boost.Python-NumPy/", "title": "Boost.Python-NumPy", "text": ""}, {"location": "available_software/detail/Boost.Python-NumPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost.Python-NumPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Boost.Python-NumPy, load one of these modules using a module load command like:

          module load Boost.Python-NumPy/1.79.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost.Python-NumPy/1.79.0-foss-2022a - - x - x -"}, {"location": "available_software/detail/Boost.Python/", "title": "Boost.Python", "text": ""}, {"location": "available_software/detail/Boost.Python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Boost.Python, load one of these modules using a module load command like:

          module load Boost.Python/1.79.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost.Python/1.79.0-GCC-11.3.0 x x x x x x Boost.Python/1.77.0-GCC-11.2.0 x x x - x x Boost.Python/1.72.0-iimpi-2020a - x x - x x Boost.Python/1.71.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Boost/", "title": "Boost", "text": ""}, {"location": "available_software/detail/Boost/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Boost, load one of these modules using a module load command like:

          module load Boost/1.82.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost/1.82.0-GCC-12.3.0 x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x Boost/1.79.0-GCC-11.3.0 x x x x x x Boost/1.79.0-GCC-11.2.0 x x x x x x Boost/1.77.0-intel-compilers-2021.4.0 x x x x x x Boost/1.77.0-GCC-11.2.0 x x x x x x Boost/1.76.0-intel-compilers-2021.2.0 - x x - x x Boost/1.76.0-GCC-10.3.0 x x x x x x Boost/1.75.0-GCC-11.2.0 x x x x x x Boost/1.74.0-iccifort-2020.4.304 - x x x x x Boost/1.74.0-GCC-10.2.0 x x x x x x Boost/1.72.0-iompi-2020a - x - - - - Boost/1.72.0-iimpi-2020a x x x x x x Boost/1.72.0-gompi-2020a - x x - x x Boost/1.71.0-iimpi-2019b - x x - x x Boost/1.71.0-gompi-2019b x x x - x x"}, {"location": "available_software/detail/Bottleneck/", "title": "Bottleneck", "text": ""}, {"location": "available_software/detail/Bottleneck/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bottleneck installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bottleneck, load one of these modules using a module load command like:

          module load Bottleneck/1.3.2-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bottleneck/1.3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Bowtie/", "title": "Bowtie", "text": ""}, {"location": "available_software/detail/Bowtie/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bowtie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bowtie, load one of these modules using a module load command like:

          module load Bowtie/1.3.1-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bowtie/1.3.1-GCC-11.3.0 x x x x x x Bowtie/1.3.1-GCC-11.2.0 x x x x x x Bowtie/1.3.0-GCC-10.2.0 - x x - x - Bowtie/1.2.3-iccifort-2019.5.281 - x - - - - Bowtie/1.2.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bowtie2/", "title": "Bowtie2", "text": ""}, {"location": "available_software/detail/Bowtie2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bowtie2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bowtie2, load one of these modules using a module load command like:

          module load Bowtie2/2.4.5-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bowtie2/2.4.5-GCC-11.3.0 x x x x x x Bowtie2/2.4.4-GCC-11.2.0 x x x - x x Bowtie2/2.4.2-GCC-10.2.0 - x x x x x Bowtie2/2.4.1-GCC-9.3.0 - x x - x x Bowtie2/2.3.5.1-iccifort-2019.5.281 - x - - - - Bowtie2/2.3.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bracken/", "title": "Bracken", "text": ""}, {"location": "available_software/detail/Bracken/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bracken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Bracken, load one of these modules using a module load command like:

          module load Bracken/2.9-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bracken/2.9-GCCcore-10.3.0 x x x x x x Bracken/2.7-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Brotli-python/", "title": "Brotli-python", "text": ""}, {"location": "available_software/detail/Brotli-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Brotli-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Brotli-python, load one of these modules using a module load command like:

          module load Brotli-python/1.0.9-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Brotli-python/1.0.9-GCCcore-11.3.0 x x x x x x Brotli-python/1.0.9-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Brotli/", "title": "Brotli", "text": ""}, {"location": "available_software/detail/Brotli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Brotli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Brotli, load one of these modules using a module load command like:

          module load Brotli/1.1.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Brotli/1.1.0-GCCcore-13.2.0 x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x Brotli/1.0.9-GCCcore-11.3.0 x x x x x x Brotli/1.0.9-GCCcore-11.2.0 x x x x x x Brotli/1.0.9-GCCcore-10.3.0 x x x x x x Brotli/1.0.9-GCCcore-10.2.0 x - x x x x"}, {"location": "available_software/detail/Brunsli/", "title": "Brunsli", "text": ""}, {"location": "available_software/detail/Brunsli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Brunsli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Brunsli, load one of these modules using a module load command like:

          module load Brunsli/0.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Brunsli/0.1-GCCcore-12.3.0 x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x Brunsli/0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CASPR/", "title": "CASPR", "text": ""}, {"location": "available_software/detail/CASPR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CASPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CASPR, load one of these modules using a module load command like:

          module load CASPR/20200730-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CASPR/20200730-foss-2022a x x x x x x"}, {"location": "available_software/detail/CCL/", "title": "CCL", "text": ""}, {"location": "available_software/detail/CCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CCL, load one of these modules using a module load command like:

          module load CCL/1.12.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CCL/1.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/CD-HIT/", "title": "CD-HIT", "text": ""}, {"location": "available_software/detail/CD-HIT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CD-HIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CD-HIT, load one of these modules using a module load command like:

          module load CD-HIT/4.8.1-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CD-HIT/4.8.1-iccifort-2019.5.281 - x x - x x CD-HIT/4.8.1-GCC-12.2.0 x x x x x x CD-HIT/4.8.1-GCC-11.2.0 x x x - x x CD-HIT/4.8.1-GCC-10.2.0 - x x x x x CD-HIT/4.8.1-GCC-9.3.0 - x x - x x CD-HIT/4.8.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/CDAT/", "title": "CDAT", "text": ""}, {"location": "available_software/detail/CDAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CDAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CDAT, load one of these modules using a module load command like:

          module load CDAT/8.2.1-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CDAT/8.2.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/CDBtools/", "title": "CDBtools", "text": ""}, {"location": "available_software/detail/CDBtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CDBtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CDBtools, load one of these modules using a module load command like:

          module load CDBtools/0.99-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CDBtools/0.99-GCC-10.2.0 x x x - x x"}, {"location": "available_software/detail/CDO/", "title": "CDO", "text": ""}, {"location": "available_software/detail/CDO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CDO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CDO, load one of these modules using a module load command like:

          module load CDO/2.0.5-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CDO/2.0.5-gompi-2021b x x x x x x CDO/1.9.10-gompi-2021a x x x - x x CDO/1.9.8-intel-2019b - x x - x x"}, {"location": "available_software/detail/CENSO/", "title": "CENSO", "text": ""}, {"location": "available_software/detail/CENSO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CENSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CENSO, load one of these modules using a module load command like:

          module load CENSO/1.2.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CENSO/1.2.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/CESM-deps/", "title": "CESM-deps", "text": ""}, {"location": "available_software/detail/CESM-deps/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CESM-deps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CESM-deps, load one of these modules using a module load command like:

          module load CESM-deps/2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CESM-deps/2-foss-2021b x x x - x x"}, {"location": "available_software/detail/CFDEMcoupling/", "title": "CFDEMcoupling", "text": ""}, {"location": "available_software/detail/CFDEMcoupling/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CFDEMcoupling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CFDEMcoupling, load one of these modules using a module load command like:

          module load CFDEMcoupling/3.8.0-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CFDEMcoupling/3.8.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/CFITSIO/", "title": "CFITSIO", "text": ""}, {"location": "available_software/detail/CFITSIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CFITSIO, load one of these modules using a module load command like:

          module load CFITSIO/4.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x CFITSIO/4.2.0-GCCcore-11.3.0 x x x x x x CFITSIO/4.1.0-GCCcore-11.3.0 x x x x x x CFITSIO/3.49-GCCcore-11.2.0 x x x x x x CFITSIO/3.47-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CGAL/", "title": "CGAL", "text": ""}, {"location": "available_software/detail/CGAL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CGAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CGAL, load one of these modules using a module load command like:

          module load CGAL/5.6-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CGAL/5.6-GCCcore-12.3.0 x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x CGAL/5.2-iimpi-2020b - x - - - - CGAL/5.2-gompi-2020b x x x x x x CGAL/4.14.3-iimpi-2021a - x x - x x CGAL/4.14.3-gompi-2022a x x x x x x CGAL/4.14.3-gompi-2021b x x x x x x CGAL/4.14.3-gompi-2021a x x x x x x CGAL/4.14.3-gompi-2020a-Python-3.8.2 - x x - x x CGAL/4.14.1-foss-2019b-Python-3.7.4 x x x - x x CGAL/4.14.1-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/CGmapTools/", "title": "CGmapTools", "text": ""}, {"location": "available_software/detail/CGmapTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CGmapTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CGmapTools, load one of these modules using a module load command like:

          module load CGmapTools/0.1.2-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CGmapTools/0.1.2-intel-2019b - x x - x x"}, {"location": "available_software/detail/CIRCexplorer2/", "title": "CIRCexplorer2", "text": ""}, {"location": "available_software/detail/CIRCexplorer2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CIRCexplorer2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CIRCexplorer2, load one of these modules using a module load command like:

          module load CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18 x x x x x x CIRCexplorer2/2.3.8-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CIRI-long/", "title": "CIRI-long", "text": ""}, {"location": "available_software/detail/CIRI-long/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CIRI-long installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CIRI-long, load one of these modules using a module load command like:

          module load CIRI-long/1.0.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CIRI-long/1.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/CIRIquant/", "title": "CIRIquant", "text": ""}, {"location": "available_software/detail/CIRIquant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CIRIquant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CIRIquant, load one of these modules using a module load command like:

          module load CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CITE-seq-Count/", "title": "CITE-seq-Count", "text": ""}, {"location": "available_software/detail/CITE-seq-Count/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CITE-seq-Count installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CITE-seq-Count, load one of these modules using a module load command like:

          module load CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/CLEAR/", "title": "CLEAR", "text": ""}, {"location": "available_software/detail/CLEAR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CLEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CLEAR, load one of these modules using a module load command like:

          module load CLEAR/20210117-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CLEAR/20210117-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CLHEP/", "title": "CLHEP", "text": ""}, {"location": "available_software/detail/CLHEP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CLHEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CLHEP, load one of these modules using a module load command like:

          module load CLHEP/2.4.6.4-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CLHEP/2.4.6.4-GCC-12.2.0 x x x x x x CLHEP/2.4.5.3-GCC-11.3.0 x x x x x x CLHEP/2.4.5.1-GCC-11.2.0 x x x x x x CLHEP/2.4.4.0-GCC-11.2.0 x x x x x x CLHEP/2.4.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/CMAverse/", "title": "CMAverse", "text": ""}, {"location": "available_software/detail/CMAverse/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CMAverse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CMAverse, load one of these modules using a module load command like:

          module load CMAverse/20220112-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CMAverse/20220112-foss-2021b x x x - x x"}, {"location": "available_software/detail/CMSeq/", "title": "CMSeq", "text": ""}, {"location": "available_software/detail/CMSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CMSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CMSeq, load one of these modules using a module load command like:

          module load CMSeq/1.0.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CMSeq/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CMake/", "title": "CMake", "text": ""}, {"location": "available_software/detail/CMake/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CMake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CMake, load one of these modules using a module load command like:

          module load CMake/3.27.6-GCCcore-13.2.0\n
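
          As an illustrative sketch (assuming a hypothetical project with a CMakeLists.txt in the current directory), a typical out-of-source configure and build after loading the module could look like:

          module load CMake/3.27.6-GCCcore-13.2.0
          cmake -S . -B build      # configure: source tree in ., build tree in ./build
          cmake --build build      # compile using the generated build system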

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CMake/3.27.6-GCCcore-13.2.0 x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x CMake/3.24.3-GCCcore-11.3.0 x x x x x x CMake/3.23.1-GCCcore-11.3.0 x x x x x x CMake/3.22.1-GCCcore-11.2.0 x x x x x x CMake/3.21.1-GCCcore-11.2.0 x x x x x x CMake/3.20.1-GCCcore-10.3.0 x x x x x x CMake/3.20.1-GCCcore-10.2.0 x - - - - - CMake/3.18.4-GCCcore-10.2.0 x x x x x x CMake/3.16.4-GCCcore-9.3.0 x x x x x x CMake/3.15.3-GCCcore-8.3.0 x x x x x x CMake/3.13.3-GCCcore-8.2.0 - x - - - - CMake/3.12.1 x x x x x x CMake/3.11.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/COLMAP/", "title": "COLMAP", "text": ""}, {"location": "available_software/detail/COLMAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which COLMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using COLMAP, load one of these modules using a module load command like:

          module load COLMAP/3.8-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty COLMAP/3.8-foss-2022b x x x x x x"}, {"location": "available_software/detail/CONCOCT/", "title": "CONCOCT", "text": ""}, {"location": "available_software/detail/CONCOCT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CONCOCT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CONCOCT, load one of these modules using a module load command like:

          module load CONCOCT/1.1.0-foss-2020b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CONCOCT/1.1.0-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CP2K/", "title": "CP2K", "text": ""}, {"location": "available_software/detail/CP2K/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CP2K installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CP2K, load one of these modules using a module load command like:

          module load CP2K/2023.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CP2K/2023.1-foss-2023a x x x x x x CP2K/2023.1-foss-2022b x x x x x x CP2K/2022.1-foss-2022a x x x x x x CP2K/9.1-foss-2022a x x x x x x CP2K/8.2-foss-2021a - x x x x - CP2K/8.1-foss-2020b - x x x x - CP2K/7.1-intel-2020a - x x - x x CP2K/7.1-foss-2020a - x x - x x CP2K/6.1-intel-2020a - x x - x x CP2K/5.1-iomkl-2020a - x - - - - CP2K/5.1-intel-2020a-O1 - x - - - - CP2K/5.1-intel-2020a - x x - x x CP2K/5.1-intel-2019b - x - - - - CP2K/5.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/CPC2/", "title": "CPC2", "text": ""}, {"location": "available_software/detail/CPC2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CPC2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CPC2, load one of these modules using a module load command like:

          module load CPC2/1.0.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CPC2/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CPLEX/", "title": "CPLEX", "text": ""}, {"location": "available_software/detail/CPLEX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CPLEX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CPLEX, load one of these modules using a module load command like:

          module load CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4 x x x x x x"}, {"location": "available_software/detail/CPPE/", "title": "CPPE", "text": ""}, {"location": "available_software/detail/CPPE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CPPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CPPE, load one of these modules using a module load command like:

          module load CPPE/0.3.1-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CPPE/0.3.1-GCC-12.2.0 x x x x x x CPPE/0.3.1-GCC-11.3.0 - x x x x x"}, {"location": "available_software/detail/CREST/", "title": "CREST", "text": ""}, {"location": "available_software/detail/CREST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CREST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CREST, load one of these modules using a module load command like:

          module load CREST/2.12-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CREST/2.12-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CRISPR-DAV/", "title": "CRISPR-DAV", "text": ""}, {"location": "available_software/detail/CRISPR-DAV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CRISPR-DAV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CRISPR-DAV, load one of these modules using a module load command like:

          module load CRISPR-DAV/2.3.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CRISPR-DAV/2.3.4-foss-2020b - x x x x -"}, {"location": "available_software/detail/CRISPResso2/", "title": "CRISPResso2", "text": ""}, {"location": "available_software/detail/CRISPResso2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CRISPResso2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CRISPResso2, load one of these modules using a module load command like:

          module load CRISPResso2/2.2.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CRISPResso2/2.2.1-foss-2020b - x x x x x CRISPResso2/2.1.2-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CRYSTAL17/", "title": "CRYSTAL17", "text": ""}, {"location": "available_software/detail/CRYSTAL17/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CRYSTAL17 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CRYSTAL17, load one of these modules using a module load command like:

          module load CRYSTAL17/1.0.2-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CRYSTAL17/1.0.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/CSBDeep/", "title": "CSBDeep", "text": ""}, {"location": "available_software/detail/CSBDeep/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CSBDeep installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CSBDeep, load one of these modules using a module load command like:

          module load CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0 x - - - x - CSBDeep/0.7.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CUDA/", "title": "CUDA", "text": ""}, {"location": "available_software/detail/CUDA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CUDA, load one of these modules using a module load command like:

          module load CUDA/12.1.1\n
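
          As a hedged example (not part of the generated overview), the CUDA module provides the nvcc compiler, which can be checked right after loading; compiling with nvcc does not require a GPU, but running GPU code does:

          module load CUDA/12.1.1
          nvcc --version           # print the version of the CUDA compiler that was loaded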

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CUDA/12.1.1 x - x - x - CUDA/11.7.0 x x x x x x CUDA/11.4.1 x - - - x - CUDA/11.3.1 x x x - x x CUDA/11.1.1-iccifort-2020.4.304 - - - - x - CUDA/11.1.1-GCC-10.2.0 x x x x x x CUDA/11.0.2-iccifort-2020.1.217 - - - - x - CUDA/10.1.243-iccifort-2019.5.281 - - - - x - CUDA/10.1.243-GCC-8.3.0 x - - - x -"}, {"location": "available_software/detail/CUDAcore/", "title": "CUDAcore", "text": ""}, {"location": "available_software/detail/CUDAcore/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CUDAcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CUDAcore, load one of these modules using a module load command like:

          module load CUDAcore/11.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CUDAcore/11.2.1 x - x - x - CUDAcore/11.1.1 x x x x x x CUDAcore/11.0.2 - - - - x -"}, {"location": "available_software/detail/CUnit/", "title": "CUnit", "text": ""}, {"location": "available_software/detail/CUnit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CUnit, load one of these modules using a module load command like:

          module load CUnit/2.1-3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CUnit/2.1-3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/CVXOPT/", "title": "CVXOPT", "text": ""}, {"location": "available_software/detail/CVXOPT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CVXOPT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CVXOPT, load one of these modules using a module load command like:

          module load CVXOPT/1.3.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CVXOPT/1.3.1-foss-2022a x x x x x x CVXOPT/1.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Calib/", "title": "Calib", "text": ""}, {"location": "available_software/detail/Calib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Calib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Calib, load one of these modules using a module load command like:

          module load Calib/0.3.4-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Calib/0.3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/Cantera/", "title": "Cantera", "text": ""}, {"location": "available_software/detail/Cantera/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cantera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Cantera, load one of these modules using a module load command like:

          module load Cantera/3.0.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cantera/3.0.0-foss-2023a x x x x x x Cantera/2.6.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/CapnProto/", "title": "CapnProto", "text": ""}, {"location": "available_software/detail/CapnProto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CapnProto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CapnProto, load one of these modules using a module load command like:

          module load CapnProto/1.0.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x CapnProto/0.9.1-GCCcore-11.2.0 x x x - x x CapnProto/0.8.0-GCCcore-9.3.0 - x x x - x"}, {"location": "available_software/detail/Cartopy/", "title": "Cartopy", "text": ""}, {"location": "available_software/detail/Cartopy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cartopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Cartopy, load one of these modules using a module load command like:

          module load Cartopy/0.22.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cartopy/0.22.0-foss-2023a x x x x x x Cartopy/0.20.3-foss-2022a x x x x x x Cartopy/0.20.3-foss-2021b x x x x x x Cartopy/0.19.0.post1-intel-2020b - x x - x x Cartopy/0.19.0.post1-foss-2020b - x x x x x Cartopy/0.18.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Casanovo/", "title": "Casanovo", "text": ""}, {"location": "available_software/detail/Casanovo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Casanovo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Casanovo, load one of these modules using a module load command like:

          module load Casanovo/3.3.0-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Casanovo/3.3.0-foss-2022a-CUDA-11.7.0 x - - - x - Casanovo/3.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CatBoost/", "title": "CatBoost", "text": ""}, {"location": "available_software/detail/CatBoost/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CatBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CatBoost, load one of these modules using a module load command like:

          module load CatBoost/1.2-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CatBoost/1.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CatLearn/", "title": "CatLearn", "text": ""}, {"location": "available_software/detail/CatLearn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CatLearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CatLearn, load one of these modules using a module load command like:

          module load CatLearn/0.6.2-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CatLearn/0.6.2-intel-2022a x x x x x x"}, {"location": "available_software/detail/CatMAP/", "title": "CatMAP", "text": ""}, {"location": "available_software/detail/CatMAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CatMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CatMAP, load one of these modules using a module load command like:

          module load CatMAP/20220519-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CatMAP/20220519-foss-2022a x x x x x x"}, {"location": "available_software/detail/Catch2/", "title": "Catch2", "text": ""}, {"location": "available_software/detail/Catch2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Catch2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Catch2, load one of these modules using a module load command like:

          module load Catch2/2.13.9-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Catch2/2.13.9-GCCcore-13.2.0 x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Cbc/", "title": "Cbc", "text": ""}, {"location": "available_software/detail/Cbc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Cbc, load one of these modules using a module load command like:

          module load Cbc/2.10.11-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cbc/2.10.11-foss-2023a x x x x x x Cbc/2.10.5-foss-2022b x x x x x x"}, {"location": "available_software/detail/CellBender/", "title": "CellBender", "text": ""}, {"location": "available_software/detail/CellBender/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellBender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CellBender, load one of these modules using a module load command like:

          module load CellBender/0.3.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellBender/0.3.1-foss-2022a-CUDA-11.7.0 x - x - x - CellBender/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellOracle/", "title": "CellOracle", "text": ""}, {"location": "available_software/detail/CellOracle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellOracle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using CellOracle, load one of these modules using a module load command like:

          module load CellOracle/0.12.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellOracle/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellProfiler/", "title": "CellProfiler", "text": ""}, {"location": "available_software/detail/CellProfiler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellProfiler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellProfiler, load one of these modules using a module load command like:

          module load CellProfiler/4.2.4-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellProfiler/4.2.4-foss-2021a x x x - x x"}, {"location": "available_software/detail/CellRanger-ATAC/", "title": "CellRanger-ATAC", "text": ""}, {"location": "available_software/detail/CellRanger-ATAC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellRanger-ATAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellRanger-ATAC, load one of these modules using a module load command like:

          module load CellRanger-ATAC/2.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellRanger-ATAC/2.1.0 x x x x x x CellRanger-ATAC/2.0.0 - x x - x -"}, {"location": "available_software/detail/CellRanger/", "title": "CellRanger", "text": ""}, {"location": "available_software/detail/CellRanger/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellRanger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellRanger, load one of these modules using a module load command like:

          module load CellRanger/7.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellRanger/7.0.0 - x x x x x CellRanger/6.1.2 - x x - x x CellRanger/6.0.1 - x x - x - CellRanger/4.0.0 - - x - x - CellRanger/3.1.0 - - x - x -"}, {"location": "available_software/detail/CellRank/", "title": "CellRank", "text": ""}, {"location": "available_software/detail/CellRank/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellRank installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellRank, load one of these modules using a module load command like:

          module load CellRank/2.0.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellRank/2.0.2-foss-2022a x x x x x x CellRank/1.4.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/CellTypist/", "title": "CellTypist", "text": ""}, {"location": "available_software/detail/CellTypist/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellTypist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellTypist, load one of these modules using a module load command like:

          module load CellTypist/1.6.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellTypist/1.6.2-foss-2023a x x x x x x CellTypist/1.0.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Cellpose/", "title": "Cellpose", "text": ""}, {"location": "available_software/detail/Cellpose/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cellpose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cellpose, load one of these modules using a module load command like:

          module load Cellpose/2.2.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cellpose/2.2.2-foss-2022a-CUDA-11.7.0 x - - - x - Cellpose/2.2.2-foss-2022a x - x x x x"}, {"location": "available_software/detail/Centrifuge/", "title": "Centrifuge", "text": ""}, {"location": "available_software/detail/Centrifuge/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Centrifuge installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Centrifuge, load one of these modules using a module load command like:

          module load Centrifuge/1.0.4-beta-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Centrifuge/1.0.4-beta-gompi-2020a - x x - x x"}, {"location": "available_software/detail/Cereal/", "title": "Cereal", "text": ""}, {"location": "available_software/detail/Cereal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cereal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cereal, load one of these modules using a module load command like:

          module load Cereal/1.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cereal/1.3.0 x x x x x x"}, {"location": "available_software/detail/Ceres-Solver/", "title": "Ceres-Solver", "text": ""}, {"location": "available_software/detail/Ceres-Solver/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ceres-Solver installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Ceres-Solver, load one of these modules using a module load command like:

          module load Ceres-Solver/2.2.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ceres-Solver/2.2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Cgl/", "title": "Cgl", "text": ""}, {"location": "available_software/detail/Cgl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cgl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cgl, load one of these modules using a module load command like:

          module load Cgl/0.60.8-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cgl/0.60.8-foss-2023a x x x x x x Cgl/0.60.7-foss-2022b x x x x x x"}, {"location": "available_software/detail/CharLS/", "title": "CharLS", "text": ""}, {"location": "available_software/detail/CharLS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CharLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CharLS, load one of these modules using a module load command like:

          module load CharLS/2.4.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CharLS/2.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CheMPS2/", "title": "CheMPS2", "text": ""}, {"location": "available_software/detail/CheMPS2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CheMPS2, load one of these modules using a module load command like:

          module load CheMPS2/1.8.12-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CheMPS2/1.8.12-foss-2022b x x x x x x CheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/Check/", "title": "Check", "text": ""}, {"location": "available_software/detail/Check/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Check installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Check, load one of these modules using a module load command like:

          module load Check/0.15.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Check/0.15.2-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/CheckM/", "title": "CheckM", "text": ""}, {"location": "available_software/detail/CheckM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CheckM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CheckM, load one of these modules using a module load command like:

          module load CheckM/1.1.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CheckM/1.1.3-intel-2020a-Python-3.8.2 - x x - x x CheckM/1.1.3-foss-2021b x x x - x x CheckM/1.1.2-intel-2019b-Python-3.7.4 - x x - x x CheckM/1.1.2-foss-2019b-Python-3.7.4 - x x - x x CheckM/1.0.18-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/Chimera/", "title": "Chimera", "text": ""}, {"location": "available_software/detail/Chimera/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Chimera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Chimera, load one of these modules using a module load command like:

          module load Chimera/1.16-linux_x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Chimera/1.16-linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Circlator/", "title": "Circlator", "text": ""}, {"location": "available_software/detail/Circlator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Circlator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Circlator, load one of these modules using a module load command like:

          module load Circlator/1.5.5-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Circlator/1.5.5-foss-2023a x x x x x x"}, {"location": "available_software/detail/Circuitscape/", "title": "Circuitscape", "text": ""}, {"location": "available_software/detail/Circuitscape/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Circuitscape installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Circuitscape, load one of these modules using a module load command like:

          module load Circuitscape/5.12.3-Julia-1.7.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Circuitscape/5.12.3-Julia-1.7.2 x x x x x x"}, {"location": "available_software/detail/Clair3/", "title": "Clair3", "text": ""}, {"location": "available_software/detail/Clair3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clair3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clair3, load one of these modules using a module load command like:

          module load Clair3/1.0.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clair3/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/Clang/", "title": "Clang", "text": ""}, {"location": "available_software/detail/Clang/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clang installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clang, load one of these modules using a module load command like:

          module load Clang/16.0.6-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clang/16.0.6-GCCcore-12.3.0 x x x x x x Clang/15.0.5-GCCcore-11.3.0 x x x x x x Clang/13.0.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - Clang/13.0.1-GCCcore-11.3.0 x x x x x x Clang/12.0.1-GCCcore-11.2.0 x x x x x x Clang/12.0.1-GCCcore-10.3.0 x x x x x x Clang/11.0.1-gcccuda-2020b - - - - x - Clang/11.0.1-GCCcore-10.2.0 - x x x x x Clang/10.0.0-GCCcore-9.3.0 - x x - x x Clang/9.0.1-GCCcore-8.3.0 - x x - x x Clang/9.0.1-GCC-8.3.0-CUDA-10.1.243 x - - - x -"}, {"location": "available_software/detail/Clp/", "title": "Clp", "text": ""}, {"location": "available_software/detail/Clp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clp, load one of these modules using a module load command like:

          module load Clp/1.17.9-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clp/1.17.9-foss-2023a x x x x x x Clp/1.17.8-foss-2022b x x x x x x Clp/1.17.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/Clustal-Omega/", "title": "Clustal-Omega", "text": ""}, {"location": "available_software/detail/Clustal-Omega/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clustal-Omega installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clustal-Omega, load one of these modules using a module load command like:

          module load Clustal-Omega/1.2.4-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clustal-Omega/1.2.4-intel-compilers-2021.2.0 - x x - x x Clustal-Omega/1.2.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/ClustalW2/", "title": "ClustalW2", "text": ""}, {"location": "available_software/detail/ClustalW2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ClustalW2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ClustalW2, load one of these modules using a module load command like:

          module load ClustalW2/2.1-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ClustalW2/2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/CmdStanR/", "title": "CmdStanR", "text": ""}, {"location": "available_software/detail/CmdStanR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CmdStanR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CmdStanR, load one of these modules using a module load command like:

          module load CmdStanR/0.7.1-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CmdStanR/0.7.1-foss-2023a-R-4.3.2 x x x x x x CmdStanR/0.5.2-foss-2022a-R-4.2.1 x x x x x x CmdStanR/0.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/CodAn/", "title": "CodAn", "text": ""}, {"location": "available_software/detail/CodAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CodAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CodAn, load one of these modules using a module load command like:

          module load CodAn/1.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CodAn/1.2-foss-2021b x x x x x x"}, {"location": "available_software/detail/CoinUtils/", "title": "CoinUtils", "text": ""}, {"location": "available_software/detail/CoinUtils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CoinUtils, load one of these modules using a module load command like:

          module load CoinUtils/2.11.10-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CoinUtils/2.11.10-GCC-12.3.0 x x x x x x CoinUtils/2.11.9-GCC-12.2.0 x x x x x x CoinUtils/2.11.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/ColabFold/", "title": "ColabFold", "text": ""}, {"location": "available_software/detail/ColabFold/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ColabFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ColabFold, load one of these modules using a module load command like:

          module load ColabFold/1.5.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ColabFold/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - ColabFold/1.5.2-foss-2022a - - x - x -"}, {"location": "available_software/detail/CompareM/", "title": "CompareM", "text": ""}, {"location": "available_software/detail/CompareM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CompareM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CompareM, load one of these modules using a module load command like:

          module load CompareM/0.1.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CompareM/0.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Compress-Raw-Zlib/", "title": "Compress-Raw-Zlib", "text": ""}, {"location": "available_software/detail/Compress-Raw-Zlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Compress-Raw-Zlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Compress-Raw-Zlib, load one of these modules using a module load command like:

          module load Compress-Raw-Zlib/2.202-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Compress-Raw-Zlib/2.202-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Concorde/", "title": "Concorde", "text": ""}, {"location": "available_software/detail/Concorde/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Concorde installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Concorde, load one of these modules using a module load command like:

          module load Concorde/20031219-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Concorde/20031219-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/CoordgenLibs/", "title": "CoordgenLibs", "text": ""}, {"location": "available_software/detail/CoordgenLibs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CoordgenLibs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CoordgenLibs, load one of these modules using a module load command like:

          module load CoordgenLibs/3.0.1-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CoordgenLibs/3.0.1-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/CopyKAT/", "title": "CopyKAT", "text": ""}, {"location": "available_software/detail/CopyKAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CopyKAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CopyKAT, load one of these modules using a module load command like:

          module load CopyKAT/1.1.0-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CopyKAT/1.1.0-foss-2022b-R-4.2.2 x x x x x x CopyKAT/1.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Coreutils/", "title": "Coreutils", "text": ""}, {"location": "available_software/detail/Coreutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Coreutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Coreutils, load one of these modules using a module load command like:

          module load Coreutils/8.32-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Coreutils/8.32-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CppUnit/", "title": "CppUnit", "text": ""}, {"location": "available_software/detail/CppUnit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CppUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CppUnit, load one of these modules using a module load command like:

          module load CppUnit/1.15.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CppUnit/1.15.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/CuPy/", "title": "CuPy", "text": ""}, {"location": "available_software/detail/CuPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CuPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CuPy, load one of these modules using a module load command like:

          module load CuPy/8.5.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CuPy/8.5.0-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Cufflinks/", "title": "Cufflinks", "text": ""}, {"location": "available_software/detail/Cufflinks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cufflinks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cufflinks, load one of these modules using a module load command like:

          module load Cufflinks/20190706-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cufflinks/20190706-GCC-11.2.0 x x x x x x Cufflinks/20190706-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Cython/", "title": "Cython", "text": ""}, {"location": "available_software/detail/Cython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cython, load one of these modules using a module load command like:

          module load Cython/3.0.8-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cython/3.0.8-GCCcore-12.2.0 x x x x x x Cython/3.0.7-GCCcore-12.3.0 x x x x x x Cython/0.29.33-GCCcore-11.3.0 x x x x x x Cython/0.29.22-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/DALI/", "title": "DALI", "text": ""}, {"location": "available_software/detail/DALI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DALI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DALI, load one of these modules using a module load command like:

          module load DALI/2.1.2-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DALI/2.1.2-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/DAS_Tool/", "title": "DAS_Tool", "text": ""}, {"location": "available_software/detail/DAS_Tool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DAS_Tool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DAS_Tool, load one of these modules using a module load command like:

          module load DAS_Tool/1.1.1-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DAS_Tool/1.1.1-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/DB/", "title": "DB", "text": ""}, {"location": "available_software/detail/DB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DB, load one of these modules using a module load command like:

          module load DB/18.1.40-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DB/18.1.40-GCCcore-12.2.0 x x x x x x DB/18.1.40-GCCcore-11.3.0 x x x x x x DB/18.1.40-GCCcore-11.2.0 x x x x x x DB/18.1.40-GCCcore-10.3.0 x x x x x x DB/18.1.40-GCCcore-10.2.0 x x x x x x DB/18.1.32-GCCcore-9.3.0 x x x x x x DB/18.1.32-GCCcore-8.3.0 x x x x x x DB/18.1.32-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/DBD-mysql/", "title": "DBD-mysql", "text": ""}, {"location": "available_software/detail/DBD-mysql/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DBD-mysql installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DBD-mysql, load one of these modules using a module load command like:

          module load DBD-mysql/4.050-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DBD-mysql/4.050-GCC-11.3.0 x x x x x x DBD-mysql/4.050-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/DBG2OLC/", "title": "DBG2OLC", "text": ""}, {"location": "available_software/detail/DBG2OLC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DBG2OLC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DBG2OLC, load one of these modules using a module load command like:

          module load DBG2OLC/20200724-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DBG2OLC/20200724-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/DB_File/", "title": "DB_File", "text": ""}, {"location": "available_software/detail/DB_File/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DB_File installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DB_File, load one of these modules using a module load command like:

          module load DB_File/1.858-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DB_File/1.858-GCCcore-11.3.0 x x x x x x DB_File/1.857-GCCcore-11.2.0 x x x x x x DB_File/1.855-GCCcore-10.2.0 - x x x x x DB_File/1.835-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/DBus/", "title": "DBus", "text": ""}, {"location": "available_software/detail/DBus/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DBus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DBus, load one of these modules using a module load command like:

          module load DBus/1.15.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DBus/1.15.4-GCCcore-12.3.0 x x x x x x DBus/1.15.2-GCCcore-12.2.0 x x x x x x DBus/1.14.0-GCCcore-11.3.0 x x x x x x DBus/1.13.18-GCCcore-11.2.0 x x x x x x DBus/1.13.18-GCCcore-10.3.0 x x x x x x DBus/1.13.18-GCCcore-10.2.0 x x x x x x DBus/1.13.12-GCCcore-9.3.0 - x x - x x DBus/1.13.12-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/DETONATE/", "title": "DETONATE", "text": ""}, {"location": "available_software/detail/DETONATE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DETONATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DETONATE, load one of these modules using a module load command like:

          module load DETONATE/1.11-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DETONATE/1.11-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/DFT-D3/", "title": "DFT-D3", "text": ""}, {"location": "available_software/detail/DFT-D3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DFT-D3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DFT-D3, load one of these modules using a module load command like:

          module load DFT-D3/3.2.0-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DFT-D3/3.2.0-intel-compilers-2021.2.0 - x x - x x DFT-D3/3.2.0-iccifort-2020.4.304 - x x x x x"}, {"location": "available_software/detail/DIA-NN/", "title": "DIA-NN", "text": ""}, {"location": "available_software/detail/DIA-NN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIA-NN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIA-NN, load one of these modules using a module load command like:

          module load DIA-NN/1.8.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIA-NN/1.8.1 x x x - x x"}, {"location": "available_software/detail/DIALOGUE/", "title": "DIALOGUE", "text": ""}, {"location": "available_software/detail/DIALOGUE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIALOGUE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIALOGUE, load one of these modules using a module load command like:

          module load DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0 x x x x x x"}, {"location": "available_software/detail/DIAMOND/", "title": "DIAMOND", "text": ""}, {"location": "available_software/detail/DIAMOND/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIAMOND installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIAMOND, load one of these modules using a module load command like:

          module load DIAMOND/2.1.8-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIAMOND/2.1.8-GCC-12.3.0 x x x x x x DIAMOND/2.1.8-GCC-12.2.0 x x x x x x DIAMOND/2.1.0-GCC-11.3.0 x x x x x x DIAMOND/2.0.13-GCC-11.2.0 x x x x x x DIAMOND/2.0.11-GCC-10.3.0 - x x - x x DIAMOND/2.0.7-GCC-10.2.0 x x x x x x DIAMOND/2.0.6-GCC-10.2.0 - x - - - - DIAMOND/0.9.30-iccifort-2019.5.281 - x x - x x DIAMOND/0.9.30-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/DIANA/", "title": "DIANA", "text": ""}, {"location": "available_software/detail/DIANA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIANA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIANA, load one of these modules using a module load command like:

          module load DIANA/10.5\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIANA/10.5 - x x - x - DIANA/10.4 - - x - x -"}, {"location": "available_software/detail/DIRAC/", "title": "DIRAC", "text": ""}, {"location": "available_software/detail/DIRAC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIRAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIRAC, load one of these modules using a module load command like:

          module load DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64 - x x - x - DIRAC/19.0-intel-2020a-Python-2.7.18-int64 - x x - x x"}, {"location": "available_software/detail/DL_POLY_Classic/", "title": "DL_POLY_Classic", "text": ""}, {"location": "available_software/detail/DL_POLY_Classic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DL_POLY_Classic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DL_POLY_Classic, load one of these modules using a module load command like:

          module load DL_POLY_Classic/1.10-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DL_POLY_Classic/1.10-intel-2019b - x x - x x DL_POLY_Classic/1.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/DMCfun/", "title": "DMCfun", "text": ""}, {"location": "available_software/detail/DMCfun/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DMCfun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DMCfun, load one of these modules using a module load command like:

          module load DMCfun/1.3.0-foss-2019b-R-3.6.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DMCfun/1.3.0-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/DOLFIN/", "title": "DOLFIN", "text": ""}, {"location": "available_software/detail/DOLFIN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DOLFIN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DOLFIN, load one of these modules using a module load command like:

          module load DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/DRAGMAP/", "title": "DRAGMAP", "text": ""}, {"location": "available_software/detail/DRAGMAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DRAGMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DRAGMAP, load one of these modules using a module load command like:

          module load DRAGMAP/1.3.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DRAGMAP/1.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/DROP/", "title": "DROP", "text": ""}, {"location": "available_software/detail/DROP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DROP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DROP, load one of these modules using a module load command like:

          module load DROP/1.1.0-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DROP/1.1.0-foss-2020b-R-4.0.3 - x x x x x DROP/1.0.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/DUBStepR/", "title": "DUBStepR", "text": ""}, {"location": "available_software/detail/DUBStepR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DUBStepR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DUBStepR, load one of these modules using a module load command like:

          module load DUBStepR/1.2.0-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DUBStepR/1.2.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Dakota/", "title": "Dakota", "text": ""}, {"location": "available_software/detail/Dakota/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dakota installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Dakota, load one of these modules using a module load command like:

          module load Dakota/6.16.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dakota/6.16.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Dalton/", "title": "Dalton", "text": ""}, {"location": "available_software/detail/Dalton/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dalton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Dalton, load one of these modules using a module load command like:

          module load Dalton/2020.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dalton/2020.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/DeepLoc/", "title": "DeepLoc", "text": ""}, {"location": "available_software/detail/DeepLoc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DeepLoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DeepLoc, load one of these modules using a module load command like:

          module load DeepLoc/2.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DeepLoc/2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Delly/", "title": "Delly", "text": ""}, {"location": "available_software/detail/Delly/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Delly installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Delly, load one of these modules using a module load command like:

          module load Delly/0.8.7-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Delly/0.8.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/DendroPy/", "title": "DendroPy", "text": ""}, {"location": "available_software/detail/DendroPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DendroPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DendroPy, load one of these modules using a module load command like:

          module load DendroPy/4.6.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.2.0 x x x - x x DendroPy/4.5.2-GCCcore-10.2.0-Python-2.7.18 - x x x x x DendroPy/4.5.2-GCCcore-10.2.0 - x x x x x DendroPy/4.4.0-GCCcore-9.3.0 - x x - x x DendroPy/4.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/DensPart/", "title": "DensPart", "text": ""}, {"location": "available_software/detail/DensPart/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DensPart installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DensPart, load one of these modules using a module load command like:

          module load DensPart/20220603-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DensPart/20220603-intel-2022a x x x x x x"}, {"location": "available_software/detail/Deprecated/", "title": "Deprecated", "text": ""}, {"location": "available_software/detail/Deprecated/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Deprecated installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Deprecated, load one of these modules using a module load command like:

          module load Deprecated/1.2.13-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Deprecated/1.2.13-foss-2022a x x x x x x Deprecated/1.2.13-foss-2021a x x x x x x"}, {"location": "available_software/detail/DiCE-ML/", "title": "DiCE-ML", "text": ""}, {"location": "available_software/detail/DiCE-ML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DiCE-ML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DiCE-ML, load one of these modules using a module load command like:

          module load DiCE-ML/0.9-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DiCE-ML/0.9-foss-2022a x x x x x x"}, {"location": "available_software/detail/Dice/", "title": "Dice", "text": ""}, {"location": "available_software/detail/Dice/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dice installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Dice, load one of these modules using a module load command like:

          module load Dice/20240101-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dice/20240101-foss-2022b x x x x x x Dice/20221025-foss-2022a - x x x x x"}, {"location": "available_software/detail/DoubletFinder/", "title": "DoubletFinder", "text": ""}, {"location": "available_software/detail/DoubletFinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DoubletFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DoubletFinder, load one of these modules using a module load command like:

          module load DoubletFinder/2.0.3-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DoubletFinder/2.0.3-foss-2020a-R-4.0.0 - - x - x - DoubletFinder/2.0.3-20230819-foss-2022b-R-4.2.2 x x x x x x DoubletFinder/2.0.3-20230131-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Doxygen/", "title": "Doxygen", "text": ""}, {"location": "available_software/detail/Doxygen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Doxygen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Doxygen, load one of these modules using a module load command like:

          module load Doxygen/1.9.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x Doxygen/1.9.4-GCCcore-11.3.0 x x x x x x Doxygen/1.9.1-GCCcore-11.2.0 x x x x x x Doxygen/1.9.1-GCCcore-10.3.0 x x x x x x Doxygen/1.8.20-GCCcore-10.2.0 x x x x x x Doxygen/1.8.17-GCCcore-9.3.0 x x x x x x Doxygen/1.8.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Dsuite/", "title": "Dsuite", "text": ""}, {"location": "available_software/detail/Dsuite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dsuite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Dsuite, load one of these modules using a module load command like:

          module load Dsuite/20210718-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dsuite/20210718-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/DualSPHysics/", "title": "DualSPHysics", "text": ""}, {"location": "available_software/detail/DualSPHysics/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DualSPHysics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DualSPHysics, load one of these modules using a module load command like:

          module load DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1 x - - - x -"}, {"location": "available_software/detail/DyMat/", "title": "DyMat", "text": ""}, {"location": "available_software/detail/DyMat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DyMat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DyMat, load one of these modules using a module load command like:

          module load DyMat/0.7-foss-2021b-2020-12-12\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DyMat/0.7-foss-2021b-2020-12-12 x x x - x x"}, {"location": "available_software/detail/EDirect/", "title": "EDirect", "text": ""}, {"location": "available_software/detail/EDirect/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EDirect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using EDirect, load one of these modules using a module load command like:

          module load EDirect/20.5.20231006-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EDirect/20.5.20231006-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ELPA/", "title": "ELPA", "text": ""}, {"location": "available_software/detail/ELPA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ELPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ELPA, load one of these modules using a module load command like:

          module load ELPA/2021.05.001-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ELPA/2021.05.001-intel-2021b x x x - x x ELPA/2021.05.001-intel-2021a - x x - x x ELPA/2021.05.001-foss-2021b x x x - x x ELPA/2020.11.001-intel-2020b - x x x x x ELPA/2019.11.001-intel-2019b - x x - x x ELPA/2019.11.001-foss-2019b - x x - x x"}, {"location": "available_software/detail/EMBOSS/", "title": "EMBOSS", "text": ""}, {"location": "available_software/detail/EMBOSS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EMBOSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using EMBOSS, load one of these modules using a module load command like:

          module load EMBOSS/6.6.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EMBOSS/6.6.0-foss-2021b x x x - x x EMBOSS/6.6.0-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ESM-2/", "title": "ESM-2", "text": ""}, {"location": "available_software/detail/ESM-2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ESM-2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ESM-2, load one of these modules using a module load command like:

          module load ESM-2/2.0.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ESM-2/2.0.0-foss-2022b x x x x x x ESM-2/2.0.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/ESMF/", "title": "ESMF", "text": ""}, {"location": "available_software/detail/ESMF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ESMF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ESMF, load one of these modules using a module load command like:

          module load ESMF/8.2.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ESMF/8.2.0-foss-2021b x x x - x x ESMF/8.1.1-foss-2021a - x x - x x ESMF/8.0.1-intel-2020b - x x x x x ESMF/8.0.1-foss-2020a - x x - x x ESMF/8.0.0-intel-2019b - x x - x x"}, {"location": "available_software/detail/ESMPy/", "title": "ESMPy", "text": ""}, {"location": "available_software/detail/ESMPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ESMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ESMPy, load one of these modules using a module load command like:

          module load ESMPy/8.0.1-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ESMPy/8.0.1-intel-2020b - x x - x x ESMPy/8.0.1-foss-2020a-Python-3.8.2 - x x - x x ESMPy/8.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ETE/", "title": "ETE", "text": ""}, {"location": "available_software/detail/ETE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ETE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ETE, load one of these modules using a module load command like:

          module load ETE/3.1.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ETE/3.1.3-foss-2022b x x x x x x ETE/3.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/EUKulele/", "title": "EUKulele", "text": ""}, {"location": "available_software/detail/EUKulele/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EUKulele installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using EUKulele, load one of these modules using a module load command like:

          module load EUKulele/2.0.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EUKulele/2.0.6-foss-2022a x x x x x x EUKulele/1.0.4-foss-2020b - x x - x x"}, {"location": "available_software/detail/EasyBuild/", "title": "EasyBuild", "text": ""}, {"location": "available_software/detail/EasyBuild/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using EasyBuild, load one of these modules using a module load command like:

          module load EasyBuild/4.9.0\n
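
          Once an EasyBuild module is loaded, the eb command becomes available. As a small sketch (the search term below is just an example), you could verify the version and look for existing easyconfigs like this:

          eb --version\n
          eb --search EMBOSS\n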

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EasyBuild/4.9.0 x x x x x x EasyBuild/4.8.2 x x x x x x EasyBuild/4.8.1 x x x x x x EasyBuild/4.8.0 x x x x x x EasyBuild/4.7.1 x x x x x x EasyBuild/4.7.0 x x x x x x EasyBuild/4.6.2 x x x x x x EasyBuild/4.6.1 x x x x x x EasyBuild/4.6.0 x x x x x x EasyBuild/4.5.5 x x x x x x EasyBuild/4.5.4 x x x x x x EasyBuild/4.5.3 x x x x x x EasyBuild/4.5.2 x x x x x x EasyBuild/4.5.1 x x x x x x EasyBuild/4.5.0 x x x x x x EasyBuild/4.4.2 x x x x x x EasyBuild/4.4.1 x x x x x x EasyBuild/4.4.0 x x x x x x EasyBuild/4.3.4 x x x x x x EasyBuild/4.3.3 x x x x x x EasyBuild/4.3.2 x x x x x x EasyBuild/4.3.1 x x x x x x EasyBuild/4.3.0 x x x x x x EasyBuild/4.2.2 x x x x x x EasyBuild/4.2.1 x x x x x x EasyBuild/4.2.0 x x x x x x"}, {"location": "available_software/detail/Eigen/", "title": "Eigen", "text": ""}, {"location": "available_software/detail/Eigen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Eigen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Eigen, load one of these modules using a module load command like:

          module load Eigen/3.4.0-GCCcore-13.2.0\n
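
          Eigen is a header-only C++ library, so using it mostly means adding its include directory to your compile line. As a minimal sketch with the module above loaded (solver_demo.cpp is a hypothetical source file; EasyBuild-based installations typically expose their prefix via an EBROOTEIGEN environment variable, which you can verify with module show):

          g++ -O2 -I${EBROOTEIGEN}/include solver_demo.cpp -o solver_demo\n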

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Eigen/3.4.0-GCCcore-13.2.0 x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x Eigen/3.4.0-GCCcore-11.3.0 x x x x x x Eigen/3.4.0-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-10.3.0 x x x x x x Eigen/3.3.9-GCCcore-10.2.0 - - x x x x Eigen/3.3.8-GCCcore-10.2.0 x x x x x x Eigen/3.3.7-GCCcore-9.3.0 x x x x x x Eigen/3.3.7 x x x x x x"}, {"location": "available_software/detail/Elk/", "title": "Elk", "text": ""}, {"location": "available_software/detail/Elk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Elk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Elk, load one of these modules using a module load command like:

          module load Elk/7.0.12-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Elk/7.0.12-foss-2020b - x x x x x"}, {"location": "available_software/detail/EpiSCORE/", "title": "EpiSCORE", "text": ""}, {"location": "available_software/detail/EpiSCORE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EpiSCORE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using EpiSCORE, load one of these modules using a module load command like:

          module load EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Excel-Writer-XLSX/", "title": "Excel-Writer-XLSX", "text": ""}, {"location": "available_software/detail/Excel-Writer-XLSX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Excel-Writer-XLSX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Excel-Writer-XLSX, load one of these modules using a module load command like:

          module load Excel-Writer-XLSX/1.09-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Excel-Writer-XLSX/1.09-foss-2020b - x x x x x"}, {"location": "available_software/detail/Exonerate/", "title": "Exonerate", "text": ""}, {"location": "available_software/detail/Exonerate/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Exonerate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Exonerate, load one of these modules using a module load command like:

          module load Exonerate/2.4.0-iccifort-2019.5.281\n
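
          As an illustrative sketch only (the file names are hypothetical placeholders), aligning a set of protein sequences against a genome with the module above loaded could look like:

          exonerate --model protein2genome --query proteins.fa --target genome.fa > exonerate_hits.txt\n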

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Exonerate/2.4.0-iccifort-2019.5.281 - x x - x x Exonerate/2.4.0-GCC-12.2.0 x x x x x x Exonerate/2.4.0-GCC-11.2.0 x x x x x x Exonerate/2.4.0-GCC-10.2.0 x x x - x x Exonerate/2.4.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ExtremeLy/", "title": "ExtremeLy", "text": ""}, {"location": "available_software/detail/ExtremeLy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ExtremeLy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ExtremeLy, load one of these modules using a module load command like:

          module load ExtremeLy/2.3.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ExtremeLy/2.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/FALCON/", "title": "FALCON", "text": ""}, {"location": "available_software/detail/FALCON/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FALCON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FALCON, load one of these modules using a module load command like:

          module load FALCON/1.8.8-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FALCON/1.8.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/FASTA/", "title": "FASTA", "text": ""}, {"location": "available_software/detail/FASTA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FASTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FASTA, load one of these modules using a module load command like:

          module load FASTA/36.3.8i-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FASTA/36.3.8i-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/FASTX-Toolkit/", "title": "FASTX-Toolkit", "text": ""}, {"location": "available_software/detail/FASTX-Toolkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FASTX-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FASTX-Toolkit, load one of these modules using a module load command like:

          module load FASTX-Toolkit/0.0.14-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FASTX-Toolkit/0.0.14-GCC-11.3.0 x x x x x x FASTX-Toolkit/0.0.14-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/FDS/", "title": "FDS", "text": ""}, {"location": "available_software/detail/FDS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FDS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FDS, load one of these modules using a module load command like:

          module load FDS/6.8.0-intel-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FDS/6.8.0-intel-2022b x x x x x x FDS/6.7.9-intel-2022a x x x - x x FDS/6.7.7-intel-2021b x x x - x x FDS/6.7.6-intel-2020b - x x x x x FDS/6.7.5-intel-2020b - - x - x - FDS/6.7.5-intel-2020a - x x - x x FDS/6.7.4-intel-2020a - x x - x x"}, {"location": "available_software/detail/FEniCS/", "title": "FEniCS", "text": ""}, {"location": "available_software/detail/FEniCS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FEniCS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FEniCS, load one of these modules using a module load command like:

          module load FEniCS/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FEniCS/2019.1.0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FFAVES/", "title": "FFAVES", "text": ""}, {"location": "available_software/detail/FFAVES/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFAVES installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FFAVES, load one of these modules using a module load command like:

          module load FFAVES/2022.11.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFAVES/2022.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/FFC/", "title": "FFC", "text": ""}, {"location": "available_software/detail/FFC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FFC, load one of these modules using a module load command like:

          module load FFC/2019.1.0.post0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFC/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FFTW.MPI/", "title": "FFTW.MPI", "text": ""}, {"location": "available_software/detail/FFTW.MPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FFTW.MPI, load one of these modules using a module load command like:

          module load FFTW.MPI/3.3.10-gompi-2023b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFTW.MPI/3.3.10-gompi-2023b x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x FFTW.MPI/3.3.10-gompi-2022a x x x x x x"}, {"location": "available_software/detail/FFTW/", "title": "FFTW", "text": ""}, {"location": "available_software/detail/FFTW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FFTW, load one of these modules using a module load command like:

          module load FFTW/3.3.10-gompi-2021b\n
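
          Once one of these FFTW modules is loaded, compiling and linking your own C code against it typically only requires adding the FFTW and math libraries. As a minimal sketch (fft_demo.c is a hypothetical source file):

          gcc -O2 fft_demo.c -lfftw3 -lm -o fft_demo\n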

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFTW/3.3.10-gompi-2021b x x x x x x FFTW/3.3.10-GCC-13.2.0 x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x FFTW/3.3.10-GCC-11.3.0 x x x x x x FFTW/3.3.9-intel-2021a - x x - x x FFTW/3.3.9-gompi-2021a x x x x x x FFTW/3.3.8-iomkl-2020a - x - - - - FFTW/3.3.8-intelcuda-2020b - - - - x - FFTW/3.3.8-intel-2020b - x x x x x FFTW/3.3.8-intel-2020a - x x - x x FFTW/3.3.8-intel-2019b - x x - x x FFTW/3.3.8-iimpi-2020b - x - - - - FFTW/3.3.8-gompic-2020b x - - - x - FFTW/3.3.8-gompi-2020b x x x x x x FFTW/3.3.8-gompi-2020a - x x - x x FFTW/3.3.8-gompi-2019b x x x - x x"}, {"location": "available_software/detail/FFmpeg/", "title": "FFmpeg", "text": ""}, {"location": "available_software/detail/FFmpeg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FFmpeg, load one of these modules using a module load command like:

          module load FFmpeg/6.0-GCCcore-12.3.0\n
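
          As a minimal sketch (the input and output file names are placeholders), re-encoding a video to H.264 with the module above loaded could look like:

          ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac output.mp4\n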

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFmpeg/6.0-GCCcore-12.3.0 x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x FFmpeg/4.4.2-GCCcore-11.3.0 x x x x x x FFmpeg/4.3.2-GCCcore-11.2.0 x x x x x x FFmpeg/4.3.2-GCCcore-10.3.0 x x x x x x FFmpeg/4.3.1-GCCcore-10.2.0 x x x x x x FFmpeg/4.2.2-GCCcore-9.3.0 - x x - x x FFmpeg/4.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FIAT/", "title": "FIAT", "text": ""}, {"location": "available_software/detail/FIAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FIAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FIAT, load one of these modules using a module load command like:

          module load FIAT/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FIAT/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FIGARO/", "title": "FIGARO", "text": ""}, {"location": "available_software/detail/FIGARO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FIGARO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FIGARO, load one of these modules using a module load command like:

          module load FIGARO/1.1.2-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FIGARO/1.1.2-intel-2020b - - x - x x"}, {"location": "available_software/detail/FLAC/", "title": "FLAC", "text": ""}, {"location": "available_software/detail/FLAC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FLAC, load one of these modules using a module load command like:

          module load FLAC/1.4.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLAC/1.4.2-GCCcore-12.3.0 x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x FLAC/1.3.4-GCCcore-11.3.0 x x x x x x FLAC/1.3.3-GCCcore-11.2.0 x x x x x x FLAC/1.3.3-GCCcore-10.3.0 x x x x x x FLAC/1.3.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/FLAIR/", "title": "FLAIR", "text": ""}, {"location": "available_software/detail/FLAIR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLAIR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FLAIR, load one of these modules using a module load command like:

          module load FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4 - x x - x - FLAIR/1.5-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FLANN/", "title": "FLANN", "text": ""}, {"location": "available_software/detail/FLANN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLANN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FLANN, load one of these modules using a module load command like:

          module load FLANN/1.9.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLANN/1.9.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/FLASH/", "title": "FLASH", "text": ""}, {"location": "available_software/detail/FLASH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FLASH, load one of these modules using a module load command like:

          module load FLASH/2.2.00-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLASH/2.2.00-foss-2020b - x x x x x FLASH/2.2.00-GCC-11.2.0 x x x - x x FLASH/1.2.11-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/FLTK/", "title": "FLTK", "text": ""}, {"location": "available_software/detail/FLTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FLTK, load one of these modules using a module load command like:

          module load FLTK/1.3.5-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLTK/1.3.5-GCCcore-10.2.0 - x x x x x FLTK/1.3.5-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/FLUENT/", "title": "FLUENT", "text": ""}, {"location": "available_software/detail/FLUENT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLUENT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FLUENT, load one of these modules using a module load command like:

          module load FLUENT/2023R1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLUENT/2023R1 x x x x x x FLUENT/2022R1 - x x - x x FLUENT/2021R2 x x x x x x FLUENT/2019R3 - x x - x x"}, {"location": "available_software/detail/FMM3D/", "title": "FMM3D", "text": ""}, {"location": "available_software/detail/FMM3D/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FMM3D installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FMM3D, load one of these modules using a module load command like:

          module load FMM3D/20211018-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FMM3D/20211018-foss-2020b - x x x x x"}, {"location": "available_software/detail/FMPy/", "title": "FMPy", "text": ""}, {"location": "available_software/detail/FMPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FMPy, load one of these modules using a module load command like:

          module load FMPy/0.3.2-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FMPy/0.3.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/FSL/", "title": "FSL", "text": ""}, {"location": "available_software/detail/FSL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FSL, load one of these modules using a module load command like:

          module load FSL/6.0.7.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FSL/6.0.7.2 x x x x x x FSL/6.0.5.1-foss-2021a - x x - x x FSL/6.0.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FabIO/", "title": "FabIO", "text": ""}, {"location": "available_software/detail/FabIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FabIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FabIO, load one of these modules using a module load command like:

          module load FabIO/0.11.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FabIO/0.11.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Faiss/", "title": "Faiss", "text": ""}, {"location": "available_software/detail/Faiss/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Faiss installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Faiss, load one of these modules using a module load command like:

          module load Faiss/1.7.2-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Faiss/1.7.2-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/FastANI/", "title": "FastANI", "text": ""}, {"location": "available_software/detail/FastANI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastANI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FastANI, load one of these modules using a module load command like:

          module load FastANI/1.34-GCC-12.3.0\n
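
          As an illustrative sketch (the genome file names are hypothetical), computing the average nucleotide identity between two genomes with the module above loaded could look like:

          fastANI -q query_genome.fna -r reference_genome.fna -o fastani_report.txt\n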

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastANI/1.34-GCC-12.3.0 x x x x x x FastANI/1.33-intel-compilers-2021.4.0 x x x - x x FastANI/1.33-iccifort-2020.4.304 - x x x x x FastANI/1.33-GCC-11.2.0 x x x - x x FastANI/1.33-GCC-10.2.0 - x x - x - FastANI/1.31-iccifort-2020.1.217 - x x - x x FastANI/1.3-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/FastME/", "title": "FastME", "text": ""}, {"location": "available_software/detail/FastME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FastME, load one of these modules using a module load command like:

          module load FastME/2.1.6.3-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastME/2.1.6.3-GCC-12.3.0 x x x x x x FastME/2.1.6.1-iccifort-2019.5.281 - x x - x x FastME/2.1.6.1-GCC-10.2.0 - x x x x x FastME/2.1.6.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastQC/", "title": "FastQC", "text": ""}, {"location": "available_software/detail/FastQC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FastQC, load one of these modules using a module load command like:

          module load FastQC/0.11.9-Java-11\n
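
          A typical run on a couple of FASTQ files could look like the sketch below (the file and directory names are placeholders); note that FastQC expects the output directory to exist before it is started:

          mkdir -p fastqc_results\n
          fastqc --outdir fastqc_results sample_R1.fastq.gz sample_R2.fastq.gz\n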

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastQC/0.11.9-Java-11 x x x x x x"}, {"location": "available_software/detail/FastQ_Screen/", "title": "FastQ_Screen", "text": ""}, {"location": "available_software/detail/FastQ_Screen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastQ_Screen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FastQ_Screen, load one of these modules using a module load command like:

          module load FastQ_Screen/0.14.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastQ_Screen/0.14.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/FastTree/", "title": "FastTree", "text": ""}, {"location": "available_software/detail/FastTree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FastTree, load one of these modules using a module load command like:

          module load FastTree/2.1.11-GCCcore-12.3.0\n
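
          As a small sketch (alignment.fasta is a hypothetical nucleotide alignment), inferring an approximate maximum-likelihood tree with the module above loaded could look like:

          FastTree -nt -gtr alignment.fasta > tree.nwk\n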

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastTree/2.1.11-GCCcore-12.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.2.0 x x x - x x FastTree/2.1.11-GCCcore-10.2.0 - x x x x x FastTree/2.1.11-GCCcore-9.3.0 - x x - x x FastTree/2.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastViromeExplorer/", "title": "FastViromeExplorer", "text": ""}, {"location": "available_software/detail/FastViromeExplorer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastViromeExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FastViromeExplorer, load one of these modules using a module load command like:

          module load FastViromeExplorer/20180422-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastViromeExplorer/20180422-foss-2019b - x x - x x"}, {"location": "available_software/detail/Fastaq/", "title": "Fastaq", "text": ""}, {"location": "available_software/detail/Fastaq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Fastaq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Fastaq, load one of these modules using a module load command like:

          module load Fastaq/3.17.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Fastaq/3.17.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Fiji/", "title": "Fiji", "text": ""}, {"location": "available_software/detail/Fiji/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Fiji installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Fiji, load one of these modules using a module load command like:

          module load Fiji/2.9.0-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Fiji/2.9.0-Java-1.8 x x x - x x"}, {"location": "available_software/detail/Filtlong/", "title": "Filtlong", "text": ""}, {"location": "available_software/detail/Filtlong/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Filtlong installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Filtlong, load one of these modules using a module load command like:

          module load Filtlong/0.2.0-GCC-10.2.0\n
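
          As an illustrative sketch (the read file names and thresholds are placeholders), filtering a long-read set by length and quality with the module above loaded could look like:

          filtlong --min_length 1000 --keep_percent 90 raw_reads.fastq.gz | gzip > filtered_reads.fastq.gz\n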

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Filtlong/0.2.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Fiona/", "title": "Fiona", "text": ""}, {"location": "available_software/detail/Fiona/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Fiona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Fiona, load one of these modules using a module load command like:

          module load Fiona/1.9.5-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Fiona/1.9.5-foss-2023a x x x x x x Fiona/1.9.2-foss-2022b x x x x x x Fiona/1.8.21-foss-2022a x x x x x x Fiona/1.8.21-foss-2021b x x x x x x Fiona/1.8.20-intel-2020b - x x - x x Fiona/1.8.20-foss-2020b - x x x x x Fiona/1.8.16-foss-2020a-Python-3.8.2 - x x - x x Fiona/1.8.13-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Flask/", "title": "Flask", "text": ""}, {"location": "available_software/detail/Flask/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Flask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Flask, load one of these modules using a module load command like:

          module load Flask/2.2.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Flask/2.2.2-GCCcore-11.3.0 x x x x x x Flask/2.0.2-GCCcore-11.2.0 x x x - x x Flask/1.1.4-GCCcore-10.3.0 x x x x x x Flask/1.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FlexiBLAS/", "title": "FlexiBLAS", "text": ""}, {"location": "available_software/detail/FlexiBLAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FlexiBLAS, load one of these modules using a module load command like:

          module load FlexiBLAS/3.3.1-GCC-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x FlexiBLAS/3.2.0-GCC-11.3.0 x x x x x x FlexiBLAS/3.0.4-GCC-11.2.0 x x x x x x FlexiBLAS/3.0.4-GCC-10.3.0 x x x x x x"}, {"location": "available_software/detail/Flye/", "title": "Flye", "text": ""}, {"location": "available_software/detail/Flye/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Flye installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Flye, load one of these modules using a module load command like:

          module load Flye/2.9.2-GCC-11.3.0\n
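
          As a minimal sketch (the read file is a placeholder; adjust the read-type option and thread count to your data and to the resources you requested), assembling Nanopore reads with the module above loaded could look like:

          flye --nano-raw nanopore_reads.fastq.gz --out-dir flye_assembly --threads 8\n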

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Flye/2.9.2-GCC-11.3.0 x x x x x x Flye/2.9-intel-compilers-2021.2.0 - x x - x x Flye/2.9-GCC-10.3.0 x x x x x - Flye/2.8.3-iccifort-2020.4.304 - x x - x - Flye/2.8.3-GCC-10.2.0 - x x - x - Flye/2.8.1-intel-2020a-Python-3.8.2 - x x - x x Flye/2.7-intel-2019b-Python-3.7.4 - x - - - - Flye/2.6-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FragGeneScan/", "title": "FragGeneScan", "text": ""}, {"location": "available_software/detail/FragGeneScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FragGeneScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FragGeneScan, load one of these modules using a module load command like:

          module load FragGeneScan/1.31-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FragGeneScan/1.31-GCCcore-11.3.0 x x x x x x FragGeneScan/1.31-GCCcore-11.2.0 x x x - x x FragGeneScan/1.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FreeBarcodes/", "title": "FreeBarcodes", "text": ""}, {"location": "available_software/detail/FreeBarcodes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeBarcodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FreeBarcodes, load one of these modules using a module load command like:

          module load FreeBarcodes/3.0.a5-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeBarcodes/3.0.a5-foss-2021b x x x - x x"}, {"location": "available_software/detail/FreeFEM/", "title": "FreeFEM", "text": ""}, {"location": "available_software/detail/FreeFEM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeFEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FreeFEM, load one of these modules using a module load command like:

          module load FreeFEM/4.5-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeFEM/4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FreeImage/", "title": "FreeImage", "text": ""}, {"location": "available_software/detail/FreeImage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeImage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FreeImage, load one of these modules using a module load command like:

          module load FreeImage/3.18.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeImage/3.18.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/FreeSurfer/", "title": "FreeSurfer", "text": ""}, {"location": "available_software/detail/FreeSurfer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeSurfer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FreeSurfer, load one of these modules using a module load command like:

          module load FreeSurfer/7.3.2-centos8_x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeSurfer/7.3.2-centos8_x86_64 x x x - x x FreeSurfer/7.2.0-centos8_x86_64 - x x - x x"}, {"location": "available_software/detail/FreeXL/", "title": "FreeXL", "text": ""}, {"location": "available_software/detail/FreeXL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeXL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FreeXL, load one of these modules using a module load command like:

          module load FreeXL/1.0.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeXL/1.0.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/FriBidi/", "title": "FriBidi", "text": ""}, {"location": "available_software/detail/FriBidi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FriBidi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FriBidi, load one of these modules using a module load command like:

          module load FriBidi/1.0.12-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x FriBidi/1.0.12-GCCcore-11.3.0 x x x x x x FriBidi/1.0.10-GCCcore-11.2.0 x x x x x x FriBidi/1.0.10-GCCcore-10.3.0 x x x x x x FriBidi/1.0.10-GCCcore-10.2.0 x x x x x x FriBidi/1.0.9-GCCcore-9.3.0 - x x - x x FriBidi/1.0.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FuSeq/", "title": "FuSeq", "text": ""}, {"location": "available_software/detail/FuSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FuSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FuSeq, load one of these modules using a module load command like:

          module load FuSeq/1.1.2-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FuSeq/1.1.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/FusionCatcher/", "title": "FusionCatcher", "text": ""}, {"location": "available_software/detail/FusionCatcher/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FusionCatcher installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using FusionCatcher, load one of these modules using a module load command like:

          module load FusionCatcher/1.30-foss-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FusionCatcher/1.30-foss-2019b-Python-2.7.16 - x x - x x FusionCatcher/1.20-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/GAPPadder/", "title": "GAPPadder", "text": ""}, {"location": "available_software/detail/GAPPadder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GAPPadder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GAPPadder, load one of these modules using a module load command like:

          module load GAPPadder/20170601-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GAPPadder/20170601-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/GATB-Core/", "title": "GATB-Core", "text": ""}, {"location": "available_software/detail/GATB-Core/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GATB-Core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GATB-Core, load one of these modules using a module load command like:

          module load GATB-Core/1.4.2-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GATB-Core/1.4.2-gompi-2022a x x x x x x"}, {"location": "available_software/detail/GATE/", "title": "GATE", "text": ""}, {"location": "available_software/detail/GATE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GATE, load one of these modules using a module load command like:

          module load GATE/9.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GATE/9.2-foss-2022a x x x x x x GATE/9.2-foss-2021b x x x x x x GATE/9.1-foss-2021b x x x x x x GATE/9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GATK/", "title": "GATK", "text": ""}, {"location": "available_software/detail/GATK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GATK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GATK, load one of these modules using a module load command like:

          module load GATK/4.4.0.0-GCCcore-12.3.0-Java-17\n
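
          As an illustrative sketch (the reference, BAM and output names are hypothetical), listing the available GATK tools and calling variants with HaplotypeCaller could look like:

          gatk --list\n
          gatk HaplotypeCaller -R reference.fasta -I sample.bam -O sample.vcf.gz\n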

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GATK/4.4.0.0-GCCcore-12.3.0-Java-17 x x x x x x GATK/4.3.0.0-GCCcore-11.3.0-Java-11 x x x x x x GATK/4.2.0.0-GCCcore-10.2.0-Java-11 - x x x x x GATK/4.1.8.1-GCCcore-9.3.0-Java-1.8 - x x - x x"}, {"location": "available_software/detail/GBprocesS/", "title": "GBprocesS", "text": ""}, {"location": "available_software/detail/GBprocesS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GBprocesS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GBprocesS, load one of these modules using a module load command like:

          module load GBprocesS/4.0.0.post1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GBprocesS/4.0.0.post1-foss-2022a x x x x x x GBprocesS/2.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GCC/", "title": "GCC", "text": ""}, {"location": "available_software/detail/GCC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GCC, load one of these modules using a module load command like:

          module load GCC/13.2.0\n
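
          As a minimal sketch (hello.c is a hypothetical source file), compiling and running a small C program with the module above loaded could look like:

          gcc -O2 hello.c -o hello\n
          ./hello\n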

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GCC/13.2.0 x x x x x x GCC/12.3.0 x x x x x x GCC/12.2.0 x x x x x x GCC/11.3.0 x x x x x x GCC/11.2.0 x x x x x x GCC/10.3.0 x x x x x x GCC/10.2.0 x x x x x x GCC/9.3.0 - x x x x x GCC/8.3.0 x x x x x x"}, {"location": "available_software/detail/GCCcore/", "title": "GCCcore", "text": ""}, {"location": "available_software/detail/GCCcore/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GCCcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GCCcore, load one of these modules using a module load command like:

          module load GCCcore/13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GCCcore/13.2.0 x x x x x x GCCcore/12.3.0 x x x x x x GCCcore/12.2.0 x x x x x x GCCcore/11.3.0 x x x x x x GCCcore/11.2.0 x x x x x x GCCcore/10.3.0 x x x x x x GCCcore/10.2.0 x x x x x x GCCcore/9.3.0 x x x x x x GCCcore/8.3.0 x x x x x x GCCcore/8.2.0 - x - - - -"}, {"location": "available_software/detail/GConf/", "title": "GConf", "text": ""}, {"location": "available_software/detail/GConf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GConf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GConf, load one of these modules using a module load command like:

          module load GConf/3.2.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GConf/3.2.6-GCCcore-11.2.0 x x x x x x GConf/3.2.6-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GDAL/", "title": "GDAL", "text": ""}, {"location": "available_software/detail/GDAL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GDAL, load one of these modules using a module load command like:

          module load GDAL/3.7.1-foss-2023a\n
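
          As a small sketch (the raster file names are placeholders), inspecting a raster and converting it to GeoTIFF with the GDAL command-line tools could look like:

          gdalinfo elevation.asc\n
          gdal_translate -of GTiff elevation.asc elevation.tif\n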

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDAL/3.7.1-foss-2023a x x x x x x GDAL/3.6.2-foss-2022b x x x x x x GDAL/3.5.0-foss-2022a x x x x x x GDAL/3.3.2-foss-2021b x x x x x x GDAL/3.3.0-foss-2021a x x x x x x GDAL/3.2.1-intel-2020b - x x - x x GDAL/3.2.1-fosscuda-2020b - - - - x - GDAL/3.2.1-foss-2020b - x x x x x GDAL/3.0.4-foss-2020a-Python-3.8.2 - x x - x x GDAL/3.0.2-intel-2019b-Python-3.7.4 - - x - x x GDAL/3.0.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDB/", "title": "GDB", "text": ""}, {"location": "available_software/detail/GDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GDB, load one of these modules using a module load command like:

          module load GDB/9.1-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDB/9.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDCM/", "title": "GDCM", "text": ""}, {"location": "available_software/detail/GDCM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDCM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GDCM, load one of these modules using a module load command like:

          module load GDCM/3.0.21-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDCM/3.0.21-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/GDGraph/", "title": "GDGraph", "text": ""}, {"location": "available_software/detail/GDGraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GDGraph, load one of these modules using a module load command like:

          module load GDGraph/1.56-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDGraph/1.56-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GDRCopy/", "title": "GDRCopy", "text": ""}, {"location": "available_software/detail/GDRCopy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GDRCopy, load one of these modules using a module load command like:

          module load GDRCopy/2.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDRCopy/2.3.1-GCCcore-12.3.0 x - x - x - GDRCopy/2.3-GCCcore-11.3.0 x x x - x x GDRCopy/2.3-GCCcore-11.2.0 x x x - x x GDRCopy/2.2-GCCcore-10.3.0 x - - - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x"}, {"location": "available_software/detail/GEGL/", "title": "GEGL", "text": ""}, {"location": "available_software/detail/GEGL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GEGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GEGL, load one of these modules using a module load command like:

          module load GEGL/0.4.30-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GEGL/0.4.30-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GEOS/", "title": "GEOS", "text": ""}, {"location": "available_software/detail/GEOS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GEOS, load one of these modules using a module load command like:

          module load GEOS/3.12.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GEOS/3.12.0-GCC-12.3.0 x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x GEOS/3.10.3-GCC-11.3.0 x x x x x x GEOS/3.9.1-iccifort-2020.4.304 - x x x x x GEOS/3.9.1-GCC-11.2.0 x x x x x x GEOS/3.9.1-GCC-10.3.0 x x x x x x GEOS/3.9.1-GCC-10.2.0 - x x x x x GEOS/3.8.1-GCC-9.3.0-Python-3.8.2 - x x - x x GEOS/3.8.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x GEOS/3.8.0-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GFF3-toolkit/", "title": "GFF3-toolkit", "text": ""}, {"location": "available_software/detail/GFF3-toolkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GFF3-toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GFF3-toolkit, load one of these modules using a module load command like:

          module load GFF3-toolkit/2.1.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GFF3-toolkit/2.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/GIMP/", "title": "GIMP", "text": ""}, {"location": "available_software/detail/GIMP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GIMP, load one of these modules using a module load command like:

          module load GIMP/2.10.24-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GIMP/2.10.24-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/GL2PS/", "title": "GL2PS", "text": ""}, {"location": "available_software/detail/GL2PS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GL2PS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GL2PS, load one of these modules using a module load command like:

          module load GL2PS/1.4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GL2PS/1.4.2-GCCcore-11.3.0 x x x x x x GL2PS/1.4.2-GCCcore-11.2.0 x x x x x x GL2PS/1.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLFW/", "title": "GLFW", "text": ""}, {"location": "available_software/detail/GLFW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLFW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GLFW, load one of these modules using a module load command like:

          module load GLFW/3.3.8-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLFW/3.3.8-GCCcore-12.3.0 x x x x x x GLFW/3.3.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/GLIMPSE/", "title": "GLIMPSE", "text": ""}, {"location": "available_software/detail/GLIMPSE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLIMPSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GLIMPSE, load one of these modules using a module load command like:

          module load GLIMPSE/2.0.0-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLIMPSE/2.0.0-GCC-12.2.0 x x x x x x GLIMPSE/2.0.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GLM/", "title": "GLM", "text": ""}, {"location": "available_software/detail/GLM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GLM, load one of these modules using a module load command like:

          module load GLM/0.9.9.8-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLM/0.9.9.8-GCCcore-10.2.0 x x x x x x GLM/0.9.9.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLPK/", "title": "GLPK", "text": ""}, {"location": "available_software/detail/GLPK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLPK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GLPK, load one of these modules using a module load command like:

          module load GLPK/5.0-GCCcore-12.3.0\n
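
          As a quick sanity check (assuming this build ships the standalone glpsol solver, as GLPK releases normally do), you can confirm that the module is active by printing the solver version:

          module load GLPK/5.0-GCCcore-12.3.0\nglpsol --version   # prints the GLPK/glpsol version if the module is loaded correctly\n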

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLPK/5.0-GCCcore-12.3.0 x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x GLPK/5.0-GCCcore-11.3.0 x x x x x x GLPK/5.0-GCCcore-11.2.0 x x x x x x GLPK/5.0-GCCcore-10.3.0 x x x x x x GLPK/4.65-GCCcore-10.2.0 x x x x x x GLPK/4.65-GCCcore-9.3.0 - x x - x x GLPK/4.65-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLib/", "title": "GLib", "text": ""}, {"location": "available_software/detail/GLib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GLib, load one of these modules using a module load command like:

          module load GLib/2.77.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLib/2.77.1-GCCcore-12.3.0 x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x GLib/2.72.1-GCCcore-11.3.0 x x x x x x GLib/2.69.1-GCCcore-11.2.0 x x x x x x GLib/2.68.2-GCCcore-10.3.0 x x x x x x GLib/2.66.1-GCCcore-10.2.0 x x x x x x GLib/2.64.1-GCCcore-9.3.0 x x x x x x GLib/2.62.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/GLibmm/", "title": "GLibmm", "text": ""}, {"location": "available_software/detail/GLibmm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLibmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GLibmm, load one of these modules using a module load command like:

          module load GLibmm/2.66.4-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLibmm/2.66.4-GCCcore-10.3.0 - x x - x x GLibmm/2.49.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GMAP-GSNAP/", "title": "GMAP-GSNAP", "text": ""}, {"location": "available_software/detail/GMAP-GSNAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GMAP-GSNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GMAP-GSNAP, load one of these modules using a module load command like:

          module load GMAP-GSNAP/2023-04-20-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GMAP-GSNAP/2023-04-20-GCC-12.2.0 x x x x x x GMAP-GSNAP/2023-02-17-GCC-11.3.0 x x x x x x GMAP-GSNAP/2019-09-12-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/GMP/", "title": "GMP", "text": ""}, {"location": "available_software/detail/GMP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GMP, load one of these modules using a module load command like:

          module load GMP/6.2.1-GCCcore-12.3.0\n
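
          GMP is a library rather than a command-line tool, so a minimal sketch for inspecting a loaded module is to look at what it adds to your environment (assumption: these are EasyBuild-generated modules, which set an EBROOTGMP variable pointing at the installation prefix):

          module load GMP/6.2.1-GCCcore-12.3.0\necho $EBROOTGMP   # assumed EasyBuild variable holding the GMP installation prefix\nmodule show GMP/6.2.1-GCCcore-12.3.0   # lists the paths (CPATH, LIBRARY_PATH, ...) the module prepends\n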

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GMP/6.2.1-GCCcore-12.3.0 x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x GMP/6.2.1-GCCcore-11.3.0 x x x x x x GMP/6.2.1-GCCcore-11.2.0 x x x x x x GMP/6.2.1-GCCcore-10.3.0 x x x x x x GMP/6.2.0-GCCcore-10.2.0 x x x x x x GMP/6.2.0-GCCcore-9.3.0 x x x x x x GMP/6.1.2-GCCcore-8.3.0 x x x x x x GMP/6.1.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/GOATOOLS/", "title": "GOATOOLS", "text": ""}, {"location": "available_software/detail/GOATOOLS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GOATOOLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GOATOOLS, load one of these modules using a module load command like:

          module load GOATOOLS/1.3.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GOATOOLS/1.3.1-foss-2022a x x x x x x GOATOOLS/1.3.1-foss-2021b x x x x x x GOATOOLS/1.1.6-foss-2020b - x x x x x"}, {"location": "available_software/detail/GObject-Introspection/", "title": "GObject-Introspection", "text": ""}, {"location": "available_software/detail/GObject-Introspection/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GObject-Introspection, load one of these modules using a module load command like:

          module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x GObject-Introspection/1.72.0-GCCcore-11.3.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-11.2.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-10.3.0 x x x x x x GObject-Introspection/1.66.1-GCCcore-10.2.0 x x x x x x GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x GObject-Introspection/1.63.1-GCCcore-8.3.0-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/GPAW-setups/", "title": "GPAW-setups", "text": ""}, {"location": "available_software/detail/GPAW-setups/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPAW-setups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GPAW-setups, load one of these modules using a module load command like:

          module load GPAW-setups/0.9.20000\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPAW-setups/0.9.20000 x x x x x x"}, {"location": "available_software/detail/GPAW/", "title": "GPAW", "text": ""}, {"location": "available_software/detail/GPAW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPAW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GPAW, load one of these modules using a module load command like:

          module load GPAW/22.8.0-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPAW/22.8.0-intel-2022a x x x x x x GPAW/22.8.0-intel-2021b x x x - x x GPAW/22.8.0-foss-2021b x x x - x x GPAW/20.1.0-intel-2019b-Python-3.7.4 - x x - x x GPAW/20.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GPy/", "title": "GPy", "text": ""}, {"location": "available_software/detail/GPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GPy, load one of these modules using a module load command like:

          module load GPy/1.10.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPy/1.10.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/GPyOpt/", "title": "GPyOpt", "text": ""}, {"location": "available_software/detail/GPyOpt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPyOpt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GPyOpt, load one of these modules using a module load command like:

          module load GPyOpt/1.2.6-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPyOpt/1.2.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/GPyTorch/", "title": "GPyTorch", "text": ""}, {"location": "available_software/detail/GPyTorch/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GPyTorch, load one of these modules using a module load command like:

          module load GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1 x - - - x - GPyTorch/1.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/GRASP-suite/", "title": "GRASP-suite", "text": ""}, {"location": "available_software/detail/GRASP-suite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GRASP-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GRASP-suite, load one of these modules using a module load command like:

          module load GRASP-suite/2023-05-09-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GRASP-suite/2023-05-09-Java-17 x x x x x x"}, {"location": "available_software/detail/GRASS/", "title": "GRASS", "text": ""}, {"location": "available_software/detail/GRASS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GRASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GRASS, load one of these modules using a module load command like:

          module load GRASS/8.2.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GRASS/8.2.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/GROMACS/", "title": "GROMACS", "text": ""}, {"location": "available_software/detail/GROMACS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GROMACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GROMACS, load one of these modules using a module load command like:

          module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2\n
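
          Note that, per the overview below, the CUDA-enabled GROMACS builds are only marked available on the GPU clusters accelgor and joltik. A hypothetical quick check after loading one of them:

          module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1\ngmx --version   # prints the GROMACS version and build configuration (including GPU support)\n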

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2 x - - - x - GROMACS/2021.3-foss-2021a-CUDA-11.3.1 x - - - x - GROMACS/2021.2-fosscuda-2020b x - - - x - GROMACS/2021-foss-2020b - x x x x x GROMACS/2020-foss-2019b - x x - x - GROMACS/2019.4-foss-2019b - x x - x - GROMACS/2019.3-foss-2019b - x x - x -"}, {"location": "available_software/detail/GSL/", "title": "GSL", "text": ""}, {"location": "available_software/detail/GSL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GSL, load one of these modules using a module load command like:

          module load GSL/2.7-intel-compilers-2021.4.0\n
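
          A small sketch for checking a loaded GSL module, assuming the installation includes the standard gsl-config helper script:

          module load GSL/2.7-intel-compilers-2021.4.0\ngsl-config --version   # GSL version provided by the loaded module\ngsl-config --libs   # linker flags for compiling against this GSL\n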

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GSL/2.7-intel-compilers-2021.4.0 x x x - x x GSL/2.7-GCC-12.3.0 x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x GSL/2.7-GCC-11.3.0 x x x x x x GSL/2.7-GCC-11.2.0 x x x x x x GSL/2.7-GCC-10.3.0 x x x x x x GSL/2.6-iccifort-2020.4.304 - x x x x x GSL/2.6-iccifort-2020.1.217 - x x - x x GSL/2.6-iccifort-2019.5.281 - x x - x x GSL/2.6-GCC-10.2.0 x x x x x x GSL/2.6-GCC-9.3.0 - x x x x x GSL/2.6-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GST-plugins-bad/", "title": "GST-plugins-bad", "text": ""}, {"location": "available_software/detail/GST-plugins-bad/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GST-plugins-bad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GST-plugins-bad, load one of these modules using a module load command like:

          module load GST-plugins-bad/1.20.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GST-plugins-bad/1.20.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GST-plugins-base/", "title": "GST-plugins-base", "text": ""}, {"location": "available_software/detail/GST-plugins-base/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GST-plugins-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GST-plugins-base, load one of these modules using a module load command like:

          module load GST-plugins-base/1.20.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GST-plugins-base/1.20.2-GCC-11.3.0 x x x x x x GST-plugins-base/1.18.5-GCC-11.2.0 x x x x x x GST-plugins-base/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GStreamer/", "title": "GStreamer", "text": ""}, {"location": "available_software/detail/GStreamer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GStreamer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GStreamer, load one of these modules using a module load command like:

          module load GStreamer/1.20.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GStreamer/1.20.2-GCC-11.3.0 x x x x x x GStreamer/1.18.5-GCC-11.2.0 x x x x x x GStreamer/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTDB-Tk/", "title": "GTDB-Tk", "text": ""}, {"location": "available_software/detail/GTDB-Tk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTDB-Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GTDB-Tk, load one of these modules using a module load command like:

          module load GTDB-Tk/2.3.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTDB-Tk/2.3.2-foss-2023a x x x x x x GTDB-Tk/2.0.0-intel-2021b x x x - x x GTDB-Tk/1.7.0-intel-2020b - x x - x x GTDB-Tk/1.5.0-intel-2020b - x x - x x GTDB-Tk/1.3.0-intel-2020a-Python-3.8.2 - x x - x x GTDB-Tk/1.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GTK%2B/", "title": "GTK+", "text": ""}, {"location": "available_software/detail/GTK%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GTK+, load one of these modules using a module load command like:

          module load GTK+/3.24.23-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK+/3.24.23-GCCcore-10.2.0 x x x x x x GTK+/3.24.13-GCCcore-8.3.0 - x x - x x GTK+/2.24.33-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GTK2/", "title": "GTK2", "text": ""}, {"location": "available_software/detail/GTK2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GTK2, load one of these modules using a module load command like:

          module load GTK2/2.24.33-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK2/2.24.33-GCCcore-11.3.0 x x x x x x GTK2/2.24.33-GCCcore-10.3.0 - - x - x -"}, {"location": "available_software/detail/GTK3/", "title": "GTK3", "text": ""}, {"location": "available_software/detail/GTK3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GTK3, load one of these modules using a module load command like:

          module load GTK3/3.24.37-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK3/3.24.37-GCCcore-12.3.0 x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x GTK3/3.24.31-GCCcore-11.2.0 x x x x x x GTK3/3.24.29-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTK4/", "title": "GTK4", "text": ""}, {"location": "available_software/detail/GTK4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GTK4, load one of these modules using a module load command like:

          module load GTK4/4.7.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK4/4.7.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GTS/", "title": "GTS", "text": ""}, {"location": "available_software/detail/GTS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GTS, load one of these modules using a module load command like:

          module load GTS/0.7.6-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTS/0.7.6-foss-2019b - x x - x x GTS/0.7.6-GCCcore-12.3.0 x x x x x x GTS/0.7.6-GCCcore-11.3.0 x x x x x x GTS/0.7.6-GCCcore-11.2.0 x x x x x x GTS/0.7.6-GCCcore-10.3.0 x x x x x x GTS/0.7.6-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/GUSHR/", "title": "GUSHR", "text": ""}, {"location": "available_software/detail/GUSHR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GUSHR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GUSHR, load one of these modules using a module load command like:

          module load GUSHR/2020-09-28-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GUSHR/2020-09-28-foss-2021b x x x x x x"}, {"location": "available_software/detail/GapFiller/", "title": "GapFiller", "text": ""}, {"location": "available_software/detail/GapFiller/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GapFiller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GapFiller, load one of these modules using a module load command like:

          module load GapFiller/2.1.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GapFiller/2.1.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Gaussian/", "title": "Gaussian", "text": ""}, {"location": "available_software/detail/Gaussian/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gaussian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Gaussian, load one of these modules using a module load command like:

          module load Gaussian/g16_C.01-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gaussian/g16_C.01-intel-2022a x x x x x x Gaussian/g16_C.01-intel-2019b - x x - x x Gaussian/g16_C.01-iimpi-2020b x x x x x x"}, {"location": "available_software/detail/Gblocks/", "title": "Gblocks", "text": ""}, {"location": "available_software/detail/Gblocks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gblocks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Gblocks, load one of these modules using a module load command like:

          module load Gblocks/0.91b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gblocks/0.91b x x x x x x"}, {"location": "available_software/detail/Gdk-Pixbuf/", "title": "Gdk-Pixbuf", "text": ""}, {"location": "available_software/detail/Gdk-Pixbuf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Gdk-Pixbuf, load one of these modules using a module load command like:

          module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x Gdk-Pixbuf/2.42.8-GCCcore-11.3.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-11.2.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-10.3.0 x x x x x x Gdk-Pixbuf/2.40.0-GCCcore-10.2.0 x x x x x x Gdk-Pixbuf/2.38.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Geant4/", "title": "Geant4", "text": ""}, {"location": "available_software/detail/Geant4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Geant4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Geant4, load one of these modules using a module load command like:

          module load Geant4/11.0.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Geant4/11.0.2-GCC-11.3.0 x x x x x x Geant4/11.0.2-GCC-11.2.0 x x x - x x Geant4/11.0.1-GCC-11.2.0 x x x x x x Geant4/10.7.1-GCC-11.2.0 x x x x x x Geant4/10.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/GeneMark-ET/", "title": "GeneMark-ET", "text": ""}, {"location": "available_software/detail/GeneMark-ET/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GeneMark-ET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GeneMark-ET, load one of these modules using a module load command like:

          module load GeneMark-ET/4.71-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GeneMark-ET/4.71-GCCcore-11.3.0 x x x x x x GeneMark-ET/4.71-GCCcore-11.2.0 x x x x x x GeneMark-ET/4.65-GCCcore-10.2.0 x x x x x x GeneMark-ET/4.57-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GenomeThreader/", "title": "GenomeThreader", "text": ""}, {"location": "available_software/detail/GenomeThreader/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GenomeThreader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GenomeThreader, load one of these modules using a module load command like:

          module load GenomeThreader/1.7.3-Linux_x86_64-64bit\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GenomeThreader/1.7.3-Linux_x86_64-64bit x x x x x x"}, {"location": "available_software/detail/GenomeWorks/", "title": "GenomeWorks", "text": ""}, {"location": "available_software/detail/GenomeWorks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GenomeWorks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GenomeWorks, load one of these modules using a module load command like:

          module load GenomeWorks/2021.02.2-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GenomeWorks/2021.02.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Gerris/", "title": "Gerris", "text": ""}, {"location": "available_software/detail/Gerris/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gerris installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Gerris, load one of these modules using a module load command like:

          module load Gerris/20131206-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gerris/20131206-gompi-2023a x x x x x x"}, {"location": "available_software/detail/GetOrganelle/", "title": "GetOrganelle", "text": ""}, {"location": "available_software/detail/GetOrganelle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GetOrganelle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GetOrganelle, load one of these modules using a module load command like:

          module load GetOrganelle/1.7.5.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GetOrganelle/1.7.5.3-foss-2021b x x x - x x GetOrganelle/1.7.4-pre2-foss-2020b - x x x x x GetOrganelle/1.7.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GffCompare/", "title": "GffCompare", "text": ""}, {"location": "available_software/detail/GffCompare/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GffCompare installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GffCompare, load one of these modules using a module load command like:

          module load GffCompare/0.12.6-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GffCompare/0.12.6-GCC-11.2.0 x x x x x x GffCompare/0.11.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Ghostscript/", "title": "Ghostscript", "text": ""}, {"location": "available_software/detail/Ghostscript/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Ghostscript, load one of these modules using a module load command like:

          module load Ghostscript/10.01.2-GCCcore-12.3.0\n
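
          To confirm which Ghostscript ends up on your PATH after loading, a minimal check:

          module load Ghostscript/10.01.2-GCCcore-12.3.0\ngs --version   # prints the Ghostscript version number\n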

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x Ghostscript/9.56.1-GCCcore-11.3.0 x x x x x x Ghostscript/9.54.0-GCCcore-11.2.0 x x x x x x Ghostscript/9.54.0-GCCcore-10.3.0 x x x x x x Ghostscript/9.53.3-GCCcore-10.2.0 x x x x x x Ghostscript/9.52-GCCcore-9.3.0 - x x - x x Ghostscript/9.50-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GimmeMotifs/", "title": "GimmeMotifs", "text": ""}, {"location": "available_software/detail/GimmeMotifs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GimmeMotifs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GimmeMotifs, load one of these modules using a module load command like:

          module load GimmeMotifs/0.17.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GimmeMotifs/0.17.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Giotto-Suite/", "title": "Giotto-Suite", "text": ""}, {"location": "available_software/detail/Giotto-Suite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Giotto-Suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Giotto-Suite, load one of these modules using a module load command like:

          module load Giotto-Suite/3.0.1-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Giotto-Suite/3.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/GitPython/", "title": "GitPython", "text": ""}, {"location": "available_software/detail/GitPython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GitPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GitPython, load one of these modules using a module load command like:

          module load GitPython/3.1.40-GCCcore-12.3.0\n
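
          GitPython is a Python library, so a minimal sketch for verifying it after loading is to import it (assumptions: the module pulls in a matching Python, and the package exposes a __version__ attribute, as recent GitPython releases do):

          module load GitPython/3.1.40-GCCcore-12.3.0\npython -c 'import git; print(git.__version__)'   # assumed to print 3.1.40 if the import succeeds\n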

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GitPython/3.1.40-GCCcore-12.3.0 x x x x x x GitPython/3.1.31-GCCcore-12.2.0 x x x x x x GitPython/3.1.27-GCCcore-11.3.0 x x x x x x GitPython/3.1.24-GCCcore-11.2.0 x x x - x x GitPython/3.1.14-GCCcore-10.2.0 - x x x x x GitPython/3.1.9-GCCcore-9.3.0-Python-3.8.2 - x x - x x GitPython/3.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GlimmerHMM/", "title": "GlimmerHMM", "text": ""}, {"location": "available_software/detail/GlimmerHMM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GlimmerHMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GlimmerHMM, load one of these modules using a module load command like:

          module load GlimmerHMM/3.0.4c-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GlimmerHMM/3.0.4c-GCC-10.2.0 - x x x x x GlimmerHMM/3.0.4c-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GlobalArrays/", "title": "GlobalArrays", "text": ""}, {"location": "available_software/detail/GlobalArrays/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GlobalArrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GlobalArrays, load one of these modules using a module load command like:

          module load GlobalArrays/5.8-iomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GlobalArrays/5.8-iomkl-2021a x x x x x x GlobalArrays/5.8-intel-2021a - x x - x x"}, {"location": "available_software/detail/GnuTLS/", "title": "GnuTLS", "text": ""}, {"location": "available_software/detail/GnuTLS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GnuTLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GnuTLS, load one of these modules using a module load command like:

          module load GnuTLS/3.7.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GnuTLS/3.7.3-GCCcore-11.2.0 x x x x x x GnuTLS/3.7.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Go/", "title": "Go", "text": ""}, {"location": "available_software/detail/Go/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Go installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Go, load one of these modules using a module load command like:

          module load Go/1.21.6\n
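
          A quick check that the Go toolchain is active after loading:

          module load Go/1.21.6\ngo version   # should report go1.21.6\n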

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Go/1.21.6 x x x x x x Go/1.21.2 x x x x x x Go/1.17.6 x x x - x x Go/1.17.3 - x x - x - Go/1.14 - - x - x -"}, {"location": "available_software/detail/Gradle/", "title": "Gradle", "text": ""}, {"location": "available_software/detail/Gradle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gradle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Gradle, load one of these modules using a module load command like:

          module load Gradle/8.6-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gradle/8.6-Java-17 x x x x x x"}, {"location": "available_software/detail/GraphMap/", "title": "GraphMap", "text": ""}, {"location": "available_software/detail/GraphMap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GraphMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GraphMap, load one of these modules using a module load command like:

          module load GraphMap/0.5.2-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GraphMap/0.5.2-foss-2019b - - x - x x"}, {"location": "available_software/detail/GraphMap2/", "title": "GraphMap2", "text": ""}, {"location": "available_software/detail/GraphMap2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GraphMap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GraphMap2, load one of these modules using a module load command like:

          module load GraphMap2/0.6.4-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GraphMap2/0.6.4-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphene/", "title": "Graphene", "text": ""}, {"location": "available_software/detail/Graphene/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Graphene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Graphene, load one of these modules using a module load command like:

          module load Graphene/1.10.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Graphene/1.10.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GraphicsMagick/", "title": "GraphicsMagick", "text": ""}, {"location": "available_software/detail/GraphicsMagick/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GraphicsMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GraphicsMagick, load one of these modules using a module load command like:

          module load GraphicsMagick/1.3.34-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GraphicsMagick/1.3.34-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphviz/", "title": "Graphviz", "text": ""}, {"location": "available_software/detail/Graphviz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Graphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Graphviz, load one of these modules using a module load command like:

          module load Graphviz/8.1.0-GCCcore-12.3.0\n
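
          To verify that the Graphviz tools are available after loading, a minimal check (dot prints its version to standard error):

          module load Graphviz/8.1.0-GCCcore-12.3.0\ndot -V   # prints the Graphviz version of the loaded build\n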

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Graphviz/8.1.0-GCCcore-12.3.0 x x x x x x Graphviz/5.0.0-GCCcore-11.3.0 x x x x x x Graphviz/2.50.0-GCCcore-11.2.0 x x x x x x Graphviz/2.47.2-GCCcore-10.3.0 x x x x x x Graphviz/2.47.0-GCCcore-10.2.0-Java-11 - x x x x x Graphviz/2.42.2-foss-2019b-Python-3.7.4 - x x - x x Graphviz/2.42.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Greenlet/", "title": "Greenlet", "text": ""}, {"location": "available_software/detail/Greenlet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Greenlet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Greenlet, load one of these modules using a module load command like:

          module load Greenlet/2.0.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Greenlet/2.0.2-foss-2022b x x x x x x Greenlet/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/GroIMP/", "title": "GroIMP", "text": ""}, {"location": "available_software/detail/GroIMP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GroIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using GroIMP, load one of these modules using a module load command like:

          module load GroIMP/1.5-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GroIMP/1.5-Java-1.8 - x x - x x"}, {"location": "available_software/detail/Guile/", "title": "Guile", "text": ""}, {"location": "available_software/detail/Guile/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Guile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Guile, load one of these modules using a module load command like:

          module load Guile/3.0.7-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Guile/3.0.7-GCCcore-11.2.0 x x x x x x Guile/2.2.7-GCCcore-10.3.0 - x x - x x Guile/1.8.8-GCCcore-9.3.0 - x x - x x Guile/1.8.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Guppy/", "title": "Guppy", "text": ""}, {"location": "available_software/detail/Guppy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Guppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Guppy, load one of these modules using a module load command like:

          module load Guppy/6.5.7-gpu\n
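
          Guppy is installed in separate -gpu and -cpu variants; per the overview below, each version has different cluster availability for the two flavours (for 6.5.7, the -gpu build is only marked on accelgor, donphan and joltik), so pick the variant that matches the cluster you are on. A hedged check after loading, assuming guppy_basecaller is the main executable in this build:

          module load Guppy/6.5.7-gpu\nguppy_basecaller --version   # assumed entry point; prints the Guppy version\n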

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Guppy/6.5.7-gpu x - x - x - Guppy/6.5.7-cpu x x - x - x Guppy/6.4.6-gpu x - x - x - Guppy/6.4.6-cpu - x x x x x Guppy/6.4.2-gpu x - - - x - Guppy/6.4.2-cpu - x x - x x Guppy/6.3.8-gpu x - - - x - Guppy/6.3.8-cpu - x x - x x Guppy/6.3.7-gpu x - - - x - Guppy/6.3.7-cpu - x x - x x Guppy/6.1.7-gpu x - - - x - Guppy/6.1.7-cpu - x x - x x Guppy/6.1.2-gpu x - - - x - Guppy/6.1.2-cpu - x x - x x Guppy/6.0.1-gpu x - - - x - Guppy/6.0.1-cpu - x x - x x Guppy/5.0.16-gpu x - - - x - Guppy/5.0.16-cpu - x x - x - Guppy/5.0.15-gpu x - - - x - Guppy/5.0.15-cpu - x x - x x Guppy/5.0.14-gpu - - - - x - Guppy/5.0.14-cpu - x x - x x Guppy/5.0.11-gpu - - - - x - Guppy/5.0.11-cpu - x x - x x Guppy/5.0.7-gpu - - - - x - Guppy/5.0.7-cpu - x x - x x Guppy/4.4.1-cpu - x x - x - Guppy/4.2.2-cpu - x x - x - Guppy/4.0.15-cpu - x x - x - Guppy/3.5.2-cpu - - x - x -"}, {"location": "available_software/detail/Gurobi/", "title": "Gurobi", "text": ""}, {"location": "available_software/detail/Gurobi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Gurobi, load one of these modules using a module load command like:

          module load Gurobi/11.0.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gurobi/11.0.0-GCCcore-12.3.0 x x x x x x Gurobi/9.5.2-GCCcore-11.3.0 x x x x x x Gurobi/9.5.0-GCCcore-11.2.0 x x x x x x Gurobi/9.1.1-GCCcore-10.2.0 - x x x x x Gurobi/9.1.0 - x x - x -"}, {"location": "available_software/detail/HAL/", "title": "HAL", "text": ""}, {"location": "available_software/detail/HAL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HAL, load one of these modules using a module load command like:

          module load HAL/2.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HAL/2.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/HDBSCAN/", "title": "HDBSCAN", "text": ""}, {"location": "available_software/detail/HDBSCAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDBSCAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HDBSCAN, load one of these modules using a module load command like:

          module load HDBSCAN/0.8.29-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDBSCAN/0.8.29-foss-2022a x x x x x x"}, {"location": "available_software/detail/HDDM/", "title": "HDDM", "text": ""}, {"location": "available_software/detail/HDDM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDDM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HDDM, load one of these modules using a module load command like:

          module load HDDM/0.7.5-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDDM/0.7.5-intel-2019b-Python-3.7.4 - x - - - x HDDM/0.7.5-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/HDF/", "title": "HDF", "text": ""}, {"location": "available_software/detail/HDF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HDF, load one of these modules using a module load command like:

          module load HDF/4.2.16-2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x HDF/4.2.15-GCCcore-11.3.0 x x x x x x HDF/4.2.15-GCCcore-11.2.0 x x x x x x HDF/4.2.15-GCCcore-10.3.0 x x x x x x HDF/4.2.15-GCCcore-10.2.0 - x x x x x HDF/4.2.15-GCCcore-9.3.0 - - x - x x HDF/4.2.14-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/HDF5/", "title": "HDF5", "text": ""}, {"location": "available_software/detail/HDF5/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDF5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HDF5, load one of these modules using a module load command like:

          module load HDF5/1.14.0-gompi-2023a\n
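
          A minimal sketch for checking a loaded HDF5 module, assuming the standard HDF5 command-line tools such as h5dump are part of the installation:

          module load HDF5/1.14.0-gompi-2023a\nh5dump --version   # reports the HDF5 library version of the loaded build\n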

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDF5/1.14.0-gompi-2023a x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x HDF5/1.13.1-gompi-2022a x x x - x x HDF5/1.12.2-iimpi-2022a x x x x x x HDF5/1.12.2-gompi-2022a x x x x x x HDF5/1.12.1-iimpi-2021b x x x x x x HDF5/1.12.1-gompi-2021b x x x x x x HDF5/1.10.8-gompi-2021b x x x - x x HDF5/1.10.7-iompi-2021a x x x x x x HDF5/1.10.7-iimpi-2021a - x x - x x HDF5/1.10.7-iimpi-2020b - x x x x x HDF5/1.10.7-gompic-2020b x - - - x - HDF5/1.10.7-gompi-2021a x x x x x x HDF5/1.10.7-gompi-2020b x x x x x x HDF5/1.10.6-iimpi-2020a x x x x x x HDF5/1.10.6-gompi-2020a - x x - x x HDF5/1.10.5-iimpi-2019b - x x - x x HDF5/1.10.5-gompi-2019b x x x - x x"}, {"location": "available_software/detail/HH-suite/", "title": "HH-suite", "text": ""}, {"location": "available_software/detail/HH-suite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HH-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HH-suite, load one of these modules using a module load command like:

          module load HH-suite/3.3.0-gompic-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HH-suite/3.3.0-gompic-2020b x - - - x - HH-suite/3.3.0-gompi-2022a x x x x x x HH-suite/3.3.0-gompi-2021b x - x - x - HH-suite/3.3.0-gompi-2021a x x x - x x HH-suite/3.3.0-gompi-2020b - x x x x x HH-suite/3.2.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/HISAT2/", "title": "HISAT2", "text": ""}, {"location": "available_software/detail/HISAT2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HISAT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HISAT2, load one of these modules using a module load command like:

          module load HISAT2/2.2.1-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HISAT2/2.2.1-gompi-2022a x x x x x x HISAT2/2.2.1-gompi-2021b x x x x x x HISAT2/2.2.1-gompi-2020b - x x x x x"}, {"location": "available_software/detail/HMMER/", "title": "HMMER", "text": ""}, {"location": "available_software/detail/HMMER/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HMMER installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HMMER, load one of these modules using a module load command like:

          module load HMMER/3.4-gompi-2023a\n
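
          To check that HMMER is on your PATH after loading, a quick look at one of its tools:

          module load HMMER/3.4-gompi-2023a\nhmmsearch -h   # the help output starts with a banner showing the HMMER version\n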

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HMMER/3.4-gompi-2023a x x x x x x HMMER/3.3.2-iimpi-2021b x x x - x x HMMER/3.3.2-iimpi-2020b - x x x x x HMMER/3.3.2-gompic-2020b x - - - x - HMMER/3.3.2-gompi-2022b x x x x x x HMMER/3.3.2-gompi-2022a x x x x x x HMMER/3.3.2-gompi-2021b x x x - x x HMMER/3.3.2-gompi-2021a x x x - x x HMMER/3.3.2-gompi-2020b x x x x x x HMMER/3.3.2-gompi-2020a - x x - x x HMMER/3.3.2-gompi-2019b - x x - x x HMMER/3.3.1-iimpi-2020a - x x - x x HMMER/3.3.1-gompi-2020a - x x - x x HMMER/3.2.1-iimpi-2019b - x x - x x HMMER/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/HMMER2/", "title": "HMMER2", "text": ""}, {"location": "available_software/detail/HMMER2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HMMER2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HMMER2, load one of these modules using a module load command like:

          module load HMMER2/2.3.2-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HMMER2/2.3.2-GCC-10.3.0 - x x - x x HMMER2/2.3.2-GCC-10.2.0 - x x x x x HMMER2/2.3.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HPL/", "title": "HPL", "text": ""}, {"location": "available_software/detail/HPL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HPL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HPL, load one of these modules using a module load command like:

          module load HPL/2.3-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HPL/2.3-intel-2019b - x x - x x HPL/2.3-iibff-2020b - x - - - - HPL/2.3-gobff-2020b - x - - - - HPL/2.3-foss-2023b x x x x x x HPL/2.3-foss-2019b - x x - x x HPL/2.0.15-intel-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/HTSeq/", "title": "HTSeq", "text": ""}, {"location": "available_software/detail/HTSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HTSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HTSeq, load one of these modules using a module load command like:

          module load HTSeq/2.0.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HTSeq/2.0.2-foss-2022a x x x x x x HTSeq/0.11.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/HTSlib/", "title": "HTSlib", "text": ""}, {"location": "available_software/detail/HTSlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HTSlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HTSlib, load one of these modules using a module load command like:

          module load HTSlib/1.18-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HTSlib/1.18-GCC-12.3.0 x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x HTSlib/1.15.1-GCC-11.3.0 x x x x x x HTSlib/1.14-GCC-11.2.0 x x x x x x HTSlib/1.12-GCC-10.3.0 x x x - x x HTSlib/1.12-GCC-10.2.0 - x x - x x HTSlib/1.11-GCC-10.2.0 x x x x x x HTSlib/1.10.2-iccifort-2019.5.281 - x x - x x HTSlib/1.10.2-GCC-9.3.0 - x x - x x HTSlib/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HTSplotter/", "title": "HTSplotter", "text": ""}, {"location": "available_software/detail/HTSplotter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HTSplotter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HTSplotter, load one of these modules using a module load command like:

          module load HTSplotter/2.11-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HTSplotter/2.11-foss-2022b x x x x x x HTSplotter/0.15-foss-2022a x x x x x x"}, {"location": "available_software/detail/Hadoop/", "title": "Hadoop", "text": ""}, {"location": "available_software/detail/Hadoop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hadoop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Hadoop, load one of these modules using a module load command like:

          module load Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8 - - x - x - Hadoop/2.10.0-GCCcore-10.2.0-native - x - - - - Hadoop/2.10.0-GCCcore-8.3.0-native - x x - x x"}, {"location": "available_software/detail/HarfBuzz/", "title": "HarfBuzz", "text": ""}, {"location": "available_software/detail/HarfBuzz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using HarfBuzz, load one of these modules using a module load command like:

          module load HarfBuzz/5.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x HarfBuzz/4.2.1-GCCcore-11.3.0 x x x x x x HarfBuzz/2.8.2-GCCcore-11.2.0 x x x x x x HarfBuzz/2.8.1-GCCcore-10.3.0 x x x x x x HarfBuzz/2.6.7-GCCcore-10.2.0 x x x x x x HarfBuzz/2.6.4-GCCcore-9.3.0 - x x - x x HarfBuzz/2.6.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/HiCExplorer/", "title": "HiCExplorer", "text": ""}, {"location": "available_software/detail/HiCExplorer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HiCExplorer, load one of these modules using a module load command like:

          module load HiCExplorer/3.7.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HiCExplorer/3.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/HiCMatrix/", "title": "HiCMatrix", "text": ""}, {"location": "available_software/detail/HiCMatrix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HiCMatrix installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HiCMatrix, load one of these modules using a module load command like:

          module load HiCMatrix/17-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HiCMatrix/17-foss-2022a x x x x x x"}, {"location": "available_software/detail/HighFive/", "title": "HighFive", "text": ""}, {"location": "available_software/detail/HighFive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HighFive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HighFive, load one of these modules using a module load command like:

          module load HighFive/2.7.1-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HighFive/2.7.1-gompi-2023a x x x x x x"}, {"location": "available_software/detail/Highway/", "title": "Highway", "text": ""}, {"location": "available_software/detail/Highway/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Highway installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Highway, load one of these modules using a module load command like:

          module load Highway/1.0.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Highway/1.0.4-GCCcore-12.3.0 x x x x x x Highway/1.0.4-GCCcore-11.3.0 x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x Highway/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Horovod/", "title": "Horovod", "text": ""}, {"location": "available_software/detail/Horovod/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Horovod installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Horovod, load one of these modules using a module load command like:

          module load Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0\n
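
          To see which frameworks and controllers the loaded Horovod build supports, a minimal sketch (horovodrun --check-build is Horovod's own build-check command; the module is assumed to put it on your PATH):

          module load Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0\nhorovodrun --check-build\n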

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Horovod/0.23.0-foss-2021a-CUDA-11.3.1-PyTorch-1.10.0 x - - - - - Horovod/0.22.0-fosscuda-2020b-PyTorch-1.8.1 x - - - - - Horovod/0.21.3-fosscuda-2020b-PyTorch-1.7.1 x - - - x - Horovod/0.21.1-fosscuda-2020b-TensorFlow-2.4.1 x - - - x -"}, {"location": "available_software/detail/HyPo/", "title": "HyPo", "text": ""}, {"location": "available_software/detail/HyPo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HyPo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HyPo, load one of these modules using a module load command like:

          module load HyPo/1.0.3-GCC-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HyPo/1.0.3-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/Hybpiper/", "title": "Hybpiper", "text": ""}, {"location": "available_software/detail/Hybpiper/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hybpiper installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hybpiper, load one of these modules using a module load command like:

          module load Hybpiper/2.1.6-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hybpiper/2.1.6-foss-2022b x x x x x x"}, {"location": "available_software/detail/Hydra/", "title": "Hydra", "text": ""}, {"location": "available_software/detail/Hydra/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hydra installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hydra, load one of these modules using a module load command like:

          module load Hydra/1.1.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hydra/1.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Hyperopt/", "title": "Hyperopt", "text": ""}, {"location": "available_software/detail/Hyperopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hyperopt, load one of these modules using a module load command like:

          module load Hyperopt/0.2.7-foss-2022a\n
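
          Hyperopt is a Python package, so a minimal sketch to confirm it is importable (assuming the module also brings the matching Python into your environment, as foss-toolchain modules typically do):

          module load Hyperopt/0.2.7-foss-2022a\npython -c 'import hyperopt; print(hyperopt.__version__)'\n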

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hyperopt/0.2.7-foss-2022a x x x x x x Hyperopt/0.2.7-foss-2021a x x x - x x"}, {"location": "available_software/detail/Hypre/", "title": "Hypre", "text": ""}, {"location": "available_software/detail/Hypre/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hypre installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hypre, load one of these modules using a module load command like:

          module load Hypre/2.25.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hypre/2.25.0-foss-2022a x x x x x x Hypre/2.24.0-intel-2021b x x x x x x Hypre/2.21.0-foss-2021a - x x - x x Hypre/2.20.0-foss-2020b - x x x x x Hypre/2.18.2-intel-2019b - x x - x x Hypre/2.18.2-foss-2020a - x x - x x Hypre/2.18.2-foss-2019b x x x - x x"}, {"location": "available_software/detail/ICU/", "title": "ICU", "text": ""}, {"location": "available_software/detail/ICU/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ICU installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ICU, load one of these modules using a module load command like:

          module load ICU/73.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ICU/73.2-GCCcore-12.3.0 x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x ICU/71.1-GCCcore-11.3.0 x x x x x x ICU/69.1-GCCcore-11.2.0 x x x x x x ICU/69.1-GCCcore-10.3.0 x x x x x x ICU/67.1-GCCcore-10.2.0 x x x x x x ICU/66.1-GCCcore-9.3.0 - x x - x x ICU/64.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/IDBA-UD/", "title": "IDBA-UD", "text": ""}, {"location": "available_software/detail/IDBA-UD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IDBA-UD installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IDBA-UD, load one of these modules using a module load command like:

          module load IDBA-UD/1.1.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IDBA-UD/1.1.3-GCC-11.2.0 x x x - x x IDBA-UD/1.1.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/IGMPlot/", "title": "IGMPlot", "text": ""}, {"location": "available_software/detail/IGMPlot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IGMPlot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IGMPlot, load one of these modules using a module load command like:

          module load IGMPlot/2.4.2-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IGMPlot/2.4.2-iccifort-2019.5.281 - x - - - - IGMPlot/2.4.2-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/IGV/", "title": "IGV", "text": ""}, {"location": "available_software/detail/IGV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IGV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IGV, load one of these modules using a module load command like:

          module load IGV/2.9.4-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IGV/2.9.4-Java-11 - x x - x x IGV/2.8.0-Java-11 - x x - x x"}, {"location": "available_software/detail/IOR/", "title": "IOR", "text": ""}, {"location": "available_software/detail/IOR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IOR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IOR, load one of these modules using a module load command like:

          module load IOR/3.2.1-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IOR/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/IPython/", "title": "IPython", "text": ""}, {"location": "available_software/detail/IPython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IPython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IPython, load one of these modules using a module load command like:

          module load IPython/8.14.0-GCCcore-12.3.0\n
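
          A minimal sketch to confirm which IPython ends up on your PATH after loading:

          module load IPython/8.14.0-GCCcore-12.3.0\nipython --version\n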

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IPython/8.14.0-GCCcore-12.3.0 x x x x x x IPython/8.14.0-GCCcore-12.2.0 x x x x x x IPython/8.5.0-GCCcore-11.3.0 x x x x x x IPython/7.26.0-GCCcore-11.2.0 x x x x x x IPython/7.25.0-GCCcore-10.3.0 x x x x x x IPython/7.18.1-GCCcore-10.2.0 x x x x x x IPython/7.15.0-intel-2020a-Python-3.8.2 x x x x x x IPython/7.15.0-foss-2020a-Python-3.8.2 - x x - x x IPython/7.9.0-intel-2019b-Python-3.7.4 - x x - x x IPython/7.9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/IQ-TREE/", "title": "IQ-TREE", "text": ""}, {"location": "available_software/detail/IQ-TREE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IQ-TREE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IQ-TREE, load one of these modules using a module load command like:

          module load IQ-TREE/2.2.2.6-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IQ-TREE/2.2.2.6-gompi-2022b x x x x x x IQ-TREE/2.2.2.6-gompi-2022a x x x x x x IQ-TREE/2.2.2.3-gompi-2022a x x x x x x IQ-TREE/2.2.1-gompi-2021b x x x - x x IQ-TREE/1.6.12-intel-2019b - x x - x x"}, {"location": "available_software/detail/IRkernel/", "title": "IRkernel", "text": ""}, {"location": "available_software/detail/IRkernel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IRkernel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IRkernel, load one of these modules using a module load command like:

          module load IRkernel/1.2-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IRkernel/1.2-foss-2021a-R-4.1.0 - x x - x x IRkernel/1.1-foss-2019b-R-3.6.2-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ISA-L/", "title": "ISA-L", "text": ""}, {"location": "available_software/detail/ISA-L/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ISA-L installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ISA-L, load one of these modules using a module load command like:

          module load ISA-L/2.30.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ISA-L/2.30.0-GCCcore-11.3.0 x x x x x x ISA-L/2.30.0-GCCcore-11.2.0 x x x - x x ISA-L/2.30.0-GCCcore-10.3.0 x x x - x x ISA-L/2.30.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ITK/", "title": "ITK", "text": ""}, {"location": "available_software/detail/ITK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ITK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ITK, load one of these modules using a module load command like:

          module load ITK/5.2.1-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ITK/5.2.1-fosscuda-2020b x - - - x - ITK/5.2.1-foss-2022a x x x x x x ITK/5.2.1-foss-2020b - x x x x x ITK/5.1.2-fosscuda-2020b - - - - x - ITK/5.0.1-foss-2019b-Python-3.7.4 - x x - x x ITK/4.13.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ImageMagick/", "title": "ImageMagick", "text": ""}, {"location": "available_software/detail/ImageMagick/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ImageMagick, load one of these modules using a module load command like:

          module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n
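
          A minimal sketch to check the build and its delegate libraries (assuming the ImageMagick 7 magick front-end is provided by the module):

          module load ImageMagick/7.1.1-15-GCCcore-12.3.0\nmagick -version\n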

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x ImageMagick/7.1.0-37-GCCcore-11.3.0 x x x x x x ImageMagick/7.1.0-4-GCCcore-11.2.0 x x x x x x ImageMagick/7.0.11-14-GCCcore-10.3.0 x x x x x x ImageMagick/7.0.10-35-GCCcore-10.2.0 x x x x x x ImageMagick/7.0.10-1-GCCcore-9.3.0 - x x - x x ImageMagick/7.0.9-5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Imath/", "title": "Imath", "text": ""}, {"location": "available_software/detail/Imath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Imath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Imath, load one of these modules using a module load command like:

          module load Imath/3.1.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Imath/3.1.7-GCCcore-12.3.0 x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x Imath/3.1.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Inferelator/", "title": "Inferelator", "text": ""}, {"location": "available_software/detail/Inferelator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Inferelator installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Inferelator, load one of these modules using a module load command like:

          module load Inferelator/0.6.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Inferelator/0.6.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/Infernal/", "title": "Infernal", "text": ""}, {"location": "available_software/detail/Infernal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Infernal installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Infernal, load one of these modules using a module load command like:

          module load Infernal/1.1.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Infernal/1.1.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/InterProScan/", "title": "InterProScan", "text": ""}, {"location": "available_software/detail/InterProScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which InterProScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using InterProScan, load one of these modules using a module load command like:

          module load InterProScan/5.62-94.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty InterProScan/5.62-94.0-foss-2022b x x x x x x InterProScan/5.52-86.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/IonQuant/", "title": "IonQuant", "text": ""}, {"location": "available_software/detail/IonQuant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IonQuant installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IonQuant, load one of these modules using a module load command like:

          module load IonQuant/1.10.12-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IonQuant/1.10.12-Java-11 x x x x x x"}, {"location": "available_software/detail/IsoQuant/", "title": "IsoQuant", "text": ""}, {"location": "available_software/detail/IsoQuant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IsoQuant installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IsoQuant, load one of these modules using a module load command like:

          module load IsoQuant/3.3.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IsoQuant/3.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/IsoSeq/", "title": "IsoSeq", "text": ""}, {"location": "available_software/detail/IsoSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IsoSeq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IsoSeq, load one of these modules using a module load command like:

          module load IsoSeq/4.0.0-linux-x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IsoSeq/4.0.0-linux-x86_64 x x x x x x IsoSeq/3.8.2-linux-x86_64 x x x x x x"}, {"location": "available_software/detail/JAGS/", "title": "JAGS", "text": ""}, {"location": "available_software/detail/JAGS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JAGS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JAGS, load one of these modules using a module load command like:

          module load JAGS/4.3.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JAGS/4.3.2-foss-2022b x x x x x x JAGS/4.3.1-foss-2022a x x x x x x JAGS/4.3.0-foss-2021b x x x - x x JAGS/4.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/JSON-GLib/", "title": "JSON-GLib", "text": ""}, {"location": "available_software/detail/JSON-GLib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JSON-GLib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JSON-GLib, load one of these modules using a module load command like:

          module load JSON-GLib/1.6.2-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JSON-GLib/1.6.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Jansson/", "title": "Jansson", "text": ""}, {"location": "available_software/detail/Jansson/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Jansson installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Jansson, load one of these modules using a module load command like:

          module load Jansson/2.13.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Jansson/2.13.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/JasPer/", "title": "JasPer", "text": ""}, {"location": "available_software/detail/JasPer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JasPer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JasPer, load one of these modules using a module load command like:

          module load JasPer/4.0.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JasPer/4.0.0-GCCcore-12.3.0 x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x JasPer/2.0.33-GCCcore-11.3.0 x x x x x x JasPer/2.0.33-GCCcore-11.2.0 x x x x x x JasPer/2.0.28-GCCcore-10.3.0 x x x x x x JasPer/2.0.24-GCCcore-10.2.0 x x x x x x JasPer/2.0.14-GCCcore-9.3.0 - x x - x x JasPer/2.0.14-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Java/", "title": "Java", "text": ""}, {"location": "available_software/detail/Java/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Java installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Java, load one of these modules using a module load command like:

          module load Java/17.0.6\n
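
          A minimal sketch to confirm which JDK is active after loading (note that java -version prints to stderr):

          module load Java/17.0.6\njava -version\n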

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Java/17.0.6 x x x x x x Java/17(@Java/17.0.6) x x x x x x Java/13.0.2 - x x - x x Java/13(@Java/13.0.2) - x x - x x Java/11.0.20 x x x x x x Java/11.0.18 x - - x x - Java/11.0.16 x x x x x x Java/11.0.2 x x x - x x Java/11(@Java/11.0.20) x x x x x x Java/1.8.0_311 x - x x x x Java/1.8.0_241 - x - - - - Java/1.8.0_221 - x - - - - Java/1.8(@Java/1.8.0_311) x - x x x x Java/1.8(@Java/1.8.0_241) - x - - - -"}, {"location": "available_software/detail/Jellyfish/", "title": "Jellyfish", "text": ""}, {"location": "available_software/detail/Jellyfish/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Jellyfish installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Jellyfish, load one of these modules using a module load command like:

          module load Jellyfish/2.3.0-GCC-11.3.0\n
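
          A minimal sketch to verify the k-mer counter is available (assuming the jellyfish binary supports --version, as recent releases do):

          module load Jellyfish/2.3.0-GCC-11.3.0\njellyfish --version\n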

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Jellyfish/2.3.0-GCC-11.3.0 x x x x x x Jellyfish/2.3.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/JsonCpp/", "title": "JsonCpp", "text": ""}, {"location": "available_software/detail/JsonCpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JsonCpp, load one of these modules using a module load command like:

          module load JsonCpp/1.9.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x JsonCpp/1.9.5-GCCcore-12.2.0 x x x x x x JsonCpp/1.9.5-GCCcore-11.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-11.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-9.3.0 - x x - x x JsonCpp/1.9.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Judy/", "title": "Judy", "text": ""}, {"location": "available_software/detail/Judy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Judy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Judy, load one of these modules using a module load command like:

          module load Judy/1.0.5-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Judy/1.0.5-GCCcore-11.3.0 x x x x x x Judy/1.0.5-GCCcore-11.2.0 x x x x x x Judy/1.0.5-GCCcore-10.3.0 x x x - x x Judy/1.0.5-GCCcore-10.2.0 - x x x x x Judy/1.0.5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Julia/", "title": "Julia", "text": ""}, {"location": "available_software/detail/Julia/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Julia installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Julia, load one of these modules using a module load command like:

          module load Julia/1.9.3-linux-x86_64\n
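
          A minimal sketch to check the Julia runtime after loading:

          module load Julia/1.9.3-linux-x86_64\njulia --version\njulia -e 'println(VERSION)'\n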

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Julia/1.9.3-linux-x86_64 x x x x x x Julia/1.7.2-linux-x86_64 x x x x x x Julia/1.6.2-linux-x86_64 - x x - x x"}, {"location": "available_software/detail/JupyterHub/", "title": "JupyterHub", "text": ""}, {"location": "available_software/detail/JupyterHub/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JupyterHub installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JupyterHub, load one of these modules using a module load command like:

          module load JupyterHub/4.0.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JupyterHub/4.0.1-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/JupyterLab/", "title": "JupyterLab", "text": ""}, {"location": "available_software/detail/JupyterLab/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JupyterLab, load one of these modules using a module load command like:

          module load JupyterLab/4.0.5-GCCcore-12.3.0\n
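
          A minimal sketch to confirm the JupyterLab version that gets loaded (assuming the module provides the jupyter launcher):

          module load JupyterLab/4.0.5-GCCcore-12.3.0\njupyter lab --version\n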

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x JupyterLab/4.0.3-GCCcore-12.2.0 x x x x x x JupyterLab/3.5.0-GCCcore-11.3.0 x x x x x x JupyterLab/3.1.6-GCCcore-11.2.0 x x x - x x JupyterLab/3.0.16-GCCcore-10.3.0 x - x - x - JupyterLab/2.2.8-GCCcore-10.2.0 x x x x x x JupyterLab/1.2.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/JupyterNotebook/", "title": "JupyterNotebook", "text": ""}, {"location": "available_software/detail/JupyterNotebook/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JupyterNotebook, load one of these modules using a module load command like:

          module load JupyterNotebook/7.0.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JupyterNotebook/7.0.3-GCCcore-12.2.0 x x x x x x JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x JupyterNotebook/6.4.12-SAGE-10.2 x x x x x x JupyterNotebook/6.4.12-SAGE-10.1 x x x x x x JupyterNotebook/6.4.12-SAGE-9.8 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.2.0-IPython-7.26.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-10.3.0-IPython-7.25.0 x x x x x x JupyterNotebook/6.1.4-GCCcore-10.2.0-IPython-7.18.1 x x x x x x JupyterNotebook/6.0.3-intel-2020a-Python-3.8.2-IPython-7.15.0 x x x x x x JupyterNotebook/6.0.3-foss-2020a-Python-3.8.2-IPython-7.15.0 - x x - x x JupyterNotebook/6.0.2-intel-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x JupyterNotebook/6.0.2-foss-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x"}, {"location": "available_software/detail/KMC/", "title": "KMC", "text": ""}, {"location": "available_software/detail/KMC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KMC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KMC, load one of these modules using a module load command like:

          module load KMC/3.2.1-GCC-11.2.0-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KMC/3.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x KMC/3.2.1-GCC-11.2.0 x x x - x x KMC/3.1.2rc1-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/KaHIP/", "title": "KaHIP", "text": ""}, {"location": "available_software/detail/KaHIP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KaHIP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KaHIP, load one of these modules using a module load command like:

          module load KaHIP/3.14-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KaHIP/3.14-gompi-2022a - - - x - -"}, {"location": "available_software/detail/Kaleido/", "title": "Kaleido", "text": ""}, {"location": "available_software/detail/Kaleido/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kaleido installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kaleido, load one of these modules using a module load command like:

          module load Kaleido/0.1.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kaleido/0.1.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Kalign/", "title": "Kalign", "text": ""}, {"location": "available_software/detail/Kalign/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kalign installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kalign, load one of these modules using a module load command like:

          module load Kalign/3.3.5-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kalign/3.3.5-GCCcore-11.3.0 x x x x x x Kalign/3.3.2-GCCcore-11.2.0 x - x - x - Kalign/3.3.1-GCCcore-10.3.0 x x x - x x Kalign/3.3.1-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Kent_tools/", "title": "Kent_tools", "text": ""}, {"location": "available_software/detail/Kent_tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kent_tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kent_tools, load one of these modules using a module load command like:

          module load Kent_tools/20190326-linux.x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kent_tools/20190326-linux.x86_64 - - x - x - Kent_tools/422-GCC-11.2.0 x x x x x x Kent_tools/411-GCC-10.2.0 - x x x x x Kent_tools/401-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Keras/", "title": "Keras", "text": ""}, {"location": "available_software/detail/Keras/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Keras installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Keras, load one of these modules using a module load command like:

          module load Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Keras/2.4.3-fosscuda-2020b - - - - x - Keras/2.4.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/KerasTuner/", "title": "KerasTuner", "text": ""}, {"location": "available_software/detail/KerasTuner/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KerasTuner installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KerasTuner, load one of these modules using a module load command like:

          module load KerasTuner/1.3.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KerasTuner/1.3.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/Kraken/", "title": "Kraken", "text": ""}, {"location": "available_software/detail/Kraken/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kraken installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kraken, load one of these modules using a module load command like:

          module load Kraken/1.1.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kraken/1.1.1-GCCcore-10.2.0 - x x x x x Kraken/1.1.1-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/Kraken2/", "title": "Kraken2", "text": ""}, {"location": "available_software/detail/Kraken2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kraken2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kraken2, load one of these modules using a module load command like:

          module load Kraken2/2.1.2-gompi-2021a\n
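
          A minimal sketch to verify the classifier after loading (kraken2 --version is part of the standard Kraken2 command-line interface):

          module load Kraken2/2.1.2-gompi-2021a\nkraken2 --version\n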

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kraken2/2.1.2-gompi-2021a - x x x x x Kraken2/2.0.9-beta-gompi-2020a-Perl-5.30.2 - x x - x x"}, {"location": "available_software/detail/KrakenUniq/", "title": "KrakenUniq", "text": ""}, {"location": "available_software/detail/KrakenUniq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KrakenUniq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KrakenUniq, load one of these modules using a module load command like:

          module load KrakenUniq/1.0.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KrakenUniq/1.0.3-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/KronaTools/", "title": "KronaTools", "text": ""}, {"location": "available_software/detail/KronaTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KronaTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KronaTools, load one of these modules using a module load command like:

          module load KronaTools/2.8.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x KronaTools/2.8.1-GCCcore-11.3.0 x x x x x x KronaTools/2.8-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/LAME/", "title": "LAME", "text": ""}, {"location": "available_software/detail/LAME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LAME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LAME, load one of these modules using a module load command like:

          module load LAME/3.100-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LAME/3.100-GCCcore-12.3.0 x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x LAME/3.100-GCCcore-11.3.0 x x x x x x LAME/3.100-GCCcore-11.2.0 x x x x x x LAME/3.100-GCCcore-10.3.0 x x x x x x LAME/3.100-GCCcore-10.2.0 x x x x x x LAME/3.100-GCCcore-9.3.0 - x x - x x LAME/3.100-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/LAMMPS/", "title": "LAMMPS", "text": ""}, {"location": "available_software/detail/LAMMPS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LAMMPS, load one of these modules using a module load command like:

          module load LAMMPS/patch_20Nov2019-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LAMMPS/patch_20Nov2019-intel-2019b - x - - - - LAMMPS/23Jun2022-foss-2021b-kokkos-CUDA-11.4.1 x - - - x - LAMMPS/23Jun2022-foss-2021b-kokkos x x x - x x LAMMPS/23Jun2022-foss-2021a-kokkos - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos-OCTP - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos - - x - x x LAMMPS/7Aug2019-foss-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-intel-2020a-Python-3.8.2-kokkos - x x - x x LAMMPS/3Mar2020-intel-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-foss-2019b-Python-3.7.4-kokkos - x x - x x"}, {"location": "available_software/detail/LAST/", "title": "LAST", "text": ""}, {"location": "available_software/detail/LAST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LAST installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LAST, load one of these modules using a module load command like:

          module load LAST/1179-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LAST/1179-GCC-10.2.0 - x x x x x LAST/1045-intel-2019b - x x - x x"}, {"location": "available_software/detail/LASTZ/", "title": "LASTZ", "text": ""}, {"location": "available_software/detail/LASTZ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LASTZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LASTZ, load one of these modules using a module load command like:

          module load LASTZ/1.04.22-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LASTZ/1.04.22-GCC-12.3.0 x x x x x x LASTZ/1.04.03-foss-2019b - x x - x x"}, {"location": "available_software/detail/LDC/", "title": "LDC", "text": ""}, {"location": "available_software/detail/LDC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LDC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LDC, load one of these modules using a module load command like:

          module load LDC/1.30.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LDC/1.30.0-GCCcore-11.3.0 x x x x x x LDC/1.25.1-GCCcore-10.2.0 - x x x x x LDC/1.24.0-x86_64 x x x x x x LDC/0.17.6-x86_64 - x x x x x"}, {"location": "available_software/detail/LERC/", "title": "LERC", "text": ""}, {"location": "available_software/detail/LERC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LERC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LERC, load one of these modules using a module load command like:

          module load LERC/4.0.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LERC/4.0.0-GCCcore-12.3.0 x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x LERC/4.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LIANA%2B/", "title": "LIANA+", "text": ""}, {"location": "available_software/detail/LIANA%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LIANA+ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LIANA+, load one of these modules using a module load command like:

          module load LIANA+/1.0.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LIANA+/1.0.1-foss-2022a x x x x - x"}, {"location": "available_software/detail/LIBSVM/", "title": "LIBSVM", "text": ""}, {"location": "available_software/detail/LIBSVM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LIBSVM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LIBSVM, load one of these modules using a module load command like:

          module load LIBSVM/3.30-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LIBSVM/3.30-GCCcore-11.3.0 x x x x x x LIBSVM/3.25-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/LLVM/", "title": "LLVM", "text": ""}, {"location": "available_software/detail/LLVM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LLVM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LLVM, load one of these modules using a module load command like:

          module load LLVM/16.0.6-GCCcore-12.3.0\n
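
          A minimal sketch to inspect the LLVM installation that gets loaded (llvm-config is LLVM's standard query tool; the module is assumed to expose it):

          module load LLVM/16.0.6-GCCcore-12.3.0\nllvm-config --version\n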

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LLVM/16.0.6-GCCcore-12.3.0 x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x LLVM/14.0.6-GCCcore-12.2.0-llvmlite x x x x x x LLVM/14.0.3-GCCcore-11.3.0 x x x x x x LLVM/12.0.1-GCCcore-11.2.0 x x x x x x LLVM/11.1.0-GCCcore-10.3.0 x x x x x x LLVM/11.0.0-GCCcore-10.2.0 x x x x x x LLVM/10.0.1-GCCcore-10.2.0 - x x x x x LLVM/9.0.1-GCCcore-9.3.0 - x x - x x LLVM/9.0.0-GCCcore-8.3.0 x x x - x x LLVM/8.0.1-GCCcore-8.3.0 x x x - x x LLVM/7.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/LMDB/", "title": "LMDB", "text": ""}, {"location": "available_software/detail/LMDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LMDB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LMDB, load one of these modules using a module load command like:

          module load LMDB/0.9.31-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LMDB/0.9.31-GCCcore-12.3.0 x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x LMDB/0.9.29-GCCcore-11.3.0 x x x x x x LMDB/0.9.29-GCCcore-11.2.0 x x x x x x LMDB/0.9.28-GCCcore-10.3.0 x x x x x x LMDB/0.9.24-GCCcore-10.2.0 x x x x x x LMDB/0.9.24-GCCcore-9.3.0 - x x - x x LMDB/0.9.24-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LMfit/", "title": "LMfit", "text": ""}, {"location": "available_software/detail/LMfit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LMfit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LMfit, load one of these modules using a module load command like:

          module load LMfit/1.0.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LMfit/1.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LPJmL/", "title": "LPJmL", "text": ""}, {"location": "available_software/detail/LPJmL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LPJmL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LPJmL, load one of these modules using a module load command like:

          module load LPJmL/4.0.003-iimpi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LPJmL/4.0.003-iimpi-2020b - x x x x x"}, {"location": "available_software/detail/LPeg/", "title": "LPeg", "text": ""}, {"location": "available_software/detail/LPeg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LPeg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LPeg, load one of these modules using a module load command like:

          module load LPeg/1.0.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LPeg/1.0.2-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/LSD2/", "title": "LSD2", "text": ""}, {"location": "available_software/detail/LSD2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LSD2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LSD2, load one of these modules using a module load command like:

          module load LSD2/2.4.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LSD2/2.4.1-GCCcore-12.2.0 x x x x x x LSD2/2.3-GCCcore-11.3.0 x x x x x x LSD2/2.3-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/LUMPY/", "title": "LUMPY", "text": ""}, {"location": "available_software/detail/LUMPY/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LUMPY installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LUMPY, load one of these modules using a module load command like:

          module load LUMPY/0.3.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LUMPY/0.3.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/LZO/", "title": "LZO", "text": ""}, {"location": "available_software/detail/LZO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LZO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LZO, load one of these modules using a module load command like:

          module load LZO/2.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LZO/2.10-GCCcore-12.3.0 x x x x x x LZO/2.10-GCCcore-11.3.0 x x x x x x LZO/2.10-GCCcore-11.2.0 x x x x x x LZO/2.10-GCCcore-10.3.0 x x x x x x LZO/2.10-GCCcore-10.2.0 - x x x x x LZO/2.10-GCCcore-9.3.0 x x x x x x LZO/2.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/L_RNA_scaffolder/", "title": "L_RNA_scaffolder", "text": ""}, {"location": "available_software/detail/L_RNA_scaffolder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which L_RNA_scaffolder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using L_RNA_scaffolder, load one of these modules using a module load command like:

          module load L_RNA_scaffolder/20190530-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty L_RNA_scaffolder/20190530-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Lace/", "title": "Lace", "text": ""}, {"location": "available_software/detail/Lace/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Lace installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Lace, load one of these modules using a module load command like:

          module load Lace/1.14.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Lace/1.14.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/LevelDB/", "title": "LevelDB", "text": ""}, {"location": "available_software/detail/LevelDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LevelDB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LevelDB, load one of these modules using a module load command like:

          module load LevelDB/1.22-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LevelDB/1.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Levenshtein/", "title": "Levenshtein", "text": ""}, {"location": "available_software/detail/Levenshtein/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Levenshtein, load one of these modules using a module load command like:

          module load Levenshtein/0.24.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Levenshtein/0.24.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/LiBis/", "title": "LiBis", "text": ""}, {"location": "available_software/detail/LiBis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LiBis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LiBis, load one of these modules using a module load command like:

          module load LiBis/20200428-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LiBis/20200428-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LibLZF/", "title": "LibLZF", "text": ""}, {"location": "available_software/detail/LibLZF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LibLZF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LibLZF, load one of these modules using a module load command like:

          module load LibLZF/3.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LibLZF/3.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LibSoup/", "title": "LibSoup", "text": ""}, {"location": "available_software/detail/LibSoup/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LibSoup installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LibSoup, load one of these modules using a module load command like:

          module load LibSoup/3.0.7-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LibSoup/3.0.7-GCC-11.2.0 x x x x x x LibSoup/2.74.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/LibTIFF/", "title": "LibTIFF", "text": ""}, {"location": "available_software/detail/LibTIFF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LibTIFF, load one of these modules using a module load command like:

          module load LibTIFF/4.6.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.3.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.2.0 x x x x x x LibTIFF/4.2.0-GCCcore-10.3.0 x x x x x x LibTIFF/4.1.0-GCCcore-10.2.0 x x x x x x LibTIFF/4.1.0-GCCcore-9.3.0 - x x - x x LibTIFF/4.0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Libint/", "title": "Libint", "text": ""}, {"location": "available_software/detail/Libint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Libint installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Libint, load one of these modules using a module load command like:

          module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-12.2.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-11.3.0-lmax-6-cp2k x x x x x x Libint/2.6.0-iimpi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-iimpi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-iccifort-2020.4.304-lmax-6-cp2k - x x - x - Libint/2.6.0-gompi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-gompi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-GCC-10.3.0-lmax-6-cp2k - x x x x x Libint/2.6.0-GCC-10.2.0-lmax-6-cp2k - x x x x x Libint/1.1.6-iomkl-2020a - x - - - - Libint/1.1.6-intel-2020a - x x - x x Libint/1.1.6-intel-2019b - x - - - - Libint/1.1.6-foss-2020a - x - - - -"}, {"location": "available_software/detail/Lighter/", "title": "Lighter", "text": ""}, {"location": "available_software/detail/Lighter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Lighter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Lighter, load one of these modules using a module load command like:

          module load Lighter/1.1.2-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Lighter/1.1.2-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/LittleCMS/", "title": "LittleCMS", "text": ""}, {"location": "available_software/detail/LittleCMS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LittleCMS, load one of these modules using a module load command like:

          module load LittleCMS/2.15-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LittleCMS/2.15-GCCcore-12.3.0 x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x LittleCMS/2.13.1-GCCcore-11.3.0 x x x x x x LittleCMS/2.12-GCCcore-11.2.0 x x x x x x LittleCMS/2.12-GCCcore-10.3.0 x x x x x x LittleCMS/2.11-GCCcore-10.2.0 x x x x x x LittleCMS/2.9-GCCcore-9.3.0 - x x - x x LittleCMS/2.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LncLOOM/", "title": "LncLOOM", "text": ""}, {"location": "available_software/detail/LncLOOM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LncLOOM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LncLOOM, load one of these modules using a module load command like:

          module load LncLOOM/2.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LncLOOM/2.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/LoRDEC/", "title": "LoRDEC", "text": ""}, {"location": "available_software/detail/LoRDEC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LoRDEC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LoRDEC, load one of these modules using a module load command like:

          module load LoRDEC/0.9-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LoRDEC/0.9-gompi-2022a x x x x x x"}, {"location": "available_software/detail/Longshot/", "title": "Longshot", "text": ""}, {"location": "available_software/detail/Longshot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Longshot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Longshot, load one of these modules using a module load command like:

          module load Longshot/0.4.5-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Longshot/0.4.5-GCCcore-11.3.0 x x x x x x Longshot/0.4.3-GCCcore-10.2.0 - - x - x - Longshot/0.4.1-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/LtrDetector/", "title": "LtrDetector", "text": ""}, {"location": "available_software/detail/LtrDetector/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LtrDetector installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LtrDetector, load one of these modules using a module load command like:

          module load LtrDetector/1.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LtrDetector/1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Lua/", "title": "Lua", "text": ""}, {"location": "available_software/detail/Lua/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Lua installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Lua, load one of these modules using a module load command like:

          module load Lua/5.4.6-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Lua/5.4.6-GCCcore-12.3.0 x x x x x x Lua/5.4.4-GCCcore-11.3.0 x x x x x x Lua/5.4.3-GCCcore-11.2.0 x x x x x x Lua/5.4.3-GCCcore-10.3.0 x x x x x x Lua/5.4.2-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-9.3.0 - x x - x x Lua/5.1.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/M1QN3/", "title": "M1QN3", "text": ""}, {"location": "available_software/detail/M1QN3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which M1QN3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using M1QN3, load one of these modules using a module load command like:

          module load M1QN3/3.3-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty M1QN3/3.3-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/M4/", "title": "M4", "text": ""}, {"location": "available_software/detail/M4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which M4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using M4, load one of these modules using a module load command like:

          module load M4/1.4.19-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty M4/1.4.19-GCCcore-13.2.0 x x x x x x M4/1.4.19-GCCcore-12.3.0 x x x x x x M4/1.4.19-GCCcore-12.2.0 x x x x x x M4/1.4.19-GCCcore-11.3.0 x x x x x x M4/1.4.19-GCCcore-11.2.0 x x x x x x M4/1.4.19 x x x x x x M4/1.4.18-GCCcore-10.3.0 x x x x x x M4/1.4.18-GCCcore-10.2.0 x x x x x x M4/1.4.18-GCCcore-9.3.0 x x x x x x M4/1.4.18-GCCcore-8.3.0 x x x x x x M4/1.4.18-GCCcore-8.2.0 - x - - - - M4/1.4.18 x x x x x x M4/1.4.17 x x x x x x"}, {"location": "available_software/detail/MACS2/", "title": "MACS2", "text": ""}, {"location": "available_software/detail/MACS2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MACS2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MACS2, load one of these modules using a module load command like:

          module load MACS2/2.2.7.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MACS2/2.2.7.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/MACS3/", "title": "MACS3", "text": ""}, {"location": "available_software/detail/MACS3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MACS3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MACS3, load one of these modules using a module load command like:

          module load MACS3/3.0.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MACS3/3.0.1-gfbf-2023a x x x x x x MACS3/3.0.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/MAFFT/", "title": "MAFFT", "text": ""}, {"location": "available_software/detail/MAFFT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MAFFT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MAFFT, load one of these modules using a module load command like:

          module load MAFFT/7.520-GCC-12.3.0-with-extensions\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x MAFFT/7.505-GCC-11.3.0-with-extensions x x x x x x MAFFT/7.490-gompi-2021b-with-extensions x x x - x x MAFFT/7.475-gompi-2020b-with-extensions - x x x x x MAFFT/7.475-GCC-10.2.0-with-extensions - x x x x x MAFFT/7.453-iimpi-2020a-with-extensions - x x - x x MAFFT/7.453-iccifort-2019.5.281-with-extensions - x x - x x MAFFT/7.453-GCC-9.3.0-with-extensions - x x - x x MAFFT/7.453-GCC-8.3.0-with-extensions - x x - x x"}, {"location": "available_software/detail/MAGeCK/", "title": "MAGeCK", "text": ""}, {"location": "available_software/detail/MAGeCK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MAGeCK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MAGeCK, load one of these modules using a module load command like:

          module load MAGeCK/0.5.9.5-gfbf-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MAGeCK/0.5.9.5-gfbf-2022b x x x x x x MAGeCK/0.5.9.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/MARS/", "title": "MARS", "text": ""}, {"location": "available_software/detail/MARS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MARS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MARS, load one of these modules using a module load command like:

          module load MARS/20191101-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MARS/20191101-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATIO/", "title": "MATIO", "text": ""}, {"location": "available_software/detail/MATIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MATIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MATIO, load one of these modules using a module load command like:

          module load MATIO/1.5.17-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MATIO/1.5.17-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATLAB/", "title": "MATLAB", "text": ""}, {"location": "available_software/detail/MATLAB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MATLAB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MATLAB, load one of these modules using a module load command like:

          module load MATLAB/2022b-r5\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MATLAB/2022b-r5 x x x x x x MATLAB/2021b x x x - x x MATLAB/2019b - x x - x x"}, {"location": "available_software/detail/MBROLA/", "title": "MBROLA", "text": ""}, {"location": "available_software/detail/MBROLA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MBROLA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MBROLA, load one of these modules using a module load command like:

          module load MBROLA/3.3-GCCcore-9.3.0-voices-20200330\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MBROLA/3.3-GCCcore-9.3.0-voices-20200330 - x x - x x"}, {"location": "available_software/detail/MCL/", "title": "MCL", "text": ""}, {"location": "available_software/detail/MCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MCL, load one of these modules using a module load command like:

          module load MCL/22.282-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MCL/22.282-GCCcore-12.3.0 x x x x x x MCL/14.137-GCCcore-10.2.0 - x x x x x MCL/14.137-GCCcore-9.3.0 - x x - x x MCL/14.137-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MDAnalysis/", "title": "MDAnalysis", "text": ""}, {"location": "available_software/detail/MDAnalysis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MDAnalysis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MDAnalysis, load one of these modules using a module load command like:

          module load MDAnalysis/2.4.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MDAnalysis/2.4.2-foss-2022b x x x x x x MDAnalysis/2.4.2-foss-2021a x x x x x x"}, {"location": "available_software/detail/MDTraj/", "title": "MDTraj", "text": ""}, {"location": "available_software/detail/MDTraj/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MDTraj installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MDTraj, load one of these modules using a module load command like:

          module load MDTraj/1.9.7-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MDTraj/1.9.7-intel-2022a x x x - x x MDTraj/1.9.7-intel-2021b x x x - x x MDTraj/1.9.7-foss-2022a x x x - x x MDTraj/1.9.7-foss-2021a x x x - x x MDTraj/1.9.5-intel-2020b - x x - x x MDTraj/1.9.5-fosscuda-2020b x - - - x - MDTraj/1.9.5-foss-2020b - x x x x x MDTraj/1.9.4-intel-2020a-Python-3.8.2 - x x - x x MDTraj/1.9.3-intel-2019b-Python-3.7.4 - x x - x x MDTraj/1.9.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MEGA/", "title": "MEGA", "text": ""}, {"location": "available_software/detail/MEGA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEGA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEGA, load one of these modules using a module load command like:

          module load MEGA/11.0.10\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEGA/11.0.10 - x x - x -"}, {"location": "available_software/detail/MEGAHIT/", "title": "MEGAHIT", "text": ""}, {"location": "available_software/detail/MEGAHIT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEGAHIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEGAHIT, load one of these modules using a module load command like:

          module load MEGAHIT/1.2.9-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEGAHIT/1.2.9-GCCcore-12.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.2.0 x x x - x x MEGAHIT/1.2.9-GCCcore-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MEGAN/", "title": "MEGAN", "text": ""}, {"location": "available_software/detail/MEGAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEGAN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEGAN, load one of these modules using a module load command like:

          module load MEGAN/6.25.3-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEGAN/6.25.3-Java-17 x x x x x x"}, {"location": "available_software/detail/MEM/", "title": "MEM", "text": ""}, {"location": "available_software/detail/MEM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEM, load one of these modules using a module load command like:

          module load MEM/20191023-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEM/20191023-foss-2020a-R-4.0.0 - - x - x - MEM/20191023-foss-2019b - x x - x -"}, {"location": "available_software/detail/MEME/", "title": "MEME", "text": ""}, {"location": "available_software/detail/MEME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEME, load one of these modules using a module load command like:

          module load MEME/5.5.4-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEME/5.5.4-gompi-2022b x x x x x x MEME/5.4.1-gompi-2021b-Python-2.7.18 x x x - x x"}, {"location": "available_software/detail/MESS/", "title": "MESS", "text": ""}, {"location": "available_software/detail/MESS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MESS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MESS, load one of these modules using a module load command like:

          module load MESS/0.1.6-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MESS/0.1.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/METIS/", "title": "METIS", "text": ""}, {"location": "available_software/detail/METIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which METIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using METIS, load one of these modules using a module load command like:

          module load METIS/5.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty METIS/5.1.0-GCCcore-12.3.0 x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x METIS/5.1.0-GCCcore-11.3.0 x x x x x x METIS/5.1.0-GCCcore-11.2.0 x x x x x x METIS/5.1.0-GCCcore-10.3.0 x x x x x x METIS/5.1.0-GCCcore-10.2.0 x x x x x x METIS/5.1.0-GCCcore-9.3.0 - x x - x x METIS/5.1.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MIGRATE-N/", "title": "MIGRATE-N", "text": ""}, {"location": "available_software/detail/MIGRATE-N/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MIGRATE-N installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MIGRATE-N, load one of these modules using a module load command like:

          module load MIGRATE-N/5.0.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MIGRATE-N/5.0.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/MMseqs2/", "title": "MMseqs2", "text": ""}, {"location": "available_software/detail/MMseqs2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MMseqs2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MMseqs2, load one of these modules using a module load command like:

          module load MMseqs2/14-7e284-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MMseqs2/14-7e284-gompi-2023a x x x x x x MMseqs2/14-7e284-gompi-2022a x x x x x x MMseqs2/13-45111-gompi-2021b x x x - x x MMseqs2/13-45111-gompi-2021a x x x - x x MMseqs2/13-45111-gompi-2020b x x x x x x MMseqs2/13-45111-20211019-gompi-2020b - x x x x x MMseqs2/13-45111-20211006-gompi-2020b - x x x x - MMseqs2/12-113e3-gompi-2020b - x - - - - MMseqs2/11-e1a1c-iimpi-2019b - x - - - x MMseqs2/10-6d92c-iimpi-2019b - x x - x x MMseqs2/10-6d92c-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MOABS/", "title": "MOABS", "text": ""}, {"location": "available_software/detail/MOABS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MOABS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MOABS, load one of these modules using a module load command like:

          module load MOABS/1.3.9.6-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MOABS/1.3.9.6-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MONAI/", "title": "MONAI", "text": ""}, {"location": "available_software/detail/MONAI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MONAI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MONAI, load one of these modules using a module load command like:

          module load MONAI/1.0.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MONAI/1.0.1-foss-2022a-CUDA-11.7.0 x - - - x - MONAI/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MOOSE/", "title": "MOOSE", "text": ""}, {"location": "available_software/detail/MOOSE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MOOSE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MOOSE, load one of these modules using a module load command like:

          module load MOOSE/2022-06-10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MOOSE/2022-06-10-foss-2022a x x x - x x MOOSE/2021-05-18-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MPC/", "title": "MPC", "text": ""}, {"location": "available_software/detail/MPC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MPC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MPC, load one of these modules using a module load command like:

          module load MPC/1.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MPC/1.3.1-GCCcore-12.3.0 x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x MPC/1.2.1-GCCcore-11.3.0 x x x x x x MPC/1.2.1-GCCcore-11.2.0 x x x x x x MPC/1.2.1-GCCcore-10.2.0 - x x x x x MPC/1.1.0-GCC-9.3.0 - x x - x x MPC/1.1.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/MPFR/", "title": "MPFR", "text": ""}, {"location": "available_software/detail/MPFR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MPFR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MPFR, load one of these modules using a module load command like:

          module load MPFR/4.2.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MPFR/4.2.0-GCCcore-12.3.0 x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x MPFR/4.1.0-GCCcore-11.3.0 x x x x x x MPFR/4.1.0-GCCcore-11.2.0 x x x x x x MPFR/4.1.0-GCCcore-10.3.0 x x x x x x MPFR/4.1.0-GCCcore-10.2.0 x x x x x x MPFR/4.0.2-GCCcore-9.3.0 - x x - x x MPFR/4.0.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MRtrix/", "title": "MRtrix", "text": ""}, {"location": "available_software/detail/MRtrix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MRtrix installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MRtrix, load one of these modules using a module load command like:

          module load MRtrix/3.0.4-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MRtrix/3.0.4-foss-2022b x x x x x x MRtrix/3.0.3-foss-2021a - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-3.7.4 - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MSFragger/", "title": "MSFragger", "text": ""}, {"location": "available_software/detail/MSFragger/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MSFragger installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MSFragger, load one of these modules using a module load command like:

          module load MSFragger/4.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MSFragger/4.0-Java-11 x x x x x x"}, {"location": "available_software/detail/MUMPS/", "title": "MUMPS", "text": ""}, {"location": "available_software/detail/MUMPS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MUMPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MUMPS, load one of these modules using a module load command like:

          module load MUMPS/5.6.1-foss-2023a-metis\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MUMPS/5.6.1-foss-2023a-metis x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x MUMPS/5.5.1-foss-2022a-metis x x x x x x MUMPS/5.4.1-intel-2021b-metis x x x x x x MUMPS/5.4.1-foss-2021b-metis x x x - x x MUMPS/5.4.0-foss-2021a-metis - x x - x x MUMPS/5.3.5-foss-2020b-metis - x x x x x MUMPS/5.2.1-intel-2020a-metis - x x - x x MUMPS/5.2.1-intel-2019b-metis - x x - x x MUMPS/5.2.1-foss-2020a-metis - x x - x x MUMPS/5.2.1-foss-2019b-metis x x x - x x"}, {"location": "available_software/detail/MUMmer/", "title": "MUMmer", "text": ""}, {"location": "available_software/detail/MUMmer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MUMmer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MUMmer, load one of these modules using a module load command like:

          module load MUMmer/4.0.0rc1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MUMmer/4.0.0rc1-GCCcore-12.3.0 x x x x x x MUMmer/4.0.0beta2-GCCcore-11.2.0 x x x - x x MUMmer/4.0.0beta2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MUSCLE/", "title": "MUSCLE", "text": ""}, {"location": "available_software/detail/MUSCLE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MUSCLE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MUSCLE, load one of these modules using a module load command like:

          module load MUSCLE/5.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MUSCLE/5.1.0-GCCcore-12.3.0 x x x x x x MUSCLE/5.1.0-GCCcore-11.3.0 x x x x x x MUSCLE/5.1-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.1551-GCC-10.2.0 - x x - x x MUSCLE/3.8.1551-GCC-8.3.0 - x x - x x MUSCLE/3.8.31-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MXNet/", "title": "MXNet", "text": ""}, {"location": "available_software/detail/MXNet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MXNet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MXNet, load one of these modules using a module load command like:

          module load MXNet/1.9.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MXNet/1.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MaSuRCA/", "title": "MaSuRCA", "text": ""}, {"location": "available_software/detail/MaSuRCA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MaSuRCA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MaSuRCA, load one of these modules using a module load command like:

          module load MaSuRCA/4.1.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MaSuRCA/4.1.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Mako/", "title": "Mako", "text": ""}, {"location": "available_software/detail/Mako/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mako installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mako, load one of these modules using a module load command like:

          module load Mako/1.2.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mako/1.2.4-GCCcore-12.3.0 x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x Mako/1.2.0-GCCcore-11.3.0 x x x x x x Mako/1.1.4-GCCcore-11.2.0 x x x x x x Mako/1.1.4-GCCcore-10.3.0 x x x x x x Mako/1.1.3-GCCcore-10.2.0 x x x x x x Mako/1.1.2-GCCcore-9.3.0 - x x - x x Mako/1.1.0-GCCcore-8.3.0 x x x - x x Mako/1.0.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/MariaDB-connector-c/", "title": "MariaDB-connector-c", "text": ""}, {"location": "available_software/detail/MariaDB-connector-c/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MariaDB-connector-c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MariaDB-connector-c, load one of these modules using a module load command like:

          module load MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MariaDB-connector-c/3.1.7-GCCcore-9.3.0 - x x - x x MariaDB-connector-c/2.3.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MariaDB/", "title": "MariaDB", "text": ""}, {"location": "available_software/detail/MariaDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MariaDB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MariaDB, load one of these modules using a module load command like:

          module load MariaDB/10.9.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MariaDB/10.9.3-GCC-11.3.0 x x x x x x MariaDB/10.6.4-GCC-11.2.0 x x x x x x MariaDB/10.6.4-GCC-10.3.0 x x x - x x MariaDB/10.5.8-GCC-10.2.0 - x x x x x MariaDB/10.4.13-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Mash/", "title": "Mash", "text": ""}, {"location": "available_software/detail/Mash/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mash installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mash, load one of these modules using a module load command like:

          module load Mash/2.3-intel-compilers-2021.4.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mash/2.3-intel-compilers-2021.4.0 x x x - x x Mash/2.3-GCC-12.3.0 x x x x x x Mash/2.3-GCC-11.2.0 x x x - x x Mash/2.2-GCC-9.3.0 - x x x - x"}, {"location": "available_software/detail/Maven/", "title": "Maven", "text": ""}, {"location": "available_software/detail/Maven/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Maven installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Maven, load one of these modules using a module load command like:

          module load Maven/3.6.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Maven/3.6.3 x x x x x x Maven/3.6.0 - - x - x -"}, {"location": "available_software/detail/MaxBin/", "title": "MaxBin", "text": ""}, {"location": "available_software/detail/MaxBin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MaxBin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MaxBin, load one of these modules using a module load command like:

          module load MaxBin/2.2.7-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MaxBin/2.2.7-gompi-2021b x x x - x x MaxBin/2.2.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MedPy/", "title": "MedPy", "text": ""}, {"location": "available_software/detail/MedPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MedPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MedPy, load one of these modules using a module load command like:

          module load MedPy/0.4.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MedPy/0.4.0-fosscuda-2020b x - - - x - MedPy/0.4.0-foss-2020b - x x x x x MedPy/0.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Megalodon/", "title": "Megalodon", "text": ""}, {"location": "available_software/detail/Megalodon/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Megalodon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Megalodon, load one of these modules using a module load command like:

          module load Megalodon/2.3.5-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Megalodon/2.3.5-fosscuda-2020b x - - - x - Megalodon/2.3.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/Mercurial/", "title": "Mercurial", "text": ""}, {"location": "available_software/detail/Mercurial/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mercurial installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mercurial, load one of these modules using a module load command like:

          module load Mercurial/6.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mercurial/6.2-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Mesa/", "title": "Mesa", "text": ""}, {"location": "available_software/detail/Mesa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mesa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mesa, load one of these modules using a module load command like:

          module load Mesa/23.1.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mesa/23.1.4-GCCcore-12.3.0 x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x Mesa/22.0.3-GCCcore-11.3.0 x x x x x x Mesa/21.1.7-GCCcore-11.2.0 x x x x x x Mesa/21.1.1-GCCcore-10.3.0 x x x x x x Mesa/20.2.1-GCCcore-10.2.0 x x x x x x Mesa/20.0.2-GCCcore-9.3.0 - x x - x x Mesa/19.2.1-GCCcore-8.3.0 - x x - x x Mesa/19.1.7-GCCcore-8.3.0 x x x - x x Mesa/19.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Meson/", "title": "Meson", "text": ""}, {"location": "available_software/detail/Meson/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Meson installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Meson, load one of these modules using a module load command like:

          module load Meson/1.2.3-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Meson/1.2.3-GCCcore-13.2.0 x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x Meson/0.62.1-GCCcore-11.3.0 x x x x x x Meson/0.59.1-GCCcore-8.3.0-Python-3.7.4 x - x - x x Meson/0.58.2-GCCcore-11.2.0 x x x x x x Meson/0.58.0-GCCcore-10.3.0 x x x x x x Meson/0.55.3-GCCcore-10.2.0 x x x x x x Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x Meson/0.53.2-GCCcore-9.3.0-Python-3.8.2 - x x - x x Meson/0.51.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x Meson/0.50.0-GCCcore-8.2.0-Python-3.7.2 - x - - - -"}, {"location": "available_software/detail/Mesquite/", "title": "Mesquite", "text": ""}, {"location": "available_software/detail/Mesquite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mesquite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mesquite, load one of these modules using a module load command like:

          module load Mesquite/2.3.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mesquite/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MetaBAT/", "title": "MetaBAT", "text": ""}, {"location": "available_software/detail/MetaBAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MetaBAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MetaBAT, load one of these modules using a module load command like:

          module load MetaBAT/2.15-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MetaBAT/2.15-gompi-2021b x x x - x x MetaBAT/2.15-gompi-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MetaEuk/", "title": "MetaEuk", "text": ""}, {"location": "available_software/detail/MetaEuk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MetaEuk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MetaEuk, load one of these modules using a module load command like:

          module load MetaEuk/6-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MetaEuk/6-GCC-11.2.0 x x x - x x MetaEuk/4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/MetaPhlAn/", "title": "MetaPhlAn", "text": ""}, {"location": "available_software/detail/MetaPhlAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MetaPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MetaPhlAn, load one of these modules using a module load command like:

          module load MetaPhlAn/4.0.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MetaPhlAn/4.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/Metagenome-Atlas/", "title": "Metagenome-Atlas", "text": ""}, {"location": "available_software/detail/Metagenome-Atlas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Metagenome-Atlas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Metagenome-Atlas, load one of these modules using a module load command like:

          module load Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/MethylDackel/", "title": "MethylDackel", "text": ""}, {"location": "available_software/detail/MethylDackel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MethylDackel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MethylDackel, load one of these modules using a module load command like:

          module load MethylDackel/0.5.0-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MethylDackel/0.5.0-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/MiXCR/", "title": "MiXCR", "text": ""}, {"location": "available_software/detail/MiXCR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MiXCR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MiXCR, load one of these modules using a module load command like:

          module load MiXCR/4.6.0-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MiXCR/4.6.0-Java-17 x x x x x x MiXCR/3.0.13-Java-11 - x x - x -"}, {"location": "available_software/detail/MicrobeAnnotator/", "title": "MicrobeAnnotator", "text": ""}, {"location": "available_software/detail/MicrobeAnnotator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MicrobeAnnotator installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MicrobeAnnotator, load one of these modules using a module load command like:

          module load MicrobeAnnotator/2.0.5-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MicrobeAnnotator/2.0.5-foss-2021a - x x - x x"}, {"location": "available_software/detail/Mikado/", "title": "Mikado", "text": ""}, {"location": "available_software/detail/Mikado/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mikado installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mikado, load one of these modules using a module load command like:

          module load Mikado/2.3.4-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mikado/2.3.4-foss-2022b x x x x x x"}, {"location": "available_software/detail/MinCED/", "title": "MinCED", "text": ""}, {"location": "available_software/detail/MinCED/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MinCED installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MinCED, load one of these modules using a module load command like:

          module load MinCED/0.4.2-GCCcore-8.3.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MinCED/0.4.2-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/MinPath/", "title": "MinPath", "text": ""}, {"location": "available_software/detail/MinPath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MinPath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MinPath, load one of these modules using a module load command like:

          module load MinPath/1.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MinPath/1.6-GCCcore-11.2.0 x x x - x x MinPath/1.4-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Miniconda3/", "title": "Miniconda3", "text": ""}, {"location": "available_software/detail/Miniconda3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Miniconda3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Miniconda3, load one of these modules using a module load command like:

          module load Miniconda3/23.5.2-0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Miniconda3/23.5.2-0 x x x x x x Miniconda3/22.11.1-1 x x x x x x Miniconda3/4.9.2 - x x - x x Miniconda3/4.8.3 - x x - x x Miniconda3/4.7.10 - - - - - x"}, {"location": "available_software/detail/Minipolish/", "title": "Minipolish", "text": ""}, {"location": "available_software/detail/Minipolish/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Minipolish installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Minipolish, load one of these modules using a module load command like:

          module load Minipolish/0.1.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Minipolish/0.1.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/MitoHiFi/", "title": "MitoHiFi", "text": ""}, {"location": "available_software/detail/MitoHiFi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MitoHiFi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MitoHiFi, load one of these modules using a module load command like:

          module load MitoHiFi/3.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MitoHiFi/3.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/ModelTest-NG/", "title": "ModelTest-NG", "text": ""}, {"location": "available_software/detail/ModelTest-NG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ModelTest-NG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ModelTest-NG, load one of these modules using a module load command like:

          module load ModelTest-NG/0.1.7-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ModelTest-NG/0.1.7-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Molden/", "title": "Molden", "text": ""}, {"location": "available_software/detail/Molden/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Molden installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Molden, load one of these modules using a module load command like:

          module load Molden/6.8-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Molden/6.8-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Molekel/", "title": "Molekel", "text": ""}, {"location": "available_software/detail/Molekel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Molekel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Molekel, load one of these modules using a module load command like:

          module load Molekel/5.4.0-Linux_x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Molekel/5.4.0-Linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Mono/", "title": "Mono", "text": ""}, {"location": "available_software/detail/Mono/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mono installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mono, load one of these modules using a module load command like:

          module load Mono/6.8.0.105-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mono/6.8.0.105-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Monocle3/", "title": "Monocle3", "text": ""}, {"location": "available_software/detail/Monocle3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Monocle3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Monocle3, load one of these modules using a module load command like:

          module load Monocle3/1.3.1-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Monocle3/1.3.1-foss-2022a-R-4.2.1 x x x x x x Monocle3/0.2.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/MrBayes/", "title": "MrBayes", "text": ""}, {"location": "available_software/detail/MrBayes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MrBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MrBayes, load one of these modules using a module load command like:

          module load MrBayes/3.2.7-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MrBayes/3.2.7-gompi-2020b - x x x x x MrBayes/3.2.6-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MuJoCo/", "title": "MuJoCo", "text": ""}, {"location": "available_software/detail/MuJoCo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MuJoCo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MuJoCo, load one of these modules using a module load command like:

          module load MuJoCo/2.3.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MuJoCo/2.3.7-GCCcore-12.3.0 x x x x x x MuJoCo/2.1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/MultiQC/", "title": "MultiQC", "text": ""}, {"location": "available_software/detail/MultiQC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MultiQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MultiQC, load one of these modules using a module load command like:

          module load MultiQC/1.14-foss-2022a\n
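
          As a quick usage sketch (the analysis directory path below is hypothetical; the multiqc command itself comes from the module), aggregating existing QC and log output into a single HTML report could look like:

          module load MultiQC/1.14-foss-2022a
          multiqc --version
          # scan an analysis directory for QC/log output and write the report to multiqc_report/
          multiqc /path/to/analysis_results -o multiqc_report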

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MultiQC/1.14-foss-2022a x x x x x x MultiQC/1.9-intel-2020a-Python-3.8.2 - x x - x x MultiQC/1.8-intel-2019b-Python-3.7.4 - x x - x x MultiQC/1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MultilevelEstimators/", "title": "MultilevelEstimators", "text": ""}, {"location": "available_software/detail/MultilevelEstimators/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MultilevelEstimators installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MultilevelEstimators, load one of these modules using a module load command like:

          module load MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2 x x x - x x"}, {"location": "available_software/detail/Multiwfn/", "title": "Multiwfn", "text": ""}, {"location": "available_software/detail/Multiwfn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Multiwfn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Multiwfn, load one of these modules using a module load command like:

          module load Multiwfn/3.6-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Multiwfn/3.6-intel-2019b - x x - x x"}, {"location": "available_software/detail/MyCC/", "title": "MyCC", "text": ""}, {"location": "available_software/detail/MyCC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MyCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MyCC, load one of these modules using a module load command like:

          module load MyCC/2017-03-01-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MyCC/2017-03-01-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Myokit/", "title": "Myokit", "text": ""}, {"location": "available_software/detail/Myokit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Myokit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Myokit, load one of these modules using a module load command like:

          module load Myokit/1.32.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Myokit/1.32.0-fosscuda-2020b - - - - x - Myokit/1.32.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/NAMD/", "title": "NAMD", "text": ""}, {"location": "available_software/detail/NAMD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NAMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NAMD, load one of these modules using a module load command like:

          module load NAMD/2.14-foss-2023a-mpi\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NAMD/2.14-foss-2023a-mpi x x x x x x NAMD/2.14-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/NASM/", "title": "NASM", "text": ""}, {"location": "available_software/detail/NASM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NASM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NASM, load one of these modules using a module load command like:

          module load NASM/2.16.01-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NASM/2.16.01-GCCcore-13.2.0 x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x NASM/2.15.05-GCCcore-11.3.0 x x x x x x NASM/2.15.05-GCCcore-11.2.0 x x x x x x NASM/2.15.05-GCCcore-10.3.0 x x x x x x NASM/2.15.05-GCCcore-10.2.0 x x x x x x NASM/2.14.02-GCCcore-9.3.0 - x x - x x NASM/2.14.02-GCCcore-8.3.0 x x x - x x NASM/2.14.02-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/NCCL/", "title": "NCCL", "text": ""}, {"location": "available_software/detail/NCCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NCCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NCCL, load one of these modules using a module load command like:

          module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - NCCL/2.10.3-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - NCCL/2.10.3-GCCcore-10.3.0-CUDA-11.3.1 x - - - x - NCCL/2.8.3-GCCcore-10.2.0-CUDA-11.1.1 x - - - x x NCCL/2.8.3-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/NCL/", "title": "NCL", "text": ""}, {"location": "available_software/detail/NCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NCL, load one of these modules using a module load command like:

          module load NCL/6.6.2-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NCL/6.6.2-intel-2019b - - x - x x"}, {"location": "available_software/detail/NCO/", "title": "NCO", "text": ""}, {"location": "available_software/detail/NCO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NCO, load one of these modules using a module load command like:

          module load NCO/5.0.6-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NCO/5.0.6-intel-2019b - x x - x x NCO/5.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/NECI/", "title": "NECI", "text": ""}, {"location": "available_software/detail/NECI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NECI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NECI, load one of these modules using a module load command like:

          module load NECI/20230620-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NECI/20230620-foss-2022b x x x x x x NECI/20220711-foss-2022a - x x x x x"}, {"location": "available_software/detail/NEURON/", "title": "NEURON", "text": ""}, {"location": "available_software/detail/NEURON/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NEURON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NEURON, load one of these modules using a module load command like:

          module load NEURON/7.8.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NEURON/7.8.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/NGS/", "title": "NGS", "text": ""}, {"location": "available_software/detail/NGS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NGS, load one of these modules using a module load command like:

          module load NGS/2.11.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NGS/2.11.2-GCCcore-11.2.0 x x x x x x NGS/2.10.9-GCCcore-10.2.0 - x x x x x NGS/2.10.5-GCCcore-9.3.0 - x x - x x NGS/2.10.4-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/NGSpeciesID/", "title": "NGSpeciesID", "text": ""}, {"location": "available_software/detail/NGSpeciesID/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NGSpeciesID installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NGSpeciesID, load one of these modules using a module load command like:

          module load NGSpeciesID/0.1.2.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NGSpeciesID/0.1.2.1-foss-2021b x x x - x x NGSpeciesID/0.1.1.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NLMpy/", "title": "NLMpy", "text": ""}, {"location": "available_software/detail/NLMpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NLMpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NLMpy, load one of these modules using a module load command like:

          module load NLMpy/0.1.5-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NLMpy/0.1.5-intel-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/NLTK/", "title": "NLTK", "text": ""}, {"location": "available_software/detail/NLTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NLTK, load one of these modules using a module load command like:

          module load NLTK/3.8.1-foss-2022b\n
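
          A minimal check that the module works, assuming it provides a Python installation with the nltk package on the PATH, could be:

          module load NLTK/3.8.1-foss-2022b
          # confirm the package can be imported and print its version
          python -c "import nltk; print(nltk.__version__)"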

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NLTK/3.8.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/NLopt/", "title": "NLopt", "text": ""}, {"location": "available_software/detail/NLopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NLopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NLopt, load one of these modules using a module load command like:

          module load NLopt/2.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NLopt/2.7.1-GCCcore-12.3.0 x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x NLopt/2.7.1-GCCcore-11.3.0 x x x x x x NLopt/2.7.0-GCCcore-11.2.0 x x x x x x NLopt/2.7.0-GCCcore-10.3.0 x x x x x x NLopt/2.6.2-GCCcore-10.2.0 x x x x x x NLopt/2.6.1-GCCcore-9.3.0 - x x - x x NLopt/2.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/NOVOPlasty/", "title": "NOVOPlasty", "text": ""}, {"location": "available_software/detail/NOVOPlasty/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NOVOPlasty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NOVOPlasty, load one of these modules using a module load command like:

          module load NOVOPlasty/3.7-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NOVOPlasty/3.7-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/NSPR/", "title": "NSPR", "text": ""}, {"location": "available_software/detail/NSPR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NSPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NSPR, load one of these modules using a module load command like:

          module load NSPR/4.35-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NSPR/4.35-GCCcore-12.3.0 x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x NSPR/4.34-GCCcore-11.3.0 x x x x x x NSPR/4.32-GCCcore-11.2.0 x x x x x x NSPR/4.30-GCCcore-10.3.0 x x x x x x NSPR/4.29-GCCcore-10.2.0 x x x x x x NSPR/4.25-GCCcore-9.3.0 - x x - x x NSPR/4.21-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NSS/", "title": "NSS", "text": ""}, {"location": "available_software/detail/NSS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NSS, load one of these modules using a module load command like:

          module load NSS/3.89.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NSS/3.89.1-GCCcore-12.3.0 x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x NSS/3.79-GCCcore-11.3.0 x x x x x x NSS/3.69-GCCcore-11.2.0 x x x x x x NSS/3.65-GCCcore-10.3.0 x x x x x x NSS/3.57-GCCcore-10.2.0 x x x x x x NSS/3.51-GCCcore-9.3.0 - x x - x x NSS/3.45-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NVHPC/", "title": "NVHPC", "text": ""}, {"location": "available_software/detail/NVHPC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NVHPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NVHPC, load one of these modules using a module load command like:

          module load NVHPC/21.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NVHPC/21.2 x - x - x - NVHPC/20.9 - - - - x -"}, {"location": "available_software/detail/NanoCaller/", "title": "NanoCaller", "text": ""}, {"location": "available_software/detail/NanoCaller/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoCaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoCaller, load one of these modules using a module load command like:

          module load NanoCaller/3.4.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoCaller/3.4.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/NanoComp/", "title": "NanoComp", "text": ""}, {"location": "available_software/detail/NanoComp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoComp, load one of these modules using a module load command like:

          module load NanoComp/1.13.1-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoComp/1.13.1-intel-2020b - x x - x x NanoComp/1.10.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoFilt/", "title": "NanoFilt", "text": ""}, {"location": "available_software/detail/NanoFilt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoFilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoFilt, load one of these modules using a module load command like:

          module load NanoFilt/2.6.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoFilt/2.6.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoPlot/", "title": "NanoPlot", "text": ""}, {"location": "available_software/detail/NanoPlot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoPlot, load one of these modules using a module load command like:

          module load NanoPlot/1.33.0-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoPlot/1.33.0-intel-2020b - x x - x x NanoPlot/1.28.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoStat/", "title": "NanoStat", "text": ""}, {"location": "available_software/detail/NanoStat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoStat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoStat, load one of these modules using a module load command like:

          module load NanoStat/1.6.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoStat/1.6.0-foss-2022a x x x x x x NanoStat/1.6.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/NanopolishComp/", "title": "NanopolishComp", "text": ""}, {"location": "available_software/detail/NanopolishComp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanopolishComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanopolishComp, load one of these modules using a module load command like:

          module load NanopolishComp/0.6.11-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanopolishComp/0.6.11-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/NetPyNE/", "title": "NetPyNE", "text": ""}, {"location": "available_software/detail/NetPyNE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NetPyNE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NetPyNE, load one of these modules using a module load command like:

          module load NetPyNE/1.0.2.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NetPyNE/1.0.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/NewHybrids/", "title": "NewHybrids", "text": ""}, {"location": "available_software/detail/NewHybrids/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NewHybrids installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NewHybrids, load one of these modules using a module load command like:

          module load NewHybrids/1.1_Beta3-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NewHybrids/1.1_Beta3-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/NextGenMap/", "title": "NextGenMap", "text": ""}, {"location": "available_software/detail/NextGenMap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NextGenMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NextGenMap, load one of these modules using a module load command like:

          module load NextGenMap/0.5.5-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NextGenMap/0.5.5-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Nextflow/", "title": "Nextflow", "text": ""}, {"location": "available_software/detail/Nextflow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Nextflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Nextflow, load one of these modules using a module load command like:

          module load Nextflow/23.10.0\n
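
          Once the module is loaded, the nextflow command is on the PATH; a minimal sanity check and a run of a local pipeline script (main.nf is a hypothetical example file) might look like:

          module load Nextflow/23.10.0
          nextflow -version
          # run a local pipeline script (hypothetical file) from the current directory
          nextflow run main.nf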

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Nextflow/23.10.0 x x x x x x Nextflow/23.04.2 x x x x x x Nextflow/22.10.5 x x x x x x Nextflow/22.10.0 x x x - x x Nextflow/21.10.6 - x x - x x Nextflow/21.08.0 - - - - - x Nextflow/21.03.0 - x x - x x Nextflow/20.10.0 - x x - x x Nextflow/20.04.1 - - x - x x Nextflow/20.01.0 - - x - x x Nextflow/19.12.0 - - x - x x"}, {"location": "available_software/detail/NiBabel/", "title": "NiBabel", "text": ""}, {"location": "available_software/detail/NiBabel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NiBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NiBabel, load one of these modules using a module load command like:

          module load NiBabel/4.0.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NiBabel/4.0.2-foss-2022a x x x x x x NiBabel/3.2.1-fosscuda-2020b x - - - x - NiBabel/3.2.1-foss-2021a x x x - x x NiBabel/3.2.1-foss-2020b - x x x x x NiBabel/3.1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Nim/", "title": "Nim", "text": ""}, {"location": "available_software/detail/Nim/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Nim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Nim, load one of these modules using a module load command like:

          module load Nim/1.6.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Nim/1.6.6-GCCcore-11.2.0 x x x - x x Nim/1.4.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Ninja/", "title": "Ninja", "text": ""}, {"location": "available_software/detail/Ninja/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ninja installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Ninja, load one of these modules using a module load command like:

          module load Ninja/1.11.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ninja/1.11.1-GCCcore-13.2.0 x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x Ninja/1.10.2-GCCcore-11.3.0 x x x x x x Ninja/1.10.2-GCCcore-11.2.0 x x x x x x Ninja/1.10.2-GCCcore-10.3.0 x x x x x x Ninja/1.10.1-GCCcore-10.2.0 x x x x x x Ninja/1.10.0-GCCcore-9.3.0 x x x x x x Ninja/1.9.0-GCCcore-8.3.0 x x x - x x Ninja/1.9.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Nipype/", "title": "Nipype", "text": ""}, {"location": "available_software/detail/Nipype/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Nipype installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Nipype, load one of these modules using a module load command like:

          module load Nipype/1.8.5-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Nipype/1.8.5-foss-2021a x x x - x x Nipype/1.4.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OBITools3/", "title": "OBITools3", "text": ""}, {"location": "available_software/detail/OBITools3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OBITools3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OBITools3, load one of these modules using a module load command like:

          module load OBITools3/3.0.1b26-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OBITools3/3.0.1b26-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ONNX-Runtime/", "title": "ONNX-Runtime", "text": ""}, {"location": "available_software/detail/ONNX-Runtime/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ONNX-Runtime installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ONNX-Runtime, load one of these modules using a module load command like:

          module load ONNX-Runtime/1.16.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ONNX-Runtime/1.16.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/ONNX/", "title": "ONNX", "text": ""}, {"location": "available_software/detail/ONNX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ONNX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ONNX, load one of these modules using a module load command like:

          module load ONNX/1.15.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ONNX/1.15.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/OPERA-MS/", "title": "OPERA-MS", "text": ""}, {"location": "available_software/detail/OPERA-MS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OPERA-MS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OPERA-MS, load one of these modules using a module load command like:

          module load OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ORCA/", "title": "ORCA", "text": ""}, {"location": "available_software/detail/ORCA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ORCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ORCA, load one of these modules using a module load command like:

          module load ORCA/5.0.4-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ORCA/5.0.4-gompi-2022a x x x x x x ORCA/5.0.3-gompi-2021b x x x x x x ORCA/5.0.2-gompi-2021b x x x x x x ORCA/4.2.1-gompi-2019b - x x - x x ORCA/4.2.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OSU-Micro-Benchmarks/", "title": "OSU-Micro-Benchmarks", "text": ""}, {"location": "available_software/detail/OSU-Micro-Benchmarks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

          module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n
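
          As a sketch, assuming this module puts the benchmark binaries (such as osu_latency) on the PATH, a point-to-point latency test between two MPI ranks could be run as:

          module load OSU-Micro-Benchmarks/7.2-gompi-2023b
          # two ranks suffice for a point-to-point latency measurement
          mpirun -np 2 osu_latency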

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x OSU-Micro-Benchmarks/7.1-1-iimpi-2023a x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a - x - - - - OSU-Micro-Benchmarks/5.8-iimpi-2021b x x x - x x OSU-Micro-Benchmarks/5.7.1-iompi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-iimpi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-gompi-2021b x x x - x x OSU-Micro-Benchmarks/5.7-iimpi-2020b - - x x x x OSU-Micro-Benchmarks/5.7-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020b - x x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-iimpi-2019b - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-gompi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Oases/", "title": "Oases", "text": ""}, {"location": "available_software/detail/Oases/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Oases installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Oases, load one of these modules using a module load command like:

          module load Oases/20180312-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Oases/20180312-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Omnipose/", "title": "Omnipose", "text": ""}, {"location": "available_software/detail/Omnipose/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Omnipose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Omnipose, load one of these modules using a module load command like:

          module load Omnipose/0.4.4-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Omnipose/0.4.4-foss-2022a-CUDA-11.7.0 x - - - x - Omnipose/0.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/OpenAI-Gym/", "title": "OpenAI-Gym", "text": ""}, {"location": "available_software/detail/OpenAI-Gym/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenAI-Gym installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenAI-Gym, load one of these modules using a module load command like:

          module load OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenBLAS/", "title": "OpenBLAS", "text": ""}, {"location": "available_software/detail/OpenBLAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenBLAS, load one of these modules using a module load command like:

          module load OpenBLAS/0.3.24-GCC-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x OpenBLAS/0.3.20-GCC-11.3.0 x x x x x x OpenBLAS/0.3.18-GCC-11.2.0 x x x x x x OpenBLAS/0.3.15-GCC-10.3.0 x x x x x x OpenBLAS/0.3.12-GCC-10.2.0 x x x x x x OpenBLAS/0.3.9-GCC-9.3.0 - x x - x x OpenBLAS/0.3.7-GCC-8.3.0 x x x - x x"}, {"location": "available_software/detail/OpenBabel/", "title": "OpenBabel", "text": ""}, {"location": "available_software/detail/OpenBabel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenBabel, load one of these modules using a module load command like:

          module load OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/OpenCV/", "title": "OpenCV", "text": ""}, {"location": "available_software/detail/OpenCV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenCV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenCV, load one of these modules using a module load command like:

          module load OpenCV/4.6.0-foss-2022a-contrib\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenCV/4.6.0-foss-2022a-contrib x x x x x x OpenCV/4.6.0-foss-2022a-CUDA-11.7.0-contrib x - x - x - OpenCV/4.5.5-foss-2021b-contrib x x x - x x OpenCV/4.5.3-foss-2021a-contrib - x x - x x OpenCV/4.5.3-foss-2021a-CUDA-11.3.1-contrib x - - - x - OpenCV/4.5.1-fosscuda-2020b-contrib x - - - x - OpenCV/4.5.1-foss-2020b-contrib - x x - x x OpenCV/4.2.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenCoarrays/", "title": "OpenCoarrays", "text": ""}, {"location": "available_software/detail/OpenCoarrays/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenCoarrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenCoarrays, load one of these modules using a module load command like:

          module load OpenCoarrays/2.8.0-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenCoarrays/2.8.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenEXR/", "title": "OpenEXR", "text": ""}, {"location": "available_software/detail/OpenEXR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenEXR, load one of these modules using a module load command like:

          module load OpenEXR/3.1.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x OpenEXR/3.1.5-GCCcore-11.3.0 x x x x x x OpenEXR/3.1.1-GCCcore-11.2.0 x x x - x x OpenEXR/3.0.1-GCCcore-10.3.0 x x x - x x OpenEXR/2.5.5-GCCcore-10.2.0 x x x x x x OpenEXR/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenFOAM-Extend/", "title": "OpenFOAM-Extend", "text": ""}, {"location": "available_software/detail/OpenFOAM-Extend/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFOAM-Extend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenFOAM-Extend, load one of these modules using a module load command like:

          module load OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16 - x x - x x OpenFOAM-Extend/4.1-20191120-intel-2019b-Python-2.7.16 - x x - x - OpenFOAM-Extend/4.0-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/OpenFOAM/", "title": "OpenFOAM", "text": ""}, {"location": "available_software/detail/OpenFOAM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenFOAM, load one of these modules using a module load command like:

          module load OpenFOAM/v2206-foss-2022a\n
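
          A minimal usage sketch, assuming the module sets FOAM_BASH so the OpenFOAM environment can be sourced after loading (the case directory and solver choice below are hypothetical):

          module load OpenFOAM/v2206-foss-2022a
          # initialise the OpenFOAM environment provided by the module
          source $FOAM_BASH
          # run a solver in a prepared case directory (hypothetical path)
          cd /path/to/my_case
          simpleFoam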

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFOAM/v2206-foss-2022a x x x x x x OpenFOAM/v2112-foss-2021b x x x x x x OpenFOAM/v2106-foss-2021a x x x x x x OpenFOAM/v2012-foss-2020a - x x - x x OpenFOAM/v2006-foss-2020a - x x - x x OpenFOAM/v1912-foss-2019b - x x - x x OpenFOAM/v1906-foss-2019b - x x - x x OpenFOAM/10-foss-2023a x x x x x x OpenFOAM/10-foss-2022a x x x x x x OpenFOAM/9-intel-2021a - x x - x x OpenFOAM/9-foss-2021a x x x x x x OpenFOAM/8-intel-2020b - x - - - - OpenFOAM/8-foss-2020b x x x x x x OpenFOAM/8-foss-2020a - x x - x x OpenFOAM/7-foss-2019b-20200508 x x x - x x OpenFOAM/7-foss-2019b - x x - x x OpenFOAM/6-foss-2019b - x x - x x OpenFOAM/5.0-20180606-foss-2019b - x x - x x OpenFOAM/2.3.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/OpenFace/", "title": "OpenFace", "text": ""}, {"location": "available_software/detail/OpenFace/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenFace, load one of these modules using a module load command like:

          module load OpenFace/2.2.0-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFace/2.2.0-foss-2021a-CUDA-11.3.1 - - - - x - OpenFace/2.2.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/OpenFold/", "title": "OpenFold", "text": ""}, {"location": "available_software/detail/OpenFold/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenFold, load one of these modules using a module load command like:

          module load OpenFold/1.0.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFold/1.0.1-foss-2022a-CUDA-11.7.0 - - x - - - OpenFold/1.0.1-foss-2021a-CUDA-11.3.1 x - - - x - OpenFold/1.0.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/OpenForceField/", "title": "OpenForceField", "text": ""}, {"location": "available_software/detail/OpenForceField/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenForceField installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenForceField, load one of these modules using a module load command like:

          module load OpenForceField/0.7.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenForceField/0.7.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenImageIO/", "title": "OpenImageIO", "text": ""}, {"location": "available_software/detail/OpenImageIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenImageIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenImageIO, load one of these modules using a module load command like:

          module load OpenImageIO/2.0.12-iimpi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenImageIO/2.0.12-iimpi-2019b - x x - x x OpenImageIO/2.0.12-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenJPEG/", "title": "OpenJPEG", "text": ""}, {"location": "available_software/detail/OpenJPEG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenJPEG, load one of these modules using a module load command like:

          module load OpenJPEG/2.5.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x OpenJPEG/2.5.0-GCCcore-11.3.0 x x x x x x OpenJPEG/2.4.0-GCCcore-11.2.0 x x x x x x OpenJPEG/2.4.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/OpenMM-PLUMED/", "title": "OpenMM-PLUMED", "text": ""}, {"location": "available_software/detail/OpenMM-PLUMED/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMM-PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenMM-PLUMED, load one of these modules using a module load command like:

          module load OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMM/", "title": "OpenMM", "text": ""}, {"location": "available_software/detail/OpenMM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenMM, load one of these modules using a module load command like:

          module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\n
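
          A hedged way to verify such an installation, assuming the testInstallation helper shipped with this OpenMM version uses the module path below (it may differ for older versions), is:

          module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0
          # list the available compute platforms (CPU, CUDA, ...) and compare forces between them
          python -m openmm.testInstallation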

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMM/8.0.0-foss-2022a-CUDA-11.7.0 x - - - x - OpenMM/8.0.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2022a-CUDA-11.7.0 - - x - - - OpenMM/7.7.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2021a-CUDA-11.3.1 x - - - x - OpenMM/7.7.0-foss-2021a x x x - x x OpenMM/7.5.1-fosscuda-2020b x - - - x - OpenMM/7.5.1-foss-2021b-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021b-CUDA-11.4.1-DeepMind-patch x - - - x - OpenMM/7.5.1-foss-2021a-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021a-CUDA-11.3.1-DeepMind-patch x - - - x - OpenMM/7.5.0-intel-2020b - x x - x x OpenMM/7.5.0-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.5.0-fosscuda-2020b x - - - x - OpenMM/7.5.0-foss-2020b x x x x x x OpenMM/7.4.2-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.4.1-intel-2019b-Python-3.7.4 - x x - x x OpenMM/7.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenMMTools/", "title": "OpenMMTools", "text": ""}, {"location": "available_software/detail/OpenMMTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMMTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenMMTools, load one of these modules using a module load command like:

          module load OpenMMTools/0.20.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMMTools/0.20.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMPI/", "title": "OpenMPI", "text": ""}, {"location": "available_software/detail/OpenMPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenMPI, load one of these modules using a module load command like:

          module load OpenMPI/4.1.6-GCC-13.2.0\n
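
          Loading an OpenMPI module also provides the matching compiler wrappers and launcher; a minimal sketch of building and running an MPI program (hello.c is a hypothetical source file) is:

          module load OpenMPI/4.1.6-GCC-13.2.0
          # compile with the MPI compiler wrapper provided by the module
          mpicc hello.c -o hello
          # launch 4 ranks on the current node
          mpirun -np 4 ./hello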

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMPI/4.1.6-GCC-13.2.0 x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x OpenMPI/4.1.4-GCC-11.3.0 x x x x x x OpenMPI/4.1.1-intel-compilers-2021.2.0 x x x x x x OpenMPI/4.1.1-GCC-11.2.0 x x x x x x OpenMPI/4.1.1-GCC-10.3.0 x x x x x x OpenMPI/4.0.5-iccifort-2020.4.304 x x x x x x OpenMPI/4.0.5-gcccuda-2020b x x x x x x OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1 x - x - x - OpenMPI/4.0.5-GCC-10.2.0 x x x x x x OpenMPI/4.0.3-iccifort-2020.1.217 - x - - - - OpenMPI/4.0.3-GCC-9.3.0 - x x x x x OpenMPI/3.1.4-GCC-8.3.0-ucx - x - - - - OpenMPI/3.1.4-GCC-8.3.0 x x x x x x"}, {"location": "available_software/detail/OpenMolcas/", "title": "OpenMolcas", "text": ""}, {"location": "available_software/detail/OpenMolcas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMolcas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenMolcas, load one of these modules using a module load command like:

          module load OpenMolcas/21.06-iomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMolcas/21.06-iomkl-2021a x x x x x x OpenMolcas/21.06-intel-2021a - x x - x x"}, {"location": "available_software/detail/OpenPGM/", "title": "OpenPGM", "text": ""}, {"location": "available_software/detail/OpenPGM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenPGM, load one of these modules using a module load command like:

          module load OpenPGM/5.2.122-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-12.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-9.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenPIV/", "title": "OpenPIV", "text": ""}, {"location": "available_software/detail/OpenPIV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenPIV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenPIV, load one of these modules using a module load command like:

          module load OpenPIV/0.21.8-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenPIV/0.21.8-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenSSL/", "title": "OpenSSL", "text": ""}, {"location": "available_software/detail/OpenSSL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenSSL, load one of these modules using a module load command like:

          module load OpenSSL/1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSSL/1.1 x x x x x x"}, {"location": "available_software/detail/OpenSees/", "title": "OpenSees", "text": ""}, {"location": "available_software/detail/OpenSees/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSees installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenSees, load one of these modules using a module load command like:

          module load OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel - x x - x x OpenSees/3.2.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenSlide-Java/", "title": "OpenSlide-Java", "text": ""}, {"location": "available_software/detail/OpenSlide-Java/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSlide-Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenSlide-Java, load one of these modules using a module load command like:

          module load OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/OpenSlide/", "title": "OpenSlide", "text": ""}, {"location": "available_software/detail/OpenSlide/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSlide installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenSlide, load one of these modules using a module load command like:

          module load OpenSlide/3.4.1-GCCcore-12.3.0-largefiles\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSlide/3.4.1-GCCcore-12.3.0-largefiles x x x x x x OpenSlide/3.4.1-GCCcore-11.3.0-largefiles x - x - x - OpenSlide/3.4.1-GCCcore-11.2.0 x x x - x x OpenSlide/3.4.1-GCCcore-10.3.0-largefiles x x x - x x"}, {"location": "available_software/detail/Optuna/", "title": "Optuna", "text": ""}, {"location": "available_software/detail/Optuna/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Optuna installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Optuna, load one of these modules using a module load command like:

          module load Optuna/3.1.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Optuna/3.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/OrthoFinder/", "title": "OrthoFinder", "text": ""}, {"location": "available_software/detail/OrthoFinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OrthoFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OrthoFinder, load one of these modules using a module load command like:

          module load OrthoFinder/2.5.5-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OrthoFinder/2.5.5-foss-2023a x x x x x x OrthoFinder/2.5.4-foss-2020b - x x x x x OrthoFinder/2.5.2-foss-2020b - x x x x x OrthoFinder/2.3.11-intel-2019b-Python-3.7.4 - x x - x x OrthoFinder/2.3.8-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Osi/", "title": "Osi", "text": ""}, {"location": "available_software/detail/Osi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Osi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Osi, load one of these modules using a module load command like:

          module load Osi/0.108.9-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Osi/0.108.9-GCC-12.3.0 x x x x x x Osi/0.108.8-GCC-12.2.0 x x x x x x Osi/0.108.7-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PASA/", "title": "PASA", "text": ""}, {"location": "available_software/detail/PASA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PASA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PASA, load one of these modules using a module load command like:

          module load PASA/2.5.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PASA/2.5.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/PBGZIP/", "title": "PBGZIP", "text": ""}, {"location": "available_software/detail/PBGZIP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PBGZIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PBGZIP, load one of these modules using a module load command like:

          module load PBGZIP/20160804-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PBGZIP/20160804-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PCRE/", "title": "PCRE", "text": ""}, {"location": "available_software/detail/PCRE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PCRE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PCRE, load one of these modules using a module load command like:

          module load PCRE/8.45-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PCRE/8.45-GCCcore-12.3.0 x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x PCRE/8.45-GCCcore-11.3.0 x x x x x x PCRE/8.45-GCCcore-11.2.0 x x x x x x PCRE/8.44-GCCcore-10.3.0 x x x x x x PCRE/8.44-GCCcore-10.2.0 x x x x x x PCRE/8.44-GCCcore-9.3.0 x x x x x x PCRE/8.43-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/PCRE2/", "title": "PCRE2", "text": ""}, {"location": "available_software/detail/PCRE2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PCRE2, load one of these modules using a module load command like:

          module load PCRE2/10.42-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PCRE2/10.42-GCCcore-12.3.0 x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x PCRE2/10.40-GCCcore-11.3.0 x x x x x x PCRE2/10.37-GCCcore-11.2.0 x x x x x x PCRE2/10.36-GCCcore-10.3.0 x x x x x x PCRE2/10.36 - x x - x - PCRE2/10.35-GCCcore-10.2.0 x x x x x x PCRE2/10.34-GCCcore-9.3.0 - x x - x x PCRE2/10.33-GCCcore-8.3.0 x x x - x x PCRE2/10.32 - - x - x -"}, {"location": "available_software/detail/PEAR/", "title": "PEAR", "text": ""}, {"location": "available_software/detail/PEAR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PEAR, load one of these modules using a module load command like:

          module load PEAR/0.9.11-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PEAR/0.9.11-GCCcore-9.3.0 - x x - x x PEAR/0.9.11-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/PETSc/", "title": "PETSc", "text": ""}, {"location": "available_software/detail/PETSc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PETSc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PETSc, load one of these modules using a module load command like:

          module load PETSc/3.18.4-intel-2021b\n
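
          Loading a toolchain-suffixed module such as this one normally pulls in its compatible dependencies (compiler, MPI, math libraries) automatically; running module list afterwards shows what ended up in your environment (a minimal sketch):

          module load PETSc/3.18.4-intel-2021b\nmodule list\n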

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PETSc/3.18.4-intel-2021b x x x x x x PETSc/3.17.4-foss-2022a x x x x x x PETSc/3.15.1-foss-2021a - x x - x x PETSc/3.14.4-foss-2020b - x x x x x PETSc/3.12.4-intel-2019b-Python-3.7.4 - - x - x - PETSc/3.12.4-intel-2019b-Python-2.7.16 - x x - x x PETSc/3.12.4-foss-2020a-Python-3.8.2 - x x - x x PETSc/3.12.4-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/PHYLIP/", "title": "PHYLIP", "text": ""}, {"location": "available_software/detail/PHYLIP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PHYLIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PHYLIP, load one of these modules using a module load command like:

          module load PHYLIP/3.697-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PHYLIP/3.697-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/PICRUSt2/", "title": "PICRUSt2", "text": ""}, {"location": "available_software/detail/PICRUSt2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PICRUSt2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PICRUSt2, load one of these modules using a module load command like:

          module load PICRUSt2/2.5.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PICRUSt2/2.5.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/PLAMS/", "title": "PLAMS", "text": ""}, {"location": "available_software/detail/PLAMS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLAMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLAMS, load one of these modules using a module load command like:

          module load PLAMS/1.5.1-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLAMS/1.5.1-intel-2022a x x x x x x"}, {"location": "available_software/detail/PLINK/", "title": "PLINK", "text": ""}, {"location": "available_software/detail/PLINK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLINK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLINK, load one of these modules using a module load command like:

          module load PLINK/2.00a3.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLINK/2.00a3.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PLUMED/", "title": "PLUMED", "text": ""}, {"location": "available_software/detail/PLUMED/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLUMED, load one of these modules using a module load command like:

          module load PLUMED/2.9.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLUMED/2.9.0-foss-2023a x x x x x x PLUMED/2.9.0-foss-2022b x x x x x x PLUMED/2.8.1-foss-2022a x x x x x x PLUMED/2.7.3-foss-2021b x x x - x x PLUMED/2.7.2-foss-2021a x x x x x x PLUMED/2.6.2-intelcuda-2020b - - - - x - PLUMED/2.6.2-intel-2020b - x x - x - PLUMED/2.6.2-foss-2020b - x x x x x PLUMED/2.6.0-iomkl-2020a-Python-3.8.2 - x - - - - PLUMED/2.6.0-intel-2020a-Python-3.8.2 - x x - x x PLUMED/2.6.0-foss-2020a-Python-3.8.2 - x x - x x PLUMED/2.5.3-intel-2019b-Python-3.7.4 - x x - x x PLUMED/2.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PLY/", "title": "PLY", "text": ""}, {"location": "available_software/detail/PLY/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLY, load one of these modules using a module load command like:

          module load PLY/3.11-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLY/3.11-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PMIx/", "title": "PMIx", "text": ""}, {"location": "available_software/detail/PMIx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PMIx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PMIx, load one of these modules using a module load command like:

          module load PMIx/4.2.6-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PMIx/4.2.6-GCCcore-13.2.0 x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x PMIx/4.1.2-GCCcore-11.3.0 x x x x x x PMIx/4.1.0-GCCcore-11.2.0 x x x x x x PMIx/3.2.3-GCCcore-10.3.0 x x x x x x PMIx/3.1.5-GCCcore-10.2.0 x x x x x x PMIx/3.1.5-GCCcore-9.3.0 x x x x x x PMIx/3.1.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/POT/", "title": "POT", "text": ""}, {"location": "available_software/detail/POT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which POT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using POT, load one of these modules using a module load command like:

          module load POT/0.9.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty POT/0.9.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/POV-Ray/", "title": "POV-Ray", "text": ""}, {"location": "available_software/detail/POV-Ray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which POV-Ray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using POV-Ray, load one of these modules using a module load command like:

          module load POV-Ray/3.7.0.8-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty POV-Ray/3.7.0.8-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/PPanGGOLiN/", "title": "PPanGGOLiN", "text": ""}, {"location": "available_software/detail/PPanGGOLiN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PPanGGOLiN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PPanGGOLiN, load one of these modules using a module load command like:

          module load PPanGGOLiN/1.1.136-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PPanGGOLiN/1.1.136-foss-2021b x x x - x x"}, {"location": "available_software/detail/PRANK/", "title": "PRANK", "text": ""}, {"location": "available_software/detail/PRANK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PRANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PRANK, load one of these modules using a module load command like:

          module load PRANK/170427-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PRANK/170427-GCC-10.2.0 - x x x x x PRANK/170427-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/PRINSEQ/", "title": "PRINSEQ", "text": ""}, {"location": "available_software/detail/PRINSEQ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PRINSEQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PRINSEQ, load one of these modules using a module load command like:

          module load PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0 x x x - x x PRINSEQ/0.20.4-foss-2020b-Perl-5.32.0 - x x x x -"}, {"location": "available_software/detail/PRISMS-PF/", "title": "PRISMS-PF", "text": ""}, {"location": "available_software/detail/PRISMS-PF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PRISMS-PF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PRISMS-PF, load one of these modules using a module load command like:

          module load PRISMS-PF/2.2-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PRISMS-PF/2.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/PROJ/", "title": "PROJ", "text": ""}, {"location": "available_software/detail/PROJ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PROJ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PROJ, load one of these modules using a module load command like:

          module load PROJ/9.2.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PROJ/9.2.0-GCCcore-12.3.0 x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x PROJ/9.0.0-GCCcore-11.3.0 x x x x x x PROJ/8.1.0-GCCcore-11.2.0 x x x x x x PROJ/8.0.1-GCCcore-10.3.0 x x x x x x PROJ/7.2.1-GCCcore-10.2.0 - x x x x x PROJ/7.0.0-GCCcore-9.3.0 - x x - x x PROJ/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pandoc/", "title": "Pandoc", "text": ""}, {"location": "available_software/detail/Pandoc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pandoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pandoc, load one of these modules using a module load command like:

          module load Pandoc/2.13\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pandoc/2.13 - x x x x x"}, {"location": "available_software/detail/Pango/", "title": "Pango", "text": ""}, {"location": "available_software/detail/Pango/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pango installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pango, load one of these modules using a module load command like:

          module load Pango/1.50.14-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pango/1.50.14-GCCcore-12.3.0 x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x Pango/1.50.7-GCCcore-11.3.0 x x x x x x Pango/1.48.8-GCCcore-11.2.0 x x x x x x Pango/1.48.5-GCCcore-10.3.0 x x x x x x Pango/1.47.0-GCCcore-10.2.0 x x x x x x Pango/1.44.7-GCCcore-9.3.0 - x x - x x Pango/1.44.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/ParMETIS/", "title": "ParMETIS", "text": ""}, {"location": "available_software/detail/ParMETIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParMETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParMETIS, load one of these modules using a module load command like:

          module load ParMETIS/4.0.3-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParMETIS/4.0.3-iimpi-2020a - x x - x x ParMETIS/4.0.3-iimpi-2019b - x x - x x ParMETIS/4.0.3-gompi-2022a x x x x x x ParMETIS/4.0.3-gompi-2021a - x x - x x ParMETIS/4.0.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParMGridGen/", "title": "ParMGridGen", "text": ""}, {"location": "available_software/detail/ParMGridGen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParMGridGen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParMGridGen, load one of these modules using a module load command like:

          module load ParMGridGen/1.0-iimpi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParMGridGen/1.0-iimpi-2019b - x x - x x ParMGridGen/1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParaView/", "title": "ParaView", "text": ""}, {"location": "available_software/detail/ParaView/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParaView installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParaView, load one of these modules using a module load command like:

          module load ParaView/5.11.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParaView/5.11.2-foss-2023a x x x x x x ParaView/5.10.1-foss-2022a-mpi x x x x x x ParaView/5.9.1-intel-2021a-mpi - x x - x x ParaView/5.9.1-foss-2021b-mpi x x x x x x ParaView/5.9.1-foss-2021a-mpi x x x x x x ParaView/5.8.1-intel-2020b-mpi - x - - - - ParaView/5.8.1-foss-2020b-mpi x x x x x x ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi - x x - x x ParaView/5.6.2-foss-2019b-Python-3.7.4-mpi x x x - x x ParaView/5.4.1-foss-2019b-Python-2.7.16-mpi - x x - x x"}, {"location": "available_software/detail/ParmEd/", "title": "ParmEd", "text": ""}, {"location": "available_software/detail/ParmEd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParmEd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParmEd, load one of these modules using a module load command like:

          module load ParmEd/3.2.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParmEd/3.2.0-intel-2020a-Python-3.8.2 - x x - x x ParmEd/3.2.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Parsl/", "title": "Parsl", "text": ""}, {"location": "available_software/detail/Parsl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Parsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Parsl, load one of these modules using a module load command like:

          module load Parsl/2023.7.17-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Parsl/2023.7.17-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PartitionFinder/", "title": "PartitionFinder", "text": ""}, {"location": "available_software/detail/PartitionFinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PartitionFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PartitionFinder, load one of these modules using a module load command like:

          module load PartitionFinder/2.1.1-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PartitionFinder/2.1.1-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/Perl-bundle-CPAN/", "title": "Perl-bundle-CPAN", "text": ""}, {"location": "available_software/detail/Perl-bundle-CPAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Perl-bundle-CPAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

          module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Perl/", "title": "Perl", "text": ""}, {"location": "available_software/detail/Perl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Perl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Perl, load one of these modules using a module load command like:

          module load Perl/5.38.0-GCCcore-13.2.0\n
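
          Note that the overview below lists both full and -minimal builds; the full builds typically bundle extra CPAN modules on top of core Perl. After loading, perl -v reports the version, and the -M flag is a handy way to test whether a particular Perl module is available (a minimal sketch; List::Util is a core module, so it should always load):

          module load Perl/5.38.0-GCCcore-13.2.0\nperl -v\nperl -MList::Util -e 1 && echo 'List::Util available'\n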

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Perl/5.38.0-GCCcore-13.2.0 x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x Perl/5.34.1-GCCcore-11.3.0-minimal x x x x x x Perl/5.34.1-GCCcore-11.3.0 x x x x x x Perl/5.34.0-GCCcore-11.2.0-minimal x x x x x x Perl/5.34.0-GCCcore-11.2.0 x x x x x x Perl/5.32.1-GCCcore-10.3.0-minimal x x x x x x Perl/5.32.1-GCCcore-10.3.0 x x x x x x Perl/5.32.0-GCCcore-10.2.0-minimal x x x x x x Perl/5.32.0-GCCcore-10.2.0 x x x x x x Perl/5.30.2-GCCcore-9.3.0-minimal x x x x x x Perl/5.30.2-GCCcore-9.3.0 x x x x x x Perl/5.30.0-GCCcore-8.3.0-minimal x x x x x x Perl/5.30.0-GCCcore-8.3.0 x x x x x x Perl/5.28.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Phenoflow/", "title": "Phenoflow", "text": ""}, {"location": "available_software/detail/Phenoflow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Phenoflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Phenoflow, load one of these modules using a module load command like:

          module load Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/PhyloPhlAn/", "title": "PhyloPhlAn", "text": ""}, {"location": "available_software/detail/PhyloPhlAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PhyloPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PhyloPhlAn, load one of these modules using a module load command like:

          module load PhyloPhlAn/3.0.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PhyloPhlAn/3.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Pillow-SIMD/", "title": "Pillow-SIMD", "text": ""}, {"location": "available_software/detail/Pillow-SIMD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pillow-SIMD, load one of these modules using a module load command like:

          module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x Pillow-SIMD/9.5.0-GCCcore-12.2.0 x x x x x x Pillow-SIMD/9.2.0-GCCcore-11.3.0 x x x x x x Pillow-SIMD/8.2.0-GCCcore-10.3.0 x x x - x x Pillow-SIMD/7.1.2-GCCcore-10.2.0 x x x x x x Pillow-SIMD/6.0.x.post0-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/Pillow/", "title": "Pillow", "text": ""}, {"location": "available_software/detail/Pillow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pillow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pillow, load one of these modules using a module load command like:

          module load Pillow/10.2.0-GCCcore-13.2.0\n
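
          Pillow is a Python package, so after loading the module it should be importable from the Python that is loaded along with it as a dependency (a minimal sketch, assuming python on your PATH is that bundled Python):

          module load Pillow/10.2.0-GCCcore-13.2.0\npython -c 'import PIL; print(PIL.__version__)'\n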

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pillow/10.2.0-GCCcore-13.2.0 x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x Pillow/9.1.1-GCCcore-11.3.0 x x x x x x Pillow/8.3.2-GCCcore-11.2.0 x x x x x x Pillow/8.3.1-GCCcore-11.2.0 x x x - x x Pillow/8.2.0-GCCcore-10.3.0 x x x x x x Pillow/8.0.1-GCCcore-10.2.0 x x x x x x Pillow/7.0.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x Pillow/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pilon/", "title": "Pilon", "text": ""}, {"location": "available_software/detail/Pilon/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pilon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pilon, load one of these modules using a module load command like:

          module load Pilon/1.23-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pilon/1.23-Java-11 x x x x x x Pilon/1.23-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Pint/", "title": "Pint", "text": ""}, {"location": "available_software/detail/Pint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pint, load one of these modules using a module load command like:

          module load Pint/0.22-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pint/0.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PnetCDF/", "title": "PnetCDF", "text": ""}, {"location": "available_software/detail/PnetCDF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PnetCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PnetCDF, load one of these modules using a module load command like:

          module load PnetCDF/1.12.3-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PnetCDF/1.12.3-gompi-2022a x - x - x - PnetCDF/1.12.3-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Porechop/", "title": "Porechop", "text": ""}, {"location": "available_software/detail/Porechop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Porechop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Porechop, load one of these modules using a module load command like:

          module load Porechop/0.2.4-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Porechop/0.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PostgreSQL/", "title": "PostgreSQL", "text": ""}, {"location": "available_software/detail/PostgreSQL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PostgreSQL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PostgreSQL, load one of these modules using a module load command like:

          module load PostgreSQL/16.1-GCCcore-12.3.0\n
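
          This module provides the PostgreSQL client and server tools; checking the psql client version is a quick way to confirm the load (a minimal sketch):

          module load PostgreSQL/16.1-GCCcore-12.3.0\npsql --version\n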

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x PostgreSQL/14.4-GCCcore-11.3.0 x x x x x x PostgreSQL/13.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/Primer3/", "title": "Primer3", "text": ""}, {"location": "available_software/detail/Primer3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Primer3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Primer3, load one of these modules using a module load command like:

          module load Primer3/2.5.0-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Primer3/2.5.0-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/ProBiS/", "title": "ProBiS", "text": ""}, {"location": "available_software/detail/ProBiS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ProBiS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ProBiS, load one of these modules using a module load command like:

          module load ProBiS/20230403-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ProBiS/20230403-gompi-2022b x x x x x x"}, {"location": "available_software/detail/ProtHint/", "title": "ProtHint", "text": ""}, {"location": "available_software/detail/ProtHint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ProtHint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ProtHint, load one of these modules using a module load command like:

          module load ProtHint/2.6.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ProtHint/2.6.0-GCC-11.3.0 x x x x x x ProtHint/2.6.0-GCC-11.2.0 x x x x x x ProtHint/2.6.0-GCC-10.2.0 x x x x x x ProtHint/2.4.0-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/PsiCLASS/", "title": "PsiCLASS", "text": ""}, {"location": "available_software/detail/PsiCLASS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PsiCLASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PsiCLASS, load one of these modules using a module load command like:

          module load PsiCLASS/1.0.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PsiCLASS/1.0.3-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/PuLP/", "title": "PuLP", "text": ""}, {"location": "available_software/detail/PuLP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PuLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PuLP, load one of these modules using a module load command like:

          module load PuLP/2.8.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PuLP/2.8.0-foss-2023a x x x x x x PuLP/2.7.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/PyBerny/", "title": "PyBerny", "text": ""}, {"location": "available_software/detail/PyBerny/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyBerny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyBerny, load one of these modules using a module load command like:

          module load PyBerny/0.6.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyBerny/0.6.3-foss-2022b x x x x x x PyBerny/0.6.3-foss-2022a - x x x x x PyBerny/0.6.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyCairo/", "title": "PyCairo", "text": ""}, {"location": "available_software/detail/PyCairo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyCairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyCairo, load one of these modules using a module load command like:

          module load PyCairo/1.21.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyCairo/1.21.0-GCCcore-11.3.0 x x x x x x PyCairo/1.20.1-GCCcore-11.2.0 x x x x x x PyCairo/1.20.1-GCCcore-10.3.0 x x x x x x PyCairo/1.20.0-GCCcore-10.2.0 - x x x x x PyCairo/1.18.2-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/PyCalib/", "title": "PyCalib", "text": ""}, {"location": "available_software/detail/PyCalib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyCalib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyCalib, load one of these modules using a module load command like:

          module load PyCalib/20230531-gfbf-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyCalib/20230531-gfbf-2022b x x x x x x PyCalib/0.1.0.dev0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyCheMPS2/", "title": "PyCheMPS2", "text": ""}, {"location": "available_software/detail/PyCheMPS2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyCheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyCheMPS2, load one of these modules using a module load command like:

          module load PyCheMPS2/1.8.12-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyCheMPS2/1.8.12-foss-2022b x x x x x x PyCheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/PyFoam/", "title": "PyFoam", "text": ""}, {"location": "available_software/detail/PyFoam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyFoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyFoam, load one of these modules using a module load command like:

          module load PyFoam/2020.5-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyFoam/2020.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyGEOS/", "title": "PyGEOS", "text": ""}, {"location": "available_software/detail/PyGEOS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyGEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyGEOS, load one of these modules using a module load command like:

          module load PyGEOS/0.8-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyGEOS/0.8-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyGObject/", "title": "PyGObject", "text": ""}, {"location": "available_software/detail/PyGObject/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyGObject installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyGObject, load one of these modules using a module load command like:

          module load PyGObject/3.42.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyGObject/3.42.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyInstaller/", "title": "PyInstaller", "text": ""}, {"location": "available_software/detail/PyInstaller/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyInstaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyInstaller, load one of these modules using a module load command like:

          module load PyInstaller/6.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyInstaller/6.3.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/PyKeOps/", "title": "PyKeOps", "text": ""}, {"location": "available_software/detail/PyKeOps/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyKeOps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyKeOps, load one of these modules using a module load command like:

          module load PyKeOps/2.0-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyKeOps/2.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/PyMC/", "title": "PyMC", "text": ""}, {"location": "available_software/detail/PyMC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMC, load one of these modules using a module load command like:

          module load PyMC/5.9.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMC/5.9.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/PyMC3/", "title": "PyMC3", "text": ""}, {"location": "available_software/detail/PyMC3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMC3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMC3, load one of these modules using a module load command like:

          module load PyMC3/3.11.1-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMC3/3.11.1-intel-2021b x x x - x x PyMC3/3.11.1-intel-2020b - - x - x x PyMC3/3.11.1-fosscuda-2020b - - - - x - PyMC3/3.8-intel-2019b-Python-3.7.4 - - x - x x PyMC3/3.8-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyMDE/", "title": "PyMDE", "text": ""}, {"location": "available_software/detail/PyMDE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMDE, load one of these modules using a module load command like:

          module load PyMDE/0.1.18-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMDE/0.1.18-foss-2022a-CUDA-11.7.0 x - x - x - PyMDE/0.1.18-foss-2022a x x x x x x"}, {"location": "available_software/detail/PyMOL/", "title": "PyMOL", "text": ""}, {"location": "available_software/detail/PyMOL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMOL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMOL, load one of these modules using a module load command like:

          module load PyMOL/2.5.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMOL/2.5.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/PyOD/", "title": "PyOD", "text": ""}, {"location": "available_software/detail/PyOD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyOD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyOD, load one of these modules using a module load command like:

          module load PyOD/0.8.7-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyOD/0.8.7-intel-2020b - x x - x x PyOD/0.8.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenCL/", "title": "PyOpenCL", "text": ""}, {"location": "available_software/detail/PyOpenCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyOpenCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyOpenCL, load one of these modules using a module load command like:

          module load PyOpenCL/2023.1.4-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyOpenCL/2023.1.4-foss-2023a x x x x x x PyOpenCL/2023.1.4-foss-2022a-CUDA-11.7.0 x - - - x - PyOpenCL/2023.1.4-foss-2022a x x x x x x PyOpenCL/2021.2.13-foss-2021b-CUDA-11.4.1 x - - - x - PyOpenCL/2021.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenGL/", "title": "PyOpenGL", "text": ""}, {"location": "available_software/detail/PyOpenGL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyOpenGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyOpenGL, load one of these modules using a module load command like:

          module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.2.0 x x x - x x PyOpenGL/3.1.5-GCCcore-10.3.0 - x x - x x PyOpenGL/3.1.5-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/PyPy/", "title": "PyPy", "text": ""}, {"location": "available_software/detail/PyPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyPy, load one of these modules using a module load command like:

          module load PyPy/7.3.12-3.10\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyPy/7.3.12-3.10 x x x x x x"}, {"location": "available_software/detail/PyQt5/", "title": "PyQt5", "text": ""}, {"location": "available_software/detail/PyQt5/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyQt5, load one of these modules using a module load command like:

          module load PyQt5/5.15.7-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyQt5/5.15.7-GCCcore-12.2.0 x x x x x x PyQt5/5.15.5-GCCcore-11.3.0 x x x x x x PyQt5/5.15.4-GCCcore-11.2.0 x x x x x x PyQt5/5.15.4-GCCcore-10.3.0 - x x - x x PyQt5/5.15.1-GCCcore-10.2.0 x x x x x x PyQt5/5.15.1-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyQtGraph/", "title": "PyQtGraph", "text": ""}, {"location": "available_software/detail/PyQtGraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyQtGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyQtGraph, load one of these modules using a module load command like:

          module load PyQtGraph/0.13.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyQtGraph/0.13.3-foss-2022a x x x x x x PyQtGraph/0.12.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/PyRETIS/", "title": "PyRETIS", "text": ""}, {"location": "available_software/detail/PyRETIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyRETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyRETIS, load one of these modules using a module load command like:

          module load PyRETIS/2.5.0-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyRETIS/2.5.0-intel-2020b - x x - x x PyRETIS/2.5.0-intel-2020a-Python-3.8.2 - - x - x x PyRETIS/2.5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyRe/", "title": "PyRe", "text": ""}, {"location": "available_software/detail/PyRe/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyRe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyRe, load one of these modules using a module load command like:

          module load PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4 - x - - - x PyRe/5.0.3-20190221-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PySCF/", "title": "PySCF", "text": ""}, {"location": "available_software/detail/PySCF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PySCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PySCF, load one of these modules using a module load command like:

          module load PySCF/2.4.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PySCF/2.4.0-foss-2022b x x x x x x PySCF/2.1.1-foss-2022a - x x x x x PySCF/1.7.6-gomkl-2021a x x x - x x PySCF/1.7.6-foss-2021a x x x - x x"}, {"location": "available_software/detail/PyStan/", "title": "PyStan", "text": ""}, {"location": "available_software/detail/PyStan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyStan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyStan, load one of these modules using a module load command like:

          module load PyStan/2.19.1.1-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyStan/2.19.1.1-intel-2020b - x x - x x"}, {"location": "available_software/detail/PyTables/", "title": "PyTables", "text": ""}, {"location": "available_software/detail/PyTables/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTables installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTables, load one of these modules using a module load command like:

          module load PyTables/3.8.0-foss-2022a\n
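
          Note that the Python import name differs from the module name: PyTables is imported as tables. A minimal post-load check, assuming the bundled Python is on your PATH:

          module load PyTables/3.8.0-foss-2022a\npython -c 'import tables; print(tables.__version__)'\n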

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTables/3.8.0-foss-2022a x x x x x x PyTables/3.6.1-intel-2020b - x x - x x PyTables/3.6.1-intel-2020a-Python-3.8.2 x x x x x x PyTables/3.6.1-fosscuda-2020b - - - - x - PyTables/3.6.1-foss-2021b x x x x x x PyTables/3.6.1-foss-2021a x x x x x x PyTables/3.6.1-foss-2020b - x x x x x PyTables/3.6.1-foss-2020a-Python-3.8.2 - x x - x x PyTables/3.6.1-foss-2019b-Python-3.7.4 - x x - x x PyTables/3.5.2-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/PyTensor/", "title": "PyTensor", "text": ""}, {"location": "available_software/detail/PyTensor/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTensor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTensor, load one of these modules using a module load command like:

          module load PyTensor/2.17.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTensor/2.17.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/PyTorch-Geometric/", "title": "PyTorch-Geometric", "text": ""}, {"location": "available_software/detail/PyTorch-Geometric/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch-Geometric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch-Geometric, load one of these modules using a module load command like:

          module load PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1 - - - - x - PyTorch-Geometric/1.7.0-foss-2020b-numba-0.53.1 - x x - x x PyTorch-Geometric/1.6.3-fosscuda-2020b - - - - x - PyTorch-Geometric/1.4.2-foss-2019b-Python-3.7.4-PyTorch-1.4.0 - x x - x x PyTorch-Geometric/1.3.2-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PyTorch-Ignite/", "title": "PyTorch-Ignite", "text": ""}, {"location": "available_software/detail/PyTorch-Ignite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch-Ignite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch-Ignite, load one of these modules using a module load command like:

          module load PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/PyTorch-Lightning/", "title": "PyTorch-Lightning", "text": ""}, {"location": "available_software/detail/PyTorch-Lightning/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch-Lightning installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch-Lightning, load one of these modules using a module load command like:

          module load PyTorch-Lightning/2.1.3-foss-2023a\n
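
          A minimal sketch of a post-load check (assuming the module provides the pytorch_lightning Python package; the version above is used purely as an example):

          module load PyTorch-Lightning/2.1.3-foss-2023a\npython -c 'import pytorch_lightning; print(pytorch_lightning.__version__)'\n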

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch-Lightning/2.1.3-foss-2023a x x x x x x PyTorch-Lightning/2.1.2-foss-2022b x x x x x x PyTorch-Lightning/1.8.4-foss-2022a-CUDA-11.7.0 x - - - x - PyTorch-Lightning/1.8.4-foss-2022a x x x x x x PyTorch-Lightning/1.7.7-foss-2022a-CUDA-11.7.0 - - x - - - PyTorch-Lightning/1.5.9-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch-Lightning/1.5.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/PyTorch/", "title": "PyTorch", "text": ""}, {"location": "available_software/detail/PyTorch/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch, load one of these modules using a module load command like:

          module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\n
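
          A minimal sketch of how you might verify a CUDA-enabled PyTorch module after loading it (the CUDA check only reports True when run on a node that actually has a GPU, e.g. inside a job on a GPU cluster; the version above is just an example):

          module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\npython -c 'import torch; print(torch.__version__, torch.cuda.is_available())'\n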

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch/2.1.2-foss-2023a-CUDA-12.1.1 x - x - x - PyTorch/2.1.2-foss-2023a x x x x x x PyTorch/1.13.1-foss-2022b x x x x x x PyTorch/1.13.1-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.1-foss-2022a-CUDA-11.7.0 - - x - x - PyTorch/1.12.1-foss-2022a x x x x - x PyTorch/1.12.1-foss-2021b - x x x x x PyTorch/1.12.0-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.0-foss-2022a x x x x x x PyTorch/1.11.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-fosscuda-2020b x - - - - - PyTorch/1.10.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-foss-2021a x x x x x x PyTorch/1.9.0-fosscuda-2020b x - - - - - PyTorch/1.8.1-fosscuda-2020b x - - - - - PyTorch/1.7.1-fosscuda-2020b x - - - x - PyTorch/1.7.1-foss-2020b - x x x x x PyTorch/1.6.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.4.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.3.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyVCF/", "title": "PyVCF", "text": ""}, {"location": "available_software/detail/PyVCF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyVCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyVCF, load one of these modules using a module load command like:

          module load PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16 - - x - x - PyVCF/0.6.8-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/PyVCF3/", "title": "PyVCF3", "text": ""}, {"location": "available_software/detail/PyVCF3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyVCF3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyVCF3, load one of these modules using a module load command like:

          module load PyVCF3/1.0.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyVCF3/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyWBGT/", "title": "PyWBGT", "text": ""}, {"location": "available_software/detail/PyWBGT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyWBGT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyWBGT, load one of these modules using a module load command like:

          module load PyWBGT/1.0.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyWBGT/1.0.0-foss-2022a x x x x x x PyWBGT/1.0.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyWavelets/", "title": "PyWavelets", "text": ""}, {"location": "available_software/detail/PyWavelets/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyWavelets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyWavelets, load one of these modules using a module load command like:

          module load PyWavelets/1.1.1-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyWavelets/1.1.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyYAML/", "title": "PyYAML", "text": ""}, {"location": "available_software/detail/PyYAML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyYAML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyYAML, load one of these modules using a module load command like:

          module load PyYAML/6.0-GCCcore-12.3.0\n
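
          As a quick check (a minimal sketch, using the version above as an example), you can confirm the yaml package is importable after loading the module:

          module load PyYAML/6.0-GCCcore-12.3.0\npython -c 'import yaml; print(yaml.__version__)'\n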

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyYAML/6.0-GCCcore-12.3.0 x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x PyYAML/6.0-GCCcore-11.3.0 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0 x x x x x x PyYAML/5.4.1-GCCcore-10.3.0 x x x x x x PyYAML/5.3.1-GCCcore-10.2.0 x x x x x x PyYAML/5.3-GCCcore-9.3.0 x x x x x x PyYAML/5.1.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/PyZMQ/", "title": "PyZMQ", "text": ""}, {"location": "available_software/detail/PyZMQ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyZMQ, load one of these modules using a module load command like:

          module load PyZMQ/25.1.1-GCCcore-12.3.0\n
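
          A minimal post-load check (sketch only; the version above is an example) that prints both the pyzmq binding version and the underlying libzmq version:

          module load PyZMQ/25.1.1-GCCcore-12.3.0\npython -c 'import zmq; print(zmq.__version__, zmq.zmq_version())'\n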

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x PyZMQ/24.0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PycURL/", "title": "PycURL", "text": ""}, {"location": "available_software/detail/PycURL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PycURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PycURL, load one of these modules using a module load command like:

          module load PycURL/7.45.2-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PycURL/7.45.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Pychopper/", "title": "Pychopper", "text": ""}, {"location": "available_software/detail/Pychopper/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pychopper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pychopper, load one of these modules using a module load command like:

          module load Pychopper/2.3.1-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pychopper/2.3.1-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Pyomo/", "title": "Pyomo", "text": ""}, {"location": "available_software/detail/Pyomo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pyomo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pyomo, load one of these modules using a module load command like:

          module load Pyomo/6.4.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pyomo/6.4.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/Pysam/", "title": "Pysam", "text": ""}, {"location": "available_software/detail/Pysam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pysam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pysam, load one of these modules using a module load command like:

          module load Pysam/0.22.0-GCC-12.3.0\n
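
          As a quick sanity check (a minimal sketch; substitute a version available on your cluster), confirm that pysam is importable:

          module load Pysam/0.22.0-GCC-12.3.0\npython -c 'import pysam; print(pysam.__version__)'\n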

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pysam/0.22.0-GCC-12.3.0 x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x Pysam/0.19.1-GCC-11.3.0 x x x x x x Pysam/0.18.0-GCC-11.2.0 x x x - x x Pysam/0.17.0-GCC-11.2.0-Python-2.7.18 x x x x x x Pysam/0.17.0-GCC-11.2.0 x x x - x x Pysam/0.16.0.1-iccifort-2020.4.304 - x x x x x Pysam/0.16.0.1-iccifort-2020.1.217 - x x - x x Pysam/0.16.0.1-GCC-10.3.0 x x x x x x Pysam/0.16.0.1-GCC-10.2.0-Python-2.7.18 - x x x x x Pysam/0.16.0.1-GCC-10.2.0 x x x x x x Pysam/0.16.0.1-GCC-9.3.0 - x x - x x Pysam/0.16.0.1-GCC-8.3.0 - x x - x x Pysam/0.15.3-iccifort-2019.5.281 - x x - x x Pysam/0.15.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Python-bundle-PyPI/", "title": "Python-bundle-PyPI", "text": ""}, {"location": "available_software/detail/Python-bundle-PyPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Python-bundle-PyPI, load one of these modules using a module load command like:

          module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Python/", "title": "Python", "text": ""}, {"location": "available_software/detail/Python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Python, load one of these modules using a module load command like:

          module load Python/3.11.5-GCCcore-13.2.0\n
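
          A minimal sketch of verifying which interpreter you get after loading a Python module (the version above is just an example):

          module load Python/3.11.5-GCCcore-13.2.0\nwhich python\npython --version\n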

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Python/3.11.5-GCCcore-13.2.0 x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x Python/3.10.4-GCCcore-11.3.0-bare x x x x x x Python/3.10.4-GCCcore-11.3.0 x x x x x x Python/3.9.6-GCCcore-11.2.0-bare x x x x x x Python/3.9.6-GCCcore-11.2.0 x x x x x x Python/3.9.5-GCCcore-10.3.0-bare x x x x x x Python/3.9.5-GCCcore-10.3.0 x x x x x x Python/3.8.6-GCCcore-10.2.0 x x x x x x Python/3.8.2-GCCcore-9.3.0 x x x x x x Python/3.7.4-GCCcore-8.3.0 x x x x x x Python/3.7.2-GCCcore-8.2.0 - x - - - - Python/2.7.18-GCCcore-12.3.0 x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.3.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0 x x x x x x Python/2.7.18-GCCcore-10.3.0-bare x x x x x x Python/2.7.18-GCCcore-10.2.0 x x x x x x Python/2.7.18-GCCcore-9.3.0 x x x x x x Python/2.7.16-GCCcore-8.3.0 x x x - x x Python/2.7.15-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/QCA/", "title": "QCA", "text": ""}, {"location": "available_software/detail/QCA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QCA, load one of these modules using a module load command like:

          module load QCA/2.3.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QCA/2.3.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QCxMS/", "title": "QCxMS", "text": ""}, {"location": "available_software/detail/QCxMS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QCxMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QCxMS, load one of these modules using a module load command like:

          module load QCxMS/5.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QCxMS/5.0.3 x x x x x x"}, {"location": "available_software/detail/QD/", "title": "QD", "text": ""}, {"location": "available_software/detail/QD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QD, load one of these modules using a module load command like:

          module load QD/2.3.17-NVHPC-21.2-20160110\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QD/2.3.17-NVHPC-21.2-20160110 x - x - x -"}, {"location": "available_software/detail/QGIS/", "title": "QGIS", "text": ""}, {"location": "available_software/detail/QGIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QGIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QGIS, load one of these modules using a module load command like:

          module load QGIS/3.28.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QGIS/3.28.1-foss-2021b x x x x x x"}, {"location": "available_software/detail/QIIME2/", "title": "QIIME2", "text": ""}, {"location": "available_software/detail/QIIME2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QIIME2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QIIME2, load one of these modules using a module load command like:

          module load QIIME2/2023.5.1-foss-2022a\n
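
          A minimal post-load sketch (assuming the qiime command-line tool is placed on the PATH once the module is loaded; the version above is an example):

          module load QIIME2/2023.5.1-foss-2022a\nqiime info\n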

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QIIME2/2023.5.1-foss-2022a x x x x x x QIIME2/2022.11 x x x x x x QIIME2/2021.8 - - - - - x QIIME2/2020.11 - x x - x x QIIME2/2020.8 - x x - x x QIIME2/2019.7 - - - - - x"}, {"location": "available_software/detail/QScintilla/", "title": "QScintilla", "text": ""}, {"location": "available_software/detail/QScintilla/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QScintilla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QScintilla, load one of these modules using a module load command like:

          module load QScintilla/2.11.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QScintilla/2.11.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QUAST/", "title": "QUAST", "text": ""}, {"location": "available_software/detail/QUAST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QUAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QUAST, load one of these modules using a module load command like:

          module load QUAST/5.2.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QUAST/5.2.0-foss-2022a x x x x x x QUAST/5.0.2-foss-2020b-Python-2.7.18 - x x x x x QUAST/5.0.2-foss-2020b - x x x x x QUAST/5.0.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qhull/", "title": "Qhull", "text": ""}, {"location": "available_software/detail/Qhull/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qhull installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qhull, load one of these modules using a module load command like:

          module load Qhull/2020.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qhull/2020.2-GCCcore-12.3.0 x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x Qhull/2020.2-GCCcore-11.3.0 x x x x x x Qhull/2020.2-GCCcore-11.2.0 x x x x x x Qhull/2020.2-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/Qt5/", "title": "Qt5", "text": ""}, {"location": "available_software/detail/Qt5/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qt5, load one of these modules using a module load command like:

          module load Qt5/5.15.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qt5/5.15.10-GCCcore-12.3.0 x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x Qt5/5.15.5-GCCcore-11.3.0 x x x x x x Qt5/5.15.2-GCCcore-11.2.0 x x x x x x Qt5/5.15.2-GCCcore-10.3.0 x x x x x x Qt5/5.14.2-GCCcore-10.2.0 x x x x x x Qt5/5.14.1-GCCcore-9.3.0 - x x - x x Qt5/5.13.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Qt5Webkit/", "title": "Qt5Webkit", "text": ""}, {"location": "available_software/detail/Qt5Webkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qt5Webkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qt5Webkit, load one of these modules using a module load command like:

          module load Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtKeychain/", "title": "QtKeychain", "text": ""}, {"location": "available_software/detail/QtKeychain/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QtKeychain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QtKeychain, load one of these modules using a module load command like:

          module load QtKeychain/0.13.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QtKeychain/0.13.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtPy/", "title": "QtPy", "text": ""}, {"location": "available_software/detail/QtPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QtPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QtPy, load one of these modules using a module load command like:

          module load QtPy/2.3.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QtPy/2.3.0-GCCcore-11.3.0 x x x x x x QtPy/2.2.1-GCCcore-11.2.0 x x x - x x QtPy/1.9.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Qtconsole/", "title": "Qtconsole", "text": ""}, {"location": "available_software/detail/Qtconsole/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qtconsole installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qtconsole, load one of these modules using a module load command like:

          module load Qtconsole/5.4.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qtconsole/5.4.0-GCCcore-11.3.0 x x x x x x Qtconsole/5.3.2-GCCcore-11.2.0 x x x - x x Qtconsole/5.0.2-foss-2020b - x - - - - Qtconsole/5.0.2-GCCcore-10.2.0 - - x x x x"}, {"location": "available_software/detail/QuPath/", "title": "QuPath", "text": ""}, {"location": "available_software/detail/QuPath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QuPath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QuPath, load one of these modules using a module load command like:

          module load QuPath/0.5.0-GCCcore-12.3.0-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QuPath/0.5.0-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/Qualimap/", "title": "Qualimap", "text": ""}, {"location": "available_software/detail/Qualimap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qualimap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qualimap, load one of these modules using a module load command like:

          module load Qualimap/2.2.1-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qualimap/2.2.1-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/QuantumESPRESSO/", "title": "QuantumESPRESSO", "text": ""}, {"location": "available_software/detail/QuantumESPRESSO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QuantumESPRESSO, load one of these modules using a module load command like:

          module load QuantumESPRESSO/7.0-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QuantumESPRESSO/7.0-intel-2021b x x x - x x QuantumESPRESSO/6.5-intel-2019b - x x - x x"}, {"location": "available_software/detail/QuickFF/", "title": "QuickFF", "text": ""}, {"location": "available_software/detail/QuickFF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QuickFF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QuickFF, load one of these modules using a module load command like:

          module load QuickFF/2.2.7-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QuickFF/2.2.7-intel-2020a-Python-3.8.2 x x x x x x QuickFF/2.2.4-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qwt/", "title": "Qwt", "text": ""}, {"location": "available_software/detail/Qwt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qwt, load one of these modules using a module load command like:

          module load Qwt/6.2.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qwt/6.2.0-GCCcore-11.2.0 x x x x x x Qwt/6.2.0-GCCcore-10.3.0 - x x - x x Qwt/6.1.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/R-INLA/", "title": "R-INLA", "text": ""}, {"location": "available_software/detail/R-INLA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R-INLA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R-INLA, load one of these modules using a module load command like:

          module load R-INLA/24.01.18-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R-INLA/24.01.18-foss-2023a x x x x x x R-INLA/21.05.02-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/R-bundle-Bioconductor/", "title": "R-bundle-Bioconductor", "text": ""}, {"location": "available_software/detail/R-bundle-Bioconductor/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R-bundle-Bioconductor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

          module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n
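
          A minimal sketch of how you might list the R libraries that become visible after loading the bundle (assuming the bundle pulls in its matching R module as a dependency, which is the usual behaviour):

          module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\nRscript -e 'rownames(installed.packages())'\n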

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x R-bundle-Bioconductor/3.15-foss-2022a-R-4.2.1 x x x x x x R-bundle-Bioconductor/3.15-foss-2021b-R-4.2.0 x x x x x x R-bundle-Bioconductor/3.14-foss-2021b-R-4.1.2 x x x x x x R-bundle-Bioconductor/3.13-foss-2021a-R-4.1.0 - x x - x x R-bundle-Bioconductor/3.12-foss-2020b-R-4.0.3 x x x x x x R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0 - x x - x x R-bundle-Bioconductor/3.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/R-bundle-CRAN/", "title": "R-bundle-CRAN", "text": ""}, {"location": "available_software/detail/R-bundle-CRAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R-bundle-CRAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R-bundle-CRAN, load one of these modules using a module load command like:

          module load R-bundle-CRAN/2023.12-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R-bundle-CRAN/2023.12-foss-2023a x x x x x x"}, {"location": "available_software/detail/R/", "title": "R", "text": ""}, {"location": "available_software/detail/R/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R, load one of these modules using a module load command like:

          module load R/4.3.2-gfbf-2023a\n
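
          As a quick check (a minimal sketch; the version above is an example), confirm which R you get after loading the module:

          module load R/4.3.2-gfbf-2023a\nRscript -e 'R.version.string'\n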

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R/4.3.2-gfbf-2023a x x x x x x R/4.2.2-foss-2022b x x x x x x R/4.2.1-foss-2022a x x x x x x R/4.2.0-foss-2021b x x x x x x R/4.1.2-foss-2021b x x x x x x R/4.1.0-foss-2021a x x x x x x R/4.0.5-fosscuda-2020b - - - - x - R/4.0.5-foss-2020b - x x x x x R/4.0.4-fosscuda-2020b - - - - x - R/4.0.4-foss-2020b - x x x x x R/4.0.3-fosscuda-2020b - - - - x - R/4.0.3-foss-2020b x x x x x x R/4.0.0-foss-2020a - x x - x x R/3.6.3-foss-2020a - - x - x x R/3.6.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/R2jags/", "title": "R2jags", "text": ""}, {"location": "available_software/detail/R2jags/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R2jags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R2jags, load one of these modules using a module load command like:

          module load R2jags/0.7-1-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R2jags/0.7-1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/RASPA2/", "title": "RASPA2", "text": ""}, {"location": "available_software/detail/RASPA2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RASPA2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RASPA2, load one of these modules using a module load command like:

          module load RASPA2/2.0.41-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RASPA2/2.0.41-foss-2020b - x x x x x"}, {"location": "available_software/detail/RAxML-NG/", "title": "RAxML-NG", "text": ""}, {"location": "available_software/detail/RAxML-NG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RAxML-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RAxML-NG, load one of these modules using a module load command like:

          module load RAxML-NG/1.2.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RAxML-NG/1.2.0-GCC-12.3.0 x x x x x x RAxML-NG/1.0.3-GCC-10.2.0 - x x - x - RAxML-NG/0.9.0-gompi-2019b - x x - x x RAxML-NG/0.9.0-GCC-8.3.0 - - x - x -"}, {"location": "available_software/detail/RAxML/", "title": "RAxML", "text": ""}, {"location": "available_software/detail/RAxML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RAxML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RAxML, load one of these modules using a module load command like:

          module load RAxML/8.2.12-iimpi-2021b-hybrid-avx2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RAxML/8.2.12-iimpi-2021b-hybrid-avx2 x x x - x x RAxML/8.2.12-iimpi-2019b-hybrid-avx2 - x x - x x"}, {"location": "available_software/detail/RDFlib/", "title": "RDFlib", "text": ""}, {"location": "available_software/detail/RDFlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RDFlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RDFlib, load one of these modules using a module load command like:

          module load RDFlib/6.2.0-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RDFlib/6.2.0-GCCcore-10.3.0 x x x - x x RDFlib/5.0.0-GCCcore-10.2.0 - x x - x x RDFlib/4.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RDKit/", "title": "RDKit", "text": ""}, {"location": "available_software/detail/RDKit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RDKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RDKit, load one of these modules using a module load command like:

          module load RDKit/2022.09.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RDKit/2022.09.4-foss-2022a x x x x x x RDKit/2022.03.5-foss-2021b x x x - x x RDKit/2020.09.3-foss-2019b-Python-3.7.4 - x x - x x RDKit/2020.03.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/RDP-Classifier/", "title": "RDP-Classifier", "text": ""}, {"location": "available_software/detail/RDP-Classifier/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RDP-Classifier installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RDP-Classifier, load one of these modules using a module load command like:

          module load RDP-Classifier/2.13-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RDP-Classifier/2.13-Java-11 x x x - x x RDP-Classifier/2.12-Java-1.8 - - - - - x"}, {"location": "available_software/detail/RE2/", "title": "RE2", "text": ""}, {"location": "available_software/detail/RE2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RE2, load one of these modules using a module load command like:

          module load RE2/2023-08-01-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RE2/2023-08-01-GCCcore-12.3.0 x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x RE2/2022-06-01-GCCcore-11.3.0 x x x x x x RE2/2022-02-01-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/RLCard/", "title": "RLCard", "text": ""}, {"location": "available_software/detail/RLCard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RLCard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RLCard, load one of these modules using a module load command like:

          module load RLCard/1.0.9-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RLCard/1.0.9-foss-2022a x x x - x x"}, {"location": "available_software/detail/RMBlast/", "title": "RMBlast", "text": ""}, {"location": "available_software/detail/RMBlast/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RMBlast installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RMBlast, load one of these modules using a module load command like:

          module load RMBlast/2.11.0-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RMBlast/2.11.0-gompi-2020b x x x x x x"}, {"location": "available_software/detail/RNA-Bloom/", "title": "RNA-Bloom", "text": ""}, {"location": "available_software/detail/RNA-Bloom/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RNA-Bloom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RNA-Bloom, load one of these modules using a module load command like:

          module load RNA-Bloom/2.0.1-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RNA-Bloom/2.0.1-GCC-12.3.0 x x x x x x RNA-Bloom/1.2.3-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ROOT/", "title": "ROOT", "text": ""}, {"location": "available_software/detail/ROOT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ROOT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ROOT, load one of these modules using a module load command like:

          module load ROOT/6.26.06-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ROOT/6.26.06-foss-2022a x x x x x x ROOT/6.24.06-foss-2021b x x x x x x ROOT/6.20.04-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/RSEM/", "title": "RSEM", "text": ""}, {"location": "available_software/detail/RSEM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RSEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RSEM, load one of these modules using a module load command like:

          module load RSEM/1.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RSEM/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/RSeQC/", "title": "RSeQC", "text": ""}, {"location": "available_software/detail/RSeQC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RSeQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RSeQC, load one of these modules using a module load command like:

          module load RSeQC/4.0.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RSeQC/4.0.0-foss-2021b x x x - x x RSeQC/4.0.0-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/RStudio-Server/", "title": "RStudio-Server", "text": ""}, {"location": "available_software/detail/RStudio-Server/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RStudio-Server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RStudio-Server, load one of these modules using a module load command like:

          module load RStudio-Server/2022.02.0-443-rhel-x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RStudio-Server/2022.02.0-443-rhel-x86_64 x x x x x - RStudio-Server/1.3.959-foss-2020a-Java-11-R-4.0.0 - - - - - x"}, {"location": "available_software/detail/RTG-Tools/", "title": "RTG-Tools", "text": ""}, {"location": "available_software/detail/RTG-Tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RTG-Tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RTG-Tools, load one of these modules using a module load command like:

          module load RTG-Tools/3.12.1-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RTG-Tools/3.12.1-Java-11 x x x x x x"}, {"location": "available_software/detail/Racon/", "title": "Racon", "text": ""}, {"location": "available_software/detail/Racon/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Racon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Racon, load one of these modules using a module load command like:

          module load Racon/1.5.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Racon/1.5.0-GCCcore-12.3.0 x x x x x x Racon/1.5.0-GCCcore-11.3.0 x x x x x x Racon/1.5.0-GCCcore-11.2.0 x x x - x x Racon/1.4.21-GCCcore-10.3.0 x x x - x x Racon/1.4.21-GCCcore-10.2.0 - x x x x x Racon/1.4.13-GCCcore-9.3.0 - x x - x x Racon/1.4.13-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RagTag/", "title": "RagTag", "text": ""}, {"location": "available_software/detail/RagTag/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RagTag installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RagTag, load one of these modules using a module load command like:

          module load RagTag/2.0.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RagTag/2.0.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/Ragout/", "title": "Ragout", "text": ""}, {"location": "available_software/detail/Ragout/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ragout installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Ragout, load one of these modules using a module load command like:

          module load Ragout/2.3-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ragout/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/RapidJSON/", "title": "RapidJSON", "text": ""}, {"location": "available_software/detail/RapidJSON/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RapidJSON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RapidJSON, load one of these modules using a module load command like:

          module load RapidJSON/1.1.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.3.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-9.3.0 x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Raven/", "title": "Raven", "text": ""}, {"location": "available_software/detail/Raven/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Raven installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Raven, load one of these modules using a module load command like:

          module load Raven/1.8.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Raven/1.8.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/Ray-project/", "title": "Ray-project", "text": ""}, {"location": "available_software/detail/Ray-project/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ray-project installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Ray-project, load one of these modules using a module load command like:

          module load Ray-project/1.13.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ray-project/1.13.0-foss-2021b x x x - x x Ray-project/1.13.0-foss-2021a x x x - x x Ray-project/0.8.4-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Ray/", "title": "Ray", "text": ""}, {"location": "available_software/detail/Ray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Ray, load one of these modules using a module load command like:

          module load Ray/0.8.4-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ray/0.8.4-foss-2019b-Python-3.7.4 - x - - - -"}, {"location": "available_software/detail/ReFrame/", "title": "ReFrame", "text": ""}, {"location": "available_software/detail/ReFrame/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ReFrame installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ReFrame, load one of these modules using a module load command like:

          module load ReFrame/4.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ReFrame/4.2.0 x x x x x x ReFrame/3.11.2 - x x x x x ReFrame/3.11.1 - x x - x x ReFrame/3.9.1 - x x - x x ReFrame/3.5.2 - x x - x x"}, {"location": "available_software/detail/Redis/", "title": "Redis", "text": ""}, {"location": "available_software/detail/Redis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Redis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Redis, load one of these modules using a module load command like:

          module load Redis/7.0.8-GCC-11.3.0\n
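
          A minimal post-load check (sketch only; the version above is an example) to confirm the Redis binaries are on the PATH:

          module load Redis/7.0.8-GCC-11.3.0\nredis-server --version\nredis-cli --version\n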

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Redis/7.0.8-GCC-11.3.0 x x x x x x Redis/6.2.6-GCC-11.2.0 x x x - x x Redis/6.2.6-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/RegTools/", "title": "RegTools", "text": ""}, {"location": "available_software/detail/RegTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RegTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RegTools, load one of these modules using a module load command like:

          module load RegTools/1.0.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RegTools/1.0.0-foss-2022b x x x x x x RegTools/0.5.2-foss-2021b x x x x x x RegTools/0.5.2-foss-2020b - x x x x x RegTools/0.4.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/RepeatMasker/", "title": "RepeatMasker", "text": ""}, {"location": "available_software/detail/RepeatMasker/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RepeatMasker installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RepeatMasker, load one of these modules using a module load command like:

          module load RepeatMasker/4.1.2-p1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RepeatMasker/4.1.2-p1-foss-2020b x x x x x x"}, {"location": "available_software/detail/ResistanceGA/", "title": "ResistanceGA", "text": ""}, {"location": "available_software/detail/ResistanceGA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ResistanceGA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ResistanceGA, load one of these modules using a module load command like:

          module load ResistanceGA/4.2-5-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ResistanceGA/4.2-5-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/RevBayes/", "title": "RevBayes", "text": ""}, {"location": "available_software/detail/RevBayes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RevBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RevBayes, load one of these modules using a module load command like:

          module load RevBayes/1.2.1-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RevBayes/1.2.1-gompi-2022a x x x x x x RevBayes/1.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Rgurobi/", "title": "Rgurobi", "text": ""}, {"location": "available_software/detail/Rgurobi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Rgurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Rgurobi, load one of these modules using a module load command like:

          module load Rgurobi/9.5.0-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Rgurobi/9.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/RheoTool/", "title": "RheoTool", "text": ""}, {"location": "available_software/detail/RheoTool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RheoTool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RheoTool, load one of these modules using a module load command like:

          module load RheoTool/5.0-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RheoTool/5.0-foss-2019b x x x - x x"}, {"location": "available_software/detail/Rmath/", "title": "Rmath", "text": ""}, {"location": "available_software/detail/Rmath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Rmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Rmath, load one of these modules using a module load command like:

          module load Rmath/4.3.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Rmath/4.3.2-foss-2023a x x x x x x Rmath/4.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/RnBeads/", "title": "RnBeads", "text": ""}, {"location": "available_software/detail/RnBeads/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RnBeads installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RnBeads, load one of these modules using a module load command like:

          module load RnBeads/2.6.0-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RnBeads/2.6.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/Roary/", "title": "Roary", "text": ""}, {"location": "available_software/detail/Roary/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Roary installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Roary, load one of these modules using a module load command like:

          module load Roary/3.13.0-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Roary/3.13.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/Ruby/", "title": "Ruby", "text": ""}, {"location": "available_software/detail/Ruby/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ruby installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Ruby, load one of these modules using a module load command like:

          module load Ruby/3.0.1-GCCcore-11.2.0\n
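
          As a quick check (a minimal sketch; the version above is an example), confirm the Ruby interpreter and gem tool after loading the module:

          module load Ruby/3.0.1-GCCcore-11.2.0\nruby --version\ngem --version\n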

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ruby/3.0.1-GCCcore-11.2.0 x x x x x x Ruby/3.0.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Rust/", "title": "Rust", "text": ""}, {"location": "available_software/detail/Rust/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Rust installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Rust, load one of these modules using a module load command like:

          module load Rust/1.75.0-GCCcore-12.3.0\n
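
          A minimal sketch of verifying the Rust toolchain after loading the module (the version above is an example):

          module load Rust/1.75.0-GCCcore-12.3.0\nrustc --version\ncargo --version\n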

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Rust/1.75.0-GCCcore-12.3.0 x x x x x x Rust/1.75.0-GCCcore-12.2.0 x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x Rust/1.65.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-10.3.0 x x x - x x Rust/1.56.0-GCCcore-11.2.0 x x x - x x Rust/1.54.0-GCCcore-11.2.0 x x x x x x Rust/1.52.1-GCCcore-10.3.0 x x x x x x Rust/1.52.1-GCCcore-10.2.0 - - x - x - Rust/1.42.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SAMtools/", "title": "SAMtools", "text": ""}, {"location": "available_software/detail/SAMtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SAMtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SAMtools, load one of these modules using a module load command like:

          module load SAMtools/1.18-GCC-12.3.0\n
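
          As a quick check (a minimal sketch; pick a version available on your cluster), confirm the samtools binary after loading the module:

          module load SAMtools/1.18-GCC-12.3.0\nsamtools --version | head -n 1\n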

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SAMtools/1.18-GCC-12.3.0 x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x SAMtools/1.16.1-GCC-11.3.0 x x x x x x SAMtools/1.15-GCC-11.2.0 x x x - x x SAMtools/1.14-GCC-11.2.0 x x x x x x SAMtools/1.13-GCC-11.3.0 x x x x x x SAMtools/1.13-GCC-10.3.0 x x x - x x SAMtools/1.11-GCC-10.2.0 x x x x x x SAMtools/1.10-iccifort-2019.5.281 - x x - x x SAMtools/1.10-GCC-9.3.0 - x x - x x SAMtools/1.10-GCC-8.3.0 - x x - x x SAMtools/0.1.20-intel-2019b - x x - x x SAMtools/0.1.20-GCC-12.3.0 x x x x x x SAMtools/0.1.20-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SBCL/", "title": "SBCL", "text": ""}, {"location": "available_software/detail/SBCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SBCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SBCL, load one of these modules using a module load command like:

          module load SBCL/2.2.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SBCL/2.2.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/SCENIC/", "title": "SCENIC", "text": ""}, {"location": "available_software/detail/SCENIC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SCENIC, load one of these modules using a module load command like:

          module load SCENIC/1.2.4-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCENIC/1.2.4-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/SCGid/", "title": "SCGid", "text": ""}, {"location": "available_software/detail/SCGid/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SCGid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SCGid, load one of these modules using a module load command like:

          module load SCGid/0.9b0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCGid/0.9b0-foss-2021b x x x - x x"}, {"location": "available_software/detail/SCOTCH/", "title": "SCOTCH", "text": ""}, {"location": "available_software/detail/SCOTCH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SCOTCH, load one of these modules using a module load command like:

          module load SCOTCH/7.0.3-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCOTCH/7.0.3-gompi-2023a x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x SCOTCH/7.0.1-gompi-2022a x x x x x x SCOTCH/6.1.2-iimpi-2021b x x x x x x SCOTCH/6.1.2-gompi-2021b x x x x x x SCOTCH/6.1.0-iimpi-2021a - x x - x x SCOTCH/6.1.0-iimpi-2020b - x - - - - SCOTCH/6.1.0-gompi-2021a x x x x x x SCOTCH/6.1.0-gompi-2020b x x x x x x SCOTCH/6.0.9-iimpi-2020a - x x - x x SCOTCH/6.0.9-iimpi-2019b - x x - x x SCOTCH/6.0.9-gompi-2020a - x x - x x SCOTCH/6.0.9-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SCons/", "title": "SCons", "text": ""}, {"location": "available_software/detail/SCons/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SCons installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SCons, load one of these modules using a module load command like:

          module load SCons/4.5.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCons/4.5.2-GCCcore-12.3.0 x x x x x x SCons/4.4.0-GCCcore-11.3.0 - - x - x - SCons/4.2.0-GCCcore-11.2.0 x x x - x x SCons/4.1.0.post1-GCCcore-10.3.0 - x x - x x SCons/4.1.0.post1-GCCcore-10.2.0 - x x - x x SCons/3.1.2-GCCcore-9.3.0 - x x - x x SCons/3.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SCopeLoomR/", "title": "SCopeLoomR", "text": ""}, {"location": "available_software/detail/SCopeLoomR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SCopeLoomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SCopeLoomR, load one of these modules using a module load command like:

          module load SCopeLoomR/0.13.0-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCopeLoomR/0.13.0-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/SDL2/", "title": "SDL2", "text": ""}, {"location": "available_software/detail/SDL2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SDL2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SDL2, load one of these modules using a module load command like:

          module load SDL2/2.28.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SDL2/2.28.2-GCCcore-12.3.0 x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x SDL2/2.0.20-GCCcore-11.2.0 x x x x x x SDL2/2.0.14-GCCcore-10.3.0 - x x - x x SDL2/2.0.14-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SDSL/", "title": "SDSL", "text": ""}, {"location": "available_software/detail/SDSL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SDSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SDSL, load one of these modules using a module load command like:

          module load SDSL/2.1.1-20191211-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SDSL/2.1.1-20191211-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SEACells/", "title": "SEACells", "text": ""}, {"location": "available_software/detail/SEACells/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SEACells installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SEACells, load one of these modules using a module load command like:

          module load SEACells/20230731-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SEACells/20230731-foss-2021a x x x x x x"}, {"location": "available_software/detail/SECAPR/", "title": "SECAPR", "text": ""}, {"location": "available_software/detail/SECAPR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SECAPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SECAPR, load one of these modules using a module load command like:

          module load SECAPR/1.1.15-foss-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SECAPR/1.1.15-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/SELFIES/", "title": "SELFIES", "text": ""}, {"location": "available_software/detail/SELFIES/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SELFIES installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SELFIES, load one of these modules using a module load command like:

          module load SELFIES/2.1.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SELFIES/2.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/SEPP/", "title": "SEPP", "text": ""}, {"location": "available_software/detail/SEPP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SEPP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SEPP, load one of these modules using a module load command like:

          module load SEPP/4.5.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SEPP/4.5.1-foss-2022a x x x x x x SEPP/4.5.1-foss-2021b x x x - x x SEPP/4.4.0-foss-2020b - x x x x x SEPP/4.3.10-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SHAP/", "title": "SHAP", "text": ""}, {"location": "available_software/detail/SHAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SHAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SHAP, load one of these modules using a module load command like:

          module load SHAP/0.42.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SHAP/0.42.1-foss-2019b-Python-3.7.4 x x x - x x SHAP/0.41.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SISSO%2B%2B/", "title": "SISSO++", "text": ""}, {"location": "available_software/detail/SISSO%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SISSO++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SISSO++, load one of these modules using a module load command like:

          module load SISSO++/1.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SISSO++/1.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/SISSO/", "title": "SISSO", "text": ""}, {"location": "available_software/detail/SISSO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SISSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SISSO, load one of these modules using a module load command like:

          module load SISSO/3.1-20220324-iimpi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SISSO/3.1-20220324-iimpi-2021b x x x - x x SISSO/3.0.2-iimpi-2021b x x x - x x"}, {"location": "available_software/detail/SKESA/", "title": "SKESA", "text": ""}, {"location": "available_software/detail/SKESA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SKESA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SKESA, load one of these modules using a module load command like:

          module load SKESA/2.4.0-gompi-2021b_saute.1.3.0_1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SKESA/2.4.0-gompi-2021b_saute.1.3.0_1 x x x - x x"}, {"location": "available_software/detail/SLATEC/", "title": "SLATEC", "text": ""}, {"location": "available_software/detail/SLATEC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SLATEC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SLATEC, load one of these modules using a module load command like:

          module load SLATEC/4.1-GCC-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SLATEC/4.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SLEPc/", "title": "SLEPc", "text": ""}, {"location": "available_software/detail/SLEPc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SLEPc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SLEPc, load one of these modules using a module load command like:

          module load SLEPc/3.18.2-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SLEPc/3.18.2-intel-2021b x x x x x x SLEPc/3.17.2-foss-2022a x x x x x x SLEPc/3.15.1-foss-2021a - x x - x x SLEPc/3.12.2-intel-2019b-Python-3.7.4 - - x - x - SLEPc/3.12.2-intel-2019b-Python-2.7.16 - x x - x x SLEPc/3.12.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SLiM/", "title": "SLiM", "text": ""}, {"location": "available_software/detail/SLiM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SLiM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SLiM, load one of these modules using a module load command like:

          module load SLiM/3.4-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SLiM/3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/SMAP/", "title": "SMAP", "text": ""}, {"location": "available_software/detail/SMAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SMAP, load one of these modules using a module load command like:

          module load SMAP/4.6.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SMAP/4.6.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/SMC%2B%2B/", "title": "SMC++", "text": ""}, {"location": "available_software/detail/SMC%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SMC++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SMC++, load one of these modules using a module load command like:

          module load SMC++/1.15.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SMC++/1.15.4-foss-2022a x x x - x x"}, {"location": "available_software/detail/SMV/", "title": "SMV", "text": ""}, {"location": "available_software/detail/SMV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SMV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SMV, load one of these modules using a module load command like:

          module load SMV/6.7.17-iccifort-2020.4.304\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SMV/6.7.17-iccifort-2020.4.304 - x x - x x"}, {"location": "available_software/detail/SNAP-ESA-python/", "title": "SNAP-ESA-python", "text": ""}, {"location": "available_software/detail/SNAP-ESA-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SNAP-ESA-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SNAP-ESA-python, load one of these modules using a module load command like:

          module load SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18 x x x x x - SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-1.8-Python-2.7.18 x x x x - x"}, {"location": "available_software/detail/SNAP-ESA/", "title": "SNAP-ESA", "text": ""}, {"location": "available_software/detail/SNAP-ESA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SNAP-ESA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SNAP-ESA, load one of these modules using a module load command like:

          module load SNAP-ESA/9.0.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SNAP-ESA/9.0.0-Java-11 x x x x x x SNAP-ESA/9.0.0-Java-1.8 x x x x - x"}, {"location": "available_software/detail/SNAP/", "title": "SNAP", "text": ""}, {"location": "available_software/detail/SNAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SNAP, load one of these modules using a module load command like:

          module load SNAP/2.0.1-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SNAP/2.0.1-GCC-12.2.0 x x x x x x SNAP/2.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/SOAPdenovo-Trans/", "title": "SOAPdenovo-Trans", "text": ""}, {"location": "available_software/detail/SOAPdenovo-Trans/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SOAPdenovo-Trans installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SOAPdenovo-Trans, load one of these modules using a module load command like:

          module load SOAPdenovo-Trans/1.0.5-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SOAPdenovo-Trans/1.0.5-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/SPAdes/", "title": "SPAdes", "text": ""}, {"location": "available_software/detail/SPAdes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SPAdes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SPAdes, load one of these modules using a module load command like:

          module load SPAdes/3.15.5-GCC-11.3.0\n
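
          Once the module is loaded, a basic paired-end assembly can be launched as follows (a minimal sketch; the read files and the thread/memory numbers are placeholders to match your data and job request):

          spades.py -1 reads_R1.fastq.gz -2 reads_R2.fastq.gz -t 8 -m 32 -o spades_output   # 8 threads, 32 GB memory limit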

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SPAdes/3.15.5-GCC-11.3.0 x x x x x x SPAdes/3.15.4-GCC-12.3.0 x x x x x x SPAdes/3.15.4-GCC-12.2.0 x x x x x x SPAdes/3.15.3-GCC-11.2.0 x x x - x x SPAdes/3.15.2-GCC-10.2.0-Python-2.7.18 - x x x x x SPAdes/3.15.2-GCC-10.2.0 - x x x x x SPAdes/3.14.1-GCC-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SPM/", "title": "SPM", "text": ""}, {"location": "available_software/detail/SPM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SPM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SPM, load one of these modules using a module load command like:

          module load SPM/12.5_r7771-MATLAB-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SPM/12.5_r7771-MATLAB-2021b x x x - x x"}, {"location": "available_software/detail/SPOTPY/", "title": "SPOTPY", "text": ""}, {"location": "available_software/detail/SPOTPY/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SPOTPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SPOTPY, load one of these modules using a module load command like:

          module load SPOTPY/1.5.14-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SPOTPY/1.5.14-intel-2021b x x x - x x"}, {"location": "available_software/detail/SQLite/", "title": "SQLite", "text": ""}, {"location": "available_software/detail/SQLite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SQLite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SQLite, load one of these modules using a module load command like:

          module load SQLite/3.43.1-GCCcore-13.2.0\n
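
          After loading the module, the sqlite3 command-line shell is available; for example (a minimal sketch; test.db is a placeholder database file):

          sqlite3 test.db "CREATE TABLE jobs(id INTEGER PRIMARY KEY, name TEXT);"   # create a table
          sqlite3 test.db "INSERT INTO jobs(name) VALUES ('demo');"                 # insert a row
          sqlite3 test.db "SELECT * FROM jobs;"                                     # query it back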

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SQLite/3.43.1-GCCcore-13.2.0 x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x SQLite/3.38.3-GCCcore-11.3.0 x x x x x x SQLite/3.36-GCCcore-11.2.0 x x x x x x SQLite/3.35.4-GCCcore-10.3.0 x x x x x x SQLite/3.33.0-GCCcore-10.2.0 x x x x x x SQLite/3.31.1-GCCcore-9.3.0 x x x x x x SQLite/3.29.0-GCCcore-8.3.0 x x x x x x SQLite/3.27.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/SRA-Toolkit/", "title": "SRA-Toolkit", "text": ""}, {"location": "available_software/detail/SRA-Toolkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SRA-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SRA-Toolkit, load one of these modules using a module load command like:

          module load SRA-Toolkit/3.0.3-gompi-2022a\n
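
          With the module loaded, a public sequencing run can be downloaded and converted to FASTQ (a minimal sketch; replace SRRXXXXXXX with an actual accession):

          prefetch SRRXXXXXXX                                   # download the .sra archive
          fasterq-dump --threads 4 --outdir fastq SRRXXXXXXX    # convert it to FASTQ files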

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SRA-Toolkit/3.0.3-gompi-2022a x x x x x x SRA-Toolkit/3.0.0-gompi-2021b x x x x x x SRA-Toolkit/3.0.0-centos_linux64 x x x - x x SRA-Toolkit/2.10.9-gompi-2020b - x x - x x SRA-Toolkit/2.10.8-gompi-2020a - x x - x x SRA-Toolkit/2.10.4-gompi-2019b - x x - x x SRA-Toolkit/2.9.6-1-centos_linux64 - x x - x x"}, {"location": "available_software/detail/SRPRISM/", "title": "SRPRISM", "text": ""}, {"location": "available_software/detail/SRPRISM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SRPRISM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SRPRISM, load one of these modules using a module load command like:

          module load SRPRISM/3.1.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SRPRISM/3.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SRST2/", "title": "SRST2", "text": ""}, {"location": "available_software/detail/SRST2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SRST2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SRST2, load one of these modules using a module load command like:

          module load SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SSPACE_Basic/", "title": "SSPACE_Basic", "text": ""}, {"location": "available_software/detail/SSPACE_Basic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SSPACE_Basic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SSPACE_Basic, load one of these modules using a module load command like:

          module load SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18 - x x - x -"}, {"location": "available_software/detail/SSW/", "title": "SSW", "text": ""}, {"location": "available_software/detail/SSW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SSW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SSW, load one of these modules using a module load command like:

          module load SSW/1.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SSW/1.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/STACEY/", "title": "STACEY", "text": ""}, {"location": "available_software/detail/STACEY/#available-modules", "title": "Available modules", "text": "

          The overview below shows which STACEY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using STACEY, load one of these modules using a module load command like:

          module load STACEY/1.2.5-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STACEY/1.2.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/STAR/", "title": "STAR", "text": ""}, {"location": "available_software/detail/STAR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which STAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using STAR, load one of these modules using a module load command like:

          module load STAR/2.7.11a-GCC-12.3.0\n
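
          With the module loaded, a genome index is typically built before mapping reads (a minimal sketch; genome.fa and annotation.gtf are placeholder reference files):

          STAR --runMode genomeGenerate --runThreadN 8 \
               --genomeDir star_index \
               --genomeFastaFiles genome.fa \
               --sjdbGTFfile annotation.gtf --sjdbOverhang 100   # typically read length minus 1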

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STAR/2.7.11a-GCC-12.3.0 x x x x x x STAR/2.7.10b-GCC-11.3.0 x x x x x x STAR/2.7.9a-GCC-11.2.0 x x x x x x STAR/2.7.6a-GCC-10.2.0 - x x x x x STAR/2.7.4a-GCC-9.3.0 - x x - x - STAR/2.7.3a-GCC-8.3.0 - x x - x - STAR/2.7.2b-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/STREAM/", "title": "STREAM", "text": ""}, {"location": "available_software/detail/STREAM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which STREAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using STREAM, load one of these modules using a module load command like:

          module load STREAM/5.10-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STREAM/5.10-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/STRique/", "title": "STRique", "text": ""}, {"location": "available_software/detail/STRique/#available-modules", "title": "Available modules", "text": "

          The overview below shows which STRique installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using STRique, load one of these modules using a module load command like:

          module load STRique/0.4.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STRique/0.4.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/SUNDIALS/", "title": "SUNDIALS", "text": ""}, {"location": "available_software/detail/SUNDIALS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SUNDIALS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SUNDIALS, load one of these modules using a module load command like:

          module load SUNDIALS/6.6.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SUNDIALS/6.6.0-foss-2023a x x x x x x SUNDIALS/6.2.0-intel-2021b x x x - x x SUNDIALS/5.7.0-intel-2020b - x x x x x SUNDIALS/5.7.0-fosscuda-2020b - - - - x - SUNDIALS/5.7.0-foss-2020b - x x x x x SUNDIALS/5.1.0-intel-2019b - x x - x x SUNDIALS/5.1.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/SUPPA/", "title": "SUPPA", "text": ""}, {"location": "available_software/detail/SUPPA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SUPPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SUPPA, load one of these modules using a module load command like:

          module load SUPPA/2.3-20231005-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SUPPA/2.3-20231005-foss-2022b x x x x x x"}, {"location": "available_software/detail/SVIM/", "title": "SVIM", "text": ""}, {"location": "available_software/detail/SVIM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SVIM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SVIM, load one of these modules using a module load command like:

          module load SVIM/2.0.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SVIM/2.0.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SWIG/", "title": "SWIG", "text": ""}, {"location": "available_software/detail/SWIG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SWIG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SWIG, load one of these modules using a module load command like:

          module load SWIG/4.1.1-GCCcore-12.3.0\n
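
          As an illustration of what the tool does once loaded, SWIG can generate Python wrapper code from an interface file (a minimal sketch; example.i is a hypothetical interface file):

          swig -python -o example_wrap.c example.i   # emit the C wrapper plus a Python proxy module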

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SWIG/4.1.1-GCCcore-12.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.2.0 x x x x x x SWIG/4.0.2-GCCcore-10.3.0 x x x x x x SWIG/4.0.2-GCCcore-10.2.0 x x x x x x SWIG/4.0.1-GCCcore-9.3.0 x x x x x x SWIG/4.0.1-GCCcore-8.3.0 - x x - x x SWIG/3.0.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Sabre/", "title": "Sabre", "text": ""}, {"location": "available_software/detail/Sabre/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sabre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sabre, load one of these modules using a module load command like:

          module load Sabre/2013-09-28-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sabre/2013-09-28-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/Sailfish/", "title": "Sailfish", "text": ""}, {"location": "available_software/detail/Sailfish/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sailfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sailfish, load one of these modules using a module load command like:

          module load Sailfish/0.10.1-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sailfish/0.10.1-gompi-2019b - x - - - x"}, {"location": "available_software/detail/Salmon/", "title": "Salmon", "text": ""}, {"location": "available_software/detail/Salmon/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Salmon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Salmon, load one of these modules using a module load command like:

          module load Salmon/1.9.0-GCC-11.3.0\n
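
          After loading the module, transcript quantification usually means building an index and then quantifying reads against it (a minimal sketch; transcripts.fa and the read files are placeholders):

          salmon index -t transcripts.fa -i salmon_index -k 31                                          # build the transcriptome index
          salmon quant -i salmon_index -l A -1 reads_R1.fastq.gz -2 reads_R2.fastq.gz -p 8 -o salmon_quant   # quantify paired-end reads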

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Salmon/1.9.0-GCC-11.3.0 x x x x x x Salmon/1.4.0-gompi-2020b - x x x x x Salmon/1.1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Sambamba/", "title": "Sambamba", "text": ""}, {"location": "available_software/detail/Sambamba/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sambamba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sambamba, load one of these modules using a module load command like:

          module load Sambamba/1.0.1-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sambamba/1.0.1-GCC-11.3.0 x x x x x x Sambamba/0.8.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Satsuma2/", "title": "Satsuma2", "text": ""}, {"location": "available_software/detail/Satsuma2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Satsuma2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Satsuma2, load one of these modules using a module load command like:

          module load Satsuma2/20220304-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Satsuma2/20220304-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/ScaFaCoS/", "title": "ScaFaCoS", "text": ""}, {"location": "available_software/detail/ScaFaCoS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ScaFaCoS, load one of these modules using a module load command like:

          module load ScaFaCoS/1.0.1-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ScaFaCoS/1.0.1-intel-2020a - x x - x x ScaFaCoS/1.0.1-foss-2021b x x x - x x ScaFaCoS/1.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/ScaLAPACK/", "title": "ScaLAPACK", "text": ""}, {"location": "available_software/detail/ScaLAPACK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ScaLAPACK, load one of these modules using a module load command like:

          module load ScaLAPACK/2.2.0-gompi-2023b-fb\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022a-fb x x x x x x ScaLAPACK/2.1.0-iimpi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompic-2020b x - - - x - ScaLAPACK/2.1.0-gompi-2021b-fb x x x x x x ScaLAPACK/2.1.0-gompi-2021a-fb x x x x x x ScaLAPACK/2.1.0-gompi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompi-2020b x x x x x x ScaLAPACK/2.1.0-gompi-2020a - x x - x x ScaLAPACK/2.0.2-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SciPy-bundle/", "title": "SciPy-bundle", "text": ""}, {"location": "available_software/detail/SciPy-bundle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SciPy-bundle, load one of these modules using a module load command like:

          module load SciPy-bundle/2023.11-gfbf-2023b\n
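
          A quick way to confirm that the bundled packages are on the Python path after loading the module (a minimal sketch):

          python -c "import numpy, scipy, pandas; print(numpy.__version__, scipy.__version__, pandas.__version__)"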

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SciPy-bundle/2023.11-gfbf-2023b x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x SciPy-bundle/2022.05-intel-2022a x x x x x x SciPy-bundle/2022.05-foss-2022a x x x x x x SciPy-bundle/2021.10-intel-2021b x x x x x x SciPy-bundle/2021.10-foss-2021b-Python-2.7.18 x x x x x x SciPy-bundle/2021.10-foss-2021b x x x x x x SciPy-bundle/2021.05-intel-2021a - x x - x x SciPy-bundle/2021.05-gomkl-2021a x x x x x x SciPy-bundle/2021.05-foss-2021a x x x x x x SciPy-bundle/2020.11-intelcuda-2020b - - - - x - SciPy-bundle/2020.11-intel-2020b - x x - x x SciPy-bundle/2020.11-fosscuda-2020b x - - - x - SciPy-bundle/2020.11-foss-2020b-Python-2.7.18 - x x x x x SciPy-bundle/2020.11-foss-2020b x x x x x x SciPy-bundle/2020.03-iomkl-2020a-Python-3.8.2 - x - - - - SciPy-bundle/2020.03-intel-2020a-Python-3.8.2 x x x x x x SciPy-bundle/2020.03-intel-2020a-Python-2.7.18 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-3.8.2 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-2.7.18 - - x - x x SciPy-bundle/2019.10-intel-2019b-Python-3.7.4 - x x - x x SciPy-bundle/2019.10-intel-2019b-Python-2.7.16 - x x - x x SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 x x x - x x SciPy-bundle/2019.10-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Seaborn/", "title": "Seaborn", "text": ""}, {"location": "available_software/detail/Seaborn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Seaborn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Seaborn, load one of these modules using a module load command like:

          module load Seaborn/0.13.2-gfbf-2023a\n
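
          On a compute node without a display, plots are typically rendered straight to file; a minimal sketch (assumes a Seaborn version that provides histplot, i.e. 0.11 or newer, and uses the non-interactive Agg backend):

          # headless check: render a histogram of random data to hist.png, no X display needed
          python -c "import matplotlib; matplotlib.use('Agg'); import numpy as np, seaborn as sns; sns.histplot(np.random.default_rng(0).normal(size=1000)).figure.savefig('hist.png')"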

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Seaborn/0.13.2-gfbf-2023a x x x x x x Seaborn/0.12.2-foss-2022b x x x x x x Seaborn/0.12.1-foss-2022a x x x x x x Seaborn/0.11.2-foss-2021b x x x x x x Seaborn/0.11.2-foss-2021a x x x x x x Seaborn/0.11.1-intel-2020b - x x - x x Seaborn/0.11.1-fosscuda-2020b x - - - x - Seaborn/0.11.1-foss-2020b - x x x x x Seaborn/0.10.1-intel-2020b - x x - x x Seaborn/0.10.1-intel-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.1-foss-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.0-intel-2019b-Python-3.7.4 - x x - x x Seaborn/0.10.0-foss-2019b-Python-3.7.4 - x x - x x Seaborn/0.9.1-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SemiBin/", "title": "SemiBin", "text": ""}, {"location": "available_software/detail/SemiBin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SemiBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SemiBin, load one of these modules using a module load command like:

          module load SemiBin/2.0.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SemiBin/2.0.2-foss-2022a-CUDA-11.7.0 x - x - x - SemiBin/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Sentence-Transformers/", "title": "Sentence-Transformers", "text": ""}, {"location": "available_software/detail/Sentence-Transformers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sentence-Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sentence-Transformers, load one of these modules using a module load command like:

          module load Sentence-Transformers/2.2.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sentence-Transformers/2.2.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/SentencePiece/", "title": "SentencePiece", "text": ""}, {"location": "available_software/detail/SentencePiece/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SentencePiece installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SentencePiece, load one of these modules using a module load command like:

          module load SentencePiece/0.1.99-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SentencePiece/0.1.99-GCC-12.2.0 x x x x x x SentencePiece/0.1.97-GCC-11.3.0 x x x x x x SentencePiece/0.1.96-GCC-10.3.0 x x x - x x SentencePiece/0.1.85-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/SeqAn/", "title": "SeqAn", "text": ""}, {"location": "available_software/detail/SeqAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeqAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeqAn, load one of these modules using a module load command like:

          module load SeqAn/2.4.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeqAn/2.4.0-GCCcore-11.2.0 x x x - x x SeqAn/2.4.0-GCCcore-10.2.0 - x x x x x SeqAn/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SeqKit/", "title": "SeqKit", "text": ""}, {"location": "available_software/detail/SeqKit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeqKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeqKit, load one of these modules using a module load command like:

          module load SeqKit/2.1.0\n
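
          Once loaded, SeqKit provides quick FASTA/FASTQ inspection commands (a minimal sketch; reads.fastq.gz is a placeholder input file):

          seqkit stats reads.fastq.gz           # summary statistics (number of reads, length distribution, ...)
          seqkit seq -n reads.fastq.gz | head   # print only the sequence names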

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeqKit/2.1.0 - x x - x x"}, {"location": "available_software/detail/SeqLib/", "title": "SeqLib", "text": ""}, {"location": "available_software/detail/SeqLib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeqLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeqLib, load one of these modules using a module load command like:

          module load SeqLib/1.2.0-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeqLib/1.2.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/Serf/", "title": "Serf", "text": ""}, {"location": "available_software/detail/Serf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Serf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Serf, load one of these modules using a module load command like:

          module load Serf/1.3.9-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Serf/1.3.9-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Seurat/", "title": "Seurat", "text": ""}, {"location": "available_software/detail/Seurat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Seurat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Seurat, load one of these modules using a module load command like:

          module load Seurat/4.3.0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Seurat/4.3.0-foss-2022a-R-4.2.1 x x x x x x Seurat/4.3.0-foss-2021b-R-4.1.2 x x x - x x Seurat/4.2.0-foss-2022a-R-4.2.1 x x x - x x Seurat/4.0.1-foss-2020b-R-4.0.3 - x x x x x Seurat/3.1.5-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/SeuratData/", "title": "SeuratData", "text": ""}, {"location": "available_software/detail/SeuratData/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeuratData installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeuratData, load one of these modules using a module load command like:

          module load SeuratData/20210514-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeuratData/20210514-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/SeuratDisk/", "title": "SeuratDisk", "text": ""}, {"location": "available_software/detail/SeuratDisk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeuratDisk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeuratDisk, load one of these modules using a module load command like:

          module load SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/SeuratWrappers/", "title": "SeuratWrappers", "text": ""}, {"location": "available_software/detail/SeuratWrappers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeuratWrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeuratWrappers, load one of these modules using a module load command like:

          module load SeuratWrappers/20210528-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeuratWrappers/20210528-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/Shapely/", "title": "Shapely", "text": ""}, {"location": "available_software/detail/Shapely/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Shapely installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Shapely, load one of these modules using a module load command like:

          module load Shapely/2.0.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Shapely/2.0.1-gfbf-2023a x x x x x x Shapely/2.0.1-foss-2022b x x x x x x Shapely/1.8a1-iccifort-2020.4.304 - x x x x x Shapely/1.8a1-GCC-10.3.0 x - - - x - Shapely/1.8a1-GCC-10.2.0 - x x x x x Shapely/1.8.2-foss-2022a x x x x x x Shapely/1.8.2-foss-2021b x x x x x x Shapely/1.8.1.post1-GCC-11.2.0 x x x - x x Shapely/1.7.1-GCC-9.3.0-Python-3.8.2 - x x - x x Shapely/1.7.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Shasta/", "title": "Shasta", "text": ""}, {"location": "available_software/detail/Shasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Shasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Shasta, load one of these modules using a module load command like:

          module load Shasta/0.8.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Shasta/0.8.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Short-Pair/", "title": "Short-Pair", "text": ""}, {"location": "available_software/detail/Short-Pair/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Short-Pair installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Short-Pair, load one of these modules using a module load command like:

          module load Short-Pair/20170125-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Short-Pair/20170125-foss-2021b x x x - x x"}, {"location": "available_software/detail/SiNVICT/", "title": "SiNVICT", "text": ""}, {"location": "available_software/detail/SiNVICT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SiNVICT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SiNVICT, load one of these modules using a module load command like:

          module load SiNVICT/1.0-20180817-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SiNVICT/1.0-20180817-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/Sibelia/", "title": "Sibelia", "text": ""}, {"location": "available_software/detail/Sibelia/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sibelia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sibelia, load one of these modules using a module load command like:

          module load Sibelia/3.0.7-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sibelia/3.0.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimNIBS/", "title": "SimNIBS", "text": ""}, {"location": "available_software/detail/SimNIBS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimNIBS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimNIBS, load one of these modules using a module load command like:

          module load SimNIBS/3.2.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimNIBS/3.2.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimPEG/", "title": "SimPEG", "text": ""}, {"location": "available_software/detail/SimPEG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimPEG, load one of these modules using a module load command like:

          module load SimPEG/0.18.1-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimPEG/0.18.1-intel-2021b x x x - x x SimPEG/0.18.1-foss-2021b x x x - x x SimPEG/0.14.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SimpleElastix/", "title": "SimpleElastix", "text": ""}, {"location": "available_software/detail/SimpleElastix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimpleElastix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimpleElastix, load one of these modules using a module load command like:

          module load SimpleElastix/1.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimpleElastix/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SimpleITK/", "title": "SimpleITK", "text": ""}, {"location": "available_software/detail/SimpleITK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimpleITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimpleITK, load one of these modules using a module load command like:

          module load SimpleITK/2.1.1.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimpleITK/2.1.1.2-foss-2022a x x x x x x SimpleITK/2.1.0-fosscuda-2020b x - - - x - SimpleITK/2.1.0-foss-2020b - x x x x x SimpleITK/1.2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SlamDunk/", "title": "SlamDunk", "text": ""}, {"location": "available_software/detail/SlamDunk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SlamDunk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SlamDunk, load one of these modules using a module load command like:

          module load SlamDunk/0.4.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SlamDunk/0.4.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/Sniffles/", "title": "Sniffles", "text": ""}, {"location": "available_software/detail/Sniffles/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sniffles installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sniffles, load one of these modules using a module load command like:

          module load Sniffles/2.0.7-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sniffles/2.0.7-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/SoX/", "title": "SoX", "text": ""}, {"location": "available_software/detail/SoX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SoX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using SoX, load one of these modules using a module load command like:

          module load SoX/14.4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SoX/14.4.2-GCCcore-11.3.0 x x x x x x SoX/14.4.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Spark/", "title": "Spark", "text": ""}, {"location": "available_software/detail/Spark/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Spark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Spark, load one of these modules using a module load command like:

          module load Spark/3.5.0-foss-2023a\n
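
          As a hedged example of interactive use after loading (assuming the module puts the standard Spark launchers such as pyspark on the PATH):

          module load Spark/3.5.0-foss-2023a
          # start a PySpark shell in local mode, using the cores available on the current node
          pyspark --master local[*]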

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Spark/3.5.0-foss-2023a x x x x x x Spark/3.2.1-foss-2021b x x x - x x Spark/3.1.1-fosscuda-2020b - - - - x - Spark/2.4.5-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/SpatialDE/", "title": "SpatialDE", "text": ""}, {"location": "available_software/detail/SpatialDE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SpatialDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using SpatialDE, load one of these modules using a module load command like:

          module load SpatialDE/1.1.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SpatialDE/1.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Spyder/", "title": "Spyder", "text": ""}, {"location": "available_software/detail/Spyder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Spyder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Spyder, load one of these modules using a module load command like:

          module load Spyder/4.1.5-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Spyder/4.1.5-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SqueezeMeta/", "title": "SqueezeMeta", "text": ""}, {"location": "available_software/detail/SqueezeMeta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SqueezeMeta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using SqueezeMeta, load one of these modules using a module load command like:

          module load SqueezeMeta/1.5.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SqueezeMeta/1.5.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Squidpy/", "title": "Squidpy", "text": ""}, {"location": "available_software/detail/Squidpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Squidpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Squidpy, load one of these modules using a module load command like:

          module load Squidpy/1.2.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Squidpy/1.2.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Stacks/", "title": "Stacks", "text": ""}, {"location": "available_software/detail/Stacks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Stacks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Stacks, load one of these modules using a module load command like:

          module load Stacks/2.53-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Stacks/2.53-iccifort-2019.5.281 - x x - x - Stacks/2.5-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/Stata/", "title": "Stata", "text": ""}, {"location": "available_software/detail/Stata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Stata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Stata, load one of these modules using a module load command like:

          module load Stata/15\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Stata/15 - x x x x x"}, {"location": "available_software/detail/Statistics-R/", "title": "Statistics-R", "text": ""}, {"location": "available_software/detail/Statistics-R/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Statistics-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Statistics-R, load one of these modules using a module load command like:

          module load Statistics-R/0.34-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Statistics-R/0.34-foss-2020a - x x - x x"}, {"location": "available_software/detail/StringTie/", "title": "StringTie", "text": ""}, {"location": "available_software/detail/StringTie/#available-modules", "title": "Available modules", "text": "

          The overview below shows which StringTie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using StringTie, load one of these modules using a module load command like:

          module load StringTie/2.2.1-GCC-11.2.0-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty StringTie/2.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x StringTie/2.2.1-GCC-11.2.0 x x x x x x StringTie/2.1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/Structure/", "title": "Structure", "text": ""}, {"location": "available_software/detail/Structure/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Structure installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Structure, load one of these modules using a module load command like:

          module load Structure/2.3.4-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Structure/2.3.4-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/Structure_threader/", "title": "Structure_threader", "text": ""}, {"location": "available_software/detail/Structure_threader/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Structure_threader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Structure_threader, load one of these modules using a module load command like:

          module load Structure_threader/1.3.10-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Structure_threader/1.3.10-foss-2022b x x x x x x"}, {"location": "available_software/detail/SuAVE-biomat/", "title": "SuAVE-biomat", "text": ""}, {"location": "available_software/detail/SuAVE-biomat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuAVE-biomat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using SuAVE-biomat, load one of these modules using a module load command like:

          module load SuAVE-biomat/2.0.0-20230815-intel-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuAVE-biomat/2.0.0-20230815-intel-2023a x x x x x x"}, {"location": "available_software/detail/Subread/", "title": "Subread", "text": ""}, {"location": "available_software/detail/Subread/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Subread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Subread, load one of these modules using a module load command like:

          module load Subread/2.0.3-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Subread/2.0.3-GCC-9.3.0 - x x - x - Subread/2.0.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Subversion/", "title": "Subversion", "text": ""}, {"location": "available_software/detail/Subversion/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Subversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Subversion, load one of these modules using a module load command like:

          module load Subversion/1.14.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Subversion/1.14.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/SuiteSparse/", "title": "SuiteSparse", "text": ""}, {"location": "available_software/detail/SuiteSparse/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuiteSparse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using SuiteSparse, load one of these modules using a module load command like:

          module load SuiteSparse/7.1.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuiteSparse/7.1.0-foss-2023a x x x x x x SuiteSparse/5.13.0-foss-2022b-METIS-5.1.0 x x x x x x SuiteSparse/5.13.0-foss-2022a-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-intel-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021a-METIS-5.1.0 x x x x x x SuiteSparse/5.8.1-foss-2020b-METIS-5.1.0 x x x x x x SuiteSparse/5.7.1-intel-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.7.1-foss-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-intel-2019b-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-foss-2019b-METIS-5.1.0 x x x - x x"}, {"location": "available_software/detail/SuperLU/", "title": "SuperLU", "text": ""}, {"location": "available_software/detail/SuperLU/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuperLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using SuperLU, load one of these modules using a module load command like:

          module load SuperLU/5.2.2-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuperLU/5.2.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/SuperLU_DIST/", "title": "SuperLU_DIST", "text": ""}, {"location": "available_software/detail/SuperLU_DIST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuperLU_DIST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using SuperLU_DIST, load one of these modules using a module load command like:

          module load SuperLU_DIST/8.1.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuperLU_DIST/8.1.0-foss-2022a x - - x - - SuperLU_DIST/5.4.0-intel-2020a-trisolve-merge - x x - x x"}, {"location": "available_software/detail/Szip/", "title": "Szip", "text": ""}, {"location": "available_software/detail/Szip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Szip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Szip, load one of these modules using a module load command like:

          module load Szip/2.1.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Szip/2.1.1-GCCcore-12.3.0 x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x Szip/2.1.1-GCCcore-11.3.0 x x x x x x Szip/2.1.1-GCCcore-11.2.0 x x x x x x Szip/2.1.1-GCCcore-10.3.0 x x x x x x Szip/2.1.1-GCCcore-10.2.0 x x x x x x Szip/2.1.1-GCCcore-9.3.0 x x x x x x Szip/2.1.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/TALON/", "title": "TALON", "text": ""}, {"location": "available_software/detail/TALON/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TALON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TALON, load one of these modules using a module load command like:

          module load TALON/5.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TALON/5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/TAMkin/", "title": "TAMkin", "text": ""}, {"location": "available_software/detail/TAMkin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TAMkin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TAMkin, load one of these modules using a module load command like:

          module load TAMkin/1.2.6-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TAMkin/1.2.6-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/TCLAP/", "title": "TCLAP", "text": ""}, {"location": "available_software/detail/TCLAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TCLAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TCLAP, load one of these modules using a module load command like:

          module load TCLAP/1.2.4-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TCLAP/1.2.4-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/TELEMAC-MASCARET/", "title": "TELEMAC-MASCARET", "text": ""}, {"location": "available_software/detail/TELEMAC-MASCARET/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TELEMAC-MASCARET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TELEMAC-MASCARET, load one of these modules using a module load command like:

          module load TELEMAC-MASCARET/8p3r1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TELEMAC-MASCARET/8p3r1-foss-2021b x x x - x x"}, {"location": "available_software/detail/TEtranscripts/", "title": "TEtranscripts", "text": ""}, {"location": "available_software/detail/TEtranscripts/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TEtranscripts installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TEtranscripts, load one of these modules using a module load command like:

          module load TEtranscripts/2.2.0-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TEtranscripts/2.2.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/TOBIAS/", "title": "TOBIAS", "text": ""}, {"location": "available_software/detail/TOBIAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TOBIAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TOBIAS, load one of these modules using a module load command like:

          module load TOBIAS/0.12.12-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TOBIAS/0.12.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/TOPAS/", "title": "TOPAS", "text": ""}, {"location": "available_software/detail/TOPAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TOPAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TOPAS, load one of these modules using a module load command like:

          module load TOPAS/3.9-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TOPAS/3.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/TRF/", "title": "TRF", "text": ""}, {"location": "available_software/detail/TRF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TRF, load one of these modules using a module load command like:

          module load TRF/4.09.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TRF/4.09.1-GCCcore-11.3.0 x x x x x x TRF/4.09.1-GCCcore-11.2.0 x x x - x x TRF/4.09.1-GCCcore-10.2.0 x x x x x x TRF/4.09-linux64 - - - - - x"}, {"location": "available_software/detail/TRUST4/", "title": "TRUST4", "text": ""}, {"location": "available_software/detail/TRUST4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TRUST4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TRUST4, load one of these modules using a module load command like:

          module load TRUST4/1.0.6-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TRUST4/1.0.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Tcl/", "title": "Tcl", "text": ""}, {"location": "available_software/detail/Tcl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tcl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Tcl, load one of these modules using a module load command like:

          module load Tcl/8.6.13-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tcl/8.6.13-GCCcore-13.2.0 x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x Tcl/8.6.12-GCCcore-11.3.0 x x x x x x Tcl/8.6.11-GCCcore-11.2.0 x x x x x x Tcl/8.6.11-GCCcore-10.3.0 x x x x x x Tcl/8.6.10-GCCcore-10.2.0 x x x x x x Tcl/8.6.10-GCCcore-9.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/TensorFlow/", "title": "TensorFlow", "text": ""}, {"location": "available_software/detail/TensorFlow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TensorFlow, load one of these modules using a module load command like:

          module load TensorFlow/2.13.0-foss-2023a\n
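
          On clusters where a CUDA-enabled build is listed in the table below, that variant is the better fit for GPU jobs. A minimal sketch of a GPU sanity check (assuming the module provides the matching Python with TensorFlow installed):

          module load TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0
          # list the GPUs TensorFlow can see; an empty list means no GPU was allocated or detected
          python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"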

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TensorFlow/2.13.0-foss-2023a x x x x x x TensorFlow/2.13.0-foss-2022b x x x x x x TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0 x - x - x - TensorFlow/2.11.0-foss-2022a x x x x x x TensorFlow/2.8.4-foss-2021b - - - x - - TensorFlow/2.7.1-foss-2021b-CUDA-11.4.1 x - - - x - TensorFlow/2.7.1-foss-2021b x x x x x x TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1 x - - - x - TensorFlow/2.6.0-foss-2021a x x x x x x TensorFlow/2.5.3-foss-2021a x x x - x x TensorFlow/2.5.0-fosscuda-2020b x - - - x - TensorFlow/2.5.0-foss-2020b - x x x x x TensorFlow/2.4.1-fosscuda-2020b x - - - x - TensorFlow/2.4.1-foss-2020b x x x x x x TensorFlow/2.3.1-foss-2020a-Python-3.8.2 - x x - x x TensorFlow/2.2.3-foss-2020b - x x x x x TensorFlow/2.2.2-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.2.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.1.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/1.15.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Theano/", "title": "Theano", "text": ""}, {"location": "available_software/detail/Theano/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Theano installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Theano, load one of these modules using a module load command like:

          module load Theano/1.1.2-intel-2021b-PyMC\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Theano/1.1.2-intel-2021b-PyMC x x x - x x Theano/1.1.2-intel-2020b-PyMC - - x - x x Theano/1.1.2-fosscuda-2020b-PyMC x - - - x - Theano/1.1.2-foss-2020b-PyMC - x x x x x Theano/1.0.4-intel-2019b-Python-3.7.4 - - x - x x Theano/1.0.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Tk/", "title": "Tk", "text": ""}, {"location": "available_software/detail/Tk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Tk, load one of these modules using a module load command like:

          module load Tk/8.6.13-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tk/8.6.13-GCCcore-12.3.0 x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x Tk/8.6.12-GCCcore-11.3.0 x x x x x x Tk/8.6.11-GCCcore-11.2.0 x x x x x x Tk/8.6.11-GCCcore-10.3.0 x x x x x x Tk/8.6.10-GCCcore-10.2.0 x x x x x x Tk/8.6.10-GCCcore-9.3.0 x x x x x x Tk/8.6.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Tkinter/", "title": "Tkinter", "text": ""}, {"location": "available_software/detail/Tkinter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tkinter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Tkinter, load one of these modules using a module load command like:

          module load Tkinter/3.11.3-GCCcore-12.3.0\n
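
          In these installations, Tkinter is the Tk interface built for a matching Python version, so loading the module should give you a Python whose tkinter module works. A minimal, hedged check (assuming that Python ends up first on the PATH):

          module load Tkinter/3.11.3-GCCcore-12.3.0
          # hypothetical sanity check: print the Tk version the bundled Python was built against
          python -c "import tkinter; print(tkinter.TkVersion)"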

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x Tkinter/3.10.4-GCCcore-11.3.0 x x x x x x Tkinter/3.9.6-GCCcore-11.2.0 x x x x x x Tkinter/3.9.5-GCCcore-10.3.0 x x x x x x Tkinter/3.8.6-GCCcore-10.2.0 x x x x x x Tkinter/3.8.2-GCCcore-9.3.0 x x x x x x Tkinter/3.7.4-GCCcore-8.3.0 - x x - x x Tkinter/2.7.18-GCCcore-10.2.0 - x x x x x Tkinter/2.7.18-GCCcore-9.3.0 - x x - x x Tkinter/2.7.16-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Togl/", "title": "Togl", "text": ""}, {"location": "available_software/detail/Togl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Togl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Togl, load one of these modules using a module load command like:

          module load Togl/2.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Togl/2.0-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Tombo/", "title": "Tombo", "text": ""}, {"location": "available_software/detail/Tombo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tombo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Tombo, load one of these modules using a module load command like:

          module load Tombo/1.5.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tombo/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/TopHat/", "title": "TopHat", "text": ""}, {"location": "available_software/detail/TopHat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TopHat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TopHat, load one of these modules using a module load command like:

          module load TopHat/2.1.2-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TopHat/2.1.2-iimpi-2020a - x x - x x TopHat/2.1.2-gompi-2020a - x x - x x TopHat/2.1.2-GCC-11.3.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-11.2.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/TransDecoder/", "title": "TransDecoder", "text": ""}, {"location": "available_software/detail/TransDecoder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TransDecoder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TransDecoder, load one of these modules using a module load command like:

          module load TransDecoder/5.5.0-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TransDecoder/5.5.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/TranscriptClean/", "title": "TranscriptClean", "text": ""}, {"location": "available_software/detail/TranscriptClean/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TranscriptClean installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TranscriptClean, load one of these modules using a module load command like:

          module load TranscriptClean/2.0.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TranscriptClean/2.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/Transformers/", "title": "Transformers", "text": ""}, {"location": "available_software/detail/Transformers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Transformers, load one of these modules using a module load command like:

          module load Transformers/4.30.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Transformers/4.30.2-foss-2022b x x x x x x Transformers/4.24.0-foss-2022a x x x x x x Transformers/4.21.1-foss-2021b x x x - x x Transformers/4.20.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/TreeMix/", "title": "TreeMix", "text": ""}, {"location": "available_software/detail/TreeMix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TreeMix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TreeMix, load one of these modules using a module load command like:

          module load TreeMix/1.13-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TreeMix/1.13-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Trilinos/", "title": "Trilinos", "text": ""}, {"location": "available_software/detail/Trilinos/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trilinos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Trilinos, load one of these modules using a module load command like:

          module load Trilinos/12.12.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trilinos/12.12.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Trim_Galore/", "title": "Trim_Galore", "text": ""}, {"location": "available_software/detail/Trim_Galore/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trim_Galore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Trim_Galore, load one of these modules using a module load command like:

          module load Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18 - x x x x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-3.7.4 - x x - x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Trimmomatic/", "title": "Trimmomatic", "text": ""}, {"location": "available_software/detail/Trimmomatic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trimmomatic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Trimmomatic, load one of these modules using a module load command like:

          module load Trimmomatic/0.39-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trimmomatic/0.39-Java-11 x x x x x x Trimmomatic/0.38-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Trinity/", "title": "Trinity", "text": ""}, {"location": "available_software/detail/Trinity/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trinity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Trinity, load one of these modules using a module load command like:

          module load Trinity/2.15.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trinity/2.15.1-foss-2022a x x x x x x Trinity/2.10.0-foss-2019b-Python-3.7.4 - x x - x x Trinity/2.9.1-foss-2019b-Python-2.7.16 - x x - x x Trinity/2.8.5-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/Triton/", "title": "Triton", "text": ""}, {"location": "available_software/detail/Triton/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Triton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Triton, load one of these modules using a module load command like:

          module load Triton/1.1.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Triton/1.1.1-foss-2022a-CUDA-11.7.0 - - x - - -"}, {"location": "available_software/detail/Trycycler/", "title": "Trycycler", "text": ""}, {"location": "available_software/detail/Trycycler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trycycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Trycycler, load one of these modules using a module load command like:

          module load Trycycler/0.3.3-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trycycler/0.3.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/TurboVNC/", "title": "TurboVNC", "text": ""}, {"location": "available_software/detail/TurboVNC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TurboVNC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using TurboVNC, load one of these modules using a module load command like:

          module load TurboVNC/2.2.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TurboVNC/2.2.6-GCCcore-11.2.0 x x x x x x TurboVNC/2.2.3-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/UCC/", "title": "UCC", "text": ""}, {"location": "available_software/detail/UCC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCC, load one of these modules using a module load command like:

          module load UCC/1.2.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCC/1.2.0-GCCcore-13.2.0 x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x UCC/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/UCLUST/", "title": "UCLUST", "text": ""}, {"location": "available_software/detail/UCLUST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCLUST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCLUST, load one of these modules using a module load command like:

          module load UCLUST/1.2.22q-i86linux64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCLUST/1.2.22q-i86linux64 - x x - x x"}, {"location": "available_software/detail/UCX-CUDA/", "title": "UCX-CUDA", "text": ""}, {"location": "available_software/detail/UCX-CUDA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCX-CUDA, load one of these modules using a module load command like:

          module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - UCX-CUDA/1.12.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - UCX-CUDA/1.11.2-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - UCX-CUDA/1.10.0-GCCcore-10.3.0-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/UCX/", "title": "UCX", "text": ""}, {"location": "available_software/detail/UCX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCX, load one of these modules using a module load command like:

          module load UCX/1.15.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCX/1.15.0-GCCcore-13.2.0 x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x UCX/1.12.1-GCCcore-11.3.0 x x x x x x UCX/1.11.2-GCCcore-11.2.0 x x x x x x UCX/1.10.0-GCCcore-10.3.0 x x x x x x UCX/1.9.0-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - UCX/1.9.0-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x UCX/1.9.0-GCCcore-10.2.0 x x x x x x UCX/1.8.0-GCCcore-9.3.0 x x x x x x UCX/1.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UDUNITS/", "title": "UDUNITS", "text": ""}, {"location": "available_software/detail/UDUNITS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UDUNITS, load one of these modules using a module load command like:

          module load UDUNITS/2.2.28-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-10.3.0 x x x x x x UDUNITS/2.2.26-foss-2020a - x x - x x UDUNITS/2.2.26-GCCcore-10.2.0 x x x x x x UDUNITS/2.2.26-GCCcore-9.3.0 - x x - x x UDUNITS/2.2.26-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UFL/", "title": "UFL", "text": ""}, {"location": "available_software/detail/UFL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UFL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UFL, load one of these modules using a module load command like:

          module load UFL/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UFL/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UMI-tools/", "title": "UMI-tools", "text": ""}, {"location": "available_software/detail/UMI-tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UMI-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UMI-tools, load one of these modules using a module load command like:

          module load UMI-tools/1.0.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UMI-tools/1.0.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UQTk/", "title": "UQTk", "text": ""}, {"location": "available_software/detail/UQTk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UQTk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UQTk, load one of these modules using a module load command like:

          module load UQTk/3.1.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UQTk/3.1.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/USEARCH/", "title": "USEARCH", "text": ""}, {"location": "available_software/detail/USEARCH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which USEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using USEARCH, load one of these modules using a module load command like:

          module load USEARCH/11.0.667-i86linux32\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty USEARCH/11.0.667-i86linux32 x x x x x x"}, {"location": "available_software/detail/UnZip/", "title": "UnZip", "text": ""}, {"location": "available_software/detail/UnZip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UnZip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UnZip, load one of these modules using a module load command like:

          module load UnZip/6.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UnZip/6.0-GCCcore-13.2.0 x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x UnZip/6.0-GCCcore-11.3.0 x x x x x x UnZip/6.0-GCCcore-11.2.0 x x x x x x UnZip/6.0-GCCcore-10.3.0 x x x x x x UnZip/6.0-GCCcore-10.2.0 x x x x x x UnZip/6.0-GCCcore-9.3.0 x x x x x x"}, {"location": "available_software/detail/UniFrac/", "title": "UniFrac", "text": ""}, {"location": "available_software/detail/UniFrac/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UniFrac installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UniFrac, load one of these modules using a module load command like:

          module load UniFrac/1.3.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UniFrac/1.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Unicycler/", "title": "Unicycler", "text": ""}, {"location": "available_software/detail/Unicycler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Unicycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Unicycler, load one of these modules using a module load command like:

          module load Unicycler/0.4.8-gompi-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Unicycler/0.4.8-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Unidecode/", "title": "Unidecode", "text": ""}, {"location": "available_software/detail/Unidecode/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Unidecode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Unidecode, load one of these modules using a module load command like:

          module load Unidecode/1.3.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Unidecode/1.3.6-GCCcore-11.3.0 x x x x x x Unidecode/1.1.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VASP/", "title": "VASP", "text": ""}, {"location": "available_software/detail/VASP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VASP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VASP, load one of these modules using a module load command like:

          module load VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-gomkl-2023a x x x x x x VASP/6.4.2-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.2-gomkl-2021a - x x x x x VASP/6.4.2-foss-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-foss-2023a x x x x x x VASP/6.4.2-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.4.1-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.1-gomkl-2021a - x x x x x VASP/6.4.1-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.3.1-gomkl-2021a-VASPsol-20210413-vtst-184-Wannier90-3.1.0 x x x x x x VASP/6.3.1-gomkl-2021a - x x x x x VASP/6.3.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.3.0-gomkl-2021a-VASPsol-20210413 - x x x x x VASP/6.2.1-gomkl-2021a - x x x x x VASP/6.2.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.2.0-intel-2020a - x x - x x VASP/6.2.0-gomkl-2020a - x x x x x VASP/6.2.0-foss-2020a - x x - x x VASP/6.1.2-intel-2020a - x x - x x VASP/6.1.2-gomkl-2020a - x x x x x VASP/6.1.2-foss-2020a - x x - x x VASP/5.4.4-iomkl-2020b-vtst-176-mt-20180516 x x x x x x VASP/5.4.4-intel-2019b-mt-20180516-ncl - x x - x x VASP/5.4.4-intel-2019b-mt-20180516 - x x - x x"}, {"location": "available_software/detail/VBZ-Compression/", "title": "VBZ-Compression", "text": ""}, {"location": "available_software/detail/VBZ-Compression/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VBZ-Compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VBZ-Compression, load one of these modules using a module load command like:

          module load VBZ-Compression/1.0.3-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VBZ-Compression/1.0.3-gompi-2022a x x x x x x VBZ-Compression/1.0.1-gompi-2020b - - x x x x"}, {"location": "available_software/detail/VCFtools/", "title": "VCFtools", "text": ""}, {"location": "available_software/detail/VCFtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VCFtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VCFtools, load one of these modules using a module load command like:

          module load VCFtools/0.1.16-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VCFtools/0.1.16-iccifort-2019.5.281 - x x - x x VCFtools/0.1.16-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/VEP/", "title": "VEP", "text": ""}, {"location": "available_software/detail/VEP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VEP, load one of these modules using a module load command like:

          module load VEP/107-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VEP/107-GCC-11.3.0 x x x - x x VEP/105-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/VESTA/", "title": "VESTA", "text": ""}, {"location": "available_software/detail/VESTA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VESTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VESTA, load one of these modules using a module load command like:

          module load VESTA/3.5.8-gtk3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VESTA/3.5.8-gtk3 x x x - x x"}, {"location": "available_software/detail/VMD/", "title": "VMD", "text": ""}, {"location": "available_software/detail/VMD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VMD, load one of these modules using a module load command like:

          module load VMD/1.9.4a51-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VMD/1.9.4a51-foss-2020b - x x x x x"}, {"location": "available_software/detail/VMTK/", "title": "VMTK", "text": ""}, {"location": "available_software/detail/VMTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VMTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VMTK, load one of these modules using a module load command like:

          module load VMTK/1.4.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VMTK/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VSCode/", "title": "VSCode", "text": ""}, {"location": "available_software/detail/VSCode/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VSCode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VSCode, load one of these modules using a module load command like:

          module load VSCode/1.85.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VSCode/1.85.0 x x x x x x"}, {"location": "available_software/detail/VSEARCH/", "title": "VSEARCH", "text": ""}, {"location": "available_software/detail/VSEARCH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VSEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VSEARCH, load one of these modules using a module load command like:

          module load VSEARCH/2.22.1-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VSEARCH/2.22.1-GCC-11.3.0 x x x x x x VSEARCH/2.18.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/VTK/", "title": "VTK", "text": ""}, {"location": "available_software/detail/VTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VTK, load one of these modules using a module load command like:

          module load VTK/9.2.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VTK/9.2.2-foss-2022a x x x x x x VTK/9.2.0.rc2-foss-2022a x x x - x x VTK/9.1.0-foss-2021b x x x - x x VTK/9.0.1-fosscuda-2020b x - - - x - VTK/9.0.1-foss-2021a - x x - x x VTK/9.0.1-foss-2020b - x x x x x VTK/8.2.0-foss-2020a-Python-3.8.2 - x x - x x VTK/8.2.0-foss-2019b-Python-3.7.4 - x x - x x VTK/8.2.0-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/VTune/", "title": "VTune", "text": ""}, {"location": "available_software/detail/VTune/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VTune installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VTune, load one of these modules using a module load command like:

          module load VTune/2019_update2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VTune/2019_update2 - - - - - x"}, {"location": "available_software/detail/Vala/", "title": "Vala", "text": ""}, {"location": "available_software/detail/Vala/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Vala installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Vala, load one of these modules using a module load command like:

          module load Vala/0.52.4-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Vala/0.52.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Valgrind/", "title": "Valgrind", "text": ""}, {"location": "available_software/detail/Valgrind/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Valgrind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Valgrind, load one of these modules using a module load command like:

          module load Valgrind/3.20.0-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Valgrind/3.20.0-gompi-2022a x x x - x x Valgrind/3.19.0-gompi-2022a x x x - x x Valgrind/3.18.1-iimpi-2021b x x x - x x Valgrind/3.18.1-gompi-2021b x x x - x x Valgrind/3.17.0-gompi-2021a x x x - x x"}, {"location": "available_software/detail/VarScan/", "title": "VarScan", "text": ""}, {"location": "available_software/detail/VarScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VarScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VarScan, load one of these modules using a module load command like:

          module load VarScan/2.4.4-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VarScan/2.4.4-Java-11 x x x - x x"}, {"location": "available_software/detail/Velvet/", "title": "Velvet", "text": ""}, {"location": "available_software/detail/Velvet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Velvet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Velvet, load one of these modules using a module load command like:

          module load Velvet/1.2.10-foss-2023a-mt-kmer_191\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Velvet/1.2.10-foss-2023a-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-11.2.0-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-8.3.0-mt-kmer_191 - x x - x x"}, {"location": "available_software/detail/VirSorter2/", "title": "VirSorter2", "text": ""}, {"location": "available_software/detail/VirSorter2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VirSorter2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VirSorter2, load one of these modules using a module load command like:

          module load VirSorter2/2.2.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VirSorter2/2.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/VisPy/", "title": "VisPy", "text": ""}, {"location": "available_software/detail/VisPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VisPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VisPy, load one of these modules using a module load command like:

          module load VisPy/0.12.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VisPy/0.12.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Voro%2B%2B/", "title": "Voro++", "text": ""}, {"location": "available_software/detail/Voro%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Voro++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Voro++, load one of these modules using a module load command like:

          module load Voro++/0.4.6-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Voro++/0.4.6-intel-2019b - x x - x x Voro++/0.4.6-foss-2019b - x x - x x Voro++/0.4.6-GCCcore-11.2.0 x x x - x x Voro++/0.4.6-GCCcore-10.3.0 - x x - x x Voro++/0.4.6-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/WFA2/", "title": "WFA2", "text": ""}, {"location": "available_software/detail/WFA2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WFA2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WFA2, load one of these modules using a module load command like:

          module load WFA2/2.3.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WFA2/2.3.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/WHAM/", "title": "WHAM", "text": ""}, {"location": "available_software/detail/WHAM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WHAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WHAM, load one of these modules using a module load command like:

          module load WHAM/2.0.10.2-intel-2020a-kj_mol\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WHAM/2.0.10.2-intel-2020a-kj_mol - x x - x x WHAM/2.0.10.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/WIEN2k/", "title": "WIEN2k", "text": ""}, {"location": "available_software/detail/WIEN2k/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WIEN2k installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WIEN2k, load one of these modules using a module load command like:

          module load WIEN2k/21.1-intel-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WIEN2k/21.1-intel-2021a - x x - x x WIEN2k/19.2-intel-2020b - x x x x x"}, {"location": "available_software/detail/WPS/", "title": "WPS", "text": ""}, {"location": "available_software/detail/WPS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WPS, load one of these modules using a module load command like:

          module load WPS/4.1-intel-2019b-dmpar\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WPS/4.1-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/WRF/", "title": "WRF", "text": ""}, {"location": "available_software/detail/WRF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WRF, load one of these modules using a module load command like:

          module load WRF/4.1.3-intel-2019b-dmpar\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WRF/4.1.3-intel-2019b-dmpar - x x - x x WRF/3.9.1.1-intel-2020b-dmpar - x x x x x WRF/3.8.0-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/Wannier90/", "title": "Wannier90", "text": ""}, {"location": "available_software/detail/Wannier90/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Wannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Wannier90, load one of these modules using a module load command like:

          module load Wannier90/3.1.0-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Wannier90/3.1.0-intel-2022a - - x - x x Wannier90/3.1.0-intel-2020b - x x x x x Wannier90/3.1.0-intel-2020a - x x - x x Wannier90/3.1.0-gomkl-2023a x x x x x x Wannier90/3.1.0-gomkl-2021a x x x x x x Wannier90/3.1.0-foss-2023a x x x x x x Wannier90/3.1.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Wayland/", "title": "Wayland", "text": ""}, {"location": "available_software/detail/Wayland/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Wayland installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Wayland, load one of these modules using a module load command like:

          module load Wayland/1.22.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Wayland/1.22.0-GCCcore-12.3.0 x x x x x x Wayland/1.21.0-GCCcore-11.2.0 x x x x x x Wayland/1.20.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Waylandpp/", "title": "Waylandpp", "text": ""}, {"location": "available_software/detail/Waylandpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Waylandpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Waylandpp, load one of these modules using a module load command like:

          module load Waylandpp/1.0.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Waylandpp/1.0.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/WebKitGTK%2B/", "title": "WebKitGTK+", "text": ""}, {"location": "available_software/detail/WebKitGTK%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WebKitGTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WebKitGTK+, load one of these modules using a module load command like:

          module load WebKitGTK+/2.37.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WebKitGTK+/2.37.1-GCC-11.2.0 x x x x x x WebKitGTK+/2.27.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/WhatsHap/", "title": "WhatsHap", "text": ""}, {"location": "available_software/detail/WhatsHap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WhatsHap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WhatsHap, load one of these modules using a module load command like:

          module load WhatsHap/1.7-foss-2022a\n
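
          For example (all input and output file names below are placeholders), read-based phasing of a VCF could then be started with:

          # placeholders: reference.fasta, variants.vcf, alignments.bam, phased.vcf\nmodule load WhatsHap/1.7-foss-2022a\nwhatshap phase --reference reference.fasta -o phased.vcf variants.vcf alignments.bam\n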

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WhatsHap/1.7-foss-2022a x x x x x x WhatsHap/1.4-foss-2021b x x x - x x WhatsHap/1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/Winnowmap/", "title": "Winnowmap", "text": ""}, {"location": "available_software/detail/Winnowmap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Winnowmap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Winnowmap, load one of these modules using a module load command like:

          module load Winnowmap/1.0-GCC-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Winnowmap/1.0-GCC-8.3.0 - x - - - x"}, {"location": "available_software/detail/WisecondorX/", "title": "WisecondorX", "text": ""}, {"location": "available_software/detail/WisecondorX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WisecondorX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WisecondorX, load one of these modules using a module load command like:

          module load WisecondorX/1.1.6-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WisecondorX/1.1.6-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/X11/", "title": "X11", "text": ""}, {"location": "available_software/detail/X11/#available-modules", "title": "Available modules", "text": "

          The overview below shows which X11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using X11, load one of these modules using a module load command like:

          module load X11/20230603-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty X11/20230603-GCCcore-12.3.0 x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x X11/20220504-GCCcore-11.3.0 x x x x x x X11/20210802-GCCcore-11.2.0 x x x x x x X11/20210518-GCCcore-10.3.0 x x x x x x X11/20201008-GCCcore-10.2.0 x x x x x x X11/20200222-GCCcore-9.3.0 x x x x x x X11/20190717-GCCcore-8.3.0 x x x - x x X11/20190311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/XCFun/", "title": "XCFun", "text": ""}, {"location": "available_software/detail/XCFun/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XCFun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XCFun, load one of these modules using a module load command like:

          module load XCFun/2.1.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XCFun/2.1.1-GCCcore-12.2.0 x x x x x x XCFun/2.1.1-GCCcore-11.3.0 - x x x x x XCFun/2.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/XCrySDen/", "title": "XCrySDen", "text": ""}, {"location": "available_software/detail/XCrySDen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XCrySDen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XCrySDen, load one of these modules using a module load command like:

          module load XCrySDen/1.6.2-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XCrySDen/1.6.2-intel-2022a x x x - x x XCrySDen/1.6.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/XGBoost/", "title": "XGBoost", "text": ""}, {"location": "available_software/detail/XGBoost/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XGBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XGBoost, load one of these modules using a module load command like:

          module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\n
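
          Note that the CUDA-enabled build is only listed for a single cluster in the overview below. As a rough sanity check, and assuming the module also provides the xgboost Python package (as EasyBuild-based XGBoost installations typically do), you could try:

          # assumption: the module puts the xgboost Python bindings on the PYTHONPATH\nmodule load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\npython -c 'import xgboost; print(xgboost.__version__)'\n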

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XGBoost/1.7.2-foss-2022a-CUDA-11.7.0 x - - - - - XGBoost/1.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/XML-Compile/", "title": "XML-Compile", "text": ""}, {"location": "available_software/detail/XML-Compile/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XML-Compile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XML-Compile, load one of these modules using a module load command like:

          module load XML-Compile/1.63-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XML-Compile/1.63-GCCcore-12.2.0 x x x x x x XML-Compile/1.63-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/XML-LibXML/", "title": "XML-LibXML", "text": ""}, {"location": "available_software/detail/XML-LibXML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XML-LibXML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XML-LibXML, load one of these modules using a module load command like:

          module load XML-LibXML/2.0208-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.3.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.2.0 x x x x x x XML-LibXML/2.0206-GCCcore-10.2.0 - x x x x x XML-LibXML/2.0205-GCCcore-9.3.0 - x x - x x XML-LibXML/2.0201-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/XZ/", "title": "XZ", "text": ""}, {"location": "available_software/detail/XZ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XZ, load one of these modules using a module load command like:

          module load XZ/5.4.4-GCCcore-13.2.0\n
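
          As a quick, self-contained example (data.tar is a placeholder file), compressing and decompressing with xz could look like:

          # -9 selects the highest compression preset; -d decompresses again\nmodule load XZ/5.4.4-GCCcore-13.2.0\nxz -9 data.tar\nxz -d data.tar.xz\n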

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XZ/5.4.4-GCCcore-13.2.0 x x x x x x XZ/5.4.2-GCCcore-12.3.0 x x x x x x XZ/5.2.7-GCCcore-12.2.0 x x x x x x XZ/5.2.5-GCCcore-11.3.0 x x x x x x XZ/5.2.5-GCCcore-11.2.0 x x x x x x XZ/5.2.5-GCCcore-10.3.0 x x x x x x XZ/5.2.5-GCCcore-10.2.0 x x x x x x XZ/5.2.5-GCCcore-9.3.0 x x x x x x XZ/5.2.4-GCCcore-8.3.0 x x x x x x XZ/5.2.4-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Xerces-C%2B%2B/", "title": "Xerces-C++", "text": ""}, {"location": "available_software/detail/Xerces-C%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Xerces-C++, load one of these modules using a module load command like:

          module load Xerces-C++/3.2.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/XlsxWriter/", "title": "XlsxWriter", "text": ""}, {"location": "available_software/detail/XlsxWriter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XlsxWriter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XlsxWriter, load one of these modules using a module load command like:

          module load XlsxWriter/3.1.9-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XlsxWriter/3.1.9-GCCcore-13.2.0 x x x x x x XlsxWriter/3.1.3-GCCcore-12.3.0 x x x x x x XlsxWriter/3.1.2-GCCcore-12.2.0 x x x x x x XlsxWriter/3.0.8-GCCcore-11.3.0 x x x x x x XlsxWriter/3.0.2-GCCcore-11.2.0 x x x x x x XlsxWriter/1.4.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Xvfb/", "title": "Xvfb", "text": ""}, {"location": "available_software/detail/Xvfb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Xvfb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Xvfb, load one of these modules using a module load command like:

          module load Xvfb/21.1.8-GCCcore-12.3.0\n
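
          A hedged sketch of a typical use case, assuming the module provides the xvfb-run wrapper (my_gui_tool is a placeholder for a program that needs an X display):

          # -a lets xvfb-run pick a free display number automatically; my_gui_tool is a placeholder\nmodule load Xvfb/21.1.8-GCCcore-12.3.0\nxvfb-run -a ./my_gui_tool\n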

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x Xvfb/21.1.3-GCCcore-11.3.0 x x x x x x Xvfb/1.20.13-GCCcore-11.2.0 x x x x x x Xvfb/1.20.11-GCCcore-10.3.0 x x x x x x Xvfb/1.20.9-GCCcore-10.2.0 x x x x x x Xvfb/1.20.9-GCCcore-9.3.0 - x x - x x Xvfb/1.20.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/YACS/", "title": "YACS", "text": ""}, {"location": "available_software/detail/YACS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which YACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using YACS, load one of these modules using a module load command like:

          module load YACS/0.1.8-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty YACS/0.1.8-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/YANK/", "title": "YANK", "text": ""}, {"location": "available_software/detail/YANK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which YANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using YANK, load one of these modules using a module load command like:

          module load YANK/0.25.2-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty YANK/0.25.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/YAXT/", "title": "YAXT", "text": ""}, {"location": "available_software/detail/YAXT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which YAXT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using YAXT, load one of these modules using a module load command like:

          module load YAXT/0.9.1-gompi-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty YAXT/0.9.1-gompi-2021a x x x - x x YAXT/0.6.2-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/Yambo/", "title": "Yambo", "text": ""}, {"location": "available_software/detail/Yambo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Yambo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Yambo, load one of these modules using a module load command like:

          module load Yambo/5.1.2-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Yambo/5.1.2-intel-2021b x x x x x x"}, {"location": "available_software/detail/Yasm/", "title": "Yasm", "text": ""}, {"location": "available_software/detail/Yasm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Yasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Yasm, load one of these modules using a module load command like:

          module load Yasm/1.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Yasm/1.3.0-GCCcore-12.3.0 x x x x x x Yasm/1.3.0-GCCcore-12.2.0 x x x x x x Yasm/1.3.0-GCCcore-11.3.0 x x x x x x Yasm/1.3.0-GCCcore-11.2.0 x x x x x x Yasm/1.3.0-GCCcore-10.3.0 x x x x x x Yasm/1.3.0-GCCcore-10.2.0 x x x x x x Yasm/1.3.0-GCCcore-9.3.0 - x x - x x Yasm/1.3.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Z3/", "title": "Z3", "text": ""}, {"location": "available_software/detail/Z3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Z3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Z3, load one of these modules using a module load command like:

          module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n
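
          Since the top module carries a -Python suffix, it presumably ships the z3 Python bindings as well; a minimal smoke test could then be:

          # assumption: the -Python-3.11.3 module exposes the z3 Python package\nmodule load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\npython -c 'import z3; print(z3.get_version_string())'\n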

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x Z3/4.10.2-GCCcore-11.3.0 x x x x x x Z3/4.8.12-GCCcore-11.2.0 x x x x x x Z3/4.8.11-GCCcore-10.3.0 x x x x x x Z3/4.8.10-GCCcore-10.2.0 - x x x x x Z3/4.8.9-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/Zeo%2B%2B/", "title": "Zeo++", "text": ""}, {"location": "available_software/detail/Zeo%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Zeo++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Zeo++, load one of these modules using a module load command like:

          module load Zeo++/0.3-intel-compilers-2023.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Zeo++/0.3-intel-compilers-2023.1.0 x x x x x x"}, {"location": "available_software/detail/ZeroMQ/", "title": "ZeroMQ", "text": ""}, {"location": "available_software/detail/ZeroMQ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ZeroMQ, load one of these modules using a module load command like:

          module load ZeroMQ/4.3.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-12.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-10.3.0 x x x x x x ZeroMQ/4.3.3-GCCcore-10.2.0 x x x x x x ZeroMQ/4.3.2-GCCcore-9.3.0 x x x x x x ZeroMQ/4.3.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zip/", "title": "Zip", "text": ""}, {"location": "available_software/detail/Zip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Zip, load one of these modules using a module load command like:

          module load Zip/3.0-GCCcore-12.3.0\n
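
          For instance (results/ and results.zip are placeholders), creating a recursive archive of a job's output could look like:

          # -r archives the directory recursively; names are placeholders\nmodule load Zip/3.0-GCCcore-12.3.0\nzip -r results.zip results/\n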

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Zip/3.0-GCCcore-12.3.0 x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x Zip/3.0-GCCcore-11.3.0 x x x x x x Zip/3.0-GCCcore-11.2.0 x x x x x x Zip/3.0-GCCcore-10.3.0 x x x x x x Zip/3.0-GCCcore-10.2.0 x x x x x x Zip/3.0-GCCcore-9.3.0 - x x - x x Zip/3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zopfli/", "title": "Zopfli", "text": ""}, {"location": "available_software/detail/Zopfli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Zopfli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Zopfli, load one of these modules using a module load command like:

          module load Zopfli/1.0.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Zopfli/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/adjustText/", "title": "adjustText", "text": ""}, {"location": "available_software/detail/adjustText/#available-modules", "title": "Available modules", "text": "

          The overview below shows which adjustText installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using adjustText, load one of these modules using a module load command like:

          module load adjustText/0.7.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty adjustText/0.7.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/aiohttp/", "title": "aiohttp", "text": ""}, {"location": "available_software/detail/aiohttp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which aiohttp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using aiohttp, load one of these modules using a module load command like:

          module load aiohttp/3.8.5-GCCcore-12.3.0\n
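
          A minimal, purely illustrative check that the Python package is importable after loading the module:

          # print the installed aiohttp version as a quick import test\nmodule load aiohttp/3.8.5-GCCcore-12.3.0\npython -c 'import aiohttp; print(aiohttp.__version__)'\n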

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty aiohttp/3.8.5-GCCcore-12.3.0 x x x x - x aiohttp/3.8.5-GCCcore-12.2.0 x x x x x x aiohttp/3.8.3-GCCcore-11.3.0 x x x x x x aiohttp/3.8.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/alevin-fry/", "title": "alevin-fry", "text": ""}, {"location": "available_software/detail/alevin-fry/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alevin-fry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alevin-fry, load one of these modules using a module load command like:

          module load alevin-fry/0.4.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alevin-fry/0.4.3-GCCcore-11.2.0 - x - - - -"}, {"location": "available_software/detail/alleleCount/", "title": "alleleCount", "text": ""}, {"location": "available_software/detail/alleleCount/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alleleCount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alleleCount, load one of these modules using a module load command like:

          module load alleleCount/4.3.0-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alleleCount/4.3.0-GCC-12.2.0 x x x x x x alleleCount/4.2.1-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/alleleIntegrator/", "title": "alleleIntegrator", "text": ""}, {"location": "available_software/detail/alleleIntegrator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alleleIntegrator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alleleIntegrator, load one of these modules using a module load command like:

          module load alleleIntegrator/0.8.8-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alleleIntegrator/0.8.8-foss-2022b-R-4.2.2 x x x x x x alleleIntegrator/0.8.8-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/alsa-lib/", "title": "alsa-lib", "text": ""}, {"location": "available_software/detail/alsa-lib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alsa-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alsa-lib, load one of these modules using a module load command like:

          module load alsa-lib/1.2.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alsa-lib/1.2.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/anadama2/", "title": "anadama2", "text": ""}, {"location": "available_software/detail/anadama2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which anadama2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using anadama2, load one of these modules using a module load command like:

          module load anadama2/0.10.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty anadama2/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/angsd/", "title": "angsd", "text": ""}, {"location": "available_software/detail/angsd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which angsd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using angsd, load one of these modules using a module load command like:

          module load angsd/0.940-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty angsd/0.940-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/anndata/", "title": "anndata", "text": ""}, {"location": "available_software/detail/anndata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which anndata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using anndata, load one of these modules using a module load command like:

          module load anndata/0.10.5.post1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty anndata/0.10.5.post1-foss-2023a x x x x x x anndata/0.9.2-foss-2021a x x x x x x anndata/0.8.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ant/", "title": "ant", "text": ""}, {"location": "available_software/detail/ant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ant, load one of these modules using a module load command like:

          module load ant/1.10.12-Java-17\n
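
          As an illustrative follow-up (build.xml and the compile target are placeholders from your own project), you could then run:

          # show the Ant version, then run a placeholder target from a placeholder build file\nmodule load ant/1.10.12-Java-17\nant -version\nant -f build.xml compile\n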

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ant/1.10.12-Java-17 x x x x x x ant/1.10.12-Java-11 x x x x x x ant/1.10.11-Java-11 x x x - x x ant/1.10.9-Java-11 x x x x x x ant/1.10.8-Java-11 - x x - x x ant/1.10.7-Java-11 - x x - x x ant/1.10.6-Java-1.8 - x x - x x"}, {"location": "available_software/detail/antiSMASH/", "title": "antiSMASH", "text": ""}, {"location": "available_software/detail/antiSMASH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which antiSMASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using antiSMASH, load one of these modules using a module load command like:

          module load antiSMASH/6.0.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty antiSMASH/6.0.1-foss-2020b - x x x x x antiSMASH/5.2.0-foss-2020b - x x x x x antiSMASH/5.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/anvio/", "title": "anvio", "text": ""}, {"location": "available_software/detail/anvio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which anvio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using anvio, load one of these modules using a module load command like:

          module load anvio/8-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty anvio/8-foss-2022b x x x x x x anvio/6.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/any2fasta/", "title": "any2fasta", "text": ""}, {"location": "available_software/detail/any2fasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which any2fasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using any2fasta, load one of these modules using a module load command like:

          module load any2fasta/0.4.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty any2fasta/0.4.2-GCCcore-10.2.0 - x x - x x any2fasta/0.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/apex/", "title": "apex", "text": ""}, {"location": "available_software/detail/apex/#available-modules", "title": "Available modules", "text": "

          The overview below shows which apex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using apex, load one of these modules using a module load command like:

          module load apex/20210420-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty apex/20210420-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/archspec/", "title": "archspec", "text": ""}, {"location": "available_software/detail/archspec/#available-modules", "title": "Available modules", "text": "

          The overview below shows which archspec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using archspec, load one of these modules using a module load command like:

          module load archspec/0.1.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty archspec/0.1.3-GCCcore-11.2.0 x x x - x x archspec/0.1.2-GCCcore-10.3.0 - x x - x x archspec/0.1.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x archspec/0.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/argtable/", "title": "argtable", "text": ""}, {"location": "available_software/detail/argtable/#available-modules", "title": "Available modules", "text": "

          The overview below shows which argtable installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using argtable, load one of these modules using a module load command like:

          module load argtable/2.13-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty argtable/2.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/aria2/", "title": "aria2", "text": ""}, {"location": "available_software/detail/aria2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which aria2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using aria2, load one of these modules using a module load command like:

          module load aria2/1.35.0-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty aria2/1.35.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/arpack-ng/", "title": "arpack-ng", "text": ""}, {"location": "available_software/detail/arpack-ng/#available-modules", "title": "Available modules", "text": "

          The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using arpack-ng, load one of these modules using a module load command like:

          module load arpack-ng/3.9.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty arpack-ng/3.9.0-foss-2023a x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x arpack-ng/3.8.0-foss-2022a x x x x x x arpack-ng/3.8.0-foss-2021b x x x x x x arpack-ng/3.8.0-foss-2021a x x x x x x arpack-ng/3.7.0-intel-2020a - x x - x x arpack-ng/3.7.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/arrow-R/", "title": "arrow-R", "text": ""}, {"location": "available_software/detail/arrow-R/#available-modules", "title": "Available modules", "text": "

          The overview below shows which arrow-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using arrow-R, load one of these modules using a module load command like:

          module load arrow-R/14.0.0.2-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty arrow-R/14.0.0.2-foss-2023a-R-4.3.2 x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x arrow-R/8.0.0-foss-2022a-R-4.2.1 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.2.0 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.1.2 x x x x x x arrow-R/6.0.0.2-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/arrow/", "title": "arrow", "text": ""}, {"location": "available_software/detail/arrow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which arrow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using arrow, load one of these modules using a module load command like:

          module load arrow/0.17.1-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty arrow/0.17.1-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-atk/", "title": "at-spi2-atk", "text": ""}, {"location": "available_software/detail/at-spi2-atk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using at-spi2-atk, load one of these modules using a module load command like:

          module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-10.3.0 x x x - x x at-spi2-atk/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-atk/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-core/", "title": "at-spi2-core", "text": ""}, {"location": "available_software/detail/at-spi2-core/#available-modules", "title": "Available modules", "text": "

          The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using at-spi2-core, load one of these modules using a module load command like:

          module load at-spi2-core/2.49.90-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty at-spi2-core/2.49.90-GCCcore-12.3.0 x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x at-spi2-core/2.44.1-GCCcore-11.3.0 x x x x x x at-spi2-core/2.40.3-GCCcore-11.2.0 x x x x x x at-spi2-core/2.40.2-GCCcore-10.3.0 x x x - x x at-spi2-core/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-core/2.34.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/atools/", "title": "atools", "text": ""}, {"location": "available_software/detail/atools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which atools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using atools, load one of these modules using a module load command like:

          module load atools/1.5.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty atools/1.5.1-GCCcore-11.2.0 x x x - x x atools/1.4.6-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/attr/", "title": "attr", "text": ""}, {"location": "available_software/detail/attr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which attr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using attr, load one of these modules using a module load command like:

          module load attr/2.5.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty attr/2.5.1-GCCcore-11.3.0 x x x x x x attr/2.5.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/attrdict/", "title": "attrdict", "text": ""}, {"location": "available_software/detail/attrdict/#available-modules", "title": "Available modules", "text": "

          The overview below shows which attrdict installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using attrdict, load one of these modules using a module load command like:

          module load attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/attrdict3/", "title": "attrdict3", "text": ""}, {"location": "available_software/detail/attrdict3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which attrdict3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using attrdict3, load one of these modules using a module load command like:

          module load attrdict3/2.0.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty attrdict3/2.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/augur/", "title": "augur", "text": ""}, {"location": "available_software/detail/augur/#available-modules", "title": "Available modules", "text": "

          The overview below shows which augur installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using augur, load one of these modules using a module load command like:

          module load augur/7.0.2-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty augur/7.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/autopep8/", "title": "autopep8", "text": ""}, {"location": "available_software/detail/autopep8/#available-modules", "title": "Available modules", "text": "

          The overview below shows which autopep8 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using autopep8, load one of these modules using a module load command like:

          module load autopep8/2.0.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty autopep8/2.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/awscli/", "title": "awscli", "text": ""}, {"location": "available_software/detail/awscli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which awscli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using awscli, load one of these modules using a module load command like:

          module load awscli/2.11.21-GCCcore-11.3.0\n
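
          As a hedged example (my-bucket is a placeholder, and listing it assumes AWS credentials have been configured beforehand), basic usage could look like:

          # check the CLI version, then list a placeholder S3 bucket (requires configured credentials)\nmodule load awscli/2.11.21-GCCcore-11.3.0\naws --version\naws s3 ls s3://my-bucket/\n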

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty awscli/2.11.21-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/babl/", "title": "babl", "text": ""}, {"location": "available_software/detail/babl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which babl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using babl, load one of these modules using a module load command like:

          module load babl/0.1.86-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty babl/0.1.86-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/bam-readcount/", "title": "bam-readcount", "text": ""}, {"location": "available_software/detail/bam-readcount/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bam-readcount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bam-readcount, load one of these modules using a module load command like:

          module load bam-readcount/0.8.0-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bam-readcount/0.8.0-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/bamFilters/", "title": "bamFilters", "text": ""}, {"location": "available_software/detail/bamFilters/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bamFilters installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bamFilters, load one of these modules using a module load command like:

          module load bamFilters/2022-06-30-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bamFilters/2022-06-30-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/barrnap/", "title": "barrnap", "text": ""}, {"location": "available_software/detail/barrnap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which barrnap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using barrnap, load one of these modules using a module load command like:

          module load barrnap/0.9-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty barrnap/0.9-gompi-2021b x x x - x x barrnap/0.9-gompi-2020b - x x x x x"}, {"location": "available_software/detail/basemap/", "title": "basemap", "text": ""}, {"location": "available_software/detail/basemap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which basemap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using basemap, load one of these modules using a module load command like:

          module load basemap/1.3.9-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty basemap/1.3.9-foss-2023a x x x x x x basemap/1.2.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/bcbio-gff/", "title": "bcbio-gff", "text": ""}, {"location": "available_software/detail/bcbio-gff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcbio-gff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bcbio-gff, load one of these modules using a module load command like:

          module load bcbio-gff/0.7.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcbio-gff/0.7.0-foss-2022b x x x x x x bcbio-gff/0.7.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/bcgTree/", "title": "bcgTree", "text": ""}, {"location": "available_software/detail/bcgTree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcgTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bcgTree, load one of these modules using a module load command like:

          module load bcgTree/1.2.0-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcgTree/1.2.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/bcl-convert/", "title": "bcl-convert", "text": ""}, {"location": "available_software/detail/bcl-convert/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcl-convert installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bcl-convert, load one of these modules using a module load command like:

          module load bcl-convert/4.0.3-2el7.x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcl-convert/4.0.3-2el7.x86_64 x x x - x x"}, {"location": "available_software/detail/bcl2fastq2/", "title": "bcl2fastq2", "text": ""}, {"location": "available_software/detail/bcl2fastq2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcl2fastq2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bcl2fastq2, load one of these modules using a module load command like:

          module load bcl2fastq2/2.20.0-GCC-11.2.0\n
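
          As a sketch of a typical demultiplexing command (the run folder path, output directory and sample sheet below are all placeholders):

          # placeholders: run folder path, output directory and SampleSheet.csv\nmodule load bcl2fastq2/2.20.0-GCC-11.2.0\nbcl2fastq --runfolder-dir /path/to/runfolder --output-dir fastq_out --sample-sheet SampleSheet.csv\n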

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcl2fastq2/2.20.0-GCC-11.2.0 x x x - x x bcl2fastq2/2.20.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/beagle-lib/", "title": "beagle-lib", "text": ""}, {"location": "available_software/detail/beagle-lib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which beagle-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using beagle-lib, load one of these modules using a module load command like:

          module load beagle-lib/4.0.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty beagle-lib/4.0.0-GCC-11.3.0 x x x x x x beagle-lib/3.1.2-gcccuda-2019b x - - - x - beagle-lib/3.1.2-GCC-11.3.0 x x x - x x beagle-lib/3.1.2-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/binutils/", "title": "binutils", "text": ""}, {"location": "available_software/detail/binutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which binutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using binutils, load one of these modules using a module load command like:

          module load binutils/2.40-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty binutils/2.40-GCCcore-13.2.0 x x x x x x binutils/2.40-GCCcore-12.3.0 x x x x x x binutils/2.40 x x x x x x binutils/2.39-GCCcore-12.2.0 x x x x x x binutils/2.39 x x x x x x binutils/2.38-GCCcore-11.3.0 x x x x x x binutils/2.38 x x x x x x binutils/2.37-GCCcore-11.2.0 x x x x x x binutils/2.37 x x x x x x binutils/2.36.1-GCCcore-10.3.0 x x x x x x binutils/2.36.1 x x x x x x binutils/2.35-GCCcore-10.2.0 x x x x x x binutils/2.35 x x x x x x binutils/2.34-GCCcore-9.3.0 x x x x x x binutils/2.34 x x x x x x binutils/2.32-GCCcore-8.3.0 x x x x x x binutils/2.32 x x x x x x binutils/2.31.1-GCCcore-8.2.0 - x - - - - binutils/2.31.1 - x - - - x binutils/2.30 - - - - - x binutils/2.28 x x x x x x"}, {"location": "available_software/detail/biobakery-workflows/", "title": "biobakery-workflows", "text": ""}, {"location": "available_software/detail/biobakery-workflows/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biobakery-workflows installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using biobakery-workflows, load one of these modules using a module load command like:

          module load biobakery-workflows/3.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biobakery-workflows/3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/biobambam2/", "title": "biobambam2", "text": ""}, {"location": "available_software/detail/biobambam2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biobambam2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using biobambam2, load one of these modules using a module load command like:

          module load biobambam2/2.0.185-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biobambam2/2.0.185-GCC-12.3.0 x x x x x x biobambam2/2.0.87-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/biogeme/", "title": "biogeme", "text": ""}, {"location": "available_software/detail/biogeme/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biogeme installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using biogeme, load one of these modules using a module load command like:

          module load biogeme/3.2.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biogeme/3.2.10-foss-2022a x x x - x x biogeme/3.2.6-foss-2022a x x x - x x"}, {"location": "available_software/detail/biom-format/", "title": "biom-format", "text": ""}, {"location": "available_software/detail/biom-format/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biom-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using biom-format, load one of these modules using a module load command like:

          module load biom-format/2.1.15-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biom-format/2.1.15-foss-2022b x x x x x x biom-format/2.1.14-foss-2022a x x x x x x biom-format/2.1.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/bmtagger/", "title": "bmtagger", "text": ""}, {"location": "available_software/detail/bmtagger/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bmtagger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bmtagger, load one of these modules using a module load command like:

          module load bmtagger/3.101-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bmtagger/3.101-gompi-2020b - x x x x x"}, {"location": "available_software/detail/bokeh/", "title": "bokeh", "text": ""}, {"location": "available_software/detail/bokeh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bokeh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bokeh, load one of these modules using a module load command like:

          module load bokeh/3.2.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bokeh/3.2.2-foss-2023a x x x x x x bokeh/2.4.3-foss-2022a x x x x x x bokeh/2.4.2-foss-2021b x x x x x x bokeh/2.4.1-foss-2021a x x x - x x bokeh/2.2.3-intel-2020b - x x - x x bokeh/2.2.3-fosscuda-2020b x - - - x - bokeh/2.2.3-foss-2020b - x x x x x bokeh/2.0.2-intel-2020a-Python-3.8.2 - x x - x x bokeh/2.0.2-foss-2020a-Python-3.8.2 - x x - x x bokeh/1.4.0-intel-2019b-Python-3.7.4 - x x - x x bokeh/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/boto3/", "title": "boto3", "text": ""}, {"location": "available_software/detail/boto3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which boto3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using boto3, load one of these modules using a module load command like:

          module load boto3/1.34.10-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty boto3/1.34.10-GCCcore-12.2.0 x x x x x x boto3/1.26.163-GCCcore-12.2.0 x x x x x x boto3/1.20.13-GCCcore-11.2.0 x x x - x x boto3/1.20.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/bpp/", "title": "bpp", "text": ""}, {"location": "available_software/detail/bpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bpp, load one of these modules using a module load command like:

          module load bpp/4.4.0-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bpp/4.4.0-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/btllib/", "title": "btllib", "text": ""}, {"location": "available_software/detail/btllib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which btllib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using btllib, load one of these modules using a module load command like:

          module load btllib/1.7.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty btllib/1.7.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/build/", "title": "build", "text": ""}, {"location": "available_software/detail/build/#available-modules", "title": "Available modules", "text": "

          The overview below shows which build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using build, load one of these modules using a module load command like:

          module load build/0.10.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty build/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/buildenv/", "title": "buildenv", "text": ""}, {"location": "available_software/detail/buildenv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which buildenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using buildenv, load one of these modules using a module load command like:

          module load buildenv/default-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty buildenv/default-intel-2019b - x x - x x buildenv/default-foss-2019b - x x - x x"}, {"location": "available_software/detail/buildingspy/", "title": "buildingspy", "text": ""}, {"location": "available_software/detail/buildingspy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which buildingspy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using buildingspy, load one of these modules using a module load command like:

          module load buildingspy/4.0.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty buildingspy/4.0.0-foss-2022a x x x - x x"}, {"location": "available_software/detail/bwa-meth/", "title": "bwa-meth", "text": ""}, {"location": "available_software/detail/bwa-meth/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bwa-meth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bwa-meth, load one of these modules using a module load command like:

          module load bwa-meth/0.2.6-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bwa-meth/0.2.6-GCC-11.3.0 x x x x x x bwa-meth/0.2.2-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/bwidget/", "title": "bwidget", "text": ""}, {"location": "available_software/detail/bwidget/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bwidget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bwidget, load one of these modules using a module load command like:

          module load bwidget/1.9.15-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bwidget/1.9.15-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/bx-python/", "title": "bx-python", "text": ""}, {"location": "available_software/detail/bx-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bx-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bx-python, load one of these modules using a module load command like:

          module load bx-python/0.10.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bx-python/0.10.0-foss-2023a x x x x x x bx-python/0.9.0-foss-2022a x x x x x x bx-python/0.8.13-foss-2021b x x x - x x bx-python/0.8.9-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/bzip2/", "title": "bzip2", "text": ""}, {"location": "available_software/detail/bzip2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bzip2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using bzip2, load one of these modules using a module load command like:

          module load bzip2/1.0.8-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bzip2/1.0.8-GCCcore-13.2.0 x x x x x x bzip2/1.0.8-GCCcore-12.3.0 x x x x x x bzip2/1.0.8-GCCcore-12.2.0 x x x x x x bzip2/1.0.8-GCCcore-11.3.0 x x x x x x bzip2/1.0.8-GCCcore-11.2.0 x x x x x x bzip2/1.0.8-GCCcore-10.3.0 x x x x x x bzip2/1.0.8-GCCcore-10.2.0 x x x x x x bzip2/1.0.8-GCCcore-9.3.0 x x x x x x bzip2/1.0.8-GCCcore-8.3.0 x x x x x x bzip2/1.0.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/c-ares/", "title": "c-ares", "text": ""}, {"location": "available_software/detail/c-ares/#available-modules", "title": "Available modules", "text": "

          The overview below shows which c-ares installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using c-ares, load one of these modules using a module load command like:

          module load c-ares/1.18.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty c-ares/1.18.1-GCCcore-11.2.0 x x x x x x c-ares/1.17.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/cURL/", "title": "cURL", "text": ""}, {"location": "available_software/detail/cURL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cURL, load one of these modules using a module load command like:

          module load cURL/8.3.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cURL/8.3.0-GCCcore-13.2.0 x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x cURL/7.83.0-GCCcore-11.3.0 x x x x x x cURL/7.78.0-GCCcore-11.2.0 x x x x x x cURL/7.76.0-GCCcore-10.3.0 x x x x x x cURL/7.72.0-GCCcore-10.2.0 x x x x x x cURL/7.69.1-GCCcore-9.3.0 x x x x x x cURL/7.66.0-GCCcore-8.3.0 x x x x x x cURL/7.63.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/cairo/", "title": "cairo", "text": ""}, {"location": "available_software/detail/cairo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cairo, load one of these modules using a module load command like:

          module load cairo/1.17.8-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cairo/1.17.8-GCCcore-12.3.0 x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x cairo/1.17.4-GCCcore-11.3.0 x x x x x x cairo/1.16.0-GCCcore-11.2.0 x x x x x x cairo/1.16.0-GCCcore-10.3.0 x x x x x x cairo/1.16.0-GCCcore-10.2.0 x x x x x x cairo/1.16.0-GCCcore-9.3.0 x x x x x x cairo/1.16.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/canu/", "title": "canu", "text": ""}, {"location": "available_software/detail/canu/#available-modules", "title": "Available modules", "text": "

          The overview below shows which canu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using canu, load one of these modules using a module load command like:

          module load canu/2.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty canu/2.2-GCCcore-11.2.0 x x x - x x canu/2.2-GCCcore-10.3.0 - x x - x x canu/2.1.1-GCCcore-10.2.0 - x x - x x canu/1.9-GCCcore-8.3.0-Java-11 - - x - x -"}, {"location": "available_software/detail/carputils/", "title": "carputils", "text": ""}, {"location": "available_software/detail/carputils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which carputils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using carputils, load one of these modules using a module load command like:

          module load carputils/20210513-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty carputils/20210513-foss-2020b - x x x x x carputils/20200915-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ccache/", "title": "ccache", "text": ""}, {"location": "available_software/detail/ccache/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ccache installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ccache, load one of these modules using a module load command like:

          module load ccache/4.6.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ccache/4.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/cctbx-base/", "title": "cctbx-base", "text": ""}, {"location": "available_software/detail/cctbx-base/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cctbx-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cctbx-base, load one of these modules using a module load command like:

          module load cctbx-base/2023.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cctbx-base/2023.5-foss-2022a - - x - x - cctbx-base/2020.8-fosscuda-2020b x - - - x - cctbx-base/2020.8-foss-2020b x x x x x x"}, {"location": "available_software/detail/cdbfasta/", "title": "cdbfasta", "text": ""}, {"location": "available_software/detail/cdbfasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cdbfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cdbfasta, load one of these modules using a module load command like:

          module load cdbfasta/0.99-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cdbfasta/0.99-iccifort-2019.5.281 - x x - x - cdbfasta/0.99-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/cdo-bindings/", "title": "cdo-bindings", "text": ""}, {"location": "available_software/detail/cdo-bindings/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cdo-bindings installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cdo-bindings, load one of these modules using a module load command like:

          module load cdo-bindings/1.5.7-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cdo-bindings/1.5.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/cdsapi/", "title": "cdsapi", "text": ""}, {"location": "available_software/detail/cdsapi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cdsapi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cdsapi, load one of these modules using a module load command like:

          module load cdsapi/0.5.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cdsapi/0.5.1-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/cell2location/", "title": "cell2location", "text": ""}, {"location": "available_software/detail/cell2location/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cell2location installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cell2location, load one of these modules using a module load command like:

          module load cell2location/0.05-alpha-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cell2location/0.05-alpha-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/cffi/", "title": "cffi", "text": ""}, {"location": "available_software/detail/cffi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cffi, load one of these modules using a module load command like:

          module load cffi/1.15.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cffi/1.15.1-GCCcore-13.2.0 x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x cffi/1.15.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/chemprop/", "title": "chemprop", "text": ""}, {"location": "available_software/detail/chemprop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which chemprop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using chemprop, load one of these modules using a module load command like:

          module load chemprop/1.5.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty chemprop/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - chemprop/1.5.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/chewBBACA/", "title": "chewBBACA", "text": ""}, {"location": "available_software/detail/chewBBACA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which chewBBACA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using chewBBACA, load one of these modules using a module load command like:

          module load chewBBACA/2.5.5-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty chewBBACA/2.5.5-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/cicero/", "title": "cicero", "text": ""}, {"location": "available_software/detail/cicero/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cicero installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cicero, load one of these modules using a module load command like:

          module load cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3\n
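
          Since the -R-4.2.1 suffix indicates an R package installation, a minimal sketch of a quick sanity check (assuming, as is typical for such modules, that the package is made available to the bundled R):

          module load cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3\n# confirm the R package can be attached and print its version\nRscript -e 'library(cicero); packageVersion("cicero")'\n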

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3 x x x x x x cicero/1.3.4.11-foss-2020b-R-4.0.3-Monocle3 - x x x x x"}, {"location": "available_software/detail/cimfomfa/", "title": "cimfomfa", "text": ""}, {"location": "available_software/detail/cimfomfa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cimfomfa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cimfomfa, load one of these modules using a module load command like:

          module load cimfomfa/22.273-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cimfomfa/22.273-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/code-cli/", "title": "code-cli", "text": ""}, {"location": "available_software/detail/code-cli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which code-cli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using code-cli, load one of these modules using a module load command like:

          module load code-cli/1.85.1-x64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty code-cli/1.85.1-x64 x x x x x x"}, {"location": "available_software/detail/code-server/", "title": "code-server", "text": ""}, {"location": "available_software/detail/code-server/#available-modules", "title": "Available modules", "text": "

          The overview below shows which code-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using code-server, load one of these modules using a module load command like:

          module load code-server/4.9.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty code-server/4.9.1 x x x x x x"}, {"location": "available_software/detail/colossalai/", "title": "colossalai", "text": ""}, {"location": "available_software/detail/colossalai/#available-modules", "title": "Available modules", "text": "

          The overview below shows which colossalai installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using colossalai, load one of these modules using a module load command like:

          module load colossalai/0.1.8-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty colossalai/0.1.8-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/conan/", "title": "conan", "text": ""}, {"location": "available_software/detail/conan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which conan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using conan, load one of these modules using a module load command like:

          module load conan/1.60.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty conan/1.60.2-GCCcore-12.3.0 x x x x x x conan/1.58.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/configurable-http-proxy/", "title": "configurable-http-proxy", "text": ""}, {"location": "available_software/detail/configurable-http-proxy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which configurable-http-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using configurable-http-proxy, load one of these modules using a module load command like:

          module load configurable-http-proxy/4.5.5-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty configurable-http-proxy/4.5.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/cooler/", "title": "cooler", "text": ""}, {"location": "available_software/detail/cooler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cooler, load one of these modules using a module load command like:

          module load cooler/0.9.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cooler/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/coverage/", "title": "coverage", "text": ""}, {"location": "available_software/detail/coverage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which coverage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using coverage, load one of these modules using a module load command like:

          module load coverage/7.2.7-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty coverage/7.2.7-GCCcore-11.3.0 x x x x x x coverage/5.5-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/cppy/", "title": "cppy", "text": ""}, {"location": "available_software/detail/cppy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cppy, load one of these modules using a module load command like:

          module load cppy/1.2.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cppy/1.2.1-GCCcore-12.3.0 x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x cppy/1.2.1-GCCcore-11.3.0 x x x x x x cppy/1.1.0-GCCcore-11.2.0 x x x x x x cppy/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/cpu_features/", "title": "cpu_features", "text": ""}, {"location": "available_software/detail/cpu_features/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cpu_features installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cpu_features, load one of these modules using a module load command like:

          module load cpu_features/0.6.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cpu_features/0.6.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/cryoDRGN/", "title": "cryoDRGN", "text": ""}, {"location": "available_software/detail/cryoDRGN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cryoDRGN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cryoDRGN, load one of these modules using a module load command like:

          module load cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1 x - - - x - cryoDRGN/0.3.5-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/cryptography/", "title": "cryptography", "text": ""}, {"location": "available_software/detail/cryptography/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cryptography installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cryptography, load one of these modules using a module load command like:

          module load cryptography/41.0.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cryptography/41.0.5-GCCcore-13.2.0 x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/cuDNN/", "title": "cuDNN", "text": ""}, {"location": "available_software/detail/cuDNN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cuDNN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cuDNN, load one of these modules using a module load command like:

          module load cuDNN/8.9.2.26-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cuDNN/8.9.2.26-CUDA-12.1.1 x - x - x - cuDNN/8.4.1.50-CUDA-11.7.0 x - x - x - cuDNN/8.2.2.26-CUDA-11.4.1 x - - - x - cuDNN/8.2.1.32-CUDA-11.3.1 x x x - x x cuDNN/8.0.4.30-CUDA-11.1.1 x - - - x x"}, {"location": "available_software/detail/cuTENSOR/", "title": "cuTENSOR", "text": ""}, {"location": "available_software/detail/cuTENSOR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cuTENSOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cuTENSOR, load one of these modules using a module load command like:

          module load cuTENSOR/1.2.2.5-CUDA-11.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cuTENSOR/1.2.2.5-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/cutadapt/", "title": "cutadapt", "text": ""}, {"location": "available_software/detail/cutadapt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cutadapt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cutadapt, load one of these modules using a module load command like:

          module load cutadapt/4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cutadapt/4.2-GCCcore-11.3.0 x x x x x x cutadapt/3.5-GCCcore-11.2.0 x x x - x x cutadapt/3.4-GCCcore-10.2.0 - x x x x x cutadapt/2.10-GCCcore-9.3.0-Python-3.8.2 - x x - x x cutadapt/2.7-GCCcore-8.3.0-Python-3.7.4 - x x - x x cutadapt/1.18-GCCcore-8.3.0-Python-2.7.16 - x x - x x cutadapt/1.18-GCCcore-8.3.0 - x x - x x cutadapt/1.18-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/cuteSV/", "title": "cuteSV", "text": ""}, {"location": "available_software/detail/cuteSV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cuteSV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cuteSV, load one of these modules using a module load command like:

          module load cuteSV/2.0.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cuteSV/2.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/cython-blis/", "title": "cython-blis", "text": ""}, {"location": "available_software/detail/cython-blis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cython-blis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using cython-blis, load one of these modules using a module load command like:

          module load cython-blis/0.9.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cython-blis/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dask/", "title": "dask", "text": ""}, {"location": "available_software/detail/dask/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dask, load one of these modules using a module load command like:

          module load dask/2023.12.1-foss-2023a\n
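
          A minimal sketch of a quick sanity check after loading (assuming, as is typical for these installations, that the module provides the package for its bundled Python):

          module load dask/2023.12.1-foss-2023a\n# confirm the package is importable and print its version\npython -c "import dask; print(dask.__version__)"\n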

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dask/2023.12.1-foss-2023a x x x x x x dask/2022.10.0-foss-2022a x x x x x x dask/2022.1.0-foss-2021b x x x x x x dask/2021.9.1-foss-2021a x x x - x x dask/2021.2.0-intel-2020b - x x - x x dask/2021.2.0-fosscuda-2020b x - - - x - dask/2021.2.0-foss-2020b - x x x x x dask/2.18.1-intel-2020a-Python-3.8.2 - x x - x x dask/2.18.1-foss-2020a-Python-3.8.2 - x x - x x dask/2.8.0-intel-2019b-Python-3.7.4 - x x - x x dask/2.8.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dbus-glib/", "title": "dbus-glib", "text": ""}, {"location": "available_software/detail/dbus-glib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dbus-glib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dbus-glib, load one of these modules using a module load command like:

          module load dbus-glib/0.112-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dbus-glib/0.112-GCCcore-11.2.0 x x x x x x dbus-glib/0.112-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/dclone/", "title": "dclone", "text": ""}, {"location": "available_software/detail/dclone/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dclone, load one of these modules using a module load command like:

          module load dclone/2.3-0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dclone/2.3-0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/deal.II/", "title": "deal.II", "text": ""}, {"location": "available_software/detail/deal.II/#available-modules", "title": "Available modules", "text": "

          The overview below shows which deal.II installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using deal.II, load one of these modules using a module load command like:

          module load deal.II/9.3.3-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty deal.II/9.3.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/decona/", "title": "decona", "text": ""}, {"location": "available_software/detail/decona/#available-modules", "title": "Available modules", "text": "

          The overview below shows which decona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using decona, load one of these modules using a module load command like:

          module load decona/0.1.2-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty decona/0.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepTools/", "title": "deepTools", "text": ""}, {"location": "available_software/detail/deepTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which deepTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using deepTools, load one of these modules using a module load command like:

          module load deepTools/3.5.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty deepTools/3.5.1-foss-2021b x x x - x x deepTools/3.3.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepdiff/", "title": "deepdiff", "text": ""}, {"location": "available_software/detail/deepdiff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which deepdiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using deepdiff, load one of these modules using a module load command like:

          module load deepdiff/6.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty deepdiff/6.7.1-GCCcore-12.3.0 x x x x x x deepdiff/6.7.1-GCCcore-12.2.0 x x x x x x deepdiff/5.8.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/detectron2/", "title": "detectron2", "text": ""}, {"location": "available_software/detail/detectron2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which detectron2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using detectron2, load one of these modules using a module load command like:

          module load detectron2/0.6-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty detectron2/0.6-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/devbio-napari/", "title": "devbio-napari", "text": ""}, {"location": "available_software/detail/devbio-napari/#available-modules", "title": "Available modules", "text": "

          The overview below shows which devbio-napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using devbio-napari, load one of these modules using a module load command like:

          module load devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0 x - - - x - devbio-napari/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dicom2nifti/", "title": "dicom2nifti", "text": ""}, {"location": "available_software/detail/dicom2nifti/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dicom2nifti installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dicom2nifti, load one of these modules using a module load command like:

          module load dicom2nifti/2.3.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dicom2nifti/2.3.0-fosscuda-2020b x - - - x - dicom2nifti/2.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/dijitso/", "title": "dijitso", "text": ""}, {"location": "available_software/detail/dijitso/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dijitso installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dijitso, load one of these modules using a module load command like:

          module load dijitso/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dijitso/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dill/", "title": "dill", "text": ""}, {"location": "available_software/detail/dill/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dill, load one of these modules using a module load command like:

          module load dill/0.3.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dill/0.3.7-GCCcore-12.3.0 x x x x x x dill/0.3.7-GCCcore-12.2.0 x x x x x x dill/0.3.6-GCCcore-11.3.0 x x x x x x dill/0.3.4-GCCcore-11.2.0 x x x x x x dill/0.3.4-GCCcore-10.3.0 x x x - x x dill/0.3.3-GCCcore-10.2.0 - x x x x x dill/0.3.3-GCCcore-9.3.0 - x x - - x"}, {"location": "available_software/detail/dlib/", "title": "dlib", "text": ""}, {"location": "available_software/detail/dlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dlib, load one of these modules using a module load command like:

          module load dlib/19.22-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dlib/19.22-foss-2021a-CUDA-11.3.1 - - - - x - dlib/19.22-foss-2021a - x x - x x"}, {"location": "available_software/detail/dm-haiku/", "title": "dm-haiku", "text": ""}, {"location": "available_software/detail/dm-haiku/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dm-haiku installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dm-haiku, load one of these modules using a module load command like:

          module load dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/dm-tree/", "title": "dm-tree", "text": ""}, {"location": "available_software/detail/dm-tree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dm-tree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dm-tree, load one of these modules using a module load command like:

          module load dm-tree/0.1.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dm-tree/0.1.8-GCCcore-11.3.0 x x x x x x dm-tree/0.1.6-GCCcore-10.3.0 x x x x x x dm-tree/0.1.5-GCCcore-10.2.0 x x x x x x dm-tree/0.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/dorado/", "title": "dorado", "text": ""}, {"location": "available_software/detail/dorado/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dorado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dorado, load one of these modules using a module load command like:

          module load dorado/0.5.1-foss-2022a-CUDA-11.7.0\n
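
          The CUDA suffix means this build needs a node with an NVIDIA GPU. A minimal job-script sketch, assuming a Slurm-based scheduler and its --gpus-per-node option (both assumptions; follow the site's own batch-job documentation for the exact syntax):

          #!/bin/bash\n#SBATCH --nodes=1\n#SBATCH --gpus-per-node=1\n# load the CUDA-enabled dorado build and check that the binary runs\nmodule load dorado/0.5.1-foss-2022a-CUDA-11.7.0\ndorado --version\n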

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dorado/0.5.1-foss-2022a-CUDA-11.7.0 x - x - x - dorado/0.3.1-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.3.0-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.1.1-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/double-conversion/", "title": "double-conversion", "text": ""}, {"location": "available_software/detail/double-conversion/#available-modules", "title": "Available modules", "text": "

          The overview below shows which double-conversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using double-conversion, load one of these modules using a module load command like:

          module load double-conversion/3.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x double-conversion/3.2.0-GCCcore-11.3.0 x x x x x x double-conversion/3.1.5-GCCcore-11.2.0 x x x x x x double-conversion/3.1.5-GCCcore-10.3.0 x x x x x x double-conversion/3.1.5-GCCcore-10.2.0 x x x x x x double-conversion/3.1.5-GCCcore-9.3.0 - x x - x x double-conversion/3.1.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/drmaa-python/", "title": "drmaa-python", "text": ""}, {"location": "available_software/detail/drmaa-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which drmaa-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using drmaa-python, load one of these modules using a module load command like:

          module load drmaa-python/0.7.9-GCCcore-12.2.0-slurm\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty drmaa-python/0.7.9-GCCcore-12.2.0-slurm x x x x x x"}, {"location": "available_software/detail/dtcwt/", "title": "dtcwt", "text": ""}, {"location": "available_software/detail/dtcwt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dtcwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dtcwt, load one of these modules using a module load command like:

          module load dtcwt/0.12.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dtcwt/0.12.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/duplex-tools/", "title": "duplex-tools", "text": ""}, {"location": "available_software/detail/duplex-tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which duplex-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using duplex-tools, load one of these modules using a module load command like:

          module load duplex-tools/0.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty duplex-tools/0.3.3-foss-2022a x x x x x x duplex-tools/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dynesty/", "title": "dynesty", "text": ""}, {"location": "available_software/detail/dynesty/#available-modules", "title": "Available modules", "text": "

          The overview below shows which dynesty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using dynesty, load one of these modules using a module load command like:

          module load dynesty/2.1.3-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dynesty/2.1.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/eSpeak-NG/", "title": "eSpeak-NG", "text": ""}, {"location": "available_software/detail/eSpeak-NG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which eSpeak-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using eSpeak-NG, load one of these modules using a module load command like:

          module load eSpeak-NG/1.50-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty eSpeak-NG/1.50-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ebGSEA/", "title": "ebGSEA", "text": ""}, {"location": "available_software/detail/ebGSEA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ebGSEA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ebGSEA, load one of these modules using a module load command like:

          module load ebGSEA/0.1.0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ebGSEA/0.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ecCodes/", "title": "ecCodes", "text": ""}, {"location": "available_software/detail/ecCodes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ecCodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ecCodes, load one of these modules using a module load command like:

          module load ecCodes/2.24.2-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ecCodes/2.24.2-gompi-2021b x x x x x x ecCodes/2.22.1-gompi-2021a x x x - x x ecCodes/2.15.0-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/edlib/", "title": "edlib", "text": ""}, {"location": "available_software/detail/edlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which edlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using edlib, load one of these modules using a module load command like:

          module load edlib/1.3.9-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty edlib/1.3.9-GCC-11.3.0 x x x x x x edlib/1.3.9-GCC-11.2.0 x x x - x x edlib/1.3.9-GCC-10.3.0 x x x - x x edlib/1.3.9-GCC-10.2.0 - x x x x x edlib/1.3.8.post2-iccifort-2020.1.217-Python-3.8.2 - x x - x - edlib/1.3.8.post1-iccifort-2019.5.281-Python-3.7.4 - x x - x - edlib/1.3.8.post1-GCC-9.3.0-Python-3.8.2 - x x - x x edlib/1.3.8.post1-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/eggnog-mapper/", "title": "eggnog-mapper", "text": ""}, {"location": "available_software/detail/eggnog-mapper/#available-modules", "title": "Available modules", "text": "

          The overview below shows which eggnog-mapper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using eggnog-mapper, load one of these modules using a module load command like:

          module load eggnog-mapper/2.1.10-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty eggnog-mapper/2.1.10-foss-2020b x x x x x x eggnog-mapper/2.1.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/einops/", "title": "einops", "text": ""}, {"location": "available_software/detail/einops/#available-modules", "title": "Available modules", "text": "

          The overview below shows which einops installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using einops, load one of these modules using a module load command like:

          module load einops/0.4.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty einops/0.4.1-GCCcore-11.3.0 x x x x x x einops/0.4.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/elfutils/", "title": "elfutils", "text": ""}, {"location": "available_software/detail/elfutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which elfutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using elfutils, load one of these modules using a module load command like:

          module load elfutils/0.187-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty elfutils/0.187-GCCcore-11.3.0 x x x x x x elfutils/0.185-GCCcore-11.2.0 x x x x x x elfutils/0.185-GCCcore-10.3.0 x x x x x x elfutils/0.185-GCCcore-8.3.0 x - - - x - elfutils/0.183-GCCcore-10.2.0 - - x x x -"}, {"location": "available_software/detail/elprep/", "title": "elprep", "text": ""}, {"location": "available_software/detail/elprep/#available-modules", "title": "Available modules", "text": "

          The overview below shows which elprep installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using elprep, load one of these modules using a module load command like:

          module load elprep/5.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty elprep/5.1.1 - x x - x -"}, {"location": "available_software/detail/enchant-2/", "title": "enchant-2", "text": ""}, {"location": "available_software/detail/enchant-2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which enchant-2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using enchant-2, load one of these modules using a module load command like:

          module load enchant-2/2.3.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty enchant-2/2.3.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/epiScanpy/", "title": "epiScanpy", "text": ""}, {"location": "available_software/detail/epiScanpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which epiScanpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using epiScanpy, load one of these modules using a module load command like:

          module load epiScanpy/0.4.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty epiScanpy/0.4.0-foss-2022a x x x x x x epiScanpy/0.3.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/exiv2/", "title": "exiv2", "text": ""}, {"location": "available_software/detail/exiv2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which exiv2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using exiv2, load one of these modules using a module load command like:

          module load exiv2/0.27.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty exiv2/0.27.5-GCCcore-11.2.0 x x x x x x exiv2/0.27.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/expat/", "title": "expat", "text": ""}, {"location": "available_software/detail/expat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which expat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using expat, load one of these modules using a module load command like:

          module load expat/2.5.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty expat/2.5.0-GCCcore-13.2.0 x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x expat/2.4.8-GCCcore-11.3.0 x x x x x x expat/2.4.1-GCCcore-11.2.0 x x x x x x expat/2.2.9-GCCcore-10.3.0 x x x x x x expat/2.2.9-GCCcore-10.2.0 x x x x x x expat/2.2.9-GCCcore-9.3.0 x x x x x x expat/2.2.7-GCCcore-8.3.0 x x x x x x expat/2.2.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/expecttest/", "title": "expecttest", "text": ""}, {"location": "available_software/detail/expecttest/#available-modules", "title": "Available modules", "text": "

          The overview below shows which expecttest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using expecttest, load one of these modules using a module load command like:

          module load expecttest/0.1.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty expecttest/0.1.5-GCCcore-12.3.0 x x x x x x expecttest/0.1.3-GCCcore-12.2.0 x x x x x x expecttest/0.1.3-GCCcore-11.3.0 x x x x x x expecttest/0.1.3-GCCcore-11.2.0 x x x x x x expecttest/0.1.3-GCCcore-10.3.0 x x x x x x expecttest/0.1.3-GCCcore-10.2.0 x - - - - -"}, {"location": "available_software/detail/fasta-reader/", "title": "fasta-reader", "text": ""}, {"location": "available_software/detail/fasta-reader/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fasta-reader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fasta-reader, load one of these modules using a module load command like:

          module load fasta-reader/3.0.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fasta-reader/3.0.2-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/fastahack/", "title": "fastahack", "text": ""}, {"location": "available_software/detail/fastahack/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fastahack installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fastahack, load one of these modules using a module load command like:

          module load fastahack/1.0.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fastahack/1.0.0-GCCcore-11.3.0 x x x x x x fastahack/1.0.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/fastai/", "title": "fastai", "text": ""}, {"location": "available_software/detail/fastai/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fastai installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fastai, load one of these modules using a module load command like:

          module load fastai/2.7.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fastai/2.7.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/fastp/", "title": "fastp", "text": ""}, {"location": "available_software/detail/fastp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fastp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fastp, load one of these modules using a module load command like:

          module load fastp/0.23.2-GCC-11.2.0\n
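
          As a quick check that the loaded module works, a minimal fastp sketch on paired-end reads (the FASTQ file names are hypothetical placeholders) could be:

          module load fastp/0.23.2-GCC-11.2.0
          # trim and filter a pair of FASTQ files: -i/-I are the inputs, -o/-O the trimmed outputs
          fastp -i sample_R1.fastq.gz -I sample_R2.fastq.gz -o trimmed_R1.fastq.gz -O trimmed_R2.fastq.gz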

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fastp/0.23.2-GCC-11.2.0 x x x - x x fastp/0.20.1-iccifort-2020.1.217 - x x - x - fastp/0.20.0-iccifort-2019.5.281 - x - - - - fastp/0.20.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/fermi-lite/", "title": "fermi-lite", "text": ""}, {"location": "available_software/detail/fermi-lite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fermi-lite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fermi-lite, load one of these modules using a module load command like:

          module load fermi-lite/20190320-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fermi-lite/20190320-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/festival/", "title": "festival", "text": ""}, {"location": "available_software/detail/festival/#available-modules", "title": "Available modules", "text": "

          The overview below shows which festival installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using festival, load one of these modules using a module load command like:

          module load festival/2.5.0-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty festival/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/fetchMG/", "title": "fetchMG", "text": ""}, {"location": "available_software/detail/fetchMG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fetchMG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fetchMG, load one of these modules using a module load command like:

          module load fetchMG/1.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fetchMG/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ffnvcodec/", "title": "ffnvcodec", "text": ""}, {"location": "available_software/detail/ffnvcodec/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ffnvcodec, load one of these modules using a module load command like:

          module load ffnvcodec/12.0.16.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ffnvcodec/12.0.16.0 x x x x x x ffnvcodec/11.1.5.2 x x x x x x"}, {"location": "available_software/detail/file/", "title": "file", "text": ""}, {"location": "available_software/detail/file/#available-modules", "title": "Available modules", "text": "

          The overview below shows which file installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using file, load one of these modules using a module load command like:

          module load file/5.43-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty file/5.43-GCCcore-11.3.0 x x x x x x file/5.41-GCCcore-11.2.0 x x x x x x file/5.39-GCCcore-10.2.0 - x x x x x file/5.38-GCCcore-9.3.0 - x x - x x file/5.38-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/filevercmp/", "title": "filevercmp", "text": ""}, {"location": "available_software/detail/filevercmp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which filevercmp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using filevercmp, load one of these modules using a module load command like:

          module load filevercmp/20191210-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty filevercmp/20191210-GCCcore-11.3.0 x x x x x x filevercmp/20191210-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/finder/", "title": "finder", "text": ""}, {"location": "available_software/detail/finder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which finder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using finder, load one of these modules using a module load command like:

          module load finder/1.1.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty finder/1.1.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/flair-NLP/", "title": "flair-NLP", "text": ""}, {"location": "available_software/detail/flair-NLP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which flair-NLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using flair-NLP, load one of these modules using a module load command like:

          module load flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1 x - - - x - flair-NLP/0.11.3-foss-2021a x x x - x x"}, {"location": "available_software/detail/flatbuffers-python/", "title": "flatbuffers-python", "text": ""}, {"location": "available_software/detail/flatbuffers-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using flatbuffers-python, load one of these modules using a module load command like:

          module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers-python/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.3.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-10.3.0 x x x x x x flatbuffers-python/1.12-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/flatbuffers/", "title": "flatbuffers", "text": ""}, {"location": "available_software/detail/flatbuffers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using flatbuffers, load one of these modules using a module load command like:

          module load flatbuffers/23.5.26-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers/2.0.7-GCCcore-11.3.0 x x x x x x flatbuffers/2.0.0-GCCcore-11.2.0 x x x x x x flatbuffers/2.0.0-GCCcore-10.3.0 x x x x x x flatbuffers/1.12.0-GCCcore-10.2.0 x x x x x x flatbuffers/1.12.0-GCCcore-9.3.0 - x x - x x flatbuffers/1.12.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flex/", "title": "flex", "text": ""}, {"location": "available_software/detail/flex/#available-modules", "title": "Available modules", "text": "

          The overview below shows which flex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using flex, load one of these modules using a module load command like:

          module load flex/2.6.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flex/2.6.4-GCCcore-13.2.0 x x x x x x flex/2.6.4-GCCcore-12.3.0 x x x x x x flex/2.6.4-GCCcore-12.2.0 x x x x x x flex/2.6.4-GCCcore-11.3.0 x x x x x x flex/2.6.4-GCCcore-11.2.0 x x x x x x flex/2.6.4-GCCcore-10.3.0 x x x x x x flex/2.6.4-GCCcore-10.2.0 x x x x x x flex/2.6.4-GCCcore-9.3.0 x x x x x x flex/2.6.4-GCCcore-8.3.0 x x x x x x flex/2.6.4-GCCcore-8.2.0 - x - - - - flex/2.6.4 x x x x x x flex/2.6.3 x x x x x x flex/2.5.39-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flit/", "title": "flit", "text": ""}, {"location": "available_software/detail/flit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which flit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using flit, load one of these modules using a module load command like:

          module load flit/3.9.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flit/3.9.0-GCCcore-13.2.0 x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/flowFDA/", "title": "flowFDA", "text": ""}, {"location": "available_software/detail/flowFDA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which flowFDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using flowFDA, load one of these modules using a module load command like:

          module load flowFDA/0.99-20220602-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flowFDA/0.99-20220602-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/fmt/", "title": "fmt", "text": ""}, {"location": "available_software/detail/fmt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fmt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fmt, load one of these modules using a module load command like:

          module load fmt/10.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fmt/10.1.0-GCCcore-12.3.0 x x x x x x fmt/8.1.1-GCCcore-11.2.0 x x x - x x fmt/7.1.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/fontconfig/", "title": "fontconfig", "text": ""}, {"location": "available_software/detail/fontconfig/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fontconfig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fontconfig, load one of these modules using a module load command like:

          module load fontconfig/2.14.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x fontconfig/2.14.0-GCCcore-11.3.0 x x x x x x fontconfig/2.13.94-GCCcore-11.2.0 x x x x x x fontconfig/2.13.93-GCCcore-10.3.0 x x x x x x fontconfig/2.13.92-GCCcore-10.2.0 x x x x x x fontconfig/2.13.92-GCCcore-9.3.0 x x x x x x fontconfig/2.13.1-GCCcore-8.3.0 x x x - x x fontconfig/2.13.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/foss/", "title": "foss", "text": ""}, {"location": "available_software/detail/foss/#available-modules", "title": "Available modules", "text": "

          The overview below shows which foss installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using foss, load one of these modules using a module load command like:

          module load foss/2023b\n
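
          Because foss is a compiler toolchain (GCC together with OpenMPI and numerical libraries) rather than an end-user application, a minimal sketch of using it, assuming you have a small MPI source file hello.c of your own, could look like:

          module load foss/2023b
          # the toolchain provides MPI compiler wrappers, e.g. mpicc for C
          mpicc hello.c -o hello
          # quick test run on 4 processes (mpirun is part of the OpenMPI component of foss)
          mpirun -np 4 ./hello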

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty foss/2023b x x x x x x foss/2023a x x x x x x foss/2022b x x x x x x foss/2022a x x x x x x foss/2021b x x x x x x foss/2021a x x x x x x foss/2020b x x x x x x foss/2020a - x x - x x foss/2019b x x x - x x"}, {"location": "available_software/detail/fosscuda/", "title": "fosscuda", "text": ""}, {"location": "available_software/detail/fosscuda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fosscuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fosscuda, load one of these modules using a module load command like:

          module load fosscuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fosscuda/2020b x - - - x -"}, {"location": "available_software/detail/freebayes/", "title": "freebayes", "text": ""}, {"location": "available_software/detail/freebayes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which freebayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using freebayes, load one of these modules using a module load command like:

          module load freebayes/1.3.5-GCC-10.2.0\n
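
          A minimal freebayes sketch, assuming you already have an indexed reference FASTA and a sorted, indexed BAM file (the file names below are placeholders), would be:

          module load freebayes/1.3.5-GCC-10.2.0
          # call variants against the reference; the VCF output goes to stdout
          freebayes -f reference.fa sample.sorted.bam > variants.vcf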

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freebayes/1.3.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/freeglut/", "title": "freeglut", "text": ""}, {"location": "available_software/detail/freeglut/#available-modules", "title": "Available modules", "text": "

          The overview below shows which freeglut installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using freeglut, load one of these modules using a module load command like:

          module load freeglut/3.2.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freeglut/3.2.2-GCCcore-11.3.0 x x x x x x freeglut/3.2.1-GCCcore-11.2.0 x x x x x x freeglut/3.2.1-GCCcore-10.3.0 - x x - x x freeglut/3.2.1-GCCcore-10.2.0 - x x x x x freeglut/3.2.1-GCCcore-9.3.0 - x x - x x freeglut/3.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/freetype-py/", "title": "freetype-py", "text": ""}, {"location": "available_software/detail/freetype-py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which freetype-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using freetype-py, load one of these modules using a module load command like:

          module load freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/freetype/", "title": "freetype", "text": ""}, {"location": "available_software/detail/freetype/#available-modules", "title": "Available modules", "text": "

          The overview below shows which freetype installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using freetype, load one of these modules using a module load command like:

          module load freetype/2.13.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freetype/2.13.2-GCCcore-13.2.0 x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x freetype/2.12.1-GCCcore-11.3.0 x x x x x x freetype/2.11.0-GCCcore-11.2.0 x x x x x x freetype/2.10.4-GCCcore-10.3.0 x x x x x x freetype/2.10.3-GCCcore-10.2.0 x x x x x x freetype/2.10.1-GCCcore-9.3.0 x x x x x x freetype/2.10.1-GCCcore-8.3.0 x x x - x x freetype/2.9.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/fsom/", "title": "fsom", "text": ""}, {"location": "available_software/detail/fsom/#available-modules", "title": "Available modules", "text": "

          The overview below shows which fsom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using fsom, load one of these modules using a module load command like:

          module load fsom/20151117-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fsom/20151117-GCCcore-11.3.0 x x x x x x fsom/20141119-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/funannotate/", "title": "funannotate", "text": ""}, {"location": "available_software/detail/funannotate/#available-modules", "title": "Available modules", "text": "

          The overview below shows which funannotate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using funannotate, load one of these modules using a module load command like:

          module load funannotate/1.8.13-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty funannotate/1.8.13-foss-2021b x x x x x x"}, {"location": "available_software/detail/g2clib/", "title": "g2clib", "text": ""}, {"location": "available_software/detail/g2clib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which g2clib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using g2clib, load one of these modules using a module load command like:

          module load g2clib/1.6.0-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty g2clib/1.6.0-GCCcore-9.3.0 - x x - x x g2clib/1.6.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2lib/", "title": "g2lib", "text": ""}, {"location": "available_software/detail/g2lib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which g2lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using g2lib, load one of these modules using a module load command like:

          module load g2lib/3.1.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty g2lib/3.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2log/", "title": "g2log", "text": ""}, {"location": "available_software/detail/g2log/#available-modules", "title": "Available modules", "text": "

          The overview below shows which g2log installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using g2log, load one of these modules using a module load command like:

          module load g2log/1.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty g2log/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/garnett/", "title": "garnett", "text": ""}, {"location": "available_software/detail/garnett/#available-modules", "title": "Available modules", "text": "

          The overview below shows which garnett installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using garnett, load one of these modules using a module load command like:

          module load garnett/0.1.20-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty garnett/0.1.20-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/gawk/", "title": "gawk", "text": ""}, {"location": "available_software/detail/gawk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gawk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gawk, load one of these modules using a module load command like:

          module load gawk/5.1.0-GCC-10.2.0\n
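
          gawk is used directly from the shell; as a minimal sketch with a hypothetical whitespace-separated input file data.txt, you could sum the values in its second column:

          module load gawk/5.1.0-GCC-10.2.0
          # accumulate column 2 and print the total at the end
          gawk '{ sum += $2 } END { print sum }' data.txt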

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gawk/5.1.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/gbasis/", "title": "gbasis", "text": ""}, {"location": "available_software/detail/gbasis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gbasis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gbasis, load one of these modules using a module load command like:

          module load gbasis/20210904-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gbasis/20210904-intel-2022a x x x x x x"}, {"location": "available_software/detail/gc/", "title": "gc", "text": ""}, {"location": "available_software/detail/gc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gc, load one of these modules using a module load command like:

          module load gc/8.2.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gc/8.2.0-GCCcore-11.2.0 x x x x x x gc/8.0.4-GCCcore-10.3.0 - x x - x x gc/7.6.12-GCCcore-9.3.0 - x x - x x gc/7.6.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gcccuda/", "title": "gcccuda", "text": ""}, {"location": "available_software/detail/gcccuda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gcccuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gcccuda, load one of these modules using a module load command like:

          module load gcccuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gcccuda/2020b x x x x x x gcccuda/2019b x - - - x -"}, {"location": "available_software/detail/gcloud/", "title": "gcloud", "text": ""}, {"location": "available_software/detail/gcloud/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gcloud installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gcloud, load one of these modules using a module load command like:

          module load gcloud/382.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gcloud/382.0.0 - x x - x x"}, {"location": "available_software/detail/gcsfs/", "title": "gcsfs", "text": ""}, {"location": "available_software/detail/gcsfs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gcsfs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gcsfs, load one of these modules using a module load command like:

          module load gcsfs/2023.12.2.post1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gcsfs/2023.12.2.post1-foss-2023a x x x x x x"}, {"location": "available_software/detail/gdbm/", "title": "gdbm", "text": ""}, {"location": "available_software/detail/gdbm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gdbm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gdbm, load one of these modules using a module load command like:

          module load gdbm/1.18.1-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gdbm/1.18.1-foss-2020a - x x - x x"}, {"location": "available_software/detail/gdc-client/", "title": "gdc-client", "text": ""}, {"location": "available_software/detail/gdc-client/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gdc-client installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gdc-client, load one of these modules using a module load command like:

          module load gdc-client/1.6.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gdc-client/1.6.0-GCCcore-10.2.0 x x x x - x"}, {"location": "available_software/detail/gengetopt/", "title": "gengetopt", "text": ""}, {"location": "available_software/detail/gengetopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gengetopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gengetopt, load one of these modules using a module load command like:

          module load gengetopt/2.23-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gengetopt/2.23-GCCcore-10.2.0 - x x x x x gengetopt/2.23-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/genomepy/", "title": "genomepy", "text": ""}, {"location": "available_software/detail/genomepy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which genomepy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using genomepy, load one of these modules using a module load command like:

          module load genomepy/0.15.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty genomepy/0.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/genozip/", "title": "genozip", "text": ""}, {"location": "available_software/detail/genozip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which genozip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using genozip, load one of these modules using a module load command like:

          module load genozip/13.0.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty genozip/13.0.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/gensim/", "title": "gensim", "text": ""}, {"location": "available_software/detail/gensim/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gensim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gensim, load one of these modules using a module load command like:

          module load gensim/4.2.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gensim/4.2.0-foss-2021a x x x - x x gensim/3.8.3-intel-2020b - x x - x x gensim/3.8.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/geopandas/", "title": "geopandas", "text": ""}, {"location": "available_software/detail/geopandas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which geopandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using geopandas, load one of these modules using a module load command like:

          module load geopandas/0.12.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty geopandas/0.12.2-foss-2022b x x x x x x geopandas/0.8.1-intel-2019b-Python-3.7.4 - - x - x x geopandas/0.8.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/gettext/", "title": "gettext", "text": ""}, {"location": "available_software/detail/gettext/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gettext installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gettext, load one of these modules using a module load command like:

          module load gettext/0.22-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gettext/0.22-GCCcore-13.2.0 x x x x x x gettext/0.22 x x x x x x gettext/0.21.1-GCCcore-12.3.0 x x x x x x gettext/0.21.1-GCCcore-12.2.0 x x x x x x gettext/0.21.1 x x x x x x gettext/0.21-GCCcore-11.3.0 x x x x x x gettext/0.21-GCCcore-11.2.0 x x x x x x gettext/0.21-GCCcore-10.3.0 x x x x x x gettext/0.21-GCCcore-10.2.0 x x x x x x gettext/0.21 x x x x x x gettext/0.20.1-GCCcore-9.3.0 x x x x x x gettext/0.20.1-GCCcore-8.3.0 x x x - x x gettext/0.20.1 x x x x x x gettext/0.19.8.1-GCCcore-8.2.0 - x - - - - gettext/0.19.8.1 x x x x x x"}, {"location": "available_software/detail/gexiv2/", "title": "gexiv2", "text": ""}, {"location": "available_software/detail/gexiv2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gexiv2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gexiv2, load one of these modules using a module load command like:

          module load gexiv2/0.12.2-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gexiv2/0.12.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/gfbf/", "title": "gfbf", "text": ""}, {"location": "available_software/detail/gfbf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gfbf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gfbf, load one of these modules using a module load command like:

          module load gfbf/2023b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gfbf/2023b x x x x x x gfbf/2023a x x x x x x gfbf/2022b x x x x x x"}, {"location": "available_software/detail/gffread/", "title": "gffread", "text": ""}, {"location": "available_software/detail/gffread/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gffread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gffread, load one of these modules using a module load command like:

          module load gffread/0.12.7-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gffread/0.12.7-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/gffutils/", "title": "gffutils", "text": ""}, {"location": "available_software/detail/gffutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gffutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gffutils, load one of these modules using a module load command like:

          module load gffutils/0.12-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gffutils/0.12-foss-2022b x x x x x x"}, {"location": "available_software/detail/gflags/", "title": "gflags", "text": ""}, {"location": "available_software/detail/gflags/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gflags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gflags, load one of these modules using a module load command like:

          module load gflags/2.2.2-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gflags/2.2.2-GCCcore-12.2.0 x x x x x x gflags/2.2.2-GCCcore-11.3.0 x x x x x x gflags/2.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/giflib/", "title": "giflib", "text": ""}, {"location": "available_software/detail/giflib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which giflib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using giflib, load one of these modules using a module load command like:

          module load giflib/5.2.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty giflib/5.2.1-GCCcore-12.3.0 x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x giflib/5.2.1-GCCcore-11.3.0 x x x x x x giflib/5.2.1-GCCcore-11.2.0 x x x x x x giflib/5.2.1-GCCcore-10.3.0 x x x x x x giflib/5.2.1-GCCcore-10.2.0 x x x x x x giflib/5.2.1-GCCcore-9.3.0 - x x - x x giflib/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/git-lfs/", "title": "git-lfs", "text": ""}, {"location": "available_software/detail/git-lfs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which git-lfs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using git-lfs, load one of these modules using a module load command like:

          module load git-lfs/3.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty git-lfs/3.2.0 x x x - x x"}, {"location": "available_software/detail/git/", "title": "git", "text": ""}, {"location": "available_software/detail/git/#available-modules", "title": "Available modules", "text": "

          The overview below shows which git installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using git, load one of these modules using a module load command like:

          module load git/2.42.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty git/2.42.0-GCCcore-13.2.0 x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x git/2.36.0-GCCcore-11.3.0-nodocs x x x x x x git/2.33.1-GCCcore-11.2.0-nodocs x x x x x x git/2.32.0-GCCcore-10.3.0-nodocs x x x x x x git/2.28.0-GCCcore-10.2.0-nodocs x x x x x x git/2.23.0-GCCcore-9.3.0-nodocs x x x x x x git/2.23.0-GCCcore-8.3.0-nodocs - x x - x x git/2.23.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glew/", "title": "glew", "text": ""}, {"location": "available_software/detail/glew/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glew installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glew, load one of these modules using a module load command like:

          module load glew/2.2.0-GCCcore-12.3.0-osmesa\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glew/2.2.0-GCCcore-12.3.0-osmesa x x x x x x glew/2.2.0-GCCcore-12.2.0-egl x x x x x x glew/2.2.0-GCCcore-11.2.0-osmesa x x x x x x glew/2.2.0-GCCcore-11.2.0-egl x x x x x x glew/2.1.0-GCCcore-10.2.0 x x x x x x glew/2.1.0-GCCcore-9.3.0 - x x - x x glew/2.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glib-networking/", "title": "glib-networking", "text": ""}, {"location": "available_software/detail/glib-networking/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glib-networking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glib-networking, load one of these modules using a module load command like:

          module load glib-networking/2.72.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glib-networking/2.72.1-GCCcore-11.2.0 x x x x x x glib-networking/2.68.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/glibc/", "title": "glibc", "text": ""}, {"location": "available_software/detail/glibc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glibc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glibc, load one of these modules using a module load command like:

          module load glibc/2.30-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glibc/2.30-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/glog/", "title": "glog", "text": ""}, {"location": "available_software/detail/glog/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glog installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glog, load one of these modules using a module load command like:

          module load glog/0.6.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glog/0.6.0-GCCcore-12.2.0 x x x x x x glog/0.6.0-GCCcore-11.3.0 x x x x x x glog/0.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmpy2/", "title": "gmpy2", "text": ""}, {"location": "available_software/detail/gmpy2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gmpy2, load one of these modules using a module load command like:

          module load gmpy2/2.1.5-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gmpy2/2.1.5-GCC-12.3.0 x x x x x x gmpy2/2.1.5-GCC-12.2.0 x x x x x x gmpy2/2.1.2-intel-compilers-2022.1.0 x x x x x x gmpy2/2.1.2-intel-compilers-2021.4.0 x x x x x x gmpy2/2.1.2-GCC-11.3.0 x x x x x x gmpy2/2.1.2-GCC-11.2.0 x x x - x x gmpy2/2.1.0b5-GCC-10.2.0 - x x x x x gmpy2/2.1.0b5-GCC-9.3.0 - x x - x x gmpy2/2.1.0b4-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmsh/", "title": "gmsh", "text": ""}, {"location": "available_software/detail/gmsh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gmsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gmsh, load one of these modules using a module load command like:

          module load gmsh/4.5.6-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gmsh/4.5.6-intel-2019b-Python-2.7.16 - x x - x x gmsh/4.5.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/gnuplot/", "title": "gnuplot", "text": ""}, {"location": "available_software/detail/gnuplot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gnuplot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gnuplot, load one of these modules using a module load command like:

          module load gnuplot/5.4.8-GCCcore-12.3.0\n
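
          gnuplot can also be run non-interactively, which is usually what you want in a batch job; a minimal sketch (the data and output file names are hypothetical) is:

          module load gnuplot/5.4.8-GCCcore-12.3.0
          # render a simple PNG line plot from a hypothetical two-column data file
          gnuplot -e "set terminal png; set output 'plot.png'; plot 'data.dat' using 1:2 with lines"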

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x gnuplot/5.4.4-GCCcore-11.3.0 x x x x x x gnuplot/5.4.2-GCCcore-11.2.0 x x x x x x gnuplot/5.4.2-GCCcore-10.3.0 x x x x x x gnuplot/5.4.1-GCCcore-10.2.0 x x x x x x gnuplot/5.2.8-GCCcore-9.3.0 - x x - x x gnuplot/5.2.8-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/goalign/", "title": "goalign", "text": ""}, {"location": "available_software/detail/goalign/#available-modules", "title": "Available modules", "text": "

          The overview below shows which goalign installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using goalign, load one of these modules using a module load command like:

          module load goalign/0.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty goalign/0.3.2 - - x - x -"}, {"location": "available_software/detail/gobff/", "title": "gobff", "text": ""}, {"location": "available_software/detail/gobff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gobff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gobff, load one of these modules using a module load command like:

          module load gobff/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gobff/2020b - x - - - -"}, {"location": "available_software/detail/gomkl/", "title": "gomkl", "text": ""}, {"location": "available_software/detail/gomkl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gomkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gomkl, load one of these modules using a module load command like:

          module load gomkl/2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gomkl/2023a x x x x x x gomkl/2021a x x x x x x gomkl/2020a - x x x x x"}, {"location": "available_software/detail/gompi/", "title": "gompi", "text": ""}, {"location": "available_software/detail/gompi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gompi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gompi, load one of these modules using a module load command like:

          module load gompi/2023b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gompi/2023b x x x x x x gompi/2023a x x x x x x gompi/2022b x x x x x x gompi/2022a x x x x x x gompi/2021b x x x x x x gompi/2021a x x x x x x gompi/2020b x x x x x x gompi/2020a - x x x x x gompi/2019b x x x x x x"}, {"location": "available_software/detail/gompic/", "title": "gompic", "text": ""}, {"location": "available_software/detail/gompic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gompic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gompic, load one of these modules using a module load command like:

          module load gompic/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gompic/2020b x x - - x x"}, {"location": "available_software/detail/googletest/", "title": "googletest", "text": ""}, {"location": "available_software/detail/googletest/#available-modules", "title": "Available modules", "text": "

          The overview below shows which googletest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using googletest, load one of these modules using a module load command like:

          module load googletest/1.13.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty googletest/1.13.0-GCCcore-12.3.0 x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x googletest/1.11.0-GCCcore-11.3.0 x x x x x x googletest/1.11.0-GCCcore-11.2.0 x x x - x x googletest/1.10.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gotree/", "title": "gotree", "text": ""}, {"location": "available_software/detail/gotree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gotree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gotree, load one of these modules using a module load command like:

          module load gotree/0.4.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gotree/0.4.0 - - x - x -"}, {"location": "available_software/detail/gperf/", "title": "gperf", "text": ""}, {"location": "available_software/detail/gperf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gperf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gperf, load one of these modules using a module load command like:

          module load gperf/3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gperf/3.1-GCCcore-12.3.0 x x x x x x gperf/3.1-GCCcore-12.2.0 x x x x x x gperf/3.1-GCCcore-11.3.0 x x x x x x gperf/3.1-GCCcore-11.2.0 x x x x x x gperf/3.1-GCCcore-10.3.0 x x x x x x gperf/3.1-GCCcore-10.2.0 x x x x x x gperf/3.1-GCCcore-9.3.0 x x x x x x gperf/3.1-GCCcore-8.3.0 x x x - x x gperf/3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/gperftools/", "title": "gperftools", "text": ""}, {"location": "available_software/detail/gperftools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gperftools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gperftools, load one of these modules using a module load command like:

          module load gperftools/2.14-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gperftools/2.14-GCCcore-12.2.0 x x x x x x gperftools/2.10-GCCcore-11.3.0 x x x x x x gperftools/2.9.1-GCCcore-10.3.0 x x x - x x gperftools/2.7.90-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gpustat/", "title": "gpustat", "text": ""}, {"location": "available_software/detail/gpustat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gpustat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gpustat, load one of these modules using a module load command like:

          module load gpustat/0.6.0-gcccuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gpustat/0.6.0-gcccuda-2020b - - - - x -"}, {"location": "available_software/detail/graphite2/", "title": "graphite2", "text": ""}, {"location": "available_software/detail/graphite2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which graphite2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using graphite2, load one of these modules using a module load command like:

          module load graphite2/1.3.14-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty graphite2/1.3.14-GCCcore-12.3.0 x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x graphite2/1.3.14-GCCcore-11.3.0 x x x x x x graphite2/1.3.14-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/graphviz-python/", "title": "graphviz-python", "text": ""}, {"location": "available_software/detail/graphviz-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which graphviz-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using graphviz-python, load one of these modules using a module load command like:

          module load graphviz-python/0.20.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty graphviz-python/0.20.1-GCCcore-12.3.0 x x x x x x graphviz-python/0.20.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/grid/", "title": "grid", "text": ""}, {"location": "available_software/detail/grid/#available-modules", "title": "Available modules", "text": "

          The overview below shows which grid installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using grid, load one of these modules using a module load command like:

          module load grid/20220610-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty grid/20220610-intel-2022a x x x x x x"}, {"location": "available_software/detail/groff/", "title": "groff", "text": ""}, {"location": "available_software/detail/groff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which groff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using groff, load one of these modules using a module load command like:

          module load groff/1.22.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty groff/1.22.4-GCCcore-12.3.0 x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x groff/1.22.4-GCCcore-11.3.0 x x x x x x groff/1.22.4-GCCcore-11.2.0 x x x x x x groff/1.22.4-GCCcore-10.3.0 x x x x x x groff/1.22.4-GCCcore-10.2.0 x x x x x x groff/1.22.4-GCCcore-9.3.0 x x x x x x groff/1.22.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/gzip/", "title": "gzip", "text": ""}, {"location": "available_software/detail/gzip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gzip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gzip, load one of these modules using a module load command like:

          module load gzip/1.13-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gzip/1.13-GCCcore-13.2.0 x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x gzip/1.12-GCCcore-11.3.0 x x x x x x gzip/1.10-GCCcore-11.2.0 x x x x x x gzip/1.10-GCCcore-10.3.0 x x x x x x gzip/1.10-GCCcore-10.2.0 x x x x x x gzip/1.10-GCCcore-9.3.0 - x x x x x gzip/1.10-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/h5netcdf/", "title": "h5netcdf", "text": ""}, {"location": "available_software/detail/h5netcdf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which h5netcdf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using h5netcdf, load one of these modules using a module load command like:

          module load h5netcdf/1.2.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty h5netcdf/1.2.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/h5py/", "title": "h5py", "text": ""}, {"location": "available_software/detail/h5py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which h5py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using h5py, load one of these modules using a module load command like:

          module load h5py/3.9.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty h5py/3.9.0-foss-2023a x x x x x x h5py/3.8.0-foss-2022b x x x x x x h5py/3.7.0-intel-2022a x x x x x x h5py/3.7.0-foss-2022a x x x x x x h5py/3.6.0-intel-2021b x x x - x x h5py/3.6.0-foss-2021b x x x x x x h5py/3.2.1-gomkl-2021a x x x - x x h5py/3.2.1-foss-2021a x x x x x x h5py/3.1.0-intel-2020b - x x - x x h5py/3.1.0-fosscuda-2020b x - - - x - h5py/3.1.0-foss-2020b x x x x x x h5py/2.10.0-intel-2020a-Python-3.8.2 x x x x x x h5py/2.10.0-intel-2020a-Python-2.7.18 - x x - x x h5py/2.10.0-intel-2019b-Python-3.7.4 - x x - x x h5py/2.10.0-foss-2020a-Python-3.8.2 - x x - x x h5py/2.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/harmony/", "title": "harmony", "text": ""}, {"location": "available_software/detail/harmony/#available-modules", "title": "Available modules", "text": "

          The overview below shows which harmony installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using harmony, load one of these modules using a module load command like:

          module load harmony/1.0.0-20200224-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty harmony/1.0.0-20200224-foss-2020a-R-4.0.0 - x x - x x harmony/0.1.0-20210528-foss-2020b-R-4.0.3 - x x - x x"}, {"location": "available_software/detail/hatchling/", "title": "hatchling", "text": ""}, {"location": "available_software/detail/hatchling/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hatchling installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hatchling, load one of these modules using a module load command like:

          module load hatchling/1.18.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hatchling/1.18.0-GCCcore-13.2.0 x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/help2man/", "title": "help2man", "text": ""}, {"location": "available_software/detail/help2man/#available-modules", "title": "Available modules", "text": "

          The overview below shows which help2man installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using help2man, load one of these modules using a module load command like:

          module load help2man/1.49.3-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty help2man/1.49.3-GCCcore-13.2.0 x x x x x x help2man/1.49.3-GCCcore-12.3.0 x x x x x x help2man/1.49.2-GCCcore-12.2.0 x x x x x x help2man/1.49.2-GCCcore-11.3.0 x x x x x x help2man/1.48.3-GCCcore-11.2.0 x x x x x x help2man/1.48.3-GCCcore-10.3.0 x x x x x x help2man/1.47.16-GCCcore-10.2.0 x x x x x x help2man/1.47.12-GCCcore-9.3.0 x x x x x x help2man/1.47.8-GCCcore-8.3.0 x x x x x x help2man/1.47.7-GCCcore-8.2.0 - x - - - - help2man/1.47.4 - x - - - -"}, {"location": "available_software/detail/hierfstat/", "title": "hierfstat", "text": ""}, {"location": "available_software/detail/hierfstat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hierfstat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hierfstat, load one of these modules using a module load command like:

          module load hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/hifiasm/", "title": "hifiasm", "text": ""}, {"location": "available_software/detail/hifiasm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hifiasm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hifiasm, load one of these modules using a module load command like:

          module load hifiasm/0.19.7-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hifiasm/0.19.7-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/hiredis/", "title": "hiredis", "text": ""}, {"location": "available_software/detail/hiredis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hiredis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hiredis, load one of these modules using a module load command like:

          module load hiredis/1.0.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hiredis/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/histolab/", "title": "histolab", "text": ""}, {"location": "available_software/detail/histolab/#available-modules", "title": "Available modules", "text": "

          The overview below shows which histolab installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using histolab, load one of these modules using a module load command like:

          module load histolab/0.4.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty histolab/0.4.1-foss-2021b x x x - x x histolab/0.4.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/hmmlearn/", "title": "hmmlearn", "text": ""}, {"location": "available_software/detail/hmmlearn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hmmlearn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hmmlearn, load one of these modules using a module load command like:

          module load hmmlearn/0.3.0-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hmmlearn/0.3.0-gfbf-2023a x x x x x x hmmlearn/0.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/horton/", "title": "horton", "text": ""}, {"location": "available_software/detail/horton/#available-modules", "title": "Available modules", "text": "

          The overview below shows which horton installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using horton, load one of these modules using a module load command like:

          module load horton/2.1.1-intel-2020a-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty horton/2.1.1-intel-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/how_are_we_stranded_here/", "title": "how_are_we_stranded_here", "text": ""}, {"location": "available_software/detail/how_are_we_stranded_here/#available-modules", "title": "Available modules", "text": "

          The overview below shows which how_are_we_stranded_here installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using how_are_we_stranded_here, load one of these modules using a module load command like:

          module load how_are_we_stranded_here/1.0.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty how_are_we_stranded_here/1.0.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/humann/", "title": "humann", "text": ""}, {"location": "available_software/detail/humann/#available-modules", "title": "Available modules", "text": "

          The overview below shows which humann installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using humann, load one of these modules using a module load command like:

          module load humann/3.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty humann/3.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/hunspell/", "title": "hunspell", "text": ""}, {"location": "available_software/detail/hunspell/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hunspell installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hunspell, load one of these modules using a module load command like:

          module load hunspell/1.7.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hunspell/1.7.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/hwloc/", "title": "hwloc", "text": ""}, {"location": "available_software/detail/hwloc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hwloc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hwloc, load one of these modules using a module load command like:

          module load hwloc/2.9.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hwloc/2.9.2-GCCcore-13.2.0 x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x hwloc/2.7.1-GCCcore-11.3.0 x x x x x x hwloc/2.5.0-GCCcore-11.2.0 x x x x x x hwloc/2.4.1-GCCcore-10.3.0 x x x x x x hwloc/2.2.0-GCCcore-10.2.0 x x x x x x hwloc/2.2.0-GCCcore-9.3.0 x x x x x x hwloc/1.11.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/hyperopt/", "title": "hyperopt", "text": ""}, {"location": "available_software/detail/hyperopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hyperopt, load one of these modules using a module load command like:

          module load hyperopt/0.2.5-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hyperopt/0.2.5-fosscuda-2020b - - - - x - hyperopt/0.2.4-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/hypothesis/", "title": "hypothesis", "text": ""}, {"location": "available_software/detail/hypothesis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hypothesis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using hypothesis, load one of these modules using a module load command like:

          module load hypothesis/6.90.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x hypothesis/6.46.7-GCCcore-11.3.0 x x x x x x hypothesis/6.14.6-GCCcore-11.2.0 x x x x x x hypothesis/6.13.1-GCCcore-10.3.0 x x x x x x hypothesis/5.41.5-GCCcore-10.2.0 x x x x x x hypothesis/5.41.2-GCCcore-10.2.0 x x x x x x hypothesis/4.57.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x hypothesis/4.44.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/iccifort/", "title": "iccifort", "text": ""}, {"location": "available_software/detail/iccifort/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iccifort installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iccifort, load one of these modules using a module load command like:

          module load iccifort/2020.4.304\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iccifort/2020.4.304 x x x x x x iccifort/2020.1.217 x x x x x x iccifort/2019.5.281 - x x - x x"}, {"location": "available_software/detail/iccifortcuda/", "title": "iccifortcuda", "text": ""}, {"location": "available_software/detail/iccifortcuda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iccifortcuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iccifortcuda, load one of these modules using a module load command like:

          module load iccifortcuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iccifortcuda/2020b - - - - x - iccifortcuda/2020a - - - - x - iccifortcuda/2019b - - - - x -"}, {"location": "available_software/detail/ichorCNA/", "title": "ichorCNA", "text": ""}, {"location": "available_software/detail/ichorCNA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ichorCNA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ichorCNA, load one of these modules using a module load command like:

          module load ichorCNA/0.3.2-20191219-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ichorCNA/0.3.2-20191219-foss-2020a - x x - x x"}, {"location": "available_software/detail/idemux/", "title": "idemux", "text": ""}, {"location": "available_software/detail/idemux/#available-modules", "title": "Available modules", "text": "

          The overview below shows which idemux installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using idemux, load one of these modules using a module load command like:

          module load idemux/0.1.6-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty idemux/0.1.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/igraph/", "title": "igraph", "text": ""}, {"location": "available_software/detail/igraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which igraph installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using igraph, load one of these modules using a module load command like:

          module load igraph/0.10.10-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty igraph/0.10.10-foss-2023a x x x x x x igraph/0.10.3-foss-2022a x x x x x x igraph/0.9.5-foss-2021b x x x x x x igraph/0.9.4-foss-2021a x x x x x x igraph/0.9.1-fosscuda-2020b - - - - x - igraph/0.9.1-foss-2020b - x x x x x igraph/0.8.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/igvShiny/", "title": "igvShiny", "text": ""}, {"location": "available_software/detail/igvShiny/#available-modules", "title": "Available modules", "text": "

          The overview below shows which igvShiny installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using igvShiny, load one of these modules using a module load command like:

          module load igvShiny/20240112-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty igvShiny/20240112-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/iibff/", "title": "iibff", "text": ""}, {"location": "available_software/detail/iibff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iibff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iibff, load one of these modules using a module load command like:

          module load iibff/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iibff/2020b - x - - - -"}, {"location": "available_software/detail/iimpi/", "title": "iimpi", "text": ""}, {"location": "available_software/detail/iimpi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iimpi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iimpi, load one of these modules using a module load command like:

          module load iimpi/2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iimpi/2023a x x x x x x iimpi/2022b x x x x x x iimpi/2022a x x x x x x iimpi/2021b x x x x x x iimpi/2021a - x x - x x iimpi/2020b x x x x x x iimpi/2020a x x x x x x iimpi/2019b - x x - x x"}, {"location": "available_software/detail/iimpic/", "title": "iimpic", "text": ""}, {"location": "available_software/detail/iimpic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iimpic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iimpic, load one of these modules using a module load command like:

          module load iimpic/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iimpic/2020b - - - - x - iimpic/2020a - - - - x - iimpic/2019b - - - - x -"}, {"location": "available_software/detail/imagecodecs/", "title": "imagecodecs", "text": ""}, {"location": "available_software/detail/imagecodecs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imagecodecs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using imagecodecs, load one of these modules using a module load command like:

          module load imagecodecs/2022.9.26-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imagecodecs/2022.9.26-foss-2022a x x x x x x"}, {"location": "available_software/detail/imageio/", "title": "imageio", "text": ""}, {"location": "available_software/detail/imageio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imageio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using imageio, load one of these modules using a module load command like:

          module load imageio/2.22.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imageio/2.22.2-foss-2022a x x x x x x imageio/2.13.5-foss-2021b x x x x x x imageio/2.10.5-foss-2021a x x x - x x imageio/2.9.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/imbalanced-learn/", "title": "imbalanced-learn", "text": ""}, {"location": "available_software/detail/imbalanced-learn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imbalanced-learn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using imbalanced-learn, load one of these modules using a module load command like:

          module load imbalanced-learn/0.10.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imbalanced-learn/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/imgaug/", "title": "imgaug", "text": ""}, {"location": "available_software/detail/imgaug/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imgaug installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using imgaug, load one of these modules using a module load command like:

          module load imgaug/0.4.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imgaug/0.4.0-foss-2021b x x x - x x imgaug/0.4.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/imkl-FFTW/", "title": "imkl-FFTW", "text": ""}, {"location": "available_software/detail/imkl-FFTW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imkl-FFTW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using imkl-FFTW, load one of these modules using a module load command like:

          module load imkl-FFTW/2023.1.0-iimpi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imkl-FFTW/2023.1.0-iimpi-2023a x x x x x x imkl-FFTW/2022.2.1-iimpi-2022b x x x x x x imkl-FFTW/2022.1.0-iimpi-2022a x x x x x x imkl-FFTW/2021.4.0-iimpi-2021b x x x x x x"}, {"location": "available_software/detail/imkl/", "title": "imkl", "text": ""}, {"location": "available_software/detail/imkl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using imkl, load one of these modules using a module load command like:

          module load imkl/2023.1.0-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imkl/2023.1.0-gompi-2023a - - x - x x imkl/2023.1.0 x x x x x x imkl/2022.2.1 x x x x x x imkl/2022.1.0 x x x x x x imkl/2021.4.0 x x x x x x imkl/2021.2.0-iompi-2021a x x x x x x imkl/2021.2.0-iimpi-2021a - x x - x x imkl/2021.2.0-gompi-2021a x - x - x x imkl/2020.4.304-iompi-2020b x - x x x x imkl/2020.4.304-iimpic-2020b - - - - x - imkl/2020.4.304-iimpi-2020b - - x x x x imkl/2020.4.304-NVHPC-21.2 - - x - x - imkl/2020.1.217-iimpic-2020a - - - - x - imkl/2020.1.217-iimpi-2020a x - x - x x imkl/2020.1.217-gompi-2020a - - x - x x imkl/2020.0.166-iompi-2020a - x - - - - imkl/2020.0.166-iimpi-2020b x x - x - - imkl/2020.0.166-iimpi-2020a - x - - - - imkl/2020.0.166-gompi-2023a x x - x - - imkl/2020.0.166-gompi-2020a - x - - - - imkl/2019.5.281-iimpic-2019b - - - - x - imkl/2019.5.281-iimpi-2019b - x x - x x imkl/2018.4.274-iompi-2020b - x - x - - imkl/2018.4.274-iompi-2020a - x - - - - imkl/2018.4.274-iimpi-2020b - x - x - - imkl/2018.4.274-iimpi-2020a x x - x - - imkl/2018.4.274-iimpi-2019b - x - - - - imkl/2018.4.274-gompi-2021a - x - x - - imkl/2018.4.274-gompi-2020a - x - x - - imkl/2018.4.274-NVHPC-21.2 x - - - - -"}, {"location": "available_software/detail/impi/", "title": "impi", "text": ""}, {"location": "available_software/detail/impi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which impi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using impi, load one of these modules using a module load command like:

          module load impi/2021.9.0-intel-compilers-2023.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty impi/2021.9.0-intel-compilers-2023.1.0 x x x x x x impi/2021.7.1-intel-compilers-2022.2.1 x x x x x x impi/2021.6.0-intel-compilers-2022.1.0 x x x x x x impi/2021.4.0-intel-compilers-2021.4.0 x x x x x x impi/2021.2.0-intel-compilers-2021.2.0 - x x - x x impi/2019.9.304-iccifortcuda-2020b - - - - x - impi/2019.9.304-iccifort-2020.4.304 x x x x x x impi/2019.9.304-iccifort-2020.1.217 x x x x x x impi/2019.9.304-iccifort-2019.5.281 - x x - x x impi/2019.7.217-iccifortcuda-2020a - - - - x - impi/2019.7.217-iccifort-2020.1.217 - x x - x x impi/2019.7.217-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/imutils/", "title": "imutils", "text": ""}, {"location": "available_software/detail/imutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using imutils, load one of these modules using a module load command like:

          module load imutils/0.5.4-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imutils/0.5.4-fosscuda-2020b x - - - x - imutils/0.5.4-foss-2022a-CUDA-11.7.0 x - x - x -"}, {"location": "available_software/detail/inferCNV/", "title": "inferCNV", "text": ""}, {"location": "available_software/detail/inferCNV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which inferCNV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using inferCNV, load one of these modules using a module load command like:

          module load inferCNV/1.12.0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty inferCNV/1.12.0-foss-2022a-R-4.2.1 x x x x x x inferCNV/1.12.0-foss-2021b-R-4.2.0 x x x - x x inferCNV/1.3.3-foss-2020b x x x x x x"}, {"location": "available_software/detail/infercnvpy/", "title": "infercnvpy", "text": ""}, {"location": "available_software/detail/infercnvpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which infercnvpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using infercnvpy, load one of these modules using a module load command like:

          module load infercnvpy/0.4.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty infercnvpy/0.4.2-foss-2022a x x x x x x infercnvpy/0.4.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/inflection/", "title": "inflection", "text": ""}, {"location": "available_software/detail/inflection/#available-modules", "title": "Available modules", "text": "

          The overview below shows which inflection installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using inflection, load one of these modules using a module load command like:

          module load inflection/1.3.5-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty inflection/1.3.5-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/intel-compilers/", "title": "intel-compilers", "text": ""}, {"location": "available_software/detail/intel-compilers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intel-compilers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using intel-compilers, load one of these modules using a module load command like:

          module load intel-compilers/2023.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intel-compilers/2023.1.0 x x x x x x intel-compilers/2022.2.1 x x x x x x intel-compilers/2022.1.0 x x x x x x intel-compilers/2021.4.0 x x x x x x intel-compilers/2021.2.0 x x x x x x"}, {"location": "available_software/detail/intel/", "title": "intel", "text": ""}, {"location": "available_software/detail/intel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using intel, load one of these modules using a module load command like:

          module load intel/2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intel/2023a x x x x x x intel/2022b x x x x x x intel/2022a x x x x x x intel/2021b x x x x x x intel/2021a - x x - x x intel/2020b - x x x x x intel/2020a x x x x x x intel/2019b - x x - x x"}, {"location": "available_software/detail/intelcuda/", "title": "intelcuda", "text": ""}, {"location": "available_software/detail/intelcuda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intelcuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using intelcuda, load one of these modules using a module load command like:

          module load intelcuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intelcuda/2020b - - - - x - intelcuda/2020a - - - - x - intelcuda/2019b - - - - x -"}, {"location": "available_software/detail/intervaltree-python/", "title": "intervaltree-python", "text": ""}, {"location": "available_software/detail/intervaltree-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intervaltree-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using intervaltree-python, load one of these modules using a module load command like:

          module load intervaltree-python/3.1.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intervaltree-python/3.1.0-GCCcore-11.3.0 x x x x x x intervaltree-python/3.1.0-GCCcore-11.2.0 x x x - x x intervaltree-python/3.1.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/intervaltree/", "title": "intervaltree", "text": ""}, {"location": "available_software/detail/intervaltree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intervaltree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using intervaltree, load one of these modules using a module load command like:

          module load intervaltree/0.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intervaltree/0.1-GCCcore-11.3.0 x x x x x x intervaltree/0.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/intltool/", "title": "intltool", "text": ""}, {"location": "available_software/detail/intltool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intltool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using intltool, load one of these modules using a module load command like:

          module load intltool/0.51.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intltool/0.51.0-GCCcore-12.3.0 x x x x x x intltool/0.51.0-GCCcore-12.2.0 x x x x x x intltool/0.51.0-GCCcore-11.3.0 x x x x x x intltool/0.51.0-GCCcore-11.2.0 x x x x x x intltool/0.51.0-GCCcore-10.3.0 x x x x x x intltool/0.51.0-GCCcore-10.2.0 x x x x x x intltool/0.51.0-GCCcore-9.3.0 x x x x x x intltool/0.51.0-GCCcore-8.3.0 x x x - x x intltool/0.51.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/iodata/", "title": "iodata", "text": ""}, {"location": "available_software/detail/iodata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iodata installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iodata, load one of these modules using a module load command like:

          module load iodata/1.0.0a2-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iodata/1.0.0a2-intel-2022a x x x x x x"}, {"location": "available_software/detail/iomkl/", "title": "iomkl", "text": ""}, {"location": "available_software/detail/iomkl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iomkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iomkl, load one of these modules using a module load command like:

          module load iomkl/2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iomkl/2021a x x x x x x iomkl/2020b x x x x x x iomkl/2020a - x - - - -"}, {"location": "available_software/detail/iompi/", "title": "iompi", "text": ""}, {"location": "available_software/detail/iompi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iompi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using iompi, load one of these modules using a module load command like:

          module load iompi/2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iompi/2021a x x x x x x iompi/2020b x x x x x x iompi/2020a - x - - - -"}, {"location": "available_software/detail/isoCirc/", "title": "isoCirc", "text": ""}, {"location": "available_software/detail/isoCirc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which isoCirc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using isoCirc, load one of these modules using a module load command like:

          module load isoCirc/1.0.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty isoCirc/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/jax/", "title": "jax", "text": ""}, {"location": "available_software/detail/jax/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jax installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jax, load one of these modules using a module load command like:

          module load jax/0.3.25-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jax/0.3.25-foss-2022a-CUDA-11.7.0 x - - - x - jax/0.3.25-foss-2022a x x x x x x jax/0.3.23-foss-2021b-CUDA-11.4.1 x - - - x - jax/0.3.9-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.3.9-foss-2021a x x x x x x jax/0.2.24-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.2.24-foss-2021a - x x - x x jax/0.2.19-fosscuda-2020b x - - - x - jax/0.2.19-foss-2020b x x x x x x"}, {"location": "available_software/detail/jbigkit/", "title": "jbigkit", "text": ""}, {"location": "available_software/detail/jbigkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jbigkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jbigkit, load one of these modules using a module load command like:

          module load jbigkit/2.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jbigkit/2.1-GCCcore-13.2.0 x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x jbigkit/2.1-GCCcore-11.3.0 x x x x x x jbigkit/2.1-GCCcore-11.2.0 x x x x x x jbigkit/2.1-GCCcore-10.3.0 x x x x x x jbigkit/2.1-GCCcore-10.2.0 x - x x x x jbigkit/2.1-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/jemalloc/", "title": "jemalloc", "text": ""}, {"location": "available_software/detail/jemalloc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jemalloc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jemalloc, load one of these modules using a module load command like:

          module load jemalloc/5.3.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jemalloc/5.3.0-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.2.0 x x x x x x jemalloc/5.2.1-GCCcore-10.3.0 x x x - x x jemalloc/5.2.1-GCCcore-10.2.0 - x x x x x jemalloc/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/jobcli/", "title": "jobcli", "text": ""}, {"location": "available_software/detail/jobcli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jobcli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jobcli, load one of these modules using a module load command like:

          module load jobcli/0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jobcli/0.0 - x - - - -"}, {"location": "available_software/detail/joypy/", "title": "joypy", "text": ""}, {"location": "available_software/detail/joypy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which joypy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using joypy, load one of these modules using a module load command like:

          module load joypy/0.2.4-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty joypy/0.2.4-intel-2020b - x x - x x joypy/0.2.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/json-c/", "title": "json-c", "text": ""}, {"location": "available_software/detail/json-c/#available-modules", "title": "Available modules", "text": "

          The overview below shows which json-c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using json-c, load one of these modules using a module load command like:

          module load json-c/0.16-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty json-c/0.16-GCCcore-12.3.0 x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x json-c/0.15-GCCcore-10.3.0 - x x - x x json-c/0.15-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/jupyter-contrib-nbextensions/", "title": "jupyter-contrib-nbextensions", "text": ""}, {"location": "available_software/detail/jupyter-contrib-nbextensions/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jupyter-contrib-nbextensions installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jupyter-contrib-nbextensions, load one of these modules using a module load command like:

          module load jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server-proxy/", "title": "jupyter-server-proxy", "text": ""}, {"location": "available_software/detail/jupyter-server-proxy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jupyter-server-proxy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jupyter-server-proxy, load one of these modules using a module load command like:

          module load jupyter-server-proxy/3.2.2-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jupyter-server-proxy/3.2.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server/", "title": "jupyter-server", "text": ""}, {"location": "available_software/detail/jupyter-server/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jupyter-server, load one of these modules using a module load command like:

          module load jupyter-server/2.7.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x jupyter-server/2.7.0-GCCcore-12.2.0 x x x x x x jupyter-server/1.21.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jxrlib/", "title": "jxrlib", "text": ""}, {"location": "available_software/detail/jxrlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jxrlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using jxrlib, load one of these modules using a module load command like:

          module load jxrlib/1.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jxrlib/1.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/kallisto/", "title": "kallisto", "text": ""}, {"location": "available_software/detail/kallisto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kallisto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using kallisto, load one of these modules using a module load command like:

          module load kallisto/0.48.0-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kallisto/0.48.0-gompi-2022a x x x x x x kallisto/0.46.1-intel-2020a - x - - - - kallisto/0.46.1-iimpi-2020b - x x x x x kallisto/0.46.1-iimpi-2020a - x x - x x kallisto/0.46.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/kb-python/", "title": "kb-python", "text": ""}, {"location": "available_software/detail/kb-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kb-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using kb-python, load one of these modules using a module load command like:

          module load kb-python/0.27.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kb-python/0.27.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/kim-api/", "title": "kim-api", "text": ""}, {"location": "available_software/detail/kim-api/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kim-api installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using kim-api, load one of these modules using a module load command like:

          module load kim-api/2.3.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kim-api/2.3.0-GCCcore-11.2.0 x x x - x x kim-api/2.2.1-GCCcore-10.3.0 - x x - x x kim-api/2.1.3-intel-2020a - x x - x x kim-api/2.1.3-intel-2019b - x x - x x kim-api/2.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/kineto/", "title": "kineto", "text": ""}, {"location": "available_software/detail/kineto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kineto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kineto, load one of these modules using a module load command like:

          module load kineto/0.4.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kineto/0.4.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/kma/", "title": "kma", "text": ""}, {"location": "available_software/detail/kma/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kma, load one of these modules using a module load command like:

          module load kma/1.2.22-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kma/1.2.22-intel-2019b - x x - x x"}, {"location": "available_software/detail/kneaddata/", "title": "kneaddata", "text": ""}, {"location": "available_software/detail/kneaddata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kneaddata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kneaddata, load one of these modules using a module load command like:

          module load kneaddata/0.12.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kneaddata/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/krbalancing/", "title": "krbalancing", "text": ""}, {"location": "available_software/detail/krbalancing/#available-modules", "title": "Available modules", "text": "

          The overview below shows which krbalancing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using krbalancing, load one of these modules using a module load command like:

          module load krbalancing/0.5.0b0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty krbalancing/0.5.0b0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/lancet/", "title": "lancet", "text": ""}, {"location": "available_software/detail/lancet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lancet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using lancet, load one of these modules using a module load command like:

          module load lancet/1.1.0-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lancet/1.1.0-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/lavaan/", "title": "lavaan", "text": ""}, {"location": "available_software/detail/lavaan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lavaan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using lavaan, load one of these modules using a module load command like:

          module load lavaan/0.6-9-foss-2021a-R-4.1.0\n
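
          lavaan is an R package, so you use it from within R after loading the module (which typically also pulls in the matching R installation). A minimal sketch that just checks the package can be attached:

          module load lavaan/0.6-9-foss-2021a-R-4.1.0\nRscript -e 'library(lavaan)'\n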

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lavaan/0.6-9-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/leafcutter/", "title": "leafcutter", "text": ""}, {"location": "available_software/detail/leafcutter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which leafcutter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using leafcutter, load one of these modules using a module load command like:

          module load leafcutter/0.2.9-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty leafcutter/0.2.9-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/legacy-job-wrappers/", "title": "legacy-job-wrappers", "text": ""}, {"location": "available_software/detail/legacy-job-wrappers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which legacy-job-wrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using legacy-job-wrappers, load one of these modules using a module load command like:

          module load legacy-job-wrappers/0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty legacy-job-wrappers/0.0 - x x - x -"}, {"location": "available_software/detail/leidenalg/", "title": "leidenalg", "text": ""}, {"location": "available_software/detail/leidenalg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which leidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using leidenalg, load one of these modules using a module load command like:

          module load leidenalg/0.10.2-foss-2023a\n
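
          leidenalg is a Python package. The sketch below assumes the module provides the Python interpreter and puts the package (and its python-igraph dependency) on the import path, as EasyBuild Python-package modules normally do:

          module load leidenalg/0.10.2-foss-2023a\npython -c 'import leidenalg; print(leidenalg.__file__)'\n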

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty leidenalg/0.10.2-foss-2023a x x x x x x leidenalg/0.9.1-foss-2022a x x x x x x leidenalg/0.8.8-foss-2021b x x x x x x leidenalg/0.8.7-foss-2021a x x x x x x leidenalg/0.8.3-fosscuda-2020b - - - - x - leidenalg/0.8.3-foss-2020b - x x x x x leidenalg/0.8.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/lftp/", "title": "lftp", "text": ""}, {"location": "available_software/detail/lftp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lftp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using lftp, load one of these modules using a module load command like:

          module load lftp/4.9.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lftp/4.9.2-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/libBigWig/", "title": "libBigWig", "text": ""}, {"location": "available_software/detail/libBigWig/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libBigWig, load one of these modules using a module load command like:

          module load libBigWig/0.4.4-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libBigWig/0.4.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libFLAME/", "title": "libFLAME", "text": ""}, {"location": "available_software/detail/libFLAME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libFLAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libFLAME, load one of these modules using a module load command like:

          module load libFLAME/5.2.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libFLAME/5.2.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/libGLU/", "title": "libGLU", "text": ""}, {"location": "available_software/detail/libGLU/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libGLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libGLU, load one of these modules using a module load command like:

          module load libGLU/9.0.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libGLU/9.0.3-GCCcore-12.3.0 x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x libGLU/9.0.2-GCCcore-11.3.0 x x x x x x libGLU/9.0.2-GCCcore-11.2.0 x x x x x x libGLU/9.0.1-GCCcore-10.3.0 x x x x x x libGLU/9.0.1-GCCcore-10.2.0 x x x x x x libGLU/9.0.1-GCCcore-9.3.0 - x x - x x libGLU/9.0.1-GCCcore-8.3.0 x x x - x x libGLU/9.0.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libRmath/", "title": "libRmath", "text": ""}, {"location": "available_software/detail/libRmath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libRmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libRmath, load one of these modules using a module load command like:

          module load libRmath/4.1.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libRmath/4.1.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libaec/", "title": "libaec", "text": ""}, {"location": "available_software/detail/libaec/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libaec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libaec, load one of these modules using a module load command like:

          module load libaec/1.0.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libaec/1.0.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libaio/", "title": "libaio", "text": ""}, {"location": "available_software/detail/libaio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libaio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libaio, load one of these modules using a module load command like:

          module load libaio/0.3.113-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libaio/0.3.113-GCCcore-12.3.0 x x x x x x libaio/0.3.112-GCCcore-11.3.0 x x x x x x libaio/0.3.112-GCCcore-11.2.0 x x x x x x libaio/0.3.112-GCCcore-10.3.0 x x x - x x libaio/0.3.112-GCCcore-10.2.0 - x x x x x libaio/0.3.111-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libarchive/", "title": "libarchive", "text": ""}, {"location": "available_software/detail/libarchive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libarchive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libarchive, load one of these modules using a module load command like:

          module load libarchive/3.7.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libarchive/3.7.2-GCCcore-13.2.0 x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x libarchive/3.6.1-GCCcore-11.3.0 x x x x x x libarchive/3.5.1-GCCcore-11.2.0 x x x x x x libarchive/3.5.1-GCCcore-10.3.0 x x x x x x libarchive/3.5.1-GCCcore-8.3.0 x - - - x - libarchive/3.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libavif/", "title": "libavif", "text": ""}, {"location": "available_software/detail/libavif/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libavif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libavif, load one of these modules using a module load command like:

          module load libavif/0.11.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libavif/0.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libcdms/", "title": "libcdms", "text": ""}, {"location": "available_software/detail/libcdms/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libcdms installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libcdms, load one of these modules using a module load command like:

          module load libcdms/3.1.2-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libcdms/3.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/libcerf/", "title": "libcerf", "text": ""}, {"location": "available_software/detail/libcerf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libcerf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libcerf, load one of these modules using a module load command like:

          module load libcerf/2.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libcerf/2.3-GCCcore-12.3.0 x x x x x x libcerf/2.1-GCCcore-11.3.0 x x x x x x libcerf/1.17-GCCcore-11.2.0 x x x x x x libcerf/1.17-GCCcore-10.3.0 x x x x x x libcerf/1.14-GCCcore-10.2.0 x x x x x x libcerf/1.13-GCCcore-9.3.0 - x x - x x libcerf/1.13-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libcint/", "title": "libcint", "text": ""}, {"location": "available_software/detail/libcint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libcint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libcint, load one of these modules using a module load command like:

          module load libcint/5.5.0-gfbf-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libcint/5.5.0-gfbf-2022b x x x x x x libcint/5.1.6-foss-2022a - x x x x x libcint/4.4.0-gomkl-2021a x x x - x x libcint/4.4.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/libdap/", "title": "libdap", "text": ""}, {"location": "available_software/detail/libdap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdap, load one of these modules using a module load command like:

          module load libdap/3.20.7-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdap/3.20.7-GCCcore-10.3.0 - x x - x x libdap/3.20.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libde265/", "title": "libde265", "text": ""}, {"location": "available_software/detail/libde265/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libde265 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libde265, load one of these modules using a module load command like:

          module load libde265/1.0.11-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libde265/1.0.11-GCC-11.3.0 x x x x x x libde265/1.0.8-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libdeflate/", "title": "libdeflate", "text": ""}, {"location": "available_software/detail/libdeflate/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdeflate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdeflate, load one of these modules using a module load command like:

          module load libdeflate/1.19-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdeflate/1.19-GCCcore-13.2.0 x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x libdeflate/1.10-GCCcore-11.3.0 x x x x x x libdeflate/1.8-GCCcore-11.2.0 x x x x x x libdeflate/1.7-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libdrm/", "title": "libdrm", "text": ""}, {"location": "available_software/detail/libdrm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdrm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdrm, load one of these modules using a module load command like:

          module load libdrm/2.4.115-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdrm/2.4.115-GCCcore-12.3.0 x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x libdrm/2.4.110-GCCcore-11.3.0 x x x x x x libdrm/2.4.107-GCCcore-11.2.0 x x x x x x libdrm/2.4.106-GCCcore-10.3.0 x x x x x x libdrm/2.4.102-GCCcore-10.2.0 x x x x x x libdrm/2.4.100-GCCcore-9.3.0 - x x - x x libdrm/2.4.99-GCCcore-8.3.0 x x x - x x libdrm/2.4.97-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libdrs/", "title": "libdrs", "text": ""}, {"location": "available_software/detail/libdrs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdrs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdrs, load one of these modules using a module load command like:

          module load libdrs/3.1.2-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdrs/3.1.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/libepoxy/", "title": "libepoxy", "text": ""}, {"location": "available_software/detail/libepoxy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libepoxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libepoxy, load one of these modules using a module load command like:

          module load libepoxy/1.5.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x libepoxy/1.5.10-GCCcore-11.3.0 x x x x x x libepoxy/1.5.8-GCCcore-11.2.0 x x x x x x libepoxy/1.5.8-GCCcore-10.3.0 x x x - x x libepoxy/1.5.4-GCCcore-10.2.0 x x x x x x libepoxy/1.5.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libev/", "title": "libev", "text": ""}, {"location": "available_software/detail/libev/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libev installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libev, load one of these modules using a module load command like:

          module load libev/4.33-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libev/4.33-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libevent/", "title": "libevent", "text": ""}, {"location": "available_software/detail/libevent/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libevent installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libevent, load one of these modules using a module load command like:

          module load libevent/2.1.12-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libevent/2.1.12-GCCcore-13.2.0 x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x libevent/2.1.12-GCCcore-11.3.0 x x x x x x libevent/2.1.12-GCCcore-11.2.0 x x x x x x libevent/2.1.12-GCCcore-10.3.0 x x x x x x libevent/2.1.12-GCCcore-10.2.0 x x x x x x libevent/2.1.12 - x x - x x libevent/2.1.11-GCCcore-9.3.0 x x x x x x libevent/2.1.11-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libfabric/", "title": "libfabric", "text": ""}, {"location": "available_software/detail/libfabric/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libfabric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libfabric, load one of these modules using a module load command like:

          module load libfabric/1.19.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libfabric/1.19.0-GCCcore-13.2.0 x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x libfabric/1.15.1-GCCcore-11.3.0 x x x x x x libfabric/1.13.2-GCCcore-11.2.0 x x x x x x libfabric/1.12.1-GCCcore-10.3.0 x x x x x x libfabric/1.11.0-GCCcore-10.2.0 x x x x x x libfabric/1.11.0-GCCcore-9.3.0 - x x x x x"}, {"location": "available_software/detail/libffi/", "title": "libffi", "text": ""}, {"location": "available_software/detail/libffi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libffi, load one of these modules using a module load command like:

          module load libffi/3.4.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libffi/3.4.4-GCCcore-13.2.0 x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x libffi/3.4.2-GCCcore-11.3.0 x x x x x x libffi/3.4.2-GCCcore-11.2.0 x x x x x x libffi/3.3-GCCcore-10.3.0 x x x x x x libffi/3.3-GCCcore-10.2.0 x x x x x x libffi/3.3-GCCcore-9.3.0 x x x x x x libffi/3.2.1-GCCcore-8.3.0 x x x x x x libffi/3.2.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgcrypt/", "title": "libgcrypt", "text": ""}, {"location": "available_software/detail/libgcrypt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgcrypt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgcrypt, load one of these modules using a module load command like:

          module load libgcrypt/1.9.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgcrypt/1.9.3-GCCcore-11.2.0 x x x x x x libgcrypt/1.9.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgd/", "title": "libgd", "text": ""}, {"location": "available_software/detail/libgd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgd, load one of these modules using a module load command like:

          module load libgd/2.3.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgd/2.3.3-GCCcore-12.3.0 x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x libgd/2.3.3-GCCcore-11.3.0 x x x x x x libgd/2.3.3-GCCcore-11.2.0 x x x x x x libgd/2.3.1-GCCcore-10.3.0 x x x x x x libgd/2.3.0-GCCcore-10.2.0 x x x x x x libgd/2.3.0-GCCcore-9.3.0 - x x - x x libgd/2.2.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libgeotiff/", "title": "libgeotiff", "text": ""}, {"location": "available_software/detail/libgeotiff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgeotiff, load one of these modules using a module load command like:

          module load libgeotiff/1.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x libgeotiff/1.7.1-GCCcore-11.3.0 x x x x x x libgeotiff/1.7.0-GCCcore-11.2.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.3.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.2.0 - x x x x x libgeotiff/1.5.1-GCCcore-9.3.0 - x x - x x libgeotiff/1.5.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libgit2/", "title": "libgit2", "text": ""}, {"location": "available_software/detail/libgit2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgit2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgit2, load one of these modules using a module load command like:

          module load libgit2/1.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgit2/1.7.1-GCCcore-12.3.0 x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x libgit2/1.4.3-GCCcore-11.3.0 x x x x x x libgit2/1.1.1-GCCcore-11.2.0 x x x x x x libgit2/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/libglvnd/", "title": "libglvnd", "text": ""}, {"location": "available_software/detail/libglvnd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libglvnd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libglvnd, load one of these modules using a module load command like:

          module load libglvnd/1.6.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x libglvnd/1.4.0-GCCcore-11.3.0 x x x x x x libglvnd/1.3.3-GCCcore-11.2.0 x x x x x x libglvnd/1.3.3-GCCcore-10.3.0 x x x x x x libglvnd/1.3.2-GCCcore-10.2.0 x x x x x x libglvnd/1.2.0-GCCcore-9.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgpg-error/", "title": "libgpg-error", "text": ""}, {"location": "available_software/detail/libgpg-error/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgpg-error installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgpg-error, load one of these modules using a module load command like:

          module load libgpg-error/1.42-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgpg-error/1.42-GCCcore-11.2.0 x x x x x x libgpg-error/1.42-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgpuarray/", "title": "libgpuarray", "text": ""}, {"location": "available_software/detail/libgpuarray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgpuarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgpuarray, load one of these modules using a module load command like:

          module load libgpuarray/0.7.6-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgpuarray/0.7.6-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/libharu/", "title": "libharu", "text": ""}, {"location": "available_software/detail/libharu/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libharu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libharu, load one of these modules using a module load command like:

          module load libharu/2.3.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libharu/2.3.0-foss-2021b x x x - x x libharu/2.3.0-GCCcore-10.3.0 - x x - x x libharu/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libheif/", "title": "libheif", "text": ""}, {"location": "available_software/detail/libheif/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libheif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libheif, load one of these modules using a module load command like:

          module load libheif/1.16.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libheif/1.16.2-GCC-11.3.0 x x x x x x libheif/1.12.0-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libiconv/", "title": "libiconv", "text": ""}, {"location": "available_software/detail/libiconv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libiconv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libiconv, load one of these modules using a module load command like:

          module load libiconv/1.17-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libiconv/1.17-GCCcore-13.2.0 x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x libiconv/1.17-GCCcore-11.3.0 x x x x x x libiconv/1.16-GCCcore-11.2.0 x x x x x x libiconv/1.16-GCCcore-10.3.0 x x x x x x libiconv/1.16-GCCcore-10.2.0 x x x x x x libiconv/1.16-GCCcore-9.3.0 x x x x x x libiconv/1.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libidn/", "title": "libidn", "text": ""}, {"location": "available_software/detail/libidn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libidn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libidn, load one of these modules using a module load command like:

          module load libidn/1.38-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libidn/1.38-GCCcore-11.2.0 x x x x x x libidn/1.36-GCCcore-10.3.0 - x x - x x libidn/1.35-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/libidn2/", "title": "libidn2", "text": ""}, {"location": "available_software/detail/libidn2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libidn2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libidn2, load one of these modules using a module load command like:

          module load libidn2/2.3.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libidn2/2.3.2-GCCcore-11.2.0 x x x x x x libidn2/2.3.0-GCCcore-10.3.0 - x x x x x libidn2/2.3.0-GCCcore-10.2.0 x x x x x x libidn2/2.3.0-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/libjpeg-turbo/", "title": "libjpeg-turbo", "text": ""}, {"location": "available_software/detail/libjpeg-turbo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libjpeg-turbo, load one of these modules using a module load command like:

          module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x libjpeg-turbo/2.1.3-GCCcore-11.3.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-11.2.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-10.3.0 x x x x x x libjpeg-turbo/2.0.5-GCCcore-10.2.0 x x x x x x libjpeg-turbo/2.0.4-GCCcore-9.3.0 - x x - x x libjpeg-turbo/2.0.3-GCCcore-8.3.0 x x x - x x libjpeg-turbo/2.0.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libjxl/", "title": "libjxl", "text": ""}, {"location": "available_software/detail/libjxl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libjxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libjxl, load one of these modules using a module load command like:

          module load libjxl/0.8.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libjxl/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libleidenalg/", "title": "libleidenalg", "text": ""}, {"location": "available_software/detail/libleidenalg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libleidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libleidenalg, load one of these modules using a module load command like:

          module load libleidenalg/0.11.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libleidenalg/0.11.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/libmad/", "title": "libmad", "text": ""}, {"location": "available_software/detail/libmad/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libmad, load one of these modules using a module load command like:

          module load libmad/0.15.1b-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmad/0.15.1b-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmatheval/", "title": "libmatheval", "text": ""}, {"location": "available_software/detail/libmatheval/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmatheval installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libmatheval, load one of these modules using a module load command like:

          module load libmatheval/1.1.11-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmatheval/1.1.11-GCCcore-9.3.0 - x x - x x libmatheval/1.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libmaus2/", "title": "libmaus2", "text": ""}, {"location": "available_software/detail/libmaus2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmaus2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libmaus2, load one of these modules using a module load command like:

          module load libmaus2/2.0.813-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmaus2/2.0.813-GCC-12.3.0 x x x x x x libmaus2/2.0.499-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmypaint/", "title": "libmypaint", "text": ""}, {"location": "available_software/detail/libmypaint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmypaint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libmypaint, load one of these modules using a module load command like:

          module load libmypaint/1.6.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmypaint/1.6.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/libobjcryst/", "title": "libobjcryst", "text": ""}, {"location": "available_software/detail/libobjcryst/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libobjcryst, load one of these modules using a module load command like:

          module load libobjcryst/2021.1.2-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libobjcryst/2021.1.2-intel-2020a - - - - - x libobjcryst/2021.1.2-foss-2021b x x x - x x libobjcryst/2017.2.3-intel-2020a - x x - x x"}, {"location": "available_software/detail/libogg/", "title": "libogg", "text": ""}, {"location": "available_software/detail/libogg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libogg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libogg, load one of these modules using a module load command like:

          module load libogg/1.3.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libogg/1.3.5-GCCcore-12.3.0 x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x libogg/1.3.5-GCCcore-11.3.0 x x x x x x libogg/1.3.5-GCCcore-11.2.0 x x x x x x libogg/1.3.4-GCCcore-10.3.0 x x x x x x libogg/1.3.4-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libopus/", "title": "libopus", "text": ""}, {"location": "available_software/detail/libopus/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libopus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libopus, load one of these modules using a module load command like:

          module load libopus/1.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libopus/1.4-GCCcore-12.3.0 x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x libopus/1.3.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libpciaccess/", "title": "libpciaccess", "text": ""}, {"location": "available_software/detail/libpciaccess/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libpciaccess, load one of these modules using a module load command like:

          module load libpciaccess/0.17-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libpciaccess/0.17-GCCcore-13.2.0 x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x libpciaccess/0.16-GCCcore-11.3.0 x x x x x x libpciaccess/0.16-GCCcore-11.2.0 x x x x x x libpciaccess/0.16-GCCcore-10.3.0 x x x x x x libpciaccess/0.16-GCCcore-10.2.0 x x x x x x libpciaccess/0.16-GCCcore-9.3.0 x x x x x x libpciaccess/0.14-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libpng/", "title": "libpng", "text": ""}, {"location": "available_software/detail/libpng/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libpng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libpng, load one of these modules using a module load command like:

          module load libpng/1.6.40-GCCcore-13.2.0\n
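
          libpng is a C library that you normally consume at compile time. Below is a minimal sketch of building against it, assuming a source file demo.c of your own and a matching GCC/13.2.0 compiler module; EasyBuild modules normally export CPATH and LIBRARY_PATH, so a plain -lpng is usually enough:

          module load GCC/13.2.0 libpng/1.6.40-GCCcore-13.2.0\ngcc demo.c -lpng -o demo\n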

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libpng/1.6.40-GCCcore-13.2.0 x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x libpng/1.6.37-GCCcore-11.3.0 x x x x x x libpng/1.6.37-GCCcore-11.2.0 x x x x x x libpng/1.6.37-GCCcore-10.3.0 x x x x x x libpng/1.6.37-GCCcore-10.2.0 x x x x x x libpng/1.6.37-GCCcore-9.3.0 x x x x x x libpng/1.6.37-GCCcore-8.3.0 x x x - x x libpng/1.6.36-GCCcore-8.2.0 - x - - - - libpng/1.2.58 - x x x x x"}, {"location": "available_software/detail/libpsl/", "title": "libpsl", "text": ""}, {"location": "available_software/detail/libpsl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libpsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libpsl, load one of these modules using a module load command like:

          module load libpsl/0.21.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libpsl/0.21.1-GCCcore-11.2.0 x x x x x x libpsl/0.21.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libreadline/", "title": "libreadline", "text": ""}, {"location": "available_software/detail/libreadline/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libreadline installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libreadline, load one of these modules using a module load command like:

          module load libreadline/8.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libreadline/8.2-GCCcore-13.2.0 x x x x x x libreadline/8.2-GCCcore-12.3.0 x x x x x x libreadline/8.2-GCCcore-12.2.0 x x x x x x libreadline/8.1.2-GCCcore-11.3.0 x x x x x x libreadline/8.1-GCCcore-11.2.0 x x x x x x libreadline/8.1-GCCcore-10.3.0 x x x x x x libreadline/8.0-GCCcore-10.2.0 x x x x x x libreadline/8.0-GCCcore-9.3.0 x x x x x x libreadline/8.0-GCCcore-8.3.0 x x x x x x libreadline/8.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/librosa/", "title": "librosa", "text": ""}, {"location": "available_software/detail/librosa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which librosa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using librosa, load one of these modules using a module load command like:

          module load librosa/0.7.2-foss-2019b-Python-3.7.4\n
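
          librosa is a Python package; a quick check that the module works is to import it and print its version. This assumes the module brings its own Python, as the -Python-3.7.4 suffix suggests:

          module load librosa/0.7.2-foss-2019b-Python-3.7.4\npython -c 'import librosa; print(librosa.__version__)'\n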

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty librosa/0.7.2-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/librsvg/", "title": "librsvg", "text": ""}, {"location": "available_software/detail/librsvg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which librsvg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using librsvg, load one of these modules using a module load command like:

          module load librsvg/2.51.2-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty librsvg/2.51.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/librttopo/", "title": "librttopo", "text": ""}, {"location": "available_software/detail/librttopo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which librttopo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using librttopo, load one of these modules using a module load command like:

          module load librttopo/1.1.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty librttopo/1.1.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libsigc%2B%2B/", "title": "libsigc++", "text": ""}, {"location": "available_software/detail/libsigc%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libsigc++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libsigc++, load one of these modules using a module load command like:

          module load libsigc++/2.10.8-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libsigc++/2.10.8-GCCcore-10.3.0 - x x - x x libsigc++/2.10.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsndfile/", "title": "libsndfile", "text": ""}, {"location": "available_software/detail/libsndfile/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libsndfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libsndfile, load one of these modules using a module load command like:

          module load libsndfile/1.2.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x libsndfile/1.1.0-GCCcore-11.3.0 x x x x x x libsndfile/1.0.31-GCCcore-11.2.0 x x x x x x libsndfile/1.0.31-GCCcore-10.3.0 x x x x x x libsndfile/1.0.28-GCCcore-10.2.0 x x x x x x libsndfile/1.0.28-GCCcore-9.3.0 - x x - x x libsndfile/1.0.28-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsodium/", "title": "libsodium", "text": ""}, {"location": "available_software/detail/libsodium/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libsodium installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libsodium, load one of these modules using a module load command like:

          module load libsodium/1.0.18-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libsodium/1.0.18-GCCcore-12.3.0 x x x x x x libsodium/1.0.18-GCCcore-12.2.0 x x x x x x libsodium/1.0.18-GCCcore-11.3.0 x x x x x x libsodium/1.0.18-GCCcore-11.2.0 x x x x x x libsodium/1.0.18-GCCcore-10.3.0 x x x x x x libsodium/1.0.18-GCCcore-10.2.0 x x x x x x libsodium/1.0.18-GCCcore-9.3.0 x x x x x x libsodium/1.0.18-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libspatialindex/", "title": "libspatialindex", "text": ""}, {"location": "available_software/detail/libspatialindex/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libspatialindex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libspatialindex, load one of these modules using a module load command like:

          module load libspatialindex/1.9.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libspatialindex/1.9.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libspatialite/", "title": "libspatialite", "text": ""}, {"location": "available_software/detail/libspatialite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libspatialite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libspatialite, load one of these modules using a module load command like:

          module load libspatialite/5.0.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libspatialite/5.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libtasn1/", "title": "libtasn1", "text": ""}, {"location": "available_software/detail/libtasn1/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libtasn1 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libtasn1, load one of these modules using a module load command like:

          module load libtasn1/4.18.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libtasn1/4.18.0-GCCcore-11.2.0 x x x x x x libtasn1/4.17.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libtirpc/", "title": "libtirpc", "text": ""}, {"location": "available_software/detail/libtirpc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libtirpc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libtirpc, load one of these modules using a module load command like:

          module load libtirpc/1.3.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x libtirpc/1.3.2-GCCcore-11.3.0 x x x x x x libtirpc/1.3.2-GCCcore-11.2.0 x x x x x x libtirpc/1.3.2-GCCcore-10.3.0 x x x x x x libtirpc/1.3.1-GCCcore-10.2.0 - x x x x x libtirpc/1.2.6-GCCcore-9.3.0 - - x - x x libtirpc/1.2.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libtool/", "title": "libtool", "text": ""}, {"location": "available_software/detail/libtool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libtool, load one of these modules using a module load command like:

          module load libtool/2.4.7-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libtool/2.4.7-GCCcore-13.2.0 x x x x x x libtool/2.4.7-GCCcore-12.3.0 x x x x x x libtool/2.4.7-GCCcore-12.2.0 x x x x x x libtool/2.4.7-GCCcore-11.3.0 x x x x x x libtool/2.4.7 x x x x x x libtool/2.4.6-GCCcore-11.2.0 x x x x x x libtool/2.4.6-GCCcore-10.3.0 x x x x x x libtool/2.4.6-GCCcore-10.2.0 x x x x x x libtool/2.4.6-GCCcore-9.3.0 x x x x x x libtool/2.4.6-GCCcore-8.3.0 x x x x x x libtool/2.4.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libunistring/", "title": "libunistring", "text": ""}, {"location": "available_software/detail/libunistring/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libunistring installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libunistring, load one of these modules using a module load command like:

          module load libunistring/1.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libunistring/1.0-GCCcore-11.2.0 x x x x x x libunistring/0.9.10-GCCcore-10.3.0 x x x - x x libunistring/0.9.10-GCCcore-9.3.0 - x x - x x libunistring/0.9.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libunwind/", "title": "libunwind", "text": ""}, {"location": "available_software/detail/libunwind/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libunwind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libunwind, load one of these modules using a module load command like:

          module load libunwind/1.6.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libunwind/1.6.2-GCCcore-12.3.0 x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x libunwind/1.6.2-GCCcore-11.3.0 x x x x x x libunwind/1.5.0-GCCcore-11.2.0 x x x x x x libunwind/1.4.0-GCCcore-10.3.0 x x x x x x libunwind/1.4.0-GCCcore-10.2.0 x x x x x x libunwind/1.3.1-GCCcore-9.3.0 - x x - x x libunwind/1.3.1-GCCcore-8.3.0 x x x - x x libunwind/1.3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libvdwxc/", "title": "libvdwxc", "text": ""}, {"location": "available_software/detail/libvdwxc/#available-modules", "title": "Available modules", "text": "

The overview below shows which libvdwxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libvdwxc, load one of these modules using a module load command like:

          module load libvdwxc/0.4.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libvdwxc/0.4.0-foss-2021b x x x - x x libvdwxc/0.4.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/libvorbis/", "title": "libvorbis", "text": ""}, {"location": "available_software/detail/libvorbis/#available-modules", "title": "Available modules", "text": "

The overview below shows which libvorbis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libvorbis, load one of these modules using a module load command like:

          module load libvorbis/1.3.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x libvorbis/1.3.7-GCCcore-11.3.0 x x x x x x libvorbis/1.3.7-GCCcore-11.2.0 x x x x x x libvorbis/1.3.7-GCCcore-10.3.0 x x x x x x libvorbis/1.3.7-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libvori/", "title": "libvori", "text": ""}, {"location": "available_software/detail/libvori/#available-modules", "title": "Available modules", "text": "

The overview below shows which libvori installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libvori, load one of these modules using a module load command like:

          module load libvori/220621-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libvori/220621-GCCcore-12.3.0 x x x x x x libvori/220621-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/libwebp/", "title": "libwebp", "text": ""}, {"location": "available_software/detail/libwebp/#available-modules", "title": "Available modules", "text": "

The overview below shows which libwebp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libwebp, load one of these modules using a module load command like:

          module load libwebp/1.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libwebp/1.3.1-GCCcore-12.3.0 x x x x x x libwebp/1.3.1-GCCcore-12.2.0 x x x x x x libwebp/1.2.4-GCCcore-11.3.0 x x x x x x libwebp/1.2.0-GCCcore-11.2.0 x x x x x x libwebp/1.2.0-GCCcore-10.3.0 x x x - x x libwebp/1.1.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libwpe/", "title": "libwpe", "text": ""}, {"location": "available_software/detail/libwpe/#available-modules", "title": "Available modules", "text": "

The overview below shows which libwpe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libwpe, load one of these modules using a module load command like:

          module load libwpe/1.13.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libwpe/1.13.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libxc/", "title": "libxc", "text": ""}, {"location": "available_software/detail/libxc/#available-modules", "title": "Available modules", "text": "

The overview below shows which libxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxc, load one of these modules using a module load command like:

          module load libxc/6.2.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxc/6.2.2-GCC-12.3.0 x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x libxc/5.2.3-intel-compilers-2022.1.0 x x x x x x libxc/5.2.3-GCC-11.3.0 x x x x x x libxc/5.1.6-intel-compilers-2021.4.0 x x x x x x libxc/5.1.6-GCC-11.2.0 x x x - x x libxc/5.1.5-intel-compilers-2021.2.0 - x x - x x libxc/5.1.5-GCC-10.3.0 x x x x x x libxc/5.1.2-GCC-10.2.0 - x x x x x libxc/4.3.4-iccifort-2020.4.304 - x x x x x libxc/4.3.4-iccifort-2020.1.217 - x x - x x libxc/4.3.4-iccifort-2019.5.281 - x x - x x libxc/4.3.4-GCC-10.2.0 - x x x x x libxc/4.3.4-GCC-9.3.0 - x x - x x libxc/4.3.4-GCC-8.3.0 - x x - x x libxc/3.0.1-iomkl-2020a - x - - - - libxc/3.0.1-intel-2020a - x x - x x libxc/3.0.1-intel-2019b - x - - - - libxc/3.0.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/libxml%2B%2B/", "title": "libxml++", "text": ""}, {"location": "available_software/detail/libxml%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which libxml++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxml++, load one of these modules using a module load command like:

          module load libxml++/2.42.1-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxml++/2.42.1-GCC-10.3.0 - x x - x x libxml++/2.40.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxml2/", "title": "libxml2", "text": ""}, {"location": "available_software/detail/libxml2/#available-modules", "title": "Available modules", "text": "

The overview below shows which libxml2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxml2, load one of these modules using a module load command like:

          module load libxml2/2.11.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxml2/2.11.5-GCCcore-13.2.0 x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x libxml2/2.9.13-GCCcore-11.3.0 x x x x x x libxml2/2.9.10-GCCcore-11.2.0 x x x x x x libxml2/2.9.10-GCCcore-10.3.0 x x x x x x libxml2/2.9.10-GCCcore-10.2.0 x x x x x x libxml2/2.9.10-GCCcore-9.3.0 x x x x x x libxml2/2.9.9-GCCcore-8.3.0 x x x x x x libxml2/2.9.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libxslt/", "title": "libxslt", "text": ""}, {"location": "available_software/detail/libxslt/#available-modules", "title": "Available modules", "text": "

The overview below shows which libxslt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxslt, load one of these modules using a module load command like:

          module load libxslt/1.1.38-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxslt/1.1.38-GCCcore-13.2.0 x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x libxslt/1.1.34-GCCcore-11.3.0 x x x x x x libxslt/1.1.34-GCCcore-11.2.0 x x x x x x libxslt/1.1.34-GCCcore-10.3.0 x x x x x x libxslt/1.1.34-GCCcore-10.2.0 x x x x x x libxslt/1.1.34-GCCcore-9.3.0 - x x - x x libxslt/1.1.34-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxsmm/", "title": "libxsmm", "text": ""}, {"location": "available_software/detail/libxsmm/#available-modules", "title": "Available modules", "text": "

The overview below shows which libxsmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxsmm, load one of these modules using a module load command like:

          module load libxsmm/1.17-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxsmm/1.17-GCC-12.3.0 x x x x x x libxsmm/1.17-GCC-12.2.0 x x x x x x libxsmm/1.17-GCC-11.3.0 x x x x x x libxsmm/1.16.2-GCC-10.3.0 - x x x x x libxsmm/1.16.1-iccifort-2020.4.304 - x x - x - libxsmm/1.16.1-iccifort-2020.1.217 - x x - x x libxsmm/1.16.1-iccifort-2019.5.281 - x - - - - libxsmm/1.16.1-GCC-10.2.0 - x x x x x libxsmm/1.16.1-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/libyaml/", "title": "libyaml", "text": ""}, {"location": "available_software/detail/libyaml/#available-modules", "title": "Available modules", "text": "

The overview below shows which libyaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libyaml, load one of these modules using a module load command like:

          module load libyaml/0.2.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libyaml/0.2.5-GCCcore-12.3.0 x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x libyaml/0.2.5-GCCcore-11.3.0 x x x x x x libyaml/0.2.5-GCCcore-11.2.0 x x x x x x libyaml/0.2.5-GCCcore-10.3.0 x x x x x x libyaml/0.2.5-GCCcore-10.2.0 x x x x x x libyaml/0.2.2-GCCcore-9.3.0 x x x x x x libyaml/0.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libzip/", "title": "libzip", "text": ""}, {"location": "available_software/detail/libzip/#available-modules", "title": "Available modules", "text": "

The overview below shows which libzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libzip, load one of these modules using a module load command like:

          module load libzip/1.7.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libzip/1.7.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/lifelines/", "title": "lifelines", "text": ""}, {"location": "available_software/detail/lifelines/#available-modules", "title": "Available modules", "text": "

The overview below shows which lifelines installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lifelines, load one of these modules using a module load command like:

          module load lifelines/0.27.4-foss-2022a\n
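Once a lifelines module is loaded, the package is importable from the Python of the same toolchain. A minimal, hypothetical sketch (toy numbers, not taken from this documentation) that fits a Kaplan-Meier estimator:

# Minimal lifelines sketch with made-up toy data (hypothetical example).
from lifelines import KaplanMeierFitter

durations = [5, 6, 6, 2, 4, 4]   # follow-up time per subject
observed  = [1, 0, 0, 1, 1, 1]   # 1 = event observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.median_survival_time_)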

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lifelines/0.27.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/likwid/", "title": "likwid", "text": ""}, {"location": "available_software/detail/likwid/#available-modules", "title": "Available modules", "text": "

The overview below shows which likwid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using likwid, load one of these modules using a module load command like:

          module load likwid/5.0.1-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty likwid/5.0.1-GCCcore-8.3.0 - - x - x -"}, {"location": "available_software/detail/lmoments3/", "title": "lmoments3", "text": ""}, {"location": "available_software/detail/lmoments3/#available-modules", "title": "Available modules", "text": "

The overview below shows which lmoments3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lmoments3, load one of these modules using a module load command like:

          module load lmoments3/1.0.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lmoments3/1.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/longread_umi/", "title": "longread_umi", "text": ""}, {"location": "available_software/detail/longread_umi/#available-modules", "title": "Available modules", "text": "

The overview below shows which longread_umi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using longread_umi, load one of these modules using a module load command like:

          module load longread_umi/0.3.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty longread_umi/0.3.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/loomR/", "title": "loomR", "text": ""}, {"location": "available_software/detail/loomR/#available-modules", "title": "Available modules", "text": "

The overview below shows which loomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using loomR, load one of these modules using a module load command like:

          module load loomR/0.2.0-20180425-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty loomR/0.2.0-20180425-foss-2023a-R-4.3.2 x x x x x x loomR/0.2.0-20180425-foss-2022b-R-4.2.2 x x x x x x loomR/0.2.0-20180425-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/loompy/", "title": "loompy", "text": ""}, {"location": "available_software/detail/loompy/#available-modules", "title": "Available modules", "text": "

The overview below shows which loompy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using loompy, load one of these modules using a module load command like:

          module load loompy/3.0.7-intel-2021b\n
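With a loompy module loaded, a minimal sketch (random data and a hypothetical file name example.loom) that creates a small .loom file and reads it back:

# Minimal loompy sketch; 'example.loom' is a hypothetical output file.
import numpy as np
import loompy

matrix = np.random.rand(5, 3)                                # 5 genes x 3 cells
row_attrs = {"Gene": np.array([f"g{i}" for i in range(5)])}
col_attrs = {"Cell": np.array([f"c{j}" for j in range(3)])}
loompy.create("example.loom", matrix, row_attrs, col_attrs)  # write the file
with loompy.connect("example.loom") as ds:                   # reopen and inspect
    print(ds.shape)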

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty loompy/3.0.7-intel-2021b x x x - x x loompy/3.0.7-foss-2022a x x x x x x loompy/3.0.7-foss-2021b x x x - x x loompy/3.0.7-foss-2021a x x x x x x loompy/3.0.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/louvain/", "title": "louvain", "text": ""}, {"location": "available_software/detail/louvain/#available-modules", "title": "Available modules", "text": "

The overview below shows which louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using louvain, load one of these modules using a module load command like:

          module load louvain/0.8.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty louvain/0.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/lpsolve/", "title": "lpsolve", "text": ""}, {"location": "available_software/detail/lpsolve/#available-modules", "title": "Available modules", "text": "

The overview below shows which lpsolve installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lpsolve, load one of these modules using a module load command like:

          module load lpsolve/5.5.2.11-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lpsolve/5.5.2.11-GCC-11.2.0 x x x x x x lpsolve/5.5.2.11-GCC-10.2.0 x x x x x x lpsolve/5.5.2.5-iccifort-2019.5.281 - x x - x x lpsolve/5.5.2.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/lxml/", "title": "lxml", "text": ""}, {"location": "available_software/detail/lxml/#available-modules", "title": "Available modules", "text": "

The overview below shows which lxml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lxml, load one of these modules using a module load command like:

          module load lxml/4.9.3-GCCcore-13.2.0\n
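With an lxml module loaded, a minimal parsing sketch (inline XML string, purely illustrative):

# Minimal lxml sketch: parse an XML string, read an attribute and a child element.
from lxml import etree

root = etree.fromstring("<job cores='4'><name>test</name></job>")
print(root.get("cores"), root.findtext("name"))  # -> 4 test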

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lxml/4.9.3-GCCcore-13.2.0 x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x lxml/4.9.2-GCCcore-12.2.0 x x x x x x lxml/4.9.1-GCCcore-11.3.0 x x x x x x lxml/4.6.3-GCCcore-11.2.0 x x x x x x lxml/4.6.3-GCCcore-10.3.0 x x x x x x lxml/4.6.2-GCCcore-10.2.0 x x x x x x lxml/4.5.2-GCCcore-9.3.0 - x x - x x lxml/4.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/lz4/", "title": "lz4", "text": ""}, {"location": "available_software/detail/lz4/#available-modules", "title": "Available modules", "text": "

The overview below shows which lz4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lz4, load one of these modules using a module load command like:

          module load lz4/1.9.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lz4/1.9.4-GCCcore-13.2.0 x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x lz4/1.9.3-GCCcore-11.3.0 x x x x x x lz4/1.9.3-GCCcore-11.2.0 x x x x x x lz4/1.9.3-GCCcore-10.3.0 x x x x x x lz4/1.9.2-GCCcore-10.2.0 x x x x x x lz4/1.9.2-GCCcore-9.3.0 - x x x x x lz4/1.9.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/maeparser/", "title": "maeparser", "text": ""}, {"location": "available_software/detail/maeparser/#available-modules", "title": "Available modules", "text": "

The overview below shows which maeparser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using maeparser, load one of these modules using a module load command like:

          module load maeparser/1.3.0-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty maeparser/1.3.0-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/magma/", "title": "magma", "text": ""}, {"location": "available_software/detail/magma/#available-modules", "title": "Available modules", "text": "

The overview below shows which magma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using magma, load one of these modules using a module load command like:

          module load magma/2.7.2-foss-2023a-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty magma/2.7.2-foss-2023a-CUDA-12.1.1 x - x - x - magma/2.6.2-foss-2022a-CUDA-11.7.0 x - x - x - magma/2.6.1-foss-2021a-CUDA-11.3.1 x - - - x - magma/2.5.4-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/mahotas/", "title": "mahotas", "text": ""}, {"location": "available_software/detail/mahotas/#available-modules", "title": "Available modules", "text": "

The overview below shows which mahotas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mahotas, load one of these modules using a module load command like:

          module load mahotas/1.4.13-foss-2022a\n
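After loading mahotas, a minimal sketch on a synthetic image (no input files needed, illustrative only) that applies an Otsu threshold and labels connected components:

# Minimal mahotas sketch on a synthetic image (hypothetical example).
import numpy as np
import mahotas as mh

img = np.zeros((64, 64), dtype=np.uint8)
img[10:20, 10:20] = 200                            # two bright squares
img[40:50, 40:50] = 150
labeled, n_objects = mh.label(img > mh.otsu(img))  # threshold, then label regions
print(n_objects)                                   # expected: 2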

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mahotas/1.4.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/make/", "title": "make", "text": ""}, {"location": "available_software/detail/make/#available-modules", "title": "Available modules", "text": "

The overview below shows which make installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using make, load one of these modules using a module load command like:

          module load make/4.4.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty make/4.4.1-GCCcore-13.2.0 x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x make/4.3-GCCcore-12.2.0 - x x - x - make/4.3-GCCcore-11.3.0 x x x - x - make/4.3-GCCcore-11.2.0 x x - x - - make/4.3-GCCcore-10.3.0 x x x - x x make/4.3-GCCcore-10.2.0 x x - - - - make/4.3-GCCcore-9.3.0 - x x - x x make/4.2.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/makedepend/", "title": "makedepend", "text": ""}, {"location": "available_software/detail/makedepend/#available-modules", "title": "Available modules", "text": "

The overview below shows which makedepend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using makedepend, load one of these modules using a module load command like:

          module load makedepend/1.0.6-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty makedepend/1.0.6-GCCcore-10.3.0 - x x - x x makedepend/1.0.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/makeinfo/", "title": "makeinfo", "text": ""}, {"location": "available_software/detail/makeinfo/#available-modules", "title": "Available modules", "text": "

The overview below shows which makeinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using makeinfo, load one of these modules using a module load command like:

          module load makeinfo/7.0.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty makeinfo/7.0.3-GCCcore-12.3.0 x x x x x x makeinfo/6.7-GCCcore-10.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.3.0 - x x - x x makeinfo/6.7-GCCcore-10.2.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.2.0 - x x x x x makeinfo/6.7-GCCcore-9.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-9.3.0 - x x - x x makeinfo/6.7-GCCcore-8.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/manta/", "title": "manta", "text": ""}, {"location": "available_software/detail/manta/#available-modules", "title": "Available modules", "text": "

The overview below shows which manta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using manta, load one of these modules using a module load command like:

          module load manta/1.6.0-gompi-2020a-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty manta/1.6.0-gompi-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/mapDamage/", "title": "mapDamage", "text": ""}, {"location": "available_software/detail/mapDamage/#available-modules", "title": "Available modules", "text": "

The overview below shows which mapDamage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mapDamage, load one of these modules using a module load command like:

          module load mapDamage/2.2.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mapDamage/2.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/matplotlib/", "title": "matplotlib", "text": ""}, {"location": "available_software/detail/matplotlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which matplotlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using matplotlib, load one of these modules using a module load command like:

          module load matplotlib/3.7.2-gfbf-2023a\n
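After loading a matplotlib module, plots on the cluster are typically written to a file, since compute nodes have no display. A minimal sketch (the output name plot.png is hypothetical):

# Minimal matplotlib sketch; the Agg backend renders to file without a display.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
ax.set_xlabel("x")
ax.set_ylabel("x squared")
fig.savefig("plot.png")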

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty matplotlib/3.7.2-gfbf-2023a x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x matplotlib/3.5.2-intel-2022a x x x x x x matplotlib/3.5.2-foss-2022a x x x x x x matplotlib/3.5.2-foss-2021b x - x - x - matplotlib/3.4.3-intel-2021b x x x - x x matplotlib/3.4.3-foss-2021b x x x x x x matplotlib/3.4.2-gomkl-2021a x x x x x x matplotlib/3.4.2-foss-2021a x x x x x x matplotlib/3.3.3-intel-2020b - x x - x x matplotlib/3.3.3-fosscuda-2020b x - - - x - matplotlib/3.3.3-foss-2020b x x x x x x matplotlib/3.2.1-intel-2020a-Python-3.8.2 x x x x x x matplotlib/3.2.1-foss-2020a-Python-3.8.2 - x x - x x matplotlib/3.1.1-intel-2019b-Python-3.7.4 - x x - x x matplotlib/3.1.1-foss-2019b-Python-3.7.4 - x x - x x matplotlib/2.2.5-intel-2020a-Python-2.7.18 - x x - x x matplotlib/2.2.5-foss-2020b-Python-2.7.18 - x x x x x matplotlib/2.2.4-intel-2019b-Python-2.7.16 - x x - x x matplotlib/2.2.4-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/maturin/", "title": "maturin", "text": ""}, {"location": "available_software/detail/maturin/#available-modules", "title": "Available modules", "text": "

The overview below shows which maturin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using maturin, load one of these modules using a module load command like:

          module load maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x maturin/1.4.0-GCCcore-12.2.0-Rust-1.75.0 x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x maturin/1.1.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/mauveAligner/", "title": "mauveAligner", "text": ""}, {"location": "available_software/detail/mauveAligner/#available-modules", "title": "Available modules", "text": "

The overview below shows which mauveAligner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mauveAligner, load one of these modules using a module load command like:

          module load mauveAligner/4736-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mauveAligner/4736-gompi-2020a - x x - x x"}, {"location": "available_software/detail/maze/", "title": "maze", "text": ""}, {"location": "available_software/detail/maze/#available-modules", "title": "Available modules", "text": "

The overview below shows which maze installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using maze, load one of these modules using a module load command like:

          module load maze/20170124-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty maze/20170124-foss-2020b - x x x x x"}, {"location": "available_software/detail/mcu/", "title": "mcu", "text": ""}, {"location": "available_software/detail/mcu/#available-modules", "title": "Available modules", "text": "

The overview below shows which mcu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mcu, load one of these modules using a module load command like:

          module load mcu/2021-04-06-gomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mcu/2021-04-06-gomkl-2021a x x x - x x"}, {"location": "available_software/detail/medImgProc/", "title": "medImgProc", "text": ""}, {"location": "available_software/detail/medImgProc/#available-modules", "title": "Available modules", "text": "

The overview below shows which medImgProc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using medImgProc, load one of these modules using a module load command like:

          module load medImgProc/2.5.7-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty medImgProc/2.5.7-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/medaka/", "title": "medaka", "text": ""}, {"location": "available_software/detail/medaka/#available-modules", "title": "Available modules", "text": "

The overview below shows which medaka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using medaka, load one of these modules using a module load command like:

          module load medaka/1.11.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty medaka/1.11.3-foss-2022a x x x x x x medaka/1.9.1-foss-2022a x x x x x x medaka/1.8.1-foss-2022a x x x x x x medaka/1.6.0-foss-2021b x x x - x x medaka/1.4.3-foss-2020b - x x x x x medaka/1.4.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.2.6-foss-2019b-Python-3.7.4 - x - - - - medaka/1.1.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.1.1-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/meshalyzer/", "title": "meshalyzer", "text": ""}, {"location": "available_software/detail/meshalyzer/#available-modules", "title": "Available modules", "text": "

The overview below shows which meshalyzer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using meshalyzer, load one of these modules using a module load command like:

          module load meshalyzer/20200308-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty meshalyzer/20200308-foss-2020a-Python-3.8.2 - x x - x x meshalyzer/2.2-foss-2020b - x x x x x meshalyzer/2.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/meshtool/", "title": "meshtool", "text": ""}, {"location": "available_software/detail/meshtool/#available-modules", "title": "Available modules", "text": "

The overview below shows which meshtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using meshtool, load one of these modules using a module load command like:

          module load meshtool/16-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty meshtool/16-GCC-10.2.0 - x x x x x meshtool/16-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/meson-python/", "title": "meson-python", "text": ""}, {"location": "available_software/detail/meson-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which meson-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using meson-python, load one of these modules using a module load command like:

          module load meson-python/0.15.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty meson-python/0.15.0-GCCcore-13.2.0 x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/metaWRAP/", "title": "metaWRAP", "text": ""}, {"location": "available_software/detail/metaWRAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which metaWRAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using metaWRAP, load one of these modules using a module load command like:

          module load metaWRAP/1.3-foss-2020b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty metaWRAP/1.3-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/metaerg/", "title": "metaerg", "text": ""}, {"location": "available_software/detail/metaerg/#available-modules", "title": "Available modules", "text": "

The overview below shows which metaerg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using metaerg, load one of these modules using a module load command like:

          module load metaerg/1.2.3-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty metaerg/1.2.3-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/methylpy/", "title": "methylpy", "text": ""}, {"location": "available_software/detail/methylpy/#available-modules", "title": "Available modules", "text": "

The overview below shows which methylpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using methylpy, load one of these modules using a module load command like:

          module load methylpy/1.2.9-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty methylpy/1.2.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/mgen/", "title": "mgen", "text": ""}, {"location": "available_software/detail/mgen/#available-modules", "title": "Available modules", "text": "

The overview below shows which mgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mgen, load one of these modules using a module load command like:

          module load mgen/1.2.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mgen/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/mgltools/", "title": "mgltools", "text": ""}, {"location": "available_software/detail/mgltools/#available-modules", "title": "Available modules", "text": "

The overview below shows which mgltools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mgltools, load one of these modules using a module load command like:

          module load mgltools/1.5.7\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mgltools/1.5.7 x x x - x x"}, {"location": "available_software/detail/mhcnuggets/", "title": "mhcnuggets", "text": ""}, {"location": "available_software/detail/mhcnuggets/#available-modules", "title": "Available modules", "text": "

The overview below shows which mhcnuggets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mhcnuggets, load one of these modules using a module load command like:

          module load mhcnuggets/2.3-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mhcnuggets/2.3-fosscuda-2020b - - - - x - mhcnuggets/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/microctools/", "title": "microctools", "text": ""}, {"location": "available_software/detail/microctools/#available-modules", "title": "Available modules", "text": "

The overview below shows which microctools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using microctools, load one of these modules using a module load command like:

          module load microctools/0.1.0-20201209-foss-2020b-R-4.0.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty microctools/0.1.0-20201209-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/minibar/", "title": "minibar", "text": ""}, {"location": "available_software/detail/minibar/#available-modules", "title": "Available modules", "text": "

The overview below shows which minibar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using minibar, load one of these modules using a module load command like:

          module load minibar/20200326-iccifort-2020.1.217-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty minibar/20200326-iccifort-2020.1.217-Python-3.8.2 - x x - x - minibar/20200326-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/minimap2/", "title": "minimap2", "text": ""}, {"location": "available_software/detail/minimap2/#available-modules", "title": "Available modules", "text": "

The overview below shows which minimap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using minimap2, load one of these modules using a module load command like:

          module load minimap2/2.26-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty minimap2/2.26-GCCcore-12.3.0 x x x x x x minimap2/2.26-GCCcore-12.2.0 x x x x x x minimap2/2.24-GCCcore-11.3.0 x x x x x x minimap2/2.24-GCCcore-11.2.0 x x x - x x minimap2/2.22-GCCcore-11.2.0 x x x - x x minimap2/2.20-GCCcore-10.3.0 x x x - x x minimap2/2.20-GCCcore-10.2.0 - x x - x x minimap2/2.18-GCCcore-10.2.0 - x x x x x minimap2/2.17-GCCcore-9.3.0 - x x - x x minimap2/2.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/minizip/", "title": "minizip", "text": ""}, {"location": "available_software/detail/minizip/#available-modules", "title": "Available modules", "text": "

The overview below shows which minizip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using minizip, load one of these modules using a module load command like:

          module load minizip/1.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty minizip/1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/misha/", "title": "misha", "text": ""}, {"location": "available_software/detail/misha/#available-modules", "title": "Available modules", "text": "

The overview below shows which misha installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using misha, load one of these modules using a module load command like:

          module load misha/4.0.10-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty misha/4.0.10-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/mkl-service/", "title": "mkl-service", "text": ""}, {"location": "available_software/detail/mkl-service/#available-modules", "title": "Available modules", "text": "

The overview below shows which mkl-service installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mkl-service, load one of these modules using a module load command like:

          module load mkl-service/2.3.0-intel-2021b\n
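With mkl-service loaded, the MKL thread count can be inspected and adjusted from Python. A minimal sketch (the value 4 is just an example and should match the cores requested for the job):

# Minimal mkl-service sketch: query and set the MKL thread count.
import mkl

print(mkl.get_max_threads())  # threads MKL will currently use
mkl.set_num_threads(4)        # example value; match it to your job's core count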

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mkl-service/2.3.0-intel-2021b x x x - x x mkl-service/2.3.0-intel-2020b - - x - x x mkl-service/2.3.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/mm-common/", "title": "mm-common", "text": ""}, {"location": "available_software/detail/mm-common/#available-modules", "title": "Available modules", "text": "

The overview below shows which mm-common installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mm-common, load one of these modules using a module load command like:

          module load mm-common/1.0.4-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mm-common/1.0.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/molmod/", "title": "molmod", "text": ""}, {"location": "available_software/detail/molmod/#available-modules", "title": "Available modules", "text": "

The overview below shows which molmod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using molmod, load one of these modules using a module load command like:

          module load molmod/1.4.5-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty molmod/1.4.5-intel-2020a-Python-3.8.2 x x x x x x molmod/1.4.5-intel-2019b-Python-3.7.4 - x x - x x molmod/1.4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/mongolite/", "title": "mongolite", "text": ""}, {"location": "available_software/detail/mongolite/#available-modules", "title": "Available modules", "text": "

The overview below shows which mongolite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mongolite, load one of these modules using a module load command like:

          module load mongolite/2.3.0-foss-2020b-R-4.0.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mongolite/2.3.0-foss-2020b-R-4.0.4 - x x x x x mongolite/2.3.0-foss-2020b-R-4.0.3 - x x x x x mongolite/2.3.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/monitor/", "title": "monitor", "text": ""}, {"location": "available_software/detail/monitor/#available-modules", "title": "Available modules", "text": "

The overview below shows which monitor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using monitor, load one of these modules using a module load command like:

          module load monitor/1.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty monitor/1.1.2 - x x - x -"}, {"location": "available_software/detail/mosdepth/", "title": "mosdepth", "text": ""}, {"location": "available_software/detail/mosdepth/#available-modules", "title": "Available modules", "text": "

The overview below shows which mosdepth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mosdepth, load one of these modules using a module load command like:

          module load mosdepth/0.3.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mosdepth/0.3.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/motionSegmentation/", "title": "motionSegmentation", "text": ""}, {"location": "available_software/detail/motionSegmentation/#available-modules", "title": "Available modules", "text": "

The overview below shows which motionSegmentation installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using motionSegmentation, load one of these modules using a module load command like:

          module load motionSegmentation/2.7.9-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty motionSegmentation/2.7.9-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/mpath/", "title": "mpath", "text": ""}, {"location": "available_software/detail/mpath/#available-modules", "title": "Available modules", "text": "

The overview below shows which mpath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mpath, load one of these modules using a module load command like:

          module load mpath/1.1.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mpath/1.1.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/mpi4py/", "title": "mpi4py", "text": ""}, {"location": "available_software/detail/mpi4py/#available-modules", "title": "Available modules", "text": "

The overview below shows which mpi4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mpi4py, load one of these modules using a module load command like:

          module load mpi4py/3.1.4-gompi-2023a\n
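Once an mpi4py module is loaded, a minimal MPI hello-world sketch (the script name hello.py is hypothetical):

# Minimal mpi4py sketch: each MPI rank reports its rank and the communicator size.
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()}")

Such a script is started through an MPI launcher inside the job script, e.g. mpirun python hello.py.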

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mpi4py/3.1.4-gompi-2023a x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x"}, {"location": "available_software/detail/mrcfile/", "title": "mrcfile", "text": ""}, {"location": "available_software/detail/mrcfile/#available-modules", "title": "Available modules", "text": "

The overview below shows which mrcfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mrcfile, load one of these modules using a module load command like:

          module load mrcfile/1.3.0-fosscuda-2020b\n
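With an mrcfile module loaded, a minimal sketch (hypothetical file name example.mrc) that writes and re-reads a tiny MRC volume:

# Minimal mrcfile sketch: write a small volume, then read its shape back.
import numpy as np
import mrcfile

with mrcfile.new("example.mrc", overwrite=True) as mrc:
    mrc.set_data(np.zeros((4, 4, 4), dtype=np.float32))
with mrcfile.open("example.mrc") as mrc:
    print(mrc.data.shape)  # -> (4, 4, 4)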

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mrcfile/1.3.0-fosscuda-2020b x - - - x - mrcfile/1.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/muParser/", "title": "muParser", "text": ""}, {"location": "available_software/detail/muParser/#available-modules", "title": "Available modules", "text": "

The overview below shows which muParser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using muParser, load one of these modules using a module load command like:

          module load muParser/2.3.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty muParser/2.3.4-GCCcore-12.3.0 x x x x x x muParser/2.3.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/mujoco-py/", "title": "mujoco-py", "text": ""}, {"location": "available_software/detail/mujoco-py/#available-modules", "title": "Available modules", "text": "

The overview below shows which mujoco-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mujoco-py, load one of these modules using a module load command like:

          module load mujoco-py/2.3.7-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mujoco-py/2.3.7-foss-2023a x x x x x x mujoco-py/2.1.2.14-foss-2021b x x x x x x"}, {"location": "available_software/detail/multichoose/", "title": "multichoose", "text": ""}, {"location": "available_software/detail/multichoose/#available-modules", "title": "Available modules", "text": "

The overview below shows which multichoose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using multichoose, load one of these modules using a module load command like:

          module load multichoose/1.0.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty multichoose/1.0.3-GCCcore-11.3.0 x x x x x x multichoose/1.0.3-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/mygene/", "title": "mygene", "text": ""}, {"location": "available_software/detail/mygene/#available-modules", "title": "Available modules", "text": "

The overview below shows which mygene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mygene, load one of these modules using a module load command like:

          module load mygene/3.2.2-foss-2022b\n
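With a mygene module loaded, gene annotations can be queried from Python; note that this contacts the MyGene.info web service, so it needs outbound network access. A minimal, hypothetical sketch:

# Minimal mygene sketch (requires network access to MyGene.info).
import mygene

mg = mygene.MyGeneInfo()
result = mg.query("symbol:TP53", species="human", size=1)
print(result["hits"][0]["entrezgene"] if result["hits"] else "no hit")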

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mygene/3.2.2-foss-2022b x x x x x x mygene/3.2.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/mysqlclient/", "title": "mysqlclient", "text": ""}, {"location": "available_software/detail/mysqlclient/#available-modules", "title": "Available modules", "text": "

The overview below shows which mysqlclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mysqlclient, load one of these modules using a module load command like:

          module load mysqlclient/2.1.1-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mysqlclient/2.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/n2v/", "title": "n2v", "text": ""}, {"location": "available_software/detail/n2v/#available-modules", "title": "Available modules", "text": "

The overview below shows which n2v installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using n2v, load one of these modules using a module load command like:

          module load n2v/0.3.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty n2v/0.3.2-foss-2022a-CUDA-11.7.0 x - - - x - n2v/0.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/nanocompore/", "title": "nanocompore", "text": ""}, {"location": "available_software/detail/nanocompore/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanocompore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using nanocompore, load one of these modules using a module load command like:

          module load nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/nanofilt/", "title": "nanofilt", "text": ""}, {"location": "available_software/detail/nanofilt/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanofilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using nanofilt, load one of these modules using a module load command like:

          module load nanofilt/2.6.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanofilt/2.6.0-intel-2020a-Python-3.8.2 - x x - x x nanofilt/2.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanoget/", "title": "nanoget", "text": ""}, {"location": "available_software/detail/nanoget/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanoget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using nanoget, load one of these modules using a module load command like:

          module load nanoget/1.18.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanoget/1.18.1-foss-2022a x x x x x x nanoget/1.18.1-foss-2021a x x x x x x nanoget/1.15.0-intel-2020b - x x - x x nanoget/1.12.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanomath/", "title": "nanomath", "text": ""}, {"location": "available_software/detail/nanomath/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanomath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using nanomath, load one of these modules using a module load command like:

          module load nanomath/1.3.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanomath/1.3.0-foss-2022a x x x x x x nanomath/1.2.1-foss-2021a x x x x x x nanomath/1.2.0-intel-2020b - x x - x x nanomath/0.23.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanopolish/", "title": "nanopolish", "text": ""}, {"location": "available_software/detail/nanopolish/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanopolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using nanopolish, load one of these modules using a module load command like:

          module load nanopolish/0.14.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanopolish/0.14.0-foss-2022a x x x x x x nanopolish/0.13.3-foss-2020b - x x x x x nanopolish/0.13.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/napari/", "title": "napari", "text": ""}, {"location": "available_software/detail/napari/#available-modules", "title": "Available modules", "text": "

          The overview below shows which napari installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using napari, load one of these modules using a module load command like:

          module load napari/0.4.18-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty napari/0.4.18-foss-2022a x x x x x x napari/0.4.15-foss-2021b x x x - x x"}, {"location": "available_software/detail/ncbi-vdb/", "title": "ncbi-vdb", "text": ""}, {"location": "available_software/detail/ncbi-vdb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncbi-vdb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ncbi-vdb, load one of these modules using a module load command like:

          module load ncbi-vdb/3.0.2-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncbi-vdb/3.0.2-gompi-2022a x x x x x x ncbi-vdb/3.0.0-gompi-2021b x x x x x x ncbi-vdb/2.11.2-gompi-2021b x x x x x x ncbi-vdb/2.10.9-gompi-2020b - x x x x x ncbi-vdb/2.10.7-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ncdf4/", "title": "ncdf4", "text": ""}, {"location": "available_software/detail/ncdf4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncdf4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ncdf4, load one of these modules using a module load command like:

          module load ncdf4/1.17-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncdf4/1.17-foss-2021a-R-4.1.0 - x x - x x ncdf4/1.17-foss-2020b-R-4.0.3 x x x x x x ncdf4/1.17-foss-2020a-R-4.0.0 - x x - x x ncdf4/1.17-foss-2019b - x x - x x"}, {"location": "available_software/detail/ncolor/", "title": "ncolor", "text": ""}, {"location": "available_software/detail/ncolor/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncolor installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ncolor, load one of these modules using a module load command like:

          module load ncolor/1.2.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncolor/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/ncurses/", "title": "ncurses", "text": ""}, {"location": "available_software/detail/ncurses/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncurses installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ncurses, load one of these modules using a module load command like:

          module load ncurses/6.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncurses/6.4-GCCcore-13.2.0 x x x x x x ncurses/6.4-GCCcore-12.3.0 x x x x x x ncurses/6.4 x x x x x x ncurses/6.3-GCCcore-12.2.0 x x x x x x ncurses/6.3-GCCcore-11.3.0 x x x x x x ncurses/6.3 x x x x x x ncurses/6.2-GCCcore-11.2.0 x x x x x x ncurses/6.2-GCCcore-10.3.0 x x x x x x ncurses/6.2-GCCcore-10.2.0 x x x x x x ncurses/6.2-GCCcore-9.3.0 x x x x x x ncurses/6.2 x x x x x x ncurses/6.1-GCCcore-8.3.0 x x x x x x ncurses/6.1-GCCcore-8.2.0 - x - - - - ncurses/6.1 x x x x x x ncurses/6.0 x x x x x x"}, {"location": "available_software/detail/ncview/", "title": "ncview", "text": ""}, {"location": "available_software/detail/ncview/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncview installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ncview, load one of these modules using a module load command like:

          module load ncview/2.1.7-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncview/2.1.7-intel-2019b - x x - x x"}, {"location": "available_software/detail/netCDF-C%2B%2B4/", "title": "netCDF-C++4", "text": ""}, {"location": "available_software/detail/netCDF-C%2B%2B4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netCDF-C++4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using netCDF-C++4, load one of these modules using a module load command like:

          module load netCDF-C++4/4.3.1-iimpi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netCDF-C++4/4.3.1-iimpi-2020b - x x x x x netCDF-C++4/4.3.1-iimpi-2019b - x x - x x netCDF-C++4/4.3.1-gompi-2021b x x x - x x netCDF-C++4/4.3.1-gompi-2021a - x x - x x netCDF-C++4/4.3.1-gompi-2020a - x x - x x"}, {"location": "available_software/detail/netCDF-Fortran/", "title": "netCDF-Fortran", "text": ""}, {"location": "available_software/detail/netCDF-Fortran/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using netCDF-Fortran, load one of these modules using a module load command like:

          module load netCDF-Fortran/4.6.0-iimpi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netCDF-Fortran/4.6.0-iimpi-2022a - - x - x x netCDF-Fortran/4.6.0-gompi-2022a x - x - x - netCDF-Fortran/4.5.3-iimpi-2021b x x x x x x netCDF-Fortran/4.5.3-iimpi-2020b - x x x x x netCDF-Fortran/4.5.3-gompi-2021b x x x x x x netCDF-Fortran/4.5.3-gompi-2021a - x x - x x netCDF-Fortran/4.5.2-iimpi-2020a - x x - x x netCDF-Fortran/4.5.2-iimpi-2019b - x x - x x netCDF-Fortran/4.5.2-gompi-2020a - x x - x x netCDF-Fortran/4.5.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/netCDF/", "title": "netCDF", "text": ""}, {"location": "available_software/detail/netCDF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netCDF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using netCDF, load one of these modules using a module load command like:

          module load netCDF/4.9.2-gompi-2023a\n
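
          Once the module is loaded, the NetCDF command-line utilities come along with it; as a small example (the file name below is only a placeholder), you can inspect the header of a NetCDF file with:

          ncdump -h myfile.nc\n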

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netCDF/4.9.2-gompi-2023a x x x x x x netCDF/4.9.0-iimpi-2022a - - x - x x netCDF/4.9.0-gompi-2022b x x x x x x netCDF/4.9.0-gompi-2022a x x x x x x netCDF/4.8.1-iimpi-2021b x x x x x x netCDF/4.8.1-gompi-2021b x x x x x x netCDF/4.8.0-iimpi-2021a - x x - x x netCDF/4.8.0-gompi-2021a x x x x x x netCDF/4.7.4-iimpi-2020b - x x x x x netCDF/4.7.4-iimpi-2020a - x x - x x netCDF/4.7.4-gompic-2020b - - - - x - netCDF/4.7.4-gompi-2020b x x x x x x netCDF/4.7.4-gompi-2020a - x x - x x netCDF/4.7.1-iimpi-2019b - x x - x x netCDF/4.7.1-gompi-2019b x x x - x x"}, {"location": "available_software/detail/netcdf4-python/", "title": "netcdf4-python", "text": ""}, {"location": "available_software/detail/netcdf4-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netcdf4-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using netcdf4-python, load one of these modules using a module load command like:

          module load netcdf4-python/1.6.4-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netcdf4-python/1.6.4-foss-2023a x x x x x x netcdf4-python/1.6.1-foss-2022a x x x x x x netcdf4-python/1.5.7-intel-2021b x x x - x x netcdf4-python/1.5.7-foss-2021b x x x x x x netcdf4-python/1.5.7-foss-2021a x x x x x x netcdf4-python/1.5.5.1-intel-2020b - x x - x x netcdf4-python/1.5.5.1-fosscuda-2020b - - - - x - netcdf4-python/1.5.3-intel-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-intel-2019b-Python-3.7.4 - x x - x x netcdf4-python/1.5.3-foss-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nettle/", "title": "nettle", "text": ""}, {"location": "available_software/detail/nettle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nettle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nettle, load one of these modules using a module load command like:

          module load nettle/3.9.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nettle/3.9.1-GCCcore-12.3.0 x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x nettle/3.8-GCCcore-11.3.0 x x x x x x nettle/3.7.3-GCCcore-11.2.0 x x x x x x nettle/3.7.2-GCCcore-10.3.0 x x x x x x nettle/3.6-GCCcore-10.2.0 x x x x x x nettle/3.6-GCCcore-9.3.0 - x x - x x nettle/3.5.1-GCCcore-8.3.0 x x x - x x nettle/3.4.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/networkx/", "title": "networkx", "text": ""}, {"location": "available_software/detail/networkx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which networkx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using networkx, load one of these modules using a module load command like:

          module load networkx/3.1-gfbf-2023a\n
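
          As a minimal check that the loaded module is the one being picked up (assuming its Python dependency is loaded with it, as is usual for these modules), you can print the networkx version:

          python -c 'import networkx as nx; print(nx.__version__)'\n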

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty networkx/3.1-gfbf-2023a x x x x x x networkx/3.0-gfbf-2022b x x x x x x networkx/3.0-foss-2022b x x x x x x networkx/2.8.4-intel-2022a x x x x x x networkx/2.8.4-foss-2022a x x x x x x networkx/2.6.3-foss-2021b x x x x x x networkx/2.5.1-foss-2021a x x x x x x networkx/2.5-fosscuda-2020b x - - - x - networkx/2.5-foss-2020b - x x x x x networkx/2.4-intel-2020a-Python-3.8.2 - x x - x x networkx/2.4-intel-2019b-Python-3.7.4 - x x - x x networkx/2.4-foss-2020a-Python-3.8.2 - x x - x x networkx/2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nghttp2/", "title": "nghttp2", "text": ""}, {"location": "available_software/detail/nghttp2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nghttp2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nghttp2, load one of these modules using a module load command like:

          module load nghttp2/1.48.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nghttp2/1.48.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nghttp3/", "title": "nghttp3", "text": ""}, {"location": "available_software/detail/nghttp3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nghttp3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nghttp3, load one of these modules using a module load command like:

          module load nghttp3/0.6.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nghttp3/0.6.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/nglview/", "title": "nglview", "text": ""}, {"location": "available_software/detail/nglview/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nglview installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nglview, load one of these modules using a module load command like:

          module load nglview/2.7.7-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nglview/2.7.7-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/ngtcp2/", "title": "ngtcp2", "text": ""}, {"location": "available_software/detail/ngtcp2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ngtcp2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ngtcp2, load one of these modules using a module load command like:

          module load ngtcp2/0.7.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ngtcp2/0.7.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nichenetr/", "title": "nichenetr", "text": ""}, {"location": "available_software/detail/nichenetr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nichenetr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nichenetr, load one of these modules using a module load command like:

          module load nichenetr/2.0.4-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nichenetr/2.0.4-foss-2022b-R-4.2.2 x x x x x x nichenetr/1.1.1-20230223-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/nlohmann_json/", "title": "nlohmann_json", "text": ""}, {"location": "available_software/detail/nlohmann_json/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nlohmann_json, load one of these modules using a module load command like:

          module load nlohmann_json/3.11.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x nlohmann_json/3.10.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/nnU-Net/", "title": "nnU-Net", "text": ""}, {"location": "available_software/detail/nnU-Net/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nnU-Net installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nnU-Net, load one of these modules using a module load command like:

          module load nnU-Net/1.7.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nnU-Net/1.7.0-fosscuda-2020b x - - - x - nnU-Net/1.7.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/nodejs/", "title": "nodejs", "text": ""}, {"location": "available_software/detail/nodejs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nodejs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nodejs, load one of these modules using a module load command like:

          module load nodejs/18.17.1-GCCcore-12.3.0\n
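
          As a quick check that the intended Node.js is now active:

          node --version\n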

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nodejs/18.17.1-GCCcore-12.3.0 x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x nodejs/16.15.1-GCCcore-11.3.0 x x x x x x nodejs/14.17.6-GCCcore-11.2.0 x x x x x x nodejs/14.17.0-GCCcore-10.3.0 x x x x x x nodejs/12.19.0-GCCcore-10.2.0 x x x x x x nodejs/12.16.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/noise/", "title": "noise", "text": ""}, {"location": "available_software/detail/noise/#available-modules", "title": "Available modules", "text": "

          The overview below shows which noise installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using noise, load one of these modules using a module load command like:

          module load noise/1.2.2-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty noise/1.2.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/nsync/", "title": "nsync", "text": ""}, {"location": "available_software/detail/nsync/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nsync installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nsync, load one of these modules using a module load command like:

          module load nsync/1.26.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nsync/1.26.0-GCCcore-12.3.0 x x x x x x nsync/1.26.0-GCCcore-12.2.0 x x x x x x nsync/1.25.0-GCCcore-11.3.0 x x x x x x nsync/1.24.0-GCCcore-11.2.0 x x x x x x nsync/1.24.0-GCCcore-10.3.0 x x x x x x nsync/1.24.0-GCCcore-10.2.0 x x x x x x nsync/1.24.0-GCCcore-9.3.0 - x x - x x nsync/1.24.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ntCard/", "title": "ntCard", "text": ""}, {"location": "available_software/detail/ntCard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ntCard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ntCard, load one of these modules using a module load command like:

          module load ntCard/1.2.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ntCard/1.2.2-GCC-12.3.0 x x x x x x ntCard/1.2.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/num2words/", "title": "num2words", "text": ""}, {"location": "available_software/detail/num2words/#available-modules", "title": "Available modules", "text": "

          The overview below shows which num2words installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using num2words, load one of these modules using a module load command like:

          module load num2words/0.5.10-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty num2words/0.5.10-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/numactl/", "title": "numactl", "text": ""}, {"location": "available_software/detail/numactl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which numactl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using numactl, load one of these modules using a module load command like:

          module load numactl/2.0.16-GCCcore-13.2.0\n
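
          Once loaded, you can, for example, print the NUMA topology of the node you are running on:

          numactl --hardware\n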

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty numactl/2.0.16-GCCcore-13.2.0 x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x numactl/2.0.14-GCCcore-11.3.0 x x x x x x numactl/2.0.14-GCCcore-11.2.0 x x x x x x numactl/2.0.14-GCCcore-10.3.0 x x x x x x numactl/2.0.13-GCCcore-10.2.0 x x x x x x numactl/2.0.13-GCCcore-9.3.0 x x x x x x numactl/2.0.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/numba/", "title": "numba", "text": ""}, {"location": "available_software/detail/numba/#available-modules", "title": "Available modules", "text": "

          The overview below shows which numba installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using numba, load one of these modules using a module load command like:

          module load numba/0.58.1-foss-2023a\n
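
          A minimal check that numba is importable from the loaded module (assuming its Python dependency is loaded alongside, as is usual for these modules):

          python -c 'import numba; print(numba.__version__)'\n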

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty numba/0.58.1-foss-2023a x x x x x x numba/0.58.1-foss-2022b x x x x x x numba/0.56.4-foss-2022a-CUDA-11.7.0 x - x - x - numba/0.56.4-foss-2022a x x x x x x numba/0.54.1-intel-2021b x x x - x x numba/0.54.1-foss-2021b-CUDA-11.4.1 x - - - x - numba/0.54.1-foss-2021b x x x x x x numba/0.53.1-fosscuda-2020b - - - - x - numba/0.53.1-foss-2021a x x x x x x numba/0.53.1-foss-2020b - x x x x x numba/0.52.0-intel-2020b - x x - x x numba/0.52.0-fosscuda-2020b - - - - x - numba/0.52.0-foss-2020b - x x x x x numba/0.50.0-intel-2020a-Python-3.8.2 - x x - x x numba/0.50.0-foss-2020a-Python-3.8.2 - x x - x x numba/0.47.0-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/numexpr/", "title": "numexpr", "text": ""}, {"location": "available_software/detail/numexpr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which numexpr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using numexpr, load one of these modules using a module load command like:

          module load numexpr/2.7.1-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty numexpr/2.7.1-intel-2020a-Python-3.8.2 x x x x x x numexpr/2.7.1-intel-2019b-Python-2.7.16 - x - - - x numexpr/2.7.1-foss-2020a-Python-3.8.2 - x x - x x numexpr/2.7.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nvtop/", "title": "nvtop", "text": ""}, {"location": "available_software/detail/nvtop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nvtop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using nvtop, load one of these modules using a module load command like:

          module load nvtop/1.2.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nvtop/1.2.1-GCCcore-10.3.0 x - - - - -"}, {"location": "available_software/detail/olaFlow/", "title": "olaFlow", "text": ""}, {"location": "available_software/detail/olaFlow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which olaFlow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using olaFlow, load one of these modules using a module load command like:

          module load olaFlow/20210820-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty olaFlow/20210820-foss-2021b x x x - x x"}, {"location": "available_software/detail/olego/", "title": "olego", "text": ""}, {"location": "available_software/detail/olego/#available-modules", "title": "Available modules", "text": "

          The overview below shows which olego installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using olego, load one of these modules using a module load command like:

          module load olego/1.1.9-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty olego/1.1.9-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/onedrive/", "title": "onedrive", "text": ""}, {"location": "available_software/detail/onedrive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which onedrive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using onedrive, load one of these modules using a module load command like:

          module load onedrive/2.4.21-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty onedrive/2.4.21-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/ont-fast5-api/", "title": "ont-fast5-api", "text": ""}, {"location": "available_software/detail/ont-fast5-api/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ont-fast5-api installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ont-fast5-api, load one of these modules using a module load command like:

          module load ont-fast5-api/4.1.1-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ont-fast5-api/4.1.1-foss-2022b x x x x x x ont-fast5-api/4.1.1-foss-2022a x x x x x x ont-fast5-api/4.0.2-foss-2021b x x x - x x ont-fast5-api/4.0.0-foss-2021a x x x - x x ont-fast5-api/3.3.0-fosscuda-2020b - - - - x - ont-fast5-api/3.3.0-foss-2020b - x x x x x ont-fast5-api/3.3.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/openCARP/", "title": "openCARP", "text": ""}, {"location": "available_software/detail/openCARP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openCARP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using openCARP, load one of these modules using a module load command like:

          module load openCARP/6.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openCARP/6.0-foss-2020b - x x x x x openCARP/3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/openkim-models/", "title": "openkim-models", "text": ""}, {"location": "available_software/detail/openkim-models/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openkim-models installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using openkim-models, load one of these modules using a module load command like:

          module load openkim-models/20190725-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openkim-models/20190725-intel-2019b - x x - x x openkim-models/20190725-foss-2019b - x x - x x"}, {"location": "available_software/detail/openpyxl/", "title": "openpyxl", "text": ""}, {"location": "available_software/detail/openpyxl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openpyxl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using openpyxl, load one of these modules using a module load command like:

          module load openpyxl/3.1.2-GCCcore-13.2.0\n
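
          A minimal check that openpyxl is importable from the loaded module (assuming its Python dependency is loaded alongside, as is usual for these modules):

          python -c 'import openpyxl; print(openpyxl.__version__)'\n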

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openpyxl/3.1.2-GCCcore-13.2.0 x x x x x x openpyxl/3.1.2-GCCcore-12.3.0 x x x x x x openpyxl/3.1.2-GCCcore-12.2.0 x x x x x x openpyxl/3.0.10-GCCcore-11.3.0 x x x x x x openpyxl/3.0.9-GCCcore-11.2.0 x x x x x x openpyxl/3.0.7-GCCcore-10.3.0 x x x x x x openpyxl/2.6.4-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/openslide-python/", "title": "openslide-python", "text": ""}, {"location": "available_software/detail/openslide-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openslide-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using openslide-python, load one of these modules using a module load command like:

          module load openslide-python/1.2.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openslide-python/1.2.0-GCCcore-11.3.0 x - x - x - openslide-python/1.1.2-GCCcore-11.2.0 x x x - x x openslide-python/1.1.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/orca/", "title": "orca", "text": ""}, {"location": "available_software/detail/orca/#available-modules", "title": "Available modules", "text": "

          The overview below shows which orca installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using orca, load one of these modules using a module load command like:

          module load orca/1.3.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty orca/1.3.1-GCCcore-10.2.0 - x - - - - orca/1.3.0-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/p11-kit/", "title": "p11-kit", "text": ""}, {"location": "available_software/detail/p11-kit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which p11-kit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using p11-kit, load one of these modules using a module load command like:

          module load p11-kit/0.24.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty p11-kit/0.24.1-GCCcore-11.2.0 x x x x x x p11-kit/0.24.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/p4est/", "title": "p4est", "text": ""}, {"location": "available_software/detail/p4est/#available-modules", "title": "Available modules", "text": "

          The overview below shows which p4est installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using p4est, load one of these modules using a module load command like:

          module load p4est/2.8-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty p4est/2.8-foss-2021a - x x - x x"}, {"location": "available_software/detail/p7zip/", "title": "p7zip", "text": ""}, {"location": "available_software/detail/p7zip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which p7zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using p7zip, load one of these modules using a module load command like:

          module load p7zip/17.03-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty p7zip/17.03-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/pIRS/", "title": "pIRS", "text": ""}, {"location": "available_software/detail/pIRS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pIRS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pIRS, load one of these modules using a module load command like:

          module load pIRS/2.0.2-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pIRS/2.0.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/packmol/", "title": "packmol", "text": ""}, {"location": "available_software/detail/packmol/#available-modules", "title": "Available modules", "text": "

          The overview below shows which packmol installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using packmol, load one of these modules using a module load command like:

          module load packmol/v20.2.2-iccifort-2020.1.217\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty packmol/v20.2.2-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/pagmo/", "title": "pagmo", "text": ""}, {"location": "available_software/detail/pagmo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pagmo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pagmo, load one of these modules using a module load command like:

          module load pagmo/2.17.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pagmo/2.17.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/pairtools/", "title": "pairtools", "text": ""}, {"location": "available_software/detail/pairtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pairtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pairtools, load one of these modules using a module load command like:

          module load pairtools/0.3.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pairtools/0.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/panaroo/", "title": "panaroo", "text": ""}, {"location": "available_software/detail/panaroo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which panaroo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using panaroo, load one of these modules using a module load command like:

          module load panaroo/1.2.8-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty panaroo/1.2.8-foss-2020b - x x x x x"}, {"location": "available_software/detail/pandas/", "title": "pandas", "text": ""}, {"location": "available_software/detail/pandas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pandas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pandas, load one of these modules using a module load command like:

          module load pandas/1.1.2-foss-2020a-Python-3.8.2\n
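
          A minimal check that the pandas provided by the module is being used (assuming its Python dependency is loaded alongside, as is usual for these modules):

          python -c 'import pandas; print(pandas.__version__)'\n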

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pandas/1.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel-fastq-dump/", "title": "parallel-fastq-dump", "text": ""}, {"location": "available_software/detail/parallel-fastq-dump/#available-modules", "title": "Available modules", "text": "

          The overview below shows which parallel-fastq-dump installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using parallel-fastq-dump, load one of these modules using a module load command like:

          module load parallel-fastq-dump/0.6.7-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty parallel-fastq-dump/0.6.7-gompi-2022a x x x x x x parallel-fastq-dump/0.6.7-gompi-2020b - x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-SRA-Toolkit-3.0.0-Python-3.8.2 x x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel/", "title": "parallel", "text": ""}, {"location": "available_software/detail/parallel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which parallel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using parallel, load one of these modules using a module load command like:

          module load parallel/20230722-GCCcore-12.2.0\n
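
          As a small usage example once the module is loaded (this simply runs echo once per argument, in parallel):

          parallel echo ::: one two three\n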

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty parallel/20230722-GCCcore-12.2.0 x x x x x x parallel/20220722-GCCcore-11.3.0 x x x x x x parallel/20210722-GCCcore-11.2.0 - x x x x x parallel/20210622-GCCcore-10.3.0 - x x x x x parallel/20210322-GCCcore-10.2.0 - x x x x x parallel/20200522-GCCcore-9.3.0 - x x - x x parallel/20190922-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/parasail/", "title": "parasail", "text": ""}, {"location": "available_software/detail/parasail/#available-modules", "title": "Available modules", "text": "

          The overview below shows which parasail installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using parasail, load one of these modules using a module load command like:

          module load parasail/2.6-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty parasail/2.6-GCC-11.3.0 x x x x x x parasail/2.5-GCC-11.2.0 x x x - x x parasail/2.4.3-GCC-10.3.0 x x x - x x parasail/2.4.3-GCC-10.2.0 - - x - x - parasail/2.4.2-iccifort-2020.1.217 - x x - x x parasail/2.4.1-intel-2019b - x x - x x parasail/2.4.1-foss-2019b - x - - - - parasail/2.4.1-GCC-8.3.0 - - x - x x"}, {"location": "available_software/detail/patchelf/", "title": "patchelf", "text": ""}, {"location": "available_software/detail/patchelf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which patchelf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using patchelf, load one of these modules using a module load command like:

          module load patchelf/0.18.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty patchelf/0.18.0-GCCcore-13.2.0 x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x patchelf/0.17.2-GCCcore-12.2.0 x x x x x x patchelf/0.15.0-GCCcore-11.3.0 x x x x x x patchelf/0.13-GCCcore-11.2.0 x x x x x x patchelf/0.12-GCCcore-10.3.0 - x x - x x patchelf/0.12-GCCcore-9.3.0 - x x - x x patchelf/0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pauvre/", "title": "pauvre", "text": ""}, {"location": "available_software/detail/pauvre/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pauvre installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pauvre, load one of these modules using a module load command like:

          module load pauvre/0.1924-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pauvre/0.1924-intel-2020b - x x - x x pauvre/0.1923-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pblat/", "title": "pblat", "text": ""}, {"location": "available_software/detail/pblat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pblat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pblat, load one of these modules using a module load command like:

          module load pblat/2.5.1-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pblat/2.5.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/pdsh/", "title": "pdsh", "text": ""}, {"location": "available_software/detail/pdsh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pdsh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pdsh, load one of these modules using a module load command like:

          module load pdsh/2.34-GCCcore-12.3.0\n
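
          A hedged usage sketch (the node names are placeholders for hosts you actually have access to): run the same command on several nodes at once with

          pdsh -w node1,node2 hostname\n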

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pdsh/2.34-GCCcore-12.3.0 x x x x x x pdsh/2.34-GCCcore-12.2.0 x x x x x x pdsh/2.34-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/peakdetect/", "title": "peakdetect", "text": ""}, {"location": "available_software/detail/peakdetect/#available-modules", "title": "Available modules", "text": "

          The overview below shows which peakdetect installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using peakdetect, load one of these modules using a module load command like:

          module load peakdetect/1.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty peakdetect/1.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/petsc4py/", "title": "petsc4py", "text": ""}, {"location": "available_software/detail/petsc4py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which petsc4py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using petsc4py, load one of these modules using a module load command like:

          module load petsc4py/3.17.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty petsc4py/3.17.4-foss-2022a x x x x x x petsc4py/3.15.0-foss-2021a - x x - x x petsc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pftoolsV3/", "title": "pftoolsV3", "text": ""}, {"location": "available_software/detail/pftoolsV3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pftoolsV3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pftoolsV3, load one of these modules using a module load command like:

          module load pftoolsV3/3.2.11-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pftoolsV3/3.2.11-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phonemizer/", "title": "phonemizer", "text": ""}, {"location": "available_software/detail/phonemizer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phonemizer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using phonemizer, load one of these modules using a module load command like:

          module load phonemizer/2.2.1-gompi-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phonemizer/2.2.1-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/phonopy/", "title": "phonopy", "text": ""}, {"location": "available_software/detail/phonopy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phonopy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using phonopy, load one of these modules using a module load command like:

          module load phonopy/2.7.1-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phonopy/2.7.1-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/phototonic/", "title": "phototonic", "text": ""}, {"location": "available_software/detail/phototonic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phototonic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using phototonic, load one of these modules using a module load command like:

          module load phototonic/2.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phototonic/2.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phyluce/", "title": "phyluce", "text": ""}, {"location": "available_software/detail/phyluce/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phyluce installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using phyluce, load one of these modules using a module load command like:

          module load phyluce/1.7.3-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phyluce/1.7.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/picard/", "title": "picard", "text": ""}, {"location": "available_software/detail/picard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which picard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using picard, load one of these modules using a module load command like:

          module load picard/2.25.1-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty picard/2.25.1-Java-11 x x x x x x picard/2.25.0-Java-11 - x x x x x picard/2.21.6-Java-11 - x x - x x picard/2.21.1-Java-11 - - x - x x picard/2.18.27-Java-1.8 - - - - - x"}, {"location": "available_software/detail/pigz/", "title": "pigz", "text": ""}, {"location": "available_software/detail/pigz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pigz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pigz, load one of these modules using a module load command like:

          module load pigz/2.8-GCCcore-12.3.0\n
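
          For example, to compress a file with 8 threads while keeping the original (the file name is a placeholder):

          pigz -p 8 -k myfile.txt\n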

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pigz/2.8-GCCcore-12.3.0 x x x x x x pigz/2.7-GCCcore-11.3.0 x x x x x x pigz/2.6-GCCcore-11.2.0 x x x - x x pigz/2.6-GCCcore-10.2.0 - x x x x x pigz/2.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pixman/", "title": "pixman", "text": ""}, {"location": "available_software/detail/pixman/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pixman installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pixman, load one of these modules using a module load command like:

          module load pixman/0.42.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pixman/0.42.2-GCCcore-12.3.0 x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x pixman/0.40.0-GCCcore-11.3.0 x x x x x x pixman/0.40.0-GCCcore-11.2.0 x x x x x x pixman/0.40.0-GCCcore-10.3.0 x x x x x x pixman/0.40.0-GCCcore-10.2.0 x x x x x x pixman/0.38.4-GCCcore-9.3.0 x x x x x x pixman/0.38.4-GCCcore-8.3.0 x x x - x x pixman/0.38.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/pkg-config/", "title": "pkg-config", "text": ""}, {"location": "available_software/detail/pkg-config/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pkg-config installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pkg-config, load one of these modules using a module load command like:

          module load pkg-config/0.29.2-GCCcore-12.2.0\n
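
          A typical use after loading the module is querying compile and link flags for a library that ships a .pc file; zlib below is only an example and must itself be available in your environment:

          pkg-config --cflags --libs zlib\n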

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pkg-config/0.29.2-GCCcore-12.2.0 x x x x x x pkg-config/0.29.2-GCCcore-11.3.0 x x x x x x pkg-config/0.29.2-GCCcore-11.2.0 x x x x x x pkg-config/0.29.2-GCCcore-10.3.0 x x x x x x pkg-config/0.29.2-GCCcore-10.2.0 x x x x x x pkg-config/0.29.2-GCCcore-9.3.0 x x x x x x pkg-config/0.29.2-GCCcore-8.3.0 x x x - x x pkg-config/0.29.2-GCCcore-8.2.0 - x - - - - pkg-config/0.29.2 x x x - x x"}, {"location": "available_software/detail/pkgconf/", "title": "pkgconf", "text": ""}, {"location": "available_software/detail/pkgconf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pkgconf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pkgconf, load one of these modules using a module load command like:

          module load pkgconf/2.0.3-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x pkgconf/1.8.0-GCCcore-11.3.0 x x x x x x pkgconf/1.8.0-GCCcore-11.2.0 x x x x x x pkgconf/1.8.0 x x x x x x"}, {"location": "available_software/detail/pkgconfig/", "title": "pkgconfig", "text": ""}, {"location": "available_software/detail/pkgconfig/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pkgconfig, load one of these modules using a module load command like:

          module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.2.0-python x x x x x x pkgconfig/1.5.4-GCCcore-10.3.0-python x x x x x x pkgconfig/1.5.1-GCCcore-10.2.0-python x x x x x x pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x pkgconfig/1.5.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/plot1cell/", "title": "plot1cell", "text": ""}, {"location": "available_software/detail/plot1cell/#available-modules", "title": "Available modules", "text": "

          The overview below shows which plot1cell installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using plot1cell, load one of these modules using a module load command like:

          module load plot1cell/0.0.1-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty plot1cell/0.0.1-foss-2022b-R-4.2.2 x x x x x x plot1cell/0.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/plotly-orca/", "title": "plotly-orca", "text": ""}, {"location": "available_software/detail/plotly-orca/#available-modules", "title": "Available modules", "text": "

          The overview below shows which plotly-orca installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using plotly-orca, load one of these modules using a module load command like:

          module load plotly-orca/1.3.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty plotly-orca/1.3.1-GCCcore-10.2.0 - x x x x x plotly-orca/1.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/plotly.py/", "title": "plotly.py", "text": ""}, {"location": "available_software/detail/plotly.py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which plotly.py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using plotly.py, load one of these modules using a module load command like:

          module load plotly.py/5.16.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty plotly.py/5.16.0-GCCcore-12.3.0 x x x x x x plotly.py/5.13.1-GCCcore-12.2.0 x x x x x x plotly.py/5.12.0-GCCcore-11.3.0 x x x x x x plotly.py/5.10.0-GCCcore-11.3.0 x x x - x x plotly.py/5.4.0-GCCcore-11.2.0 x x x - x x plotly.py/5.1.0-GCCcore-10.3.0 x x x - x x plotly.py/4.14.3-GCCcore-10.2.0 - x x x x x plotly.py/4.8.1-GCCcore-9.3.0 - x x - x x plotly.py/4.4.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/pocl/", "title": "pocl", "text": ""}, {"location": "available_software/detail/pocl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pocl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using pocl, load one of these modules using a module load command like:

          module load pocl/4.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pocl/4.0-GCC-12.3.0 x x x x x x pocl/3.0-GCC-11.3.0 x x x - x x pocl/1.8-GCC-11.3.0-CUDA-11.7.0 x - - - x - pocl/1.8-GCC-11.3.0 x x x x x x pocl/1.8-GCC-11.2.0 x x x - x x pocl/1.6-gcccuda-2020b - - - - x - pocl/1.6-GCC-10.2.0 - x x x x x pocl/1.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/pod5-file-format/", "title": "pod5-file-format", "text": ""}, {"location": "available_software/detail/pod5-file-format/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pod5-file-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pod5-file-format, load one of these modules using a module load command like:

          module load pod5-file-format/0.1.8-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pod5-file-format/0.1.8-foss-2022a x x x x x x"}, {"location": "available_software/detail/poetry/", "title": "poetry", "text": ""}, {"location": "available_software/detail/poetry/#available-modules", "title": "Available modules", "text": "

          The overview below shows which poetry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using poetry, load one of these modules using a module load command like:

          module load poetry/1.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty poetry/1.7.1-GCCcore-12.3.0 x x x x x x poetry/1.6.1-GCCcore-13.2.0 x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/polars/", "title": "polars", "text": ""}, {"location": "available_software/detail/polars/#available-modules", "title": "Available modules", "text": "

          The overview below shows which polars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using polars, load one of these modules using a module load command like:

          module load polars/0.15.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty polars/0.15.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/poppler/", "title": "poppler", "text": ""}, {"location": "available_software/detail/poppler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which poppler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using poppler, load one of these modules using a module load command like:

          module load poppler/23.09.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty poppler/23.09.0-GCC-12.3.0 x x x x x x poppler/22.01.0-GCC-11.2.0 x x x - x x poppler/21.06.1-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/popscle/", "title": "popscle", "text": ""}, {"location": "available_software/detail/popscle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which popscle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using popscle, load one of these modules using a module load command like:

          module load popscle/0.1-beta-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty popscle/0.1-beta-foss-2019b - x x - x x popscle/0.1-beta-20210505-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/porefoam/", "title": "porefoam", "text": ""}, {"location": "available_software/detail/porefoam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which porefoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using porefoam, load one of these modules using a module load command like:

          module load porefoam/2021-09-21-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty porefoam/2021-09-21-foss-2020a - x x - x x"}, {"location": "available_software/detail/powerlaw/", "title": "powerlaw", "text": ""}, {"location": "available_software/detail/powerlaw/#available-modules", "title": "Available modules", "text": "

          The overview below shows which powerlaw installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using powerlaw, load one of these modules using a module load command like:

          module load powerlaw/1.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty powerlaw/1.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/pplacer/", "title": "pplacer", "text": ""}, {"location": "available_software/detail/pplacer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pplacer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pplacer, load one of these modules using a module load command like:

          module load pplacer/1.1.alpha19\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pplacer/1.1.alpha19 x x x x x x"}, {"location": "available_software/detail/preseq/", "title": "preseq", "text": ""}, {"location": "available_software/detail/preseq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which preseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using preseq, load one of these modules using a module load command like:

          module load preseq/3.2.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty preseq/3.2.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/presto/", "title": "presto", "text": ""}, {"location": "available_software/detail/presto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which presto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using presto, load one of these modules using a module load command like:

          module load presto/1.0.0-20230501-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty presto/1.0.0-20230501-foss-2023a-R-4.3.2 x x x x x x presto/1.0.0-20230113-foss-2022a-R-4.2.1 x x x x x x presto/1.0.0-20200718-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/pretty-yaml/", "title": "pretty-yaml", "text": ""}, {"location": "available_software/detail/pretty-yaml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pretty-yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pretty-yaml, load one of these modules using a module load command like:

          module load pretty-yaml/21.10.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pretty-yaml/21.10.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/prodigal/", "title": "prodigal", "text": ""}, {"location": "available_software/detail/prodigal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which prodigal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using prodigal, load one of these modules using a module load command like:

          module load prodigal/2.6.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty prodigal/2.6.3-GCCcore-12.3.0 x x x x x x prodigal/2.6.3-GCCcore-12.2.0 x x x x x x prodigal/2.6.3-GCCcore-11.3.0 x x x x x x prodigal/2.6.3-GCCcore-11.2.0 x x x x x x prodigal/2.6.3-GCCcore-10.2.0 x x x x x x prodigal/2.6.3-GCCcore-9.3.0 - x x - x x prodigal/2.6.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/prokka/", "title": "prokka", "text": ""}, {"location": "available_software/detail/prokka/#available-modules", "title": "Available modules", "text": "

          The overview below shows which prokka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using prokka, load one of these modules using a module load command like:

          module load prokka/1.14.5-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty prokka/1.14.5-gompi-2020b - x x x x x prokka/1.14.5-gompi-2019b - x x - x x"}, {"location": "available_software/detail/protobuf-python/", "title": "protobuf-python", "text": ""}, {"location": "available_software/detail/protobuf-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using protobuf-python, load one of these modules using a module load command like:

          module load protobuf-python/4.24.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x protobuf-python/4.23.0-GCCcore-12.2.0 x x x x x x protobuf-python/3.19.4-GCCcore-11.3.0 x x x x x x protobuf-python/3.17.3-GCCcore-11.2.0 x x x x x x protobuf-python/3.17.3-GCCcore-10.3.0 x x x x x x protobuf-python/3.14.0-GCCcore-10.2.0 x x x x x x protobuf-python/3.13.0-foss-2020a-Python-3.8.2 - x x - x x protobuf-python/3.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/protobuf/", "title": "protobuf", "text": ""}, {"location": "available_software/detail/protobuf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which protobuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using protobuf, load one of these modules using a module load command like:

          module load protobuf/24.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty protobuf/24.0-GCCcore-12.3.0 x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x protobuf/3.19.4-GCCcore-11.3.0 x x x x x x protobuf/3.17.3-GCCcore-11.2.0 x x x x x x protobuf/3.17.3-GCCcore-10.3.0 x x x x x x protobuf/3.14.0-GCCcore-10.2.0 x x x x x x protobuf/3.13.0-GCCcore-9.3.0 - x x - x x protobuf/3.10.0-GCCcore-8.3.0 - x x - x x protobuf/2.5.0-GCCcore-10.2.0 - x x - x x protobuf/2.5.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/psutil/", "title": "psutil", "text": ""}, {"location": "available_software/detail/psutil/#available-modules", "title": "Available modules", "text": "

          The overview below shows which psutil installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using psutil, load one of these modules using a module load command like:

          module load psutil/5.9.5-GCCcore-12.2.0\n
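
          As an example of using the package once the module is loaded, psutil can report the resources of the node you are running on (a minimal sketch, assuming the module pulls in a matching Python as a dependency):

          module load psutil/5.9.5-GCCcore-12.2.0\npython -c "import psutil; print(psutil.cpu_count(), psutil.virtual_memory().total)"  # logical CPU count and total memory (in bytes) of the current node\n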

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty psutil/5.9.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/psycopg2/", "title": "psycopg2", "text": ""}, {"location": "available_software/detail/psycopg2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which psycopg2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using psycopg2, load one of these modules using a module load command like:

          module load psycopg2/2.9.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty psycopg2/2.9.6-GCCcore-11.3.0 x x x x x x psycopg2/2.9.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pugixml/", "title": "pugixml", "text": ""}, {"location": "available_software/detail/pugixml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pugixml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pugixml, load one of these modules using a module load command like:

          module load pugixml/1.12.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pugixml/1.12.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pullseq/", "title": "pullseq", "text": ""}, {"location": "available_software/detail/pullseq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pullseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pullseq, load one of these modules using a module load command like:

          module load pullseq/1.0.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pullseq/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/purge_dups/", "title": "purge_dups", "text": ""}, {"location": "available_software/detail/purge_dups/#available-modules", "title": "Available modules", "text": "

          The overview below shows which purge_dups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using purge_dups, load one of these modules using a module load command like:

          module load purge_dups/1.2.5-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty purge_dups/1.2.5-foss-2021b x x x - x x"}, {"location": "available_software/detail/pv/", "title": "pv", "text": ""}, {"location": "available_software/detail/pv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pv, load one of these modules using a module load command like:

          module load pv/1.7.24-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pv/1.7.24-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/py-cpuinfo/", "title": "py-cpuinfo", "text": ""}, {"location": "available_software/detail/py-cpuinfo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which py-cpuinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using py-cpuinfo, load one of these modules using a module load command like:

          module load py-cpuinfo/9.0.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty py-cpuinfo/9.0.0-GCCcore-12.2.0 x x x x x x py-cpuinfo/9.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/py3Dmol/", "title": "py3Dmol", "text": ""}, {"location": "available_software/detail/py3Dmol/#available-modules", "title": "Available modules", "text": "

          The overview below shows which py3Dmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using py3Dmol, load one of these modules using a module load command like:

          module load py3Dmol/2.0.1.post1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty py3Dmol/2.0.1.post1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pyBigWig/", "title": "pyBigWig", "text": ""}, {"location": "available_software/detail/pyBigWig/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyBigWig, load one of these modules using a module load command like:

          module load pyBigWig/0.3.18-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyBigWig/0.3.18-foss-2022a x x x x x x pyBigWig/0.3.18-foss-2021b x x x - x x pyBigWig/0.3.18-GCCcore-10.2.0 - x x x x x pyBigWig/0.3.17-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/pyEGA3/", "title": "pyEGA3", "text": ""}, {"location": "available_software/detail/pyEGA3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyEGA3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyEGA3, load one of these modules using a module load command like:

          module load pyEGA3/5.0.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyEGA3/5.0.2-GCCcore-12.3.0 x x x x x x pyEGA3/4.0.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/pyGenomeTracks/", "title": "pyGenomeTracks", "text": ""}, {"location": "available_software/detail/pyGenomeTracks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyGenomeTracks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyGenomeTracks, load one of these modules using a module load command like:

          module load pyGenomeTracks/3.8-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyGenomeTracks/3.8-foss-2022a x x x x x x pyGenomeTracks/3.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/pySCENIC/", "title": "pySCENIC", "text": ""}, {"location": "available_software/detail/pySCENIC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pySCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pySCENIC, load one of these modules using a module load command like:

          module load pySCENIC/0.10.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pySCENIC/0.10.3-intel-2020a-Python-3.8.2 - x x - x x pySCENIC/0.10.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyWannier90/", "title": "pyWannier90", "text": ""}, {"location": "available_software/detail/pyWannier90/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyWannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyWannier90, load one of these modules using a module load command like:

          module load pyWannier90/2021-12-07-gomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyWannier90/2021-12-07-gomkl-2021a x x x - x x pyWannier90/2021-12-07-foss-2021a x x x - x x"}, {"location": "available_software/detail/pybedtools/", "title": "pybedtools", "text": ""}, {"location": "available_software/detail/pybedtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pybedtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pybedtools, load one of these modules using a module load command like:

          module load pybedtools/0.9.0-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pybedtools/0.9.0-GCC-12.2.0 x x x x x x pybedtools/0.9.0-GCC-11.3.0 x x x x x x pybedtools/0.8.2-GCC-11.2.0-Python-2.7.18 x x x x x x pybedtools/0.8.2-GCC-11.2.0 x x x - x x pybedtools/0.8.2-GCC-10.2.0-Python-2.7.18 - x x x x x pybedtools/0.8.2-GCC-10.2.0 - x x x x x pybedtools/0.8.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/pybind11/", "title": "pybind11", "text": ""}, {"location": "available_software/detail/pybind11/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pybind11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pybind11, load one of these modules using a module load command like:

          module load pybind11/2.11.1-GCCcore-13.2.0\n
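
          If you build your own C++ extensions against this module, pybind11 can report where its headers are installed, which is the directory to pass to the compiler (a minimal sketch, assuming the module loads a matching Python as a dependency):

          module load pybind11/2.11.1-GCCcore-13.2.0\npython -c "import pybind11; print(pybind11.get_include())"  # header directory to pass to the compiler via -I when building your own extensions\n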

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pybind11/2.11.1-GCCcore-13.2.0 x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x pybind11/2.9.2-GCCcore-11.3.0 x x x x x x pybind11/2.7.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x pybind11/2.7.1-GCCcore-11.2.0 x x x x x x pybind11/2.6.2-GCCcore-10.3.0 x x x x x x pybind11/2.6.0-GCCcore-10.2.0 x x x x x x pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2 x x x x x x pybind11/2.4.3-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycocotools/", "title": "pycocotools", "text": ""}, {"location": "available_software/detail/pycocotools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pycocotools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pycocotools, load one of these modules using a module load command like:

          module load pycocotools/2.0.4-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pycocotools/2.0.4-foss-2021a x x x - x x pycocotools/2.0.1-foss-2019b-Python-3.7.4 - x x - x x pycocotools/2.0.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycodestyle/", "title": "pycodestyle", "text": ""}, {"location": "available_software/detail/pycodestyle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pycodestyle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pycodestyle, load one of these modules using a module load command like:

          module load pycodestyle/2.11.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pycodestyle/2.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/pydantic/", "title": "pydantic", "text": ""}, {"location": "available_software/detail/pydantic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pydantic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pydantic, load one of these modules using a module load command like:

          module load pydantic/2.5.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pydantic/2.5.3-GCCcore-12.3.0 x x x x x x pydantic/2.5.3-GCCcore-12.2.0 x x x x x x pydantic/1.10.13-GCCcore-12.3.0 x x x x x x pydantic/1.10.4-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/pydicom/", "title": "pydicom", "text": ""}, {"location": "available_software/detail/pydicom/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pydicom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pydicom, load one of these modules using a module load command like:

          module load pydicom/2.3.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pydicom/2.3.0-GCCcore-11.3.0 x x x x x x pydicom/2.2.2-GCCcore-10.3.0 x x x - x x pydicom/2.1.2-GCCcore-10.2.0 x x x x x x pydicom/1.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pydot/", "title": "pydot", "text": ""}, {"location": "available_software/detail/pydot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pydot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pydot, load one of these modules using a module load command like:

          module load pydot/1.4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pydot/1.4.2-GCCcore-11.3.0 x x x x x x pydot/1.4.2-GCCcore-11.2.0 x x x x x x pydot/1.4.2-GCCcore-10.3.0 x x x x x x pydot/1.4.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/pyfaidx/", "title": "pyfaidx", "text": ""}, {"location": "available_software/detail/pyfaidx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyfaidx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyfaidx, load one of these modules using a module load command like:

          module load pyfaidx/0.7.2.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x pyfaidx/0.7.1-GCCcore-11.3.0 x x x x x x pyfaidx/0.7.0-GCCcore-11.2.0 x x x - x x pyfaidx/0.6.3.1-GCCcore-10.3.0 x x x - x x pyfaidx/0.5.9.5-GCCcore-10.2.0 - x x x x x pyfaidx/0.5.9.5-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyfasta/", "title": "pyfasta", "text": ""}, {"location": "available_software/detail/pyfasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyfasta, load one of these modules using a module load command like:

          module load pyfasta/0.5.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyfasta/0.5.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygmo/", "title": "pygmo", "text": ""}, {"location": "available_software/detail/pygmo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pygmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pygmo, load one of these modules using a module load command like:

          module load pygmo/2.16.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pygmo/2.16.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygraphviz/", "title": "pygraphviz", "text": ""}, {"location": "available_software/detail/pygraphviz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pygraphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pygraphviz, load one of these modules using a module load command like:

          module load pygraphviz/1.11-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pygraphviz/1.11-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pyiron/", "title": "pyiron", "text": ""}, {"location": "available_software/detail/pyiron/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyiron installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyiron, load one of these modules using a module load command like:

          module load pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2 x x x x x x pyiron/0.2.6-hpcugent-2022c-intel-2020a-Python-3.8.2 - - - - - x pyiron/0.2.6-hpcugent-2022b-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2022-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2021-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2020-intel-2020a-Python-3.8.2 - x x - x -"}, {"location": "available_software/detail/pymatgen/", "title": "pymatgen", "text": ""}, {"location": "available_software/detail/pymatgen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pymatgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pymatgen, load one of these modules using a module load command like:

          module load pymatgen/2022.9.21-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pymatgen/2022.9.21-foss-2022a x x x - x x pymatgen/2022.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/pymbar/", "title": "pymbar", "text": ""}, {"location": "available_software/detail/pymbar/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pymbar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pymbar, load one of these modules using a module load command like:

          module load pymbar/3.0.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pymbar/3.0.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pymca/", "title": "pymca", "text": ""}, {"location": "available_software/detail/pymca/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pymca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pymca, load one of these modules using a module load command like:

          module load pymca/5.6.3-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pymca/5.6.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/pyobjcryst/", "title": "pyobjcryst", "text": ""}, {"location": "available_software/detail/pyobjcryst/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyobjcryst, load one of these modules using a module load command like:

          module load pyobjcryst/2.2.1-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyobjcryst/2.2.1-intel-2020a-Python-3.8.2 - - - - - x pyobjcryst/2.2.1-foss-2021b x x x - x x pyobjcryst/2.1.0.post2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyodbc/", "title": "pyodbc", "text": ""}, {"location": "available_software/detail/pyodbc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyodbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyodbc, load one of these modules using a module load command like:

          module load pyodbc/4.0.39-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyodbc/4.0.39-foss-2022b x x x x x x"}, {"location": "available_software/detail/pyparsing/", "title": "pyparsing", "text": ""}, {"location": "available_software/detail/pyparsing/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyparsing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyparsing, load one of these modules using a module load command like:

          module load pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/pyproj/", "title": "pyproj", "text": ""}, {"location": "available_software/detail/pyproj/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyproj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyproj, load one of these modules using a module load command like:

          module load pyproj/3.6.0-GCCcore-12.3.0\n
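
          As an illustration of using the package once the module is loaded, a one-line coordinate transformation (a minimal sketch; the EPSG codes and coordinates are arbitrary examples, and the module is assumed to pull in a matching Python):

          module load pyproj/3.6.0-GCCcore-12.3.0\npython -c "from pyproj import Transformer; print(Transformer.from_crs('EPSG:4326', 'EPSG:3857').transform(51.2, 4.4))"  # WGS84 latitude/longitude to Web Mercator coordinates\n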

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyproj/3.6.0-GCCcore-12.3.0 x x x x x x pyproj/3.5.0-GCCcore-12.2.0 x x x x x x pyproj/3.4.0-GCCcore-11.3.0 x x x x x x pyproj/3.3.1-GCCcore-11.2.0 x x x - x x pyproj/3.0.1-GCCcore-10.2.0 - x x x x x pyproj/2.6.1.post1-GCCcore-9.3.0-Python-3.8.2 - x x - x x pyproj/2.4.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyro-api/", "title": "pyro-api", "text": ""}, {"location": "available_software/detail/pyro-api/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyro-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyro-api, load one of these modules using a module load command like:

          module load pyro-api/0.1.2-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyro-api/0.1.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pyro-ppl/", "title": "pyro-ppl", "text": ""}, {"location": "available_software/detail/pyro-ppl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyro-ppl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyro-ppl, load one of these modules using a module load command like:

          module load pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0 x - x - x - pyro-ppl/1.8.4-foss-2022a x x x x x x pyro-ppl/1.5.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pysamstats/", "title": "pysamstats", "text": ""}, {"location": "available_software/detail/pysamstats/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pysamstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pysamstats, load one of these modules using a module load command like:

          module load pysamstats/1.1.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pysamstats/1.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pysndfx/", "title": "pysndfx", "text": ""}, {"location": "available_software/detail/pysndfx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pysndfx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pysndfx, load one of these modules using a module load command like:

          module load pysndfx/0.3.6-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pysndfx/0.3.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyspoa/", "title": "pyspoa", "text": ""}, {"location": "available_software/detail/pyspoa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyspoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pyspoa, load one of these modules using a module load command like:

          module load pyspoa/0.0.9-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyspoa/0.0.9-GCC-11.3.0 x x x x x x pyspoa/0.0.8-GCC-11.2.0 x x x - x x pyspoa/0.0.8-GCC-10.3.0 x x x - x x pyspoa/0.0.8-GCC-10.2.0 - x x x x x pyspoa/0.0.4-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pytest-flakefinder/", "title": "pytest-flakefinder", "text": ""}, {"location": "available_software/detail/pytest-flakefinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pytest-flakefinder, load one of these modules using a module load command like:

          module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pytest-rerunfailures/", "title": "pytest-rerunfailures", "text": ""}, {"location": "available_software/detail/pytest-rerunfailures/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pytest-rerunfailures, load one of these modules using a module load command like:

          module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x pytest-rerunfailures/12.0-GCCcore-12.2.0 x x x x x x pytest-rerunfailures/11.1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-shard/", "title": "pytest-shard", "text": ""}, {"location": "available_software/detail/pytest-shard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pytest-shard, load one of these modules using a module load command like:

          module load pytest-shard/0.1.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x pytest-shard/0.1.2-GCCcore-12.2.0 x x x x x x pytest-shard/0.1.2-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-xdist/", "title": "pytest-xdist", "text": ""}, {"location": "available_software/detail/pytest-xdist/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-xdist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pytest-xdist, load one of these modules using a module load command like:

          module load pytest-xdist/3.3.1-GCCcore-12.3.0\n
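
          Once loaded, this plugin adds the -n option to pytest, so a test suite can be spread over several worker processes (a minimal sketch; it assumes pytest itself is available in your environment, and tests/ is a placeholder for your own test directory):

          module load pytest-xdist/3.3.1-GCCcore-12.3.0\npytest -n 4 tests/  # run the tests in tests/ on 4 worker processes; tests/ is a placeholder path\n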

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-xdist/3.3.1-GCCcore-12.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.2.0 x - x - x - pytest-xdist/2.3.0-GCCcore-10.3.0 x x x x x x pytest-xdist/2.3.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/pytest/", "title": "pytest", "text": ""}, {"location": "available_software/detail/pytest/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pytest, load one of these modules using a module load command like:

          module load pytest/7.4.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest/7.4.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pythermalcomfort/", "title": "pythermalcomfort", "text": ""}, {"location": "available_software/detail/pythermalcomfort/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pythermalcomfort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pythermalcomfort, load one of these modules using a module load command like:

          module load pythermalcomfort/2.8.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pythermalcomfort/2.8.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-Levenshtein/", "title": "python-Levenshtein", "text": ""}, {"location": "available_software/detail/python-Levenshtein/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-Levenshtein, load one of these modules using a module load command like:

          module load python-Levenshtein/0.12.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-Levenshtein/0.12.1-foss-2020b - x x x x x python-Levenshtein/0.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-igraph/", "title": "python-igraph", "text": ""}, {"location": "available_software/detail/python-igraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-igraph, load one of these modules using a module load command like:

          module load python-igraph/0.11.4-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-igraph/0.11.4-foss-2023a x x x x x x python-igraph/0.10.3-foss-2022a x x x x x x python-igraph/0.9.8-foss-2021b x x x x x x python-igraph/0.9.6-foss-2021a x x x x x x python-igraph/0.9.0-fosscuda-2020b - - - - x - python-igraph/0.9.0-foss-2020b - x x x x x python-igraph/0.8.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/python-irodsclient/", "title": "python-irodsclient", "text": ""}, {"location": "available_software/detail/python-irodsclient/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-irodsclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-irodsclient, load one of these modules using a module load command like:

          module load python-irodsclient/1.1.4-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-irodsclient/1.1.4-GCCcore-11.2.0 x x x - x x python-irodsclient/1.1.4-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-isal/", "title": "python-isal", "text": ""}, {"location": "available_software/detail/python-isal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-isal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-isal, load one of these modules using a module load command like:

          module load python-isal/1.1.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-isal/1.1.0-GCCcore-11.3.0 x x x x x x python-isal/0.11.1-GCCcore-11.2.0 x x x - x x python-isal/0.11.1-GCCcore-10.2.0 - x x x x x python-isal/0.11.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-louvain/", "title": "python-louvain", "text": ""}, {"location": "available_software/detail/python-louvain/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-louvain, load one of these modules using a module load command like:

          module load python-louvain/0.16-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-louvain/0.16-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-parasail/", "title": "python-parasail", "text": ""}, {"location": "available_software/detail/python-parasail/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-parasail, load one of these modules using a module load command like:

          module load python-parasail/1.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-parasail/1.3.3-foss-2022a x x x x x x python-parasail/1.2.4-fosscuda-2020b - - - - x - python-parasail/1.2.4-foss-2021b x x x - x x python-parasail/1.2.4-foss-2021a x x x - x x python-parasail/1.2.2-intel-2020a-Python-3.8.2 - x x - x x python-parasail/1.2-intel-2019b-Python-3.7.4 - x x - x x python-parasail/1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-telegram-bot/", "title": "python-telegram-bot", "text": ""}, {"location": "available_software/detail/python-telegram-bot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-telegram-bot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-telegram-bot, load one of these modules using a module load command like:

          module load python-telegram-bot/20.0a0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-telegram-bot/20.0a0-GCCcore-10.2.0 x x x - x x"}, {"location": "available_software/detail/python-weka-wrapper3/", "title": "python-weka-wrapper3", "text": ""}, {"location": "available_software/detail/python-weka-wrapper3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-weka-wrapper3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using python-weka-wrapper3, load one of these modules using a module load command like:

          module load python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pythran/", "title": "pythran", "text": ""}, {"location": "available_software/detail/pythran/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pythran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using pythran, load one of these modules using a module load command like:

          module load pythran/0.9.4.post1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pythran/0.9.4.post1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qcat/", "title": "qcat", "text": ""}, {"location": "available_software/detail/qcat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which qcat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using qcat, load one of these modules using a module load command like:

          module load qcat/1.1.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty qcat/1.1.0-intel-2020a-Python-3.8.2 - x x - x x qcat/1.1.0-intel-2019b-Python-3.7.4 - x x - x x qcat/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qnorm/", "title": "qnorm", "text": ""}, {"location": "available_software/detail/qnorm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which qnorm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using qnorm, load one of these modules using a module load command like:

          module load qnorm/0.8.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty qnorm/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/rMATS-turbo/", "title": "rMATS-turbo", "text": ""}, {"location": "available_software/detail/rMATS-turbo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rMATS-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using rMATS-turbo, load one of these modules using a module load command like:

          module load rMATS-turbo/4.1.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rMATS-turbo/4.1.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/radian/", "title": "radian", "text": ""}, {"location": "available_software/detail/radian/#available-modules", "title": "Available modules", "text": "

          The overview below shows which radian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using radian, load one of these modules using a module load command like:

          module load radian/0.6.9-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty radian/0.6.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/rasterio/", "title": "rasterio", "text": ""}, {"location": "available_software/detail/rasterio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rasterio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using rasterio, load one of these modules using a module load command like:

          module load rasterio/1.3.8-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rasterio/1.3.8-foss-2022b x x x x x x rasterio/1.2.10-foss-2021b x x x - x x rasterio/1.1.7-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rasterstats/", "title": "rasterstats", "text": ""}, {"location": "available_software/detail/rasterstats/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rasterstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using rasterstats, load one of these modules using a module load command like:

          module load rasterstats/0.15.0-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rasterstats/0.15.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rclone/", "title": "rclone", "text": ""}, {"location": "available_software/detail/rclone/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rclone, load one of these modules using a module load command like:

          module load rclone/1.65.2\n
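
          To confirm the command is available once the module is loaded, you can print its version; this is a minimal check, not a full usage example:

          # rclone's built-in version subcommand\n
          rclone version\n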

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rclone/1.65.2 x x x x x x"}, {"location": "available_software/detail/re2c/", "title": "re2c", "text": ""}, {"location": "available_software/detail/re2c/#available-modules", "title": "Available modules", "text": "

          The overview below shows which re2c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using re2c, load one of these modules using a module load command like:

          module load re2c/3.1-GCCcore-12.3.0\n
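
          A quick way to verify that the lexer generator is on your PATH after loading the module is to ask it for its version:

          # prints the re2c release provided by the loaded module\n
          re2c --version\n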

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty re2c/3.1-GCCcore-12.3.0 x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x re2c/2.2-GCCcore-11.3.0 x x x x x x re2c/2.2-GCCcore-11.2.0 x x x x x x re2c/2.1.1-GCCcore-10.3.0 x x x x x x re2c/2.0.3-GCCcore-10.2.0 x x x x x x re2c/1.3-GCCcore-9.3.0 - x x - x x re2c/1.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/redis-py/", "title": "redis-py", "text": ""}, {"location": "available_software/detail/redis-py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which redis-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using redis-py, load one of these modules using a module load command like:

          module load redis-py/4.5.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty redis-py/4.5.1-foss-2022a x x x x x x redis-py/4.3.3-foss-2021b x x x - x x redis-py/4.3.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/regionmask/", "title": "regionmask", "text": ""}, {"location": "available_software/detail/regionmask/#available-modules", "title": "Available modules", "text": "

          The overview below shows which regionmask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using regionmask, load one of these modules using a module load command like:

          module load regionmask/0.10.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty regionmask/0.10.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/request/", "title": "request", "text": ""}, {"location": "available_software/detail/request/#available-modules", "title": "Available modules", "text": "

          The overview below shows which request installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using request, load one of these modules using a module load command like:

          module load request/2.88.1-fosscuda-2020b-nodejs-12.19.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty request/2.88.1-fosscuda-2020b-nodejs-12.19.0 - - - - x -"}, {"location": "available_software/detail/rethinking/", "title": "rethinking", "text": ""}, {"location": "available_software/detail/rethinking/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rethinking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rethinking, load one of these modules using a module load command like:

          module load rethinking/2.40-20230914-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rethinking/2.40-20230914-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/rgdal/", "title": "rgdal", "text": ""}, {"location": "available_software/detail/rgdal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rgdal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rgdal, load one of these modules using a module load command like:

          module load rgdal/1.5-23-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rgdal/1.5-23-foss-2021a-R-4.1.0 - x x - x x rgdal/1.5-23-foss-2020b-R-4.0.4 - x x x x x rgdal/1.5-16-foss-2020a-R-4.0.0 - x x - x x rgdal/1.4-8-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rgeos/", "title": "rgeos", "text": ""}, {"location": "available_software/detail/rgeos/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rgeos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rgeos, load one of these modules using a module load command like:

          module load rgeos/0.5-5-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rgeos/0.5-5-foss-2021a-R-4.1.0 - x x - x x rgeos/0.5-5-foss-2020a-R-4.0.0 - x x - x x rgeos/0.5-2-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rickflow/", "title": "rickflow", "text": ""}, {"location": "available_software/detail/rickflow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rickflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rickflow, load one of these modules using a module load command like:

          module load rickflow/0.7.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rickflow/0.7.0-intel-2019b-Python-3.7.4 - x x - x x rickflow/0.7.0-20200529-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rioxarray/", "title": "rioxarray", "text": ""}, {"location": "available_software/detail/rioxarray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rioxarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rioxarray, load one of these modules using a module load command like:

          module load rioxarray/0.11.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rioxarray/0.11.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/rjags/", "title": "rjags", "text": ""}, {"location": "available_software/detail/rjags/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rjags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rjags, load one of these modules using a module load command like:

          module load rjags/4-13-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rjags/4-13-foss-2022a-R-4.2.1 x x x x x x rjags/4-13-foss-2021b-R-4.2.0 x x x - x x rjags/4-10-foss-2020b-R-4.0.3 x x x x x x"}, {"location": "available_software/detail/rmarkdown/", "title": "rmarkdown", "text": ""}, {"location": "available_software/detail/rmarkdown/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rmarkdown installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rmarkdown, load one of these modules using a module load command like:

          module load rmarkdown/2.20-foss-2021a-R-4.1.0\n
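
          As a minimal sketch (assuming the matching R installation is loaded as a dependency of this module), you can check that the package is visible to R:

          # loads the package and prints session details, including attached package versions\n
          Rscript -e 'library(rmarkdown); sessionInfo()'\n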

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rmarkdown/2.20-foss-2021a-R-4.1.0 - x x x x x"}, {"location": "available_software/detail/rpy2/", "title": "rpy2", "text": ""}, {"location": "available_software/detail/rpy2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rpy2, load one of these modules using a module load command like:

          module load rpy2/3.5.10-foss-2022a\n
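
          A minimal check, assuming the Python and R dependencies of this module are loaded along with it:

          # import the Python-R bridge and print its version\n
          python -c 'import rpy2; print(rpy2.__version__)'\n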

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rpy2/3.5.10-foss-2022a x x x x x x rpy2/3.4.5-foss-2021b x x x x x x rpy2/3.4.5-foss-2021a x x x x x x rpy2/3.2.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rstanarm/", "title": "rstanarm", "text": ""}, {"location": "available_software/detail/rstanarm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rstanarm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rstanarm, load one of these modules using a module load command like:

          module load rstanarm/2.19.3-foss-2019b-R-3.6.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rstanarm/2.19.3-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rstudio/", "title": "rstudio", "text": ""}, {"location": "available_software/detail/rstudio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rstudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rstudio, load one of these modules using a module load command like:

          module load rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0 - x - - - -"}, {"location": "available_software/detail/ruamel.yaml/", "title": "ruamel.yaml", "text": ""}, {"location": "available_software/detail/ruamel.yaml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ruamel.yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ruamel.yaml, load one of these modules using a module load command like:

          module load ruamel.yaml/0.17.32-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ruamel.yaml/0.17.32-GCCcore-12.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ruffus/", "title": "ruffus", "text": ""}, {"location": "available_software/detail/ruffus/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ruffus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ruffus, load one of these modules using a module load command like:

          module load ruffus/2.8.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ruffus/2.8.4-foss-2021b x x x x x x"}, {"location": "available_software/detail/s3fs/", "title": "s3fs", "text": ""}, {"location": "available_software/detail/s3fs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which s3fs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using s3fs, load one of these modules using a module load command like:

          module load s3fs/2023.12.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty s3fs/2023.12.2-foss-2023a x x x x x x"}, {"location": "available_software/detail/samblaster/", "title": "samblaster", "text": ""}, {"location": "available_software/detail/samblaster/#available-modules", "title": "Available modules", "text": "

          The overview below shows which samblaster installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using samblaster, load one of these modules using a module load command like:

          module load samblaster/0.1.26-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty samblaster/0.1.26-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/samclip/", "title": "samclip", "text": ""}, {"location": "available_software/detail/samclip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which samclip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using samclip, load one of these modules using a module load command like:

          module load samclip/0.4.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty samclip/0.4.0-GCCcore-11.2.0 x x x - x x samclip/0.4.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/sansa/", "title": "sansa", "text": ""}, {"location": "available_software/detail/sansa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sansa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sansa, load one of these modules using a module load command like:

          module load sansa/0.0.7-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sansa/0.0.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/sbt/", "title": "sbt", "text": ""}, {"location": "available_software/detail/sbt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sbt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sbt, load one of these modules using a module load command like:

          module load sbt/1.3.13-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sbt/1.3.13-Java-1.8 - - x - x -"}, {"location": "available_software/detail/scArches/", "title": "scArches", "text": ""}, {"location": "available_software/detail/scArches/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scArches installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scArches, load one of these modules using a module load command like:

          module load scArches/0.5.6-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scArches/0.5.6-foss-2021a-CUDA-11.3.1 x - - - x - scArches/0.5.6-foss-2021a x x x x x x"}, {"location": "available_software/detail/scCODA/", "title": "scCODA", "text": ""}, {"location": "available_software/detail/scCODA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scCODA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scCODA, load one of these modules using a module load command like:

          module load scCODA/0.1.9-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scCODA/0.1.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/scGeneFit/", "title": "scGeneFit", "text": ""}, {"location": "available_software/detail/scGeneFit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scGeneFit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scGeneFit, load one of these modules using a module load command like:

          module load scGeneFit/1.0.2-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scGeneFit/1.0.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/scHiCExplorer/", "title": "scHiCExplorer", "text": ""}, {"location": "available_software/detail/scHiCExplorer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scHiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scHiCExplorer, load one of these modules using a module load command like:

          module load scHiCExplorer/7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scHiCExplorer/7-foss-2022a x x x x x x"}, {"location": "available_software/detail/scPred/", "title": "scPred", "text": ""}, {"location": "available_software/detail/scPred/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scPred installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scPred, load one of these modules using a module load command like:

          module load scPred/1.9.2-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scPred/1.9.2-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/scVelo/", "title": "scVelo", "text": ""}, {"location": "available_software/detail/scVelo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scVelo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scVelo, load one of these modules using a module load command like:

          module load scVelo/0.2.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scVelo/0.2.5-foss-2022a x x x x x x scVelo/0.2.3-foss-2021a - x x - x x scVelo/0.1.24-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scanpy/", "title": "scanpy", "text": ""}, {"location": "available_software/detail/scanpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scanpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scanpy, load one of these modules using a module load command like:

          module load scanpy/1.9.8-foss-2023a\n
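
          To verify the installation (a minimal sketch, assuming the toolchain's Python ends up on your PATH after loading the module):

          # scanpy exposes its version at the top level of the package\n
          python -c 'import scanpy; print(scanpy.__version__)'\n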

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scanpy/1.9.8-foss-2023a x x x x x x scanpy/1.9.1-foss-2022a x x x x x x scanpy/1.9.1-foss-2021b x x x x x x scanpy/1.8.2-foss-2021b x x x x x x scanpy/1.8.1-foss-2021a x x x x x x scanpy/1.8.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/sceasy/", "title": "sceasy", "text": ""}, {"location": "available_software/detail/sceasy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sceasy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sceasy, load one of these modules using a module load command like:

          module load sceasy/0.0.7-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sceasy/0.0.7-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/scib-metrics/", "title": "scib-metrics", "text": ""}, {"location": "available_software/detail/scib-metrics/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scib-metrics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scib-metrics, load one of these modules using a module load command like:

          module load scib-metrics/0.3.3-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scib-metrics/0.3.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scib/", "title": "scib", "text": ""}, {"location": "available_software/detail/scib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scib, load one of these modules using a module load command like:

          module load scib/1.1.3-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scib/1.1.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-bio/", "title": "scikit-bio", "text": ""}, {"location": "available_software/detail/scikit-bio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-bio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-bio, load one of these modules using a module load command like:

          module load scikit-bio/0.5.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-bio/0.5.7-foss-2022a x x x x x x scikit-bio/0.5.7-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-build/", "title": "scikit-build", "text": ""}, {"location": "available_software/detail/scikit-build/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-build, load one of these modules using a module load command like:

          module load scikit-build/0.17.6-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x scikit-build/0.17.2-GCCcore-12.2.0 x x x x x x scikit-build/0.15.0-GCCcore-11.3.0 x x x x x x scikit-build/0.11.1-fosscuda-2020b x - - - x - scikit-build/0.11.1-foss-2020b - x x x x x scikit-build/0.11.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/scikit-extremes/", "title": "scikit-extremes", "text": ""}, {"location": "available_software/detail/scikit-extremes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-extremes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-extremes, load one of these modules using a module load command like:

          module load scikit-extremes/2022.4.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-extremes/2022.4.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/scikit-image/", "title": "scikit-image", "text": ""}, {"location": "available_software/detail/scikit-image/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-image installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-image, load one of these modules using a module load command like:

          module load scikit-image/0.19.3-foss-2022a\n
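
          Note that the Python import name is skimage rather than scikit-image; a minimal check after loading the module (assuming the toolchain's Python is on your PATH) could look like this:

          # the scikit-image package is imported as skimage\n
          python -c 'import skimage; print(skimage.__version__)'\n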

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-image/0.19.3-foss-2022a x x x x x x scikit-image/0.19.1-foss-2021b x x x x x x scikit-image/0.18.3-foss-2021a x x x - x x scikit-image/0.18.1-fosscuda-2020b x - - - x - scikit-image/0.18.1-foss-2020b - x x x x x scikit-image/0.17.1-foss-2020a-Python-3.8.2 - x x - x x scikit-image/0.16.2-intel-2019b-Python-3.7.4 - x x - x x scikit-image/0.16.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scikit-learn/", "title": "scikit-learn", "text": ""}, {"location": "available_software/detail/scikit-learn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-learn, load one of these modules using a module load command like:

          module load scikit-learn/1.4.0-gfbf-2023b\n
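
          The Python import name is sklearn rather than scikit-learn; as a minimal sanity check after loading the module (assuming the toolchain's Python is on your PATH):

          # the scikit-learn package is imported as sklearn\n
          python -c 'import sklearn; print(sklearn.__version__)'\n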

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-learn/1.4.0-gfbf-2023b x x x x x x scikit-learn/1.3.2-gfbf-2023b x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x scikit-learn/1.2.1-gfbf-2022b x x x x x x scikit-learn/1.1.2-intel-2022a x x x x x x scikit-learn/1.1.2-foss-2022a x x x x x x scikit-learn/1.0.1-intel-2021b x x x - x x scikit-learn/1.0.1-foss-2021b x x x x x x scikit-learn/0.24.2-foss-2021a x x x x x x scikit-learn/0.23.2-intel-2020b - x x - x x scikit-learn/0.23.2-fosscuda-2020b x - - - x - scikit-learn/0.23.2-foss-2020b - x x x x x scikit-learn/0.23.1-intel-2020a-Python-3.8.2 x x x x x x scikit-learn/0.23.1-foss-2020a-Python-3.8.2 - x x - x x scikit-learn/0.21.3-intel-2019b-Python-3.7.4 - x x - x x scikit-learn/0.21.3-foss-2019b-Python-3.7.4 x x x - x x scikit-learn/0.20.4-intel-2019b-Python-2.7.16 - x x - x x scikit-learn/0.20.4-foss-2021b-Python-2.7.18 x x x x x x scikit-learn/0.20.4-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/scikit-misc/", "title": "scikit-misc", "text": ""}, {"location": "available_software/detail/scikit-misc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-misc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-misc, load one of these modules using a module load command like:

          module load scikit-misc/0.1.4-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-misc/0.1.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-optimize/", "title": "scikit-optimize", "text": ""}, {"location": "available_software/detail/scikit-optimize/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-optimize installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-optimize, load one of these modules using a module load command like:

          module load scikit-optimize/0.9.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-optimize/0.9.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/scipy/", "title": "scipy", "text": ""}, {"location": "available_software/detail/scipy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scipy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scipy, load one of these modules using a module load command like:

          module load scipy/1.4.1-foss-2019b-Python-3.7.4\n
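
          A minimal check after loading the module, assuming the Python interpreter from the module's toolchain is on your PATH:

          # print the SciPy version provided by the module\n
          python -c 'import scipy; print(scipy.__version__)'\n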

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scipy/1.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scrublet/", "title": "scrublet", "text": ""}, {"location": "available_software/detail/scrublet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scrublet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scrublet, load one of these modules using a module load command like:

          module load scrublet/0.2.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scrublet/0.2.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/scvi-tools/", "title": "scvi-tools", "text": ""}, {"location": "available_software/detail/scvi-tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scvi-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scvi-tools, load one of these modules using a module load command like:

          module load scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1 x - - - x - scvi-tools/0.16.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/segemehl/", "title": "segemehl", "text": ""}, {"location": "available_software/detail/segemehl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which segemehl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using segemehl, load one of these modules using a module load command like:

          module load segemehl/0.3.4-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty segemehl/0.3.4-GCC-11.2.0 x x x x x x segemehl/0.3.4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/segmentation-models/", "title": "segmentation-models", "text": ""}, {"location": "available_software/detail/segmentation-models/#available-modules", "title": "Available modules", "text": "

          The overview below shows which segmentation-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using segmentation-models, load one of these modules using a module load command like:

          module load segmentation-models/1.0.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty segmentation-models/1.0.1-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/semla/", "title": "semla", "text": ""}, {"location": "available_software/detail/semla/#available-modules", "title": "Available modules", "text": "

          The overview below shows which semla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using semla, load one of these modules using a module load command like:

          module load semla/1.1.6-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty semla/1.1.6-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/seqtk/", "title": "seqtk", "text": ""}, {"location": "available_software/detail/seqtk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which seqtk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using seqtk, load one of these modules using a module load command like:

          module load seqtk/1.4-GCC-12.3.0\n
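
          Running seqtk without arguments prints its usage summary (including the version number), which is a quick way to confirm that the module put the tool on your PATH; this is only a sanity check, not a processing example:

          # prints seqtk's usage and version summary\n
          seqtk\n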

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty seqtk/1.4-GCC-12.3.0 x x x x x x seqtk/1.3-GCC-11.2.0 x x x - x x seqtk/1.3-GCC-10.2.0 - x x x x x seqtk/1.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/setuptools-rust/", "title": "setuptools-rust", "text": ""}, {"location": "available_software/detail/setuptools-rust/#available-modules", "title": "Available modules", "text": "

          The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using setuptools-rust, load one of these modules using a module load command like:

          module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/setuptools/", "title": "setuptools", "text": ""}, {"location": "available_software/detail/setuptools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which setuptools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using setuptools, load one of these modules using a module load command like:

          module load setuptools/64.0.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty setuptools/64.0.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/sf/", "title": "sf", "text": ""}, {"location": "available_software/detail/sf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sf, load one of these modules using a module load command like:

          module load sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/shovill/", "title": "shovill", "text": ""}, {"location": "available_software/detail/shovill/#available-modules", "title": "Available modules", "text": "

          The overview below shows which shovill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using shovill, load one of these modules using a module load command like:

          module load shovill/1.1.0-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty shovill/1.1.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/silhouetteRank/", "title": "silhouetteRank", "text": ""}, {"location": "available_software/detail/silhouetteRank/#available-modules", "title": "Available modules", "text": "

          The overview below shows which silhouetteRank installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using silhouetteRank, load one of these modules using a module load command like:

          module load silhouetteRank/1.0.5.13-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty silhouetteRank/1.0.5.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/silx/", "title": "silx", "text": ""}, {"location": "available_software/detail/silx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which silx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using silx, load one of these modules using a module load command like:

          module load silx/0.14.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty silx/0.14.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/slepc4py/", "title": "slepc4py", "text": ""}, {"location": "available_software/detail/slepc4py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which slepc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using slepc4py, load one of these modules using a module load command like:

          module load slepc4py/3.17.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty slepc4py/3.17.2-foss-2022a x x x x x x slepc4py/3.15.1-foss-2021a - x x - x x slepc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/slow5tools/", "title": "slow5tools", "text": ""}, {"location": "available_software/detail/slow5tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which slow5tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using slow5tools, load one of these modules using a module load command like:

          module load slow5tools/0.4.0-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty slow5tools/0.4.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/slurm-drmaa/", "title": "slurm-drmaa", "text": ""}, {"location": "available_software/detail/slurm-drmaa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which slurm-drmaa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using slurm-drmaa, load one of these modules using a module load command like:

          module load slurm-drmaa/1.1.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty slurm-drmaa/1.1.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/smfishHmrf/", "title": "smfishHmrf", "text": ""}, {"location": "available_software/detail/smfishHmrf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which smfishHmrf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using smfishHmrf, load one of these modules using a module load command like:

          module load smfishHmrf/1.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty smfishHmrf/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/smithwaterman/", "title": "smithwaterman", "text": ""}, {"location": "available_software/detail/smithwaterman/#available-modules", "title": "Available modules", "text": "

          The overview below shows which smithwaterman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using smithwaterman, load one of these modules using a module load command like:

          module load smithwaterman/20160702-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty smithwaterman/20160702-GCCcore-11.3.0 x x x x x x smithwaterman/20160702-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/smooth-topk/", "title": "smooth-topk", "text": ""}, {"location": "available_software/detail/smooth-topk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which smooth-topk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using smooth-topk, load one of these modules using a module load command like:

          module load smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1 x - - - x - smooth-topk/1.0-20210817-foss-2021a - x x - x x"}, {"location": "available_software/detail/snakemake/", "title": "snakemake", "text": ""}, {"location": "available_software/detail/snakemake/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snakemake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using snakemake, load one of these modules using a module load command like:

          module load snakemake/8.4.2-foss-2023a\n
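
          A minimal check that the workflow engine is available after loading the module:

          # prints the snakemake release provided by the loaded module\n
          snakemake --version\n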

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snakemake/8.4.2-foss-2023a x x x x x x snakemake/7.32.3-foss-2022b x x x x x x snakemake/7.22.0-foss-2022a x x x x x x snakemake/7.18.2-foss-2021b x x x - x x snakemake/6.10.0-foss-2021b x x x - x x snakemake/6.1.0-foss-2020b - x x x x x snakemake/5.26.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/snappy/", "title": "snappy", "text": ""}, {"location": "available_software/detail/snappy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snappy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using snappy, load one of these modules using a module load command like:

          module load snappy/1.1.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snappy/1.1.10-GCCcore-12.3.0 x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x snappy/1.1.9-GCCcore-11.3.0 x x x x x x snappy/1.1.9-GCCcore-11.2.0 x x x x x x snappy/1.1.8-GCCcore-10.3.0 x x x x x x snappy/1.1.8-GCCcore-10.2.0 x x x x x x snappy/1.1.8-GCCcore-9.3.0 - x x - x x snappy/1.1.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/snippy/", "title": "snippy", "text": ""}, {"location": "available_software/detail/snippy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snippy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using snippy, load one of these modules using a module load command like:

          module load snippy/4.6.0-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snippy/4.6.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/snp-sites/", "title": "snp-sites", "text": ""}, {"location": "available_software/detail/snp-sites/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snp-sites installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using snp-sites, load one of these modules using a module load command like:

          module load snp-sites/2.5.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snp-sites/2.5.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/snpEff/", "title": "snpEff", "text": ""}, {"location": "available_software/detail/snpEff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snpEff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using snpEff, load one of these modules using a module load command like:

          module load snpEff/5.0e-GCCcore-10.2.0-Java-13\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snpEff/5.0e-GCCcore-10.2.0-Java-13 - x x - x x"}, {"location": "available_software/detail/solo/", "title": "solo", "text": ""}, {"location": "available_software/detail/solo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which solo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using solo, load one of these modules using a module load command like:

          module load solo/1.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty solo/1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/sonic/", "title": "sonic", "text": ""}, {"location": "available_software/detail/sonic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sonic, load one of these modules using a module load command like:

          module load sonic/20180202-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sonic/20180202-gompi-2020a - x x - x x"}, {"location": "available_software/detail/spaCy/", "title": "spaCy", "text": ""}, {"location": "available_software/detail/spaCy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spaCy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using spaCy, load one of these modules using a module load command like:

          module load spaCy/3.4.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spaCy/3.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/spaln/", "title": "spaln", "text": ""}, {"location": "available_software/detail/spaln/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spaln installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using spaln, load one of these modules using a module load command like:

          module load spaln/2.4.13f-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spaln/2.4.13f-GCC-11.3.0 x x x x x x spaln/2.4.12-GCC-11.2.0 x x x x x x spaln/2.4.12-GCC-10.2.0 x x x x x x spaln/2.4.03-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/sparse-neighbors-search/", "title": "sparse-neighbors-search", "text": ""}, {"location": "available_software/detail/sparse-neighbors-search/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sparse-neighbors-search installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sparse-neighbors-search, load one of these modules using a module load command like:

          module load sparse-neighbors-search/0.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sparse-neighbors-search/0.7-foss-2022a x x x x x x"}, {"location": "available_software/detail/sparsehash/", "title": "sparsehash", "text": ""}, {"location": "available_software/detail/sparsehash/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sparsehash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sparsehash, load one of these modules using a module load command like:

          module load sparsehash/2.0.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sparsehash/2.0.4-GCCcore-12.3.0 x x x x x x sparsehash/2.0.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/spatialreg/", "title": "spatialreg", "text": ""}, {"location": "available_software/detail/spatialreg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spatialreg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using spatialreg, load one of these modules using a module load command like:

          module load spatialreg/1.1-8-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spatialreg/1.1-8-foss-2021a-R-4.1.0 - x x - x x spatialreg/1.1-5-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/speech_tools/", "title": "speech_tools", "text": ""}, {"location": "available_software/detail/speech_tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which speech_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using speech_tools, load one of these modules using a module load command like:

          module load speech_tools/2.5.0-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty speech_tools/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/spglib-python/", "title": "spglib-python", "text": ""}, {"location": "available_software/detail/spglib-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spglib-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using spglib-python, load one of these modules using a module load command like:

          module load spglib-python/2.0.0-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spglib-python/2.0.0-intel-2022a x x x x x x spglib-python/2.0.0-foss-2022a x x x x x x spglib-python/1.16.3-intel-2021b x x x - x x spglib-python/1.16.3-foss-2021b x x x - x x spglib-python/1.16.1-gomkl-2021a x x x x x x spglib-python/1.16.0-intel-2020a-Python-3.8.2 x x x x x x spglib-python/1.16.0-fosscuda-2020b - - - - x - spglib-python/1.16.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/spoa/", "title": "spoa", "text": ""}, {"location": "available_software/detail/spoa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using spoa, load one of these modules using a module load command like:

          module load spoa/4.0.7-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spoa/4.0.7-GCC-11.3.0 x x x x x x spoa/4.0.7-GCC-11.2.0 x x x - x x spoa/4.0.7-GCC-10.3.0 x x x - x x spoa/4.0.7-GCC-10.2.0 - x x x x x spoa/4.0.0-GCC-8.3.0 - x x - x x spoa/3.4.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/stardist/", "title": "stardist", "text": ""}, {"location": "available_software/detail/stardist/#available-modules", "title": "Available modules", "text": "

          The overview below shows which stardist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using stardist, load one of these modules using a module load command like:

          module load stardist/0.8.3-foss-2021b-CUDA-11.4.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty stardist/0.8.3-foss-2021b-CUDA-11.4.1 x - - - x - stardist/0.8.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/stars/", "title": "stars", "text": ""}, {"location": "available_software/detail/stars/#available-modules", "title": "Available modules", "text": "

          The overview below shows which stars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using stars, load one of these modules using a module load command like:

          module load stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/statsmodels/", "title": "statsmodels", "text": ""}, {"location": "available_software/detail/statsmodels/#available-modules", "title": "Available modules", "text": "

          The overview below shows which statsmodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using statsmodels, load one of these modules using a module load command like:

          module load statsmodels/0.14.1-gfbf-2023a\n
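
          Once the module is loaded, a minimal illustrative Python sketch (toy data, using only the standard statsmodels OLS API) could be:

          import numpy as np
          import statsmodels.api as sm

          # ordinary least squares on random toy data
          X = sm.add_constant(np.random.rand(100, 2))
          y = X @ np.array([1.0, 2.0, 3.0]) + np.random.normal(size=100)
          print(sm.OLS(y, X).fit().summary())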

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty statsmodels/0.14.1-gfbf-2023a x x x x x x statsmodels/0.14.0-gfbf-2022b x x x x x x statsmodels/0.13.1-intel-2021b x x x - x x statsmodels/0.13.1-foss-2022a x x x x x x statsmodels/0.13.1-foss-2021b x x x x x x statsmodels/0.12.2-foss-2021a x x x x x x statsmodels/0.12.1-intel-2020b - x x - x x statsmodels/0.12.1-fosscuda-2020b - - - - x - statsmodels/0.12.1-foss-2020b - x x x x x statsmodels/0.11.1-intel-2020a-Python-3.8.2 - x x - x x statsmodels/0.11.0-intel-2019b-Python-3.7.4 - x x - x x statsmodels/0.11.0-foss-2019b-Python-3.7.4 - x x - x x statsmodels/0.9.0-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/suave/", "title": "suave", "text": ""}, {"location": "available_software/detail/suave/#available-modules", "title": "Available modules", "text": "

          The overview below shows which suave installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using suave, load one of these modules using a module load command like:

          module load suave/20160529-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty suave/20160529-foss-2020b - x x x x x"}, {"location": "available_software/detail/supernova/", "title": "supernova", "text": ""}, {"location": "available_software/detail/supernova/#available-modules", "title": "Available modules", "text": "

          The overview below shows which supernova installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using supernova, load one of these modules using a module load command like:

          module load supernova/2.0.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty supernova/2.0.1 - - - - - x"}, {"location": "available_software/detail/swissknife/", "title": "swissknife", "text": ""}, {"location": "available_software/detail/swissknife/#available-modules", "title": "Available modules", "text": "

          The overview below shows which swissknife installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using swissknife, load one of these modules using a module load command like:

          module load swissknife/1.80-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty swissknife/1.80-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/sympy/", "title": "sympy", "text": ""}, {"location": "available_software/detail/sympy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sympy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sympy, load one of these modules using a module load command like:

          module load sympy/1.12-gfbf-2023a\n
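
          A short illustrative Python sketch (using only core sympy functions) after loading the module:

          import sympy as sp

          x = sp.symbols("x")
          expr = sp.sin(x) * sp.exp(x)
          print(sp.diff(expr, x))       # symbolic derivative
          print(sp.integrate(expr, x))  # symbolic antiderivative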

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sympy/1.12-gfbf-2023a x x x x x x sympy/1.12-gfbf-2022b x x x x x x sympy/1.11.1-intel-2022a x x x x x x sympy/1.11.1-foss-2022a x x x - x x sympy/1.10.1-intel-2022a x x x x x x sympy/1.10.1-foss-2022a x x x - x x sympy/1.9-intel-2021b x x x x x x sympy/1.9-foss-2021b x x x - x x sympy/1.7.1-foss-2020b - x x x x x sympy/1.6.2-foss-2020a-Python-3.8.2 - x x - x x sympy/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/synapseclient/", "title": "synapseclient", "text": ""}, {"location": "available_software/detail/synapseclient/#available-modules", "title": "Available modules", "text": "

          The overview below shows which synapseclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using synapseclient, load one of these modules using a module load command like:

          module load synapseclient/3.0.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty synapseclient/3.0.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/synthcity/", "title": "synthcity", "text": ""}, {"location": "available_software/detail/synthcity/#available-modules", "title": "Available modules", "text": "

          The overview below shows which synthcity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using synthcity, load one of these modules using a module load command like:

          module load synthcity/0.2.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty synthcity/0.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/tMAE/", "title": "tMAE", "text": ""}, {"location": "available_software/detail/tMAE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tMAE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tMAE, load one of these modules using a module load command like:

          module load tMAE/1.0.0-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tMAE/1.0.0-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/tabixpp/", "title": "tabixpp", "text": ""}, {"location": "available_software/detail/tabixpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tabixpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tabixpp, load one of these modules using a module load command like:

          module load tabixpp/1.1.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tabixpp/1.1.2-GCC-11.3.0 x x x x x x tabixpp/1.1.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/task-spooler/", "title": "task-spooler", "text": ""}, {"location": "available_software/detail/task-spooler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which task-spooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using task-spooler, load one of these modules using a module load command like:

          module load task-spooler/1.0.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty task-spooler/1.0.2-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/taxator-tk/", "title": "taxator-tk", "text": ""}, {"location": "available_software/detail/taxator-tk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which taxator-tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using taxator-tk, load one of these modules using a module load command like:

          module load taxator-tk/1.3.3-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty taxator-tk/1.3.3-gompi-2020b - x - - - - taxator-tk/1.3.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/tbb/", "title": "tbb", "text": ""}, {"location": "available_software/detail/tbb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tbb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tbb, load one of these modules using a module load command like:

          module load tbb/2021.5.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tbb/2021.5.0-GCCcore-11.3.0 x x x x x x tbb/2020.3-GCCcore-11.2.0 x x x x x x tbb/2020.3-GCCcore-10.3.0 - x x - x x tbb/2020.3-GCCcore-10.2.0 - x x x x x tbb/2020.1-GCCcore-9.3.0 - x x - x x tbb/2019_U9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tbl2asn/", "title": "tbl2asn", "text": ""}, {"location": "available_software/detail/tbl2asn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tbl2asn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tbl2asn, load one of these modules using a module load command like:

          module load tbl2asn/20220427-linux64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tbl2asn/20220427-linux64 - x x x x x tbl2asn/25.8-linux64 - - - - - x"}, {"location": "available_software/detail/tcsh/", "title": "tcsh", "text": ""}, {"location": "available_software/detail/tcsh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tcsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tcsh, load one of these modules using a module load command like:

          module load tcsh/6.24.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tcsh/6.24.10-GCCcore-12.3.0 x x x x x x tcsh/6.22.04-GCCcore-10.3.0 x - - - x - tcsh/6.22.03-GCCcore-10.2.0 - x x x x x tcsh/6.22.02-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tensorboard/", "title": "tensorboard", "text": ""}, {"location": "available_software/detail/tensorboard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tensorboard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tensorboard, load one of these modules using a module load command like:

          module load tensorboard/2.10.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tensorboard/2.10.0-foss-2022a x x x x x x tensorboard/2.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/tensorboardX/", "title": "tensorboardX", "text": ""}, {"location": "available_software/detail/tensorboardX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tensorboardX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tensorboardX, load one of these modules using a module load command like:

          module load tensorboardX/2.6.2.2-foss-2023a\n
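
          As an illustrative sketch (assuming the standard tensorboardX SummaryWriter API; the log directory name is arbitrary), scalar logging could look like:

          from tensorboardX import SummaryWriter

          writer = SummaryWriter(logdir="runs/demo")  # hypothetical output directory
          for step in range(10):
              writer.add_scalar("loss", 1.0 / (step + 1), step)
          writer.close()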

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tensorboardX/2.6.2.2-foss-2023a x x x x x x tensorboardX/2.6.2.2-foss-2022b x x x x x x tensorboardX/2.5.1-foss-2022a x x x x x x tensorboardX/2.2-fosscuda-2020b-PyTorch-1.7.1 - - - - x - tensorboardX/2.2-foss-2020b-PyTorch-1.7.1 - x x x x x tensorboardX/2.1-fosscuda-2020b-PyTorch-1.7.1 - - - - x -"}, {"location": "available_software/detail/tensorflow-probability/", "title": "tensorflow-probability", "text": ""}, {"location": "available_software/detail/tensorflow-probability/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tensorflow-probability installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tensorflow-probability, load one of these modules using a module load command like:

          module load tensorflow-probability/0.19.0-foss-2022a\n
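
          A minimal illustrative sketch (assuming the usual tensorflow_probability distributions API, running in TensorFlow's default eager mode):

          import tensorflow_probability as tfp

          dist = tfp.distributions.Normal(loc=0.0, scale=1.0)
          samples = dist.sample(5)       # draw five samples
          print(dist.log_prob(samples))  # log-density of those samples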

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tensorflow-probability/0.19.0-foss-2022a x x x x x x tensorflow-probability/0.14.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/texinfo/", "title": "texinfo", "text": ""}, {"location": "available_software/detail/texinfo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which texinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using texinfo, load one of these modules using a module load command like:

          module load texinfo/6.7-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty texinfo/6.7-GCCcore-9.3.0 - x x - x x texinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/texlive/", "title": "texlive", "text": ""}, {"location": "available_software/detail/texlive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which texlive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using texlive, load one of these modules using a module load command like:

          module load texlive/20230313-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty texlive/20230313-GCC-12.3.0 x x x x x x texlive/20210324-GCC-11.2.0 - x x - x x"}, {"location": "available_software/detail/tidymodels/", "title": "tidymodels", "text": ""}, {"location": "available_software/detail/tidymodels/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tidymodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tidymodels, load one of these modules using a module load command like:

          module load tidymodels/1.1.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tidymodels/1.1.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/time/", "title": "time", "text": ""}, {"location": "available_software/detail/time/#available-modules", "title": "Available modules", "text": "

          The overview below shows which time installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using time, load one of these modules using a module load command like:

          module load time/1.9-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty time/1.9-GCCcore-10.2.0 - x x x x x time/1.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/timm/", "title": "timm", "text": ""}, {"location": "available_software/detail/timm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which timm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using timm, load one of these modules using a module load command like:

          module load timm/0.9.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty timm/0.9.2-foss-2022a-CUDA-11.7.0 x - - - x - timm/0.6.13-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/tmux/", "title": "tmux", "text": ""}, {"location": "available_software/detail/tmux/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tmux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tmux, load one of these modules using a module load command like:

          module load tmux/3.2a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tmux/3.2a - x x - x x"}, {"location": "available_software/detail/tokenizers/", "title": "tokenizers", "text": ""}, {"location": "available_software/detail/tokenizers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tokenizers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tokenizers, load one of these modules using a module load command like:

          module load tokenizers/0.13.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tokenizers/0.13.3-GCCcore-12.2.0 x x x x x x tokenizers/0.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/torchaudio/", "title": "torchaudio", "text": ""}, {"location": "available_software/detail/torchaudio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchaudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using torchaudio, load one of these modules using a module load command like:

          module load torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0 x - x - x - torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchtext/", "title": "torchtext", "text": ""}, {"location": "available_software/detail/torchtext/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchtext installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using torchtext, load one of these modules using a module load command like:

          module load torchtext/0.14.1-foss-2022a-PyTorch-1.12.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchtext/0.14.1-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchvf/", "title": "torchvf", "text": ""}, {"location": "available_software/detail/torchvf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchvf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using torchvf, load one of these modules using a module load command like:

          module load torchvf/0.1.3-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchvf/0.1.3-foss-2022a-CUDA-11.7.0 x - - - x - torchvf/0.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/torchvision/", "title": "torchvision", "text": ""}, {"location": "available_software/detail/torchvision/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchvision installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using torchvision, load one of these modules using a module load command like:

          module load torchvision/0.14.1-foss-2022b\n
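
          An illustrative sketch (for the newer torchvision releases listed below that accept the weights= argument; no pretrained weights are downloaded):

          import torch
          from torchvision import models

          model = models.resnet18(weights=None)  # randomly initialised network
          model.eval()
          x = torch.randn(1, 3, 224, 224)        # dummy image batch
          with torch.no_grad():
              print(model(x).shape)              # torch.Size([1, 1000])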

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchvision/0.14.1-foss-2022b x x x x x x torchvision/0.13.1-foss-2022a-CUDA-11.7.0 x - x - x - torchvision/0.13.1-foss-2022a x x x x x x torchvision/0.11.3-foss-2021a - x x - x x torchvision/0.11.1-foss-2021a-CUDA-11.3.1 x - - - x - torchvision/0.11.1-foss-2021a - x x - x x torchvision/0.8.2-fosscuda-2020b-PyTorch-1.7.1 x - - - x - torchvision/0.8.2-foss-2020b-PyTorch-1.7.1 - x x x x x torchvision/0.7.0-foss-2019b-Python-3.7.4-PyTorch-1.6.0 - - x - x x"}, {"location": "available_software/detail/tornado/", "title": "tornado", "text": ""}, {"location": "available_software/detail/tornado/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tornado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tornado, load one of these modules using a module load command like:

          module load tornado/6.3.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tornado/6.3.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/tqdm/", "title": "tqdm", "text": ""}, {"location": "available_software/detail/tqdm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tqdm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tqdm, load one of these modules using a module load command like:

          module load tqdm/4.66.1-GCCcore-12.3.0\n
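
          A minimal illustrative sketch of wrapping a loop with a tqdm progress bar:

          import time
          from tqdm import tqdm

          for _ in tqdm(range(100), desc="processing"):
              time.sleep(0.01)  # stand-in for real work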

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tqdm/4.66.1-GCCcore-12.3.0 x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x tqdm/4.64.0-GCCcore-11.3.0 x x x x x x tqdm/4.62.3-GCCcore-11.2.0 x x x x x x tqdm/4.61.2-GCCcore-10.3.0 x x x x x x tqdm/4.60.0-GCCcore-10.2.0 - x x - x x tqdm/4.56.2-GCCcore-10.2.0 x x x x x x tqdm/4.47.0-GCCcore-9.3.0 x x x x x x tqdm/4.41.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/treatSens/", "title": "treatSens", "text": ""}, {"location": "available_software/detail/treatSens/#available-modules", "title": "Available modules", "text": "

          The overview below shows which treatSens installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using treatSens, load one of these modules using a module load command like:

          module load treatSens/3.0-20201002-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty treatSens/3.0-20201002-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/trimAl/", "title": "trimAl", "text": ""}, {"location": "available_software/detail/trimAl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which trimAl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using trimAl, load one of these modules using a module load command like:

          module load trimAl/1.4.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty trimAl/1.4.1-GCCcore-12.3.0 x x x x x x trimAl/1.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/tsne/", "title": "tsne", "text": ""}, {"location": "available_software/detail/tsne/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tsne installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using tsne, load one of these modules using a module load command like:

          module load tsne/0.1.8-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tsne/0.1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/typing-extensions/", "title": "typing-extensions", "text": ""}, {"location": "available_software/detail/typing-extensions/#available-modules", "title": "Available modules", "text": "

          The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using typing-extensions, load one of these modules using a module load command like:

          module load typing-extensions/4.9.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.9.0-GCCcore-12.2.0 x x x x x x typing-extensions/4.8.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.3.0-GCCcore-11.3.0 x x x x x x typing-extensions/3.10.0.2-GCCcore-11.2.0 x x x x x x typing-extensions/3.10.0.0-GCCcore-10.3.0 x x x x x x typing-extensions/3.7.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/umap-learn/", "title": "umap-learn", "text": ""}, {"location": "available_software/detail/umap-learn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which umap-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using umap-learn, load one of these modules using a module load command like:

          module load umap-learn/0.5.5-foss-2023a\n
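
          An illustrative sketch (random toy data, assuming the standard umap.UMAP estimator API):

          import numpy as np
          import umap

          X = np.random.rand(500, 20)  # toy high-dimensional data
          embedding = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(X)
          print(embedding.shape)       # (500, 2)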

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty umap-learn/0.5.5-foss-2023a x x x x x x umap-learn/0.5.3-foss-2022a x x x x x x umap-learn/0.5.3-foss-2021a x x x x x x umap-learn/0.4.6-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/umi4cPackage/", "title": "umi4cPackage", "text": ""}, {"location": "available_software/detail/umi4cPackage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which umi4cPackage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using umi4cPackage, load one of these modules using a module load command like:

          module load umi4cPackage/20200116-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty umi4cPackage/20200116-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/uncertainties/", "title": "uncertainties", "text": ""}, {"location": "available_software/detail/uncertainties/#available-modules", "title": "Available modules", "text": "

          The overview below shows which uncertainties installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using uncertainties, load one of these modules using a module load command like:

          module load uncertainties/3.1.7-foss-2021b\n
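
          A short illustrative sketch of automatic error propagation with the uncertainties package:

          from uncertainties import ufloat, umath

          x = ufloat(2.0, 0.1)  # 2.0 +/- 0.1
          y = ufloat(1.0, 0.2)  # 1.0 +/- 0.2
          print(x * y)          # uncertainty propagated automatically
          print(umath.sqrt(x))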

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty uncertainties/3.1.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/uncertainty-calibration/", "title": "uncertainty-calibration", "text": ""}, {"location": "available_software/detail/uncertainty-calibration/#available-modules", "title": "Available modules", "text": "

          The overview below shows which uncertainty-calibration installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using uncertainty-calibration, load one of these modules using a module load command like:

          module load uncertainty-calibration/0.0.9-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty uncertainty-calibration/0.0.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/unimap/", "title": "unimap", "text": ""}, {"location": "available_software/detail/unimap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which unimap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using unimap, load one of these modules using a module load command like:

          module load unimap/0.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty unimap/0.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/unixODBC/", "title": "unixODBC", "text": ""}, {"location": "available_software/detail/unixODBC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which unixODBC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using unixODBC, load one of these modules using a module load command like:

          module load unixODBC/2.3.11-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty unixODBC/2.3.11-foss-2022b x x x x x x"}, {"location": "available_software/detail/utf8proc/", "title": "utf8proc", "text": ""}, {"location": "available_software/detail/utf8proc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which utf8proc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using utf8proc, load one of these modules using a module load command like:

          module load utf8proc/2.8.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x utf8proc/2.7.0-GCCcore-11.3.0 x x x x x x utf8proc/2.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/util-linux/", "title": "util-linux", "text": ""}, {"location": "available_software/detail/util-linux/#available-modules", "title": "Available modules", "text": "

          The overview below shows which util-linux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using util-linux, load one of these modules using a module load command like:

          module load util-linux/2.39-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty util-linux/2.39-GCCcore-12.3.0 x x x x x x util-linux/2.38.1-GCCcore-12.2.0 x x x x x x util-linux/2.38-GCCcore-11.3.0 x x x x x x util-linux/2.37-GCCcore-11.2.0 x x x x x x util-linux/2.36-GCCcore-10.3.0 x x x x x x util-linux/2.36-GCCcore-10.2.0 x x x x x x util-linux/2.35-GCCcore-9.3.0 x x x x x x util-linux/2.34-GCCcore-8.3.0 x x x - x x util-linux/2.33-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/vConTACT2/", "title": "vConTACT2", "text": ""}, {"location": "available_software/detail/vConTACT2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vConTACT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vConTACT2, load one of these modules using a module load command like:

          module load vConTACT2/0.11.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vConTACT2/0.11.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/vaeda/", "title": "vaeda", "text": ""}, {"location": "available_software/detail/vaeda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vaeda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vaeda, load one of these modules using a module load command like:

          module load vaeda/0.0.30-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vaeda/0.0.30-foss-2022a x x x x x x"}, {"location": "available_software/detail/vbz_compression/", "title": "vbz_compression", "text": ""}, {"location": "available_software/detail/vbz_compression/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vbz_compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vbz_compression, load one of these modules using a module load command like:

          module load vbz_compression/1.0.1-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vbz_compression/1.0.1-gompi-2020b - x - - - -"}, {"location": "available_software/detail/vcflib/", "title": "vcflib", "text": ""}, {"location": "available_software/detail/vcflib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vcflib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vcflib, load one of these modules using a module load command like:

          module load vcflib/1.0.9-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vcflib/1.0.9-foss-2022a-R-4.2.1 x x x x x x vcflib/1.0.2-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/velocyto/", "title": "velocyto", "text": ""}, {"location": "available_software/detail/velocyto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which velocyto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using velocyto, load one of these modules using a module load command like:

          module load velocyto/0.17.17-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty velocyto/0.17.17-intel-2020a-Python-3.8.2 - x x - x x velocyto/0.17.17-foss-2022a x x x x x x"}, {"location": "available_software/detail/virtualenv/", "title": "virtualenv", "text": ""}, {"location": "available_software/detail/virtualenv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which virtualenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using virtualenv, load one of these modules using a module load command like:

          module load virtualenv/20.24.6-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/vispr/", "title": "vispr", "text": ""}, {"location": "available_software/detail/vispr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vispr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vispr, load one of these modules using a module load command like:

          module load vispr/0.4.14-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vispr/0.4.14-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessce-python/", "title": "vitessce-python", "text": ""}, {"location": "available_software/detail/vitessce-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vitessce-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vitessce-python, load one of these modules using a module load command like:

          module load vitessce-python/20230222-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vitessce-python/20230222-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessceR/", "title": "vitessceR", "text": ""}, {"location": "available_software/detail/vitessceR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vitessceR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vitessceR, load one of these modules using a module load command like:

          module load vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/vsc-mympirun/", "title": "vsc-mympirun", "text": ""}, {"location": "available_software/detail/vsc-mympirun/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vsc-mympirun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vsc-mympirun, load one of these modules using a module load command like:

          module load vsc-mympirun/5.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vsc-mympirun/5.3.1 x x x x x x vsc-mympirun/5.3.0 x x x x x x vsc-mympirun/5.2.11 x x x x x x vsc-mympirun/5.2.10 x x x - x x vsc-mympirun/5.2.9 x x x - x x vsc-mympirun/5.2.7 x x x - x x vsc-mympirun/5.2.6 x x x - x x vsc-mympirun/5.2.5 - x - - - - vsc-mympirun/5.2.4 - x - - - - vsc-mympirun/5.2.3 - x - - - - vsc-mympirun/5.2.2 - x - - - - vsc-mympirun/5.2.0 - x - - - - vsc-mympirun/5.1.0 - x - - - -"}, {"location": "available_software/detail/vt/", "title": "vt", "text": ""}, {"location": "available_software/detail/vt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using vt, load one of these modules using a module load command like:

          module load vt/0.57721-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vt/0.57721-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/wandb/", "title": "wandb", "text": ""}, {"location": "available_software/detail/wandb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wandb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wandb, load one of these modules using a module load command like:

          module load wandb/0.13.6-GCC-11.3.0\n
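
          An illustrative sketch (run in offline mode so no account or API key is needed; the project name is arbitrary):

          import wandb

          run = wandb.init(project="demo", mode="offline")  # hypothetical project name
          for step in range(10):
              wandb.log({"loss": 1.0 / (step + 1)})
          run.finish()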

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wandb/0.13.6-GCC-11.3.0 x x x - x x wandb/0.13.4-GCCcore-11.3.0 - - x - x -"}, {"location": "available_software/detail/waves2Foam/", "title": "waves2Foam", "text": ""}, {"location": "available_software/detail/waves2Foam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which waves2Foam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using waves2Foam, load one of these modules using a module load command like:

          module load waves2Foam/20200703-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty waves2Foam/20200703-foss-2019b - x x - x x"}, {"location": "available_software/detail/wget/", "title": "wget", "text": ""}, {"location": "available_software/detail/wget/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wget, load one of these modules using a module load command like:

          module load wget/1.21.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wget/1.21.1-GCCcore-10.3.0 - x x x x x wget/1.20.3-GCCcore-10.2.0 x x x x x x wget/1.20.3-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/wgsim/", "title": "wgsim", "text": ""}, {"location": "available_software/detail/wgsim/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wgsim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wgsim, load one of these modules using a module load command like:

          module load wgsim/20111017-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wgsim/20111017-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/worker/", "title": "worker", "text": ""}, {"location": "available_software/detail/worker/#available-modules", "title": "Available modules", "text": "

          The overview below shows which worker installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using worker, load one of these modules using a module load command like:

          module load worker/1.6.13-iimpi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty worker/1.6.13-iimpi-2022b x x x x x x worker/1.6.13-iimpi-2021b x x x - x x worker/1.6.12-foss-2021b x x x - x x worker/1.6.11-intel-2019b - x x - x x"}, {"location": "available_software/detail/wpebackend-fdo/", "title": "wpebackend-fdo", "text": ""}, {"location": "available_software/detail/wpebackend-fdo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wpebackend-fdo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wpebackend-fdo, load one of these modules using a module load command like:

          module load wpebackend-fdo/1.13.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wpebackend-fdo/1.13.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/wrapt/", "title": "wrapt", "text": ""}, {"location": "available_software/detail/wrapt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wrapt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wrapt, load one of these modules using a module load command like:

          module load wrapt/1.15.0-gfbf-2023a\n
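
          A minimal illustrative sketch of a transparent decorator built with wrapt:

          import wrapt

          @wrapt.decorator
          def log_calls(wrapped, instance, args, kwargs):
              # report the call before delegating to the wrapped function
              print(f"calling {wrapped.__name__}")
              return wrapped(*args, **kwargs)

          @log_calls
          def add(a, b):
              return a + b

          print(add(1, 2))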

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wrapt/1.15.0-gfbf-2023a x x x x x x wrapt/1.15.0-foss-2022b x x x x x x wrapt/1.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/wrf-python/", "title": "wrf-python", "text": ""}, {"location": "available_software/detail/wrf-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wrf-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wrf-python, load one of these modules using a module load command like:

          module load wrf-python/1.3.4.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wrf-python/1.3.4.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/wtdbg2/", "title": "wtdbg2", "text": ""}, {"location": "available_software/detail/wtdbg2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wtdbg2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wtdbg2, load one of these modules using a module load command like:

          module load wtdbg2/2.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wtdbg2/2.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/wxPython/", "title": "wxPython", "text": ""}, {"location": "available_software/detail/wxPython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wxPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wxPython, load one of these modules using a module load command like:

          module load wxPython/4.2.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wxPython/4.2.0-foss-2021b x x x x x x wxPython/4.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/wxWidgets/", "title": "wxWidgets", "text": ""}, {"location": "available_software/detail/wxWidgets/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wxWidgets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using wxWidgets, load one of these modules using a module load command like:

          module load wxWidgets/3.2.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wxWidgets/3.2.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/x264/", "title": "x264", "text": ""}, {"location": "available_software/detail/x264/#available-modules", "title": "Available modules", "text": "

          The overview below shows which x264 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using x264, load one of these modules using a module load command like:

          module load x264/20230226-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty x264/20230226-GCCcore-12.3.0 x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x x264/20220620-GCCcore-11.3.0 x x x x x x x264/20210613-GCCcore-11.2.0 x x x x x x x264/20210414-GCCcore-10.3.0 x x x x x x x264/20201026-GCCcore-10.2.0 x x x x x x x264/20191217-GCCcore-9.3.0 - x x - x x x264/20190925-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/x265/", "title": "x265", "text": ""}, {"location": "available_software/detail/x265/#available-modules", "title": "Available modules", "text": "

          The overview below shows which x265 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using x265, load one of these modules using a module load command like:

          module load x265/3.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty x265/3.5-GCCcore-12.3.0 x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x x265/3.5-GCCcore-11.3.0 x x x x x x x265/3.5-GCCcore-11.2.0 x x x x x x x265/3.5-GCCcore-10.3.0 x x x x x x x265/3.3-GCCcore-10.2.0 x x x x x x x265/3.3-GCCcore-9.3.0 - x x - x x x265/3.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/xESMF/", "title": "xESMF", "text": ""}, {"location": "available_software/detail/xESMF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xESMF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using xESMF, load one of these modules using a module load command like:

          module load xESMF/0.3.0-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xESMF/0.3.0-intel-2020b - x x - x x xESMF/0.3.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/xarray/", "title": "xarray", "text": ""}, {"location": "available_software/detail/xarray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using xarray, load one of these modules using a module load command like:

          module load xarray/2023.9.0-gfbf-2023a\n
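
          An illustrative sketch with a small labelled array (toy data only):

          import numpy as np
          import xarray as xr

          da = xr.DataArray(np.random.rand(4, 3), dims=("time", "space"),
                            coords={"time": range(4)})
          print(da.mean(dim="time"))  # reduce over the labelled 'time' dimension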

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xarray/2023.9.0-gfbf-2023a x x x x x x xarray/2023.4.2-gfbf-2022b x x x x x x xarray/2022.6.0-foss-2022a x x x x x x xarray/0.20.1-intel-2021b x x x - x x xarray/0.20.1-foss-2021b x x x x x x xarray/0.19.0-foss-2021a x x x x x x xarray/0.16.2-intel-2020b - x x - x x xarray/0.16.2-fosscuda-2020b - - - - x - xarray/0.16.1-foss-2020a-Python-3.8.2 - x x - x x xarray/0.15.1-intel-2019b-Python-3.7.4 - x x - x x xarray/0.15.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/xorg-macros/", "title": "xorg-macros", "text": ""}, {"location": "available_software/detail/xorg-macros/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using xorg-macros, load one of these modules using a module load command like:

          module load xorg-macros/1.20.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-10.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-10.2.0 x x x x x x xorg-macros/1.19.2-GCCcore-9.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/xprop/", "title": "xprop", "text": ""}, {"location": "available_software/detail/xprop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xprop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using xprop, load one of these modules using a module load command like:

          module load xprop/1.2.5-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xprop/1.2.5-GCCcore-10.2.0 - x x x x x xprop/1.2.4-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/xproto/", "title": "xproto", "text": ""}, {"location": "available_software/detail/xproto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xproto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using xproto, load one of these modules using a module load command like:

          module load xproto/7.0.31-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xproto/7.0.31-GCCcore-10.3.0 - x x - x x xproto/7.0.31-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/xtb/", "title": "xtb", "text": ""}, {"location": "available_software/detail/xtb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xtb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using xtb, load one of these modules using a module load command like:

          module load xtb/6.6.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xtb/6.6.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/xxd/", "title": "xxd", "text": ""}, {"location": "available_software/detail/xxd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xxd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using xxd, load one of these modules using a module load command like:

          module load xxd/9.0.2112-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xxd/9.0.2112-GCCcore-12.3.0 x x x x x x xxd/9.0.1696-GCCcore-12.2.0 x x x x x x xxd/8.2.4220-GCCcore-11.3.0 x x x x x x xxd/8.2.4220-GCCcore-11.2.0 x x x - x x xxd/8.2.4220-GCCcore-10.3.0 - - - x - - xxd/8.2.4220-GCCcore-10.2.0 - - - x - -"}, {"location": "available_software/detail/yaff/", "title": "yaff", "text": ""}, {"location": "available_software/detail/yaff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which yaff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using yaff, load one of these modules using a module load command like:

          module load yaff/1.6.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty yaff/1.6.0-intel-2020a-Python-3.8.2 x x x x x x yaff/1.6.0-intel-2019b-Python-3.7.4 - x x - x x yaff/1.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/yaml-cpp/", "title": "yaml-cpp", "text": ""}, {"location": "available_software/detail/yaml-cpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which yaml-cpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using yaml-cpp, load one of these modules using a module load command like:

          module load yaml-cpp/0.7.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty yaml-cpp/0.7.0-GCCcore-12.3.0 x x x x x x yaml-cpp/0.7.0-GCCcore-11.2.0 x x x - x x yaml-cpp/0.6.3-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/zUMIs/", "title": "zUMIs", "text": ""}, {"location": "available_software/detail/zUMIs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zUMIs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zUMIs, load one of these modules using a module load command like:

          module load zUMIs/2.9.7-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zUMIs/2.9.7-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/zarr/", "title": "zarr", "text": ""}, {"location": "available_software/detail/zarr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zarr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zarr, load one of these modules using a module load command like:

          module load zarr/2.16.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zarr/2.16.0-foss-2022b x x x x x x zarr/2.13.3-foss-2022a x x x x x x zarr/2.13.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/zfp/", "title": "zfp", "text": ""}, {"location": "available_software/detail/zfp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zfp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zfp, load one of these modules using a module load command like:

          module load zfp/1.0.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zfp/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib-ng/", "title": "zlib-ng", "text": ""}, {"location": "available_software/detail/zlib-ng/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zlib-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zlib-ng, load one of these modules using a module load command like:

          module load zlib-ng/2.0.7-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zlib-ng/2.0.7-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib/", "title": "zlib", "text": ""}, {"location": "available_software/detail/zlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zlib, load one of these modules using a module load command like:

          module load zlib/1.2.13-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zlib/1.2.13-GCCcore-13.2.0 x x x x x x zlib/1.2.13-GCCcore-12.3.0 x x x x x x zlib/1.2.13 x x x x x x zlib/1.2.12-GCCcore-12.2.0 x x x x x x zlib/1.2.12-GCCcore-11.3.0 x x x x x x zlib/1.2.12 x x x x x x zlib/1.2.11-GCCcore-11.2.0 x x x x x x zlib/1.2.11-GCCcore-10.3.0 x x x x x x zlib/1.2.11-GCCcore-10.2.0 x x x x x x zlib/1.2.11-GCCcore-9.3.0 x x x x x x zlib/1.2.11-GCCcore-8.3.0 x x x x x x zlib/1.2.11-GCCcore-8.2.0 - x - - - - zlib/1.2.11 x x x x x x"}, {"location": "available_software/detail/zstd/", "title": "zstd", "text": ""}, {"location": "available_software/detail/zstd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zstd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zstd, load one of these modules using a module load command like:

          module load zstd/1.5.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zstd/1.5.5-GCCcore-13.2.0 x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x zstd/1.5.2-GCCcore-11.3.0 x x x x x x zstd/1.5.0-GCCcore-11.2.0 x x x x x x zstd/1.4.9-GCCcore-10.3.0 x x x x x x zstd/1.4.5-GCCcore-10.2.0 x x x x x x zstd/1.4.4-GCCcore-9.3.0 - x x x x x zstd/1.4.4-GCCcore-8.3.0 x - - - x -"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          Or, to check whether some specific software, compiler or application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
          "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": "

          Everyone can get access to and use the HPC-UGent supercomputing infrastructure and services. The conditions that apply depend on your affiliation.

          "}, {"location": "sites/hpc_policies/#access-for-staff-and-academics", "title": "Access for staff and academics", "text": ""}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-flemish-university-associations", "title": "Researchers and staff affiliated with Flemish university associations", "text": "
          • Includes externally funded researchers registered in the personnel database (FWO, SBO, VIB, IMEC, etc.).

          • Includes researchers from all VSC partners.

          • Usage is free of charge.

          • Use your account credentials at your affiliated university to request a VSC-id and connect.

          • See Getting an HPC Account.

          "}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-other-flemish-or-federal-research-institutes", "title": "Researchers and staff affiliated with other Flemish or federal research institutes", "text": "
          • Includes researchers from e.g. INBO, ILVO, RBINS, etc.

          • HPC-UGent promotes using the Tier1 services of the VSC.

          • HPC-UGent can act as a liaison.

          "}, {"location": "sites/hpc_policies/#students", "title": "Students", "text": "
          • Students can also use HPC-UGent (Bachelor or Master, enrolled in an institute mentioned above).

          • Same conditions apply, free of charge for all Flemish university associations.

          • Use your university account credentials to request a VSC-id and connect.

          "}, {"location": "sites/hpc_policies/#access-for-industry", "title": "Access for industry", "text": "

          Researchers and developers from industry can use the services and infrastructure tailored to industry from VSC.

          "}, {"location": "sites/hpc_policies/#our-offer", "title": "Our offer", "text": "
          • VSC has a dedicated service geared towards industry.

          • HPC-UGent can act as a liaison to the VSC services.

          "}, {"location": "sites/hpc_policies/#research-partnership", "title": "Research partnership:", "text": "
          • Interested in collaborating in supercomputing with a UGent research group?

          • We can help you look for a collaborative partner. Contact hpc@ugent.be.

          "}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          Or, to check whether some specific software, compiler or application (e.g., LAMMPS) is installed on the HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          Since you may not know the exact capitalisation of the module name, we performed a case-insensitive search using the \"-i\" option.

          "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          Or, to check whether some specific software, compiler or application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
          "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

          (more info soon)

          "}]} \ No newline at end of file +{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the HPC-UGent documentation", "text": "

          Use the menu on the left to navigate, or use the search box on the top right.

          You are viewing documentation intended for people using Linux.

          Use the OS dropdown in the top bar to switch to a different operating system.

          Quick links

          • Getting Started | Getting Access
          • Recording of HPC-UGent intro
          • Linux Tutorial
          • Hardware overview
          • Migration of cluster and login nodes to RHEL9 (starting Sept'24)
          • FAQ | Troubleshooting | Best practices | Known issues

          If you find any problems in this documentation, please report them by mail to hpc@ugent.be or open a pull request.

          If you still have any questions, you can contact the HPC-UGent team.

          "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": "

          New users should consult the Introduction to HPC to get started, which is a great resource for learning the basics, troubleshooting, and looking up specifics.

          If you want to use software that's not yet installed on the HPC, send us a software installation request.

          Overview of HPC-UGent Tier-2 infrastructure

          "}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

          An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.
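
          As a minimal sketch of such a scaling test (myjob.sh is a hypothetical job script; adjust the resource requests to your situation), you could submit the same script with increasing core counts and compare the execution times afterwards:

          # submit the same (hypothetical) job script with an increasing number of cores\nqsub -l nodes=1:ppn=4 myjob.sh\nqsub -l nodes=1:ppn=8 myjob.sh\nqsub -l nodes=1:ppn=16 myjob.sh\n# compare the execution times afterwards: once doubling the cores no longer roughly halves the runtime, stop increasing the core count\n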

          See also: Running batch jobs.

          "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

          When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

          Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

          Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.

          If the package or library you want is not available, send us a software installation request.

          "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

          Modules each come with a suffix that describes the toolchain used to install them.

          Examples:

          • AlphaFold/2.2.2-foss-2021a

          • tqdm/4.61.2-GCCcore-10.3.0

          • Python/3.9.5-GCCcore-10.3.0

          • matplotlib/3.4.2-foss-2021a

          Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

          The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          You can use module avail [search_text] to see which versions on which toolchains are available to use.

          If you need something that's not available yet, you can request it through a software installation request.

          It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.
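
          For example (reusing the Python module listed above purely as an illustration), prefer an explicit version and toolchain over a bare module name:

          # explicit version and toolchain: reproducible, and compatible with other GCCcore-10.3.0 modules\nmodule load Python/3.9.5-GCCcore-10.3.0\n# avoid this: whichever default happens to be installed at that time will be loaded\n# module load Python\n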

          "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

          When incompatible modules are loaded, you might encounter an error like this:

          Lmod has detected the following error: A different version of the 'GCC' module\nis already loaded (see output of 'ml').\n

          You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

          Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

          An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          See also: How do I choose the job modules?

          "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

          The 72 hour walltime limit will not be extended. However, you can work around this barrier:

          • Check that all available resources are being used. See also:
            • How many cores/nodes should I request?.
            • My job is slow.
            • My job isn't using any GPUs.
          • Use a faster cluster.
          • Divide the job into more parallel processes.
          • Divide the job into shorter processes, which you can submit as separate jobs (see the sketch after this list).
          • Use the built-in checkpointing of your software.
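
          A sketch combining the last two approaches: part1.sh and part2.sh are hypothetical job scripts, where part1.sh writes a checkpoint that part2.sh resumes from, and the second job is only started once the first one finished successfully (the job ID shown is made up):

          $ qsub part1.sh\n12345.master.mycluster.gent.vsc\n$ qsub -W depend=afterok:12345.master.mycluster.gent.vsc part2.sh\n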
          "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

          Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

          When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

          Try requesting a bit more memory than your proportional share, and see if that solves the issue.
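
          As a hedged sketch (the exact resource names and values to use are described in Specifying memory requirements; 20gb is just an illustrative value), a job that needs 8 cores but a larger share of memory than it would get by default could request memory explicitly in its job script:

          #PBS -l nodes=1:ppn=8\n#PBS -l mem=20gb\n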

          See also: Specifying memory requirements.

          "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

          When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

          It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are working on a compute node of that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

          See also: Running interactive jobs.

          "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

          Only two clusters have GPUs. Check out the infrastructure overview to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

          Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fossCUDA toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

          See also: HPC-UGent GPU clusters.

          "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

          There are a few possible causes why a job can perform worse than expected.

          Is your job using all the available cores you've requested? You can test this by increasing and decreasing the core amount: If the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

          Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

          Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example how to do this: The job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
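
          A minimal sketch of this copy-in/compute/copy-out pattern in a job script (input.dat, output.dat and my_program are hypothetical names):

          cd $VSC_SCRATCH\n# stage the input data from the (slower) data filesystem to the (faster) scratch filesystem\ncp $VSC_DATA/input.dat .\n# run the computation on scratch\n./my_program input.dat output.dat\n# copy the results back to the data filesystem\ncp output.dat $VSC_DATA/\n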

          "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

          Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.
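
          A minimal MPI job script along these lines could look as follows (my_mpi_program is a hypothetical executable; adjust the resource requests to your needs):

          #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=1:00:00\n# load mympirun as a module\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\n# mympirun determines the number of MPI processes from the resources of the job\nmympirun ./my_mpi_program\n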

          To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.

          See also: Multi core jobs/Parallel Computing and Mympirun.

          "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

          For example, we have a simple script (./hello.sh):

          #!/bin/bash \necho \"hello world\"\n

          And we run it like mympirun ./hello.sh --output output.txt.

          To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

          mympirun --output output.txt ./hello.sh\n
          "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

          See the explanation about how jobs get prioritized in When will my job start.

          "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

          When trying to create files, errors like this can occur:

          No space left on device\n

          The error \"No space left on device\" can mean two different things:

          • all available storage quota on the file system in question has been used;
          • the inode limit has been reached on that file system.

          An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

          Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
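
          For example, a directory containing many small files can be packed into a single compressed tar file, which only uses one inode (olddata is a hypothetical directory name; only remove the original after verifying the archive):

          # pack the directory into a single compressed archive\ntar czf olddata.tar.gz olddata/\n# verify that the archive can be read, then remove the original directory\ntar tzf olddata.tar.gz > /dev/null && rm -rf olddata/\n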

          If the problem persists, feel free to contact support.

          "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

          NO. You are not allowed to share your VSC account with anyone else; it is strictly personal.

          See https://helpdesk.ugent.be/account/en/regels.php.

          If you want to share data, there are alternatives (like shared directories in VO space, see Virtual organisations).

          "}, {"location": "FAQ/#can-i-share-my-data-with-other-hpc-users", "title": "Can I share my data with other HPC users?", "text": "

          Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

          $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc40000 mygroup      40 Apr 12 15:00 dataset.txt\n

          For more information about chmod or setfacl, see Linux tutorial.

          "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

          Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

          "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

          Please fill out the details about the software and why you need it in this form: https://www.ugent.be/hpc/en/support/software-installation-request. When submitting the form, a mail will be sent to hpc@ugent.be containing all the provided information. The HPC team will look into your request as soon as possible and contact you when the installation is done or if further information is required.

          If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
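
          As a rough sketch of the virtual environment approach (the module version, directory and package name are only examples; see the linked page for the full instructions):

          # load a Python module, then create and activate a virtual environment on $VSC_DATA\nmodule load Python/3.9.5-GCCcore-10.3.0\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\n# install the package you need into the virtual environment\npip install some-package\n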

          "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

          On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

          MacOS & Linux (on Windows, only the second part is shown):

          @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

          Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

          "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

          A Virtual Organisation consists of a number of members and moderators. A moderator can:

          • Manage the VO members (but can't access/remove their data on the system).

          • See how much storage each member has used, and set limits per member.

          • Request additional storage for the VO.

          One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

          See also: Virtual Organisations.

          "}, {"location": "FAQ/#my-ugent-shared-drives-dont-show-up", "title": "My UGent shared drives don't show up", "text": "

          After mounting the UGent shared drives with kinit your_email@ugent.be, you might not see an entry with your username when listing ls /UGent. This is normal: try ls /UGent/your_username or cd /UGent/your_username, and you should be able to access the drives. Be sure to use your UGent username and not your VSC username here.

          See also: Your UGent home drive and shares.

          "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

          Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

          du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

          The du command returns the size of each file and subdirectory directly under the $VSC_HOME directory (because of --max-depth 1). This output is then piped into an egrep to filter the lines to the ones that matter the most.

          The egrep command only lets through the entries that match the specified regular expression [0-9]{3}M|[0-9]G, i.e., files and subdirectories that consume at least 100 MB.

          "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

          By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

          You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

          "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

          When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

          sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

          A lot of tasks can be performed without sudo, including installing software in your own account.

          Installing software

          • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
          • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
          "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

          Who can I contact?

          • General questions regarding HPC-UGent and VSC: hpc@ugent.be

          • HPC-UGent Tier-2: hpc@ugent.be

          • VSC Tier-1 compute: compute@vscentrum.be

          • VSC Tier-1 cloud: cloud@vscentrum.be

          "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

          Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

          "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

          The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

          "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

          Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

          module load hod\n
          "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

          The hod modules are constructed such that they can be used on the HPC-UGent infrastructure login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

          As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

          For example, this will work as expected:

          $ module swap cluster/donphan\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

          Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

          "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

          The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

          $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

          By defining these environment variables, you do not have to specify --hod-module and --workdir yourself when using hod batch or hod create, even though these options are strictly required.

          If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
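
          For example, to use a (hypothetical) subdirectory of your scratch space as the parent working directory for both hod batch and hod create:

          export HOD_BATCH_WORKDIR=$VSC_SCRATCH/hod_workdir\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/hod_workdir\n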

          Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

          "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

          After HOD clusters terminate, their local working directory and cluster information are typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

          These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

          You should occasionally clean this up using hod clean:

          $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/doduo(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        123456         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/123456 for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/donphan\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.donphan.gent.vsc &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.donphan.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
          Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

          "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

          If you have any questions, or are experiencing problems using HOD, you have a couple of options:

          • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

          • Contact the HPC-UGent team via hpc@ugent.be

          • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

          "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

          Note

          To run a MATLAB program on the HPC-UGent infrastructure you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

          Compiling MATLAB programs is only possible on the interactive debug cluster, not on the HPC-UGent login nodes where resource limits w.r.t. memory and max. number of processes are too strict.

          "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

          The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

          Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

          Only a limited number of MATLAB sessions can be active at the same time because there are only a limited number of MATLAB research licenses available on the UGent MATLAB license server. If each job would need a license, licenses would quickly run out.

          "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

          Compiling MATLAB code can only be done from the login nodes, because only login nodes can access the MATLAB license server; workernodes on clusters cannot.

          To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

          $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

          After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

          To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

          First, we copy the magicsquare.m example that comes with MATLAB to example.m:

          cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

          To compile a MATLAB program, use mcc -mv:

          mcc -mv example.m\nOpening log file:  /user/home/gent/vsc400/vsc40000/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/home/gent/vsc400/vsc40000/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/home/gent/vsc400/vsc40000/readme.txt\".\nGenerating file \"run_example.sh\".\n
          "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

          To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

          It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

          For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

          "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

          If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

          export _JAVA_OPTIONS=\"-Xmx64M\"\n

          The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

          Another possible issue is that the heap size is too small. This could result in errors like:

          Error: Out of memory\n

          A possible solution to this is by setting the maximum heap size to be bigger:

          export _JAVA_OPTIONS=\"-Xmx512M\"\n
          "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

          MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

          The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

          You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

          parpool.m
          % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

          See also the parpool documentation.

          "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

          Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

          MATLAB_LOG_DIR=<OUTPUT_DIR>\n

          where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

          # create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\nexport MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

          You should remove the directory at the end of your job script:

          rm -rf $MATLAB_LOG_DIR\n
          "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

          When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

          The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

          export MCR_CACHE_ROOT=/tmp/testdirectory \nexport MCR_CACHE_SIZE=1024M \n

          So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

          "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

          All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

          jobscript.sh
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
          "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

          VNC is still available at the UGent site, but we encourage our users to replace VNC with the X2Go client. Please see Graphical applications with X2Go for more information.

          Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

          Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

          "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

          First log in to the login node (see First time connection to the HPC infrastructure), then start vncserver with:

          $ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'gligar07.gastly.os:6 (vsc40000)' desktop is gligar07.gastly.os:6\n\nCreating default startup script /user/home/gent/vsc400/vsc40000.vnc/xstartup\nCreating default config /user/home/gent/vsc400/vsc40000.vnc/config\nStarting applications specified in /user/home/gent/vsc400/vsc40000.vnc/xstartup\nLog file is /user/home/gent/vsc400/vsc40000.vnc/gligar07.gastly.os:6.log\n

          When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

          Note down the details in bold: the hostname (in the example: gligar07.gastly.os) and the (partial) port number (in the example: 6).

          It's important to remember that VNC sessions are permanent. They survive network problems and (unintended) connection loss. This means you can logout and go home without a problem (like the terminal equivalent screen or tmux). This also means you don't have to start vncserver each time you want to connect.

          "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

          You can get a list of running VNC servers on a node with

          $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

          This only displays the running VNC servers on the login node you run the command on.

          To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

          $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/gligar07.gastly.os:6.pid\n.vnc/gligar08.gastly.os:8.pid\n

          This shows that there is a VNC server running on gligar07.gastly.os on port 5906 and another one running on gligar08.gastly.os on port 5908 (see also Determining the source/destination port).

          "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

          The VNC server runs on a login node (in the example above, on gligar07.gastly.os).

          In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

          Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

          To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

          The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

          "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

          The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

          The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

          So, in our running example, both the source and destination ports are 5906.

          "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

          In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.ugent.be (see Setting up the SSH tunnel(s)).

          If the login node you end up on is a different one than the one where your VNC server is running (i.e., gligar08.gastly.os rather than gligar07.gastly.os in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

          In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

          To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

          Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to gligar07.gastly.os, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

          In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

          We will proceed with 12345 as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).
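
          If you prefer a value that falls in the suggested range right away, a small bash arithmetic expression will do (this is merely a convenience, it does not guarantee that the port is actually free):

          # prints a random number between 10000 and 29999\necho $((10000 + RANDOM % 20000))\n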

          "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcugentbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.ugent.be", "text": "

          First, we will set up the SSH tunnel from our workstation to login.hpc.ugent.be.

          Use the settings specified in the sections above:

          • source port: the port on which the VNC server is running (see Determining the source/destination port);

          • destination host: localhost;

          • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

          Execute the following command to set up the SSH tunnel.

          ssh -L 5906:localhost:12345  vsc40000@login.hpc.ugent.be\n

          Replace the source port 5906, destination port 12345 and user ID vsc40000 with your own!

          With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

          Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

          "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

          Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

          You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

          netstat -an | grep -i listen | grep tcp | grep 12345\n

          If you see no matching lines, then the port you picked is still available, and you can continue.

          If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

          $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
          "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

          In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.ugent.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (gligar07.gastly.os in our running example, see Starting a VNC server).

          To do this, run the following command:

          $ ssh -L 12345:localhost:5906 gligar07.gastly.os\n$ hostname\ngligar07.gastly.os\n

          With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (gligar07.gastly.os).

          Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

          Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (gligar07.gastly.os) in the command shown above!

          As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

          "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

          Download and setup a VNC client. A good choice is tigervnc. You can start it with the vncviewer command.

          Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

          When prompted for a password, use the password you used to setup the VNC server.

          When prompted for default or empty panel, choose default.

          If you have an empty panel, you can reset your settings with the following commands:

          xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
          "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

          The VNC server can be killed by running

          vncserver -kill :6\n

where 6 is the display number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

          "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd), and then starting the VNC server again (see Starting a VNC server).
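Put together, a minimal sketch of this procedure looks as follows (replace 6 with your own display number; how to start the VNC server again is described in Starting a VNC server):

vncserver -kill :6\nrm ~/.vnc/passwd\n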

          "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

          All users of AUGent can request an account on the HPC, which is part of the Flemish Supercomputing Centre (VSC).

          See HPC policies for more information on who is entitled to an account.

The VSC, short for Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish university associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

          There are two methods for connecting to HPC-UGent infrastructure:

          • Using a terminal to connect via SSH.
• Using the web portal.

          The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of using the HPC-UGent web portal by reading Using the HPC-UGent web portal.

The HPC-UGent infrastructure clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the HPC. Access to the HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

          "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
          • an SSH public/private key pair can be seen as a lock and a key

• the SSH public key is like a lock: you give it to the VSC and they put it on the door that gives access to your account.

          • the SSH private key is like a physical key: you don't hand it out to other people.

          • anyone who has the key (and the optional password) can unlock the door and log in to the account.

          • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

          Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). Launch a terminal from your desktop's application menu and you will see the bash shell. There are other shells, but most Linux distributions use bash by default.

          "}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

          Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

          \"Secure\" means that:

          1. the User is authenticated to the System; and

          2. the System is authenticated to the User; and

          3. all data is encrypted during transfer.

OpenSSH is a free implementation of the SSH connectivity protocol. Most Linux distributions include OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

          $ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

          To access the clusters and transfer your files, you will use the following commands:

          1. ssh-keygen: to generate the SSH key pair (public + private key);

          2. ssh: to open a shell on a remote machine;

          3. sftp: a secure equivalent of ftp;

          4. scp: a secure equivalent of the remote copy command rcp.

          "}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

A key pair might already be present in the default location inside your home directory. Therefore, we first check whether a key is available with the \"ls\" (list) command:

          ls ~/.ssh\n

          If a key-pair is already available, you would normally get:

          authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

          Otherwise, the command will show:

          ls: .ssh: No such file or directory\n

          You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

          You will need to generate a new key pair, when:

          1. you don't have a key pair yet

          2. you forgot the passphrase protecting your private key

          3. your private key was compromised

          4. your key pair is too short or not the right type

For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key; it is private and should stay private. You should not even copy it to one of your other machines; instead, you should create a new public/private key pair for each machine.

          ssh-keygen -t rsa -b 4096\n

This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

          Without your key pair, you won't be able to apply for a personal VSC account.

          "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

Most recent Unix derivatives include an SSH agent by default (\"gnome-keyring-daemon\" in most cases) to keep and manage the user's SSH keys. If you use one of these derivatives, you must add the new keys to the SSH agent keyring to be able to connect to the HPC cluster. If not, the SSH client will display an error message (see Connecting) similar to this:

          Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          This could be fixed using the ssh-add command. You can include the new private keys' identities in your keyring with:

          ssh-add\n

          Tip

Without extra options, ssh-add adds any key located in the $HOME/.ssh directory, but you can specify the path to a private key as an argument, for example: ssh-add /path/to/my/id_rsa.

          Check that your key is available from the keyring with:

          ssh-add -l\n

After these changes, the key agent will hold your SSH key and you can connect to the clusters as usual.

          Tip

You should execute the ssh-add command again if you generate a new SSH key.

          Visit https://wiki.gnome.org/Projects/GnomeKeyring/Ssh for more information.

          "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

          Visit https://account.vscentrum.be/

          You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

          Select \"UGent\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

          Click Confirm

          You will now be taken to the authentication page of your institute.

          You will now have to log in with CAS using your UGent account.

          You either have a login name of maximum 8 characters, or a (non-UGent) email address if you are an external user. In case of problems with your UGent password, please visit: https://password.ugent.be/. After logging in, you may be requested to share your information. Click \"Yes, continue\".

          After you log in using your UGent login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

          This file has been stored in the directory \"~/.ssh/\".

After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address, the VSC staff will review your request and, if applicable, approve your account.

          "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

          Within one day, you should receive a Welcome e-mail with your VSC account details.

          Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc40000\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

          Now, you can start using the HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

          "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

          In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

1. Create a new public/private SSH key pair from the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH (a minimal example command is sketched after this list).

          2. Go to https://account.vscentrum.be/django/account/edit

          3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

          4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

          5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.
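As a minimal sketch of step 1, you could generate the extra key pair on the new computer with a separate file name (the file name used here is just an example):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vsc_laptop\n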

          "}, {"location": "account/#computation-workflow-on-the-hpc", "title": "Computation Workflow on the HPC", "text": "

          A typical Computation workflow will be:

          1. Connect to the HPC

          2. Transfer your files to the HPC

          3. Compile your code and test it

          4. Create a job script

          5. Submit your job

          6. Wait while

            1. your job gets into the queue

            2. your job gets executed

            3. your job finishes

          7. Move your results

          We'll take you through the different tasks one by one in the following chapters.

          "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

          AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

See https://www.vscentrum.be/alphafold for more information; there you can also find a \"getting started\" video recording if you prefer that.

          "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

This chapter focuses specifically on the use of AlphaFold on the HPC-UGent infrastructure. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

          • AlphaFold website: https://alphafold.com/
          • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
          • AlphaFold FAQ: https://alphafold.com/faq
          • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
          • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
          • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
            • recording available on YouTube
            • slides available here (PDF)
            • see also https://www.vscentrum.be/alphafold
          "}, {"location": "alphafold/#using-alphafold-on-hpc-ugent-infrastructure", "title": "Using AlphaFold on HPC-UGent infrastructure", "text": "

          Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

          $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

          To use AlphaFold, you should load a particular module, for example:

          module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

          We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

          Warning

          When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

          Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

          $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

The directories located there indicate when the data was downloaded, which leaves room for providing updated datasets later.

At the time of writing, the latest version is 20230310.

          Info

          The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

The AlphaFold installations we provide have been modified slightly to facilitate their use on the HPC-UGent infrastructure.

          "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

The location of the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

          export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

          Use newest version

          Do not forget to replace 20230310 with a more up to date version if available.

          "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

AlphaFold provides a script called run_alphafold.py.

          A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

          The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer, and kalign are already correctly set, so options like --hhblits_binary_path are not required.

          For more information about the script and options see this section in the official README.

          READ README

          It is strongly advised to read the official README provided by DeepMind before continuing.

          "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

          The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

          Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

          Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
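For example, a minimal sketch of overriding both defaults in your job script (the core counts shown are just example values, not recommendations):

export ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=16\n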

          Info

          Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

          "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

          Using --db_preset=full_dbs, the following runtime data was collected:

          • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
          • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
          • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
          • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

          This highlights a couple of important attention points:

          • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
          • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
          • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

          With --db_preset=casp14, it is clearly more demanding:

          • On doduo, with 24 cores (1 node): still running after 48h...
          • On joltik, 1 V100 GPU + 8 cores: 4h 48min

          This highlights the difference between CPU and GPU performance even more.

          "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

The following example comes from the official Examples section in the AlphaFold README. The run command is slightly different (see above: Running AlphaFold).

          Do not forget to set up the environment (see above: Setting up the environment).

          "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

          Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

          >sequence_name\n<SEQUENCE>\n

          Then run the following command in the same directory:

          alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

          See AlphaFold output, for information about the outputs.

          Info

          For more scenarios see the example section in the official README.

          "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

          The following two example job scripts can be used as a starting point for running AlphaFold.

          The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

          To run the job scripts you need to create a file named T1050.fasta with the following content:

          >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
          source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

          "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

          Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

Swap to the joltik GPU cluster before submitting it:

          module swap cluster/joltik\n
          AlphaFold-gpu-joltik.sh
#!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
          "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

          Jobscript that runs AlphaFold on CPU using 24 cores on one node.

          AlphaFold-cpu-doduo.sh
#!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

In case of problems or questions, don't hesitate to contact us at hpc@ugent.be.

          "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

          Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

This documentation only covers aspects of using Apptainer on the HPC-UGent infrastructure.

          "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to avoid that the use of Apptainer impacts other users on the system.

          The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know via hpc@ugent.be.

          "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the HPC-UGent infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of making an Apptainer/Singularity container image:

# avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
          "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

          Create a job script like:

          #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

Create an example my_script.sh (the script referenced in the job script above):

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n
          "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

          Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
          #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before apptainer execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

For example, to compile an MPI program:

          module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

          Example MPI job script:

          #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
          "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
          1. Before starting, you should always check:

            • Are there any errors in the script?

            • Are the required modules loaded?

            • Is the correct executable used?

          2. Check your computer requirements upfront, and request the correct resources in your batch job script.

            • Number of requested cores

            • Amount of requested memory

            • Requested network type

3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

          4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

          5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory from which you submitted the job with cd $PBS_O_WORKDIR is usually the first thing to do. You will have your default environment, so don't forget to load the software with module load (see the minimal job script sketch after this list).

          7. Submit your job and wait (be patient) ...

          8. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

          9. The runtime is limited by the maximum walltime of the queues.

          10. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

          11. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

          12. And above all, do not hesitate to contact the HPC staff at hpc@ugent.be. We're here to help you.
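As a minimal sketch of points 2 and 6 above, a simple job script could look like this (the resource values and module name are just example choices, and ./my_program is a hypothetical executable; adjust them to your own job):

#!/bin/bash\n#PBS -N my_job\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=1:00:00\n\n# load the software you need (module name is an example)\nmodule load foss\n\n# go to the directory from which the job was submitted\ncd $PBS_O_WORKDIR\n\n# run your (hypothetical) executable\n./my_program\n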

          "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

All nodes in the HPC cluster are running the \"RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty)\" operating system, which is a specific version of Red Hat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the HPC first must be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). It also means that you first have to install all the required external software packages on the HPC.

          Most commonly used compilers are already pre-installed on the HPC and can be used straight away. Also, many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

          "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-hpc", "title": "Check the pre-installed software on the HPC", "text": "

          In order to check all the available modules and their version numbers, which are pre-installed on the HPC enter:

          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          Or when you want to check whether some specific software, some compiler or some application (e.g., MATLAB) is installed on the HPC.

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

If your required application is not available on the HPC, please contact any HPC team member. Be aware of potential \"License Costs\"; \"Open Source\" software is often preferred.

          "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

          To port a software-program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., Red Hat Enterprise Linux on our HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

          In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

          In some cases software, usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

          Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

          Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

          Porting your code to the RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty) platform is the responsibility of the end-user.

          "}, {"location": "compiling_your_software/#compiling-and-building-on-the-hpc", "title": "Compiling and building on the HPC", "text": "

          Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

          All the HPC nodes run the same version of the Operating System, i.e. RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

          A typical process looks like:

          1. Copy your software to the login-node of the HPC

          2. Start an interactive session on a compute node;

          3. Compile it;

          4. Test it locally;

          5. Generate your job scripts;

          6. Test it on the HPC

          7. Run it (in parallel);

          We assume you've copied your software to the HPC. The next step is to request your private compute node.

          $ qsub -I\nqsub: waiting for job 123456 to start\n
          "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

          Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

          We now list the directory and explore the contents of the \"hello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

          hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include <stdio.h>\n#include <unistd.h> /* needed for sleep() */\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\nreturn 0;\n}\n

          The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

          We first need to compile this C-file into an executable with the gcc-compiler.

First, check the command line options for \"gcc\" (the GNU C compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

$ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc40000 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc40000  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc40000  130 Sep 16 11:39 hello.pbs*\n

          A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

          $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

          It seems to work, now run it on the HPC

          qsub hello.pbs\n

          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          List the directory and explore the contents of the \"mpihello.c\" program:

$ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

          mpihello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char **argv)\n{\nint node, i;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\nreturn 0;\n}\n

          The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

Then, check the command line options for \"mpicc\" (the GNU C compiler with MPI extensions), compile, and list the contents of the directory again:

          mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

          A new file \"hello\" has been created. Note that this program has \"execute\" rights.

          Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the HPC.

          qsub mpihello.pbs\n
          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

          We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

We will compile this C/MPI file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

          module purge\nmodule load intel\n

          Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

          mpiicc -o mpihello mpihello.c\nls -l\n

          Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work, now run it on the HPC.

          qsub mpihello.pbs\n

          Note: The AUGent only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Below is an overview of the C, C++ and Fortran compilers.

Language | Sequential (GNU) | Sequential (Intel) | Parallel/MPI (GNU) | Parallel/MPI (Intel)
C | gcc | icc | mpicc | mpiicc
C++ | g++ | icpc | mpicxx | mpiicpc
Fortran | gfortran | ifort | mpif90 | mpiifort
"}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

          Before you can really start using the HPC clusters, there are several things you need to do or know:

          1. You need to log on to the cluster using an SSH client to one of the login nodes or by using the HPC web portal. This will give you command-line access. A standard web browser like Firefox or Chrome for the web portal will suffice.

          2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

          3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

          4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

          "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

          Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

          VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

          All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

• Use a VPN connection to connect to the UGent network (recommended). See https://helpdesk.ugent.be/vpn/en/ for more information.

• Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your UGent account.

            • While this web connection is active new SSH sessions can be started.

            • Active SSH sessions will remain active even when this web page is closed.

          • Contact your HPC support team (via hpc@ugent.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

          Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

          ssh_exchange_identification: read: Connection reset by peer\n
          "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

The remaining content in this chapter primarily focuses on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

          If you have any issues connecting to the HPC after you've followed these steps, see Issues connecting to login node to troubleshoot.

          "}, {"location": "connecting/#connect", "title": "Connect", "text": "

          Open up a terminal and enter the following command to connect to the HPC.

          ssh vsc40000@login.hpc.ugent.be\n

          Here, user vsc40000 wants to make a connection to the \"hpcugent\" cluster at UGent via the login node \"login.hpc.ugent.be\", so replace vsc40000 with your own VSC id in the above command.

          The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

          A possible error message you can get if you previously saved your private key somewhere else than the default location ($HOME/.ssh/id_rsa):

          Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

          In this case, use the -i option for the ssh command to specify the location of your private key. For example:

ssh -i /home/example/my_keys vsc40000@login.hpc.ugent.be\n

          Congratulations, you're on the HPC infrastructure now! To find out where you have landed you can print the current working directory:

          $ pwd\n/user/home/gent/vsc400/vsc40000\n

          Your new private home directory is \"/user/home/gent/vsc400/vsc40000\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the HPC.

          $ cd /apps/gent/tutorials\n$ ls\nIntro-HPC/\n

          This directory currently contains all training material for the Introduction to the HPC. More relevant training material to work with the HPC can always be added later in this directory.

You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands:

          As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

          $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

          This directory contains:

          1. This HPC Tutorial (in either a Mac, Linux or Windows version).

          2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

          cd examples\n

          Tip

          Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

          Tip

          For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

          The first action is to copy the contents of the HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          Go to your home directory, check your own private examples directory, ...\u00a0and start working.

          cd\nls -l\n

          Upon connecting you will see a login message containing your last login time stamp and a basic overview of the current cluster utilisation.

          Last login: Thu Mar 18 13:15:09 2021 from gligarha02.gastly.os\n\n STEVIN HPC-UGent infrastructure status on Mon, 19 Feb 2024 10:00:01\n      cluster         - full - free -  part - total - running - queued\n                        nodes  nodes   free   nodes   jobs      jobs\n -------------------------------------------------------------------------\n           skitty          39      0     26      68      1839     5588\n           joltik           6      0      1      10        29       18\n            doduo          22      0     75     128      1397    11933\n         accelgor           4      3      2       9        18        1\n          donphan           0      0     16      16        16       13\n          gallade           2      0      5      16        19      136\n\n\nFor a full view of the current loads and queues see:\nhttps://hpc.ugent.be/clusterstate/\nUpdates on current system status and planned maintenance can be found on https://www.ugent.be/hpc/en/infrastructure/status\n

          You can exit the connection at anytime by entering:

          $ exit\nlogout\nConnection to login.hpc.ugent.be closed.\n

          tip: Setting your Language right

          You may encounter a warning message similar to the following one during connecting:

          perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
          or any other error message complaining about the locale.

          This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

          LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

          A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.

          Note

If you try to set a non-supported locale, then it will be automatically set to the default. Currently the default is en_US.UTF-8 or en_US, depending on whether your original (non-supported) locale was UTF-8 or not.

          Open the .bashrc on your local machine with your favourite editor and add the following lines:

          $ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

          tip: vi

To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can easily exit vi by entering \"ESC :wq\". To exit vi without saving your changes, enter \"ESC :q!\".

          or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

          echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

          You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

          "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

          Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is by using an scp or sftp via the secure OpenSSH protocol. Linux ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

          "}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

          Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

          It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.
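A minimal sketch of creating such symlinks (run once on a login node of the HPC; the names scratch and data are just example choices, see the intro to Linux chapter referenced above):

ln -s $VSC_SCRATCH ~/scratch\nln -s $VSC_DATA ~/data\n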

          Open an additional terminal window and check that you're working on your local machine.

          $ hostname\n<local-machine-name>\n

          If you're still using the terminal that is connected to the HPC, close the connection by typing \"exit\" in the terminal window.

          For example, we will copy the (local) file \"localfile.txt\" to your home directory on the HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc40000\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc40000@login.hpc.ugent.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

          $ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc40000@login.hpc.ugent.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

          Connect to the HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

          $ pwd\n/user/home/gent/vsc400/vsc40000\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

          The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-Linux-Gent.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

          First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

          $ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc40000 Sep 11 09:53 intro-HPC-Linux-Gent.pdf\n

          Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

          $ scp vsc40000@login.hpc.ugent.be:./docs/intro-HPC-Linux-Gent.pdf .\nintro-HPC-Linux-Gent.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

          The file has been copied from the HPC to your local computer.

          It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

          scp -r dataset vsc40000@login.hpc.ugent.be:scratch\n

          If you don't use the -r option to copy a directory, you will run into the following error:

          $ scp dataset vsc40000@login.hpc.ugent.be:scratch\ndataset: not a regular file\n
          "}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

          The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

          The sftp command is the equivalent of the ftp command, with the difference that it uses the secure SSH protocol to connect to the clusters.

          One easy way of starting an sftp session is:

          sftp vsc40000@login.hpc.ugent.be\n

          Typical and popular commands inside an sftp session are:

          • cd ~/examples/fibo: Move to the examples/fibo subdirectory on the remote machine (i.e., the HPC).
          • ls: Get a list of the files in the current directory on the HPC.
          • get fibo.py: Copy the file \"fibo.py\" from the HPC to your local machine.
          • get tutorial/HPC.pdf: Copy the file \"HPC.pdf\" from the HPC, which is in the \"tutorial\" subdirectory.
          • lcd test: Move to the \"test\" subdirectory on your local machine.
          • lcd ..: Move up one level in the local directory.
          • lls: Get a local directory listing.
          • put test.py: Copy the local file test.py to the HPC.
          • put test1.py test2.py: Copy the local file test1.py to the HPC and rename it to test2.py.
          • bye: Quit the sftp session.
          • mget *.cc: Copy all the remote files with extension \".cc\" to the local directory.
          • mput *.h: Copy all the local files with extension \".h\" to the HPC."}, {"location": "connecting/#using-a-gui", "title": "Using a GUI", "text": "

          If you prefer a GUI to transfer files to and from the HPC, you can use your file browser. Open your file browser and press Ctrl+L.

          This should open up an address bar where you can enter a URL. Alternatively, look for the \"connect to server\" option in your file browser's menu.

          Enter: sftp://vsc40000@login.hpc.ugent.be/ and press enter.

          You should now be able to browse files on the HPC in your file browser.

          "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

          See the section on rsync in chapter 5 of the Linux intro manual.

          "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

          It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

          For instance, if you want to switch to the login node named gligar07.gastly.os, you can use the following command while you are connected to the gligar08.gastly.os login node on the HPC:

          ssh gligar07.gastly.os\n
          This is also possible the other way around.

          If you want to find out which login host you are connected to, you can use the hostname command.

          $ hostname\ngligar07.gastly.os\n$ ssh gligar08.gastly.os\n\n$ hostname\ngligar08.gastly.os\n

          Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or in other online sources):

          • screen
          • tmux
          "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

          It is possible to run automated cron scripts as a regular user on the UGent login nodes. Because of the high-availability setup, you should always add your cron scripts on the same login node to avoid duplicating cron jobs.

          In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC account (see section Connecting).

          Check whether any cron scripts are already set up on the current login node with:

          crontab -l\n

          At this point you can add or edit a cron script (with the vi editor) by running the command:

          crontab -e\n
          "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
           15 5 * * * ~/runscript.sh >& ~/job.out\n

          where runscript.sh has these lines in this example:

          runscript.sh
          #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

          In the previous example a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.
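
          For reference, the five scheduling fields in a crontab entry are, from left to right: minute, hour, day of month, month and day of week. As an illustration, an entry that runs the same runscript.sh every hour at minute 0 would look like:

          0 * * * * ~/runscript.sh >& ~/job.out\n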

          Please note that you should log in to the same login node to edit your previously created crontab tasks. If you end up on a different one, you can always jump from one login node to another with:

          ssh gligar07    # or gligar08\n
          "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

          You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

          EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

          "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

          For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

          • applying custom patches to the software that only you or your group are using

          • evaluating new software versions prior to requesting a central software installation

          • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

          "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

          Before you use EasyBuild, you need to configure it:

          "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

          This is where EasyBuild can find software sources:

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
          • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

          • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

          "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

          This is the directory where EasyBuild will build software. For good performance, this needs to be on a fast filesystem.

          export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

          On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.

          "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

          This is where EasyBuild will install the software (and accompanying modules) to.

          For example, to let it use $VSC_DATA/easybuild, use:

          export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

          Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

          Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

          To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.

          "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

          Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

          module load EasyBuild\n
          "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

          EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

          $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

          For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

          eb example-1.2.1-foss-2024a.eb --robot\n
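
          If you first want to see which (missing) dependencies EasyBuild would install, without actually building anything, you can add the -D (dry run) option; the easyconfig name below is the same example as above:

          eb example-1.2.1-foss-2024a.eb --robot -D\n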
          "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

          To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

          To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

          eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

          To try to install example v1.2.5 with a different compiler toolchain:

          eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
          "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

          To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

          "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

          To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

          module use $EASYBUILD_INSTALLPATH/modules/all\n

          It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or you want to load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux
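
          As a minimal sketch, assuming you use the configuration from the examples above, the relevant lines in your .bashrc could look like this:

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n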

          "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

          As HPC system administrators, we often observe that the HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

          Users often tend to run their jobs without specifying specific PBS job parameters. As such, their job will automatically use the default parameters, which are rarely the optimal ones. This can not only slow down the run time of your application, but also block HPC resources for other users.

          Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

          There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

          Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

          Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

          This chapter shows you how to measure:

          1. Walltime
          2. Memory usage
          3. CPU usage
          4. Disk (storage) needs
          5. Network bottlenecks

          First, we allocate a compute node and move to our relevant directory:

          qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

          One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

          The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

          Test the time command:

          $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

          It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

          It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

          The walltime can be specified in a job script as:

          #PBS -l walltime=3:00:00:00\n

          or on the command line

          qsub -l walltime=3:00:00:00\n

          It is recommended to always specify the walltime for a job.

          "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

          In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

          "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

          The first step is to be aware of the amount of free memory available on the machine. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the \"-m\" option to see the results expressed in megabytes and the \"-t\" option to get totals.

          $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

          It is important to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

          It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

          On the UGent clusters, there is no swap space available for jobs, you can only use physical memory, even though \"free\" will show swap.

          "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

          To monitor the memory consumption of a running application, you can use the \"top\" or the \"htop\" command.

          top

          provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

          htop

          is similar to top, but shows the CPU-utilisation for all the CPUs in the machine and allows to scroll the list vertically and horizontally to see all processes and their full command lines.

          "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

          Once you have gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to allow for a margin of about 10%.

          The maximum amount of physical memory used by the job per node can be specified in a job script as:

          #PBS -l mem=4gb\n

          or on the command line

          qsub -l mem=4gb\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

          Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required number of cores and nodes has been properly specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

          "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

          The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

          The /proc/cpuinfo file stores info about your CPU architecture, like the number of CPUs, threads, cores, CPU caches, CPU family, model and much more. So, if you want to detect how many cores are available on a specific machine:

          $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

          Or if you want to see it in a more readable format, execute:

          $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
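
          If you only need the total number of cores, you can also count them directly (a convenience shortcut using standard Linux tools; the output below matches the 8-core example above):

          $ nproc\n8\n$ grep -c ^processor /proc/cpuinfo\n8\n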

          Note

          Unless you want information of the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

          In order to specify the number of nodes and the number of processors per node in your job script, use:

          #PBS -l nodes=N:ppn=M\n

          or with equivalent parameters on the command line

          qsub -l nodes=N:ppn=M\n

          This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

          You can also use this statement in your job script:

          #PBS -l nodes=N:ppn=all\n

          to request all cores of a node, or

          #PBS -l nodes=N:ppn=half\n

          to request half of them.

          Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.
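
          Putting the walltime, memory and processor sections together, a sketch of a job script header could look like the one below; the resource values are placeholders taken from the earlier examples and should be adapted to your own application (my_program is a hypothetical executable):

          #!/bin/bash\n#PBS -l walltime=3:00:00:00\n#PBS -l mem=4gb\n#PBS -l nodes=1:ppn=8\n\ncd $PBS_O_WORKDIR\n./my_program\n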

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

          This could also be monitored with the htop command:

          htop\n
          Example output:
            1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

          The advantage of htop is that it shows you the cpu utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"cpu_eat\" program in 4 different terminals, and inspect the cpu utilisation per processor with monitor and htop.

          If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by htop found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the HPC) would appreciate it if you use the maximum of the CPU resources that are assigned to you and make sure that no CPUs in your node are left unutilised without reason.

          But how can you maximise?

          1. Configure your software. (e.g., to exactly use the available amount of processors in a node)
          2. Develop your parallel program in a smart way.
          3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
          4. Correct your request for CPUs in your job script.
          "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

          On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

          The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

          The load averages differ from CPU percentage in two significant ways:

          1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
          2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
          "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

          What is the \"optimal load\" rule of thumb?

          The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load shall be between 0.7 and 1.0 per processor.

          In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

          Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time, might be more than one per processor.

          The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

          1. When you are running computational intensive applications, one application per processor will generate the optimal load.
          2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

          The optimal number of applications on a machine could be empirically calculated by performing a number of stress tests, whilst checking the highest throughput. There is however no manner in the HPC at the moment to specify the maximum number of applications that shall run per core dynamically. The HPC scheduler will not launch more than one process per core.

          How the cores are spread out over CPUs does not matter as far as the load is concerned. Two quad-cores perform similarly to four dual-cores, which again perform similarly to eight single-cores. It's all eight cores for these purposes.

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

          The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

          The uptime command will show us the average load

          $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

          Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

          $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
          You can also see the load average in the htop output.

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the HPC) would appreciate it if you use the maximum of the CPU resources that are assigned to you and make sure that no CPUs in your node are left unutilised without reason.

          But how can you maximise?

          1. Profile your software to improve its performance.
          2. Configure your software (e.g., to exactly use the available amount of processors in a node).
          3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
          4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
          5. Correct your request for CPUs in your job script.

          And then check again.

          "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

          Some programs generate intermediate or output files, the size of which may also be a useful metric.

          Remember that your available disk space on the HPC online storage is limited, and that you have environment variables available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

          It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to section How much disk space do I get? on Quotas to check your quota and tools to find which files consumed the \"quota\".
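
          To get a quick idea of how much space your generated files take up, you can use standard Linux tools such as du and ls (output_dir is just an illustrative directory name):

          du -sh output_dir    # total size of the directory\nls -lh output_dir    # sizes of the individual files\n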

          Several actions can be taken, to avoid storage problems:

          1. Be aware of all the files that are generated by your program. Also check out the hidden files.
          2. Check your quota consumption regularly.
          3. Clean up your files regularly.
          4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, you can move your files to the $VSC_DATA directories.
          5. Make sure your programs clean up their temporary files after execution.
          6. Move your output results to your own computer regularly.
          7. Anyone can request more disk space from the HPC staff, but you will have to duly justify your request.
          "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

          Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that they lose a lot of time on inter-process communication.

          Whenever your application uses a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high-bandwidth, low-latency network that enables large parallel jobs to run as efficiently as possible.

          The parameter to add in your job script would be:

          #PBS -l ib\n

          If, for some other reason, a user is fine with the gigabit Ethernet network, they can specify:

          #PBS -l gbe\n
          "}, {"location": "getting_started/", "title": "Getting Started", "text": "

          Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the HPC-UGent infrastructure and submitting your very first job. We'll also walk you through the process step by step using a practical example.

          In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

          Before proceeding, read the introduction to HPC to gain an understanding of the HPC-UGent infrastructure and related terminology.

          "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

          To get access to the HPC-UGent infrastructure, visit Getting an HPC Account.

          If you have not used Linux before, now would be a good time to follow our Linux Tutorial.

          "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
          1. Connect to the login nodes
          2. Transfer your files to the HPC-UGent infrastructure
          3. Optional: compile your code and test it
          4. Create a job script and submit your job
          5. Wait for job to be executed
          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

          "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

          There are two options to connect

          • Using a terminal to connect via SSH (for power users) (see First Time connection to the HPC-UGent infrastructure)
          • Using the web portal

          Since your operating system is Linux, it is recommended to use the ssh command in a terminal to get the most flexibility.

          Assuming you have already generated SSH keys in the previous step (Getting Access), and that they are in a default location, you should now be able to login by running the following command:

          ssh vsc40000@login.hpc.ugent.be\n

          Use your own VSC account id

          Replace vsc40000 with your VSC account id (see https://account.vscentrum.be)

          Tip

          You can also still use the web portal (see shell access on web portal)

          Info

          When having problems see the connection issues section on the troubleshooting page.

          "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

          Now that you can login, it is time to transfer files from your local computer to your home directory on the HPC-UGent infrastructure.

          Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

          On your local machine you can run:

          curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

          Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

          scp tensorflow_mnist.py run.sh vsc40000@login.hpc.ugent.be:~\n

          ssh  vsc40000@login.hpc.ugent.be\n

          Use your own VSC account id

          Replace vsc40000 with your VSC account id (see https://account.vscentrum.be)

          Info

          For more information about transferring files or scp, see transfer files from/to hpc.

          When running ls in your session on the HPC-UGent infrastructure, you should see the two files listed in your home directory (~):

          $ ls ~\nrun.sh tensorflow_mnist.py\n

          When you do not see these files, make sure you uploaded the files to your home directory.

          "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

          Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

          A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

          Our job script looks like this:

          run.sh

          #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
          As you can see this job script will run the Python script named tensorflow_mnist.py.

          The jobs you submit are by default executed on cluster/doduo; you can swap to another cluster by issuing the following command:

          module swap cluster/donphan\n

          Tip

          When submitting jobs that require a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

          To get a list of all clusters and their hardware, see https://www.ugent.be/hpc/en/infrastructure.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

          $ qsub run.sh\n123456\n

          This command returns a job identifier (123456) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.
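
          If you realise you made a mistake and want to remove the job from the queue again, you can use this job identifier with the qdel command (123456 is the example job ID from above):

          qdel 123456\n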

          Make sure you understand what the module command does

          Note that the module commands only modify environment variables. For instance, running module swap cluster/donphan will update your shell environment so that qsub submits a job to the donphan cluster, but our active shell session is still running on the login node.

          It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are being executed: they will still be run on the login node you are on.

          When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like donphan).

          For detailed information about module commands, read the running batch jobs chapter.

          "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

          Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

          You can get an overview of the active jobs using the qstat command:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:00  Q donphan\n

          Eventually, after entering qstat again you should see that your job has started running:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:01  R donphan\n

          If you don't see your job in the output of the qstat command anymore, your job has likely completed.

          Read this section on how to interpret the output.

          "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

          When your job finishes it generates 2 output files:

          • One for normal output messages (stdout output channel).
          • One for warning and error messages (stderr output channel).

          By default located in the directory where you issued qsub.

          Info

          For more information about the stdout and stderr output channels, see this section.

          In our example when running ls in the current directory you should see 2 new files:

          • run.sh.o123456, containing normal output messages produced by job 123456;
          • run.sh.e123456, containing errors and warnings produced by job 123456.

          Info

          run.sh.e123456 should be empty (no errors or warnings).

          Use your own job ID

          Replace 123456 with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

          When examining the contents of run.sh.o123456 you will see something like this:

          Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

          Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

          Warning

          When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

          For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

          "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
          • Running interactive jobs
          • Running jobs with input/output data
          • Multi core jobs/Parallel Computing
          • Interactive and debug cluster

          For more examples see Program examples and Job script examples

          "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

          module swap cluster/joltik\n

          To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

          module swap cluster/accelgor\n

          Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

          "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

          To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

          Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@ugent.be.

          "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

          See https://www.ugent.be/hpc/en/infrastructure.

          "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

          There are 2 main ways to ask for GPUs as part of a job:

          • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z form is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want full control or in multinode cases like MPI jobs. If you just use -l gpus without specifying the number of GPUs, you get 1 GPU by default. A small example is shown after this list.

          • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.
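
          For instance, a minimal sketch of submitting a hypothetical job script gpu_job.sh with the node-property notation from the first option above, requesting a single GPU and a quarter of the cores of one node, would be:

          qsub -l nodes=1:ppn=quarter:gpus=1 -l walltime=5:0:0 gpu_job.sh\n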

          Some background:

          • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

          • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

          "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

          Some important attention points:

          • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

          • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

          • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e. it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

          • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

          "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

          Use module avail to check for centrally installed software.

          The subsections below only cover a couple of installed software packages, more are available.

          "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

          Please consult module avail GROMACS for a list of installed versions.

          "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

          Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

          Please consult module avail Horovod for a list of installed versions.

          Horovod supports TensorFlow, Keras, PyTorch and MxNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; it is unclear whether it handles placement and other aspects correctly.)

          At least for simple TensorFlow benchmarks, it looks like Horovod is a bit faster than the usual auto-detected multi-GPU TensorFlow without Horovod, but it comes at the cost of the code modifications needed to use Horovod.

          "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

          Please consult module avail PyTorch for a list of installed versions.

          "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

          Please consult module avail TensorFlow for a list of installed versions.

          Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

          "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
          #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
          "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

          Please consult module avail AlphaFold for a list of installed versions.

          For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

          "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

          In case of questions or problems, please contact the HPC-UGent team via hpc@ugent.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

          "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

          The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

          This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

          Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor) jobs on this cluster should normally start more or less immediately. The tradeoff is that performance must not be an issue for the submitted jobs. This means that typical workloads for this cluster should be limited to:

          • Interactive jobs (see chapter\u00a0Running interactive jobs)

          • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

          • Jobs requiring few resources

          • Debugging programs

          • Testing and debugging job scripts

          "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

          module swap cluster/donphan\n

          Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

          "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

          Some limits are in place for this cluster:

          • each user may have at most 5 jobs in the queue (both running and waiting to run);

          • at most 3 jobs per user can be running at the same time;

          • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

          In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

          Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

          "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

          Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

          All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

          "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

          \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

          While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

          A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

          The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

          Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

          Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

          "}, {"location": "introduction/#what-is-the-hpc-ugent-infrastructure", "title": "What is the HPC-UGent infrastructure?", "text": "

          The HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

          The HPC-UGent infrastructure relies on parallel-processing technology to offer UGent researchers an extremely fast solution for all their data processing needs.

          The HPC currently consists of:

          a set of different compute clusters. For an up-to-date list of all clusters and their hardware, see https://vscdocumentation.readthedocs.io/en/latest/gent/tier2_hardware.html.

          Job management and job scheduling are performed by Slurm with a Torque frontend. We advise users to adhere to the Torque commands mentioned in this document.
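
          In practice this means you interact with the scheduler using the familiar Torque-style commands, for example (the job script name and job ID below are placeholders):

          qsub jobscript.sh     # submit a job\nqstat                 # check the status of your jobs\nqdel 123456           # cancel a job\n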

          "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

          The HPC infrastructure is not a magic computer that automatically:

          1. runs your PC-applications much faster for bigger problems;

          2. develops your applications;

          3. solves your bugs;

          4. does your thinking;

          5. ...

          6. allows you to play games even faster.

          The HPC does not replace your desktop computer.

          "}, {"location": "introduction/#is-the-hpc-a-solution-for-my-computational-needs", "title": "Is the HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

          Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

          It is also possible to run programs on the HPC that require user interaction (pushing buttons, entering input data, etc.). Although technically possible, the use of the HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the HPC staff can unveil whether the HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

          "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

          In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

          Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

          "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

          Parallel computing is a form of computation in which many calculations are carried out simultaneously. It is based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

          Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

          The two parallel programming paradigms most used in HPC are:

          • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

          • MPI for distributed memory systems (multiprocessing): on multiple nodes

          Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

          "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

          Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

          It is perfectly possible to also run purely sequential programs on the HPC.

          Running your sequential programs on the most modern and fastest computers in the HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.
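
          A minimal sketch of this idea, assuming a hypothetical sequential program my_prog and numbered input files, could bundle a few independent instances in a single job script:

          #!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=01:00:00\ncd $PBS_O_WORKDIR\n# one instance per requested core; program and file names are placeholders\n./my_prog input_1.dat > output_1.txt &\n./my_prog input_2.dat > output_2.txt &\n./my_prog input_3.dat > output_3.txt &\n./my_prog input_4.dat > output_4.txt &\nwait  # wait until all background instances have finished\n

          For larger parameter sweeps, the Worker framework described in the Multi-job submission chapter is the more robust approach.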

          "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

          You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

          For the most common programming languages, a compiler is available on RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). Supported and common programming languages on the HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

          Supported and commonly used compilers are GCC and Intel.
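
          As a minimal illustration (hello.c is a placeholder source file), compiling the same program with either compiler family could look like this:

          module load GCC\ngcc -O2 -o hello hello.c\n\n# or, using the Intel compiler\nmodule load intel\nicc -O2 -o hello hello.c\n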

          Additional software can be installed \"on demand\". Please contact the HPC staff to see whether the HPC can handle your specific requirements.

          "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

          All nodes in the HPC cluster run under RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty), which is a specific version of Red Hat Enterprise Linux. This means that all programs (executables) should be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

          Users can connect from any computer in the UGent network to the HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the HPC.

          A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

          "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

          A typical workflow looks like:

          1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

          2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

          3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

          4. Create a job script and submit your job (see Running batch jobs)

          5. Get some coffee and be patient:

            1. Your job gets into the queue

            2. Your job gets executed

            3. Your job finishes

          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.
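
          Put together, a minimal end-to-end session might look like the sketch below (the user name, login node address and file names are placeholders):

          # on your own machine: connect and transfer your input\nssh vsc40000@<login-node>\nscp input.dat vsc40000@<login-node>:~/project/\n\n# on the cluster: submit the job and follow it up\nqsub jobscript.sh\nqstat\n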

          "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

          When you think that the HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the HPC cluster.

          Do not hesitate to contact the HPC staff for any help.

          1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

          "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

          This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

          • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

          • -m/-M: the -m option will send emails to your email address registered with VSC. Use the -M option only if you want the emails to be sent to a different address.

          • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

          • To use a situational parameter, remove one '#' at the beginning of the line.

          simple_jobscript.sh
          #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
          "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

          Here's an example of a single-core job script:

          single_core.sh
          #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
          1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

          2. A module for Python 3.6 is loaded, see also section Modules.

          3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

          4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

          5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a file with a unique name in $VSC_DATA. For a list of possible storage locations, see subsection Pre-defined user directories.

          "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

          Here's an example of a multi-core job script that uses mympirun:

          multi_core.sh
          #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

          An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

          "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

          If you want to run a job but are not sure it will finish before the walltime runs out, and you want to copy data back before that happens, you have to stop the main command before the walltime expires and copy the data back afterwards.

          This can be done with the timeout command. This command sets a limit on how long a program may run; when this limit is exceeded, it kills the program. Here's an example job script using timeout:

          timeout.sh
          #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

          The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

          example_program.sh
          #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
          "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

          A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plaintext. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs make it a useful tool for data analysis, machine learning and educational purposes.

          "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

          Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

          After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

          When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

          and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

          This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

          "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

          A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

          To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters > >_login Shell Access.

          We can see all available versions of the SciPy module by using module avail SciPy-bundle:

          $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

          Not all modules will work for every notebook, we need to use the one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

          Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that the module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

          $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

          The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

          It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
          This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

          If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

          Now that we have found the right module for the notebook, add module load <module_name> to the Custom code field when creating the notebook, and you can make use of the packages within that notebook.
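
          For the example above, the Custom code field would simply contain:

          module load SciPy-bundle/2023.11-gfbf-2023b\n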

          "}, {"location": "known_issues/", "title": "Known issues", "text": "

          This page provides details on a couple of known problems, and the workarounds that are available for them.

          If you have any questions related to these issues, please contact the HPC-UGent team.

          • Operation not permitted error for MPI applications
          "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

          When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

          Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

          This error means that an internal problem has occurred in OpenMPI.

          "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

          This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

          It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

          "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

          We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

          "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

          A workaround has been implemented in mympirun (version 5.4.0).

          Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

          module load vsc-mympirun\n

          and launch your MPI application using the mympirun command.

          For more information, see the mympirun documentation.

          "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

          If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

          export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
          "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

          We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

          "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

          There are two important motivations to engage in parallel programming.

          1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

          2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

          On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that in principle you can split up your computations into groups and run each group on its own core.

          There are multiple different ways to achieve parallel programming. The list below gives a (non-exhaustive) overview of problem-independent approaches to parallel programming. In addition there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

          • Raw threads (pthreads, boost::threading, ...). Language bindings: threading libraries are available for all common programming languages. Limitations: threads are limited to shared memory systems; they are more often used on single-node systems than for HPC, and thread management is hard.

          • OpenMP. Language bindings: Fortran/C/C++. Limitations: limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelized by simple insertion of compiler directives; under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelize the workload on each node and MPI (see below) for communication between nodes.

          • Lightweight threads with clever scheduling (Intel TBB, Intel Cilk Plus). Language bindings: C/C++. Limitations: limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler, enabling the programmer to focus on the parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes.

          • MPI. Language bindings: Fortran/C/C++, Python. Limitations: applies to both distributed and shared memory systems; cooperation between different nodes or cores is managed by explicit calls to library routines handling the communication.

          • Global Arrays library. Language bindings: C/C++, Python. Limitations: mimics a global address space on distributed memory systems, by distributing arrays over many nodes and using one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

          Tip

          You can request more nodes/cores by adding the following line to your run script.

          #PBS -l nodes=2:ppn=10\n
          This queues a job that claims 2 nodes with 10 cores per node (20 cores in total).

          Warning

          Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

          Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

          An advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

          Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

          Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

          Go to the example directory:

          cd ~/examples/Multi-core-jobs-Parallel-Computing\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          Study the example first:

          T_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 1;\n}\n

          And compile it (whilst including the thread library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Now, run it on the cluster and check the output:

          $ qsub T_hello.pbs\n123456\n$ more T_hello.pbs.o123456\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Tip

          If you plan engaging in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

          OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

          An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

          Here is the general code structure of an OpenMP program:

          #include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

          "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

          By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

          "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

          Parallelising for loops is really simple (see code below). By default, loop iteration counters in OpenMP loop constructs (in this case the i variable) in the for loop are set to private variables.

          omp1.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

          Now run it in the cluster and check the result again.

          $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
          "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

          Using OpenMP you can specify something called a \"critical\" section of code. This is code that is performed by all threads, but is only performed one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, and you don't have to worry about things like other threads writing to that global variable at the same time (a collision).

          omp2.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

          Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). Indeed we used this paradigm in the code example above, where we used the \"critical code\" directive to accomplish this. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to more easily implement this.

          omp3.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

          Now run it in the cluster and check the result again.

          $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

          There are a host of other directives you can issue using OpenMP.

          Some other clauses of interest are:

          1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

          2. nowait: threads will not wait until everybody is finished

          3. schedule(type, chunk) allows you to specify how tasks are spawned out to threads in a for loop. There are three types of scheduling you can specify

          4. if: allows you to parallelise only if a certain condition is met

          5. ...\u00a0and a host of others

          Tip

          If you plan engaging in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation, 2005.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

          The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

          In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

          The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

          The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

          One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

          Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

          Study the MPI-programme and the PBS-file:

          mpi_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
          mpi_hello.pbs
          #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

          and compile it:

          $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

          mpiicc is a wrapper around the Intel C compiler icc to compile MPI programs (see the chapter on compilation for details).

          Run the parallel program:

          $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc40000 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc40000 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc40000    0 Sep 16 14:22 mpi_hello.o123456\n-rw------- 1 vsc40000  697 Sep 16 14:22 mpi_hello.o123456\n-rw-r--r-- 1 vsc40000  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o123456\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

          The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process has its own rank, the total number of processes in the world, and the ability to communicate between them either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

          MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without compilation for each size variation, although runtime decisions might vary depending on that absolute amount of concurrency available.

          Tip

          mpirun does not always do the optimal core pinning and requires a few extra arguments to be as efficient as possible on a given system. At Ghent we have a wrapper around mpirun called mympirun. See the mympirun documentation for more information.

          You will generally just start an MPI program on the cluster by using mympirun instead of mpirun -n <nr of cores> <--other settings> <--other optimisations>.
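
          In a job script that typically boils down to something like this (reusing the MPI example program compiled above):

          module load vsc-mympirun\nmympirun ./mpi_hello\n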

          Tip

          If you plan engaging in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

          "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

          A frequently occurring characteristic of scientific computing is its focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

          Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or (ii) different input files.

          These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The users want to run their job once for each instance of the parameter values.

          One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs. Those huge amounts of small jobs will create a lot of overhead, and can slow down the whole cluster. It would be better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but is not supported by Moab, the current scheduler.

          The \"Worker framework\" has been developed to address this issue.

          It can handle many small jobs determined by:

          parameter variations

          i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

          job arrays

          i.e., each individual job got a unique numeric identifier.

          Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

          However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

          "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/par_sweep\n

          Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

          $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

          For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

          par_sweep/weather
          #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

          A job script that would run this as a job for the first parameter instance (p01) would then look like:

          par_sweep/weather_p01.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

          When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

          To submit the job, the user would use:

           $ qsub weather_p01.pbs\n
          However, the user wants to run this program for many parameter instances, e.g., they want to run the program on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, an RDBMS, or just by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

          $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

          It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.

          In order to make our PBS generic, the PBS file can be modified as follows:

          par_sweep/weather.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

          Note that:

          1. the parameter values 20, 1.05, 4.3 have been replaced by variables $temperature, $pressure and $volume respectively, which were being specified on the first line of the \"data.csv\" file;

          2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

          3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

          The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., 4 hours to be on the safe side.

          The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

          $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n123456\n

          Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

          Warning

          When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

          module swap env/slurm/donphan\n

          instead of

          module swap cluster/donphan\n
          We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

          "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/job_array\n

          As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

          The following bash script would submit these jobs all one by one:

          #!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output_$i -i input_$i myprog.pbs\ndone\n

          As mentioned before, submitting jobs one by one like this puts a considerable burden on the job scheduler.

          Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

          Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

          The details are

          1. a job is submitted for each number in the range;

          2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

          3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

          The job could have been submitted using:

          qsub -t 1-100 my_prog.pbs\n

          The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

          To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

          A typical job script for use with job arrays would look like this:

          job_array/job_array.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

          Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

          $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file #99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

          For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory as output_1.dat, output_2.dat, ..., output_100.dat files.

          job_array/test_set
          #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

          Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

          job_array/test_set.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          Note that

          1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

          2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

          The job is now submitted as follows:

          $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n123456\n

          The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

          Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

          $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n123456  test_set.pbs  vsc40000          0 Q\n

          And you can now check the generated output files:

          $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
          "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

          Often, an embarrassingly parallel computation can be abstracted to three simple steps:

          1. a preparation phase in which the data is split up into smaller, more manageable chunks;

          2. on these chunks, the same algorithm is applied independently (these are the work items); and

          3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

          The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

          cd ~/examples/Multi-job-submission/map_reduce\n

          The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

          First study the scripts:

          map_reduce/pre.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
          map_reduce/post.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

          Then one can submit a MapReduce style job as follows:

          $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n123456\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

          Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

          "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

          The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

          The \"Worker Framework\" will be effective when

          1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

          2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

          "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

          Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log123456, assuming the job's ID is 123456. To keep an eye on the progress, one can use:

          tail -f run.pbs.log123456\n

          Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

          watch -n 60 wsummarize run.pbs.log123456\n

          This will summarise the log file every 60 seconds.

          "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

          Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 weather -t $temperature  -p $pressure  -v $volume\n

          Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
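
          For example (a sketch; the column name timelimit is just an illustrative choice, any valid variable name will do), the data file could get an extra column:

          $ more data.csv\ntemperature, pressure, volume, timelimit\n293, 1.0e5, 107, 00:20:00\n294, 1.0e5, 106, 00:30:00\n...\n

          and the corresponding line in the PBS file would then become:

          timedrun -t $timelimit weather -t $temperature  -p $pressure  -v $volume\n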

          Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

          "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

          Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"123456\".

          wresume -jobid 123456\n

          This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

          wresume -l walltime=1:30:00 -jobid 123456\n

          Work items may fail to complete successfully for a variety of reasons, e.g., a missing data file, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate at all, whether successfully or with a failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

          wresume -jobid 123456 -retry\n

          By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

          "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

          This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

          $ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
          "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

          When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

          To check for the available versions of worker, use the following command:

          $ module avail worker\n
          1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

          "}, {"location": "mympirun/", "title": "Mympirun", "text": "

          mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

          In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

          "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

          Before using mympirun, we first need to load its module:

          module load vsc-mympirun\n

          As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

          The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

          For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.
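
          Putting both steps together, a minimal session could look like this (a sketch; the program name example and its argument are taken from the text above):

          module load vsc-mympirun\nmympirun example 5\n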

          "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

          There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

          By default, mympirun starts one process per core on every node you assigned. So if you assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

          "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

          This is the most commonly used option for controlling the number of processes.

          The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

          $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpi_hello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
          "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

          There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses double the number of processes it normally would; and --multi, which does the same as --double, but takes a multiplier (instead of the implied factor 2 with --double).
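
          A few usage sketches (the option names are taken from the paragraph above; the numeric values are arbitrary examples):

          # start exactly 4 processes in total\nmympirun --universe 4 ./mpi_hello\n\n# start double the number of processes mympirun would start by default\nmympirun --double ./mpi_hello\n\n# start 3 times the number of processes mympirun would start by default\nmympirun --multi 3 ./mpi_hello\n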

          See vsc-mympirun README for a detailed explanation of these options.

          "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

          You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

          $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
          "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

          In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC HPC infrastructure.

          "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

          There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

          • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

            • see also http://openfoam.com/history/
          • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

            • see also https://openfoam.org/download/history/
          • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

          Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

          "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

          The best practices outlined here focus specifically on the use of OpenFOAM on the VSC HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

          • OpenFOAM websites:

            • https://openfoam.com

            • https://openfoam.org

            • http://wikki.gridcore.se/foam-extend

          • OpenFOAM user guides:

            • https://www.openfoam.com/documentation/user-guide

            • https://cfd.direct/openfoam/user-guide/

          • OpenFOAM C++ source code guide: https://cpp.openfoam.org

          • tutorials: https://wiki.openfoam.com/Tutorials

          • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

          Other useful OpenFOAM documentation:

          • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

          • http://www.dicat.unige.it/guerrero/openfoam.html

          "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

          To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

          "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

          First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

          $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

          To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

          To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

          module load OpenFOAM/11-foss-2023a\n
          "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

          OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

          source $FOAM_BASH\n
          "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

          If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

          source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

          Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
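
          Putting the environment preparation together, the start of a shell session or job script could look like this (a sketch, reusing the module from the example above):

          module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n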

          "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

          If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

          unset FOAM_SIGFPE\n

          Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise terminate the simulation. It does not prevent the illegal operations themselves (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are still occurring.

          As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

          "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

          The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

          • generate the mesh;

          • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

          After running the simulation, some post-processing steps are typically performed:

          • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

          • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

          Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job that runs the actual simulation (on the HPC infrastructure or elsewhere), or as part of the job that runs the OpenFOAM simulation itself.

          Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

          One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

          For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.
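
          As a compact sketch of these workflow steps inside a single job (assuming a parallel run with the interFoam solver, as in the example job script further below):

          # pre-processing\nblockMesh\ndecomposePar\n# run the simulation in parallel (see the next sections for mympirun and -parallel)\nmympirun interFoam -parallel\n# post-processing\nreconstructPar\n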

          "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

          For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

          "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

          When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

          You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.

          "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

          It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

          See Basic usage for how to get started with mympirun.

          To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

          export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

          Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
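
          For example (a sketch, using interFoam as the solver): where a tutorial tells you to run mpirun -np 16 interFoam -parallel, you would instead run:

          export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\nmympirun interFoam -parallel\n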

          "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

          To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

          Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

          number of processor directories = 4 is not equal to the number of processors = 16\n

          In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

          • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

          • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

          See Controlling number of processes to control the number of processes mympirun will start.

          This is interesting if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
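
          For reference, the relevant entries in system/decomposeParDict look like this (an excerpt; the values shown are only an example):

          numberOfSubdomains  16;\n\nmethod              scotch;\n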

          To visualise the processor domains, use the following command:

          mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

          and then load the VTK files generated in the VTK folder into ParaView.

          "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

          OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

          Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).

          • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc. keywords (see the sketch after this list);

          • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

          • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

          • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid that OpenFOAM re-reads each of the system/*Dict files at every time step;

          • if the results per individual time step are large, consider setting writeCompression to true;
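
          A sketch of how these guidelines translate into system/controlDict entries (the keyword names come from the list above; the values shown are only an example):

          writeControl      timeStep;\nwriteInterval     100;      // only write results every 100 time steps\npurgeWrite        2;        // only keep the results of the last 2 writes\nwriteCompression  true;     // compress the results that are written out\nrunTimeModifiable false;    // do not re-read system/*Dict files at every time step\n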

          For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

          For large parallel OpenFOAM simulations on the UGent Tier-2 clusters, consider using the alternative shared scratch filesystem $VSC_SCRATCH_ARCANINE (see Pre-defined user directories).

          These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple of dozen of processor cores.

          "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

          See https://cfd.direct/openfoam/user-guide/compiling-applications/.

          "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

          Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

          OpenFOAM_damBreak.sh
          #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not available on victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
          "}, {"location": "program_examples/", "title": "Program examples", "text": "

          If you have not done so already, copy our examples to your home directory by running the following command:

           cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

          Go to our examples:

          cd ~/examples/Program-examples\n

          Here, we have put together a number of examples for your convenience. We made an effort to put comments inside the source files, so the source code should be self-explanatory.

          1. 01_Python

          2. 02_C_C++

          3. 03_Matlab

          4. 04_MPI_C

          5. 05a_OMP_C

          6. 05b_OMP_FORTRAN

          7. 06_NWChem

          8. 07_Wien2k

          9. 08_Gaussian

          10. 09_Fortran

          11. 10_PQS

          The two OMP directories above (05a_OMP_C and 05b_OMP_FORTRAN) contain the following examples:

          C Files / Fortran Files: Description\nomp_hello.c / omp_hello.f: Hello world\nomp_workshare1.c / omp_workshare1.f: Loop work-sharing\nomp_workshare2.c / omp_workshare2.f: Sections work-sharing\nomp_reduction.c / omp_reduction.f: Combined parallel loop reduction\nomp_orphan.c / omp_orphan.f: Orphaned parallel loop reduction\nomp_mm.c / omp_mm.f: Matrix multiply\nomp_getEnvInfo.c / omp_getEnvInfo.f: Get and print environment information\nomp_bug* / omp_bug*: Programs with bugs and their solution\n

          Compile by any of the following commands:

          C:\n  icc -openmp omp_hello.c -o hello\n  pgcc -mp omp_hello.c -o hello\n  gcc -fopenmp omp_hello.c -o hello\n\nFortran:\n  ifort -openmp omp_hello.f -o hello\n  pgf90 -mp omp_hello.f -o hello\n  gfortran -fopenmp omp_hello.f -o hello\n

          Feel free to explore the examples.

          "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

          Remember to substitute the usernames, login nodes, file names, etc. with your own.

          Login\n\nLogin: ssh vsc40000@login.hpc.ugent.be\nWhere am I?: hostname\nCopy to HPC: scp foo.txt vsc40000@login.hpc.ugent.be:\nCopy from HPC: scp vsc40000@login.hpc.ugent.be:foo.txt\nSetup ftp session: sftp vsc40000@login.hpc.ugent.be\n\nModules\n\nList all available modules: module avail\nList loaded modules: module list\nLoad module: module load example\nUnload module: module unload example\nUnload all modules: module purge\nHelp on use of module: module help\n\nCommand: Description\n\nqsub script.pbs: Submit job with job script script.pbs\nqstat 12345: Status of job with ID 12345\nqstat -n 12345: Show compute node of job with ID 12345\nqdel 12345: Delete job with ID 12345\nqstat: Status of all your jobs\nqstat -na: Detailed status of your jobs + a list of nodes they are running on\nqsub -I: Submit Interactive job\n\nDisk quota\n\nCheck your disk quota: see https://account.vscentrum.be\nDisk usage in current directory (.): du -h\n\nWorker Framework\n\nLoad worker module: module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/)\nSubmit parameter sweep: wsub -batch weather.pbs -data data.csv\nSubmit job array: wsub -t 1-100 -batch test_set.pbs\nSubmit job array with prolog and epilog: wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\n"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

          Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

          "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

          Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

          This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

          It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

          "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

          As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS that is different from that of the login node you are on.

          For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

          $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

          Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

          When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

          "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

          To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

          This includes (per user):

          • max. of 2 CPU cores in use
          • max. 8 GB of memory in use

          For more intensive tasks you can use the interactive and debug clusters through the web portal.

          "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

          The migration to RHEL 9 as operating system should not impact your workflow, everything will basically be working as it did before (incl. job submission, etc.).

          However, there will be impact on the availability of software that is made available via modules.

          Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

          This includes all software installations on top of a compiler toolchain that is older than:

          • GCC(core)/12.3.0
          • foss/2023a
          • intel/2023a
          • gompi/2023a
          • iimpi/2023a
          • gfbf/2023a

          (or another toolchain with a year-based version older than 2023a)

          The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

          foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

          If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

          It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will provide more RHEL 9 nodes on other clusters to test on soon.

          "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

          We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

          cluster / migration start / migration completed on\nskitty / Monday 30 September 2024 / -\njoltik / October 2024 / -\naccelgor / November 2024 / -\ngallade / December 2024 / -\ndonphan / February 2025 / -\ndoduo (default cluster) / February 2025 / -\nlogin nodes switch / February 2025 / -\n

          Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to RHEL 9 login nodes will be done at the same time.

          We will keep this page up to date when more specific dates have been planned.

          Warning

          This planning is subject to change; some clusters may get migrated later than originally planned.

          Please check back regularly.

          "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

          If you have any questions related to the migration to the RHEL 9 operating system, please contact the HPC-UGent team.

          "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

          In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

          When you connect to the HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a resource manager (TORQUE), which together decide when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly; this is only allowed on nodes where you have a job running. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the HPC the entire time.

          The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

          "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

          Software installation and maintenance on a HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the HPC, which is able to easily activate or deactivate the software packages that you require for your program execution.

          "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

          The program environment on the HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

          All the software packages that are installed on the HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

          "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

          In order to administer the active software and their environment variables, the module system has been developed, which:

          1. Activates or deactivates software packages and their dependencies.

          2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

          3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

          4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

          5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

          This is all managed with the module command, which is explained in the next sections.

          There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

          "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

          A large number of software packages are installed on the HPC clusters. A list of all currently available software can be obtained by typing:

          module available\n

          It's also possible to execute module av or module avail, these are shorter to type and will do the same thing.

          This will give some output such as:

          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          Or when you want to check whether some specific software, some compiler or some application (e.g., MATLAB) is installed on the HPC.

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

          This gives a full list of software packages that can be loaded.

          The casing of module names is important: lowercase and uppercase letters matter in module names.

          "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

          The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

          Therefore, the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, an MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

          E.g., foss/2024a is the first version of the foss toolchain in 2024.

          The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

          "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

          To \"activate\" a software package, you load the corresponding module file using the module load command:

          module load example\n

          This will load the most recent version of example.

          For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

          However, you should specify a particular version to avoid surprises when newer versions are installed:

          module load secondexample/2.7-intel-2016b\n

          The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

          Modules need not be loaded one by one; the two module load commands can be combined as follows:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

          "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

          Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

          $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          You can also just use the ml command without arguments to list loaded modules.

          It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

          "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

          To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

          $ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

          To unload the secondexample module, you can also use ml -secondexample.

          Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

          "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

          In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

          module purge\n
          This is always safe: the cluster module (the module that specifies which cluster jobs will get submitted to) will not be unloaded (because it's a so-called \"sticky\" module).

          "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

          Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

          Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

          module load example\n

          rather than

          module load example/1.2.3\n

          Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

          Consider the following example modules:

          $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

          Let's now generate a version conflict with the example module, and see what happens.

          $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

          Note: A module swap command combines the appropriate module unload and module load commands.

          "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

          With the module spider command, you can search for modules:

          $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

          It's also possible to get detailed information about a specific module:

          $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \n\n    You will need to load all module(s) on any one of the lines below before the \"example/1.2.3\" module is available to load.\n\n        cluster/accelgor\n        cluster/doduo \n        cluster/donphan\n        cluster/gallade\n        cluster/joltik \n        cluster/skitty\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
          "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

          To get a list of all possible commands, type:

          module help\n

          Or to get more information about one specific module package:

          $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
          "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

          If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

          In each module command shown below, you can replace module with ml.

          First, load all modules you want to include in the collection:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          Now store it in a collection using module save. In this example, the collection is named my-collection.

          module save my-collection\n

          Later, for example in a jobscript or a new session, you can load all these modules with module restore:

          module restore my-collection\n

          You can get a list of all your saved collections with the module savelist command:

          $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

          To get a list of all modules a collection will load, you can use the module describe command:

          $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          To remove a collection, remove the corresponding file in $HOME/.lmod.d:

          rm $HOME/.lmod.d/my-collection\n
          "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

          To see how a module would change the environment, you can use the module show command:

          $ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

          It's also possible to use the ml show command instead: they are equivalent.

          Here you can see that the Python/2.7.12-intel-2016b module comes with a whole bunch of extensions: numpy, scipy, ...

          You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

          If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

          "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

          To check the general system state, check https://www.ugent.be/hpc/en/infrastructure/status. This has information about scheduled downtime, status of the system, ...

          "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

          You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

          You can also get this information in text form (per cluster separately) with the pbsmon command:

          $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

          pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

          "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

          Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

          As an example, we will run a Perl script, which you will find in the examples subdirectory on the HPC. When you received an account on the HPC, a subdirectory with examples was automatically generated for you.

          Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

          cd\ncp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          First go to the directory with the first examples by entering the command:

          cd ~/examples/Running-batch-jobs\n

          Each time you want to execute a program on the HPC you'll need 2 things:

          The executable: the program to be executed, together with its peripheral input files, databases and/or command options.

          A batch job script, which defines the computer resource requirements of the program and the required additional software packages, and which starts the actual executable. The HPC needs to know:

          1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

          Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

          List and check the contents with:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc40000 609 Sep 11 10:25 fibo.pl\n

          In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

          1. The Perl script calculates the first 30 Fibonacci numbers.

          2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

          We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

          On the command line, you would run this using:

          $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

          Remark: Recall that you have now executed the Perl script locally on one of the login nodes of the HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. It is also not considered good practice to \"abuse\" the login nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example, since these jobs require very little computing power.

          The job script contains a description of the job by specifying the command that needs to be executed on the compute node:

          fibo.pbs
          #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

          $ qsub fibo.pbs\n123456\n

          The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"123456 \"); this is a unique identifier for the job and can be used to monitor and manage your job.

          Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

          To facilitate this, you can use a pre-defined module collection which you can restore using module restore; see the section on Save and load collections of modules for more information.
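
          As a minimal sketch, fibo.pbs could restore the my-collection collection saved earlier (adjust the collection name and walltime to your own situation):

          #!/bin/bash -l\n#PBS -l walltime=00:10:00\n\n# restore the module collection saved earlier with module save\nmodule restore my-collection\n\ncd $PBS_O_WORKDIR\n./fibo.pl\n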

          Your job is now waiting in the queue for a free workernode to start on.

          Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

          After your job has started and ended, check the contents of the directory:

          $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc40000 vsc40000   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc40000 vsc40000    0 Feb 28 13:33 fibo.pbs.e123456\n-rw------- 1 vsc40000 vsc40000 1010 Feb 28 13:33 fibo.pbs.o123456\n-rwxrwxr-x 1 vsc40000 vsc40000  302 Feb 28 13:32 fibo.pl\n

          Explore the contents of the 2 new files:

          $ more fibo.pbs.o123456\n$ more fibo.pbs.e123456\n

          These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('123456' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script).

          "}, {"location": "running_batch_jobs/#when-will-my-job-start", "title": "When will my job start?", "text": "

          In practice it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires, and new jobs may be submitted by other users that are assigned a higher priority than your job(s).

          The HPC-UGent infrastructure clusters use a fair-share scheduling policy (see HPC Policies). There is no guarantee on when a job will start, since it depends on a number of factors. One of these factors is the priority of the job, which is determined by:

          • Historical use: the aim is to balance usage over users, so infrequent (in terms of total compute time used) users get a higher priority

          • Requested resources (amount of cores, walltime, memory, ...). The more resources you request, the more likely it is the job(s) will have to wait for a while until those resources become available.

          • Time waiting in queue: queued jobs get a higher priority over time.

          • User limits: this avoids having a single user use the entire cluster. This means that each user can only use a part of the cluster.

          • Whether or not you are a member of a Virtual Organisation (VO).

            Each VO gets assigned a fair share target, which has a big impact on the job priority. This is done to let the job scheduler balance usage across different research groups.

            If you are not a member of a specific VO, you are sharing a fair share target with all other users who are not in a specific VO (which implies being in the (hidden) default VO). This can have a (strong) negative impact on the priority of your jobs compared to the jobs of users who are in a specific VO.

            See Virtual Organisations for more information on how to join a VO, or request the creation of a new VO if there is none yet for your research group.

          Some other factors are how busy the cluster is, how many workernodes are active, the resources (e.g., number of cores, memory) provided by each workernode, ...

          It might be beneficial to request less resources (e.g., not requesting all cores in a workernode), since the scheduler often finds a \"gap\" to fit the job into more easily.

          Sometimes it happens that a couple of nodes are free while your job does not start. These empty nodes are not necessarily available for your job(s). Imagine that an N-node job (with a higher priority than your waiting job(s)) has to run. It is quite unlikely that N nodes become empty at the same moment, so the scheduler keeps the nodes that free up reserved for that job; while fewer than N nodes are empty, they appear idle to you. The moment the Nth node becomes empty, the waiting N-node job will consume these N free nodes.

          "}, {"location": "running_batch_jobs/#specifying-the-cluster-on-which-to-run", "title": "Specifying the cluster on which to run", "text": "

          To use other clusters, you can swap the cluster module. This is a special module that changes which modules are available to you, and which cluster your jobs will be queued in.

          By default you are working on doduo. To switch to, e.g., donphan, you need to redefine the environment so that you get access to all modules installed on the donphan cluster and can submit jobs to the donphan scheduler, so your jobs will start on donphan instead of the default doduo cluster.

          module swap cluster/donphan\n

          Note: the donphan modules may not work directly on the login nodes, because the login nodes do not have the same architecture as the donphan cluster. They do have the same architecture as the doduo cluster, which is why software built for doduo works on the login nodes by default. See the section on Running software that is incompatible with host for why this is and how to fix it.

          To list the available cluster modules, you can use the module avail cluster/ command:

          $ module avail cluster/\n--------------------------------------- /etc/modulefiles/vsc ----------------------------------------\n   cluster/accelgor (S)    cluster/doduo   (S,L)    cluster/gallade (S)    cluster/skitty  (S)\n   cluster/default         cluster/donphan (S)      cluster/joltik  (S)\n\n  Where:\n   S:  Module is Sticky, requires --force to unload or purge\n   L:  Module is loaded\n   D:  Default Module\n\nIf you need software that is not listed, \nrequest it via https://www.ugent.be/hpc/en/support/software-installation-request\n

          As indicated in the output above, each cluster module is a so-called sticky module, i.e., it will not be unloaded when module purge (see the section on purging modules) is used.

          The output of the various commands interacting with jobs (qsub, qstat, ...) depends on which cluster module is loaded.

          "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

          It is possible to submit jobs from within a job to a cluster different from the one the job is running on. This can come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), while the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

          To submit jobs to the donphan cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/donphan instead of module swap cluster/donphan. The latter command also activates the software modules that are installed specifically for donphan, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the donphan cluster. The same approach can be used to submit jobs to another cluster, of course.

          Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the doduo cluster, loading the cluster/doduo module corresponds to loading 3 different env/ modules:

          env/ module for doduo Purpose env/slurm/doduo Changes $SLURM_CLUSTERS which specifies the cluster where jobs are sent to. env/software/doduo Changes $MODULEPATH, which controls what software modules are available for loading. env/vsc/doduo Changes the set of $VSC_ environment variables that are specific to the doduo cluster

          We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they are doing, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

          We also recommend running a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
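
          A minimal sketch of this workflow, assuming you are working on doduo and want to send a job to donphan (job_for_donphan.pbs is an illustrative job script name):

          # send jobs to donphan without switching the software stack\nmodule swap env/slurm/donphan\nqsub job_for_donphan.pbs\n\n# afterwards, reset the environment to a sane state\nmodule swap cluster/doduo\n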

          "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

          Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

          qstat 12345\n

          To show on which compute node(s) your job is running (once it is actually running):

          qstat -n 12345\n

          To remove a job from the queue so that it will not run, or to stop a job that is already running, use:

          qdel 12345\n

          When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

          $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n123456 ....     mpi  vsc40000     0    Q short\n

          Here:

          Job ID the job's unique identifier

          Name the name of the job

          User the user that owns the job

          Time Use the elapsed walltime for the job

          Queue the queue the job is in

          The state S can be any of the following:

          State Meaning Q The job is queued and is waiting to start. R The job is currently running. E The job is currently exiting after having run. C The job is completed after having run. H The job has a user or system hold on it and will not be eligible to run until the hold is removed.

          User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.
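
          If one of your own jobs is in a user hold (state H), it can typically be released with the standard PBS qrls command. This is only a sketch; whether it applies depends on how the hold was set (holds created via job dependencies, for example, are released automatically). Replace 12345 with your job ID:

          qrls 12345\n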

          "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

          There is currently (since May 2019) no way to get an overall view of the state of the cluster queues for the HPC-UGent infrastructure, due to changes to the cluster resource management software (and also because a general overview is mostly meaningless, since it doesn't give any indication of the resources requested by the queued jobs).

          "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

          Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

          It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

          "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

          The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

          qsub -l walltime=2:30:00 ...\n

          For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

          The maximum walltime for HPC-UGent clusters is 72 hours.

          If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
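
          A minimal sketch of this pattern, using the standard GNU timeout command (./my_main_program and the paths are illustrative; the exact approach is explained in the section on Running a command with a maximum time limit):

          #!/bin/bash -l\n#PBS -l walltime=2:30:00\n\ncd $PBS_O_WORKDIR\n\n# give the main program at most 2h20m, keeping about 10 minutes of margin\ntimeout 140m ./my_main_program\n\n# use the remaining time to copy results back (illustrative paths)\ncp -r $VSC_SCRATCH/results $VSC_DATA/\n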

          qsub -l mem=4gb ...\n

          The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

          The default memory reserved for a job on any given HPC-UGent cluster is the \"usable memory per node\" divided by the \"number of cores in a node\", multiplied by the number of processor cores requested (ppn). If you do not define the memory for a job, either as a command line option or as a memory directive in the job script, the job will request this default memory. Please note that using the default memory is recommended. For \"usable memory per node\" and \"number of cores in a node\" please consult https://www.ugent.be/hpc/en/infrastructure.
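
          As an illustrative calculation, taking the node type from the pbsmon example earlier (36 cores, 751 GB of memory per node) as an approximation of the usable memory: a job requesting ppn=4 would by default get roughly (751 GB / 36) x 4, i.e., about 83 GB of memory.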

          qsub -l nodes=5:ppn=2 ...\n

          The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

          qsub -l nodes=1:westmere\n

          The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

          These options can either be specified on the command line, e.g.

          qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

          or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          Note that the resources requested on the command line will override those specified in the PBS file.
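
          For example, submitting the modified fibo.pbs as follows would request 4 GB of memory, overriding the 2 GB specified in the #PBS directive (illustrative):

          qsub -l mem=4gb fibo.pbs\n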

          "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

          At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

          When you navigate to that directory and list its contents, you should see them:

          $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc40000  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc40000   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc40000   52 Sep 11 11:03 fibo.pbs.e123456\n-rw------- 1 vsc40000 1307 Sep 11 11:03 fibo.pbs.o123456\n

          In our case, our job has created both an output file ('fibo.pbs.o123456') and an error file ('fibo.pbs.e123456') containing info written to stdout and stderr respectively.

          Inspect the generated output and error files:

          $ cat fibo.pbs.o123456\n...\n$ cat fibo.pbs.e123456\n...\n
          "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

          You can instruct the HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

          #PBS -m b \n#PBS -m e \n#PBS -m a\n

          or

          #PBS -m abe\n

          These options can also be specified on the command line. Try it and see what happens:

          qsub -m abe fibo.pbs\n

          The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

          qsub -m b -M john.smith@example.com fibo.pbs\n

          will send an e-mail to john.smith@example.com when the job begins.

          "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

          If you submit two jobs expecting that they will run one after another (for example because the first generates a file the second needs), there might be a problem, as they might both be run at the same time.

          So the following example might go wrong:

          $ qsub job1.sh\n$ qsub job2.sh\n

          You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

          afterok means \"After OK\", or in other words, after the first job successfully completed.

          It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
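
          A minimal sketch of the afterany variant, e.g., for a clean-up job that should run regardless of how the first job ends (cleanup.sh is an illustrative script name):

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterany:$FIRST_ID cleanup.sh\n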

          1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

          "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

          Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

          Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line.

          Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the HPC-UGent infrastructure. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

          The syntax for qsub for submitting an interactive PBS job is:

          $ qsub -I <... pbs directives ...>\n
          "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

          Tip

          Find the code in \"~/examples/Running_interactive_jobs\"

          First of all, in order to know on which computer you're working, enter:

          $ hostname -f\ngligar07.gastly.os\n

          This means that you're now working on the login node gligar07.gastly.os of the cluster.

          The most basic way to start an interactive job is the following:

          $ qsub -I\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n

          There are two things of note here.

          1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

          2. You'll see that your directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

          In order to know on which compute-node you're working, enter again:

          $ hostname -f\nnode3501.doduo.gent.vsc\n

          Note that we are now working on the compute-node called \"node3501.doduo.gent.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

          Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

          $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

          You can exit the interactive session with:

          $ exit\n

          Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

          You can work for 3 hours by:

          qsub -I -l walltime=03:00:00\n

          If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.

          "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

          To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

          An X Window server is packaged by default on most Linux distributions. If you have a graphical user interface this generally means that you are using an X Window server.

          The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

          "}, {"location": "running_interactive_jobs/#connect-with-x-forwarding", "title": "Connect with X-forwarding", "text": "

          In order to get the graphical output of your application (which is running on a compute node on the HPC) transferred to your personal screen, you will need to reconnect to the HPC with X-forwarding enabled, which is done with the \"-X\" option.

          First exit and reconnect to the HPC with X-forwarding enabled:

          $ exit\n$ ssh -X vsc40000@login.hpc.ugent.be\n$ hostname -f\ngligar07.gastly.os\n

          We first check whether GUIs running on the login node are properly forwarded to the screen of your local machine. An easy way to test this is by running a small X-application on the login node. Type:

          $ xclock\n

          And you should see a clock appearing on your screen.

          You can close your clock and connect further to a compute node with again your X-forwarding enabled:

          $ qsub -I -X\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n$ hostname -f\nnode3501.doduo.gent.vsc\n$ xclock\n

          and you should see your clock again.

          "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

          We have developed a little interactive program that shows the communication in 2 directions. It will send information to your local screen, but also asks you to click a button.

          Now run the message program:

          cd ~/examples/Running_interactive_jobs\n./message.py\n

          You should see the following message appearing.

          Click any button and see what happens.

          -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
          "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

          You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files, where your standard output and error messages will go, and where you can collect your results.

          "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

          First go to the directory:

          cd ~/examples/Running_jobs_with_input_output_data\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          List and check the contents with:

          $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc40000   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc40000   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file3.py\n

          Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

          file1.py
          #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

          The code of the Python script is self-explanatory:

          1. In step 1, we write something to the file Hello.txt in the current directory.

          2. In step 2, we write some text to stdout.

          3. In step 3, we write to stderr.

          Check the contents of the first job script:

          file1a.pbs
          #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

          You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

          Submit it:

          qsub file1a.pbs\n

          After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

          $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc40000   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc40000  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc40000  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc40000   91 Sep 13 13:13 file1a.pbs.e123456\n-rw------- 1 vsc40000  105 Sep 13 13:13 file1a.pbs.o123456\n-rw-rw-r-- 1 vsc40000  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc40000  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file3.py*\n

          Some observations:

          1. The file Hello.txt was created in the current directory.

          2. The file file1a.pbs.o123456 contains all the text that was written to the standard output stream (\"stdout\").

          3. The file file1a.pbs.e123456 contains all the text that was written to the standard error stream (\"stderr\").

          Inspect their contents ...\u00a0and remove the files

          $ cat Hello.txt\n$ cat file1a.pbs.o123456\n$ cat file1a.pbs.e123456\n$ rm Hello.txt file1a.pbs.o123456 file1a.pbs.e123456\n

          Tip

          Type cat H and press the Tab key, and it will expand into cat Hello.txt.

          "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

          Check the contents of the job script and execute it.

          file1b.pbs
          #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n

          Inspect the contents again ...\u00a0and remove the generated files:

          $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e123456\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o123456\n$ rm Hello.txt my_serial_job.*\n

          Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrote the JOBNAME variable, and resulted in a different name for the stdout and stderr files. This name is also shown in the second column of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.

          "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

          You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

          file1c.pbs
          #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\necho Start Job\ndate\n./file1.py\necho End Job\n
          "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

          The HPC cluster offers their users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

          Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

          The following locations are available:

          Variable Description Long-term storage slow filesystem, intended for smaller files $VSC_HOME For your configuration files and other small files, see the section on your home directory. The default directory is user/Gent/xxx/vsc40000. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites. $VSC_DATA A bigger \"workspace\", for datasets, results, logfiles, etc. see the section on your data directory. The default directory is data/Gent/xxx/vsc40000. The same file system is accessible from all sites. Fast temporary storage $VSC_SCRATCH_NODE For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content. $VSC_SCRATCH For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Gent/xxx/vsc40000. This directory is cluster- or site-specific: On different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content. $VSC_SCRATCH_SITE Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space. $VSC_SCRATCH_GLOBAL Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space. $VSC_SCRATCH_CLUSTER The scratch filesystem closest to the cluster. $VSC_SCRATCH_ARCANINE A separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads.

          Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
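
          A minimal job script sketch that uses these variables to stage data in and out (the program, directory and file names are illustrative):

          #!/bin/bash -l\n#PBS -l walltime=01:00:00\n\n# stage input data from long-term storage to fast scratch\nmkdir -p $VSC_SCRATCH/myjob\ncp $VSC_DATA/input.dat $VSC_SCRATCH/myjob/\n\ncd $VSC_SCRATCH/myjob\n./my_program input.dat > output.dat\n\n# stage results back to long-term storage and clean up scratch\ncp output.dat $VSC_DATA/\nrm -rf $VSC_SCRATCH/myjob\n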

          We elaborate more on the specific function of these locations in the following sections.

          Note: $VSC_SCRATCH_KYUKON and $VSC_SCRATCH are the same directories (\"kyukon\" is the name of the storage cluster where the default shared scratch filesystem is hosted).

          For documentation about VO directories, see the section on VO directories.

          "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

          Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

          The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

          The operating system also creates a few files and folders here to manage your account. Examples are:

          File or Directory Description .ssh/ This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing! .bash_profile When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt. .bashrc This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts. .bash_history This file contains the commands you typed at your shell prompt, in case you need them again."}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

          In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

          The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

          If you are running out of quota on your $VSC_DATA filesystem you can join an existing VO, or request a new VO. See the section about virtual organisations on how to do this.

          "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

          To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

          You should remove any data from these systems once your processing has finished. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.

          Each type of scratch has its own use:

          Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

          Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has its own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with fast connection to all the cluster nodes and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

          Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

          Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

          "}, {"location": "running_jobs_with_input_output_data/#your-ugent-home-drive-and-shares", "title": "Your UGent home drive and shares", "text": "

          In order to access data on your UGent share(s), you need to stage-in the data and stage-out afterwards. On the login nodes, it is possible to access your UGent home drive and shares. To allow this you need a ticket. This requires that you first authenticate yourself with your UGent username and password by running:

          $ kinit yourugentusername@UGENT.BE\nPassword for yourugentusername@UGENT.BE:\n

          Now you should be able to access your files by running

          $ ls /UGent/yourugentusername\nhome shares www\n

          Please note the shares will only be mounted when you access this folder. You should specify your complete username - tab completion will not work.

          If you want to use the UGent shares longer than 24 hours, you should request a ticket valid for up to a week by running

          kinit yourugentusername@UGENT.BE -r 7\n

          You can verify your authentication ticket and expiry dates yourself by running klist

          $ klist\n...\nValid starting     Expires            Service principal\n14/07/20 15:19:13  15/07/20 01:19:13  krbtgt/UGENT.BE@UGENT.BE\n    renew until 21/07/20 15:19:13\n

          Your ticket is valid for 10 hours, but you can renew it before it expires.

          To renew your tickets, simply run

          kinit -R\n

          If you want your ticket to be renewed automatically up to the maximum expiry date, you can run

          krenew -b -K 60\n

          Each hour the process will check if your ticket should be renewed.

          We strongly advise disabling access to your shares once it is no longer needed:

          kdestroy\n

          If you get an error \"Unknown credential cache type while getting default ccache\" (or similar) and you use conda, then please deactivate conda before you use the commands in this chapter.

          conda deactivate\n
          "}, {"location": "running_jobs_with_input_output_data/#ugent-shares-with-globus", "title": "UGent shares with globus", "text": "

          In order to access your UGent home and shares inside the globus endpoint, you first have to generate authentication credentials on the endpoint. To do that, you have to ssh to the globus endpoint from a loginnode. You will be prompted for your UGent username and password to authenticate:

          $ ssh globus\nUGent username:ugentusername\nPassword for ugentusername@UGENT.BE:\nShares are available in globus endpoint at /UGent/ugentusername/\nOverview of valid tickets:\nTicket cache: KEYRING:persistent:xxxxxxx:xxxxxxx\nDefault principal: ugentusername@UGENT.BE\n\nValid starting     Expires            Service principal\n29/07/20 15:56:43  30/07/20 01:56:43  krbtgt/UGENT.BE@UGENT.BE\n    renew until 05/08/20 15:56:40\nTickets will be automatically renewed for 1 week\nConnection to globus01 closed.\n

          Your shares will then be available at /UGent/ugentusername/ under the globus VSC tier2 endpoint. Tickets will be renewed automatically for 1 week, after which you'll need to run this again. We advise disabling access to your shares within globus once access is no longer needed:

          $ ssh globus01 destroy\nSuccesfully destroyed session\n
          "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

          Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files as well as the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. The user will get warnings as soon as he exceeds the soft quota.

          To see a list of your current quota, visit the VSC accountpage: https://account.vscentrum.be. VO moderators can see a list of VO quota usage per member of their VO via https://account.vscentrum.be/django/vo/.

          The rules are:

          1. You will only receive a warning when you have reached the soft limit of either quota.

          2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

          3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

          We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. And they help to guarantee a fair use of all available resources for all users. Quota also help to ensure that each folder is used for its intended purpose.

          "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

          Tip

          Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

          In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

          1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

          2. repeat this action 30,000 times;

          3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the HPC.

          $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
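          The actual file2.py is provided in the examples directory mentioned above; purely as an illustration of the steps described here, a minimal sketch of such a script could look as follows (the helper function and exact output format are assumptions, not the contents of the course file):

          import os\nimport random\n\ndef primes_up_to(limit):\n    # simple trial division, fine for limits up to 2000\n    primes = []\n    for candidate in range(2, limit + 1):\n        if all(candidate % p for p in primes if p * p <= candidate):\n            primes.append(candidate)\n    return primes\n\noutput_file = os.path.join(os.environ[\"VSC_SCRATCH\"], \"primes_1.txt\")\nwith open(output_file, \"w\") as fh:\n    for _ in range(30000):\n        limit = random.randint(1, 2000)\n        print(*primes_up_to(limit), file=fh)\n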
          "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

          Tip

          Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

          In this exercise, you will

          1. Generate the file \"primes_1.txt\" again as in the previous exercise;

          2. open the file;

          3. read it line by line;

          4. calculate the average of primes in the line;

          5. count the number of primes found per line;

          6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

          Check the Python and the PBS file, and submit the job:

          $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
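          The actual file3.py is provided in the examples directory; as a minimal sketch of the processing steps described above (it assumes primes_1.txt has already been generated, and the exact output format is an assumption):

          import os\n\nscratch = os.environ[\"VSC_SCRATCH\"]\nwith open(os.path.join(scratch, \"primes_1.txt\")) as infile:\n    with open(os.path.join(scratch, \"primes_2.txt\"), \"w\") as outfile:\n        for line in infile:\n            primes = [int(p) for p in line.split()]\n            if primes:\n                # number of primes on this line and their average\n                print(len(primes), sum(primes) / len(primes), file=outfile)\n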
          "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

          The available disk space on the HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website (https://vscdocumentation.readthedocs.io/en/latest/hardware.html). As explained in the section on predefined quota, this implies that there are also limits to:

          • the amount of disk space; and

          • the number of files

          that can be made available to each individual HPC user.

          The quota of disk space and number of files for each HPC user is:

          Volume | Max. disk space | Max. # Files\nHOME | 3 GB | 20000\nDATA | 25 GB | 100000\nSCRATCH | 25 GB | 100000

          Tip

          The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.

          Tip

          If you obtained your VSC account via UGent, you can get (significantly) more storage quota in the DATA and SCRATCH volumes by joining a Virtual Organisation (VO), see the section on virtual organisations for more information. In case of questions, contact hpc@ugent.be.

          "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

          You can consult your current storage quota usage on the HPC-UGent infrastructure shared filesystems via the VSC accountpage, see the \"Usage\" section at https://account.vscentrum.be .

          VO moderators can inspect storage quota for all VO members via https://account.vscentrum.be/django/vo/.

          To check your storage usage on the local scratch filesystems on VSC sites other than UGent, you can use the \"show_quota\" command (when logged into the login nodes of that VSC site).

          Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

          $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

          This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

          If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

          $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

          If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

          $ du -s\n5632 .\n$ du -s -h\n5.5M .\n

          If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

          $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

          Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

          $ du -h --max-depth 1 $VSC_HOME\n22M /user/home/gent/vsc400/vsc40000/dataset01\n36M /user/home/gent/vsc400/vsc40000/dataset02\n22M /user/home/gent/vsc400/vsc40000/dataset03\n3.5M /user/home/gent/vsc400/vsc40000/primes.txt\n24M /user/home/gent/vsc400/vsc40000/.cache\n
          "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

          Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

          Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

          To change the group of a directory and its underlying directories and files, you can use:

          chgrp -R groupname directory\n
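          To check which groups your own VSC account currently belongs to, you can use the standard Linux id or groups commands on a login node (the group names shown here are just an illustration):

          $ id -Gn\nvsc40000 example gcourse_e071400_2023\n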
          "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
          1. Get the group name you want to belong to.

          2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

          3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

          "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
          1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

          2. Fill out the group name. This cannot contain spaces.

          3. Put a description of your group in the \"Info\" field.

          4. You will now be a member and moderator of your newly created group.

          "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

          Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

          "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

          You can get details about the current state of groups on the HPC infrastructure with the following command (example is the name of the group we want to inspect):

          $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

          We can see that the VSC id number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

          "}, {"location": "running_jobs_with_input_output_data/#virtual-organisations", "title": "Virtual Organisations", "text": "

          A Virtual Organisation (VO) is a special type of group. You can only be a member of one single VO at a time (or not be in a VO at all). Being in a VO allows for larger storage quota to be obtained (but these requests should be well-motivated).

          "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-vo", "title": "Joining an existing VO", "text": "
          1. Get the VO id of the research group you belong to (this id is formed by the letters gvo, followed by 5 digits).

          2. Go to https://account.vscentrum.be/django/vo/join and fill in the section named \"Join VO\". You will be asked to fill in the VO id and a message for the moderator of the VO, where you identify yourself. This should look something like in the image below.

          3. After clicking the submit button, a message will be sent to the moderator of the VO, who will either approve or deny the request.

          "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-vo", "title": "Creating a new VO", "text": "
          1. Go to https://account.vscentrum.be/django/vo/new and scroll down to the section \"Request new VO\". This should look something like in the image below.

          2. Fill in why you want to request a VO.

          3. Fill out both the internal and public VO name. These cannot contain spaces, and should be 8-10 characters long. For example, genome25 is a valid VO name.

          4. Fill out the rest of the form and press submit. This will send a message to the HPC administrators, who will then either approve or deny the request.

          5. If the request is approved, you will now be a member and moderator of your newly created VO.

          "}, {"location": "running_jobs_with_input_output_data/#requesting-more-storage-space", "title": "Requesting more storage space", "text": "

          If you're a moderator of a VO, you can request additional quota for the VO and its members.

          1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Request additional quota\". See the image below to see how this looks.

          2. Fill out how much additional storage you want. In the screenshot below, we're asking for 500 GiB extra space for VSC_DATA, and for 1 TiB extra space on VSC_SCRATCH_KYUKON.

          3. Add a comment explaining why you need additional storage space and submit the form.

          4. An HPC administrator will review your request and approve or deny it.

          "}, {"location": "running_jobs_with_input_output_data/#setting-per-member-vo-quota", "title": "Setting per-member VO quota", "text": "

          VO moderators can tweak how much of the VO quota each member can use. By default, this is set to 50% for each user, but the moderator can change this: it is possible to give a particular user more than half of the VO quota (for example 80%), or significantly less (for example 10%).

          Note that the total percentage can be above 100%: the percentages the moderator allocates per user are the maximum percentages of storage users can use.

          1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Manage per-member quota share\". See the image below to see how this looks.

          2. Fill out how much percent of the space you want each user to be able to use. Note that the total can be above 100%. In the screenshot below, there are four users. Alice and Bob can use up to 50% of the space, Carl can use up to 75% of the space, and Dave can only use 10% of the space. So in total, 185% of the space has been assigned, but of course only 100% can actually be used.

          "}, {"location": "running_jobs_with_input_output_data/#vo-directories", "title": "VO directories", "text": "

          When you're a member of a VO, there will be some additional directories on each of the shared filesystems available:

          VO scratch ($VSC_SCRATCH_VO): A directory on the shared scratch filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_SCRATCH directory (see the section on your scratch space).

          VO data ($VSC_DATA_VO): A directory on the shared data filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_DATA directory (see the section on your data directory).

          If you put _USER after each of these variable names, you can see your personal folder in these filesystems. For example: $VSC_DATA_VO_USER is your personal folder in your VO data filesystem (this is equivalent to $VSC_DATA_VO/$USER), and analogous for $VSC_SCRATCH_VO_USER.
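          For example, the following commands (a quick illustration, assuming you are a member of a VO) show that these personal folders are simply subdirectories named after your user name:

          echo $VSC_DATA_VO_USER       # prints the same path as: echo $VSC_DATA_VO/$USER\nls $VSC_SCRATCH_VO_USER      # your personal folder in the VO scratch space\n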

          "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

          A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

          "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

          This section will explain how to create, activate, use and deactivate Python virtual environments.

          "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

          A Python virtual environment can be created with the following command:

          python -m venv myenv      # Create a new virtual environment named 'myenv'\n

          This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

          Warning

          When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

          "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

          To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

          source myenv/bin/activate                    # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

          After activating the virtual environment, you can install additional Python packages with pip install:

          pip install example_package1\npip install example_package2\n

          These packages will be scoped to the virtual environment, will not affect the system-wide Python installation, and are only available when the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

          It is now possible to run Python scripts that use the installed packages in the virtual environment.

          Tip

          When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

          Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

          To check if a package is available as a module, use:

          module av package_name\n

          Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

          module show module_name\n

          to check which extensions are included in a module (if any).
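          For example, to check whether numpy is included in a particular SciPy-bundle version (the version shown here is just one that appears elsewhere in this documentation), you could run something like:

          module show SciPy-bundle/2023.11-gfbf-2023b\n

          The extensions included in the bundle (if any) are listed in the output.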

          "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

          Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

          example.py
          import example_package1\nimport example_package2\n...\n
          python example.py\n
          "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

          When you are done using the virtual environment, you can deactivate it. To do that, run:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

          You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

          pytorch_poutyne.py
          import torch\nimport poutyne\n\n...\n

          We load a PyTorch package as a module and install Poutyne in a virtual environment:

          module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

          While the virtual environment is activated, we can run the script without any issues:

          python pytorch_poutyne.py\n

          Deactivate the virtual environment when you are done:

          deactivate\n
          "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

          To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

          module swap cluster/donphan\nqsub -I\n

          After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

          Naming a virtual environment

          When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

          python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
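          When activating the environment later on, for example inside a job script, the same variable can be used so that each job picks the environment matching the cluster it runs on; a small sketch:

          source myenv_${VSC_INSTITUTE_CLUSTER}/bin/activate\n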
          "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

          This section will combine the concepts discussed in the previous sections to:

          1. Create a virtual environment on a specific cluster.
          2. Combine packages installed in the virtual environment with modules.
          3. Submit a job script that uses the virtual environment.

          The example script that we will run is the following:

          pytorch_poutyne.py
          import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

          First, we create a virtual environment on the donphan cluster:

          module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

          Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

          jobscript.pbs
          #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

          Next, we submit the job script:

          qsub jobscript.pbs\n

          Two files will be created in the directory where the job was submitted: python_job_example.o123456 and python_job_example.e123456, where 123456 is the id of your job. The .o file contains the output of the job.

          "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

          Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

          For example, if we create a virtual environment on the skitty cluster,

          $ module swap cluster/skitty\n$ qsub -I\n$ python -m venv myenv\n

          return to the login node by pressing CTRL+D and try to use the virtual environment:

          $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

          we are presented with the illegal instruction error. More info on this here

          "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

          When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

          python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

          Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.

          "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

          There are two main reasons why this error could occur.

          1. You have not loaded the Python module that was used to create the virtual environment.
          2. You loaded or unloaded modules while the virtual environment was activated.
          "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

          If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

          The following commands illustrate this issue:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

          module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
          "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

          You must not load or unload modules while in a virtual environment. Loading and unloading modules modifies the $PATH variable in the current shell. When activating a virtual environment, it will store the $PATH variable of the shell at that moment. If you modify the $PATH variable while in a virtual environment by loading or unloading modules, and deactivate the virtual environment, the $PATH variable will be reset to the one stored in the virtual environment. Trying to use those modules will lead to errors:

          $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

          The solution is to only modify modules when not in a virtual environment.
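          In practice this means: load all required modules first, then activate the virtual environment, and deactivate it again before changing modules. A minimal sketch of this safe order (the script name is just a placeholder):

          module load Python/3.10.8-GCCcore-12.2.0  # 1. load modules first\nsource myenv/bin/activate                 # 2. then activate the virtual environment\npython my_script.py                       # 3. do your work\ndeactivate                                # 4. deactivate before touching modules again\nmodule purge                              # 5. only now load/unload modules\n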

          "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

          Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

          This documentation only covers aspects of using Singularity on the infrastructure.

          "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

          Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to prevent the use of Singularity from impacting other users on the system.

          The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

          In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know via .

          "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

          Creating new Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can make new Singularity images or convert Docker images.

          When you create Singularity images or convert Docker images, some restrictions apply:

          • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination.
          "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH.

          Create a job script that runs your own script inside the container, and create an example myscript.sh:

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n
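          As a rough, hedged sketch (the image file name is a placeholder for the testing image you copied to $VSC_SCRATCH), a job script running myscript.sh inside the container could look like:

          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $VSC_SCRATCH\n# <image>.sif is a placeholder for the copied testing image\nsingularity exec ./<image>.sif bash ./myscript.sh\n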

          "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

          We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself

          Copy the testing image from /apps/gent/tutorials to $VSC_SCRATCH.


          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before singularity execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH and, for example, compile an MPI example program inside the container.

          Example MPI job script:
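          As a rough sketch only (the module name, image name and MPI program are placeholders, and the exact launch options may differ on your cluster), such a job script could look like:

          #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=00:30:00\n\nmodule load <MPI-module>              # placeholder: the MPI module matching your container\ncd $VSC_SCRATCH\nmpirun singularity exec ./<image>.sif ./mpi_example\n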

          "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

          The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

          As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

          In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

          In order to prepare things, make a teaching request by contacting the HPC-UGent team with the following information (explained further below):

          • Title and nickname
          • Start and end date for your course or training
          • VSC-ids of all teachers/trainers
          • Participants based on UGent Course Code and/or list of VSC-ids
          • Optional information
            • Additional storage requirements
              • Shared folder
              • Groups folder for collaboration
              • Quota
            • Reservation for resource requirements beyond the interactive cluster
            • Ticket number for specific software needed for your course/training
            • Details for a custom Interactive Application in the webportal

          In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

          Please make these requests well in advance, several weeks before the start of your course/workshop.

          "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

          The title of the course or training can be used in e.g. reporting.

          The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

          When choosing the nickname, try to make it unique, but this is neither enforced nor checked.

          "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

          The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

          The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

          • Course group and subgroups will be deactivated
          • Residual data in the course directories will be archived or deleted
          • Custom Interactive Applications will be disabled
          "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

          A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also member of this group).

          This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

          Provide us with a list of all the VSC-ids for the teachers or trainers to identify the moderators.

          "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

          The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

          "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

          Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

          The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

          Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

          A course group will be automatically created for your course, with all VSC accounts of registered students as member. Typical format gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderator of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

          "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

          (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as member. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

          "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

          For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

          This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

          Every course directory will always contain the folders:

          • input
            • ideally suited to distribute input data such as common datasets
            • moderators have read/write access
            • group members (students) only have read access
          • members
            • this directory contains a personal folder for every student in your course, e.g. members/vsc<01234>
            • only this specific VSC-id will have read/write access to this folder
            • moderators have read access to this folder
          "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

          Optionally, we can also create these folders:

          • shared
            • this is a folder for sharing files between any and all group members
            • all group members and moderators have read/write access
            • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
          • groups
            • a number of groups/group_<01> folders are created under the groups folder
            • these folders are suitable if you want to let your students collaborate closely in smaller groups
            • each of these group_<01> folders is owned by a dedicated group
            • teachers are automatically made moderators of these dedicated groups
            • moderators can populate these groups with the VSC-ids of group members in the VSC accountpage, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
            • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

          If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

          • shared: yes
          • subgroups: <number of (sub)groups>
          "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

          There are 4 quota settings that you can choose in your teaching request, in case the defaults are not sufficient:

          • overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
          • member quota (default: 5 GB volume and 10k files) applies per student/participant

          The course data usage is not counted towards any other quota (like the VO quota). It depends solely on these settings.

          "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

          The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date it will be permanently removed. We assume that teachers/trainers always have their own copy of the course data as a starting point for a next course.

          "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

          We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

          Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

          Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

          Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

          "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

          In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

          We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

          Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

          "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

          HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

          A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

          If you would like this for your course, provide more details in your teaching request, including:

          • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

          • which cluster you want to use

          • how many nodes/cores/GPUs are needed

          • which software modules you are loading

          • custom code you are launching (e.g. autostart a GUI)

          • required environment variables that you are setting

          • ...

          We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

          A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

          "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

          Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore, so since 2021 the HPC-UGent infrastructure no longer uses Torque in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers did not have to learn different commands to submit and manage jobs.

          "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

          Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

          "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

          Jobcli is a Python library that was developed by the HPC-UGent team to make it possible for the HPC-UGent infrastructure to use a Torque frontend and a Slurm backend. In addition to that, it adds some additional options for Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

          "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

          Adding --help to a Torque command when using it on the HPC-UGent infrastructure will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

          For example:

          $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

          "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

          Adding --dryrun to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. Using --dryrun will not actually execute the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

          Similarly to --dryrun, adding --debug to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. However in contrast to --dryrun, using --debug will actually run the Slurm backend command.

          See also the examples below.

          "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

          The following examples illustrate the working of the --dryrun and --debug options with an example jobscript.

          example.sh:

          #/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

          Running the following command:

          $ qsub --dryrun example.sh -N example\n

          will generate this output:

          Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc40000/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
          This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque commands to Slurm commands. For example, the job name is the one we specified with the -N option in the command.

          With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related structures, like $PBS_JOBID, they are retained. Slurm is configured on the HPC-UGent infrastructure such that common PBS_* environment variables are defined in the job environment, next to their Slurm equivalents.

          "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

          Similarly to the --dryrun example, we start by running the following command:

          $ qsub --debug example.sh -N example\n

          which generates this output:

          DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
          The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

          "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

          Below is a list of the most common and useful directives.

          Option | System type | Description | Example\n-k | All | Send \"stdout\" and/or \"stderr\" to your home directory when the job runs | #PBS -k o or #PBS -k e or #PBS -koe\n-l | All | Precedes a resource request, e.g., processors, wallclock |\n-M | All | Send e-mail messages to an alternative e-mail address | #PBS -M me@mymail.be\n-m | All | Send an e-mail message when a job begins execution and/or ends or aborts | #PBS -m b or #PBS -m be or #PBS -m ba\nmem | Shared Memory | Specifies the amount of memory you need for a job | #PBS -l mem=90gb\nmpiprocs | Clusters | Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. | #PBS -l mpiprocs=4\n-N | All | Give your job a unique name | #PBS -N galaxies1234\n-ncpus | Shared Memory | The number of processors to use for a shared memory job | #PBS ncpus=4\n-r | All | Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. | #PBS -r n or #PBS -r y\nselect | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive. | #PBS -l select=2\n-V | All | Make sure that the environment in which the job runs is the same as the environment in which it was submitted | #PBS -V\nwalltime | All | The maximum time a job can run before being stopped. If not used, a default of a few minutes applies. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS. | #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

          TORQUE-related environment variables in batch job scripts.

          # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

          IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.
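
          As an illustrative sketch (the job name, resource values, e-mail address and program name are placeholders), a job script combining several of the directives above could look like this; note that all #PBS lines come before the first executable command:

          #!/bin/bash\n#PBS -N galaxies1234\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#PBS -l mem=16gb\n#PBS -m be\n#PBS -M me@mymail.be\n# first executable command: move to the directory from which the job was submitted\ncd $PBS_O_WORKDIR\n./my_program\n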

          When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

          Variable Description PBS_ENVIRONMENT set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. PBS_JOBID the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat. PBS_JOBNAME the job name supplied by the user PBS_NODEFILE the name of the file that contains the list of nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc. PBS_QUEUE the name of the queue from which the job is executed PBS_O_HOME value of the HOME variable in the environment in which qsub was executed PBS_O_LANG value of the LANG variable in the environment in which qsub was executed PBS_O_LOGNAME value of the LOGNAME variable in the environment in which qsub was executed PBS_O_PATH value of the PATH variable in the environment in which qsub was executed PBS_O_MAIL value of the MAIL variable in the environment in which qsub was executed PBS_O_SHELL value of the SHELL variable in the environment in which qsub was executed PBS_O_TZ value of the TZ variable in the environment in which qsub was executed PBS_O_HOST the name of the host upon which the qsub command is running PBS_O_QUEUE the name of the original queue to which the job was submitted PBS_O_WORKDIR the absolute path of the current working directory of the qsub command. This is the most useful one; use it in every job script. The first thing to do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts the job in your $HOME directory. PBS_VERSION Version Number of TORQUE, e.g., TORQUE-2.5.1 PBS_MOMPORT active port for mom daemon PBS_TASKNUM number of tasks requested PBS_JOBCOOKIE job cookie PBS_SERVER Server Running TORQUE"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

          Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this can be found in the subsections below.

          "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

          When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

          To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

          Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
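
          As a minimal sketch (my_openmp_program is a hypothetical multi-threaded program), you can request a number of cores on a single node and derive the thread count from the allocated resources:

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\ncd $PBS_O_WORKDIR\n# $PBS_NODEFILE contains one line per allocated core, so use that as the thread count\nexport OMP_NUM_THREADS=$(wc -l < $PBS_NODEFILE)\n./my_openmp_program\n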

          Other reasons why using more cores may not lead to a (significant) speedup include:

          • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the number of cores will result in a 2x speedup. This is because time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or may even see slower runs). For example, this can happen when you split your program into too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

          • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program cannot be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour, so the speedup can never exceed a factor of 20 (see the short calculation after this list). When you reach this theoretical limit, using more cores will not help at all to speed up the computational workload.

          • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, one thread/process will need to wait until the other one is finished using that resource. When every thread needs the same resource, the program will definitely run slower than if the threads did not have to wait for each other.

          • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that threads in Python are implemented in a way that prevents multiple threads from running at the same time, because of the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can, even though processes are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

          • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

          • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
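
          To make the Amdahl's Law example above concrete, here is a short back-of-the-envelope calculation (using awk purely as a calculator) for a program where 19 of the 20 hours can be parallelized (parallel fraction p = 0.95); the speedup on n cores is 1 / ((1 - p) + p / n), which never exceeds 20:

          $ awk 'BEGIN { p = 19/20; for (n = 1; n <= 1024; n *= 4) { s = 1 / ((1 - p) + p / n); print n \" cores: speedup = \" s } }'\n1 cores: speedup = 1\n4 cores: speedup = 3.47826\n16 cores: speedup = 9.14286\n64 cores: speedup = 15.4217\n256 cores: speedup = 18.6182\n1024 cores: speedup = 19.6357\n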

          More info on running multi-core workloads on the HPC-UGent infrastructure can be found here.

          "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

          When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

          Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

          Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist some libraries that do this for you.

          Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

          An example of how you can make beneficial use of multiple nodes can be found here.

          You can also use MPI in Python; some useful packages that are also available on the HPC are:

          • mpi4py
          • Boost.MPI

          We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on one node before expanding to more nodes. In addition, when running MPI software we strongly advise using our mympirun tool.
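
          A hedged sketch of a multi-node MPI job script (the module name vsc-mympirun and the program name my_mpi_program are assumptions for illustration; check module avail for the exact module name):

          #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=02:00:00\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\n# mympirun determines the number of MPI processes from the resources allocated to the job\nmympirun ./my_mpi_program\n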

          "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

          If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

          If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

          "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

          If you get an error message in your job output similar to this:

          =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

          This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.
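
          For example (myscript.pbs is a hypothetical job script), you can request a larger walltime when submitting:

          $ qsub -l walltime=48:00:00 myscript.pbs\n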

          "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

          Sometimes a job hangs at some point or stops writing to the disk. These errors are usually related to quota usage: you may have reached your quota limit at some storage endpoint. You should move (or remove) data to a different storage endpoint (or request more quota) to be able to write to the disk again, and then resubmit the jobs.

          Another option is to request extra quota for your VO from the VO moderator(s). See the sections on Pre-defined user directories and Pre-defined quotas for more information about quotas and how to use the storage endpoints in an efficient way.
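
          To find out which directories are taking up the most space, you can use du; the example below (a sketch, using your home directory via $VSC_HOME) sorts the results by size:

          $ du -sh $VSC_HOME/* 2> /dev/null | sort -h\n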

          "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

          If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

          If you have errors that look like:

          vsc40000@login.hpc.ugent.be: Permission denied\n

          or you are experiencing problems with connecting, here is a list of things to do that should help:

          1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

          2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

          3. Your SSH private key may not be in the default location ($HOME/.ssh/id_rsa). There are several ways to deal with this (using one of these is sufficient):

            1. Use the ssh -i option to point ssh at your key (see section Connect; a short example is given after this list) OR;
            2. Use ssh-add (see section Using an SSH agent) OR;
            3. Specify the location of the key in $HOME/.ssh/config. You will need to replace the VSC login id in the User field with your own:
              Host hpcugent\n    Hostname login.hpc.ugent.be\n    IdentityFile /path/to/private/key\n    User vsc40000\n
              Now you can connect with ssh hpcugent.
          4. Please double/triple check your VSC login ID. It should look something like vsc40000: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

          5. Did you previously connect to the HPC from another machine, and are you now using a different machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait 15-20 minutes until the SSH public key(s) you added become active.

          6. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh. id_rsa.pub is the usual filename of the public key, id_rsa is the usual filename of the private key. (See also section\u00a0Connect)

          7. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

          8. Please do not use someone else's private keys. You must never share your private key; it's called private for a good reason.
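
          As an example of using ssh -i (item 3 above), you can point ssh directly at a private key in a non-default location:

          $ ssh -i /path/to/private/key vsc40000@login.hpc.ugent.be\n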

          If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@ugent.be and include the following information:

          Please add -vvv as a flag to ssh like:

          ssh -vvv vsc40000@login.hpc.ugent.be\n

          and include the output of that command in the message.

          "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

          If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

          @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \n@     WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!    @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! \nSomeone could be\neavesdropping on you right now (man-in-the-middle attack)! \nIt is also possible that a host key has just been changed. \nThe fingerprint for the ECDSA key sent by the remote host is\nSHA256:1MNKFTfl1T9sm6tTWAo4sn7zyEfiWFLKbk/mlT+7S5s. \nPlease contact your system administrator. \nAdd correct host key in \u00a0~/.ssh/known_hosts to get rid of this message. \nOffending ECDSA key in \u00a0~/.ssh/known_hosts:21\nECDSA host key for login.hpc.ugent.be has changed and you have requested strict checking.\nHost key verification failed.\n

          You will need to remove the line it's complaining about (in the example, line 21). To do that, open ~/.ssh/known_hosts in an editor, and remove the line. This results in ssh \"forgetting\" the system you are connecting to.

          Alternatively, you can use the command that may be shown by the warning after \"remove with:\"; it should look something like this:

          ssh-keygen -f \"~/.ssh/known_hosts\" -R \"login.hpc.ugent.be\"\n

          If the command is not shown, take the file name from the \"Offending ECDSA key in\" line, and the host name from the \"ECDSA host key for\" line.

          After you've done that, you'll need to connect to the HPC again. See Warning message when first connecting to new host to verify the fingerprints.

          "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

          If you get errors like:

          $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

          or

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

          It's probably because you transferred the files from a Windows computer. See the section about dos2unix in the Linux tutorial to fix this error.
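
          Assuming the dos2unix command is available in your session (it is covered in the Linux tutorial), converting the script in place typically looks like this:

          $ dos2unix fibo.pbs\n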

          "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "
          $ ssh vsc40000@login.hpc.ugent.be\nThe authenticity of host login.hpc.ugent.be (<IP-adress>) can't be established. \n<algorithm> key fingerprint is <hash>\nAre you sure you want to continue connecting (yes/no)?\n

          You can check the authenticity by verifying whether the key fingerprint shown in this message matches one of the following lines:

          RSA key fingerprint is 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\nRSA key fingerprint is SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\nECDSA key fingerprint is e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\nECDSA key fingerprint is SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\nED25519 key fingerprint is 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\nED25519 key fingerprint is SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n

          If it does, type yes. If it doesn't, please contact support: hpc@ugent.be.

          "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

          To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

          Note

          Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

          "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

          If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

          Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

          You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.
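
          For example, adding this line near the top of your job script (a minimal illustration) prints the current limit to the job output:

          ulimit -v    # prints the virtual memory limit in kilobytes, or \"unlimited\"\n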

          "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

          See Generic resource requirements to set memory and other requirements; see Specifying memory requirements to fine-tune the amount of memory you request.
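
          As a quick illustration (the value is just an example; see the sections referenced above for details), memory can be requested with a directive like:

          #PBS -l mem=16gb\n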

          "}, {"location": "troubleshooting/#module-conflicts", "title": "Module conflicts", "text": "

          Modules that are loaded together must use the same toolchain version or common dependencies. In the following example, we try to load a module that uses the intel-2018a toolchain together with one that uses the intel-2017a toolchain:

          $ module load Python/2.7.14-intel-2018a\n$ module load  HMMER/3.1b2-intel-2017a\nLmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). \nYou should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. \nUse 'ml avail HMMER' to get an overview of the available versions.\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be \nWhile processing the following module(s):\n\n    Module fullname          Module Filename\n    ---------------          ---------------\n    HMMER/3.1b2-intel-2017a  /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua\n

          This resulted in an error because we tried to load two modules with different versions of the intel toolchain.

          To fix this, check if there are other versions of the modules you want to load that have the same version of common dependencies. You can list all versions of a module with module avail: for HMMER, this command is module avail HMMER.

          As a rule of thumb, toolchains in the same row are compatible with each other:

          GCCcore-13.2.0 GCC-13.2.0 gfbf-2023b/gompi-2023b foss-2023b GCCcore-13.2.0 intel-compilers-2023.2.1 iimkl-2023b/iimpi-2023b intel-2023b GCCcore-12.3.0 GCC-12.3.0 gfbf-2023a/gompi-2023a foss-2023a GCCcore-12.3.0 intel-compilers-2023.1.0 iimkl-2023a/iimpi-2023a intel-2023a GCCcore-12.2.0 GCC-12.2.0 gfbf-2022b/gompi-2022b foss-2022b GCCcore-12.2.0 intel-compilers-2022.2.1 iimkl-2022b/iimpi-2022b intel-2022b GCCcore-11.3.0 GCC-11.3.0 gfbf-2022a/gompi-2022a foss-2022a GCCcore-11.3.0 intel-compilers-2022.1.0 iimkl-2022a/iimpi-2022a intel-2022a GCCcore-11.2.0 GCC-11.2.0 gfbf-2021b/gompi-2021b foss-2021b GCCcore-11.2.0 intel-compilers-2021.4.0 iimkl-2021b/iimpi-2021b intel-2021b GCCcore-10.3.0 GCC-10.3.0 gfbf-2021a/gompi-2021a foss-2021a GCCcore-10.3.0 intel-compilers-2021.2.0 iimkl-2021a/iimpi-2021a intel-2021a GCCcore-10.2.0 GCC-10.2.0 gfbf-2020b/gompi-2020b foss-2020b GCCcore-10.2.0 iccifort-2020.4.304 iimkl-2020b/iimpi-2020b intel-2020b

          Example

          We could load the following modules together:

          ml XGBoost/1.7.2-foss-2022a\nml scikit-learn/1.1.2-foss-2022a\nml cURL/7.83.0-GCCcore-11.3.0\nml JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0\n

          Another common error is:

          $ module load cluster/donphan\nLmod has detected the following error: A different version of the 'cluster' module is already loaded (see output of 'ml').\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be\n

          This is because there can only be one cluster module active at a time. The correct command is module swap cluster/donphan. See also Specifying the cluster on which to run.

          "}, {"location": "troubleshooting/#illegal-instruction-error", "title": "Illegal instruction error", "text": ""}, {"location": "troubleshooting/#running-software-that-is-incompatible-with-host", "title": "Running software that is incompatible with host", "text": "

          When running software provided through modules (see Modules), you may run into errors like:

          $ module swap cluster/donphan\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n\n$ module load Python/3.10.8-GCCcore-12.2.0\n$ python\nPlease verify that both the operating system and the processor support\nIntel(R) MOVBE, F16C, FMA, BMI, LZCNT and AVX2 instructions.\n

          or errors like:

          $ python\nIllegal instruction\n

          When we swap to a different cluster, the available modules change so they work for that cluster. That means that if the cluster and the login nodes have a different CPU architecture, software loaded using modules might not work.

          If you want to test software on the login nodes, make sure the cluster/doduo module is loaded (with module swap cluster/doduo, see Specifying the cluster on which to run), since the login nodes and the doduo cluster workernodes have the same CPU architecture.

          If modules are already loaded, and then we swap to a different cluster, all our modules will get reloaded. This means that all current modules will be unloaded and then loaded again, so they'll work on the newly loaded cluster. Here's an example of what that would look like:

          $ module load Python/3.10.8-GCCcore-12.2.0\n$ module swap cluster/donphan\n\nDue to MODULEPATH changes, the following have been reloaded:\n  1) GCCcore/12.2.0                   8) binutils/2.39-GCCcore-12.2.0\n  2) GMP/6.2.1-GCCcore-12.2.0         9) bzip2/1.0.8-GCCcore-12.2.0\n  3) OpenSSL/1.1                     10) libffi/3.4.4-GCCcore-12.2.0\n  4) Python/3.10.8-GCCcore-12.2.0    11) libreadline/8.2-GCCcore-12.2.0\n  5) SQLite/3.39.4-GCCcore-12.2.0    12) ncurses/6.3-GCCcore-12.2.0\n  6) Tcl/8.6.12-GCCcore-12.2.0       13) zlib/1.2.12-GCCcore-12.2.0\n  7) XZ/5.2.7-GCCcore-12.2.0\n\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n

          This might result in the same problems as mentioned above. When swapping to a different cluster, you can run module purge to unload all modules to avoid problems (see Purging all modules).

          "}, {"location": "troubleshooting/#multi-job-submissions-on-a-non-default-cluster", "title": "Multi-job submissions on a non-default cluster", "text": "

          When using a tool that is made available via modules to submit jobs, for example Worker, you may run into the following error when targeting a non-default cluster:

          $  wsub\n/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction     (core dumped) ${PERL} ${DIR}/../lib/wsub.pl \"$@\"\n

          When executing the module swap cluster command, you are not only changing your session environment to submit to that specific cluster, but also to use the part of the central software stack that is specific to that cluster. In the case of the Worker example above, the latter implies that you are running the wsub command on top of a Perl installation that is optimized specifically for the CPUs of the workernodes of that cluster, which may not be compatible with the CPUs of the login nodes, triggering the Illegal instruction error.

          The cluster modules are split up into several env/* \"submodules\" to help deal with this problem. For example, by using module swap env/slurm/donphan instead of module swap cluster/donphan (starting from the default environment, the doduo cluster), you can update your environment to submit jobs to donphan, while still using the software installations that are specific to the doduo cluster (which are compatible with the login nodes since the doduo cluster workernodes have the same CPUs). The same goes for the other clusters as well of course.

          Tip

          To submit a Worker job to a specific cluster, like the donphan interactive cluster for instance, use:

          $ module swap env/slurm/donphan \n
          instead of
          $ module swap cluster/donphan \n

          We recommend using a module swap cluster command after submitting the jobs.

          This is to \"reset\" your environment to a sane state, since only having a different env/slurm module loaded can also lead to some surprises if you're not paying close attention.

          "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

          All the HPC clusters run some variant of the \"Red Hat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

          vsc40000@ln01[203] $\n

          When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

          Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen nano Text editor

          Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

          $ echo This is a test\nThis is a test\n

          Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

          More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the command \"ls\", by trying any of the following:

          $ ls --help \n$ man ls\n$ info ls\n

          (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

          "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

          In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

          Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

          Another very common scripting language is shell scripting: as mentioned above, a shell script simply contains the commands you would normally type at your shell prompt, in the order you want them executed.

          In the following examples, each line typically contains a single command to be executed, although it is possible to put multiple commands on one line. A very simple example of a script may be:

          echo \"Hello! This is my hostname:\" \nhostname\n

          You can type both lines at your shell prompt, and the result will be the following:

          $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\ngligar07.gastly.os\n

          Suppose we want to call this script \"foo\". Open a new file for editing, name it \"foo\", and edit it with your favourite editor:

          nano foo\n

          or use the following commands:

          echo \"echo 'Hello! This is my hostname:'\" > foo\necho hostname >> foo\n

          The easiest way to run a script is to start the interpreter and pass the script as a parameter. In the case of our script, the interpreter may be either \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

          $ bash foo\nHello! This is my hostname:\ngligar07.gastly.os\n

          Congratulations, you just created and started your first shell script!

          A more advanced way of executing your shell scripts is by making them executable on their own, so without invoking the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to specify this in some way. The easiest way is by using the so-called \"shebang\" notation, created explicitly for this purpose: you put the following line at the top of your shell script: \"#!/path/to/your/interpreter\".

          You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

          $ which bash\n/bin/bash\n

          We edit our script and change it with this information:

          #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

          Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

          Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

          chmod +x foo\n

          Now you can start your script by simply executing it:

          $ ./foo\nHello! This is my hostname:\ngligar07.gastly.os\n

          The same technique can be used for all other scripting languages, like Perl and Python.

          Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

          "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
          at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg Brings a job running in the background to the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Change the access permissions (mode) of files and directories"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

          The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

          Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

          To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

          Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

          Through this web portal, you can:

          • browse through the files & directories in your VSC account, and inspect, manage or change them;

          • consult active jobs (across all HPC-UGent Tier-2 clusters);

          • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

          • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

          • open a terminal session directly in your web browser;

          More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

          "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

          All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

          "}, {"location": "web_portal/#login", "title": "Login", "text": "

          When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

          "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

          The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

          Please click \"Authorize\" here.

          This request will only be made once; you should not see it again afterwards.

          "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

          Once logged in, you should see this start page:

          This page includes a menu bar at the top: buttons on the left provide access to the different features supported by the web portal, while a Help menu, your VSC account name, and a Log Out button are shown on the top right. Below it is the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

          If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

          "}, {"location": "web_portal/#features", "title": "Features", "text": "

          We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

          "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

          Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

          The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

          Here you can:

          • Click a directory in the tree view on the left to open it;

          • Use the buttons on the top to:

            • go to a specific subdirectory by typing in the path (via Go To...);

            • open the current directory in a terminal (shell) session (via Open in Terminal);

            • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

            • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

            • show hidden files and directories, whose names start with a dot (.) (via Show Dotfiles);

            • show the owner and permissions in the file listing (via Show Owner/Mode);

          • Double-click a directory in the file listing to open that directory;

          • Select one or more files and/or directories in the file listing, and:

            • use the View button to see the contents (use the button at the top right to close the resulting popup window);

            • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

            • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

            • use the Download button to download the selected files and directories from your VSC account to your local workstation;

            • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

            • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

            • use the Delete button to (permanently!) remove the selected files and directories;

          For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

          "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

          Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

          For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

          "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

          To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

          A new browser tab will be opened that shows all your current queued and/or running jobs:

          You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

          Jobs that are still queued or running can be deleted using the red button on the right.

          Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

          For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

          "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

          To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

          This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

          You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

          Don't forget to actually submit your job to the system via the green Submit button!

          "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

          In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

          "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

          Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

          Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

          To exit the shell session, type exit followed by Enter and then close the browser tab.

          Note that you cannot access a shell session after you have closed the browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

          "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

          To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

          You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode, the regular queueing times apply, depending on the requested resources.

          Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

          To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

          "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

          See dedicated page on Jupyter notebooks

          "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

          In case of problems with the web portal, it could help to restart the web server running in your VSC account.

          You can do this via the Restart Web Server button under the Help menu item:

          Of course, this only affects your own web portal session (not those of others).

          "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
          • ABAQUS for CAE course
          "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

          X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

          1. A graphical remote desktop that works well over low bandwidth connections.

          2. Copy/paste support from client to server and vice-versa.

          3. File sharing from client to server.

          4. Support for sound.

          5. Printer sharing from client to server.

          6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

          "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

          X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

          X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. This section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

          "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

          After the X2Go client installation just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

          There are two ways to connect to the login node:

          • Option A: A direct connection to \"login.hpc.ugent.be\". This is the simpler option, the system will decide which login node to use based on a load-balancing algorithm.

          • Option B: You can use the node \"login.hpc.ugent.be\" as SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

          "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

          This is the easier way to set up X2Go: a direct connection to the login node.

          1. Include a session name. This will help you to identify the session if you have more than one, you can choose any name (in our example \"HPC login node\").

          2. Set the login hostname (In our case: \"login.hpc.ugent.be\")

          3. Set the Login name. In the example it is \"vsc40000\", but you must change it to your own VSC account.

          4. Set the SSH port (22 by default).

          5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

            1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

          6. Check \"Try autologin\" option.

          7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

            1. [optional]: Set a single application like Terminal instead of XFCE desktop.

          8. [optional]: Change the session icon.

          9. Click the OK button after these changes.

          "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

          This option is useful if you want to resume a previous session or if you want to set explicitly the login node to use. In this case you should include a few more options. Use the same Option A setup but with these changes:

          1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

          2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"gligar07.gastly.os\")

          3. Set \"Use Proxy server..\" to enable the proxy. Within the \"Proxy section\", also set these options:

            1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

            2. Set Host to \"login.hpc.ugent.be\" within \"Proxy Server\" section as well.

            3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key within \"RSA/DSA key\" field within \"Proxy Server\" as you did for the server configuration (The \"RSA/DSA key\" field must be set in both sections)

            4. Click the OK button after these changes.

          "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

          Just click on any existing session to start or resume it. It will take a few seconds to open the session the first time. You can terminate a session by logging out from the currently open session or by clicking the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click the \"pause\" icon.

          X2Go will keep the session open for you (but only if the login node is not rebooted).

          "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

          If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

          hostname\n

          This will give you the full hostname (like \"gligar07.gastly.os\", but the hostname in your situation may be slightly different). You should set the same name to resume the session the next time. Just add this full hostname to the \"login hostname\" section of your X2Go session (see Option B: use the login node as SSH proxy).

          "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

          If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select that session and terminate it. Then close the session, choose the XFCE session again (or whatever you use), and you should have your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

          "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

          The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

          To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

          Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

          After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

          Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

          "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

          TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

          Loads MNIST datasets and trains a neural network to recognize hand-written digits.

          Runtime: ~1 min. on 8 cores (Intel Skylake)

          See https://www.tensorflow.org/tutorials/quickstart/beginner

          "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

          Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

          These skills are important to the HPC-UGent infrastructure, which operates on Red Hat Enterprise Linux. For more information see introduction to HPC.

          The guide aims to make you familiar with the Linux command line environment quickly.

          The tutorial goes through the following steps:

          1. Getting Started
          2. Navigating
          3. Manipulating files and directories
          4. Uploading files
          5. Beyond the basics

          Do not forget Common pitfalls, as this can save you some troubleshooting.

          "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
          • More on the HPC infrastructure.
          • Cron Scripts: run scripts automatically at fixed times, dates, or intervals.
          "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

          Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

          "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

          To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

          First, it's important to make a distinction between two different output channels:

          1. stdout: standard output channel, for regular output

          2. stderr: standard error channel, for errors and warnings

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

          > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

          $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

          >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

          $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

          < feeds the contents of a file to a command's standard input (as if it had been piped or typed in). So you would use this to simulate typing into a terminal. < somefile.txt is largely equivalent to cat somefile.txt |.

          One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure, you might save a list of all the files you're interested in and then read in that file list when you are done:

          $ find . -name \"*.txt\" > files\n$ xargs grep banana < files\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

          To redirect the stderr output (warnings, messages), you can use 2>, just like >:

          $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

          To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

          $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

          Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

          $ ls | wc -l\n    42\n

          A common pattern is to pipe the output of a command to less so you can examine or search the output:

          $ find . | less\n

          Or to look through your command history:

          $ history | less\n

          You can put multiple pipes in the same line. For example, which cp commands have we run?

          $ history | grep cp | less\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

          The shell will expand certain things, including:

          1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

          2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

          3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

          4. square brackets can be used to list a number of options for a particular character; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So the filename anything.o5 will match, but anything.o52 won't (see the example below).
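
          For instance, suppose a directory contains the (purely illustrative) job output files myjob.o5 and myjob.e5; the pattern from the last example would match both:

          $ ls *.[oe][0-9]\nmyjob.e5  myjob.o5\n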

          "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

          ps lists running processes. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

          $ ps -fu $USER\n

          To see all the processes:

          $ ps -elf\n

          To see all the processes in a forest view, use:

          $ ps auxf\n

          The last two will spit out a lot of data, so get in the habit of piping it to less.

          pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

          pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.
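
          For example, assuming a (hypothetical) process named misbehaving_process is running under your account, pgrep would print its process ID(s):

          $ pgrep misbehaving_process\n12345\n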

          "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

          ps isn't very useful unless you can manipulate the processes. We do this using the kill command. By default, kill sends a signal (SIGTERM) to the process to ask it to stop.

          $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

          Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignores your signal, you can send it a different signal (SIGKILL) which the OS will use to unceremoniously terminate the process:

          $ kill -9 1234\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

          top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

          To see only your processes, type u and your username after starting top (you can also do this with top -u $USER). The default is to sort the display by %CPU. To change the sort order, use < and > to move the sort column left or right.

          There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

          To exit top, use q (for 'quit').

          For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

          "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

          ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

          $ ulimit -a\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

          To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

          $ wc example.txt\n      90     468     3189   example.txt\n

          The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

          To only count the number of lines, use wc -l:

          $ wc -l example.txt\n      90    example.txt\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

          grep is an important command. It was originally an abbreviation for \"globally search for a regular expression and print\", but it has entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

          $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

          grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.
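
          A couple of commonly used options, shown here as a sketch with the same (hypothetical) fruit files: -i makes the search case-insensitive, and -r searches recursively through a directory:

          $ grep -i banana fruit.txt\n$ grep -r banana fruit_bowls/\n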

          "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

          cut is used to pull fields out of files or piped streams. It's a useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from an (unquoted) CSV file (comma-separated values, so -d ',': delimited by ,), you can use the following:

          $ cut -f 1 -d ',' mydata.csv\n
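
          As a sketch of the grep/cut combination mentioned above (mydata.csv is again a hypothetical, unquoted CSV file): first select the matching lines with grep, then pull out the second field with cut:

          $ grep banana mydata.csv | cut -f 2 -d ','\n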

          "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

          sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

          $ sed 's/oldtext/newtext/g' myfile.txt\n

          By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
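
          One cautious approach (a sketch; the .bak suffix is just an example) is to first inspect the output without -i, or to let sed keep a backup copy of the original file by appending a suffix to -i (supported by GNU sed, the default on most Linux systems):

          $ sed 's/oldtext/newtext/g' myfile.txt | less   # inspect the result first\n$ sed -i.bak 's/oldtext/newtext/g' myfile.txt   # edit in place, keeping myfile.txt.bak\n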

          "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

          awk is a small programming language that can do much more advanced stream editing than sed. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

          First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

          $ awk '{print $4}' mydata.dat\n

          You can use -F ':' to change the delimiter (F for field separator).
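
          For example, /etc/passwd is a colon-delimited file, so a minimal sketch that prints only the first field (the username) of every line would be:

          $ awk -F ':' '{print $1}' /etc/passwd\n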

          The next example is used to sum numbers from a field:

          $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

          The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do it for you. A script is nothing special, it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

          However, there are some rules you need to abide by.

          Here is a very detailed guide should you need more information.

          "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

          The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can simply copy-paste it; you need not worry about it further. It is, however, very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

          #!/bin/sh\n
          #!/bin/bash\n
          #!/usr/bin/env bash\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

          Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

          if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n

          Or you only want to do something if a file exists:

          if [ -f filename ]\nthen\necho \"it exists\"\nfi\n

          Or only if a certain variable is bigger than one:

          if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
          Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and then must be at the beginning of a line (or preceded by a semicolon). It is best to just copy this example and modify it.

          In the initial example, we used -d to test if a directory existed. There are several more checks.

          Another useful example, is to test if a variable contains a value (so it's not empty):

          if [ -z \"$PBS_ARRAYID\" ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

          The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

          "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

          Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

          Let's look at a simple example:

          for i in 1 2 3\ndo\necho $i\ndone\n
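
          Loops are often combined with shell expansion. As a sketch (assuming there are some .txt files in the current directory), the following loop compresses each of them in turn:

          for file in *.txt\ndo\necho \"compressing $file\"\ngzip \"$file\"\ndone\n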

          "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

          Subcommands are used all the time in shell scripts. What they do is store the output of a command in a variable, so it can later be used in a conditional or a loop, for example.

          CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

          In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
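
          As a small sketch of using a subcommand in a conditional (the threshold of 100 files is just an illustration):

          NUMFILES=$(ls | wc -l)\nif [ $NUMFILES -gt 100 ]\nthen\necho \"This directory contains $NUMFILES files, that's a lot!\"\nfi\n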

          "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

          Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

          Firstly, a useful thing to know for debugging and testing is that you can run any command like this:

          command > output.log 2>&1   # one single output file, both output and errors\n

          If you add > output.log 2>&1 at the end of any command, it will write the regular output to output.log and redirect the error output to the same place, so you end up with a single file named output.log containing both.

          If you want regular and error output separated you can use:

          command > output.log 2> output.err  # errors in a separate file\n

          This will write regular output to output.log and error output to output.err.

          You can then look for the errors with less or search for specific text with grep.

          In scripts, you can use:

          set -e\n

          This will tell the shell to stop executing any subsequent commands when a single command in the script fails. This is usually what you want, since a failed command most likely causes the rest of the script to fail as well.
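
          A minimal sketch (the file name input.dat is hypothetical):

          #!/bin/bash\nset -e\ncp input.dat $VSC_SCRATCH/   # if this copy fails, the script stops here\ncd $VSC_SCRATCH\necho \"copy succeeded, continuing...\"\n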

          "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

          Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds the exit code of that command. A value other than zero signifies that something went wrong. An example use case:

          command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

          "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

          If you have certain commands executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

          Examples include:

          • modifying your $PS1 (to tweak your shell prompt)

          • printing information about the current environment or job (echoing environment variables, etc.)

          • selecting a specific cluster to run on with module swap cluster/...

          Some recommendations:

          • Avoid using module load statements in your $HOME/.bashrc file

          • Don't directly edit your .bashrc file: if there's an error in your .bashrc file, you might not be able to log in again. To prevent that, put your changes in a separate file first, test them, and only copy them over once everything works (see the sketch below).
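
          A possible workflow for this (the file name bashrc_test is just an example) could look like:

          $ nano bashrc_test              # put your new commands in a separate file\n$ source bashrc_test            # test them in your current session\n$ cat bashrc_test >> ~/.bashrc  # only append them once everything works\n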

          "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

          When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

          "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
          "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

          The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

          This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

          #PBS -l nodes=1:ppn=1 # single-core\n

          For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

          #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

          We intend to submit it on the long queue:

          #PBS -q long\n

          We request a total running time of 48 hours (2 days).

          #PBS -l walltime=48:00:00\n

          We specify a desired name of our job:

          #PBS -N FreeSurfer_per_subject-time-longitudinal\n
          This specifies mail options:
          #PBS -m abe\n

          1. a means mail is sent when the job is aborted.

          2. b means mail is sent when the job begins.

          3. e means mail is sent when the job ends.

          Joins error output with regular output:

          #PBS -j oe\n

          All of these options can also be specified on the command line and will override any pragmas present in the script, for example as shown below.
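
          For example, assuming the job script above is saved as jobscript.sh and submitted with qsub, you could request a longer walltime and a different job name without editing the script:

          $ qsub -N my_other_name -l walltime=72:00:00 jobscript.sh\n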

          "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
          1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

          2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

          3. How many files and directories are in /tmp?

          4. What's the name of the 5th file/directory in alphabetical order in /tmp?

          5. List all files that start with t in /tmp.

          6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

          7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

          "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

          This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

          "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

          If you receive an error message which contains something like the following:

          No such file or directory\n

          It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

          Try to figure out the correct location using ls, cd and the different $VSC_* variables.

          "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

          Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

          $ cat some file\nNo such file or directory 'some'\n

          Spaces are technically permitted, but they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

          $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

          This is especially error-prone if you are piping results of find:

          $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

          This can be worked around using the -print0 flag:

          $ find . -type f -print0 | xargs -0 cat\n...\n

          But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

          "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

          If you use a command like rm -r with environment variables, you need to be careful to make sure that the environment variable exists. If you mistype the name of an environment variable, it will resolve to an empty string. This means that the following resolves to rm -r ~/*, which will remove every file in your home directory!

          $ rm -r ~/$PROJETC/*\n
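
          One way to protect yourself is bash's ${VAR:?message} expansion, which makes the shell abort with an error instead of silently expanding an unset or empty variable to nothing (a sketch; PROJECT is a hypothetical, correctly spelled variable):

          $ rm -r ~/\"${PROJECT:?is not set}\"/*   # aborts with an error if PROJECT is unset or empty\n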

          "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

          A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

          $ #rm -r ~/$POROJETC/*\n
          Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

          "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
          $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

          Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

          $ chmod +x script_name.sh\n

          "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

          If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

          If you need help about a certain command, you should consult its so-called \"man page\":

          $ man command\n

          This will open the manual of this command. This manual contains a detailed explanation of all the options the command has. Exit the manual by pressing 'q'.

          Don't be afraid to contact hpc@ugent.be. They are here to help and will do so for even the smallest of problems!

          "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
          1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

          2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

          3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

          4. basic shell usage

          5. Bash for beginners

          6. MOOC

          Please don't hesitate to contact hpc@ugent.be in case of questions or problems.

          "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

          To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

          You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your private key can use your VSC account!

          Details on connecting to the HPC infrastructure are available in the connecting section of the HPC manual.

          "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

          To get help:

          1. use the documentation available on the system, through the help, info and man commands (use q to exit).
            help cd \ninfo ls \nman cp \n
          2. use Google

          3. contact hpc@ugent.be in case of problems or questions (even for basic things!)

          "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

          Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@ugent.be.

          "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

          The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

          You use the shell by executing commands, and hitting <enter>. For example:

          $ echo hello \nhello \n

          You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

          To go through previous commands, use <up> and <down>, rather than retyping them.

          "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

          A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

          $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

          "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

          If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

          "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

          At the prompt we also have access to shell variables, which have both a name and a value.

          They can be thought of as placeholders for things we need to remember.

          For example, to print the path to your home directory, we can use the shell variable named HOME:

          $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

          This prints the value of this variable.

          "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

          There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

          For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

          $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

          You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

          $ env | sort | grep VSC\n

          But we can also define our own. This is done with the export command (note: by convention, variable names are usually written in all-caps):

          $ export MYVARIABLE=\"value\"\n

          It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

          If we then do

          $ echo $MYVARIABLE\n

          this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

          "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

          You can change what your prompt looks like by redefining the special-purpose variable $PS1.

          For example: to include the current location in your prompt:

          $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

          Note that ~ is a short representation of your home directory.

          To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

          $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

          "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

          One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

          This may lead to surprising results, for example:

          $ export WORKDIR=/tmp/test \n$ cd $WROKDIR\n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

          To understand what's going on here, see the section on cd below.

          The moral here is: be very careful to not use empty variables unintentionally.

          Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

          The -e option will result in the script getting stopped if any command fails.

          The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)
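
          As a small illustration of the -u option ($MYTYPO stands in for a variable name you mistyped and never defined):

          #!/bin/bash\nset -e -u\necho \"this line runs fine\"\necho \"value: $MYTYPO\"   # the script stops here with an 'unbound variable' error\n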

          More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

          "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

          If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

          "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

          Basic information about the system you are logged into can be obtained in a variety of ways.

          We limit ourselves to determining the hostname:

          $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

          And querying some basic information about the Linux kernel:

          $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

          "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
          • Print the full path to your home directory
          • Determine the name of the environment variable to your personal scratch directory
          • What's the name of the system you're logged into? Is it the same for everyone?
          • Figure out how to print the value of a variable without including a newline
          • How do you get help on using the man command?

          The next chapter teaches you how to navigate.

          "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

          Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the HPC for a list of available locations.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#vo-storage", "title": "VO storage", "text": "

          If you are a member of a (non-default) virtual organisation (VO), see section Virtual Organisations, you have access to additional directories (with more quota) on the data and scratch filesystems, which you can share with other members in the VO.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

          Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

          To figure out where your quota is being spent, the du (disk usage) command can come in useful:

          $ du -sh test\n59M test\n

          Do not (frequently) run du on directories where large amounts of data are stored, since that will:

          1. take a long time

          2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

          Software is provided through so-called environment modules.

          The most commonly used commands are:

          1. module avail: show all available modules

          2. module avail <software name>: show available modules for a specific software name

          3. module list: show list of loaded modules

          4. module load <module name>: load a particular module

          More information is available in section Modules.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

          To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

          Detailed information is available in section submitting your job.

          "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

          Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

          Hint: python -c \"print(sum(range(1, 101)))\"

          • How many modules are available for Python version 3.6.4?
          • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
          • Which cluster modules are available?

          • What's the full path to your personal home/data/scratch directories?

          • Determine how large your personal directories are.
          • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

          Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands short to type.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

          To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

          $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

          To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
          $ cp source target\n

          This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

          $ cp -r sourceDirectory target\n

          A last more complicated example:

          $ cp -a sourceDirectory target\n

          Here we used the same cp command, but instead we gave it the -a option, which tells cp to copy the files in archive mode: recursively, while keeping timestamps and permissions.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
          $ mkdir directory\n

          which will create a directory with the given name inside the current directory.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
          $ mv source target\n

          mv will move the source path to the destination path. Works for both directories and files.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

          Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

          $ rm filename\n
          rm will remove a file; with the -r option it also removes directories (rm -rf directory will remove a directory and every file inside it). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

          You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

          $ rmdir directory\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

          Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

          1. User - a particular user (account)

          2. Group - a particular group of users (may be user-specific group with only one member)

          3. Other - other users in the system

          The permission types are:

          1. Read - For files, this gives permission to read the contents of a file

          2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

          3. Execute - For files, this gives permission to execute a file as though it were a script. For directories, it allows users to open the directory and look at the contents.

          Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

          $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          Here, we see that articleTable.csv is a file (the line begins with -) which has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

          The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx). So that user can look into the directory and add or remove files. Users in mygroup can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users have no permissions to look in the directory at all (---).

          Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

          $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

          You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

          You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
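
          A sketch of that approach, giving group write permission only to the CSV files inside the project directory (the file pattern is just an example):

          $ find Project_GoldenDragon -type f -name \"*.csv\" -exec chmod g+w {} \\;\n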

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

          However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

          $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

          This will give the user otheruser permission to write to Project_GoldenDragon.

          Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

          Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

          See https://linux.die.net/man/1/setfacl for more information.

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

          Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

          $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

          Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

          $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

          Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

          $ unzip myfile.zip\n

          If we would like to make our own zip archive, we use zip:

          $ zip myfiles.zip myfile1 myfile2 myfile3\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

          Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

          You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

          $ tar -xf tarfile.tar\n

          Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

          $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

          Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

          # cp, ln: <source(s)> <target>\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: <target> <source(s)>\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

          If you use tar with the source files first, the first source file will be interpreted as the archive name and overwritten. You can control the order of arguments of tar if it helps you remember:

          $ tar -c source1 source2 source3 -f tarfile.tar\n
          "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
          1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

          2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

          3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

          4. Remove the another/test directory with a single command.

          5. Rename test to test2. Move test2/hostname.txt to your home directory.

          6. Change the permission of test2 so only you can access it.

          7. Create an empty job script named job.sh, and make it executable.

          8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

          The next chapter is on uploading files, especially important when using HPC-infrastructure.

          "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

          This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories, which is a very important skill.

          "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

          To print the current directory, use pwd or $PWD:

          $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

          "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

          A very basic and commonly used command is ls, which can be used to list files and directories.

          In its basic usage, it just prints the names of files and directories in the current directory. For example:

          $ ls\nafile.txt some_directory \n

          When provided an argument, it can be used to list the contents of a directory:

          $ ls some_directory \none.txt two.txt\n

          A couple of commonly used options include:

          • detailed listing using ls -l:

            $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • To print the size information in human-readable form, use the -h flag:

            $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • also listing hidden files using the -a flag:

            $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
          • ordering files by the most recent change using -rt:

            $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

          If you try to use ls on a file that doesn't exist, you will get a clear error message:

          $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
          "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

          To change to a different directory, you can use the cd command:

          $ cd some_directory\n

          To change back to the previous directory you were in, there's a shortcut: cd -

          Using cd without an argument results in returning back to your home directory:

          $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

          "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

          The file command can be used to inspect what type of file you're dealing with:

          $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
          "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

          An absolute filepath starts with / (or a variable whose value starts with /), which is also called the root of the filesystem.

          Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

          A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

          Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

          There are two special relative paths worth mentioning:

          • . is a shorthand for the current directory
          • .. is a shorthand for the parent of the current directory

          You can also use .. when constructing relative paths, for example:

          $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
          "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

          Each file and directory has particular permissions set on it, which can be queried using ls -l.

          For example:

          $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

          The -rw-rw-r-- specifies both the type of file (- for files, d for directories (see first character)), and the permissions for user/group/others:

          1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
          2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read/write permissions (not execute)
          3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
          4. the 3rd part r-- indicates that other users only have read permissions

          The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

          1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
          2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

          See also the chmod command later in this manual.

          "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

          find will crawl a series of directories and list files matching given criteria.

          For example, to look for the file named one.txt:

          $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

          To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * by adding double quotes, to avoid Bash expanding it (for example into afile.txt):

          $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

          A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
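
          For example, a sketch that counts the lines in every .txt file that find locates:

          $ find . -name \"*.txt\" -exec wc -l {} \\;\n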

          "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
          • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
          • When was your home directory created or last changed?
          • Determine the name of the last changed file in /tmp.
          • See how home directories are organised. Can you access the home directory of other users?

          The next chapter will teach you how to interact with files and directories.

          "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

          To transfer files from and to the HPC, see the section about transferring files in the HPC manual.

          "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

          After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

          For example, you may see an error when submitting a job script that was edited on Windows:

          sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

          To fix this problem, you should run the dos2unix command on the file:

          $ dos2unix filename\n
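If you are unsure whether a file still has Windows line endings, the file command will tell you. This is a minimal illustration: jobscript.sh is just an example filename, and the exact wording of the output may vary between versions of the file utility.

$ file jobscript.sh \njobscript.sh: ASCII text, with CRLF line terminators \n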
          "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

As we end up in the home directory when connecting, it would be convenient if we could easily access our data and scratch storage from there. To facilitate this, we will create symbolic links (they're like \"shortcuts\" on your desktop) to them in our home directory, pointing to the respective storage locations:

          $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
          "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ stands for the Control key, so ^O means Ctrl-O. The main commands are:

          1. Open (\"Read\"): ^R

          2. Save (\"Write Out\"): ^O

          3. Exit: ^X

          More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

          "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

          rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

You will need to run rsync from a computer where it is installed. Installing rsync is easiest on Linux: it comes pre-installed with most distributions.

          For example, to copy a folder with lots of CSV files:

          $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section above).

          The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

To copy large files using rsync, you can use the -P flag: it enables both showing progress and resuming partially transferred files.
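For example, a transfer of a single large file with progress reporting and resume support could look like this. This is a minimal sketch: bigfile.tar.gz is just an example filename, and the data symlink is assumed to exist as described in the symlinks section.

$ rsync -zvP bigfile.tar.gz vsc40000@login.hpc.ugent.be:data/\n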

          To copy files to your local computer, you can also use rsync:

          $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
This will copy the folder bioset and its contents from $VSC_DATA to a local folder named local_folder.

          See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

          "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
          1. Download the file /etc/hostname to your local computer.

          2. Upload a file to a subdirectory of your personal $VSC_DATA space.

          3. Create a file named hello.txt and edit it using nano.

Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

          "}, {"location": "2023/donphan-gallade/", "title": "New Tier-2 clusters: donphan and gallade", "text": "

          In April 2023, two new clusters were added to the HPC-UGent Tier-2 infrastructure: donphan and gallade.

          This page provides some important information regarding these clusters, and how they differ from the clusters they are replacing (slaking and kirlia, respectively).

          If you have any questions on using donphan or gallade, you can contact the HPC-UGent team.

          For software installation requests, please use the request form.

          "}, {"location": "2023/donphan-gallade/#donphan-debuginteractive-cluster", "title": "donphan: debug/interactive cluster", "text": "

          donphan is the new debug/interactive cluster.

          It replaces slaking, which will be retired on Monday 22 May 2023.

          It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the HPC-UGent web portal, etc.

          This cluster consists of 12 workernodes, each with:

          • 2x 18-core Intel Xeon Gold 6240 (Cascade Lake @ 2.6 GHz) processor;
          • one shared NVIDIA Ampere A2 GPU (16GB GPU memory)
          • ~738 GiB of RAM memory;
• 1.6 TB NVMe local disk;
          • HDR-100 InfiniBand interconnect;
          • RHEL8 as operating system;

          To start using this cluster from a terminal session, first run:

          module swap cluster/donphan\n

          You can also start (interactive) sessions on donphan using the HPC-UGent web portal.

          "}, {"location": "2023/donphan-gallade/#differences-compared-to-slaking", "title": "Differences compared to slaking", "text": ""}, {"location": "2023/donphan-gallade/#cpus", "title": "CPUs", "text": "

          The most important difference between donphan and slaking workernodes is in the CPUs: while slaking workernodes featured Intel Haswell CPUs, which support SSE*, AVX, and AVX2 vector instructions, donphan features Intel Cascade Lake CPUs, which also support AVX-512 instructions, on top of SSE*, AVX, and AVX2.

          Although software that was built on a slaking workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) should still run on a donphan workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions.
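As an illustration, such architecture-specific optimizations are typically enabled at compile time like this. This is a minimal sketch: myprog.c and the -O2 optimization level are just examples.

$ gcc -O2 -march=native -o myprog myprog.c\n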

          "}, {"location": "2023/donphan-gallade/#cluster-size", "title": "Cluster size", "text": "

          The donphan cluster is significantly bigger than slaking, both in terms of number of workernodes and number of cores per workernode, and hence the potential performance impact of oversubscribed cores (see below) is less likely to occur in practice.

          "}, {"location": "2023/donphan-gallade/#user-limits-and-oversubscription-on-donphan", "title": "User limits and oversubscription on donphan", "text": "

          By imposing strict user limits and using oversubscription on this cluster, we ensure that anyone can get a job running without having to wait in the queue, albeit with limited resources.

The user limits for donphan include:

• max. 5 jobs in queue;
• max. 3 jobs running;
• max. 8 cores in total for running jobs;
• max. 27GB of memory in total for running jobs.

The job scheduler is configured to allow oversubscription of the available cores, which means that jobs will continue to start even if all cores are already occupied by running jobs. While this prevents waiting time in the queue, it does imply that performance will degrade when all cores are occupied and additional jobs start running.
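As an illustration, a minimal job script sketch that stays within these limits could look as follows, assuming Slurm-style #SBATCH directives as used with sbatch; my_program is a placeholder for your own software, and the requested resources are just examples.

#!/bin/bash\n#SBATCH --cpus-per-task=4   # well within the limit of 8 cores in total\n#SBATCH --mem=16G           # well within the limit of 27GB in total\n./my_program                # my_program is a placeholder for your own software\n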

          "}, {"location": "2023/donphan-gallade/#shared-gpu-on-donphan-workernodes", "title": "Shared GPU on donphan workernodes", "text": "

          Each donphan workernode includes a single NVIDIA A2 GPU that can be used for light compute workloads, and to accelerate certain graphical tasks.

          This GPU is shared across all jobs running on the workernode, and does not need to be requested explicitly (it is always available, similar to the local disk of the workernode).

          Warning

          Due to the shared nature of this GPU, you should assume that any data that is loaded in the GPU memory could potentially be accessed by other users, even after your processes have completed.

          There are no strong security guarantees regarding data protection when using this shared GPU!

          "}, {"location": "2023/donphan-gallade/#gallade-large-memory-cluster", "title": "gallade: large-memory cluster", "text": "

          gallade is the new large-memory cluster.

          It replaces kirlia, which will be retired on Monday 22 May 2023.

          This cluster consists of 12 workernodes, each with:

          • 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) processor;
          • ~940 GiB of RAM memory;
• 1.5 TB NVMe local disk;
          • HDR-100 InfiniBand interconnect;
          • RHEL8 as operating system;

          To start using this cluster from a terminal session, first run:

          module swap cluster/gallade\n

          You can also start (interactive) sessions on gallade using the HPC-UGent web portal.

          "}, {"location": "2023/donphan-gallade/#differences-compared-to-kirlia", "title": "Differences compared to kirlia", "text": ""}, {"location": "2023/donphan-gallade/#cpus_1", "title": "CPUs", "text": "

          The most important difference between gallade and kirlia workernodes is in the CPUs: while kirlia workernodes featured Intel Cascade Lake CPUs, which support vector AVX-512 instructions (next to SSE*, AVX, and AVX2), gallade features AMD Milan-X CPUs, which implement the Zen3 microarchitecture and hence do not support AVX-512 instructions (but do support SSE*, AVX, and AVX2).

          As a result, software that was built on a kirlia workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) may not work anymore on a gallade workernode, and will produce Illegal instruction errors.

Therefore, you may need to recompile software in order to use it on gallade. Even if software built on kirlia does still run on gallade, it is strongly recommended to recompile it anyway, since there may be significant performance benefits.

          "}, {"location": "2023/donphan-gallade/#memory-per-core", "title": "Memory per core", "text": "

Although gallade workernodes have significantly more RAM memory (~940 GiB) than kirlia workernodes had (~738 GiB), the average amount of memory per core is significantly lower on gallade than it was on kirlia, because a gallade workernode has 128 cores (so ~7.3 GiB per core on average), while a kirlia workernode had only 36 cores (so ~20.5 GiB per core on average).

It is important to take this aspect into account when submitting jobs to gallade, especially when requesting all cores via ppn=all. You may need to explicitly request more memory (see also here).
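As a sketch of what this could look like, using the PBS-style ppn=all notation already mentioned above; treat the memory option name and value as assumptions and consult the HPC manual on requesting memory for the exact options supported.

#!/bin/bash\n#PBS -l nodes=1:ppn=all   # request all 128 cores of a gallade workernode\n#PBS -l mem=900gb         # assumption: explicitly request (most of) the node's memory, adjust to your needs\n./my_program              # my_program is a placeholder for your own software\n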

          "}, {"location": "2023/shinx/", "title": "New Tier-2 cluster: shinx", "text": "

          In October 2023, a new pilot cluster was added to the HPC-UGent Tier-2 infrastructure: shinx.

          This page provides some important information regarding this cluster, and how it differs from the clusters it is replacing (swalot and victini).

          If you have any questions on using shinx, you can contact the HPC-UGent team.

          For software installation requests, please use the request form.

          "}, {"location": "2023/shinx/#shinx-generic-cpu-cluster", "title": "shinx: generic CPU cluster", "text": "

          shinx is a new CPU-only cluster.

It replaces swalot, which was retired on Wednesday 01 November 2023, and victini, which was retired on Monday 05 February 2024.

          It is primarily for regular CPU compute use.

          This cluster consists of 48 workernodes, each with:

          • 2x 96-core AMD EPYC 9654 (Genoa @ 2.4 GHz) processor;
          • ~360 GiB of RAM memory;
• 400 GB local disk;
          • NDR-200 InfiniBand interconnect;
          • RHEL9 as operating system;

          To start using this cluster from a terminal session, first run:

          module swap cluster/shinx\n

          You can also start (interactive) sessions on shinx using the HPC-UGent web portal.

          "}, {"location": "2023/shinx/#differences-compared-to-swalot-and-victini", "title": "Differences compared to swalot and victini.", "text": ""}, {"location": "2023/shinx/#cpus", "title": "CPUs", "text": "

          The most important difference between shinx and swalot/victini workernodes is in the CPUs: while swalot and victini workernodes featured Intel CPUs, shinx workernodes have AMD Genoa CPUs.

          Although software that was built on a swalot or victini workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing on swalot).

          "}, {"location": "2023/shinx/#cluster-size", "title": "Cluster size", "text": "

The shinx cluster is significantly bigger than swalot and victini in the total number of cores and in the number of cores per workernode, but not in the number of workernodes. In particular, requesting all cores via ppn=all might be something to reconsider.

The amount of available memory per core is 1.9 GiB, which is lower than on the swalot nodes (6.2 GiB per core) and the victini nodes (2.5 GiB per core).

          "}, {"location": "2023/shinx/#comparison-with-doduo", "title": "Comparison with doduo", "text": "

As doduo is currently the largest CPU cluster of the UGent Tier-2 infrastructure, and it is also based on AMD EPYC CPUs, we would like to point out that, roughly speaking, one shinx node is equivalent to two doduo nodes.

          Although software that was built on a doduo workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing from doduo).

          "}, {"location": "2023/shinx/#other-remarks", "title": "Other remarks", "text": "
• Possible issues with thread pinning: we have seen, especially on the Tier-1 dodrio cluster, that in certain cases thread pinning is applied where it is not expected. A typical symptom is that all started processes are pinned to a single core. Always report this issue when it occurs. You can try to mitigate it yourself by setting export OMP_PROC_BIND=false, but please report it anyway so we can keep track of the problem. It is not recommended to set this workaround unconditionally; only use it for the specific tools that are affected, as illustrated below.
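For example, in a job script you could apply the workaround only for the affected tool. This is a minimal sketch; affected_tool is just a placeholder name.

export OMP_PROC_BIND=false   # only set this for tools that show the pinning issue\n./affected_tool              # affected_tool is a placeholder for the tool in question\n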
          "}, {"location": "2023/shinx/#shinx-pilot-phase-23102023-15072024", "title": "Shinx pilot phase (23/10/2023-15/07/2024)", "text": "

As usual with any pilot phase, you need to be a member of the gpilot group, and to start using this cluster run:

          module swap cluster/.shinx\n

Because the delivery time of the InfiniBand network is very long, we only expect to have all material by the end of February 2024. However, all workernodes will already be delivered in the week of 20 October 2023.

          As such, we will have an extended pilot phase in 3 stages:

          "}, {"location": "2023/shinx/#stage-0-23102023-17112023", "title": "Stage 0: 23/10/2023-17/11/2023", "text": "
          • Minimal cluster to test software and nodes

            • Only 2 or 3 nodes available
            • FDR or EDR infiniband network
            • EL8 OS
          • Retirement of swalot cluster (as of 01 November 2023)

          • Racking of stage 1 nodes
          "}, {"location": "2023/shinx/#stage-1-01122023-01032024", "title": "Stage 1: 01/12/2023-01/03/2024", "text": "
          • 2/3 cluster size

            • 32 nodes (with max job size of 16 nodes)
            • EDR Infiniband
            • EL8 OS
• Retirement of victini (as of 05 February 2024)

          • Racking of last 16 nodes
          • Installation of NDR/NDR-200 infiniband network
          "}, {"location": "2023/shinx/#stage-2-19042024-15072024", "title": "Stage 2 (19/04/2024-15/07/2024)", "text": "
          • Full size cluster

            • 48 nodes (no job size limit)
            • NDR-200 Infiniband (single switch Infiniband topology)
            • EL9 OS
• We expect to plan a full Tier-2 downtime in May 2024 to clean up, refactor and renew the core networks (Ethernet and InfiniBand) and some core services. It makes no sense to put shinx in production before that period, and the testing of the EL9 operating system will also take some time.

          "}, {"location": "2023/shinx/#stage-3-15072024-", "title": "Stage 3 (15/07/2024 - )", "text": "
          • Cluster in production using EL9 (starting with 9.4). Any user can now submit jobs.
          "}, {"location": "2023/shinx/#using-doduo-software", "title": "Using doduo software", "text": "

For benchmarking and/or compatibility testing, you can try to use the doduo software stack by adding the following line to the job script before the actual software is loaded:

          module swap env/software/doduo\n

We mainly expect problems with this in stage 2 of the pilot phase (and in the later production phase), due to the change in OS.
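For example, the relevant part of a job script could look like this. This is a minimal sketch: MyTool and my_program are placeholders for the modules and software you actually use.

#!/bin/bash\nmodule swap env/software/doduo   # switch to the doduo software stack first\nmodule load MyTool               # MyTool is a placeholder; load the modules your job needs\n./my_program                     # my_program is a placeholder for the actual work\n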

          "}, {"location": "available_software/", "title": "Available software (via modules)", "text": "

          This table gives an overview of all the available software on the different clusters.

          "}, {"location": "available_software/detail/ABAQUS/", "title": "ABAQUS", "text": ""}, {"location": "available_software/detail/ABAQUS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABAQUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABAQUS, load one of these modules using a module load command like:

          module load ABAQUS/2023\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABAQUS/2023 x x x x x x ABAQUS/2022-hotfix-2214 - x x - x x ABAQUS/2022 - x x - x x ABAQUS/2021-hotfix-2132 - x x - x x"}, {"location": "available_software/detail/ABINIT/", "title": "ABINIT", "text": ""}, {"location": "available_software/detail/ABINIT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABINIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABINIT, load one of these modules using a module load command like:

          module load ABINIT/9.10.3-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABINIT/9.10.3-intel-2022a - - x - x x ABINIT/9.4.1-intel-2020b - x x x x x ABINIT/9.2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/ABRA2/", "title": "ABRA2", "text": ""}, {"location": "available_software/detail/ABRA2/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABRA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABRA2, load one of these modules using a module load command like:

          module load ABRA2/2.23-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABRA2/2.23-GCC-10.2.0 - x x x x x ABRA2/2.23-GCC-9.3.0 - x x - x x ABRA2/2.22-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/ABRicate/", "title": "ABRicate", "text": ""}, {"location": "available_software/detail/ABRicate/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABRicate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABRicate, load one of these modules using a module load command like:

          module load ABRicate/0.9.9-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABRicate/0.9.9-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ABySS/", "title": "ABySS", "text": ""}, {"location": "available_software/detail/ABySS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABySS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ABySS, load one of these modules using a module load command like:

          module load ABySS/2.3.7-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ABySS/2.3.7-foss-2023a x x x x x x ABySS/2.1.5-foss-2019b - x x - x x"}, {"location": "available_software/detail/ACTC/", "title": "ACTC", "text": ""}, {"location": "available_software/detail/ACTC/#available-modules", "title": "Available modules", "text": "

The overview below shows which ACTC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ACTC, load one of these modules using a module load command like:

          module load ACTC/1.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ACTC/1.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ADMIXTURE/", "title": "ADMIXTURE", "text": ""}, {"location": "available_software/detail/ADMIXTURE/#available-modules", "title": "Available modules", "text": "

The overview below shows which ADMIXTURE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ADMIXTURE, load one of these modules using a module load command like:

          module load ADMIXTURE/1.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ADMIXTURE/1.3.0 - x x - x x"}, {"location": "available_software/detail/AICSImageIO/", "title": "AICSImageIO", "text": ""}, {"location": "available_software/detail/AICSImageIO/#available-modules", "title": "Available modules", "text": "

The overview below shows which AICSImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AICSImageIO, load one of these modules using a module load command like:

          module load AICSImageIO/4.14.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AICSImageIO/4.14.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/AMAPVox/", "title": "AMAPVox", "text": ""}, {"location": "available_software/detail/AMAPVox/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMAPVox installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMAPVox, load one of these modules using a module load command like:

          module load AMAPVox/1.9.4-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMAPVox/1.9.4-Java-11 x x x - x x"}, {"location": "available_software/detail/AMICA/", "title": "AMICA", "text": ""}, {"location": "available_software/detail/AMICA/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMICA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMICA, load one of these modules using a module load command like:

          module load AMICA/2024.1.19-intel-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMICA/2024.1.19-intel-2023a x x x x x x"}, {"location": "available_software/detail/AMOS/", "title": "AMOS", "text": ""}, {"location": "available_software/detail/AMOS/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMOS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMOS, load one of these modules using a module load command like:

          module load AMOS/3.1.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMOS/3.1.0-foss-2023a x x x x x x AMOS/3.1.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/AMPtk/", "title": "AMPtk", "text": ""}, {"location": "available_software/detail/AMPtk/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMPtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AMPtk, load one of these modules using a module load command like:

          module load AMPtk/1.5.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AMPtk/1.5.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/ANTLR/", "title": "ANTLR", "text": ""}, {"location": "available_software/detail/ANTLR/#available-modules", "title": "Available modules", "text": "

The overview below shows which ANTLR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ANTLR, load one of these modules using a module load command like:

          module load ANTLR/2.7.7-GCCcore-10.3.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ANTLR/2.7.7-GCCcore-10.3.0-Java-11 - x x - x x ANTLR/2.7.7-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ANTs/", "title": "ANTs", "text": ""}, {"location": "available_software/detail/ANTs/#available-modules", "title": "Available modules", "text": "

The overview below shows which ANTs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ANTs, load one of these modules using a module load command like:

          module load ANTs/2.3.2-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ANTs/2.3.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/APR-util/", "title": "APR-util", "text": ""}, {"location": "available_software/detail/APR-util/#available-modules", "title": "Available modules", "text": "

The overview below shows which APR-util installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using APR-util, load one of these modules using a module load command like:

          module load APR-util/1.6.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty APR-util/1.6.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/APR/", "title": "APR", "text": ""}, {"location": "available_software/detail/APR/#available-modules", "title": "Available modules", "text": "

The overview below shows which APR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using APR, load one of these modules using a module load command like:

          module load APR/1.7.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty APR/1.7.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ARAGORN/", "title": "ARAGORN", "text": ""}, {"location": "available_software/detail/ARAGORN/#available-modules", "title": "Available modules", "text": "

The overview below shows which ARAGORN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ARAGORN, load one of these modules using a module load command like:

          module load ARAGORN/1.2.41-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ARAGORN/1.2.41-foss-2021b x x x - x x ARAGORN/1.2.38-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/ASCAT/", "title": "ASCAT", "text": ""}, {"location": "available_software/detail/ASCAT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ASCAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ASCAT, load one of these modules using a module load command like:

          module load ASCAT/3.1.2-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ASCAT/3.1.2-foss-2022b-R-4.2.2 x x x x x x ASCAT/3.1.2-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ASE/", "title": "ASE", "text": ""}, {"location": "available_software/detail/ASE/#available-modules", "title": "Available modules", "text": "

The overview below shows which ASE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ASE, load one of these modules using a module load command like:

          module load ASE/3.22.1-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ASE/3.22.1-intel-2022a x x x x x x ASE/3.22.1-intel-2021b x x x - x x ASE/3.22.1-gomkl-2021a x x x x x x ASE/3.22.1-foss-2022a x x x x x x ASE/3.22.1-foss-2021b x x x - x x ASE/3.21.1-fosscuda-2020b - - - - x - ASE/3.21.1-foss-2020b - - x x x - ASE/3.20.1-intel-2020a-Python-3.8.2 x x x x x x ASE/3.20.1-fosscuda-2020b - - - - x - ASE/3.20.1-foss-2020b - x x x x x ASE/3.19.0-intel-2019b-Python-3.7.4 - x x - x x ASE/3.19.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ATK/", "title": "ATK", "text": ""}, {"location": "available_software/detail/ATK/#available-modules", "title": "Available modules", "text": "

The overview below shows which ATK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ATK, load one of these modules using a module load command like:

          module load ATK/2.38.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ATK/2.38.0-GCCcore-12.3.0 x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x ATK/2.38.0-GCCcore-11.3.0 x x x x x x ATK/2.36.0-GCCcore-11.2.0 x x x x x x ATK/2.36.0-GCCcore-10.3.0 x x x - x x ATK/2.36.0-GCCcore-10.2.0 x x x x x x ATK/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/AUGUSTUS/", "title": "AUGUSTUS", "text": ""}, {"location": "available_software/detail/AUGUSTUS/#available-modules", "title": "Available modules", "text": "

The overview below shows which AUGUSTUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AUGUSTUS, load one of these modules using a module load command like:

          module load AUGUSTUS/3.4.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AUGUSTUS/3.4.0-foss-2021b x x x x x x AUGUSTUS/3.4.0-foss-2020b x x x x x x AUGUSTUS/3.3.3-intel-2019b - x x - x x AUGUSTUS/3.3.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/Abseil/", "title": "Abseil", "text": ""}, {"location": "available_software/detail/Abseil/#available-modules", "title": "Available modules", "text": "

The overview below shows which Abseil installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Abseil, load one of these modules using a module load command like:

          module load Abseil/20230125.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Abseil/20230125.3-GCCcore-12.3.0 x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/AdapterRemoval/", "title": "AdapterRemoval", "text": ""}, {"location": "available_software/detail/AdapterRemoval/#available-modules", "title": "Available modules", "text": "

The overview below shows which AdapterRemoval installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AdapterRemoval, load one of these modules using a module load command like:

          module load AdapterRemoval/2.3.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AdapterRemoval/2.3.3-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/Albumentations/", "title": "Albumentations", "text": ""}, {"location": "available_software/detail/Albumentations/#available-modules", "title": "Available modules", "text": "

The overview below shows which Albumentations installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Albumentations, load one of these modules using a module load command like:

          module load Albumentations/1.1.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Albumentations/1.1.0-foss-2021b x x x - x x Albumentations/1.1.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/AlphaFold/", "title": "AlphaFold", "text": ""}, {"location": "available_software/detail/AlphaFold/#available-modules", "title": "Available modules", "text": "

The overview below shows which AlphaFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AlphaFold, load one of these modules using a module load command like:

          module load AlphaFold/2.3.4-foss-2022a-ColabFold\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AlphaFold/2.3.4-foss-2022a-ColabFold - - x - x - AlphaFold/2.3.4-foss-2022a-CUDA-11.7.0-ColabFold x - - - x - AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0 x - - - x - AlphaFold/2.3.1-foss-2022a x x x x x x AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1 x - - - x - AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.2.2-foss-2021a - x x - x x AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.1.2-foss-2021a - x x - x x AlphaFold/2.1.1-fosscuda-2020b x - - - x - AlphaFold/2.0.0-fosscuda-2020b x - - - x - AlphaFold/2.0.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/AlphaPulldown/", "title": "AlphaPulldown", "text": ""}, {"location": "available_software/detail/AlphaPulldown/#available-modules", "title": "Available modules", "text": "

The overview below shows which AlphaPulldown installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AlphaPulldown, load one of these modules using a module load command like:

          module load AlphaPulldown/0.30.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AlphaPulldown/0.30.7-foss-2022a - - x - x - AlphaPulldown/0.30.4-fosscuda-2020b x - - - x - AlphaPulldown/0.30.4-foss-2020b x x x x x x"}, {"location": "available_software/detail/Altair-EDEM/", "title": "Altair-EDEM", "text": ""}, {"location": "available_software/detail/Altair-EDEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which Altair-EDEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Altair-EDEM, load one of these modules using a module load command like:

          module load Altair-EDEM/2021.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Altair-EDEM/2021.2 - x x - x -"}, {"location": "available_software/detail/Amber/", "title": "Amber", "text": ""}, {"location": "available_software/detail/Amber/#available-modules", "title": "Available modules", "text": "

The overview below shows which Amber installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Amber, load one of these modules using a module load command like:

          module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/AmberMini/", "title": "AmberMini", "text": ""}, {"location": "available_software/detail/AmberMini/#available-modules", "title": "Available modules", "text": "

The overview below shows which AmberMini installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AmberMini, load one of these modules using a module load command like:

          module load AmberMini/16.16.0-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AmberMini/16.16.0-intel-2020a - x x - x x"}, {"location": "available_software/detail/AmberTools/", "title": "AmberTools", "text": ""}, {"location": "available_software/detail/AmberTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which AmberTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AmberTools, load one of these modules using a module load command like:

          module load AmberTools/20-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AmberTools/20-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Anaconda3/", "title": "Anaconda3", "text": ""}, {"location": "available_software/detail/Anaconda3/#available-modules", "title": "Available modules", "text": "

The overview below shows which Anaconda3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Anaconda3, load one of these modules using a module load command like:

          module load Anaconda3/2023.03-1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Anaconda3/2023.03-1 x x x x x x Anaconda3/2020.11 - x x - x - Anaconda3/2020.07 - x - - - - Anaconda3/2020.02 - x x - x -"}, {"location": "available_software/detail/Annocript/", "title": "Annocript", "text": ""}, {"location": "available_software/detail/Annocript/#available-modules", "title": "Available modules", "text": "

The overview below shows which Annocript installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Annocript, load one of these modules using a module load command like:

          module load Annocript/2.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Annocript/2.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ArchR/", "title": "ArchR", "text": ""}, {"location": "available_software/detail/ArchR/#available-modules", "title": "Available modules", "text": "

The overview below shows which ArchR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ArchR, load one of these modules using a module load command like:

          module load ArchR/1.0.2-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ArchR/1.0.2-foss-2023a-R-4.3.2 x x x x x x ArchR/1.0.1-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Archive-Zip/", "title": "Archive-Zip", "text": ""}, {"location": "available_software/detail/Archive-Zip/#available-modules", "title": "Available modules", "text": "

The overview below shows which Archive-Zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Archive-Zip, load one of these modules using a module load command like:

          module load Archive-Zip/1.68-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Archive-Zip/1.68-GCCcore-11.3.0 x x x - x x Archive-Zip/1.68-GCCcore-11.2.0 x x x - x x Archive-Zip/1.68-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Arlequin/", "title": "Arlequin", "text": ""}, {"location": "available_software/detail/Arlequin/#available-modules", "title": "Available modules", "text": "

The overview below shows which Arlequin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Arlequin, load one of these modules using a module load command like:

          module load Arlequin/3.5.2.2-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Arlequin/3.5.2.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Armadillo/", "title": "Armadillo", "text": ""}, {"location": "available_software/detail/Armadillo/#available-modules", "title": "Available modules", "text": "

The overview below shows which Armadillo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Armadillo, load one of these modules using a module load command like:

          module load Armadillo/12.6.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Armadillo/12.6.2-foss-2023a x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/Arrow/", "title": "Arrow", "text": ""}, {"location": "available_software/detail/Arrow/#available-modules", "title": "Available modules", "text": "

The overview below shows which Arrow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Arrow, load one of these modules using a module load command like:

          module load Arrow/14.0.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Arrow/14.0.1-gfbf-2023a x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x Arrow/8.0.0-foss-2022a x x x x x x Arrow/6.0.0-foss-2021b x x x x x x Arrow/6.0.0-foss-2021a - x x - x x Arrow/0.17.1-intel-2020b - x x - x x Arrow/0.17.1-intel-2020a-Python-3.8.2 - x x - x x Arrow/0.17.1-fosscuda-2020b - - - - x - Arrow/0.17.1-foss-2020a-Python-3.8.2 - x x - x x Arrow/0.16.0-intel-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/ArviZ/", "title": "ArviZ", "text": ""}, {"location": "available_software/detail/ArviZ/#available-modules", "title": "Available modules", "text": "

The overview below shows which ArviZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ArviZ, load one of these modules using a module load command like:

          module load ArviZ/0.16.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ArviZ/0.16.1-foss-2023a x x x x x x ArviZ/0.12.1-foss-2021a x x x x x x ArviZ/0.11.4-intel-2021b x x x - x x ArviZ/0.11.1-intel-2020b - x x - x x ArviZ/0.7.0-intel-2019b-Python-3.7.4 - x x - x x ArviZ/0.7.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Aspera-CLI/", "title": "Aspera-CLI", "text": ""}, {"location": "available_software/detail/Aspera-CLI/#available-modules", "title": "Available modules", "text": "

The overview below shows which Aspera-CLI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Aspera-CLI, load one of these modules using a module load command like:

          module load Aspera-CLI/3.9.6.1467.159c5b1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Aspera-CLI/3.9.6.1467.159c5b1 - x x - x -"}, {"location": "available_software/detail/AutoDock-Vina/", "title": "AutoDock-Vina", "text": ""}, {"location": "available_software/detail/AutoDock-Vina/#available-modules", "title": "Available modules", "text": "

The overview below shows which AutoDock-Vina installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AutoDock-Vina, load one of these modules using a module load command like:

          module load AutoDock-Vina/1.2.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AutoDock-Vina/1.2.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/AutoGeneS/", "title": "AutoGeneS", "text": ""}, {"location": "available_software/detail/AutoGeneS/#available-modules", "title": "Available modules", "text": "

The overview below shows which AutoGeneS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AutoGeneS, load one of these modules using a module load command like:

          module load AutoGeneS/1.0.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AutoGeneS/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/AutoMap/", "title": "AutoMap", "text": ""}, {"location": "available_software/detail/AutoMap/#available-modules", "title": "Available modules", "text": "

The overview below shows which AutoMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using AutoMap, load one of these modules using a module load command like:

          module load AutoMap/1.0-foss-2019b-20200324\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty AutoMap/1.0-foss-2019b-20200324 - x x - x x"}, {"location": "available_software/detail/Autoconf/", "title": "Autoconf", "text": ""}, {"location": "available_software/detail/Autoconf/#available-modules", "title": "Available modules", "text": "

The overview below shows which Autoconf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Autoconf, load one of these modules using a module load command like:

          module load Autoconf/2.71-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Autoconf/2.71-GCCcore-13.2.0 x x x x x x Autoconf/2.71-GCCcore-12.3.0 x x x x x x Autoconf/2.71-GCCcore-12.2.0 x x x x x x Autoconf/2.71-GCCcore-11.3.0 x x x x x x Autoconf/2.71-GCCcore-11.2.0 x x x x x x Autoconf/2.71-GCCcore-10.3.0 x x x x x x Autoconf/2.71 x x x x x x Autoconf/2.69-GCCcore-10.2.0 x x x x x x Autoconf/2.69-GCCcore-9.3.0 x x x x x x Autoconf/2.69-GCCcore-8.3.0 x x x x x x Autoconf/2.69-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Automake/", "title": "Automake", "text": ""}, {"location": "available_software/detail/Automake/#available-modules", "title": "Available modules", "text": "

The overview below shows which Automake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Automake, load one of these modules using a module load command like:

          module load Automake/1.16.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Automake/1.16.5-GCCcore-13.2.0 x x x x x x Automake/1.16.5-GCCcore-12.3.0 x x x x x x Automake/1.16.5-GCCcore-12.2.0 x x x x x x Automake/1.16.5-GCCcore-11.3.0 x x x x x x Automake/1.16.5 x x x x x x Automake/1.16.4-GCCcore-11.2.0 x x x x x x Automake/1.16.3-GCCcore-10.3.0 x x x x x x Automake/1.16.2-GCCcore-10.2.0 x x x x x x Automake/1.16.1-GCCcore-9.3.0 x x x x x x Automake/1.16.1-GCCcore-8.3.0 x x x x x x Automake/1.16.1-GCCcore-8.2.0 - x - - - - Automake/1.15.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Autotools/", "title": "Autotools", "text": ""}, {"location": "available_software/detail/Autotools/#available-modules", "title": "Available modules", "text": "

The overview below shows which Autotools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Autotools, load one of these modules using a module load command like:

          module load Autotools/20220317-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Autotools/20220317-GCCcore-13.2.0 x x x x x x Autotools/20220317-GCCcore-12.3.0 x x x x x x Autotools/20220317-GCCcore-12.2.0 x x x x x x Autotools/20220317-GCCcore-11.3.0 x x x x x x Autotools/20220317 x x x x x x Autotools/20210726-GCCcore-11.2.0 x x x x x x Autotools/20210128-GCCcore-10.3.0 x x x x x x Autotools/20200321-GCCcore-10.2.0 x x x x x x Autotools/20180311-GCCcore-9.3.0 x x x x x x Autotools/20180311-GCCcore-8.3.0 x x x x x x Autotools/20180311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Avogadro2/", "title": "Avogadro2", "text": ""}, {"location": "available_software/detail/Avogadro2/#available-modules", "title": "Available modules", "text": "

The overview below shows which Avogadro2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Avogadro2, load one of these modules using a module load command like:

          module load Avogadro2/1.97.0-linux-x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Avogadro2/1.97.0-linux-x86_64 x x x - x x"}, {"location": "available_software/detail/BAMSurgeon/", "title": "BAMSurgeon", "text": ""}, {"location": "available_software/detail/BAMSurgeon/#available-modules", "title": "Available modules", "text": "

The overview below shows which BAMSurgeon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BAMSurgeon, load one of these modules using a module load command like:

          module load BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16 - x x - x -"}, {"location": "available_software/detail/BBMap/", "title": "BBMap", "text": ""}, {"location": "available_software/detail/BBMap/#available-modules", "title": "Available modules", "text": "

The overview below shows which BBMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BBMap, load one of these modules using a module load command like:

          module load BBMap/39.01-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BBMap/39.01-GCC-12.2.0 x x x x x x BBMap/38.98-GCC-11.2.0 x x x - x x BBMap/38.87-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/BCFtools/", "title": "BCFtools", "text": ""}, {"location": "available_software/detail/BCFtools/#available-modules", "title": "Available modules", "text": "

The overview below shows which BCFtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BCFtools, load one of these modules using a module load command like:

          module load BCFtools/1.18-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BCFtools/1.18-GCC-12.3.0 x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x BCFtools/1.15.1-GCC-11.3.0 x x x x x x BCFtools/1.14-GCC-11.2.0 x x x x x x BCFtools/1.12-GCC-10.3.0 x x x - x x BCFtools/1.12-GCC-10.2.0 - x x - x - BCFtools/1.11-GCC-10.2.0 x x x x x x BCFtools/1.10.2-iccifort-2019.5.281 - x x - x x BCFtools/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BDBag/", "title": "BDBag", "text": ""}, {"location": "available_software/detail/BDBag/#available-modules", "title": "Available modules", "text": "

The overview below shows which BDBag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BDBag, load one of these modules using a module load command like:

          module load BDBag/1.6.3-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BDBag/1.6.3-intel-2021b x x x - x x"}, {"location": "available_software/detail/BEDOPS/", "title": "BEDOPS", "text": ""}, {"location": "available_software/detail/BEDOPS/#available-modules", "title": "Available modules", "text": "

The overview below shows which BEDOPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using BEDOPS, load one of these modules using a module load command like:

          module load BEDOPS/2.4.41-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BEDOPS/2.4.41-foss-2021b x x x x x x"}, {"location": "available_software/detail/BEDTools/", "title": "BEDTools", "text": ""}, {"location": "available_software/detail/BEDTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BEDTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BEDTools, load one of these modules using a module load command like:

          module load BEDTools/2.31.0-GCC-12.3.0\n
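
          Once the module is loaded, a minimal usage sketch (the BED filenames are hypothetical) is:

          # report intervals of a.bed that overlap intervals of b.bed
          bedtools intersect -a a.bed -b b.bed > overlaps.bed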

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BEDTools/2.31.0-GCC-12.3.0 x x x x x x BEDTools/2.30.0-GCC-12.2.0 x x x x x x BEDTools/2.30.0-GCC-11.3.0 x x x x x x BEDTools/2.30.0-GCC-11.2.0 x x x x x x BEDTools/2.30.0-GCC-10.2.0 - x x x x x BEDTools/2.29.2-GCC-9.3.0 - x x - x x BEDTools/2.29.2-GCC-8.3.0 - x x - x x BEDTools/2.19.1-GCC-8.3.0 - - - - - x"}, {"location": "available_software/detail/BLAST%2B/", "title": "BLAST+", "text": ""}, {"location": "available_software/detail/BLAST%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BLAST+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BLAST+, load one of these modules using a module load command like:

          module load BLAST+/2.14.1-gompi-2023a\n
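
          Once the module is loaded, a minimal nucleotide-search sketch (the FASTA filenames are hypothetical) is:

          # build a nucleotide database from a FASTA file of subject sequences
          makeblastdb -in subjects.fa -dbtype nucl
          # search it with query sequences and write tabular output
          blastn -query queries.fa -db subjects.fa -outfmt 6 -out hits.tsv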

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BLAST+/2.14.1-gompi-2023a x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x BLAST+/2.13.0-gompi-2022a x x x x x x BLAST+/2.12.0-gompi-2021b x x x x x x BLAST+/2.11.0-gompi-2021a - x x x x x BLAST+/2.11.0-gompi-2020b x x x x x x BLAST+/2.10.1-iimpi-2020a - x x - x x BLAST+/2.10.1-gompi-2020a - x x - x x BLAST+/2.9.0-iimpi-2019b - x x - x x BLAST+/2.9.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/BLAT/", "title": "BLAT", "text": ""}, {"location": "available_software/detail/BLAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BLAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BLAT, load one of these modules using a module load command like:

          module load BLAT/3.7-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BLAT/3.7-GCC-11.3.0 x x x x x x BLAT/3.5-GCC-9.3.0 - x x - x - BLAT/3.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BLIS/", "title": "BLIS", "text": ""}, {"location": "available_software/detail/BLIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BLIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BLIS, load one of these modules using a module load command like:

          module load BLIS/0.9.0-GCC-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BLIS/0.9.0-GCC-13.2.0 x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x BLIS/0.9.0-GCC-11.3.0 x x x x x x BLIS/0.8.1-GCC-11.2.0 x x x x x x BLIS/0.8.1-GCC-10.3.0 x x x x x x BLIS/0.8.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/BRAKER/", "title": "BRAKER", "text": ""}, {"location": "available_software/detail/BRAKER/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BRAKER installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BRAKER, load one of these modules using a module load command like:

          module load BRAKER/2.1.6-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BRAKER/2.1.6-foss-2021b x x x x x x BRAKER/2.1.6-foss-2020b x x x - x x BRAKER/2.1.5-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BSMAPz/", "title": "BSMAPz", "text": ""}, {"location": "available_software/detail/BSMAPz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BSMAPz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BSMAPz, load one of these modules using a module load command like:

          module load BSMAPz/1.1.1-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BSMAPz/1.1.1-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/BSseeker2/", "title": "BSseeker2", "text": ""}, {"location": "available_software/detail/BSseeker2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BSseeker2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BSseeker2, load one of these modules using a module load command like:

          module load BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16 - x - - - - BSseeker2/2.1.8-GCC-8.3.0-Python-2.7.16 - x - - - -"}, {"location": "available_software/detail/BUSCO/", "title": "BUSCO", "text": ""}, {"location": "available_software/detail/BUSCO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BUSCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BUSCO, load one of these modules using a module load command like:

          module load BUSCO/5.4.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BUSCO/5.4.3-foss-2021b x x x - x x BUSCO/5.1.2-foss-2020b - x x x x - BUSCO/4.1.2-foss-2020b - x x - x x BUSCO/4.0.6-foss-2020b - x x x x x BUSCO/4.0.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BUStools/", "title": "BUStools", "text": ""}, {"location": "available_software/detail/BUStools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BUStools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BUStools, load one of these modules using a module load command like:

          module load BUStools/0.43.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BUStools/0.43.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/BWA/", "title": "BWA", "text": ""}, {"location": "available_software/detail/BWA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BWA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BWA, load one of these modules using a module load command like:

          module load BWA/0.7.17-iccifort-2019.5.281\n
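
          Once a BWA module is loaded, a minimal alignment sketch (reference and read filenames are hypothetical) is:

          bwa index ref.fa                   # build the FM-index for the reference
          bwa mem ref.fa reads.fq > aln.sam  # align single-end reads against it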

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BWA/0.7.17-iccifort-2019.5.281 - x - - - - BWA/0.7.17-GCCcore-12.3.0 x x x x x x BWA/0.7.17-GCCcore-12.2.0 x x x x x x BWA/0.7.17-GCCcore-11.3.0 x x x x x x BWA/0.7.17-GCCcore-11.2.0 x x x x x x BWA/0.7.17-GCC-10.2.0 - x x x x x BWA/0.7.17-GCC-9.3.0 - x x - x x BWA/0.7.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BamTools/", "title": "BamTools", "text": ""}, {"location": "available_software/detail/BamTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BamTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BamTools, load one of these modules using a module load command like:

          module load BamTools/2.5.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BamTools/2.5.2-GCC-12.3.0 x x x x x x BamTools/2.5.2-GCC-12.2.0 x x x x x x BamTools/2.5.2-GCC-11.3.0 x x x x x x BamTools/2.5.2-GCC-11.2.0 x x x x x x BamTools/2.5.1-iccifort-2019.5.281 - x x - x x BamTools/2.5.1-GCC-10.2.0 x x x x x x BamTools/2.5.1-GCC-9.3.0 - x x - x x BamTools/2.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bambi/", "title": "Bambi", "text": ""}, {"location": "available_software/detail/Bambi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bambi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bambi, load one of these modules using a module load command like:

          module load Bambi/0.7.1-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bambi/0.7.1-intel-2021b x x x - x x"}, {"location": "available_software/detail/Bandage/", "title": "Bandage", "text": ""}, {"location": "available_software/detail/Bandage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bandage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bandage, load one of these modules using a module load command like:

          module load Bandage/0.9.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bandage/0.9.0-GCCcore-11.2.0 x x x - x x Bandage/0.8.1_Centos - x x x x x"}, {"location": "available_software/detail/BatMeth2/", "title": "BatMeth2", "text": ""}, {"location": "available_software/detail/BatMeth2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BatMeth2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BatMeth2, load one of these modules using a module load command like:

          module load BatMeth2/2.1-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BatMeth2/2.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/BayeScEnv/", "title": "BayeScEnv", "text": ""}, {"location": "available_software/detail/BayeScEnv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayeScEnv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BayeScEnv, load one of these modules using a module load command like:

          module load BayeScEnv/1.1-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayeScEnv/1.1-iccifort-2019.5.281 - x - - - - BayeScEnv/1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/BayeScan/", "title": "BayeScan", "text": ""}, {"location": "available_software/detail/BayeScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayeScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BayeScan, load one of these modules using a module load command like:

          module load BayeScan/2.1-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayeScan/2.1-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/BayesAss3-SNPs/", "title": "BayesAss3-SNPs", "text": ""}, {"location": "available_software/detail/BayesAss3-SNPs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayesAss3-SNPs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BayesAss3-SNPs, load one of these modules using a module load command like:

          module load BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/BayesPrism/", "title": "BayesPrism", "text": ""}, {"location": "available_software/detail/BayesPrism/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BayesPrism installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BayesPrism, load one of these modules using a module load command like:

          module load BayesPrism/2.0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BayesPrism/2.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Bazel/", "title": "Bazel", "text": ""}, {"location": "available_software/detail/Bazel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bazel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bazel, load one of these modules using a module load command like:

          module load Bazel/6.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bazel/6.3.1-GCCcore-12.3.0 x x x x x x Bazel/6.3.1-GCCcore-12.2.0 x x x x x x Bazel/5.1.1-GCCcore-11.3.0 x x x x x x Bazel/4.2.2-GCCcore-11.2.0 - - - x - - Bazel/3.7.2-GCCcore-11.2.0 x x x x x x Bazel/3.7.2-GCCcore-10.3.0 x x x x x x Bazel/3.7.2-GCCcore-10.2.0 x x x x x x Bazel/3.6.0-GCCcore-9.3.0 - x x - x x Bazel/3.4.1-GCCcore-8.3.0 - - x - x x Bazel/2.0.0-GCCcore-10.2.0 - x x x x x Bazel/2.0.0-GCCcore-8.3.0 - x x - x x Bazel/0.29.1-GCCcore-8.3.0 - x x - x x Bazel/0.26.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Beast/", "title": "Beast", "text": ""}, {"location": "available_software/detail/Beast/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Beast installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Beast, load one of these modules using a module load command like:

          module load Beast/2.7.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Beast/2.7.3-GCC-11.3.0 x x x x x x Beast/2.6.4-GCC-10.2.0 - x x - x - Beast/1.10.5pre1-GCC-11.3.0 x x x - x x Beast/1.10.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/BeautifulSoup/", "title": "BeautifulSoup", "text": ""}, {"location": "available_software/detail/BeautifulSoup/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BeautifulSoup, load one of these modules using a module load command like:

          module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x BeautifulSoup/4.11.1-GCCcore-12.2.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.3.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.2.0 x x x - x x BeautifulSoup/4.10.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/BerkeleyGW/", "title": "BerkeleyGW", "text": ""}, {"location": "available_software/detail/BerkeleyGW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BerkeleyGW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BerkeleyGW, load one of these modules using a module load command like:

          module load BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4 - x x - x x BerkeleyGW/2.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BiG-SCAPE/", "title": "BiG-SCAPE", "text": ""}, {"location": "available_software/detail/BiG-SCAPE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BiG-SCAPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BiG-SCAPE, load one of these modules using a module load command like:

          module load BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BigDFT/", "title": "BigDFT", "text": ""}, {"location": "available_software/detail/BigDFT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BigDFT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BigDFT, load one of these modules using a module load command like:

          module load BigDFT/1.9.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BigDFT/1.9.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/BinSanity/", "title": "BinSanity", "text": ""}, {"location": "available_software/detail/BinSanity/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BinSanity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BinSanity, load one of these modules using a module load command like:

          module load BinSanity/0.3.5-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BinSanity/0.3.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Bio-DB-HTS/", "title": "Bio-DB-HTS", "text": ""}, {"location": "available_software/detail/Bio-DB-HTS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bio-DB-HTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bio-DB-HTS, load one of these modules using a module load command like:

          module load Bio-DB-HTS/3.01-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bio-DB-HTS/3.01-GCC-11.3.0 x x x - x x Bio-DB-HTS/3.01-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Bio-EUtilities/", "title": "Bio-EUtilities", "text": ""}, {"location": "available_software/detail/Bio-EUtilities/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bio-EUtilities installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bio-EUtilities, load one of these modules using a module load command like:

          module load Bio-EUtilities/1.76-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bio-EUtilities/1.76-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bio-SearchIO-hmmer/", "title": "Bio-SearchIO-hmmer", "text": ""}, {"location": "available_software/detail/Bio-SearchIO-hmmer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bio-SearchIO-hmmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

          module load Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/BioPerl/", "title": "BioPerl", "text": ""}, {"location": "available_software/detail/BioPerl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which BioPerl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using BioPerl, load one of these modules using a module load command like:

          module load BioPerl/1.7.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty BioPerl/1.7.8-GCCcore-11.3.0 x x x x x x BioPerl/1.7.8-GCCcore-11.2.0 x x x x x x BioPerl/1.7.8-GCCcore-10.2.0 - x x x x x BioPerl/1.7.7-GCCcore-9.3.0 - x x - x x BioPerl/1.7.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Biopython/", "title": "Biopython", "text": ""}, {"location": "available_software/detail/Biopython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Biopython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Biopython, load one of these modules using a module load command like:

          module load Biopython/1.83-foss-2023a\n
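
          Once a Biopython module is loaded, a quick sanity check that the package is importable (a minimal sketch) is:

          python -c 'import Bio; print(Bio.__version__)'  # should print the Biopython version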

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Biopython/1.83-foss-2023a x x x x x x Biopython/1.81-foss-2022b x x x x x x Biopython/1.79-foss-2022a x x x x x x Biopython/1.79-foss-2021b x x x x x x Biopython/1.79-foss-2021a x x x x x x Biopython/1.78-intel-2020b - x x - x x Biopython/1.78-intel-2020a-Python-3.8.2 - x x - x x Biopython/1.78-fosscuda-2020b x - - - x - Biopython/1.78-foss-2020b x x x x x x Biopython/1.78-foss-2020a-Python-3.8.2 - x x - x x Biopython/1.76-foss-2021b-Python-2.7.18 x x x x x x Biopython/1.76-foss-2020b-Python-2.7.18 - x x x x x Biopython/1.75-intel-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Bismark/", "title": "Bismark", "text": ""}, {"location": "available_software/detail/Bismark/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bismark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bismark, load one of these modules using a module load command like:

          module load Bismark/0.23.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bismark/0.23.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/Bison/", "title": "Bison", "text": ""}, {"location": "available_software/detail/Bison/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bison installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bison, load one of these modules using a module load command like:

          module load Bison/3.8.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bison/3.8.2-GCCcore-13.2.0 x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x Bison/3.8.2-GCCcore-11.3.0 x x x x x x Bison/3.8.2 x x x x x x Bison/3.7.6-GCCcore-11.2.0 x x x x x x Bison/3.7.6-GCCcore-10.3.0 x x x x x x Bison/3.7.6 x x x - x - Bison/3.7.1-GCCcore-10.2.0 x x x x x x Bison/3.7.1 x x x - x - Bison/3.5.3-GCCcore-9.3.0 x x x x x x Bison/3.5.3 x x x - x - Bison/3.3.2-GCCcore-8.3.0 x x x x x x Bison/3.3.2 x x x x x x Bison/3.0.5-GCCcore-8.2.0 - x - - - - Bison/3.0.5 - x - - - x Bison/3.0.4 x x x x x x"}, {"location": "available_software/detail/Blender/", "title": "Blender", "text": ""}, {"location": "available_software/detail/Blender/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Blender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Blender, load one of these modules using a module load command like:

          module load Blender/3.5.0-linux-x86_64-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Blender/3.5.0-linux-x86_64-CUDA-11.7.0 x x x x x x Blender/3.3.1-linux-x86_64-CUDA-11.7.0 x - - - x - Blender/3.3.1-linux-x86_64 x x x - x x Blender/2.81-intel-2019b-Python-3.7.4 - x x - x x Blender/2.81-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Block/", "title": "Block", "text": ""}, {"location": "available_software/detail/Block/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Block installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Block, load one of these modules using a module load command like:

          module load Block/1.5.3-20200525-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Block/1.5.3-20200525-foss-2022b x x x x x x Block/1.5.3-20200525-foss-2022a - x x x x x"}, {"location": "available_software/detail/Blosc/", "title": "Blosc", "text": ""}, {"location": "available_software/detail/Blosc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Blosc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Blosc, load one of these modules using a module load command like:

          module load Blosc/1.21.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Blosc/1.21.3-GCCcore-11.3.0 x x x x x x Blosc/1.21.1-GCCcore-11.2.0 x x x x x x Blosc/1.21.0-GCCcore-10.3.0 x x x x x x Blosc/1.21.0-GCCcore-10.2.0 - x x x x x Blosc/1.17.1-GCCcore-9.3.0 x x x x x x Blosc/1.17.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Blosc2/", "title": "Blosc2", "text": ""}, {"location": "available_software/detail/Blosc2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Blosc2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Blosc2, load one of these modules using a module load command like:

          module load Blosc2/2.6.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Blosc2/2.6.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Bonito/", "title": "Bonito", "text": ""}, {"location": "available_software/detail/Bonito/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bonito installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bonito, load one of these modules using a module load command like:

          module load Bonito/0.4.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bonito/0.4.0-fosscuda-2020b - - - - x - Bonito/0.3.8-fosscuda-2020b - - - - x - Bonito/0.1.0-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/Bonnie%2B%2B/", "title": "Bonnie++", "text": ""}, {"location": "available_software/detail/Bonnie%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bonnie++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bonnie++, load one of these modules using a module load command like:

          module load Bonnie++/2.00a-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bonnie++/2.00a-GCC-10.3.0 - x - - - -"}, {"location": "available_software/detail/Boost.MPI/", "title": "Boost.MPI", "text": ""}, {"location": "available_software/detail/Boost.MPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Boost.MPI, load one of these modules using a module load command like:

          module load Boost.MPI/1.81.0-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost.MPI/1.81.0-gompi-2022b x x x x x x Boost.MPI/1.79.0-gompi-2022a - x x x x x Boost.MPI/1.77.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Boost.Python-NumPy/", "title": "Boost.Python-NumPy", "text": ""}, {"location": "available_software/detail/Boost.Python-NumPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost.Python-NumPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Boost.Python-NumPy, load one of these modules using a module load command like:

          module load Boost.Python-NumPy/1.79.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost.Python-NumPy/1.79.0-foss-2022a - - x - x -"}, {"location": "available_software/detail/Boost.Python/", "title": "Boost.Python", "text": ""}, {"location": "available_software/detail/Boost.Python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Boost.Python, load one of these modules using a module load command like:

          module load Boost.Python/1.79.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost.Python/1.79.0-GCC-11.3.0 x x x x x x Boost.Python/1.77.0-GCC-11.2.0 x x x - x x Boost.Python/1.72.0-iimpi-2020a - x x - x x Boost.Python/1.71.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Boost/", "title": "Boost", "text": ""}, {"location": "available_software/detail/Boost/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Boost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Boost, load one of these modules using a module load command like:

          module load Boost/1.82.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Boost/1.82.0-GCC-12.3.0 x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x Boost/1.79.0-GCC-11.3.0 x x x x x x Boost/1.79.0-GCC-11.2.0 x x x x x x Boost/1.77.0-intel-compilers-2021.4.0 x x x x x x Boost/1.77.0-GCC-11.2.0 x x x x x x Boost/1.76.0-intel-compilers-2021.2.0 - x x - x x Boost/1.76.0-GCC-10.3.0 x x x x x x Boost/1.75.0-GCC-11.2.0 x x x x x x Boost/1.74.0-iccifort-2020.4.304 - x x x x x Boost/1.74.0-GCC-10.2.0 x x x x x x Boost/1.72.0-iompi-2020a - x - - - - Boost/1.72.0-iimpi-2020a x x x x x x Boost/1.72.0-gompi-2020a - x x - x x Boost/1.71.0-iimpi-2019b - x x - x x Boost/1.71.0-gompi-2019b x x x - x x"}, {"location": "available_software/detail/Bottleneck/", "title": "Bottleneck", "text": ""}, {"location": "available_software/detail/Bottleneck/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bottleneck installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bottleneck, load one of these modules using a module load command like:

          module load Bottleneck/1.3.2-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bottleneck/1.3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Bowtie/", "title": "Bowtie", "text": ""}, {"location": "available_software/detail/Bowtie/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bowtie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bowtie, load one of these modules using a module load command like:

          module load Bowtie/1.3.1-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bowtie/1.3.1-GCC-11.3.0 x x x x x x Bowtie/1.3.1-GCC-11.2.0 x x x x x x Bowtie/1.3.0-GCC-10.2.0 - x x - x - Bowtie/1.2.3-iccifort-2019.5.281 - x - - - - Bowtie/1.2.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bowtie2/", "title": "Bowtie2", "text": ""}, {"location": "available_software/detail/Bowtie2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bowtie2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bowtie2, load one of these modules using a module load command like:

          module load Bowtie2/2.4.5-GCC-11.3.0\n
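
          Once a Bowtie2 module is loaded, a minimal indexing-and-alignment sketch (with hypothetical filenames) is:

          bowtie2-build ref.fa ref_idx                # build an index from a reference FASTA
          bowtie2 -x ref_idx -U reads.fq -S aln.sam   # align single-end reads against that index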

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bowtie2/2.4.5-GCC-11.3.0 x x x x x x Bowtie2/2.4.4-GCC-11.2.0 x x x - x x Bowtie2/2.4.2-GCC-10.2.0 - x x x x x Bowtie2/2.4.1-GCC-9.3.0 - x x - x x Bowtie2/2.3.5.1-iccifort-2019.5.281 - x - - - - Bowtie2/2.3.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bracken/", "title": "Bracken", "text": ""}, {"location": "available_software/detail/Bracken/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Bracken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Bracken, load one of these modules using a module load command like:

          module load Bracken/2.9-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Bracken/2.9-GCCcore-10.3.0 x x x x x x Bracken/2.7-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Brotli-python/", "title": "Brotli-python", "text": ""}, {"location": "available_software/detail/Brotli-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Brotli-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Brotli-python, load one of these modules using a module load command like:

          module load Brotli-python/1.0.9-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Brotli-python/1.0.9-GCCcore-11.3.0 x x x x x x Brotli-python/1.0.9-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Brotli/", "title": "Brotli", "text": ""}, {"location": "available_software/detail/Brotli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Brotli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Brotli, load one of these modules using a module load command like:

          module load Brotli/1.1.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Brotli/1.1.0-GCCcore-13.2.0 x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x Brotli/1.0.9-GCCcore-11.3.0 x x x x x x Brotli/1.0.9-GCCcore-11.2.0 x x x x x x Brotli/1.0.9-GCCcore-10.3.0 x x x x x x Brotli/1.0.9-GCCcore-10.2.0 x - x x x x"}, {"location": "available_software/detail/Brunsli/", "title": "Brunsli", "text": ""}, {"location": "available_software/detail/Brunsli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Brunsli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Brunsli, load one of these modules using a module load command like:

          module load Brunsli/0.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Brunsli/0.1-GCCcore-12.3.0 x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x Brunsli/0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CASPR/", "title": "CASPR", "text": ""}, {"location": "available_software/detail/CASPR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CASPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CASPR, load one of these modules using a module load command like:

          module load CASPR/20200730-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CASPR/20200730-foss-2022a x x x x x x"}, {"location": "available_software/detail/CCL/", "title": "CCL", "text": ""}, {"location": "available_software/detail/CCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CCL, load one of these modules using a module load command like:

          module load CCL/1.12.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CCL/1.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/CD-HIT/", "title": "CD-HIT", "text": ""}, {"location": "available_software/detail/CD-HIT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CD-HIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CD-HIT, load one of these modules using a module load command like:

          module load CD-HIT/4.8.1-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CD-HIT/4.8.1-iccifort-2019.5.281 - x x - x x CD-HIT/4.8.1-GCC-12.2.0 x x x x x x CD-HIT/4.8.1-GCC-11.2.0 x x x - x x CD-HIT/4.8.1-GCC-10.2.0 - x x x x x CD-HIT/4.8.1-GCC-9.3.0 - x x - x x CD-HIT/4.8.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/CDAT/", "title": "CDAT", "text": ""}, {"location": "available_software/detail/CDAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CDAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CDAT, load one of these modules using a module load command like:

          module load CDAT/8.2.1-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CDAT/8.2.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/CDBtools/", "title": "CDBtools", "text": ""}, {"location": "available_software/detail/CDBtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CDBtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CDBtools, load one of these modules using a module load command like:

          module load CDBtools/0.99-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CDBtools/0.99-GCC-10.2.0 x x x - x x"}, {"location": "available_software/detail/CDO/", "title": "CDO", "text": ""}, {"location": "available_software/detail/CDO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CDO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CDO, load one of these modules using a module load command like:

          module load CDO/2.0.5-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CDO/2.0.5-gompi-2021b x x x x x x CDO/1.9.10-gompi-2021a x x x - x x CDO/1.9.8-intel-2019b - x x - x x"}, {"location": "available_software/detail/CENSO/", "title": "CENSO", "text": ""}, {"location": "available_software/detail/CENSO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CENSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CENSO, load one of these modules using a module load command like:

          module load CENSO/1.2.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CENSO/1.2.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/CESM-deps/", "title": "CESM-deps", "text": ""}, {"location": "available_software/detail/CESM-deps/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CESM-deps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CESM-deps, load one of these modules using a module load command like:

          module load CESM-deps/2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CESM-deps/2-foss-2021b x x x - x x"}, {"location": "available_software/detail/CFDEMcoupling/", "title": "CFDEMcoupling", "text": ""}, {"location": "available_software/detail/CFDEMcoupling/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CFDEMcoupling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CFDEMcoupling, load one of these modules using a module load command like:

          module load CFDEMcoupling/3.8.0-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CFDEMcoupling/3.8.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/CFITSIO/", "title": "CFITSIO", "text": ""}, {"location": "available_software/detail/CFITSIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CFITSIO, load one of these modules using a module load command like:

          module load CFITSIO/4.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x CFITSIO/4.2.0-GCCcore-11.3.0 x x x x x x CFITSIO/4.1.0-GCCcore-11.3.0 x x x x x x CFITSIO/3.49-GCCcore-11.2.0 x x x x x x CFITSIO/3.47-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CGAL/", "title": "CGAL", "text": ""}, {"location": "available_software/detail/CGAL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CGAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CGAL, load one of these modules using a module load command like:

          module load CGAL/5.6-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CGAL/5.6-GCCcore-12.3.0 x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x CGAL/5.2-iimpi-2020b - x - - - - CGAL/5.2-gompi-2020b x x x x x x CGAL/4.14.3-iimpi-2021a - x x - x x CGAL/4.14.3-gompi-2022a x x x x x x CGAL/4.14.3-gompi-2021b x x x x x x CGAL/4.14.3-gompi-2021a x x x x x x CGAL/4.14.3-gompi-2020a-Python-3.8.2 - x x - x x CGAL/4.14.1-foss-2019b-Python-3.7.4 x x x - x x CGAL/4.14.1-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/CGmapTools/", "title": "CGmapTools", "text": ""}, {"location": "available_software/detail/CGmapTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CGmapTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CGmapTools, load one of these modules using a module load command like:

          module load CGmapTools/0.1.2-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CGmapTools/0.1.2-intel-2019b - x x - x x"}, {"location": "available_software/detail/CIRCexplorer2/", "title": "CIRCexplorer2", "text": ""}, {"location": "available_software/detail/CIRCexplorer2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CIRCexplorer2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CIRCexplorer2, load one of these modules using a module load command like:

          module load CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18 x x x x x x CIRCexplorer2/2.3.8-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CIRI-long/", "title": "CIRI-long", "text": ""}, {"location": "available_software/detail/CIRI-long/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CIRI-long installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CIRI-long, load one of these modules using a module load command like:

          module load CIRI-long/1.0.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CIRI-long/1.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/CIRIquant/", "title": "CIRIquant", "text": ""}, {"location": "available_software/detail/CIRIquant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CIRIquant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CIRIquant, load one of these modules using a module load command like:

          module load CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CITE-seq-Count/", "title": "CITE-seq-Count", "text": ""}, {"location": "available_software/detail/CITE-seq-Count/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CITE-seq-Count installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CITE-seq-Count, load one of these modules using a module load command like:

          module load CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/CLEAR/", "title": "CLEAR", "text": ""}, {"location": "available_software/detail/CLEAR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CLEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CLEAR, load one of these modules using a module load command like:

          module load CLEAR/20210117-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CLEAR/20210117-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CLHEP/", "title": "CLHEP", "text": ""}, {"location": "available_software/detail/CLHEP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CLHEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CLHEP, load one of these modules using a module load command like:

          module load CLHEP/2.4.6.4-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CLHEP/2.4.6.4-GCC-12.2.0 x x x x x x CLHEP/2.4.5.3-GCC-11.3.0 x x x x x x CLHEP/2.4.5.1-GCC-11.2.0 x x x x x x CLHEP/2.4.4.0-GCC-11.2.0 x x x x x x CLHEP/2.4.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/CMAverse/", "title": "CMAverse", "text": ""}, {"location": "available_software/detail/CMAverse/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CMAverse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CMAverse, load one of these modules using a module load command like:

          module load CMAverse/20220112-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CMAverse/20220112-foss-2021b x x x - x x"}, {"location": "available_software/detail/CMSeq/", "title": "CMSeq", "text": ""}, {"location": "available_software/detail/CMSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CMSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CMSeq, load one of these modules using a module load command like:

          module load CMSeq/1.0.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CMSeq/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CMake/", "title": "CMake", "text": ""}, {"location": "available_software/detail/CMake/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CMake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CMake, load one of these modules using a module load command like:

          module load CMake/3.27.6-GCCcore-13.2.0\n
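
          Once a CMake module is loaded, a minimal out-of-source build sketch (assuming the current directory contains a CMakeLists.txt) is:

          mkdir build && cd build   # keep build artifacts out of the source tree
          cmake ..                  # configure the project found in the parent directory
          make -j 4                 # compile with 4 parallel jobs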

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CMake/3.27.6-GCCcore-13.2.0 x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x CMake/3.24.3-GCCcore-11.3.0 x x x x x x CMake/3.23.1-GCCcore-11.3.0 x x x x x x CMake/3.22.1-GCCcore-11.2.0 x x x x x x CMake/3.21.1-GCCcore-11.2.0 x x x x x x CMake/3.20.1-GCCcore-10.3.0 x x x x x x CMake/3.20.1-GCCcore-10.2.0 x - - - - - CMake/3.18.4-GCCcore-10.2.0 x x x x x x CMake/3.16.4-GCCcore-9.3.0 x x x x x x CMake/3.15.3-GCCcore-8.3.0 x x x x x x CMake/3.13.3-GCCcore-8.2.0 - x - - - - CMake/3.12.1 x x x x x x CMake/3.11.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/COLMAP/", "title": "COLMAP", "text": ""}, {"location": "available_software/detail/COLMAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which COLMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using COLMAP, load one of these modules using a module load command like:

          module load COLMAP/3.8-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty COLMAP/3.8-foss-2022b x x x x x x"}, {"location": "available_software/detail/CONCOCT/", "title": "CONCOCT", "text": ""}, {"location": "available_software/detail/CONCOCT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CONCOCT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CONCOCT, load one of these modules using a module load command like:

          module load CONCOCT/1.1.0-foss-2020b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CONCOCT/1.1.0-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CP2K/", "title": "CP2K", "text": ""}, {"location": "available_software/detail/CP2K/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CP2K installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CP2K, load one of these modules using a module load command like:

          module load CP2K/2023.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CP2K/2023.1-foss-2023a x x x x x x CP2K/2023.1-foss-2022b x x x x x x CP2K/2022.1-foss-2022a x x x x x x CP2K/9.1-foss-2022a x x x x x x CP2K/8.2-foss-2021a - x x x x - CP2K/8.1-foss-2020b - x x x x - CP2K/7.1-intel-2020a - x x - x x CP2K/7.1-foss-2020a - x x - x x CP2K/6.1-intel-2020a - x x - x x CP2K/5.1-iomkl-2020a - x - - - - CP2K/5.1-intel-2020a-O1 - x - - - - CP2K/5.1-intel-2020a - x x - x x CP2K/5.1-intel-2019b - x - - - - CP2K/5.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/CPC2/", "title": "CPC2", "text": ""}, {"location": "available_software/detail/CPC2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CPC2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CPC2, load one of these modules using a module load command like:

          module load CPC2/1.0.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CPC2/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CPLEX/", "title": "CPLEX", "text": ""}, {"location": "available_software/detail/CPLEX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CPLEX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CPLEX, load one of these modules using a module load command like:

          module load CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4 x x x x x x"}, {"location": "available_software/detail/CPPE/", "title": "CPPE", "text": ""}, {"location": "available_software/detail/CPPE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CPPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CPPE, load one of these modules using a module load command like:

          module load CPPE/0.3.1-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CPPE/0.3.1-GCC-12.2.0 x x x x x x CPPE/0.3.1-GCC-11.3.0 - x x x x x"}, {"location": "available_software/detail/CREST/", "title": "CREST", "text": ""}, {"location": "available_software/detail/CREST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CREST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CREST, load one of these modules using a module load command like:

          module load CREST/2.12-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CREST/2.12-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CRISPR-DAV/", "title": "CRISPR-DAV", "text": ""}, {"location": "available_software/detail/CRISPR-DAV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CRISPR-DAV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CRISPR-DAV, load one of these modules using a module load command like:

          module load CRISPR-DAV/2.3.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CRISPR-DAV/2.3.4-foss-2020b - x x x x -"}, {"location": "available_software/detail/CRISPResso2/", "title": "CRISPResso2", "text": ""}, {"location": "available_software/detail/CRISPResso2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CRISPResso2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CRISPResso2, load one of these modules using a module load command like:

          module load CRISPResso2/2.2.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CRISPResso2/2.2.1-foss-2020b - x x x x x CRISPResso2/2.1.2-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CRYSTAL17/", "title": "CRYSTAL17", "text": ""}, {"location": "available_software/detail/CRYSTAL17/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CRYSTAL17 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CRYSTAL17, load one of these modules using a module load command like:

          module load CRYSTAL17/1.0.2-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CRYSTAL17/1.0.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/CSBDeep/", "title": "CSBDeep", "text": ""}, {"location": "available_software/detail/CSBDeep/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CSBDeep installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CSBDeep, load one of these modules using a module load command like:

          module load CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0 x - - - x - CSBDeep/0.7.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CUDA/", "title": "CUDA", "text": ""}, {"location": "available_software/detail/CUDA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CUDA, load one of these modules using a module load command like:

          module load CUDA/12.1.1\n
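
          For example, a minimal sketch to confirm which CUDA toolkit the loaded module provides:

          # sketch only: report the toolkit version that the module puts on the path\nmodule load CUDA/12.1.1\nnvcc --version\n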

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CUDA/12.1.1 x - x - x - CUDA/11.7.0 x x x x x x CUDA/11.4.1 x - - - x - CUDA/11.3.1 x x x - x x CUDA/11.1.1-iccifort-2020.4.304 - - - - x - CUDA/11.1.1-GCC-10.2.0 x x x x x x CUDA/11.0.2-iccifort-2020.1.217 - - - - x - CUDA/10.1.243-iccifort-2019.5.281 - - - - x - CUDA/10.1.243-GCC-8.3.0 x - - - x -"}, {"location": "available_software/detail/CUDAcore/", "title": "CUDAcore", "text": ""}, {"location": "available_software/detail/CUDAcore/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CUDAcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CUDAcore, load one of these modules using a module load command like:

          module load CUDAcore/11.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CUDAcore/11.2.1 x - x - x - CUDAcore/11.1.1 x x x x x x CUDAcore/11.0.2 - - - - x -"}, {"location": "available_software/detail/CUnit/", "title": "CUnit", "text": ""}, {"location": "available_software/detail/CUnit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CUnit, load one of these modules using a module load command like:

          module load CUnit/2.1-3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CUnit/2.1-3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/CVXOPT/", "title": "CVXOPT", "text": ""}, {"location": "available_software/detail/CVXOPT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CVXOPT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CVXOPT, load one of these modules using a module load command like:

          module load CVXOPT/1.3.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CVXOPT/1.3.1-foss-2022a x x x x x x CVXOPT/1.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Calib/", "title": "Calib", "text": ""}, {"location": "available_software/detail/Calib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Calib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Calib, load one of these modules using a module load command like:

          module load Calib/0.3.4-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Calib/0.3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/Cantera/", "title": "Cantera", "text": ""}, {"location": "available_software/detail/Cantera/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cantera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cantera, load one of these modules using a module load command like:

          module load Cantera/3.0.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cantera/3.0.0-foss-2023a x x x x x x Cantera/2.6.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/CapnProto/", "title": "CapnProto", "text": ""}, {"location": "available_software/detail/CapnProto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CapnProto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CapnProto, load one of these modules using a module load command like:

          module load CapnProto/1.0.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x CapnProto/0.9.1-GCCcore-11.2.0 x x x - x x CapnProto/0.8.0-GCCcore-9.3.0 - x x x - x"}, {"location": "available_software/detail/Cartopy/", "title": "Cartopy", "text": ""}, {"location": "available_software/detail/Cartopy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cartopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cartopy, load one of these modules using a module load command like:

          module load Cartopy/0.22.0-foss-2023a\n
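
          For example, a minimal sketch to confirm the Python package is importable after loading the module (cartopy is the upstream import name):

          # sketch only: import the package with the Python provided alongside the module\nmodule load Cartopy/0.22.0-foss-2023a\npython -c 'import cartopy; print(cartopy.__version__)'\n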

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cartopy/0.22.0-foss-2023a x x x x x x Cartopy/0.20.3-foss-2022a x x x x x x Cartopy/0.20.3-foss-2021b x x x x x x Cartopy/0.19.0.post1-intel-2020b - x x - x x Cartopy/0.19.0.post1-foss-2020b - x x x x x Cartopy/0.18.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Casanovo/", "title": "Casanovo", "text": ""}, {"location": "available_software/detail/Casanovo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Casanovo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Casanovo, load one of these modules using a module load command like:

          module load Casanovo/3.3.0-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Casanovo/3.3.0-foss-2022a-CUDA-11.7.0 x - - - x - Casanovo/3.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CatBoost/", "title": "CatBoost", "text": ""}, {"location": "available_software/detail/CatBoost/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CatBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CatBoost, load one of these modules using a module load command like:

          module load CatBoost/1.2-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CatBoost/1.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CatLearn/", "title": "CatLearn", "text": ""}, {"location": "available_software/detail/CatLearn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CatLearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CatLearn, load one of these modules using a module load command like:

          module load CatLearn/0.6.2-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CatLearn/0.6.2-intel-2022a x x x x x x"}, {"location": "available_software/detail/CatMAP/", "title": "CatMAP", "text": ""}, {"location": "available_software/detail/CatMAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CatMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CatMAP, load one of these modules using a module load command like:

          module load CatMAP/20220519-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CatMAP/20220519-foss-2022a x x x x x x"}, {"location": "available_software/detail/Catch2/", "title": "Catch2", "text": ""}, {"location": "available_software/detail/Catch2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Catch2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Catch2, load one of these modules using a module load command like:

          module load Catch2/2.13.9-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Catch2/2.13.9-GCCcore-13.2.0 x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Cbc/", "title": "Cbc", "text": ""}, {"location": "available_software/detail/Cbc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cbc, load one of these modules using a module load command like:

          module load Cbc/2.10.11-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cbc/2.10.11-foss-2023a x x x x x x Cbc/2.10.5-foss-2022b x x x x x x"}, {"location": "available_software/detail/CellBender/", "title": "CellBender", "text": ""}, {"location": "available_software/detail/CellBender/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellBender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellBender, load one of these modules using a module load command like:

          module load CellBender/0.3.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellBender/0.3.1-foss-2022a-CUDA-11.7.0 x - x - x - CellBender/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellOracle/", "title": "CellOracle", "text": ""}, {"location": "available_software/detail/CellOracle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellOracle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellOracle, load one of these modules using a module load command like:

          module load CellOracle/0.12.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellOracle/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellProfiler/", "title": "CellProfiler", "text": ""}, {"location": "available_software/detail/CellProfiler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellProfiler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellProfiler, load one of these modules using a module load command like:

          module load CellProfiler/4.2.4-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellProfiler/4.2.4-foss-2021a x x x - x x"}, {"location": "available_software/detail/CellRanger-ATAC/", "title": "CellRanger-ATAC", "text": ""}, {"location": "available_software/detail/CellRanger-ATAC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellRanger-ATAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellRanger-ATAC, load one of these modules using a module load command like:

          module load CellRanger-ATAC/2.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellRanger-ATAC/2.1.0 x x x x x x CellRanger-ATAC/2.0.0 - x x - x -"}, {"location": "available_software/detail/CellRanger/", "title": "CellRanger", "text": ""}, {"location": "available_software/detail/CellRanger/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellRanger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellRanger, load one of these modules using a module load command like:

          module load CellRanger/7.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellRanger/7.0.0 - x x x x x CellRanger/6.1.2 - x x - x x CellRanger/6.0.1 - x x - x - CellRanger/4.0.0 - - x - x - CellRanger/3.1.0 - - x - x -"}, {"location": "available_software/detail/CellRank/", "title": "CellRank", "text": ""}, {"location": "available_software/detail/CellRank/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellRank installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellRank, load one of these modules using a module load command like:

          module load CellRank/2.0.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellRank/2.0.2-foss-2022a x x x x x x CellRank/1.4.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/CellTypist/", "title": "CellTypist", "text": ""}, {"location": "available_software/detail/CellTypist/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CellTypist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CellTypist, load one of these modules using a module load command like:

          module load CellTypist/1.6.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CellTypist/1.6.2-foss-2023a x x x x x x CellTypist/1.0.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Cellpose/", "title": "Cellpose", "text": ""}, {"location": "available_software/detail/Cellpose/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cellpose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cellpose, load one of these modules using a module load command like:

          module load Cellpose/2.2.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cellpose/2.2.2-foss-2022a-CUDA-11.7.0 x - - - x - Cellpose/2.2.2-foss-2022a x - x x x x"}, {"location": "available_software/detail/Centrifuge/", "title": "Centrifuge", "text": ""}, {"location": "available_software/detail/Centrifuge/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Centrifuge installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Centrifuge, load one of these modules using a module load command like:

          module load Centrifuge/1.0.4-beta-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Centrifuge/1.0.4-beta-gompi-2020a - x x - x x"}, {"location": "available_software/detail/Cereal/", "title": "Cereal", "text": ""}, {"location": "available_software/detail/Cereal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cereal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cereal, load one of these modules using a module load command like:

          module load Cereal/1.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cereal/1.3.0 x x x x x x"}, {"location": "available_software/detail/Ceres-Solver/", "title": "Ceres-Solver", "text": ""}, {"location": "available_software/detail/Ceres-Solver/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ceres-Solver installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Ceres-Solver, load one of these modules using a module load command like:

          module load Ceres-Solver/2.2.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ceres-Solver/2.2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Cgl/", "title": "Cgl", "text": ""}, {"location": "available_software/detail/Cgl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cgl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cgl, load one of these modules using a module load command like:

          module load Cgl/0.60.8-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cgl/0.60.8-foss-2023a x x x x x x Cgl/0.60.7-foss-2022b x x x x x x"}, {"location": "available_software/detail/CharLS/", "title": "CharLS", "text": ""}, {"location": "available_software/detail/CharLS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CharLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CharLS, load one of these modules using a module load command like:

          module load CharLS/2.4.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CharLS/2.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CheMPS2/", "title": "CheMPS2", "text": ""}, {"location": "available_software/detail/CheMPS2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CheMPS2, load one of these modules using a module load command like:

          module load CheMPS2/1.8.12-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CheMPS2/1.8.12-foss-2022b x x x x x x CheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/Check/", "title": "Check", "text": ""}, {"location": "available_software/detail/Check/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Check installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Check, load one of these modules using a module load command like:

          module load Check/0.15.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Check/0.15.2-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/CheckM/", "title": "CheckM", "text": ""}, {"location": "available_software/detail/CheckM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CheckM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CheckM, load one of these modules using a module load command like:

          module load CheckM/1.1.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CheckM/1.1.3-intel-2020a-Python-3.8.2 - x x - x x CheckM/1.1.3-foss-2021b x x x - x x CheckM/1.1.2-intel-2019b-Python-3.7.4 - x x - x x CheckM/1.1.2-foss-2019b-Python-3.7.4 - x x - x x CheckM/1.0.18-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/Chimera/", "title": "Chimera", "text": ""}, {"location": "available_software/detail/Chimera/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Chimera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Chimera, load one of these modules using a module load command like:

          module load Chimera/1.16-linux_x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Chimera/1.16-linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Circlator/", "title": "Circlator", "text": ""}, {"location": "available_software/detail/Circlator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Circlator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Circlator, load one of these modules using a module load command like:

          module load Circlator/1.5.5-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Circlator/1.5.5-foss-2023a x x x x x x"}, {"location": "available_software/detail/Circuitscape/", "title": "Circuitscape", "text": ""}, {"location": "available_software/detail/Circuitscape/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Circuitscape installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Circuitscape, load one of these modules using a module load command like:

          module load Circuitscape/5.12.3-Julia-1.7.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Circuitscape/5.12.3-Julia-1.7.2 x x x x x x"}, {"location": "available_software/detail/Clair3/", "title": "Clair3", "text": ""}, {"location": "available_software/detail/Clair3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clair3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clair3, load one of these modules using a module load command like:

          module load Clair3/1.0.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clair3/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/Clang/", "title": "Clang", "text": ""}, {"location": "available_software/detail/Clang/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clang installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clang, load one of these modules using a module load command like:

          module load Clang/16.0.6-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clang/16.0.6-GCCcore-12.3.0 x x x x x x Clang/15.0.5-GCCcore-11.3.0 x x x x x x Clang/13.0.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - Clang/13.0.1-GCCcore-11.3.0 x x x x x x Clang/12.0.1-GCCcore-11.2.0 x x x x x x Clang/12.0.1-GCCcore-10.3.0 x x x x x x Clang/11.0.1-gcccuda-2020b - - - - x - Clang/11.0.1-GCCcore-10.2.0 - x x x x x Clang/10.0.0-GCCcore-9.3.0 - x x - x x Clang/9.0.1-GCCcore-8.3.0 - x x - x x Clang/9.0.1-GCC-8.3.0-CUDA-10.1.243 x - - - x -"}, {"location": "available_software/detail/Clp/", "title": "Clp", "text": ""}, {"location": "available_software/detail/Clp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clp, load one of these modules using a module load command like:

          module load Clp/1.17.9-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clp/1.17.9-foss-2023a x x x x x x Clp/1.17.8-foss-2022b x x x x x x Clp/1.17.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/Clustal-Omega/", "title": "Clustal-Omega", "text": ""}, {"location": "available_software/detail/Clustal-Omega/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Clustal-Omega installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Clustal-Omega, load one of these modules using a module load command like:

          module load Clustal-Omega/1.2.4-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Clustal-Omega/1.2.4-intel-compilers-2021.2.0 - x x - x x Clustal-Omega/1.2.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/ClustalW2/", "title": "ClustalW2", "text": ""}, {"location": "available_software/detail/ClustalW2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ClustalW2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ClustalW2, load one of these modules using a module load command like:

          module load ClustalW2/2.1-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ClustalW2/2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/CmdStanR/", "title": "CmdStanR", "text": ""}, {"location": "available_software/detail/CmdStanR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CmdStanR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CmdStanR, load one of these modules using a module load command like:

          module load CmdStanR/0.7.1-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CmdStanR/0.7.1-foss-2023a-R-4.3.2 x x x x x x CmdStanR/0.5.2-foss-2022a-R-4.2.1 x x x x x x CmdStanR/0.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/CodAn/", "title": "CodAn", "text": ""}, {"location": "available_software/detail/CodAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CodAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CodAn, load one of these modules using a module load command like:

          module load CodAn/1.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CodAn/1.2-foss-2021b x x x x x x"}, {"location": "available_software/detail/CoinUtils/", "title": "CoinUtils", "text": ""}, {"location": "available_software/detail/CoinUtils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CoinUtils, load one of these modules using a module load command like:

          module load CoinUtils/2.11.10-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CoinUtils/2.11.10-GCC-12.3.0 x x x x x x CoinUtils/2.11.9-GCC-12.2.0 x x x x x x CoinUtils/2.11.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/ColabFold/", "title": "ColabFold", "text": ""}, {"location": "available_software/detail/ColabFold/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ColabFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ColabFold, load one of these modules using a module load command like:

          module load ColabFold/1.5.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ColabFold/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - ColabFold/1.5.2-foss-2022a - - x - x -"}, {"location": "available_software/detail/CompareM/", "title": "CompareM", "text": ""}, {"location": "available_software/detail/CompareM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CompareM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CompareM, load one of these modules using a module load command like:

          module load CompareM/0.1.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CompareM/0.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Compress-Raw-Zlib/", "title": "Compress-Raw-Zlib", "text": ""}, {"location": "available_software/detail/Compress-Raw-Zlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Compress-Raw-Zlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Compress-Raw-Zlib, load one of these modules using a module load command like:

          module load Compress-Raw-Zlib/2.202-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Compress-Raw-Zlib/2.202-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Concorde/", "title": "Concorde", "text": ""}, {"location": "available_software/detail/Concorde/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Concorde installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Concorde, load one of these modules using a module load command like:

          module load Concorde/20031219-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Concorde/20031219-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/CoordgenLibs/", "title": "CoordgenLibs", "text": ""}, {"location": "available_software/detail/CoordgenLibs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CoordgenLibs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CoordgenLibs, load one of these modules using a module load command like:

          module load CoordgenLibs/3.0.1-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CoordgenLibs/3.0.1-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/CopyKAT/", "title": "CopyKAT", "text": ""}, {"location": "available_software/detail/CopyKAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CopyKAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CopyKAT, load one of these modules using a module load command like:

          module load CopyKAT/1.1.0-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CopyKAT/1.1.0-foss-2022b-R-4.2.2 x x x x x x CopyKAT/1.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Coreutils/", "title": "Coreutils", "text": ""}, {"location": "available_software/detail/Coreutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Coreutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Coreutils, load one of these modules using a module load command like:

          module load Coreutils/8.32-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Coreutils/8.32-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CppUnit/", "title": "CppUnit", "text": ""}, {"location": "available_software/detail/CppUnit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CppUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CppUnit, load one of these modules using a module load command like:

          module load CppUnit/1.15.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CppUnit/1.15.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/CuPy/", "title": "CuPy", "text": ""}, {"location": "available_software/detail/CuPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which CuPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using CuPy, load one of these modules using a module load command like:

          module load CuPy/8.5.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty CuPy/8.5.0-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Cufflinks/", "title": "Cufflinks", "text": ""}, {"location": "available_software/detail/Cufflinks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cufflinks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cufflinks, load one of these modules using a module load command like:

          module load Cufflinks/20190706-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cufflinks/20190706-GCC-11.2.0 x x x x x x Cufflinks/20190706-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Cython/", "title": "Cython", "text": ""}, {"location": "available_software/detail/Cython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Cython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Cython, load one of these modules using a module load command like:

          module load Cython/3.0.8-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Cython/3.0.8-GCCcore-12.2.0 x x x x x x Cython/3.0.7-GCCcore-12.3.0 x x x x x x Cython/0.29.33-GCCcore-11.3.0 x x x x x x Cython/0.29.22-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/DALI/", "title": "DALI", "text": ""}, {"location": "available_software/detail/DALI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DALI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DALI, load one of these modules using a module load command like:

          module load DALI/2.1.2-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DALI/2.1.2-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/DAS_Tool/", "title": "DAS_Tool", "text": ""}, {"location": "available_software/detail/DAS_Tool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DAS_Tool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DAS_Tool, load one of these modules using a module load command like:

          module load DAS_Tool/1.1.1-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DAS_Tool/1.1.1-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/DB/", "title": "DB", "text": ""}, {"location": "available_software/detail/DB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DB, load one of these modules using a module load command like:

          module load DB/18.1.40-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DB/18.1.40-GCCcore-12.2.0 x x x x x x DB/18.1.40-GCCcore-11.3.0 x x x x x x DB/18.1.40-GCCcore-11.2.0 x x x x x x DB/18.1.40-GCCcore-10.3.0 x x x x x x DB/18.1.40-GCCcore-10.2.0 x x x x x x DB/18.1.32-GCCcore-9.3.0 x x x x x x DB/18.1.32-GCCcore-8.3.0 x x x x x x DB/18.1.32-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/DBD-mysql/", "title": "DBD-mysql", "text": ""}, {"location": "available_software/detail/DBD-mysql/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DBD-mysql installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DBD-mysql, load one of these modules using a module load command like:

          module load DBD-mysql/4.050-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DBD-mysql/4.050-GCC-11.3.0 x x x x x x DBD-mysql/4.050-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/DBG2OLC/", "title": "DBG2OLC", "text": ""}, {"location": "available_software/detail/DBG2OLC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DBG2OLC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DBG2OLC, load one of these modules using a module load command like:

          module load DBG2OLC/20200724-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DBG2OLC/20200724-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/DB_File/", "title": "DB_File", "text": ""}, {"location": "available_software/detail/DB_File/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DB_File installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DB_File, load one of these modules using a module load command like:

          module load DB_File/1.858-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DB_File/1.858-GCCcore-11.3.0 x x x x x x DB_File/1.857-GCCcore-11.2.0 x x x x x x DB_File/1.855-GCCcore-10.2.0 - x x x x x DB_File/1.835-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/DBus/", "title": "DBus", "text": ""}, {"location": "available_software/detail/DBus/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DBus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DBus, load one of these modules using a module load command like:

          module load DBus/1.15.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DBus/1.15.4-GCCcore-12.3.0 x x x x x x DBus/1.15.2-GCCcore-12.2.0 x x x x x x DBus/1.14.0-GCCcore-11.3.0 x x x x x x DBus/1.13.18-GCCcore-11.2.0 x x x x x x DBus/1.13.18-GCCcore-10.3.0 x x x x x x DBus/1.13.18-GCCcore-10.2.0 x x x x x x DBus/1.13.12-GCCcore-9.3.0 - x x - x x DBus/1.13.12-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/DETONATE/", "title": "DETONATE", "text": ""}, {"location": "available_software/detail/DETONATE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DETONATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DETONATE, load one of these modules using a module load command like:

          module load DETONATE/1.11-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DETONATE/1.11-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/DFT-D3/", "title": "DFT-D3", "text": ""}, {"location": "available_software/detail/DFT-D3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DFT-D3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DFT-D3, load one of these modules using a module load command like:

          module load DFT-D3/3.2.0-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DFT-D3/3.2.0-intel-compilers-2021.2.0 - x x - x x DFT-D3/3.2.0-iccifort-2020.4.304 - x x x x x"}, {"location": "available_software/detail/DIA-NN/", "title": "DIA-NN", "text": ""}, {"location": "available_software/detail/DIA-NN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIA-NN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIA-NN, load one of these modules using a module load command like:

          module load DIA-NN/1.8.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIA-NN/1.8.1 x x x - x x"}, {"location": "available_software/detail/DIALOGUE/", "title": "DIALOGUE", "text": ""}, {"location": "available_software/detail/DIALOGUE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIALOGUE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIALOGUE, load one of these modules using a module load command like:

          module load DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0 x x x x x x"}, {"location": "available_software/detail/DIAMOND/", "title": "DIAMOND", "text": ""}, {"location": "available_software/detail/DIAMOND/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIAMOND installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIAMOND, load one of these modules using a module load command like:

          module load DIAMOND/2.1.8-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIAMOND/2.1.8-GCC-12.3.0 x x x x x x DIAMOND/2.1.8-GCC-12.2.0 x x x x x x DIAMOND/2.1.0-GCC-11.3.0 x x x x x x DIAMOND/2.0.13-GCC-11.2.0 x x x x x x DIAMOND/2.0.11-GCC-10.3.0 - x x - x x DIAMOND/2.0.7-GCC-10.2.0 x x x x x x DIAMOND/2.0.6-GCC-10.2.0 - x - - - - DIAMOND/0.9.30-iccifort-2019.5.281 - x x - x x DIAMOND/0.9.30-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/DIANA/", "title": "DIANA", "text": ""}, {"location": "available_software/detail/DIANA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIANA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIANA, load one of these modules using a module load command like:

          module load DIANA/10.5\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIANA/10.5 - x x - x - DIANA/10.4 - - x - x -"}, {"location": "available_software/detail/DIRAC/", "title": "DIRAC", "text": ""}, {"location": "available_software/detail/DIRAC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DIRAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DIRAC, load one of these modules using a module load command like:

          module load DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64 - x x - x - DIRAC/19.0-intel-2020a-Python-2.7.18-int64 - x x - x x"}, {"location": "available_software/detail/DL_POLY_Classic/", "title": "DL_POLY_Classic", "text": ""}, {"location": "available_software/detail/DL_POLY_Classic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DL_POLY_Classic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DL_POLY_Classic, load one of these modules using a module load command like:

          module load DL_POLY_Classic/1.10-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DL_POLY_Classic/1.10-intel-2019b - x x - x x DL_POLY_Classic/1.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/DMCfun/", "title": "DMCfun", "text": ""}, {"location": "available_software/detail/DMCfun/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DMCfun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using DMCfun, load one of these modules using a module load command like:

          module load DMCfun/1.3.0-foss-2019b-R-3.6.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DMCfun/1.3.0-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/DOLFIN/", "title": "DOLFIN", "text": ""}, {"location": "available_software/detail/DOLFIN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DOLFIN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DOLFIN, load one of these modules using a module load command like:

          module load DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/DRAGMAP/", "title": "DRAGMAP", "text": ""}, {"location": "available_software/detail/DRAGMAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DRAGMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DRAGMAP, load one of these modules using a module load command like:

          module load DRAGMAP/1.3.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DRAGMAP/1.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/DROP/", "title": "DROP", "text": ""}, {"location": "available_software/detail/DROP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DROP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DROP, load one of these modules using a module load command like:

          module load DROP/1.1.0-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DROP/1.1.0-foss-2020b-R-4.0.3 - x x x x x DROP/1.0.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/DUBStepR/", "title": "DUBStepR", "text": ""}, {"location": "available_software/detail/DUBStepR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DUBStepR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DUBStepR, load one of these modules using a module load command like:

          module load DUBStepR/1.2.0-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DUBStepR/1.2.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Dakota/", "title": "Dakota", "text": ""}, {"location": "available_software/detail/Dakota/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dakota installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Dakota, load one of these modules using a module load command like:

          module load Dakota/6.16.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dakota/6.16.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Dalton/", "title": "Dalton", "text": ""}, {"location": "available_software/detail/Dalton/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dalton installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Dalton, load one of these modules using a module load command like:

          module load Dalton/2020.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dalton/2020.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/DeepLoc/", "title": "DeepLoc", "text": ""}, {"location": "available_software/detail/DeepLoc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DeepLoc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DeepLoc, load one of these modules using a module load command like:

          module load DeepLoc/2.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DeepLoc/2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Delly/", "title": "Delly", "text": ""}, {"location": "available_software/detail/Delly/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Delly installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Delly, load one of these modules using a module load command like:

          module load Delly/0.8.7-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Delly/0.8.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/DendroPy/", "title": "DendroPy", "text": ""}, {"location": "available_software/detail/DendroPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DendroPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DendroPy, load one of these modules using a module load command like:

          module load DendroPy/4.6.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.2.0 x x x - x x DendroPy/4.5.2-GCCcore-10.2.0-Python-2.7.18 - x x x x x DendroPy/4.5.2-GCCcore-10.2.0 - x x x x x DendroPy/4.4.0-GCCcore-9.3.0 - x x - x x DendroPy/4.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/DensPart/", "title": "DensPart", "text": ""}, {"location": "available_software/detail/DensPart/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DensPart installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DensPart, load one of these modules using a module load command like:

          module load DensPart/20220603-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DensPart/20220603-intel-2022a x x x x x x"}, {"location": "available_software/detail/Deprecated/", "title": "Deprecated", "text": ""}, {"location": "available_software/detail/Deprecated/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Deprecated installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Deprecated, load one of these modules using a module load command like:

          module load Deprecated/1.2.13-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Deprecated/1.2.13-foss-2022a x x x x x x Deprecated/1.2.13-foss-2021a x x x x x x"}, {"location": "available_software/detail/DiCE-ML/", "title": "DiCE-ML", "text": ""}, {"location": "available_software/detail/DiCE-ML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DiCE-ML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DiCE-ML, load one of these modules using a module load command like:

          module load DiCE-ML/0.9-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DiCE-ML/0.9-foss-2022a x x x x x x"}, {"location": "available_software/detail/Dice/", "title": "Dice", "text": ""}, {"location": "available_software/detail/Dice/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dice installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Dice, load one of these modules using a module load command like:

          module load Dice/20240101-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dice/20240101-foss-2022b x x x x x x Dice/20221025-foss-2022a - x x x x x"}, {"location": "available_software/detail/DoubletFinder/", "title": "DoubletFinder", "text": ""}, {"location": "available_software/detail/DoubletFinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DoubletFinder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DoubletFinder, load one of these modules using a module load command like:

          module load DoubletFinder/2.0.3-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DoubletFinder/2.0.3-foss-2020a-R-4.0.0 - - x - x - DoubletFinder/2.0.3-20230819-foss-2022b-R-4.2.2 x x x x x x DoubletFinder/2.0.3-20230131-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Doxygen/", "title": "Doxygen", "text": ""}, {"location": "available_software/detail/Doxygen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Doxygen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Doxygen, load one of these modules using a module load command like:

          module load Doxygen/1.9.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x Doxygen/1.9.4-GCCcore-11.3.0 x x x x x x Doxygen/1.9.1-GCCcore-11.2.0 x x x x x x Doxygen/1.9.1-GCCcore-10.3.0 x x x x x x Doxygen/1.8.20-GCCcore-10.2.0 x x x x x x Doxygen/1.8.17-GCCcore-9.3.0 x x x x x x Doxygen/1.8.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Dsuite/", "title": "Dsuite", "text": ""}, {"location": "available_software/detail/Dsuite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Dsuite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Dsuite, load one of these modules using a module load command like:

          module load Dsuite/20210718-intel-compilers-2021.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Dsuite/20210718-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/DualSPHysics/", "title": "DualSPHysics", "text": ""}, {"location": "available_software/detail/DualSPHysics/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DualSPHysics installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DualSPHysics, load one of these modules using a module load command like:

          module load DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1 x - - - x -"}, {"location": "available_software/detail/DyMat/", "title": "DyMat", "text": ""}, {"location": "available_software/detail/DyMat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which DyMat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using DyMat, load one of these modules using a module load command like:

          module load DyMat/0.7-foss-2021b-2020-12-12\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty DyMat/0.7-foss-2021b-2020-12-12 x x x - x x"}, {"location": "available_software/detail/EDirect/", "title": "EDirect", "text": ""}, {"location": "available_software/detail/EDirect/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EDirect installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using EDirect, load one of these modules using a module load command like:

          module load EDirect/20.5.20231006-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EDirect/20.5.20231006-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ELPA/", "title": "ELPA", "text": ""}, {"location": "available_software/detail/ELPA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ELPA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ELPA, load one of these modules using a module load command like:

          module load ELPA/2021.05.001-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ELPA/2021.05.001-intel-2021b x x x - x x ELPA/2021.05.001-intel-2021a - x x - x x ELPA/2021.05.001-foss-2021b x x x - x x ELPA/2020.11.001-intel-2020b - x x x x x ELPA/2019.11.001-intel-2019b - x x - x x ELPA/2019.11.001-foss-2019b - x x - x x"}, {"location": "available_software/detail/EMBOSS/", "title": "EMBOSS", "text": ""}, {"location": "available_software/detail/EMBOSS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EMBOSS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using EMBOSS, load one of these modules using a module load command like:

          module load EMBOSS/6.6.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EMBOSS/6.6.0-foss-2021b x x x - x x EMBOSS/6.6.0-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ESM-2/", "title": "ESM-2", "text": ""}, {"location": "available_software/detail/ESM-2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ESM-2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ESM-2, load one of these modules using a module load command like:

          module load ESM-2/2.0.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ESM-2/2.0.0-foss-2022b x x x x x x ESM-2/2.0.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/ESMF/", "title": "ESMF", "text": ""}, {"location": "available_software/detail/ESMF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ESMF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ESMF, load one of these modules using a module load command like:

          module load ESMF/8.2.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ESMF/8.2.0-foss-2021b x x x - x x ESMF/8.1.1-foss-2021a - x x - x x ESMF/8.0.1-intel-2020b - x x x x x ESMF/8.0.1-foss-2020a - x x - x x ESMF/8.0.0-intel-2019b - x x - x x"}, {"location": "available_software/detail/ESMPy/", "title": "ESMPy", "text": ""}, {"location": "available_software/detail/ESMPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ESMPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ESMPy, load one of these modules using a module load command like:

          module load ESMPy/8.0.1-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ESMPy/8.0.1-intel-2020b - x x - x x ESMPy/8.0.1-foss-2020a-Python-3.8.2 - x x - x x ESMPy/8.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ETE/", "title": "ETE", "text": ""}, {"location": "available_software/detail/ETE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ETE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ETE, load one of these modules using a module load command like:

          module load ETE/3.1.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ETE/3.1.3-foss-2022b x x x x x x ETE/3.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/EUKulele/", "title": "EUKulele", "text": ""}, {"location": "available_software/detail/EUKulele/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EUKulele installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using EUKulele, load one of these modules using a module load command like:

          module load EUKulele/2.0.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EUKulele/2.0.6-foss-2022a x x x x x x EUKulele/1.0.4-foss-2020b - x x - x x"}, {"location": "available_software/detail/EasyBuild/", "title": "EasyBuild", "text": ""}, {"location": "available_software/detail/EasyBuild/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using EasyBuild, load one of these modules using a module load command like:

          module load EasyBuild/4.9.0\n
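
          After loading the module, a quick sanity check is to verify that the eb command on your PATH reports the same version as the module you loaded. A minimal sketch, assuming the module shown above:

          module load EasyBuild/4.9.0\neb --version\n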

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EasyBuild/4.9.0 x x x x x x EasyBuild/4.8.2 x x x x x x EasyBuild/4.8.1 x x x x x x EasyBuild/4.8.0 x x x x x x EasyBuild/4.7.1 x x x x x x EasyBuild/4.7.0 x x x x x x EasyBuild/4.6.2 x x x x x x EasyBuild/4.6.1 x x x x x x EasyBuild/4.6.0 x x x x x x EasyBuild/4.5.5 x x x x x x EasyBuild/4.5.4 x x x x x x EasyBuild/4.5.3 x x x x x x EasyBuild/4.5.2 x x x x x x EasyBuild/4.5.1 x x x x x x EasyBuild/4.5.0 x x x x x x EasyBuild/4.4.2 x x x x x x EasyBuild/4.4.1 x x x x x x EasyBuild/4.4.0 x x x x x x EasyBuild/4.3.4 x x x x x x EasyBuild/4.3.3 x x x x x x EasyBuild/4.3.2 x x x x x x EasyBuild/4.3.1 x x x x x x EasyBuild/4.3.0 x x x x x x EasyBuild/4.2.2 x x x x x x EasyBuild/4.2.1 x x x x x x EasyBuild/4.2.0 x x x x x x"}, {"location": "available_software/detail/Eigen/", "title": "Eigen", "text": ""}, {"location": "available_software/detail/Eigen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Eigen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Eigen, load one of these modules using a module load command like:

          module load Eigen/3.4.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Eigen/3.4.0-GCCcore-13.2.0 x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x Eigen/3.4.0-GCCcore-11.3.0 x x x x x x Eigen/3.4.0-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-10.3.0 x x x x x x Eigen/3.3.9-GCCcore-10.2.0 - - x x x x Eigen/3.3.8-GCCcore-10.2.0 x x x x x x Eigen/3.3.7-GCCcore-9.3.0 x x x x x x Eigen/3.3.7 x x x x x x"}, {"location": "available_software/detail/Elk/", "title": "Elk", "text": ""}, {"location": "available_software/detail/Elk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Elk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Elk, load one of these modules using a module load command like:

          module load Elk/7.0.12-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Elk/7.0.12-foss-2020b - x x x x x"}, {"location": "available_software/detail/EpiSCORE/", "title": "EpiSCORE", "text": ""}, {"location": "available_software/detail/EpiSCORE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which EpiSCORE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using EpiSCORE, load one of these modules using a module load command like:

          module load EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Excel-Writer-XLSX/", "title": "Excel-Writer-XLSX", "text": ""}, {"location": "available_software/detail/Excel-Writer-XLSX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Excel-Writer-XLSX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Excel-Writer-XLSX, load one of these modules using a module load command like:

          module load Excel-Writer-XLSX/1.09-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Excel-Writer-XLSX/1.09-foss-2020b - x x x x x"}, {"location": "available_software/detail/Exonerate/", "title": "Exonerate", "text": ""}, {"location": "available_software/detail/Exonerate/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Exonerate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Exonerate, load one of these modules using a module load command like:

          module load Exonerate/2.4.0-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Exonerate/2.4.0-iccifort-2019.5.281 - x x - x x Exonerate/2.4.0-GCC-12.2.0 x x x x x x Exonerate/2.4.0-GCC-11.2.0 x x x x x x Exonerate/2.4.0-GCC-10.2.0 x x x - x x Exonerate/2.4.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ExtremeLy/", "title": "ExtremeLy", "text": ""}, {"location": "available_software/detail/ExtremeLy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ExtremeLy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ExtremeLy, load one of these modules using a module load command like:

          module load ExtremeLy/2.3.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ExtremeLy/2.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/FALCON/", "title": "FALCON", "text": ""}, {"location": "available_software/detail/FALCON/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FALCON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FALCON, load one of these modules using a module load command like:

          module load FALCON/1.8.8-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FALCON/1.8.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/FASTA/", "title": "FASTA", "text": ""}, {"location": "available_software/detail/FASTA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FASTA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FASTA, load one of these modules using a module load command like:

          module load FASTA/36.3.8i-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FASTA/36.3.8i-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/FASTX-Toolkit/", "title": "FASTX-Toolkit", "text": ""}, {"location": "available_software/detail/FASTX-Toolkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FASTX-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FASTX-Toolkit, load one of these modules using a module load command like:

          module load FASTX-Toolkit/0.0.14-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FASTX-Toolkit/0.0.14-GCC-11.3.0 x x x x x x FASTX-Toolkit/0.0.14-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/FDS/", "title": "FDS", "text": ""}, {"location": "available_software/detail/FDS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FDS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FDS, load one of these modules using a module load command like:

          module load FDS/6.8.0-intel-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FDS/6.8.0-intel-2022b x x x x x x FDS/6.7.9-intel-2022a x x x - x x FDS/6.7.7-intel-2021b x x x - x x FDS/6.7.6-intel-2020b - x x x x x FDS/6.7.5-intel-2020b - - x - x - FDS/6.7.5-intel-2020a - x x - x x FDS/6.7.4-intel-2020a - x x - x x"}, {"location": "available_software/detail/FEniCS/", "title": "FEniCS", "text": ""}, {"location": "available_software/detail/FEniCS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FEniCS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FEniCS, load one of these modules using a module load command like:

          module load FEniCS/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FEniCS/2019.1.0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FFAVES/", "title": "FFAVES", "text": ""}, {"location": "available_software/detail/FFAVES/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFAVES installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FFAVES, load one of these modules using a module load command like:

          module load FFAVES/2022.11.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFAVES/2022.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/FFC/", "title": "FFC", "text": ""}, {"location": "available_software/detail/FFC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FFC, load one of these modules using a module load command like:

          module load FFC/2019.1.0.post0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFC/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FFTW.MPI/", "title": "FFTW.MPI", "text": ""}, {"location": "available_software/detail/FFTW.MPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FFTW.MPI, load one of these modules using a module load command like:

          module load FFTW.MPI/3.3.10-gompi-2023b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFTW.MPI/3.3.10-gompi-2023b x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x FFTW.MPI/3.3.10-gompi-2022a x x x x x x"}, {"location": "available_software/detail/FFTW/", "title": "FFTW", "text": ""}, {"location": "available_software/detail/FFTW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFTW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FFTW, load one of these modules using a module load command like:

          module load FFTW/3.3.10-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFTW/3.3.10-gompi-2021b x x x x x x FFTW/3.3.10-GCC-13.2.0 x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x FFTW/3.3.10-GCC-11.3.0 x x x x x x FFTW/3.3.9-intel-2021a - x x - x x FFTW/3.3.9-gompi-2021a x x x x x x FFTW/3.3.8-iomkl-2020a - x - - - - FFTW/3.3.8-intelcuda-2020b - - - - x - FFTW/3.3.8-intel-2020b - x x x x x FFTW/3.3.8-intel-2020a - x x - x x FFTW/3.3.8-intel-2019b - x x - x x FFTW/3.3.8-iimpi-2020b - x - - - - FFTW/3.3.8-gompic-2020b x - - - x - FFTW/3.3.8-gompi-2020b x x x x x x FFTW/3.3.8-gompi-2020a - x x - x x FFTW/3.3.8-gompi-2019b x x x - x x"}, {"location": "available_software/detail/FFmpeg/", "title": "FFmpeg", "text": ""}, {"location": "available_software/detail/FFmpeg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FFmpeg, load one of these modules using a module load command like:

          module load FFmpeg/6.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FFmpeg/6.0-GCCcore-12.3.0 x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x FFmpeg/4.4.2-GCCcore-11.3.0 x x x x x x FFmpeg/4.3.2-GCCcore-11.2.0 x x x x x x FFmpeg/4.3.2-GCCcore-10.3.0 x x x x x x FFmpeg/4.3.1-GCCcore-10.2.0 x x x x x x FFmpeg/4.2.2-GCCcore-9.3.0 - x x - x x FFmpeg/4.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FIAT/", "title": "FIAT", "text": ""}, {"location": "available_software/detail/FIAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FIAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FIAT, load one of these modules using a module load command like:

          module load FIAT/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FIAT/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FIGARO/", "title": "FIGARO", "text": ""}, {"location": "available_software/detail/FIGARO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FIGARO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FIGARO, load one of these modules using a module load command like:

          module load FIGARO/1.1.2-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FIGARO/1.1.2-intel-2020b - - x - x x"}, {"location": "available_software/detail/FLAC/", "title": "FLAC", "text": ""}, {"location": "available_software/detail/FLAC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLAC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FLAC, load one of these modules using a module load command like:

          module load FLAC/1.4.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLAC/1.4.2-GCCcore-12.3.0 x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x FLAC/1.3.4-GCCcore-11.3.0 x x x x x x FLAC/1.3.3-GCCcore-11.2.0 x x x x x x FLAC/1.3.3-GCCcore-10.3.0 x x x x x x FLAC/1.3.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/FLAIR/", "title": "FLAIR", "text": ""}, {"location": "available_software/detail/FLAIR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLAIR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FLAIR, load one of these modules using a module load command like:

          module load FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4 - x x - x - FLAIR/1.5-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FLANN/", "title": "FLANN", "text": ""}, {"location": "available_software/detail/FLANN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLANN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FLANN, load one of these modules using a module load command like:

          module load FLANN/1.9.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLANN/1.9.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/FLASH/", "title": "FLASH", "text": ""}, {"location": "available_software/detail/FLASH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLASH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FLASH, load one of these modules using a module load command like:

          module load FLASH/2.2.00-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLASH/2.2.00-foss-2020b - x x x x x FLASH/2.2.00-GCC-11.2.0 x x x - x x FLASH/1.2.11-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/FLTK/", "title": "FLTK", "text": ""}, {"location": "available_software/detail/FLTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FLTK, load one of these modules using a module load command like:

          module load FLTK/1.3.5-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLTK/1.3.5-GCCcore-10.2.0 - x x x x x FLTK/1.3.5-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/FLUENT/", "title": "FLUENT", "text": ""}, {"location": "available_software/detail/FLUENT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FLUENT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FLUENT, load one of these modules using a module load command like:

          module load FLUENT/2023R1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FLUENT/2023R1 x x x x x x FLUENT/2022R1 - x x - x x FLUENT/2021R2 x x x x x x FLUENT/2019R3 - x x - x x"}, {"location": "available_software/detail/FMM3D/", "title": "FMM3D", "text": ""}, {"location": "available_software/detail/FMM3D/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FMM3D installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FMM3D, load one of these modules using a module load command like:

          module load FMM3D/20211018-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FMM3D/20211018-foss-2020b - x x x x x"}, {"location": "available_software/detail/FMPy/", "title": "FMPy", "text": ""}, {"location": "available_software/detail/FMPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FMPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FMPy, load one of these modules using a module load command like:

          module load FMPy/0.3.2-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FMPy/0.3.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/FSL/", "title": "FSL", "text": ""}, {"location": "available_software/detail/FSL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FSL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FSL, load one of these modules using a module load command like:

          module load FSL/6.0.7.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FSL/6.0.7.2 x x x x x x FSL/6.0.5.1-foss-2021a - x x - x x FSL/6.0.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FabIO/", "title": "FabIO", "text": ""}, {"location": "available_software/detail/FabIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FabIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FabIO, load one of these modules using a module load command like:

          module load FabIO/0.11.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FabIO/0.11.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Faiss/", "title": "Faiss", "text": ""}, {"location": "available_software/detail/Faiss/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Faiss installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Faiss, load one of these modules using a module load command like:

          module load Faiss/1.7.2-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Faiss/1.7.2-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/FastANI/", "title": "FastANI", "text": ""}, {"location": "available_software/detail/FastANI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastANI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FastANI, load one of these modules using a module load command like:

          module load FastANI/1.34-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastANI/1.34-GCC-12.3.0 x x x x x x FastANI/1.33-intel-compilers-2021.4.0 x x x - x x FastANI/1.33-iccifort-2020.4.304 - x x x x x FastANI/1.33-GCC-11.2.0 x x x - x x FastANI/1.33-GCC-10.2.0 - x x - x - FastANI/1.31-iccifort-2020.1.217 - x x - x x FastANI/1.3-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/FastME/", "title": "FastME", "text": ""}, {"location": "available_software/detail/FastME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FastME, load one of these modules using a module load command like:

          module load FastME/2.1.6.3-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastME/2.1.6.3-GCC-12.3.0 x x x x x x FastME/2.1.6.1-iccifort-2019.5.281 - x x - x x FastME/2.1.6.1-GCC-10.2.0 - x x x x x FastME/2.1.6.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastQC/", "title": "FastQC", "text": ""}, {"location": "available_software/detail/FastQC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastQC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FastQC, load one of these modules using a module load command like:

          module load FastQC/0.11.9-Java-11\n
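
          For use in a batch job, a minimal job script sketch could look as follows. The PBS-style directives, the requested resources and the input file name my_sample.fastq.gz are illustrative assumptions only, not taken from this page; adjust them to your own situation:

          #!/bin/bash\n# Minimal sketch: run FastQC on a single input file (resource requests are assumptions)\n#PBS -N fastqc_test\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:30:00\ncd $PBS_O_WORKDIR\nmodule load FastQC/0.11.9-Java-11\n# 'my_sample.fastq.gz' is a hypothetical file name used for illustration\nfastqc my_sample.fastq.gz\n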

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastQC/0.11.9-Java-11 x x x x x x"}, {"location": "available_software/detail/FastQ_Screen/", "title": "FastQ_Screen", "text": ""}, {"location": "available_software/detail/FastQ_Screen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastQ_Screen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FastQ_Screen, load one of these modules using a module load command like:

          module load FastQ_Screen/0.14.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastQ_Screen/0.14.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/FastTree/", "title": "FastTree", "text": ""}, {"location": "available_software/detail/FastTree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastTree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FastTree, load one of these modules using a module load command like:

          module load FastTree/2.1.11-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastTree/2.1.11-GCCcore-12.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.2.0 x x x - x x FastTree/2.1.11-GCCcore-10.2.0 - x x x x x FastTree/2.1.11-GCCcore-9.3.0 - x x - x x FastTree/2.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastViromeExplorer/", "title": "FastViromeExplorer", "text": ""}, {"location": "available_software/detail/FastViromeExplorer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FastViromeExplorer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FastViromeExplorer, load one of these modules using a module load command like:

          module load FastViromeExplorer/20180422-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FastViromeExplorer/20180422-foss-2019b - x x - x x"}, {"location": "available_software/detail/Fastaq/", "title": "Fastaq", "text": ""}, {"location": "available_software/detail/Fastaq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Fastaq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Fastaq, load one of these modules using a module load command like:

          module load Fastaq/3.17.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Fastaq/3.17.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Fiji/", "title": "Fiji", "text": ""}, {"location": "available_software/detail/Fiji/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Fiji installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Fiji, load one of these modules using a module load command like:

          module load Fiji/2.9.0-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Fiji/2.9.0-Java-1.8 x x x - x x"}, {"location": "available_software/detail/Filtlong/", "title": "Filtlong", "text": ""}, {"location": "available_software/detail/Filtlong/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Filtlong installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Filtlong, load one of these modules using a module load command like:

          module load Filtlong/0.2.0-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Filtlong/0.2.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Fiona/", "title": "Fiona", "text": ""}, {"location": "available_software/detail/Fiona/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Fiona installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Fiona, load one of these modules using a module load command like:

          module load Fiona/1.9.5-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Fiona/1.9.5-foss-2023a x x x x x x Fiona/1.9.2-foss-2022b x x x x x x Fiona/1.8.21-foss-2022a x x x x x x Fiona/1.8.21-foss-2021b x x x x x x Fiona/1.8.20-intel-2020b - x x - x x Fiona/1.8.20-foss-2020b - x x x x x Fiona/1.8.16-foss-2020a-Python-3.8.2 - x x - x x Fiona/1.8.13-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Flask/", "title": "Flask", "text": ""}, {"location": "available_software/detail/Flask/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Flask installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Flask, load one of these modules using a module load command like:

          module load Flask/2.2.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Flask/2.2.2-GCCcore-11.3.0 x x x x x x Flask/2.0.2-GCCcore-11.2.0 x x x - x x Flask/1.1.4-GCCcore-10.3.0 x x x x x x Flask/1.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FlexiBLAS/", "title": "FlexiBLAS", "text": ""}, {"location": "available_software/detail/FlexiBLAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FlexiBLAS, load one of these modules using a module load command like:

          module load FlexiBLAS/3.3.1-GCC-13.2.0\n
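
          Because FlexiBLAS selects its BLAS/LAPACK backend at run time, it can be useful to check which backends are available once the module is loaded. A minimal sketch, assuming the flexiblas command-line tool shipped with FlexiBLAS is on your PATH after loading:

          module load FlexiBLAS/3.3.1-GCC-13.2.0\nflexiblas list\n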

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x FlexiBLAS/3.2.0-GCC-11.3.0 x x x x x x FlexiBLAS/3.0.4-GCC-11.2.0 x x x x x x FlexiBLAS/3.0.4-GCC-10.3.0 x x x x x x"}, {"location": "available_software/detail/Flye/", "title": "Flye", "text": ""}, {"location": "available_software/detail/Flye/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Flye installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Flye, load one of these modules using a module load command like:

          module load Flye/2.9.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Flye/2.9.2-GCC-11.3.0 x x x x x x Flye/2.9-intel-compilers-2021.2.0 - x x - x x Flye/2.9-GCC-10.3.0 x x x x x - Flye/2.8.3-iccifort-2020.4.304 - x x - x - Flye/2.8.3-GCC-10.2.0 - x x - x - Flye/2.8.1-intel-2020a-Python-3.8.2 - x x - x x Flye/2.7-intel-2019b-Python-3.7.4 - x - - - - Flye/2.6-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FragGeneScan/", "title": "FragGeneScan", "text": ""}, {"location": "available_software/detail/FragGeneScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FragGeneScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FragGeneScan, load one of these modules using a module load command like:

          module load FragGeneScan/1.31-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FragGeneScan/1.31-GCCcore-11.3.0 x x x x x x FragGeneScan/1.31-GCCcore-11.2.0 x x x - x x FragGeneScan/1.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FreeBarcodes/", "title": "FreeBarcodes", "text": ""}, {"location": "available_software/detail/FreeBarcodes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeBarcodes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FreeBarcodes, load one of these modules using a module load command like:

          module load FreeBarcodes/3.0.a5-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeBarcodes/3.0.a5-foss-2021b x x x - x x"}, {"location": "available_software/detail/FreeFEM/", "title": "FreeFEM", "text": ""}, {"location": "available_software/detail/FreeFEM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeFEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FreeFEM, load one of these modules using a module load command like:

          module load FreeFEM/4.5-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeFEM/4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FreeImage/", "title": "FreeImage", "text": ""}, {"location": "available_software/detail/FreeImage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeImage installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FreeImage, load one of these modules using a module load command like:

          module load FreeImage/3.18.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeImage/3.18.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/FreeSurfer/", "title": "FreeSurfer", "text": ""}, {"location": "available_software/detail/FreeSurfer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeSurfer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using FreeSurfer, load one of these modules using a module load command like:

          module load FreeSurfer/7.3.2-centos8_x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeSurfer/7.3.2-centos8_x86_64 x x x - x x FreeSurfer/7.2.0-centos8_x86_64 - x x - x x"}, {"location": "available_software/detail/FreeXL/", "title": "FreeXL", "text": ""}, {"location": "available_software/detail/FreeXL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FreeXL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using FreeXL, load one of these modules using a module load command like:

          module load FreeXL/1.0.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FreeXL/1.0.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/FriBidi/", "title": "FriBidi", "text": ""}, {"location": "available_software/detail/FriBidi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FriBidi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using FriBidi, load one of these modules using a module load command like:

          module load FriBidi/1.0.12-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x FriBidi/1.0.12-GCCcore-11.3.0 x x x x x x FriBidi/1.0.10-GCCcore-11.2.0 x x x x x x FriBidi/1.0.10-GCCcore-10.3.0 x x x x x x FriBidi/1.0.10-GCCcore-10.2.0 x x x x x x FriBidi/1.0.9-GCCcore-9.3.0 - x x - x x FriBidi/1.0.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FuSeq/", "title": "FuSeq", "text": ""}, {"location": "available_software/detail/FuSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FuSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using FuSeq, load one of these modules using a module load command like:

          module load FuSeq/1.1.2-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FuSeq/1.1.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/FusionCatcher/", "title": "FusionCatcher", "text": ""}, {"location": "available_software/detail/FusionCatcher/#available-modules", "title": "Available modules", "text": "

          The overview below shows which FusionCatcher installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using FusionCatcher, load one of these modules using a module load command like:

          module load FusionCatcher/1.30-foss-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty FusionCatcher/1.30-foss-2019b-Python-2.7.16 - x x - x x FusionCatcher/1.20-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/GAPPadder/", "title": "GAPPadder", "text": ""}, {"location": "available_software/detail/GAPPadder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GAPPadder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GAPPadder, load one of these modules using a module load command like:

          module load GAPPadder/20170601-foss-2021b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GAPPadder/20170601-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/GATB-Core/", "title": "GATB-Core", "text": ""}, {"location": "available_software/detail/GATB-Core/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GATB-Core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GATB-Core, load one of these modules using a module load command like:

          module load GATB-Core/1.4.2-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GATB-Core/1.4.2-gompi-2022a x x x x x x"}, {"location": "available_software/detail/GATE/", "title": "GATE", "text": ""}, {"location": "available_software/detail/GATE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GATE, load one of these modules using a module load command like:

          module load GATE/9.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GATE/9.2-foss-2022a x x x x x x GATE/9.2-foss-2021b x x x x x x GATE/9.1-foss-2021b x x x x x x GATE/9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GATK/", "title": "GATK", "text": ""}, {"location": "available_software/detail/GATK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GATK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GATK, load one of these modules using a module load command like:

          module load GATK/4.4.0.0-GCCcore-12.3.0-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GATK/4.4.0.0-GCCcore-12.3.0-Java-17 x x x x x x GATK/4.3.0.0-GCCcore-11.3.0-Java-11 x x x x x x GATK/4.2.0.0-GCCcore-10.2.0-Java-11 - x x x x x GATK/4.1.8.1-GCCcore-9.3.0-Java-1.8 - x x - x x"}, {"location": "available_software/detail/GBprocesS/", "title": "GBprocesS", "text": ""}, {"location": "available_software/detail/GBprocesS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GBprocesS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GBprocesS, load one of these modules using a module load command like:

          module load GBprocesS/4.0.0.post1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GBprocesS/4.0.0.post1-foss-2022a x x x x x x GBprocesS/2.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GCC/", "title": "GCC", "text": ""}, {"location": "available_software/detail/GCC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GCC, load one of these modules using a module load command like:

          module load GCC/13.2.0\n
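
          As an illustrative sketch (not part of the generated overview), a typical session lists the installed GCC versions, loads one, and verifies which compiler is then active; only standard module and gcc commands are used:

          module avail GCC/        # list the GCC versions installed on the cluster you are logged in to
          module load GCC/13.2.0   # load a specific version from the table below
          gcc --version            # confirm the compiler that is now on your PATH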

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GCC/13.2.0 x x x x x x GCC/12.3.0 x x x x x x GCC/12.2.0 x x x x x x GCC/11.3.0 x x x x x x GCC/11.2.0 x x x x x x GCC/10.3.0 x x x x x x GCC/10.2.0 x x x x x x GCC/9.3.0 - x x x x x GCC/8.3.0 x x x x x x"}, {"location": "available_software/detail/GCCcore/", "title": "GCCcore", "text": ""}, {"location": "available_software/detail/GCCcore/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GCCcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GCCcore, load one of these modules using a module load command like:

          module load GCCcore/13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GCCcore/13.2.0 x x x x x x GCCcore/12.3.0 x x x x x x GCCcore/12.2.0 x x x x x x GCCcore/11.3.0 x x x x x x GCCcore/11.2.0 x x x x x x GCCcore/10.3.0 x x x x x x GCCcore/10.2.0 x x x x x x GCCcore/9.3.0 x x x x x x GCCcore/8.3.0 x x x x x x GCCcore/8.2.0 - x - - - -"}, {"location": "available_software/detail/GConf/", "title": "GConf", "text": ""}, {"location": "available_software/detail/GConf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GConf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GConf, load one of these modules using a module load command like:

          module load GConf/3.2.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GConf/3.2.6-GCCcore-11.2.0 x x x x x x GConf/3.2.6-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GDAL/", "title": "GDAL", "text": ""}, {"location": "available_software/detail/GDAL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GDAL, load one of these modules using a module load command like:

          module load GDAL/3.7.1-foss-2023a\n
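
          As a minimal sketch (assuming, as is typical for these installations, that the GDAL module provides both the command-line tools and the Python bindings in the osgeo package):

          module load GDAL/3.7.1-foss-2023a
          gdalinfo --version                                              # command-line tools provided by the module
          python -c 'from osgeo import gdal; print(gdal.__version__)'    # Python bindings, assuming they are included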

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDAL/3.7.1-foss-2023a x x x x x x GDAL/3.6.2-foss-2022b x x x x x x GDAL/3.5.0-foss-2022a x x x x x x GDAL/3.3.2-foss-2021b x x x x x x GDAL/3.3.0-foss-2021a x x x x x x GDAL/3.2.1-intel-2020b - x x - x x GDAL/3.2.1-fosscuda-2020b - - - - x - GDAL/3.2.1-foss-2020b - x x x x x GDAL/3.0.4-foss-2020a-Python-3.8.2 - x x - x x GDAL/3.0.2-intel-2019b-Python-3.7.4 - - x - x x GDAL/3.0.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDB/", "title": "GDB", "text": ""}, {"location": "available_software/detail/GDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GDB, load one of these modules using a module load command like:

          module load GDB/9.1-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDB/9.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDCM/", "title": "GDCM", "text": ""}, {"location": "available_software/detail/GDCM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDCM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GDCM, load one of these modules using a module load command like:

          module load GDCM/3.0.21-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDCM/3.0.21-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/GDGraph/", "title": "GDGraph", "text": ""}, {"location": "available_software/detail/GDGraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GDGraph, load one of these modules using a module load command like:

          module load GDGraph/1.56-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDGraph/1.56-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GDRCopy/", "title": "GDRCopy", "text": ""}, {"location": "available_software/detail/GDRCopy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GDRCopy, load one of these modules using a module load command like:

          module load GDRCopy/2.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GDRCopy/2.3.1-GCCcore-12.3.0 x - x - x - GDRCopy/2.3-GCCcore-11.3.0 x x x - x x GDRCopy/2.3-GCCcore-11.2.0 x x x - x x GDRCopy/2.2-GCCcore-10.3.0 x - - - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x"}, {"location": "available_software/detail/GEGL/", "title": "GEGL", "text": ""}, {"location": "available_software/detail/GEGL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GEGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GEGL, load one of these modules using a module load command like:

          module load GEGL/0.4.30-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GEGL/0.4.30-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GEOS/", "title": "GEOS", "text": ""}, {"location": "available_software/detail/GEOS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GEOS, load one of these modules using a module load command like:

          module load GEOS/3.12.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GEOS/3.12.0-GCC-12.3.0 x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x GEOS/3.10.3-GCC-11.3.0 x x x x x x GEOS/3.9.1-iccifort-2020.4.304 - x x x x x GEOS/3.9.1-GCC-11.2.0 x x x x x x GEOS/3.9.1-GCC-10.3.0 x x x x x x GEOS/3.9.1-GCC-10.2.0 - x x x x x GEOS/3.8.1-GCC-9.3.0-Python-3.8.2 - x x - x x GEOS/3.8.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x GEOS/3.8.0-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GFF3-toolkit/", "title": "GFF3-toolkit", "text": ""}, {"location": "available_software/detail/GFF3-toolkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GFF3-toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GFF3-toolkit, load one of these modules using a module load command like:

          module load GFF3-toolkit/2.1.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GFF3-toolkit/2.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/GIMP/", "title": "GIMP", "text": ""}, {"location": "available_software/detail/GIMP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GIMP, load one of these modules using a module load command like:

          module load GIMP/2.10.24-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GIMP/2.10.24-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/GL2PS/", "title": "GL2PS", "text": ""}, {"location": "available_software/detail/GL2PS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GL2PS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GL2PS, load one of these modules using a module load command like:

          module load GL2PS/1.4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GL2PS/1.4.2-GCCcore-11.3.0 x x x x x x GL2PS/1.4.2-GCCcore-11.2.0 x x x x x x GL2PS/1.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLFW/", "title": "GLFW", "text": ""}, {"location": "available_software/detail/GLFW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLFW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GLFW, load one of these modules using a module load command like:

          module load GLFW/3.3.8-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLFW/3.3.8-GCCcore-12.3.0 x x x x x x GLFW/3.3.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/GLIMPSE/", "title": "GLIMPSE", "text": ""}, {"location": "available_software/detail/GLIMPSE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLIMPSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GLIMPSE, load one of these modules using a module load command like:

          module load GLIMPSE/2.0.0-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLIMPSE/2.0.0-GCC-12.2.0 x x x x x x GLIMPSE/2.0.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GLM/", "title": "GLM", "text": ""}, {"location": "available_software/detail/GLM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GLM, load one of these modules using a module load command like:

          module load GLM/0.9.9.8-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLM/0.9.9.8-GCCcore-10.2.0 x x x x x x GLM/0.9.9.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLPK/", "title": "GLPK", "text": ""}, {"location": "available_software/detail/GLPK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLPK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GLPK, load one of these modules using a module load command like:

          module load GLPK/5.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLPK/5.0-GCCcore-12.3.0 x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x GLPK/5.0-GCCcore-11.3.0 x x x x x x GLPK/5.0-GCCcore-11.2.0 x x x x x x GLPK/5.0-GCCcore-10.3.0 x x x x x x GLPK/4.65-GCCcore-10.2.0 x x x x x x GLPK/4.65-GCCcore-9.3.0 - x x - x x GLPK/4.65-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLib/", "title": "GLib", "text": ""}, {"location": "available_software/detail/GLib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GLib, load one of these modules using a module load command like:

          module load GLib/2.77.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLib/2.77.1-GCCcore-12.3.0 x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x GLib/2.72.1-GCCcore-11.3.0 x x x x x x GLib/2.69.1-GCCcore-11.2.0 x x x x x x GLib/2.68.2-GCCcore-10.3.0 x x x x x x GLib/2.66.1-GCCcore-10.2.0 x x x x x x GLib/2.64.1-GCCcore-9.3.0 x x x x x x GLib/2.62.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/GLibmm/", "title": "GLibmm", "text": ""}, {"location": "available_software/detail/GLibmm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GLibmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GLibmm, load one of these modules using a module load command like:

          module load GLibmm/2.66.4-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GLibmm/2.66.4-GCCcore-10.3.0 - x x - x x GLibmm/2.49.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GMAP-GSNAP/", "title": "GMAP-GSNAP", "text": ""}, {"location": "available_software/detail/GMAP-GSNAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GMAP-GSNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GMAP-GSNAP, load one of these modules using a module load command like:

          module load GMAP-GSNAP/2023-04-20-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GMAP-GSNAP/2023-04-20-GCC-12.2.0 x x x x x x GMAP-GSNAP/2023-02-17-GCC-11.3.0 x x x x x x GMAP-GSNAP/2019-09-12-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/GMP/", "title": "GMP", "text": ""}, {"location": "available_software/detail/GMP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GMP, load one of these modules using a module load command like:

          module load GMP/6.2.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GMP/6.2.1-GCCcore-12.3.0 x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x GMP/6.2.1-GCCcore-11.3.0 x x x x x x GMP/6.2.1-GCCcore-11.2.0 x x x x x x GMP/6.2.1-GCCcore-10.3.0 x x x x x x GMP/6.2.0-GCCcore-10.2.0 x x x x x x GMP/6.2.0-GCCcore-9.3.0 x x x x x x GMP/6.1.2-GCCcore-8.3.0 x x x x x x GMP/6.1.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/GOATOOLS/", "title": "GOATOOLS", "text": ""}, {"location": "available_software/detail/GOATOOLS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GOATOOLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GOATOOLS, load one of these modules using a module load command like:

          module load GOATOOLS/1.3.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GOATOOLS/1.3.1-foss-2022a x x x x x x GOATOOLS/1.3.1-foss-2021b x x x x x x GOATOOLS/1.1.6-foss-2020b - x x x x x"}, {"location": "available_software/detail/GObject-Introspection/", "title": "GObject-Introspection", "text": ""}, {"location": "available_software/detail/GObject-Introspection/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GObject-Introspection, load one of these modules using a module load command like:

          module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x GObject-Introspection/1.72.0-GCCcore-11.3.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-11.2.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-10.3.0 x x x x x x GObject-Introspection/1.66.1-GCCcore-10.2.0 x x x x x x GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x GObject-Introspection/1.63.1-GCCcore-8.3.0-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/GPAW-setups/", "title": "GPAW-setups", "text": ""}, {"location": "available_software/detail/GPAW-setups/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPAW-setups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GPAW-setups, load one of these modules using a module load command like:

          module load GPAW-setups/0.9.20000\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPAW-setups/0.9.20000 x x x x x x"}, {"location": "available_software/detail/GPAW/", "title": "GPAW", "text": ""}, {"location": "available_software/detail/GPAW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPAW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GPAW, load one of these modules using a module load command like:

          module load GPAW/22.8.0-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPAW/22.8.0-intel-2022a x x x x x x GPAW/22.8.0-intel-2021b x x x - x x GPAW/22.8.0-foss-2021b x x x - x x GPAW/20.1.0-intel-2019b-Python-3.7.4 - x x - x x GPAW/20.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GPy/", "title": "GPy", "text": ""}, {"location": "available_software/detail/GPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GPy, load one of these modules using a module load command like:

          module load GPy/1.10.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPy/1.10.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/GPyOpt/", "title": "GPyOpt", "text": ""}, {"location": "available_software/detail/GPyOpt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPyOpt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GPyOpt, load one of these modules using a module load command like:

          module load GPyOpt/1.2.6-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPyOpt/1.2.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/GPyTorch/", "title": "GPyTorch", "text": ""}, {"location": "available_software/detail/GPyTorch/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GPyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GPyTorch, load one of these modules using a module load command like:

          module load GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1 x - - - x - GPyTorch/1.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/GRASP-suite/", "title": "GRASP-suite", "text": ""}, {"location": "available_software/detail/GRASP-suite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GRASP-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GRASP-suite, load one of these modules using a module load command like:

          module load GRASP-suite/2023-05-09-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GRASP-suite/2023-05-09-Java-17 x x x x x x"}, {"location": "available_software/detail/GRASS/", "title": "GRASS", "text": ""}, {"location": "available_software/detail/GRASS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GRASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GRASS, load one of these modules using a module load command like:

          module load GRASS/8.2.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GRASS/8.2.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/GROMACS/", "title": "GROMACS", "text": ""}, {"location": "available_software/detail/GROMACS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GROMACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GROMACS, load one of these modules using a module load command like:

          module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2\n
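
          A minimal sketch for checking a GROMACS installation interactively (the module name is the one from the example above; whether this GPU-enabled build can actually use a GPU depends on the node your job runs on):

          module purge                                                     # start from a clean set of modules
          module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2   # GPU-enabled build with PLUMED support
          gmx --version                                                    # GROMACS prints its build details (precision, GPU support, ...)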

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2 x - - - x - GROMACS/2021.3-foss-2021a-CUDA-11.3.1 x - - - x - GROMACS/2021.2-fosscuda-2020b x - - - x - GROMACS/2021-foss-2020b - x x x x x GROMACS/2020-foss-2019b - x x - x - GROMACS/2019.4-foss-2019b - x x - x - GROMACS/2019.3-foss-2019b - x x - x -"}, {"location": "available_software/detail/GSL/", "title": "GSL", "text": ""}, {"location": "available_software/detail/GSL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GSL, load one of these modules using a module load command like:

          module load GSL/2.7-intel-compilers-2021.4.0\n
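
          As a sketch of compiling your own code against this library, using the GCC-based build from the table below (gsl_test.c is a hypothetical source file of your own that includes the GSL headers, and the module environment is assumed to put the GSL headers and libraries on the compiler search paths):

          module load GSL/2.7-GCC-12.3.0                     # also loads the matching GCC toolchain
          gcc gsl_test.c -o gsl_test -lgsl -lgslcblas -lm     # search paths are assumed to come from the module environment
          ./gsl_test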

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GSL/2.7-intel-compilers-2021.4.0 x x x - x x GSL/2.7-GCC-12.3.0 x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x GSL/2.7-GCC-11.3.0 x x x x x x GSL/2.7-GCC-11.2.0 x x x x x x GSL/2.7-GCC-10.3.0 x x x x x x GSL/2.6-iccifort-2020.4.304 - x x x x x GSL/2.6-iccifort-2020.1.217 - x x - x x GSL/2.6-iccifort-2019.5.281 - x x - x x GSL/2.6-GCC-10.2.0 x x x x x x GSL/2.6-GCC-9.3.0 - x x x x x GSL/2.6-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GST-plugins-bad/", "title": "GST-plugins-bad", "text": ""}, {"location": "available_software/detail/GST-plugins-bad/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GST-plugins-bad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GST-plugins-bad, load one of these modules using a module load command like:

          module load GST-plugins-bad/1.20.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GST-plugins-bad/1.20.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GST-plugins-base/", "title": "GST-plugins-base", "text": ""}, {"location": "available_software/detail/GST-plugins-base/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GST-plugins-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GST-plugins-base, load one of these modules using a module load command like:

          module load GST-plugins-base/1.20.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GST-plugins-base/1.20.2-GCC-11.3.0 x x x x x x GST-plugins-base/1.18.5-GCC-11.2.0 x x x x x x GST-plugins-base/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GStreamer/", "title": "GStreamer", "text": ""}, {"location": "available_software/detail/GStreamer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GStreamer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GStreamer, load one of these modules using a module load command like:

          module load GStreamer/1.20.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GStreamer/1.20.2-GCC-11.3.0 x x x x x x GStreamer/1.18.5-GCC-11.2.0 x x x x x x GStreamer/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTDB-Tk/", "title": "GTDB-Tk", "text": ""}, {"location": "available_software/detail/GTDB-Tk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTDB-Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GTDB-Tk, load one of these modules using a module load command like:

          module load GTDB-Tk/2.3.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTDB-Tk/2.3.2-foss-2023a x x x x x x GTDB-Tk/2.0.0-intel-2021b x x x - x x GTDB-Tk/1.7.0-intel-2020b - x x - x x GTDB-Tk/1.5.0-intel-2020b - x x - x x GTDB-Tk/1.3.0-intel-2020a-Python-3.8.2 - x x - x x GTDB-Tk/1.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GTK%2B/", "title": "GTK+", "text": ""}, {"location": "available_software/detail/GTK%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GTK+, load one of these modules using a module load command like:

          module load GTK+/3.24.23-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK+/3.24.23-GCCcore-10.2.0 x x x x x x GTK+/3.24.13-GCCcore-8.3.0 - x x - x x GTK+/2.24.33-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GTK2/", "title": "GTK2", "text": ""}, {"location": "available_software/detail/GTK2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GTK2, load one of these modules using a module load command like:

          module load GTK2/2.24.33-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK2/2.24.33-GCCcore-11.3.0 x x x x x x GTK2/2.24.33-GCCcore-10.3.0 - - x - x -"}, {"location": "available_software/detail/GTK3/", "title": "GTK3", "text": ""}, {"location": "available_software/detail/GTK3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GTK3, load one of these modules using a module load command like:

          module load GTK3/3.24.37-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK3/3.24.37-GCCcore-12.3.0 x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x GTK3/3.24.31-GCCcore-11.2.0 x x x x x x GTK3/3.24.29-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTK4/", "title": "GTK4", "text": ""}, {"location": "available_software/detail/GTK4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTK4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GTK4, load one of these modules using a module load command like:

          module load GTK4/4.7.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTK4/4.7.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GTS/", "title": "GTS", "text": ""}, {"location": "available_software/detail/GTS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GTS, load one of these modules using a module load command like:

          module load GTS/0.7.6-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GTS/0.7.6-foss-2019b - x x - x x GTS/0.7.6-GCCcore-12.3.0 x x x x x x GTS/0.7.6-GCCcore-11.3.0 x x x x x x GTS/0.7.6-GCCcore-11.2.0 x x x x x x GTS/0.7.6-GCCcore-10.3.0 x x x x x x GTS/0.7.6-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/GUSHR/", "title": "GUSHR", "text": ""}, {"location": "available_software/detail/GUSHR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GUSHR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GUSHR, load one of these modules using a module load command like:

          module load GUSHR/2020-09-28-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GUSHR/2020-09-28-foss-2021b x x x x x x"}, {"location": "available_software/detail/GapFiller/", "title": "GapFiller", "text": ""}, {"location": "available_software/detail/GapFiller/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GapFiller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GapFiller, load one of these modules using a module load command like:

          module load GapFiller/2.1.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GapFiller/2.1.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Gaussian/", "title": "Gaussian", "text": ""}, {"location": "available_software/detail/Gaussian/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gaussian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Gaussian, load one of these modules using a module load command like:

          module load Gaussian/g16_C.01-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gaussian/g16_C.01-intel-2022a x x x x x x Gaussian/g16_C.01-intel-2019b - x x - x x Gaussian/g16_C.01-iimpi-2020b x x x x x x"}, {"location": "available_software/detail/Gblocks/", "title": "Gblocks", "text": ""}, {"location": "available_software/detail/Gblocks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gblocks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Gblocks, load one of these modules using a module load command like:

          module load Gblocks/0.91b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gblocks/0.91b x x x x x x"}, {"location": "available_software/detail/Gdk-Pixbuf/", "title": "Gdk-Pixbuf", "text": ""}, {"location": "available_software/detail/Gdk-Pixbuf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Gdk-Pixbuf, load one of these modules using a module load command like:

          module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x Gdk-Pixbuf/2.42.8-GCCcore-11.3.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-11.2.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-10.3.0 x x x x x x Gdk-Pixbuf/2.40.0-GCCcore-10.2.0 x x x x x x Gdk-Pixbuf/2.38.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Geant4/", "title": "Geant4", "text": ""}, {"location": "available_software/detail/Geant4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Geant4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Geant4, load one of these modules using a module load command like:

          module load Geant4/11.0.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Geant4/11.0.2-GCC-11.3.0 x x x x x x Geant4/11.0.2-GCC-11.2.0 x x x - x x Geant4/11.0.1-GCC-11.2.0 x x x x x x Geant4/10.7.1-GCC-11.2.0 x x x x x x Geant4/10.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/GeneMark-ET/", "title": "GeneMark-ET", "text": ""}, {"location": "available_software/detail/GeneMark-ET/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GeneMark-ET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GeneMark-ET, load one of these modules using a module load command like:

          module load GeneMark-ET/4.71-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GeneMark-ET/4.71-GCCcore-11.3.0 x x x x x x GeneMark-ET/4.71-GCCcore-11.2.0 x x x x x x GeneMark-ET/4.65-GCCcore-10.2.0 x x x x x x GeneMark-ET/4.57-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GenomeThreader/", "title": "GenomeThreader", "text": ""}, {"location": "available_software/detail/GenomeThreader/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GenomeThreader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GenomeThreader, load one of these modules using a module load command like:

          module load GenomeThreader/1.7.3-Linux_x86_64-64bit\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GenomeThreader/1.7.3-Linux_x86_64-64bit x x x x x x"}, {"location": "available_software/detail/GenomeWorks/", "title": "GenomeWorks", "text": ""}, {"location": "available_software/detail/GenomeWorks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GenomeWorks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GenomeWorks, load one of these modules using a module load command like:

          module load GenomeWorks/2021.02.2-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GenomeWorks/2021.02.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Gerris/", "title": "Gerris", "text": ""}, {"location": "available_software/detail/Gerris/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gerris installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Gerris, load one of these modules using a module load command like:

          module load Gerris/20131206-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gerris/20131206-gompi-2023a x x x x x x"}, {"location": "available_software/detail/GetOrganelle/", "title": "GetOrganelle", "text": ""}, {"location": "available_software/detail/GetOrganelle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GetOrganelle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GetOrganelle, load one of these modules using a module load command like:

          module load GetOrganelle/1.7.5.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GetOrganelle/1.7.5.3-foss-2021b x x x - x x GetOrganelle/1.7.4-pre2-foss-2020b - x x x x x GetOrganelle/1.7.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GffCompare/", "title": "GffCompare", "text": ""}, {"location": "available_software/detail/GffCompare/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GffCompare installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GffCompare, load one of these modules using a module load command like:

          module load GffCompare/0.12.6-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GffCompare/0.12.6-GCC-11.2.0 x x x x x x GffCompare/0.11.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Ghostscript/", "title": "Ghostscript", "text": ""}, {"location": "available_software/detail/Ghostscript/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Ghostscript, load one of these modules using a module load command like:

          module load Ghostscript/10.01.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x Ghostscript/9.56.1-GCCcore-11.3.0 x x x x x x Ghostscript/9.54.0-GCCcore-11.2.0 x x x x x x Ghostscript/9.54.0-GCCcore-10.3.0 x x x x x x Ghostscript/9.53.3-GCCcore-10.2.0 x x x x x x Ghostscript/9.52-GCCcore-9.3.0 - x x - x x Ghostscript/9.50-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GimmeMotifs/", "title": "GimmeMotifs", "text": ""}, {"location": "available_software/detail/GimmeMotifs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GimmeMotifs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GimmeMotifs, load one of these modules using a module load command like:

          module load GimmeMotifs/0.17.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GimmeMotifs/0.17.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Giotto-Suite/", "title": "Giotto-Suite", "text": ""}, {"location": "available_software/detail/Giotto-Suite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Giotto-Suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Giotto-Suite, load one of these modules using a module load command like:

          module load Giotto-Suite/3.0.1-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Giotto-Suite/3.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/GitPython/", "title": "GitPython", "text": ""}, {"location": "available_software/detail/GitPython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GitPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GitPython, load one of these modules using a module load command like:

          module load GitPython/3.1.40-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GitPython/3.1.40-GCCcore-12.3.0 x x x x x x GitPython/3.1.31-GCCcore-12.2.0 x x x x x x GitPython/3.1.27-GCCcore-11.3.0 x x x x x x GitPython/3.1.24-GCCcore-11.2.0 x x x - x x GitPython/3.1.14-GCCcore-10.2.0 - x x x x x GitPython/3.1.9-GCCcore-9.3.0-Python-3.8.2 - x x - x x GitPython/3.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GlimmerHMM/", "title": "GlimmerHMM", "text": ""}, {"location": "available_software/detail/GlimmerHMM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GlimmerHMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GlimmerHMM, load one of these modules using a module load command like:

          module load GlimmerHMM/3.0.4c-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GlimmerHMM/3.0.4c-GCC-10.2.0 - x x x x x GlimmerHMM/3.0.4c-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GlobalArrays/", "title": "GlobalArrays", "text": ""}, {"location": "available_software/detail/GlobalArrays/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GlobalArrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GlobalArrays, load one of these modules using a module load command like:

          module load GlobalArrays/5.8-iomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GlobalArrays/5.8-iomkl-2021a x x x x x x GlobalArrays/5.8-intel-2021a - x x - x x"}, {"location": "available_software/detail/GnuTLS/", "title": "GnuTLS", "text": ""}, {"location": "available_software/detail/GnuTLS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GnuTLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GnuTLS, load one of these modules using a module load command like:

          module load GnuTLS/3.7.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GnuTLS/3.7.3-GCCcore-11.2.0 x x x x x x GnuTLS/3.7.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Go/", "title": "Go", "text": ""}, {"location": "available_software/detail/Go/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Go installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Go, load one of these modules using a module load command like:

          module load Go/1.21.6\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Go/1.21.6 x x x x x x Go/1.21.2 x x x x x x Go/1.17.6 x x x - x x Go/1.17.3 - x x - x - Go/1.14 - - x - x -"}, {"location": "available_software/detail/Gradle/", "title": "Gradle", "text": ""}, {"location": "available_software/detail/Gradle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gradle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using Gradle, load one of these modules using a module load command like:

          module load Gradle/8.6-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gradle/8.6-Java-17 x x x x x x"}, {"location": "available_software/detail/GraphMap/", "title": "GraphMap", "text": ""}, {"location": "available_software/detail/GraphMap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GraphMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GraphMap, load one of these modules using a module load command like:

          module load GraphMap/0.5.2-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GraphMap/0.5.2-foss-2019b - - x - x x"}, {"location": "available_software/detail/GraphMap2/", "title": "GraphMap2", "text": ""}, {"location": "available_software/detail/GraphMap2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GraphMap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using GraphMap2, load one of these modules using a module load command like:

          module load GraphMap2/0.6.4-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GraphMap2/0.6.4-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphene/", "title": "Graphene", "text": ""}, {"location": "available_software/detail/Graphene/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Graphene installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Graphene, load one of these modules using a module load command like:

          module load Graphene/1.10.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Graphene/1.10.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GraphicsMagick/", "title": "GraphicsMagick", "text": ""}, {"location": "available_software/detail/GraphicsMagick/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GraphicsMagick installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using GraphicsMagick, load one of these modules using a module load command like:

          module load GraphicsMagick/1.3.34-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GraphicsMagick/1.3.34-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphviz/", "title": "Graphviz", "text": ""}, {"location": "available_software/detail/Graphviz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Graphviz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Graphviz, load one of these modules using a module load command like:

          module load Graphviz/8.1.0-GCCcore-12.3.0\n
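
          As an illustrative follow-up (not part of the generated data), the Graphviz module puts the dot command on your PATH; graph.dot is a hypothetical input file:

          module load Graphviz/8.1.0-GCCcore-12.3.0\ndot -Tpng graph.dot -o graph.png\n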

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Graphviz/8.1.0-GCCcore-12.3.0 x x x x x x Graphviz/5.0.0-GCCcore-11.3.0 x x x x x x Graphviz/2.50.0-GCCcore-11.2.0 x x x x x x Graphviz/2.47.2-GCCcore-10.3.0 x x x x x x Graphviz/2.47.0-GCCcore-10.2.0-Java-11 - x x x x x Graphviz/2.42.2-foss-2019b-Python-3.7.4 - x x - x x Graphviz/2.42.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Greenlet/", "title": "Greenlet", "text": ""}, {"location": "available_software/detail/Greenlet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Greenlet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Greenlet, load one of these modules using a module load command like:

          module load Greenlet/2.0.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Greenlet/2.0.2-foss-2022b x x x x x x Greenlet/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/GroIMP/", "title": "GroIMP", "text": ""}, {"location": "available_software/detail/GroIMP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which GroIMP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using GroIMP, load one of these modules using a module load command like:

          module load GroIMP/1.5-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty GroIMP/1.5-Java-1.8 - x x - x x"}, {"location": "available_software/detail/Guile/", "title": "Guile", "text": ""}, {"location": "available_software/detail/Guile/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Guile installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Guile, load one of these modules using a module load command like:

          module load Guile/3.0.7-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Guile/3.0.7-GCCcore-11.2.0 x x x x x x Guile/2.2.7-GCCcore-10.3.0 - x x - x x Guile/1.8.8-GCCcore-9.3.0 - x x - x x Guile/1.8.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Guppy/", "title": "Guppy", "text": ""}, {"location": "available_software/detail/Guppy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Guppy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Guppy, load one of these modules using a module load command like:

          module load Guppy/6.5.7-gpu\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Guppy/6.5.7-gpu x - x - x - Guppy/6.5.7-cpu x x - x - x Guppy/6.4.6-gpu x - x - x - Guppy/6.4.6-cpu - x x x x x Guppy/6.4.2-gpu x - - - x - Guppy/6.4.2-cpu - x x - x x Guppy/6.3.8-gpu x - - - x - Guppy/6.3.8-cpu - x x - x x Guppy/6.3.7-gpu x - - - x - Guppy/6.3.7-cpu - x x - x x Guppy/6.1.7-gpu x - - - x - Guppy/6.1.7-cpu - x x - x x Guppy/6.1.2-gpu x - - - x - Guppy/6.1.2-cpu - x x - x x Guppy/6.0.1-gpu x - - - x - Guppy/6.0.1-cpu - x x - x x Guppy/5.0.16-gpu x - - - x - Guppy/5.0.16-cpu - x x - x - Guppy/5.0.15-gpu x - - - x - Guppy/5.0.15-cpu - x x - x x Guppy/5.0.14-gpu - - - - x - Guppy/5.0.14-cpu - x x - x x Guppy/5.0.11-gpu - - - - x - Guppy/5.0.11-cpu - x x - x x Guppy/5.0.7-gpu - - - - x - Guppy/5.0.7-cpu - x x - x x Guppy/4.4.1-cpu - x x - x - Guppy/4.2.2-cpu - x x - x - Guppy/4.0.15-cpu - x x - x - Guppy/3.5.2-cpu - - x - x -"}, {"location": "available_software/detail/Gurobi/", "title": "Gurobi", "text": ""}, {"location": "available_software/detail/Gurobi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Gurobi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Gurobi, load one of these modules using a module load command like:

          module load Gurobi/11.0.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Gurobi/11.0.0-GCCcore-12.3.0 x x x x x x Gurobi/9.5.2-GCCcore-11.3.0 x x x x x x Gurobi/9.5.0-GCCcore-11.2.0 x x x x x x Gurobi/9.1.1-GCCcore-10.2.0 - x x x x x Gurobi/9.1.0 - x x - x -"}, {"location": "available_software/detail/HAL/", "title": "HAL", "text": ""}, {"location": "available_software/detail/HAL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HAL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HAL, load one of these modules using a module load command like:

          module load HAL/2.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HAL/2.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/HDBSCAN/", "title": "HDBSCAN", "text": ""}, {"location": "available_software/detail/HDBSCAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDBSCAN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HDBSCAN, load one of these modules using a module load command like:

          module load HDBSCAN/0.8.29-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDBSCAN/0.8.29-foss-2022a x x x x x x"}, {"location": "available_software/detail/HDDM/", "title": "HDDM", "text": ""}, {"location": "available_software/detail/HDDM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDDM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HDDM, load one of these modules using a module load command like:

          module load HDDM/0.7.5-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDDM/0.7.5-intel-2019b-Python-3.7.4 - x - - - x HDDM/0.7.5-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/HDF/", "title": "HDF", "text": ""}, {"location": "available_software/detail/HDF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HDF, load one of these modules using a module load command like:

          module load HDF/4.2.16-2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x HDF/4.2.15-GCCcore-11.3.0 x x x x x x HDF/4.2.15-GCCcore-11.2.0 x x x x x x HDF/4.2.15-GCCcore-10.3.0 x x x x x x HDF/4.2.15-GCCcore-10.2.0 - x x x x x HDF/4.2.15-GCCcore-9.3.0 - - x - x x HDF/4.2.14-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/HDF5/", "title": "HDF5", "text": ""}, {"location": "available_software/detail/HDF5/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HDF5 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HDF5, load one of these modules using a module load command like:

          module load HDF5/1.14.0-gompi-2023a\n
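
          As a hedged usage sketch (data.h5 is a hypothetical file), a loaded HDF5 module provides command-line tools such as h5dump for inspecting HDF5 files; which compiler wrapper is shipped (h5cc or h5pcc) depends on how the particular build was configured:

          module load HDF5/1.14.0-gompi-2023a\nh5dump data.h5\n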

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HDF5/1.14.0-gompi-2023a x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x HDF5/1.13.1-gompi-2022a x x x - x x HDF5/1.12.2-iimpi-2022a x x x x x x HDF5/1.12.2-gompi-2022a x x x x x x HDF5/1.12.1-iimpi-2021b x x x x x x HDF5/1.12.1-gompi-2021b x x x x x x HDF5/1.10.8-gompi-2021b x x x - x x HDF5/1.10.7-iompi-2021a x x x x x x HDF5/1.10.7-iimpi-2021a - x x - x x HDF5/1.10.7-iimpi-2020b - x x x x x HDF5/1.10.7-gompic-2020b x - - - x - HDF5/1.10.7-gompi-2021a x x x x x x HDF5/1.10.7-gompi-2020b x x x x x x HDF5/1.10.6-iimpi-2020a x x x x x x HDF5/1.10.6-gompi-2020a - x x - x x HDF5/1.10.5-iimpi-2019b - x x - x x HDF5/1.10.5-gompi-2019b x x x - x x"}, {"location": "available_software/detail/HH-suite/", "title": "HH-suite", "text": ""}, {"location": "available_software/detail/HH-suite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HH-suite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HH-suite, load one of these modules using a module load command like:

          module load HH-suite/3.3.0-gompic-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HH-suite/3.3.0-gompic-2020b x - - - x - HH-suite/3.3.0-gompi-2022a x x x x x x HH-suite/3.3.0-gompi-2021b x - x - x - HH-suite/3.3.0-gompi-2021a x x x - x x HH-suite/3.3.0-gompi-2020b - x x x x x HH-suite/3.2.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/HISAT2/", "title": "HISAT2", "text": ""}, {"location": "available_software/detail/HISAT2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HISAT2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HISAT2, load one of these modules using a module load command like:

          module load HISAT2/2.2.1-gompi-2022a\n
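
          As an illustrative sketch (reference.fa and reads.fastq are hypothetical input files), HISAT2 provides hisat2-build for indexing a reference and hisat2 for aligning reads:

          module load HISAT2/2.2.1-gompi-2022a\nhisat2-build reference.fa ref_index\nhisat2 -x ref_index -U reads.fastq -S aligned.sam\n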

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HISAT2/2.2.1-gompi-2022a x x x x x x HISAT2/2.2.1-gompi-2021b x x x x x x HISAT2/2.2.1-gompi-2020b - x x x x x"}, {"location": "available_software/detail/HMMER/", "title": "HMMER", "text": ""}, {"location": "available_software/detail/HMMER/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HMMER installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HMMER, load one of these modules using a module load command like:

          module load HMMER/3.4-gompi-2023a\n
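
          For illustration (profile.hmm and sequences.fasta are hypothetical inputs), the HMMER module provides tools such as hmmsearch:

          module load HMMER/3.4-gompi-2023a\nhmmsearch profile.hmm sequences.fasta > hits.txt\n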

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HMMER/3.4-gompi-2023a x x x x x x HMMER/3.3.2-iimpi-2021b x x x - x x HMMER/3.3.2-iimpi-2020b - x x x x x HMMER/3.3.2-gompic-2020b x - - - x - HMMER/3.3.2-gompi-2022b x x x x x x HMMER/3.3.2-gompi-2022a x x x x x x HMMER/3.3.2-gompi-2021b x x x - x x HMMER/3.3.2-gompi-2021a x x x - x x HMMER/3.3.2-gompi-2020b x x x x x x HMMER/3.3.2-gompi-2020a - x x - x x HMMER/3.3.2-gompi-2019b - x x - x x HMMER/3.3.1-iimpi-2020a - x x - x x HMMER/3.3.1-gompi-2020a - x x - x x HMMER/3.2.1-iimpi-2019b - x x - x x HMMER/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/HMMER2/", "title": "HMMER2", "text": ""}, {"location": "available_software/detail/HMMER2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HMMER2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HMMER2, load one of these modules using a module load command like:

          module load HMMER2/2.3.2-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HMMER2/2.3.2-GCC-10.3.0 - x x - x x HMMER2/2.3.2-GCC-10.2.0 - x x x x x HMMER2/2.3.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HPL/", "title": "HPL", "text": ""}, {"location": "available_software/detail/HPL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HPL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HPL, load one of these modules using a module load command like:

          module load HPL/2.3-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HPL/2.3-intel-2019b - x x - x x HPL/2.3-iibff-2020b - x - - - - HPL/2.3-gobff-2020b - x - - - - HPL/2.3-foss-2023b x x x x x x HPL/2.3-foss-2019b - x x - x x HPL/2.0.15-intel-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/HTSeq/", "title": "HTSeq", "text": ""}, {"location": "available_software/detail/HTSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HTSeq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HTSeq, load one of these modules using a module load command like:

          module load HTSeq/2.0.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HTSeq/2.0.2-foss-2022a x x x x x x HTSeq/0.11.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/HTSlib/", "title": "HTSlib", "text": ""}, {"location": "available_software/detail/HTSlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HTSlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HTSlib, load one of these modules using a module load command like:

          module load HTSlib/1.18-GCC-12.3.0\n
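
          As a small usage sketch (variants.vcf is a hypothetical file), the HTSlib module provides the bgzip and tabix utilities for compressing and indexing tab-delimited genomic data:

          module load HTSlib/1.18-GCC-12.3.0\nbgzip variants.vcf\ntabix -p vcf variants.vcf.gz\n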

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HTSlib/1.18-GCC-12.3.0 x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x HTSlib/1.15.1-GCC-11.3.0 x x x x x x HTSlib/1.14-GCC-11.2.0 x x x x x x HTSlib/1.12-GCC-10.3.0 x x x - x x HTSlib/1.12-GCC-10.2.0 - x x - x x HTSlib/1.11-GCC-10.2.0 x x x x x x HTSlib/1.10.2-iccifort-2019.5.281 - x x - x x HTSlib/1.10.2-GCC-9.3.0 - x x - x x HTSlib/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HTSplotter/", "title": "HTSplotter", "text": ""}, {"location": "available_software/detail/HTSplotter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HTSplotter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HTSplotter, load one of these modules using a module load command like:

          module load HTSplotter/2.11-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HTSplotter/2.11-foss-2022b x x x x x x HTSplotter/0.15-foss-2022a x x x x x x"}, {"location": "available_software/detail/Hadoop/", "title": "Hadoop", "text": ""}, {"location": "available_software/detail/Hadoop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hadoop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hadoop, load one of these modules using a module load command like:

          module load Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8 - - x - x - Hadoop/2.10.0-GCCcore-10.2.0-native - x - - - - Hadoop/2.10.0-GCCcore-8.3.0-native - x x - x x"}, {"location": "available_software/detail/HarfBuzz/", "title": "HarfBuzz", "text": ""}, {"location": "available_software/detail/HarfBuzz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HarfBuzz, load one of these modules using a module load command like:

          module load HarfBuzz/5.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x HarfBuzz/4.2.1-GCCcore-11.3.0 x x x x x x HarfBuzz/2.8.2-GCCcore-11.2.0 x x x x x x HarfBuzz/2.8.1-GCCcore-10.3.0 x x x x x x HarfBuzz/2.6.7-GCCcore-10.2.0 x x x x x x HarfBuzz/2.6.4-GCCcore-9.3.0 - x x - x x HarfBuzz/2.6.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/HiCExplorer/", "title": "HiCExplorer", "text": ""}, {"location": "available_software/detail/HiCExplorer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HiCExplorer, load one of these modules using a module load command like:

          module load HiCExplorer/3.7.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HiCExplorer/3.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/HiCMatrix/", "title": "HiCMatrix", "text": ""}, {"location": "available_software/detail/HiCMatrix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HiCMatrix installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HiCMatrix, load one of these modules using a module load command like:

          module load HiCMatrix/17-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HiCMatrix/17-foss-2022a x x x x x x"}, {"location": "available_software/detail/HighFive/", "title": "HighFive", "text": ""}, {"location": "available_software/detail/HighFive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HighFive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HighFive, load one of these modules using a module load command like:

          module load HighFive/2.7.1-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HighFive/2.7.1-gompi-2023a x x x x x x"}, {"location": "available_software/detail/Highway/", "title": "Highway", "text": ""}, {"location": "available_software/detail/Highway/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Highway installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Highway, load one of these modules using a module load command like:

          module load Highway/1.0.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Highway/1.0.4-GCCcore-12.3.0 x x x x x x Highway/1.0.4-GCCcore-11.3.0 x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x Highway/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Horovod/", "title": "Horovod", "text": ""}, {"location": "available_software/detail/Horovod/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Horovod installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Horovod, load one of these modules using a module load command like:

          module load Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Horovod/0.23.0-foss-2021a-CUDA-11.3.1-PyTorch-1.10.0 x - - - - - Horovod/0.22.0-fosscuda-2020b-PyTorch-1.8.1 x - - - - - Horovod/0.21.3-fosscuda-2020b-PyTorch-1.7.1 x - - - x - Horovod/0.21.1-fosscuda-2020b-TensorFlow-2.4.1 x - - - x -"}, {"location": "available_software/detail/HyPo/", "title": "HyPo", "text": ""}, {"location": "available_software/detail/HyPo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which HyPo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using HyPo, load one of these modules using a module load command like:

          module load HyPo/1.0.3-GCC-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty HyPo/1.0.3-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/Hybpiper/", "title": "Hybpiper", "text": ""}, {"location": "available_software/detail/Hybpiper/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hybpiper installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hybpiper, load one of these modules using a module load command like:

          module load Hybpiper/2.1.6-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hybpiper/2.1.6-foss-2022b x x x x x x"}, {"location": "available_software/detail/Hydra/", "title": "Hydra", "text": ""}, {"location": "available_software/detail/Hydra/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hydra installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hydra, load one of these modules using a module load command like:

          module load Hydra/1.1.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hydra/1.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Hyperopt/", "title": "Hyperopt", "text": ""}, {"location": "available_software/detail/Hyperopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hyperopt, load one of these modules using a module load command like:

          module load Hyperopt/0.2.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hyperopt/0.2.7-foss-2022a x x x x x x Hyperopt/0.2.7-foss-2021a x x x - x x"}, {"location": "available_software/detail/Hypre/", "title": "Hypre", "text": ""}, {"location": "available_software/detail/Hypre/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Hypre installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Hypre, load one of these modules using a module load command like:

          module load Hypre/2.25.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Hypre/2.25.0-foss-2022a x x x x x x Hypre/2.24.0-intel-2021b x x x x x x Hypre/2.21.0-foss-2021a - x x - x x Hypre/2.20.0-foss-2020b - x x x x x Hypre/2.18.2-intel-2019b - x x - x x Hypre/2.18.2-foss-2020a - x x - x x Hypre/2.18.2-foss-2019b x x x - x x"}, {"location": "available_software/detail/ICU/", "title": "ICU", "text": ""}, {"location": "available_software/detail/ICU/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ICU installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ICU, load one of these modules using a module load command like:

          module load ICU/73.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ICU/73.2-GCCcore-12.3.0 x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x ICU/71.1-GCCcore-11.3.0 x x x x x x ICU/69.1-GCCcore-11.2.0 x x x x x x ICU/69.1-GCCcore-10.3.0 x x x x x x ICU/67.1-GCCcore-10.2.0 x x x x x x ICU/66.1-GCCcore-9.3.0 - x x - x x ICU/64.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/IDBA-UD/", "title": "IDBA-UD", "text": ""}, {"location": "available_software/detail/IDBA-UD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IDBA-UD installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IDBA-UD, load one of these modules using a module load command like:

          module load IDBA-UD/1.1.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IDBA-UD/1.1.3-GCC-11.2.0 x x x - x x IDBA-UD/1.1.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/IGMPlot/", "title": "IGMPlot", "text": ""}, {"location": "available_software/detail/IGMPlot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IGMPlot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IGMPlot, load one of these modules using a module load command like:

          module load IGMPlot/2.4.2-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IGMPlot/2.4.2-iccifort-2019.5.281 - x - - - - IGMPlot/2.4.2-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/IGV/", "title": "IGV", "text": ""}, {"location": "available_software/detail/IGV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IGV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IGV, load one of these modules using a module load command like:

          module load IGV/2.9.4-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IGV/2.9.4-Java-11 - x x - x x IGV/2.8.0-Java-11 - x x - x x"}, {"location": "available_software/detail/IOR/", "title": "IOR", "text": ""}, {"location": "available_software/detail/IOR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IOR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IOR, load one of these modules using a module load command like:

          module load IOR/3.2.1-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IOR/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/IPython/", "title": "IPython", "text": ""}, {"location": "available_software/detail/IPython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IPython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IPython, load one of these modules using a module load command like:

          module load IPython/8.14.0-GCCcore-12.3.0\n
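
          For example (an illustrative sketch), after loading the module you can check the version or start an interactive IPython session on a compute node:

          module load IPython/8.14.0-GCCcore-12.3.0\nipython --version\nipython\n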

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IPython/8.14.0-GCCcore-12.3.0 x x x x x x IPython/8.14.0-GCCcore-12.2.0 x x x x x x IPython/8.5.0-GCCcore-11.3.0 x x x x x x IPython/7.26.0-GCCcore-11.2.0 x x x x x x IPython/7.25.0-GCCcore-10.3.0 x x x x x x IPython/7.18.1-GCCcore-10.2.0 x x x x x x IPython/7.15.0-intel-2020a-Python-3.8.2 x x x x x x IPython/7.15.0-foss-2020a-Python-3.8.2 - x x - x x IPython/7.9.0-intel-2019b-Python-3.7.4 - x x - x x IPython/7.9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/IQ-TREE/", "title": "IQ-TREE", "text": ""}, {"location": "available_software/detail/IQ-TREE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IQ-TREE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IQ-TREE, load one of these modules using a module load command like:

          module load IQ-TREE/2.2.2.6-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IQ-TREE/2.2.2.6-gompi-2022b x x x x x x IQ-TREE/2.2.2.6-gompi-2022a x x x x x x IQ-TREE/2.2.2.3-gompi-2022a x x x x x x IQ-TREE/2.2.1-gompi-2021b x x x - x x IQ-TREE/1.6.12-intel-2019b - x x - x x"}, {"location": "available_software/detail/IRkernel/", "title": "IRkernel", "text": ""}, {"location": "available_software/detail/IRkernel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IRkernel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IRkernel, load one of these modules using a module load command like:

          module load IRkernel/1.2-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IRkernel/1.2-foss-2021a-R-4.1.0 - x x - x x IRkernel/1.1-foss-2019b-R-3.6.2-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ISA-L/", "title": "ISA-L", "text": ""}, {"location": "available_software/detail/ISA-L/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ISA-L installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ISA-L, load one of these modules using a module load command like:

          module load ISA-L/2.30.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ISA-L/2.30.0-GCCcore-11.3.0 x x x x x x ISA-L/2.30.0-GCCcore-11.2.0 x x x - x x ISA-L/2.30.0-GCCcore-10.3.0 x x x - x x ISA-L/2.30.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ITK/", "title": "ITK", "text": ""}, {"location": "available_software/detail/ITK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ITK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ITK, load one of these modules using a module load command like:

          module load ITK/5.2.1-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ITK/5.2.1-fosscuda-2020b x - - - x - ITK/5.2.1-foss-2022a x x x x x x ITK/5.2.1-foss-2020b - x x x x x ITK/5.1.2-fosscuda-2020b - - - - x - ITK/5.0.1-foss-2019b-Python-3.7.4 - x x - x x ITK/4.13.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ImageMagick/", "title": "ImageMagick", "text": ""}, {"location": "available_software/detail/ImageMagick/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ImageMagick, load one of these modules using a module load command like:

          module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n
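
          As an illustrative example (input.png is a hypothetical image), ImageMagick 7 provides the magick command for conversions and resizing:

          module load ImageMagick/7.1.1-15-GCCcore-12.3.0\nmagick input.png -resize 50% output.png\n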

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x ImageMagick/7.1.0-37-GCCcore-11.3.0 x x x x x x ImageMagick/7.1.0-4-GCCcore-11.2.0 x x x x x x ImageMagick/7.0.11-14-GCCcore-10.3.0 x x x x x x ImageMagick/7.0.10-35-GCCcore-10.2.0 x x x x x x ImageMagick/7.0.10-1-GCCcore-9.3.0 - x x - x x ImageMagick/7.0.9-5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Imath/", "title": "Imath", "text": ""}, {"location": "available_software/detail/Imath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Imath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Imath, load one of these modules using a module load command like:

          module load Imath/3.1.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Imath/3.1.7-GCCcore-12.3.0 x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x Imath/3.1.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Inferelator/", "title": "Inferelator", "text": ""}, {"location": "available_software/detail/Inferelator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Inferelator installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Inferelator, load one of these modules using a module load command like:

          module load Inferelator/0.6.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Inferelator/0.6.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/Infernal/", "title": "Infernal", "text": ""}, {"location": "available_software/detail/Infernal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Infernal installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Infernal, load one of these modules using a module load command like:

          module load Infernal/1.1.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Infernal/1.1.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/InterProScan/", "title": "InterProScan", "text": ""}, {"location": "available_software/detail/InterProScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which InterProScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using InterProScan, load one of these modules using a module load command like:

          module load InterProScan/5.62-94.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty InterProScan/5.62-94.0-foss-2022b x x x x x x InterProScan/5.52-86.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/IonQuant/", "title": "IonQuant", "text": ""}, {"location": "available_software/detail/IonQuant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IonQuant installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IonQuant, load one of these modules using a module load command like:

          module load IonQuant/1.10.12-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IonQuant/1.10.12-Java-11 x x x x x x"}, {"location": "available_software/detail/IsoQuant/", "title": "IsoQuant", "text": ""}, {"location": "available_software/detail/IsoQuant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IsoQuant installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IsoQuant, load one of these modules using a module load command like:

          module load IsoQuant/3.3.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IsoQuant/3.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/IsoSeq/", "title": "IsoSeq", "text": ""}, {"location": "available_software/detail/IsoSeq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which IsoSeq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using IsoSeq, load one of these modules using a module load command like:

          module load IsoSeq/4.0.0-linux-x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty IsoSeq/4.0.0-linux-x86_64 x x x x x x IsoSeq/3.8.2-linux-x86_64 x x x x x x"}, {"location": "available_software/detail/JAGS/", "title": "JAGS", "text": ""}, {"location": "available_software/detail/JAGS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JAGS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JAGS, load one of these modules using a module load command like:

          module load JAGS/4.3.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JAGS/4.3.2-foss-2022b x x x x x x JAGS/4.3.1-foss-2022a x x x x x x JAGS/4.3.0-foss-2021b x x x - x x JAGS/4.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/JSON-GLib/", "title": "JSON-GLib", "text": ""}, {"location": "available_software/detail/JSON-GLib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JSON-GLib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JSON-GLib, load one of these modules using a module load command like:

          module load JSON-GLib/1.6.2-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JSON-GLib/1.6.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Jansson/", "title": "Jansson", "text": ""}, {"location": "available_software/detail/Jansson/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Jansson installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Jansson, load one of these modules using a module load command like:

          module load Jansson/2.13.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Jansson/2.13.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/JasPer/", "title": "JasPer", "text": ""}, {"location": "available_software/detail/JasPer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JasPer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JasPer, load one of these modules using a module load command like:

          module load JasPer/4.0.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JasPer/4.0.0-GCCcore-12.3.0 x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x JasPer/2.0.33-GCCcore-11.3.0 x x x x x x JasPer/2.0.33-GCCcore-11.2.0 x x x x x x JasPer/2.0.28-GCCcore-10.3.0 x x x x x x JasPer/2.0.24-GCCcore-10.2.0 x x x x x x JasPer/2.0.14-GCCcore-9.3.0 - x x - x x JasPer/2.0.14-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Java/", "title": "Java", "text": ""}, {"location": "available_software/detail/Java/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Java installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Java, load one of these modules using a module load command like:

          module load Java/17.0.6\n
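
          For example (a quick sketch; myapp.jar is a hypothetical application archive), once a Java module is loaded you can check the runtime and run a jar:

          module load Java/17.0.6\njava -version\njava -jar myapp.jar\n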

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Java/17.0.6 x x x x x x Java/17(@Java/17.0.6) x x x x x x Java/13.0.2 - x x - x x Java/13(@Java/13.0.2) - x x - x x Java/11.0.20 x x x x x x Java/11.0.18 x - - x x - Java/11.0.16 x x x x x x Java/11.0.2 x x x - x x Java/11(@Java/11.0.20) x x x x x x Java/1.8.0_311 x - x x x x Java/1.8.0_241 - x - - - - Java/1.8.0_221 - x - - - - Java/1.8(@Java/1.8.0_311) x - x x x x Java/1.8(@Java/1.8.0_241) - x - - - -"}, {"location": "available_software/detail/Jellyfish/", "title": "Jellyfish", "text": ""}, {"location": "available_software/detail/Jellyfish/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Jellyfish installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Jellyfish, load one of these modules using a module load command like:

          module load Jellyfish/2.3.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Jellyfish/2.3.0-GCC-11.3.0 x x x x x x Jellyfish/2.3.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/JsonCpp/", "title": "JsonCpp", "text": ""}, {"location": "available_software/detail/JsonCpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JsonCpp, load one of these modules using a module load command like:

          module load JsonCpp/1.9.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x JsonCpp/1.9.5-GCCcore-12.2.0 x x x x x x JsonCpp/1.9.5-GCCcore-11.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-11.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-9.3.0 - x x - x x JsonCpp/1.9.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Judy/", "title": "Judy", "text": ""}, {"location": "available_software/detail/Judy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Judy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Judy, load one of these modules using a module load command like:

          module load Judy/1.0.5-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Judy/1.0.5-GCCcore-11.3.0 x x x x x x Judy/1.0.5-GCCcore-11.2.0 x x x x x x Judy/1.0.5-GCCcore-10.3.0 x x x - x x Judy/1.0.5-GCCcore-10.2.0 - x x x x x Judy/1.0.5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Julia/", "title": "Julia", "text": ""}, {"location": "available_software/detail/Julia/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Julia installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Julia, load one of these modules using a module load command like:

          module load Julia/1.9.3-linux-x86_64\n
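
          As a quick sketch (myscript.jl is a hypothetical script), a loaded Julia module gives you the julia command:

          module load Julia/1.9.3-linux-x86_64\njulia --version\njulia myscript.jl\n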

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Julia/1.9.3-linux-x86_64 x x x x x x Julia/1.7.2-linux-x86_64 x x x x x x Julia/1.6.2-linux-x86_64 - x x - x x"}, {"location": "available_software/detail/JupyterHub/", "title": "JupyterHub", "text": ""}, {"location": "available_software/detail/JupyterHub/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JupyterHub installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JupyterHub, load one of these modules using a module load command like:

          module load JupyterHub/4.0.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JupyterHub/4.0.1-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/JupyterLab/", "title": "JupyterLab", "text": ""}, {"location": "available_software/detail/JupyterLab/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JupyterLab, load one of these modules using a module load command like:

          module load JupyterLab/4.0.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x JupyterLab/4.0.3-GCCcore-12.2.0 x x x x x x JupyterLab/3.5.0-GCCcore-11.3.0 x x x x x x JupyterLab/3.1.6-GCCcore-11.2.0 x x x - x x JupyterLab/3.0.16-GCCcore-10.3.0 x - x - x - JupyterLab/2.2.8-GCCcore-10.2.0 x x x x x x JupyterLab/1.2.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/JupyterNotebook/", "title": "JupyterNotebook", "text": ""}, {"location": "available_software/detail/JupyterNotebook/#available-modules", "title": "Available modules", "text": "

          The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using JupyterNotebook, load one of these modules using a module load command like:

          module load JupyterNotebook/7.0.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty JupyterNotebook/7.0.3-GCCcore-12.2.0 x x x x x x JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x JupyterNotebook/6.4.12-SAGE-10.2 x x x x x x JupyterNotebook/6.4.12-SAGE-10.1 x x x x x x JupyterNotebook/6.4.12-SAGE-9.8 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.2.0-IPython-7.26.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-10.3.0-IPython-7.25.0 x x x x x x JupyterNotebook/6.1.4-GCCcore-10.2.0-IPython-7.18.1 x x x x x x JupyterNotebook/6.0.3-intel-2020a-Python-3.8.2-IPython-7.15.0 x x x x x x JupyterNotebook/6.0.3-foss-2020a-Python-3.8.2-IPython-7.15.0 - x x - x x JupyterNotebook/6.0.2-intel-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x JupyterNotebook/6.0.2-foss-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x"}, {"location": "available_software/detail/KMC/", "title": "KMC", "text": ""}, {"location": "available_software/detail/KMC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KMC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KMC, load one of these modules using a module load command like:

          module load KMC/3.2.1-GCC-11.2.0-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KMC/3.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x KMC/3.2.1-GCC-11.2.0 x x x - x x KMC/3.1.2rc1-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/KaHIP/", "title": "KaHIP", "text": ""}, {"location": "available_software/detail/KaHIP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KaHIP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KaHIP, load one of these modules using a module load command like:

          module load KaHIP/3.14-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KaHIP/3.14-gompi-2022a - - - x - -"}, {"location": "available_software/detail/Kaleido/", "title": "Kaleido", "text": ""}, {"location": "available_software/detail/Kaleido/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kaleido installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kaleido, load one of these modules using a module load command like:

          module load Kaleido/0.1.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kaleido/0.1.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Kalign/", "title": "Kalign", "text": ""}, {"location": "available_software/detail/Kalign/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kalign installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kalign, load one of these modules using a module load command like:

          module load Kalign/3.3.5-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kalign/3.3.5-GCCcore-11.3.0 x x x x x x Kalign/3.3.2-GCCcore-11.2.0 x - x - x - Kalign/3.3.1-GCCcore-10.3.0 x x x - x x Kalign/3.3.1-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Kent_tools/", "title": "Kent_tools", "text": ""}, {"location": "available_software/detail/Kent_tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kent_tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kent_tools, load one of these modules using a module load command like:

          module load Kent_tools/20190326-linux.x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kent_tools/20190326-linux.x86_64 - - x - x - Kent_tools/422-GCC-11.2.0 x x x x x x Kent_tools/411-GCC-10.2.0 - x x x x x Kent_tools/401-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Keras/", "title": "Keras", "text": ""}, {"location": "available_software/detail/Keras/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Keras installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Keras, load one of these modules using a module load command like:

          module load Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Keras/2.4.3-fosscuda-2020b - - - - x - Keras/2.4.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/KerasTuner/", "title": "KerasTuner", "text": ""}, {"location": "available_software/detail/KerasTuner/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KerasTuner installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KerasTuner, load one of these modules using a module load command like:

          module load KerasTuner/1.3.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KerasTuner/1.3.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/Kraken/", "title": "Kraken", "text": ""}, {"location": "available_software/detail/Kraken/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kraken installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kraken, load one of these modules using a module load command like:

          module load Kraken/1.1.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kraken/1.1.1-GCCcore-10.2.0 - x x x x x Kraken/1.1.1-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/Kraken2/", "title": "Kraken2", "text": ""}, {"location": "available_software/detail/Kraken2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Kraken2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Kraken2, load one of these modules using a module load command like:

          module load Kraken2/2.1.2-gompi-2021a\n
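
          As an illustration only, a paired-end classification run could look like the sketch below; the database directory and read files are placeholders for your own data:

          module load Kraken2/2.1.2-gompi-2021a
          # classify paired-end reads against a prebuilt Kraken2 database (paths are hypothetical)
          kraken2 --db /path/to/kraken2_db --threads 8 \
              --paired sample_R1.fastq sample_R2.fastq \
              --report sample.k2report --output sample.k2out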

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Kraken2/2.1.2-gompi-2021a - x x x x x Kraken2/2.0.9-beta-gompi-2020a-Perl-5.30.2 - x x - x x"}, {"location": "available_software/detail/KrakenUniq/", "title": "KrakenUniq", "text": ""}, {"location": "available_software/detail/KrakenUniq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KrakenUniq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KrakenUniq, load one of these modules using a module load command like:

          module load KrakenUniq/1.0.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KrakenUniq/1.0.3-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/KronaTools/", "title": "KronaTools", "text": ""}, {"location": "available_software/detail/KronaTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which KronaTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using KronaTools, load one of these modules using a module load command like:

          module load KronaTools/2.8.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x KronaTools/2.8.1-GCCcore-11.3.0 x x x x x x KronaTools/2.8-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/LAME/", "title": "LAME", "text": ""}, {"location": "available_software/detail/LAME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LAME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LAME, load one of these modules using a module load command like:

          module load LAME/3.100-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LAME/3.100-GCCcore-12.3.0 x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x LAME/3.100-GCCcore-11.3.0 x x x x x x LAME/3.100-GCCcore-11.2.0 x x x x x x LAME/3.100-GCCcore-10.3.0 x x x x x x LAME/3.100-GCCcore-10.2.0 x x x x x x LAME/3.100-GCCcore-9.3.0 - x x - x x LAME/3.100-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/LAMMPS/", "title": "LAMMPS", "text": ""}, {"location": "available_software/detail/LAMMPS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LAMMPS, load one of these modules using a module load command like:

          module load LAMMPS/patch_20Nov2019-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LAMMPS/patch_20Nov2019-intel-2019b - x - - - - LAMMPS/23Jun2022-foss-2021b-kokkos-CUDA-11.4.1 x - - - x - LAMMPS/23Jun2022-foss-2021b-kokkos x x x - x x LAMMPS/23Jun2022-foss-2021a-kokkos - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos-OCTP - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos - - x - x x LAMMPS/7Aug2019-foss-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-intel-2020a-Python-3.8.2-kokkos - x x - x x LAMMPS/3Mar2020-intel-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-foss-2019b-Python-3.7.4-kokkos - x x - x x"}, {"location": "available_software/detail/LAST/", "title": "LAST", "text": ""}, {"location": "available_software/detail/LAST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LAST installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LAST, load one of these modules using a module load command like:

          module load LAST/1179-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LAST/1179-GCC-10.2.0 - x x x x x LAST/1045-intel-2019b - x x - x x"}, {"location": "available_software/detail/LASTZ/", "title": "LASTZ", "text": ""}, {"location": "available_software/detail/LASTZ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LASTZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LASTZ, load one of these modules using a module load command like:

          module load LASTZ/1.04.22-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LASTZ/1.04.22-GCC-12.3.0 x x x x x x LASTZ/1.04.03-foss-2019b - x x - x x"}, {"location": "available_software/detail/LDC/", "title": "LDC", "text": ""}, {"location": "available_software/detail/LDC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LDC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LDC, load one of these modules using a module load command like:

          module load LDC/1.30.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LDC/1.30.0-GCCcore-11.3.0 x x x x x x LDC/1.25.1-GCCcore-10.2.0 - x x x x x LDC/1.24.0-x86_64 x x x x x x LDC/0.17.6-x86_64 - x x x x x"}, {"location": "available_software/detail/LERC/", "title": "LERC", "text": ""}, {"location": "available_software/detail/LERC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LERC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LERC, load one of these modules using a module load command like:

          module load LERC/4.0.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LERC/4.0.0-GCCcore-12.3.0 x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x LERC/4.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LIANA%2B/", "title": "LIANA+", "text": ""}, {"location": "available_software/detail/LIANA%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LIANA+ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LIANA+, load one of these modules using a module load command like:

          module load LIANA+/1.0.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LIANA+/1.0.1-foss-2022a x x x x - x"}, {"location": "available_software/detail/LIBSVM/", "title": "LIBSVM", "text": ""}, {"location": "available_software/detail/LIBSVM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LIBSVM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LIBSVM, load one of these modules using a module load command like:

          module load LIBSVM/3.30-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LIBSVM/3.30-GCCcore-11.3.0 x x x x x x LIBSVM/3.25-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/LLVM/", "title": "LLVM", "text": ""}, {"location": "available_software/detail/LLVM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LLVM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LLVM, load one of these modules using a module load command like:

          module load LLVM/16.0.6-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LLVM/16.0.6-GCCcore-12.3.0 x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x LLVM/14.0.6-GCCcore-12.2.0-llvmlite x x x x x x LLVM/14.0.3-GCCcore-11.3.0 x x x x x x LLVM/12.0.1-GCCcore-11.2.0 x x x x x x LLVM/11.1.0-GCCcore-10.3.0 x x x x x x LLVM/11.0.0-GCCcore-10.2.0 x x x x x x LLVM/10.0.1-GCCcore-10.2.0 - x x x x x LLVM/9.0.1-GCCcore-9.3.0 - x x - x x LLVM/9.0.0-GCCcore-8.3.0 x x x - x x LLVM/8.0.1-GCCcore-8.3.0 x x x - x x LLVM/7.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/LMDB/", "title": "LMDB", "text": ""}, {"location": "available_software/detail/LMDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LMDB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LMDB, load one of these modules using a module load command like:

          module load LMDB/0.9.31-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LMDB/0.9.31-GCCcore-12.3.0 x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x LMDB/0.9.29-GCCcore-11.3.0 x x x x x x LMDB/0.9.29-GCCcore-11.2.0 x x x x x x LMDB/0.9.28-GCCcore-10.3.0 x x x x x x LMDB/0.9.24-GCCcore-10.2.0 x x x x x x LMDB/0.9.24-GCCcore-9.3.0 - x x - x x LMDB/0.9.24-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LMfit/", "title": "LMfit", "text": ""}, {"location": "available_software/detail/LMfit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LMfit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LMfit, load one of these modules using a module load command like:

          module load LMfit/1.0.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LMfit/1.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LPJmL/", "title": "LPJmL", "text": ""}, {"location": "available_software/detail/LPJmL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LPJmL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LPJmL, load one of these modules using a module load command like:

          module load LPJmL/4.0.003-iimpi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LPJmL/4.0.003-iimpi-2020b - x x x x x"}, {"location": "available_software/detail/LPeg/", "title": "LPeg", "text": ""}, {"location": "available_software/detail/LPeg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LPeg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LPeg, load one of these modules using a module load command like:

          module load LPeg/1.0.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LPeg/1.0.2-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/LSD2/", "title": "LSD2", "text": ""}, {"location": "available_software/detail/LSD2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LSD2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LSD2, load one of these modules using a module load command like:

          module load LSD2/2.4.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LSD2/2.4.1-GCCcore-12.2.0 x x x x x x LSD2/2.3-GCCcore-11.3.0 x x x x x x LSD2/2.3-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/LUMPY/", "title": "LUMPY", "text": ""}, {"location": "available_software/detail/LUMPY/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LUMPY installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LUMPY, load one of these modules using a module load command like:

          module load LUMPY/0.3.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LUMPY/0.3.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/LZO/", "title": "LZO", "text": ""}, {"location": "available_software/detail/LZO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LZO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LZO, load one of these modules using a module load command like:

          module load LZO/2.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LZO/2.10-GCCcore-12.3.0 x x x x x x LZO/2.10-GCCcore-11.3.0 x x x x x x LZO/2.10-GCCcore-11.2.0 x x x x x x LZO/2.10-GCCcore-10.3.0 x x x x x x LZO/2.10-GCCcore-10.2.0 - x x x x x LZO/2.10-GCCcore-9.3.0 x x x x x x LZO/2.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/L_RNA_scaffolder/", "title": "L_RNA_scaffolder", "text": ""}, {"location": "available_software/detail/L_RNA_scaffolder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which L_RNA_scaffolder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using L_RNA_scaffolder, load one of these modules using a module load command like:

          module load L_RNA_scaffolder/20190530-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty L_RNA_scaffolder/20190530-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Lace/", "title": "Lace", "text": ""}, {"location": "available_software/detail/Lace/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Lace installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Lace, load one of these modules using a module load command like:

          module load Lace/1.14.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Lace/1.14.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/LevelDB/", "title": "LevelDB", "text": ""}, {"location": "available_software/detail/LevelDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LevelDB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LevelDB, load one of these modules using a module load command like:

          module load LevelDB/1.22-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LevelDB/1.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Levenshtein/", "title": "Levenshtein", "text": ""}, {"location": "available_software/detail/Levenshtein/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Levenshtein, load one of these modules using a module load command like:

          module load Levenshtein/0.24.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Levenshtein/0.24.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/LiBis/", "title": "LiBis", "text": ""}, {"location": "available_software/detail/LiBis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LiBis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LiBis, load one of these modules using a module load command like:

          module load LiBis/20200428-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LiBis/20200428-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LibLZF/", "title": "LibLZF", "text": ""}, {"location": "available_software/detail/LibLZF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LibLZF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LibLZF, load one of these modules using a module load command like:

          module load LibLZF/3.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LibLZF/3.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LibSoup/", "title": "LibSoup", "text": ""}, {"location": "available_software/detail/LibSoup/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LibSoup installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LibSoup, load one of these modules using a module load command like:

          module load LibSoup/3.0.7-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LibSoup/3.0.7-GCC-11.2.0 x x x x x x LibSoup/2.74.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/LibTIFF/", "title": "LibTIFF", "text": ""}, {"location": "available_software/detail/LibTIFF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LibTIFF, load one of these modules using a module load command like:

          module load LibTIFF/4.6.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.3.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.2.0 x x x x x x LibTIFF/4.2.0-GCCcore-10.3.0 x x x x x x LibTIFF/4.1.0-GCCcore-10.2.0 x x x x x x LibTIFF/4.1.0-GCCcore-9.3.0 - x x - x x LibTIFF/4.0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Libint/", "title": "Libint", "text": ""}, {"location": "available_software/detail/Libint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Libint installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Libint, load one of these modules using a module load command like:

          module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-12.2.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-11.3.0-lmax-6-cp2k x x x x x x Libint/2.6.0-iimpi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-iimpi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-iccifort-2020.4.304-lmax-6-cp2k - x x - x - Libint/2.6.0-gompi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-gompi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-GCC-10.3.0-lmax-6-cp2k - x x x x x Libint/2.6.0-GCC-10.2.0-lmax-6-cp2k - x x x x x Libint/1.1.6-iomkl-2020a - x - - - - Libint/1.1.6-intel-2020a - x x - x x Libint/1.1.6-intel-2019b - x - - - - Libint/1.1.6-foss-2020a - x - - - -"}, {"location": "available_software/detail/Lighter/", "title": "Lighter", "text": ""}, {"location": "available_software/detail/Lighter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Lighter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Lighter, load one of these modules using a module load command like:

          module load Lighter/1.1.2-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Lighter/1.1.2-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/LittleCMS/", "title": "LittleCMS", "text": ""}, {"location": "available_software/detail/LittleCMS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LittleCMS, load one of these modules using a module load command like:

          module load LittleCMS/2.15-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LittleCMS/2.15-GCCcore-12.3.0 x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x LittleCMS/2.13.1-GCCcore-11.3.0 x x x x x x LittleCMS/2.12-GCCcore-11.2.0 x x x x x x LittleCMS/2.12-GCCcore-10.3.0 x x x x x x LittleCMS/2.11-GCCcore-10.2.0 x x x x x x LittleCMS/2.9-GCCcore-9.3.0 - x x - x x LittleCMS/2.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LncLOOM/", "title": "LncLOOM", "text": ""}, {"location": "available_software/detail/LncLOOM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LncLOOM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LncLOOM, load one of these modules using a module load command like:

          module load LncLOOM/2.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LncLOOM/2.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/LoRDEC/", "title": "LoRDEC", "text": ""}, {"location": "available_software/detail/LoRDEC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LoRDEC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LoRDEC, load one of these modules using a module load command like:

          module load LoRDEC/0.9-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LoRDEC/0.9-gompi-2022a x x x x x x"}, {"location": "available_software/detail/Longshot/", "title": "Longshot", "text": ""}, {"location": "available_software/detail/Longshot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Longshot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Longshot, load one of these modules using a module load command like:

          module load Longshot/0.4.5-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Longshot/0.4.5-GCCcore-11.3.0 x x x x x x Longshot/0.4.3-GCCcore-10.2.0 - - x - x - Longshot/0.4.1-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/LtrDetector/", "title": "LtrDetector", "text": ""}, {"location": "available_software/detail/LtrDetector/#available-modules", "title": "Available modules", "text": "

          The overview below shows which LtrDetector installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using LtrDetector, load one of these modules using a module load command like:

          module load LtrDetector/1.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty LtrDetector/1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Lua/", "title": "Lua", "text": ""}, {"location": "available_software/detail/Lua/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Lua installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Lua, load one of these modules using a module load command like:

          module load Lua/5.4.6-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Lua/5.4.6-GCCcore-12.3.0 x x x x x x Lua/5.4.4-GCCcore-11.3.0 x x x x x x Lua/5.4.3-GCCcore-11.2.0 x x x x x x Lua/5.4.3-GCCcore-10.3.0 x x x x x x Lua/5.4.2-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-9.3.0 - x x - x x Lua/5.1.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/M1QN3/", "title": "M1QN3", "text": ""}, {"location": "available_software/detail/M1QN3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which M1QN3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using M1QN3, load one of these modules using a module load command like:

          module load M1QN3/3.3-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty M1QN3/3.3-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/M4/", "title": "M4", "text": ""}, {"location": "available_software/detail/M4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which M4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using M4, load one of these modules using a module load command like:

          module load M4/1.4.19-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty M4/1.4.19-GCCcore-13.2.0 x x x x x x M4/1.4.19-GCCcore-12.3.0 x x x x x x M4/1.4.19-GCCcore-12.2.0 x x x x x x M4/1.4.19-GCCcore-11.3.0 x x x x x x M4/1.4.19-GCCcore-11.2.0 x x x x x x M4/1.4.19 x x x x x x M4/1.4.18-GCCcore-10.3.0 x x x x x x M4/1.4.18-GCCcore-10.2.0 x x x x x x M4/1.4.18-GCCcore-9.3.0 x x x x x x M4/1.4.18-GCCcore-8.3.0 x x x x x x M4/1.4.18-GCCcore-8.2.0 - x - - - - M4/1.4.18 x x x x x x M4/1.4.17 x x x x x x"}, {"location": "available_software/detail/MACS2/", "title": "MACS2", "text": ""}, {"location": "available_software/detail/MACS2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MACS2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MACS2, load one of these modules using a module load command like:

          module load MACS2/2.2.7.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MACS2/2.2.7.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/MACS3/", "title": "MACS3", "text": ""}, {"location": "available_software/detail/MACS3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MACS3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MACS3, load one of these modules using a module load command like:

          module load MACS3/3.0.1-gfbf-2023a\n
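
          As an illustration (the BAM files, sample name and genome size are placeholders), a basic peak-calling run with the callpeak subcommand could look like:

          module load MACS3/3.0.1-gfbf-2023a
          # call peaks on a treatment/control pair of alignments
          macs3 callpeak -t treatment.bam -c control.bam -n my_sample -g hs --outdir macs3_out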

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MACS3/3.0.1-gfbf-2023a x x x x x x MACS3/3.0.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/MAFFT/", "title": "MAFFT", "text": ""}, {"location": "available_software/detail/MAFFT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MAFFT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MAFFT, load one of these modules using a module load command like:

          module load MAFFT/7.520-GCC-12.3.0-with-extensions\n
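
          A simple alignment sketch is shown below; the input FASTA file and thread count are placeholders, and --auto lets MAFFT pick an alignment strategy based on the data size:

          module load MAFFT/7.520-GCC-12.3.0-with-extensions
          # align all sequences in one FASTA file, writing the alignment to stdout
          mafft --auto --thread 4 sequences.fasta > aligned.fasta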

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x MAFFT/7.505-GCC-11.3.0-with-extensions x x x x x x MAFFT/7.490-gompi-2021b-with-extensions x x x - x x MAFFT/7.475-gompi-2020b-with-extensions - x x x x x MAFFT/7.475-GCC-10.2.0-with-extensions - x x x x x MAFFT/7.453-iimpi-2020a-with-extensions - x x - x x MAFFT/7.453-iccifort-2019.5.281-with-extensions - x x - x x MAFFT/7.453-GCC-9.3.0-with-extensions - x x - x x MAFFT/7.453-GCC-8.3.0-with-extensions - x x - x x"}, {"location": "available_software/detail/MAGeCK/", "title": "MAGeCK", "text": ""}, {"location": "available_software/detail/MAGeCK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MAGeCK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MAGeCK, load one of these modules using a module load command like:

          module load MAGeCK/0.5.9.5-gfbf-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MAGeCK/0.5.9.5-gfbf-2022b x x x x x x MAGeCK/0.5.9.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/MARS/", "title": "MARS", "text": ""}, {"location": "available_software/detail/MARS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MARS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MARS, load one of these modules using a module load command like:

          module load MARS/20191101-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MARS/20191101-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATIO/", "title": "MATIO", "text": ""}, {"location": "available_software/detail/MATIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MATIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MATIO, load one of these modules using a module load command like:

          module load MATIO/1.5.17-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MATIO/1.5.17-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATLAB/", "title": "MATLAB", "text": ""}, {"location": "available_software/detail/MATLAB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MATLAB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MATLAB, load one of these modules using a module load command like:

          module load MATLAB/2022b-r5\n
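
          For non-interactive use in a job script, recent MATLAB releases can run a script with the -batch option; a minimal sketch (the script name my_analysis is a placeholder) is:

          module load MATLAB/2022b-r5
          # run my_analysis.m without the desktop; MATLAB exits when the script finishes
          matlab -batch my_analysis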

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MATLAB/2022b-r5 x x x x x x MATLAB/2021b x x x - x x MATLAB/2019b - x x - x x"}, {"location": "available_software/detail/MBROLA/", "title": "MBROLA", "text": ""}, {"location": "available_software/detail/MBROLA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MBROLA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MBROLA, load one of these modules using a module load command like:

          module load MBROLA/3.3-GCCcore-9.3.0-voices-20200330\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MBROLA/3.3-GCCcore-9.3.0-voices-20200330 - x x - x x"}, {"location": "available_software/detail/MCL/", "title": "MCL", "text": ""}, {"location": "available_software/detail/MCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MCL, load one of these modules using a module load command like:

          module load MCL/22.282-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MCL/22.282-GCCcore-12.3.0 x x x x x x MCL/14.137-GCCcore-10.2.0 - x x x x x MCL/14.137-GCCcore-9.3.0 - x x - x x MCL/14.137-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MDAnalysis/", "title": "MDAnalysis", "text": ""}, {"location": "available_software/detail/MDAnalysis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MDAnalysis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MDAnalysis, load one of these modules using a module load command like:

          module load MDAnalysis/2.4.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MDAnalysis/2.4.2-foss-2022b x x x x x x MDAnalysis/2.4.2-foss-2021a x x x x x x"}, {"location": "available_software/detail/MDTraj/", "title": "MDTraj", "text": ""}, {"location": "available_software/detail/MDTraj/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MDTraj installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MDTraj, load one of these modules using a module load command like:

          module load MDTraj/1.9.7-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MDTraj/1.9.7-intel-2022a x x x - x x MDTraj/1.9.7-intel-2021b x x x - x x MDTraj/1.9.7-foss-2022a x x x - x x MDTraj/1.9.7-foss-2021a x x x - x x MDTraj/1.9.5-intel-2020b - x x - x x MDTraj/1.9.5-fosscuda-2020b x - - - x - MDTraj/1.9.5-foss-2020b - x x x x x MDTraj/1.9.4-intel-2020a-Python-3.8.2 - x x - x x MDTraj/1.9.3-intel-2019b-Python-3.7.4 - x x - x x MDTraj/1.9.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MEGA/", "title": "MEGA", "text": ""}, {"location": "available_software/detail/MEGA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEGA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEGA, load one of these modules using a module load command like:

          module load MEGA/11.0.10\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEGA/11.0.10 - x x - x -"}, {"location": "available_software/detail/MEGAHIT/", "title": "MEGAHIT", "text": ""}, {"location": "available_software/detail/MEGAHIT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEGAHIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEGAHIT, load one of these modules using a module load command like:

          module load MEGAHIT/1.2.9-GCCcore-12.3.0\n
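
          A basic paired-end assembly sketch (read files, output directory and thread count are placeholders):

          module load MEGAHIT/1.2.9-GCCcore-12.3.0
          # assemble paired-end reads; MEGAHIT refuses to reuse an existing output directory
          megahit -1 reads_R1.fastq.gz -2 reads_R2.fastq.gz -o megahit_out -t 8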

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEGAHIT/1.2.9-GCCcore-12.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.2.0 x x x - x x MEGAHIT/1.2.9-GCCcore-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MEGAN/", "title": "MEGAN", "text": ""}, {"location": "available_software/detail/MEGAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEGAN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEGAN, load one of these modules using a module load command like:

          module load MEGAN/6.25.3-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEGAN/6.25.3-Java-17 x x x x x x"}, {"location": "available_software/detail/MEM/", "title": "MEM", "text": ""}, {"location": "available_software/detail/MEM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEM, load one of these modules using a module load command like:

          module load MEM/20191023-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEM/20191023-foss-2020a-R-4.0.0 - - x - x - MEM/20191023-foss-2019b - x x - x -"}, {"location": "available_software/detail/MEME/", "title": "MEME", "text": ""}, {"location": "available_software/detail/MEME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MEME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MEME, load one of these modules using a module load command like:

          module load MEME/5.5.4-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MEME/5.5.4-gompi-2022b x x x x x x MEME/5.4.1-gompi-2021b-Python-2.7.18 x x x - x x"}, {"location": "available_software/detail/MESS/", "title": "MESS", "text": ""}, {"location": "available_software/detail/MESS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MESS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MESS, load one of these modules using a module load command like:

          module load MESS/0.1.6-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MESS/0.1.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/METIS/", "title": "METIS", "text": ""}, {"location": "available_software/detail/METIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which METIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using METIS, load one of these modules using a module load command like:

          module load METIS/5.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty METIS/5.1.0-GCCcore-12.3.0 x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x METIS/5.1.0-GCCcore-11.3.0 x x x x x x METIS/5.1.0-GCCcore-11.2.0 x x x x x x METIS/5.1.0-GCCcore-10.3.0 x x x x x x METIS/5.1.0-GCCcore-10.2.0 x x x x x x METIS/5.1.0-GCCcore-9.3.0 - x x - x x METIS/5.1.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MIGRATE-N/", "title": "MIGRATE-N", "text": ""}, {"location": "available_software/detail/MIGRATE-N/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MIGRATE-N installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MIGRATE-N, load one of these modules using a module load command like:

          module load MIGRATE-N/5.0.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MIGRATE-N/5.0.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/MMseqs2/", "title": "MMseqs2", "text": ""}, {"location": "available_software/detail/MMseqs2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MMseqs2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MMseqs2, load one of these modules using a module load command like:

          module load MMseqs2/14-7e284-gompi-2023a\n
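
          As a hedged example, the easy-search workflow wraps database creation, search and result formatting in one call; the query/target FASTA files and the tmp directory below are placeholders:

          module load MMseqs2/14-7e284-gompi-2023a
          # search query sequences against a target set, writing a BLAST-tab style result file
          mmseqs easy-search query.fasta target.fasta result.m8 tmp --threads 8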

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MMseqs2/14-7e284-gompi-2023a x x x x x x MMseqs2/14-7e284-gompi-2022a x x x x x x MMseqs2/13-45111-gompi-2021b x x x - x x MMseqs2/13-45111-gompi-2021a x x x - x x MMseqs2/13-45111-gompi-2020b x x x x x x MMseqs2/13-45111-20211019-gompi-2020b - x x x x x MMseqs2/13-45111-20211006-gompi-2020b - x x x x - MMseqs2/12-113e3-gompi-2020b - x - - - - MMseqs2/11-e1a1c-iimpi-2019b - x - - - x MMseqs2/10-6d92c-iimpi-2019b - x x - x x MMseqs2/10-6d92c-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MOABS/", "title": "MOABS", "text": ""}, {"location": "available_software/detail/MOABS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MOABS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MOABS, load one of these modules using a module load command like:

          module load MOABS/1.3.9.6-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MOABS/1.3.9.6-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MONAI/", "title": "MONAI", "text": ""}, {"location": "available_software/detail/MONAI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MONAI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MONAI, load one of these modules using a module load command like:

          module load MONAI/1.0.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MONAI/1.0.1-foss-2022a-CUDA-11.7.0 x - - - x - MONAI/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MOOSE/", "title": "MOOSE", "text": ""}, {"location": "available_software/detail/MOOSE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MOOSE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MOOSE, load one of these modules using a module load command like:

          module load MOOSE/2022-06-10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MOOSE/2022-06-10-foss-2022a x x x - x x MOOSE/2021-05-18-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MPC/", "title": "MPC", "text": ""}, {"location": "available_software/detail/MPC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MPC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MPC, load one of these modules using a module load command like:

          module load MPC/1.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MPC/1.3.1-GCCcore-12.3.0 x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x MPC/1.2.1-GCCcore-11.3.0 x x x x x x MPC/1.2.1-GCCcore-11.2.0 x x x x x x MPC/1.2.1-GCCcore-10.2.0 - x x x x x MPC/1.1.0-GCC-9.3.0 - x x - x x MPC/1.1.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/MPFR/", "title": "MPFR", "text": ""}, {"location": "available_software/detail/MPFR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MPFR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MPFR, load one of these modules using a module load command like:

          module load MPFR/4.2.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MPFR/4.2.0-GCCcore-12.3.0 x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x MPFR/4.1.0-GCCcore-11.3.0 x x x x x x MPFR/4.1.0-GCCcore-11.2.0 x x x x x x MPFR/4.1.0-GCCcore-10.3.0 x x x x x x MPFR/4.1.0-GCCcore-10.2.0 x x x x x x MPFR/4.0.2-GCCcore-9.3.0 - x x - x x MPFR/4.0.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MRtrix/", "title": "MRtrix", "text": ""}, {"location": "available_software/detail/MRtrix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MRtrix installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MRtrix, load one of these modules using a module load command like:

          module load MRtrix/3.0.4-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MRtrix/3.0.4-foss-2022b x x x x x x MRtrix/3.0.3-foss-2021a - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-3.7.4 - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MSFragger/", "title": "MSFragger", "text": ""}, {"location": "available_software/detail/MSFragger/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MSFragger installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MSFragger, load one of these modules using a module load command like:

          module load MSFragger/4.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MSFragger/4.0-Java-11 x x x x x x"}, {"location": "available_software/detail/MUMPS/", "title": "MUMPS", "text": ""}, {"location": "available_software/detail/MUMPS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MUMPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MUMPS, load one of these modules using a module load command like:

          module load MUMPS/5.6.1-foss-2023a-metis\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MUMPS/5.6.1-foss-2023a-metis x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x MUMPS/5.5.1-foss-2022a-metis x x x x x x MUMPS/5.4.1-intel-2021b-metis x x x x x x MUMPS/5.4.1-foss-2021b-metis x x x - x x MUMPS/5.4.0-foss-2021a-metis - x x - x x MUMPS/5.3.5-foss-2020b-metis - x x x x x MUMPS/5.2.1-intel-2020a-metis - x x - x x MUMPS/5.2.1-intel-2019b-metis - x x - x x MUMPS/5.2.1-foss-2020a-metis - x x - x x MUMPS/5.2.1-foss-2019b-metis x x x - x x"}, {"location": "available_software/detail/MUMmer/", "title": "MUMmer", "text": ""}, {"location": "available_software/detail/MUMmer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MUMmer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MUMmer, load one of these modules using a module load command like:

          module load MUMmer/4.0.0rc1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MUMmer/4.0.0rc1-GCCcore-12.3.0 x x x x x x MUMmer/4.0.0beta2-GCCcore-11.2.0 x x x - x x MUMmer/4.0.0beta2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MUSCLE/", "title": "MUSCLE", "text": ""}, {"location": "available_software/detail/MUSCLE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MUSCLE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MUSCLE, load one of these modules using a module load command like:

          module load MUSCLE/5.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MUSCLE/5.1.0-GCCcore-12.3.0 x x x x x x MUSCLE/5.1.0-GCCcore-11.3.0 x x x x x x MUSCLE/5.1-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.1551-GCC-10.2.0 - x x - x x MUSCLE/3.8.1551-GCC-8.3.0 - x x - x x MUSCLE/3.8.31-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MXNet/", "title": "MXNet", "text": ""}, {"location": "available_software/detail/MXNet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MXNet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MXNet, load one of these modules using a module load command like:

          module load MXNet/1.9.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MXNet/1.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MaSuRCA/", "title": "MaSuRCA", "text": ""}, {"location": "available_software/detail/MaSuRCA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MaSuRCA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MaSuRCA, load one of these modules using a module load command like:

          module load MaSuRCA/4.1.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MaSuRCA/4.1.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Mako/", "title": "Mako", "text": ""}, {"location": "available_software/detail/Mako/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mako installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Mako, load one of these modules using a module load command like:

          module load Mako/1.2.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mako/1.2.4-GCCcore-12.3.0 x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x Mako/1.2.0-GCCcore-11.3.0 x x x x x x Mako/1.1.4-GCCcore-11.2.0 x x x x x x Mako/1.1.4-GCCcore-10.3.0 x x x x x x Mako/1.1.3-GCCcore-10.2.0 x x x x x x Mako/1.1.2-GCCcore-9.3.0 - x x - x x Mako/1.1.0-GCCcore-8.3.0 x x x - x x Mako/1.0.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/MariaDB-connector-c/", "title": "MariaDB-connector-c", "text": ""}, {"location": "available_software/detail/MariaDB-connector-c/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MariaDB-connector-c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MariaDB-connector-c, load one of these modules using a module load command like:

          module load MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MariaDB-connector-c/3.1.7-GCCcore-9.3.0 - x x - x x MariaDB-connector-c/2.3.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MariaDB/", "title": "MariaDB", "text": ""}, {"location": "available_software/detail/MariaDB/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MariaDB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using MariaDB, load one of these modules using a module load command like:

          module load MariaDB/10.9.3-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MariaDB/10.9.3-GCC-11.3.0 x x x x x x MariaDB/10.6.4-GCC-11.2.0 x x x x x x MariaDB/10.6.4-GCC-10.3.0 x x x - x x MariaDB/10.5.8-GCC-10.2.0 - x x x x x MariaDB/10.4.13-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Mash/", "title": "Mash", "text": ""}, {"location": "available_software/detail/Mash/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Mash, load one of these modules using a module load command like:

          module load Mash/2.3-intel-compilers-2021.4.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mash/2.3-intel-compilers-2021.4.0 x x x - x x Mash/2.3-GCC-12.3.0 x x x x x x Mash/2.3-GCC-11.2.0 x x x - x x Mash/2.2-GCC-9.3.0 - x x x - x"}, {"location": "available_software/detail/Maven/", "title": "Maven", "text": ""}, {"location": "available_software/detail/Maven/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Maven installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Maven, load one of these modules using a module load command like:

          module load Maven/3.6.3\n
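
          Once the module is loaded, the mvn command is on your PATH; a minimal sketch to confirm which Maven and Java installation are picked up:

          module load Maven/3.6.3\nmvn -version\n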

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Maven/3.6.3 x x x x x x Maven/3.6.0 - - x - x -"}, {"location": "available_software/detail/MaxBin/", "title": "MaxBin", "text": ""}, {"location": "available_software/detail/MaxBin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MaxBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MaxBin, load one of these modules using a module load command like:

          module load MaxBin/2.2.7-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MaxBin/2.2.7-gompi-2021b x x x - x x MaxBin/2.2.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MedPy/", "title": "MedPy", "text": ""}, {"location": "available_software/detail/MedPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MedPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MedPy, load one of these modules using a module load command like:

          module load MedPy/0.4.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MedPy/0.4.0-fosscuda-2020b x - - - x - MedPy/0.4.0-foss-2020b - x x x x x MedPy/0.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Megalodon/", "title": "Megalodon", "text": ""}, {"location": "available_software/detail/Megalodon/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Megalodon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Megalodon, load one of these modules using a module load command like:

          module load Megalodon/2.3.5-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Megalodon/2.3.5-fosscuda-2020b x - - - x - Megalodon/2.3.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/Mercurial/", "title": "Mercurial", "text": ""}, {"location": "available_software/detail/Mercurial/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mercurial installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Mercurial, load one of these modules using a module load command like:

          module load Mercurial/6.2-GCCcore-11.3.0\n
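
          After loading the module, the hg command becomes available; for example (the repository URL below is a placeholder, not a real project):

          module load Mercurial/6.2-GCCcore-11.3.0\nhg version\nhg clone https://example.org/some/repo\n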

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mercurial/6.2-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Mesa/", "title": "Mesa", "text": ""}, {"location": "available_software/detail/Mesa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mesa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Mesa, load one of these modules using a module load command like:

          module load Mesa/23.1.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mesa/23.1.4-GCCcore-12.3.0 x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x Mesa/22.0.3-GCCcore-11.3.0 x x x x x x Mesa/21.1.7-GCCcore-11.2.0 x x x x x x Mesa/21.1.1-GCCcore-10.3.0 x x x x x x Mesa/20.2.1-GCCcore-10.2.0 x x x x x x Mesa/20.0.2-GCCcore-9.3.0 - x x - x x Mesa/19.2.1-GCCcore-8.3.0 - x x - x x Mesa/19.1.7-GCCcore-8.3.0 x x x - x x Mesa/19.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Meson/", "title": "Meson", "text": ""}, {"location": "available_software/detail/Meson/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Meson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Meson, load one of these modules using a module load command like:

          module load Meson/1.2.3-GCCcore-13.2.0\n
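
          A typical out-of-source configure-and-build sketch, run from a project's source directory and assuming you also load a Ninja module from the same GCCcore-13.2.0 toolchain as the build backend:

          module load Meson/1.2.3-GCCcore-13.2.0\nmodule load Ninja/1.11.1-GCCcore-13.2.0\nmeson setup build\nmeson compile -C build\n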

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Meson/1.2.3-GCCcore-13.2.0 x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x Meson/0.62.1-GCCcore-11.3.0 x x x x x x Meson/0.59.1-GCCcore-8.3.0-Python-3.7.4 x - x - x x Meson/0.58.2-GCCcore-11.2.0 x x x x x x Meson/0.58.0-GCCcore-10.3.0 x x x x x x Meson/0.55.3-GCCcore-10.2.0 x x x x x x Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x Meson/0.53.2-GCCcore-9.3.0-Python-3.8.2 - x x - x x Meson/0.51.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x Meson/0.50.0-GCCcore-8.2.0-Python-3.7.2 - x - - - -"}, {"location": "available_software/detail/Mesquite/", "title": "Mesquite", "text": ""}, {"location": "available_software/detail/Mesquite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mesquite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Mesquite, load one of these modules using a module load command like:

          module load Mesquite/2.3.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mesquite/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MetaBAT/", "title": "MetaBAT", "text": ""}, {"location": "available_software/detail/MetaBAT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MetaBAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MetaBAT, load one of these modules using a module load command like:

          module load MetaBAT/2.15-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MetaBAT/2.15-gompi-2021b x x x - x x MetaBAT/2.15-gompi-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MetaEuk/", "title": "MetaEuk", "text": ""}, {"location": "available_software/detail/MetaEuk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MetaEuk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MetaEuk, load one of these modules using a module load command like:

          module load MetaEuk/6-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MetaEuk/6-GCC-11.2.0 x x x - x x MetaEuk/4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/MetaPhlAn/", "title": "MetaPhlAn", "text": ""}, {"location": "available_software/detail/MetaPhlAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MetaPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MetaPhlAn, load one of these modules using a module load command like:

          module load MetaPhlAn/4.0.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MetaPhlAn/4.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/Metagenome-Atlas/", "title": "Metagenome-Atlas", "text": ""}, {"location": "available_software/detail/Metagenome-Atlas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Metagenome-Atlas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Metagenome-Atlas, load one of these modules using a module load command like:

          module load Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/MethylDackel/", "title": "MethylDackel", "text": ""}, {"location": "available_software/detail/MethylDackel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MethylDackel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MethylDackel, load one of these modules using a module load command like:

          module load MethylDackel/0.5.0-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MethylDackel/0.5.0-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/MiXCR/", "title": "MiXCR", "text": ""}, {"location": "available_software/detail/MiXCR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MiXCR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MiXCR, load one of these modules using a module load command like:

          module load MiXCR/4.6.0-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MiXCR/4.6.0-Java-17 x x x x x x MiXCR/3.0.13-Java-11 - x x - x -"}, {"location": "available_software/detail/MicrobeAnnotator/", "title": "MicrobeAnnotator", "text": ""}, {"location": "available_software/detail/MicrobeAnnotator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MicrobeAnnotator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MicrobeAnnotator, load one of these modules using a module load command like:

          module load MicrobeAnnotator/2.0.5-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MicrobeAnnotator/2.0.5-foss-2021a - x x - x x"}, {"location": "available_software/detail/Mikado/", "title": "Mikado", "text": ""}, {"location": "available_software/detail/Mikado/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mikado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Mikado, load one of these modules using a module load command like:

          module load Mikado/2.3.4-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mikado/2.3.4-foss-2022b x x x x x x"}, {"location": "available_software/detail/MinCED/", "title": "MinCED", "text": ""}, {"location": "available_software/detail/MinCED/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MinCED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MinCED, load one of these modules using a module load command like:

          module load MinCED/0.4.2-GCCcore-8.3.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MinCED/0.4.2-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/MinPath/", "title": "MinPath", "text": ""}, {"location": "available_software/detail/MinPath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MinPath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MinPath, load one of these modules using a module load command like:

          module load MinPath/1.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MinPath/1.6-GCCcore-11.2.0 x x x - x x MinPath/1.4-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Miniconda3/", "title": "Miniconda3", "text": ""}, {"location": "available_software/detail/Miniconda3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Miniconda3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Miniconda3, load one of these modules using a module load command like:

          module load Miniconda3/23.5.2-0\n
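
          A minimal sketch of creating and activating a conda environment with this module (the environment name and Python version are illustrative, and where conda environments may be stored is governed by site policy rather than by this overview):

          module load Miniconda3/23.5.2-0\nconda create -n myenv python=3.11\nsource activate myenv\n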

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Miniconda3/23.5.2-0 x x x x x x Miniconda3/22.11.1-1 x x x x x x Miniconda3/4.9.2 - x x - x x Miniconda3/4.8.3 - x x - x x Miniconda3/4.7.10 - - - - - x"}, {"location": "available_software/detail/Minipolish/", "title": "Minipolish", "text": ""}, {"location": "available_software/detail/Minipolish/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Minipolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Minipolish, load one of these modules using a module load command like:

          module load Minipolish/0.1.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Minipolish/0.1.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/MitoHiFi/", "title": "MitoHiFi", "text": ""}, {"location": "available_software/detail/MitoHiFi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MitoHiFi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MitoHiFi, load one of these modules using a module load command like:

          module load MitoHiFi/3.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MitoHiFi/3.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/ModelTest-NG/", "title": "ModelTest-NG", "text": ""}, {"location": "available_software/detail/ModelTest-NG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ModelTest-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ModelTest-NG, load one of these modules using a module load command like:

          module load ModelTest-NG/0.1.7-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ModelTest-NG/0.1.7-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Molden/", "title": "Molden", "text": ""}, {"location": "available_software/detail/Molden/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Molden installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Molden, load one of these modules using a module load command like:

          module load Molden/6.8-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Molden/6.8-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Molekel/", "title": "Molekel", "text": ""}, {"location": "available_software/detail/Molekel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Molekel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Molekel, load one of these modules using a module load command like:

          module load Molekel/5.4.0-Linux_x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Molekel/5.4.0-Linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Mono/", "title": "Mono", "text": ""}, {"location": "available_software/detail/Mono/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Mono installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Mono, load one of these modules using a module load command like:

          module load Mono/6.8.0.105-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Mono/6.8.0.105-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Monocle3/", "title": "Monocle3", "text": ""}, {"location": "available_software/detail/Monocle3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Monocle3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Monocle3, load one of these modules using a module load command like:

          module load Monocle3/1.3.1-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Monocle3/1.3.1-foss-2022a-R-4.2.1 x x x x x x Monocle3/0.2.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/MrBayes/", "title": "MrBayes", "text": ""}, {"location": "available_software/detail/MrBayes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MrBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MrBayes, load one of these modules using a module load command like:

          module load MrBayes/3.2.7-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MrBayes/3.2.7-gompi-2020b - x x x x x MrBayes/3.2.6-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MuJoCo/", "title": "MuJoCo", "text": ""}, {"location": "available_software/detail/MuJoCo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MuJoCo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MuJoCo, load one of these modules using a module load command like:

          module load MuJoCo/2.3.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MuJoCo/2.3.7-GCCcore-12.3.0 x x x x x x MuJoCo/2.1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/MultiQC/", "title": "MultiQC", "text": ""}, {"location": "available_software/detail/MultiQC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MultiQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MultiQC, load one of these modules using a module load command like:

          module load MultiQC/1.14-foss-2022a\n
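
          MultiQC aggregates the logs and reports of other bioinformatics tools it finds in a directory; a minimal sketch, run in a directory that already contains such output, is:

          module load MultiQC/1.14-foss-2022a\nmultiqc .\n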

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MultiQC/1.14-foss-2022a x x x x x x MultiQC/1.9-intel-2020a-Python-3.8.2 - x x - x x MultiQC/1.8-intel-2019b-Python-3.7.4 - x x - x x MultiQC/1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MultilevelEstimators/", "title": "MultilevelEstimators", "text": ""}, {"location": "available_software/detail/MultilevelEstimators/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MultilevelEstimators installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MultilevelEstimators, load one of these modules using a module load command like:

          module load MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2 x x x - x x"}, {"location": "available_software/detail/Multiwfn/", "title": "Multiwfn", "text": ""}, {"location": "available_software/detail/Multiwfn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Multiwfn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Multiwfn, load one of these modules using a module load command like:

          module load Multiwfn/3.6-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Multiwfn/3.6-intel-2019b - x x - x x"}, {"location": "available_software/detail/MyCC/", "title": "MyCC", "text": ""}, {"location": "available_software/detail/MyCC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which MyCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using MyCC, load one of these modules using a module load command like:

          module load MyCC/2017-03-01-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty MyCC/2017-03-01-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Myokit/", "title": "Myokit", "text": ""}, {"location": "available_software/detail/Myokit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Myokit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Myokit, load one of these modules using a module load command like:

          module load Myokit/1.32.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Myokit/1.32.0-fosscuda-2020b - - - - x - Myokit/1.32.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/NAMD/", "title": "NAMD", "text": ""}, {"location": "available_software/detail/NAMD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NAMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NAMD, load one of these modules using a module load command like:

          module load NAMD/2.14-foss-2023a-mpi\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NAMD/2.14-foss-2023a-mpi x x x x x x NAMD/2.14-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/NASM/", "title": "NASM", "text": ""}, {"location": "available_software/detail/NASM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NASM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NASM, load one of these modules using a module load command like:

          module load NASM/2.16.01-GCCcore-13.2.0\n
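
          Once loaded, the nasm assembler is on your PATH; for example (hello.asm is a hypothetical source file, and -f elf64 selects the 64-bit Linux object format):

          module load NASM/2.16.01-GCCcore-13.2.0\nnasm -v\nnasm -f elf64 -o hello.o hello.asm\n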

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NASM/2.16.01-GCCcore-13.2.0 x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x NASM/2.15.05-GCCcore-11.3.0 x x x x x x NASM/2.15.05-GCCcore-11.2.0 x x x x x x NASM/2.15.05-GCCcore-10.3.0 x x x x x x NASM/2.15.05-GCCcore-10.2.0 x x x x x x NASM/2.14.02-GCCcore-9.3.0 - x x - x x NASM/2.14.02-GCCcore-8.3.0 x x x - x x NASM/2.14.02-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/NCCL/", "title": "NCCL", "text": ""}, {"location": "available_software/detail/NCCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NCCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NCCL, load one of these modules using a module load command like:

          module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - NCCL/2.10.3-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - NCCL/2.10.3-GCCcore-10.3.0-CUDA-11.3.1 x - - - x - NCCL/2.8.3-GCCcore-10.2.0-CUDA-11.1.1 x - - - x x NCCL/2.8.3-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/NCL/", "title": "NCL", "text": ""}, {"location": "available_software/detail/NCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NCL, load one of these modules using a module load command like:

          module load NCL/6.6.2-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NCL/6.6.2-intel-2019b - - x - x x"}, {"location": "available_software/detail/NCO/", "title": "NCO", "text": ""}, {"location": "available_software/detail/NCO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NCO, load one of these modules using a module load command like:

          module load NCO/5.0.6-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NCO/5.0.6-intel-2019b - x x - x x NCO/5.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/NECI/", "title": "NECI", "text": ""}, {"location": "available_software/detail/NECI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NECI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NECI, load one of these modules using a module load command like:

          module load NECI/20230620-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NECI/20230620-foss-2022b x x x x x x NECI/20220711-foss-2022a - x x x x x"}, {"location": "available_software/detail/NEURON/", "title": "NEURON", "text": ""}, {"location": "available_software/detail/NEURON/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NEURON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NEURON, load one of these modules using a module load command like:

          module load NEURON/7.8.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NEURON/7.8.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/NGS/", "title": "NGS", "text": ""}, {"location": "available_software/detail/NGS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NGS, load one of these modules using a module load command like:

          module load NGS/2.11.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NGS/2.11.2-GCCcore-11.2.0 x x x x x x NGS/2.10.9-GCCcore-10.2.0 - x x x x x NGS/2.10.5-GCCcore-9.3.0 - x x - x x NGS/2.10.4-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/NGSpeciesID/", "title": "NGSpeciesID", "text": ""}, {"location": "available_software/detail/NGSpeciesID/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NGSpeciesID installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NGSpeciesID, load one of these modules using a module load command like:

          module load NGSpeciesID/0.1.2.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NGSpeciesID/0.1.2.1-foss-2021b x x x - x x NGSpeciesID/0.1.1.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NLMpy/", "title": "NLMpy", "text": ""}, {"location": "available_software/detail/NLMpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NLMpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NLMpy, load one of these modules using a module load command like:

          module load NLMpy/0.1.5-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NLMpy/0.1.5-intel-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/NLTK/", "title": "NLTK", "text": ""}, {"location": "available_software/detail/NLTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NLTK, load one of these modules using a module load command like:

          module load NLTK/3.8.1-foss-2022b\n
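
          A quick import check (a minimal sketch; note that fetching NLTK corpora with nltk.download() requires network access, which compute nodes may not have):

          module load NLTK/3.8.1-foss-2022b\npython -c 'import nltk; print(nltk.__version__)'\n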

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NLTK/3.8.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/NLopt/", "title": "NLopt", "text": ""}, {"location": "available_software/detail/NLopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NLopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NLopt, load one of these modules using a module load command like:

          module load NLopt/2.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NLopt/2.7.1-GCCcore-12.3.0 x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x NLopt/2.7.1-GCCcore-11.3.0 x x x x x x NLopt/2.7.0-GCCcore-11.2.0 x x x x x x NLopt/2.7.0-GCCcore-10.3.0 x x x x x x NLopt/2.6.2-GCCcore-10.2.0 x x x x x x NLopt/2.6.1-GCCcore-9.3.0 - x x - x x NLopt/2.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/NOVOPlasty/", "title": "NOVOPlasty", "text": ""}, {"location": "available_software/detail/NOVOPlasty/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NOVOPlasty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NOVOPlasty, load one of these modules using a module load command like:

          module load NOVOPlasty/3.7-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NOVOPlasty/3.7-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/NSPR/", "title": "NSPR", "text": ""}, {"location": "available_software/detail/NSPR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NSPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NSPR, load one of these modules using a module load command like:

          module load NSPR/4.35-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NSPR/4.35-GCCcore-12.3.0 x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x NSPR/4.34-GCCcore-11.3.0 x x x x x x NSPR/4.32-GCCcore-11.2.0 x x x x x x NSPR/4.30-GCCcore-10.3.0 x x x x x x NSPR/4.29-GCCcore-10.2.0 x x x x x x NSPR/4.25-GCCcore-9.3.0 - x x - x x NSPR/4.21-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NSS/", "title": "NSS", "text": ""}, {"location": "available_software/detail/NSS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NSS, load one of these modules using a module load command like:

          module load NSS/3.89.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NSS/3.89.1-GCCcore-12.3.0 x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x NSS/3.79-GCCcore-11.3.0 x x x x x x NSS/3.69-GCCcore-11.2.0 x x x x x x NSS/3.65-GCCcore-10.3.0 x x x x x x NSS/3.57-GCCcore-10.2.0 x x x x x x NSS/3.51-GCCcore-9.3.0 - x x - x x NSS/3.45-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NVHPC/", "title": "NVHPC", "text": ""}, {"location": "available_software/detail/NVHPC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NVHPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NVHPC, load one of these modules using a module load command like:

          module load NVHPC/21.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NVHPC/21.2 x - x - x - NVHPC/20.9 - - - - x -"}, {"location": "available_software/detail/NanoCaller/", "title": "NanoCaller", "text": ""}, {"location": "available_software/detail/NanoCaller/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoCaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoCaller, load one of these modules using a module load command like:

          module load NanoCaller/3.4.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoCaller/3.4.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/NanoComp/", "title": "NanoComp", "text": ""}, {"location": "available_software/detail/NanoComp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoComp, load one of these modules using a module load command like:

          module load NanoComp/1.13.1-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoComp/1.13.1-intel-2020b - x x - x x NanoComp/1.10.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoFilt/", "title": "NanoFilt", "text": ""}, {"location": "available_software/detail/NanoFilt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoFilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoFilt, load one of these modules using a module load command like:

          module load NanoFilt/2.6.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoFilt/2.6.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoPlot/", "title": "NanoPlot", "text": ""}, {"location": "available_software/detail/NanoPlot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoPlot, load one of these modules using a module load command like:

          module load NanoPlot/1.33.0-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoPlot/1.33.0-intel-2020b - x x - x x NanoPlot/1.28.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoStat/", "title": "NanoStat", "text": ""}, {"location": "available_software/detail/NanoStat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanoStat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanoStat, load one of these modules using a module load command like:

          module load NanoStat/1.6.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanoStat/1.6.0-foss-2022a x x x x x x NanoStat/1.6.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/NanopolishComp/", "title": "NanopolishComp", "text": ""}, {"location": "available_software/detail/NanopolishComp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NanopolishComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NanopolishComp, load one of these modules using a module load command like:

          module load NanopolishComp/0.6.11-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NanopolishComp/0.6.11-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/NetPyNE/", "title": "NetPyNE", "text": ""}, {"location": "available_software/detail/NetPyNE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NetPyNE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NetPyNE, load one of these modules using a module load command like:

          module load NetPyNE/1.0.2.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NetPyNE/1.0.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/NewHybrids/", "title": "NewHybrids", "text": ""}, {"location": "available_software/detail/NewHybrids/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NewHybrids installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NewHybrids, load one of these modules using a module load command like:

          module load NewHybrids/1.1_Beta3-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NewHybrids/1.1_Beta3-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/NextGenMap/", "title": "NextGenMap", "text": ""}, {"location": "available_software/detail/NextGenMap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NextGenMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NextGenMap, load one of these modules using a module load command like:

          module load NextGenMap/0.5.5-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NextGenMap/0.5.5-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Nextflow/", "title": "Nextflow", "text": ""}, {"location": "available_software/detail/Nextflow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Nextflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Nextflow, load one of these modules using a module load command like:

          module load Nextflow/23.10.0\n
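
          A minimal sketch to verify the installation; the classic hello pipeline is pulled from the internet on first use, so this assumes network access (e.g. on a login node):

          module load Nextflow/23.10.0\nnextflow -version\nnextflow run hello\n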

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Nextflow/23.10.0 x x x x x x Nextflow/23.04.2 x x x x x x Nextflow/22.10.5 x x x x x x Nextflow/22.10.0 x x x - x x Nextflow/21.10.6 - x x - x x Nextflow/21.08.0 - - - - - x Nextflow/21.03.0 - x x - x x Nextflow/20.10.0 - x x - x x Nextflow/20.04.1 - - x - x x Nextflow/20.01.0 - - x - x x Nextflow/19.12.0 - - x - x x"}, {"location": "available_software/detail/NiBabel/", "title": "NiBabel", "text": ""}, {"location": "available_software/detail/NiBabel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which NiBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using NiBabel, load one of these modules using a module load command like:

          module load NiBabel/4.0.2-foss-2022a\n
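
          NiBabel is a Python package for reading and writing neuroimaging file formats; a minimal import check looks like:

          module load NiBabel/4.0.2-foss-2022a\npython -c 'import nibabel; print(nibabel.__version__)'\n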

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty NiBabel/4.0.2-foss-2022a x x x x x x NiBabel/3.2.1-fosscuda-2020b x - - - x - NiBabel/3.2.1-foss-2021a x x x - x x NiBabel/3.2.1-foss-2020b - x x x x x NiBabel/3.1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Nim/", "title": "Nim", "text": ""}, {"location": "available_software/detail/Nim/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Nim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Nim, load one of these modules using a module load command like:

          module load Nim/1.6.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Nim/1.6.6-GCCcore-11.2.0 x x x - x x Nim/1.4.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Ninja/", "title": "Ninja", "text": ""}, {"location": "available_software/detail/Ninja/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Ninja installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Ninja, load one of these modules using a module load command like:

          module load Ninja/1.11.1-GCCcore-13.2.0\n
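
          Once loaded, the ninja build tool is available; a minimal sketch (the build directory is hypothetical and must already contain a build.ninja file generated by, for example, CMake or Meson):

          module load Ninja/1.11.1-GCCcore-13.2.0\nninja --version\nninja -C build\n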

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ninja/1.11.1-GCCcore-13.2.0 x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x Ninja/1.10.2-GCCcore-11.3.0 x x x x x x Ninja/1.10.2-GCCcore-11.2.0 x x x x x x Ninja/1.10.2-GCCcore-10.3.0 x x x x x x Ninja/1.10.1-GCCcore-10.2.0 x x x x x x Ninja/1.10.0-GCCcore-9.3.0 x x x x x x Ninja/1.9.0-GCCcore-8.3.0 x x x - x x Ninja/1.9.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Nipype/", "title": "Nipype", "text": ""}, {"location": "available_software/detail/Nipype/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Nipype installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Nipype, load one of these modules using a module load command like:

          module load Nipype/1.8.5-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Nipype/1.8.5-foss-2021a x x x - x x Nipype/1.4.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OBITools3/", "title": "OBITools3", "text": ""}, {"location": "available_software/detail/OBITools3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OBITools3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OBITools3, load one of these modules using a module load command like:

          module load OBITools3/3.0.1b26-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OBITools3/3.0.1b26-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ONNX-Runtime/", "title": "ONNX-Runtime", "text": ""}, {"location": "available_software/detail/ONNX-Runtime/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ONNX-Runtime installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ONNX-Runtime, load one of these modules using a module load command like:

          module load ONNX-Runtime/1.16.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ONNX-Runtime/1.16.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/ONNX/", "title": "ONNX", "text": ""}, {"location": "available_software/detail/ONNX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ONNX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ONNX, load one of these modules using a module load command like:

          module load ONNX/1.15.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ONNX/1.15.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/OPERA-MS/", "title": "OPERA-MS", "text": ""}, {"location": "available_software/detail/OPERA-MS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OPERA-MS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OPERA-MS, load one of these modules using a module load command like:

          module load OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ORCA/", "title": "ORCA", "text": ""}, {"location": "available_software/detail/ORCA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ORCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ORCA, load one of these modules using a module load command like:

          module load ORCA/5.0.4-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ORCA/5.0.4-gompi-2022a x x x x x x ORCA/5.0.3-gompi-2021b x x x x x x ORCA/5.0.2-gompi-2021b x x x x x x ORCA/4.2.1-gompi-2019b - x x - x x ORCA/4.2.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OSU-Micro-Benchmarks/", "title": "OSU-Micro-Benchmarks", "text": ""}, {"location": "available_software/detail/OSU-Micro-Benchmarks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

          module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x OSU-Micro-Benchmarks/7.1-1-iimpi-2023a x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a - x - - - - OSU-Micro-Benchmarks/5.8-iimpi-2021b x x x - x x OSU-Micro-Benchmarks/5.7.1-iompi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-iimpi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-gompi-2021b x x x - x x OSU-Micro-Benchmarks/5.7-iimpi-2020b - - x x x x OSU-Micro-Benchmarks/5.7-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020b - x x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-iimpi-2019b - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-gompi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Oases/", "title": "Oases", "text": ""}, {"location": "available_software/detail/Oases/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Oases installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Oases, load one of these modules using a module load command like:

          module load Oases/20180312-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Oases/20180312-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Omnipose/", "title": "Omnipose", "text": ""}, {"location": "available_software/detail/Omnipose/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Omnipose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Omnipose, load one of these modules using a module load command like:

          module load Omnipose/0.4.4-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Omnipose/0.4.4-foss-2022a-CUDA-11.7.0 x - - - x - Omnipose/0.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/OpenAI-Gym/", "title": "OpenAI-Gym", "text": ""}, {"location": "available_software/detail/OpenAI-Gym/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenAI-Gym installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenAI-Gym, load one of these modules using a module load command like:

          module load OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenBLAS/", "title": "OpenBLAS", "text": ""}, {"location": "available_software/detail/OpenBLAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using OpenBLAS, load one of these modules using a module load command like:

          module load OpenBLAS/0.3.24-GCC-13.2.0\n
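
          A minimal sketch of compiling a C program against this OpenBLAS build (myprog.c is hypothetical; it assumes the module pulls in its GCC/13.2.0 toolchain and sets an $EBROOTOPENBLAS root variable, as EasyBuild-generated modules usually do):

          module load OpenBLAS/0.3.24-GCC-13.2.0\ngcc myprog.c -o myprog -I$EBROOTOPENBLAS/include -L$EBROOTOPENBLAS/lib -lopenblas\n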

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x OpenBLAS/0.3.20-GCC-11.3.0 x x x x x x OpenBLAS/0.3.18-GCC-11.2.0 x x x x x x OpenBLAS/0.3.15-GCC-10.3.0 x x x x x x OpenBLAS/0.3.12-GCC-10.2.0 x x x x x x OpenBLAS/0.3.9-GCC-9.3.0 - x x - x x OpenBLAS/0.3.7-GCC-8.3.0 x x x - x x"}, {"location": "available_software/detail/OpenBabel/", "title": "OpenBabel", "text": ""}, {"location": "available_software/detail/OpenBabel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenBabel, load one of these modules using a module load command like:

          module load OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/OpenCV/", "title": "OpenCV", "text": ""}, {"location": "available_software/detail/OpenCV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenCV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenCV, load one of these modules using a module load command like:

          module load OpenCV/4.6.0-foss-2022a-contrib\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenCV/4.6.0-foss-2022a-contrib x x x x x x OpenCV/4.6.0-foss-2022a-CUDA-11.7.0-contrib x - x - x - OpenCV/4.5.5-foss-2021b-contrib x x x - x x OpenCV/4.5.3-foss-2021a-contrib - x x - x x OpenCV/4.5.3-foss-2021a-CUDA-11.3.1-contrib x - - - x - OpenCV/4.5.1-fosscuda-2020b-contrib x - - - x - OpenCV/4.5.1-foss-2020b-contrib - x x - x x OpenCV/4.2.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenCoarrays/", "title": "OpenCoarrays", "text": ""}, {"location": "available_software/detail/OpenCoarrays/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenCoarrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenCoarrays, load one of these modules using a module load command like:

          module load OpenCoarrays/2.8.0-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenCoarrays/2.8.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenEXR/", "title": "OpenEXR", "text": ""}, {"location": "available_software/detail/OpenEXR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenEXR, load one of these modules using a module load command like:

          module load OpenEXR/3.1.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x OpenEXR/3.1.5-GCCcore-11.3.0 x x x x x x OpenEXR/3.1.1-GCCcore-11.2.0 x x x - x x OpenEXR/3.0.1-GCCcore-10.3.0 x x x - x x OpenEXR/2.5.5-GCCcore-10.2.0 x x x x x x OpenEXR/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenFOAM-Extend/", "title": "OpenFOAM-Extend", "text": ""}, {"location": "available_software/detail/OpenFOAM-Extend/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFOAM-Extend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenFOAM-Extend, load one of these modules using a module load command like:

          module load OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16 - x x - x x OpenFOAM-Extend/4.1-20191120-intel-2019b-Python-2.7.16 - x x - x - OpenFOAM-Extend/4.0-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/OpenFOAM/", "title": "OpenFOAM", "text": ""}, {"location": "available_software/detail/OpenFOAM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenFOAM, load one of these modules using a module load command like:

          module load OpenFOAM/v2206-foss-2022a\n
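
          OpenFOAM usually also needs its own shell environment to be initialised after the module is loaded; a sketch, assuming the loaded module sets the $FOAM_BASH variable (as EasyBuild-based OpenFOAM installations generally do):

          module load OpenFOAM/v2206-foss-2022a\nsource $FOAM_BASH  # assumes the module defines FOAM_BASH\n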

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFOAM/v2206-foss-2022a x x x x x x OpenFOAM/v2112-foss-2021b x x x x x x OpenFOAM/v2106-foss-2021a x x x x x x OpenFOAM/v2012-foss-2020a - x x - x x OpenFOAM/v2006-foss-2020a - x x - x x OpenFOAM/v1912-foss-2019b - x x - x x OpenFOAM/v1906-foss-2019b - x x - x x OpenFOAM/10-foss-2023a x x x x x x OpenFOAM/10-foss-2022a x x x x x x OpenFOAM/9-intel-2021a - x x - x x OpenFOAM/9-foss-2021a x x x x x x OpenFOAM/8-intel-2020b - x - - - - OpenFOAM/8-foss-2020b x x x x x x OpenFOAM/8-foss-2020a - x x - x x OpenFOAM/7-foss-2019b-20200508 x x x - x x OpenFOAM/7-foss-2019b - x x - x x OpenFOAM/6-foss-2019b - x x - x x OpenFOAM/5.0-20180606-foss-2019b - x x - x x OpenFOAM/2.3.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/OpenFace/", "title": "OpenFace", "text": ""}, {"location": "available_software/detail/OpenFace/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenFace, load one of these modules using a module load command like:

          module load OpenFace/2.2.0-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFace/2.2.0-foss-2021a-CUDA-11.3.1 - - - - x - OpenFace/2.2.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/OpenFold/", "title": "OpenFold", "text": ""}, {"location": "available_software/detail/OpenFold/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenFold, load one of these modules using a module load command like:

          module load OpenFold/1.0.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenFold/1.0.1-foss-2022a-CUDA-11.7.0 - - x - - - OpenFold/1.0.1-foss-2021a-CUDA-11.3.1 x - - - x - OpenFold/1.0.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/OpenForceField/", "title": "OpenForceField", "text": ""}, {"location": "available_software/detail/OpenForceField/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenForceField installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenForceField, load one of these modules using a module load command like:

          module load OpenForceField/0.7.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenForceField/0.7.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenImageIO/", "title": "OpenImageIO", "text": ""}, {"location": "available_software/detail/OpenImageIO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenImageIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenImageIO, load one of these modules using a module load command like:

          module load OpenImageIO/2.0.12-iimpi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenImageIO/2.0.12-iimpi-2019b - x x - x x OpenImageIO/2.0.12-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenJPEG/", "title": "OpenJPEG", "text": ""}, {"location": "available_software/detail/OpenJPEG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenJPEG, load one of these modules using a module load command like:

          module load OpenJPEG/2.5.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x OpenJPEG/2.5.0-GCCcore-11.3.0 x x x x x x OpenJPEG/2.4.0-GCCcore-11.2.0 x x x x x x OpenJPEG/2.4.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/OpenMM-PLUMED/", "title": "OpenMM-PLUMED", "text": ""}, {"location": "available_software/detail/OpenMM-PLUMED/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMM-PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenMM-PLUMED, load one of these modules using a module load command like:

          module load OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMM/", "title": "OpenMM", "text": ""}, {"location": "available_software/detail/OpenMM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenMM, load one of these modules using a module load command like:

          module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\n
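
          A quick way to check that a loaded OpenMM installation is importable is to query its version from Python (a minimal sketch, assuming the OpenMM module brings a compatible Python along as a dependency):

          module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\npython -c 'import openmm; print(openmm.Platform.getOpenMMVersion())'  # python is assumed to come from the module's dependencies\n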

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMM/8.0.0-foss-2022a-CUDA-11.7.0 x - - - x - OpenMM/8.0.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2022a-CUDA-11.7.0 - - x - - - OpenMM/7.7.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2021a-CUDA-11.3.1 x - - - x - OpenMM/7.7.0-foss-2021a x x x - x x OpenMM/7.5.1-fosscuda-2020b x - - - x - OpenMM/7.5.1-foss-2021b-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021b-CUDA-11.4.1-DeepMind-patch x - - - x - OpenMM/7.5.1-foss-2021a-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021a-CUDA-11.3.1-DeepMind-patch x - - - x - OpenMM/7.5.0-intel-2020b - x x - x x OpenMM/7.5.0-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.5.0-fosscuda-2020b x - - - x - OpenMM/7.5.0-foss-2020b x x x x x x OpenMM/7.4.2-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.4.1-intel-2019b-Python-3.7.4 - x x - x x OpenMM/7.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenMMTools/", "title": "OpenMMTools", "text": ""}, {"location": "available_software/detail/OpenMMTools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMMTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenMMTools, load one of these modules using a module load command like:

          module load OpenMMTools/0.20.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMMTools/0.20.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMPI/", "title": "OpenMPI", "text": ""}, {"location": "available_software/detail/OpenMPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenMPI, load one of these modules using a module load command like:

          module load OpenMPI/4.1.6-GCC-13.2.0\n
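
          Once an OpenMPI module is loaded, MPI programs are normally started through the mpirun launcher it provides (a minimal sketch; ./my_mpi_program and the process count of 4 are hypothetical placeholders):

          module load OpenMPI/4.1.6-GCC-13.2.0\nmpirun -np 4 ./my_mpi_program  # my_mpi_program is a placeholder for your own MPI binary\n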

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMPI/4.1.6-GCC-13.2.0 x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x OpenMPI/4.1.4-GCC-11.3.0 x x x x x x OpenMPI/4.1.1-intel-compilers-2021.2.0 x x x x x x OpenMPI/4.1.1-GCC-11.2.0 x x x x x x OpenMPI/4.1.1-GCC-10.3.0 x x x x x x OpenMPI/4.0.5-iccifort-2020.4.304 x x x x x x OpenMPI/4.0.5-gcccuda-2020b x x x x x x OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1 x - x - x - OpenMPI/4.0.5-GCC-10.2.0 x x x x x x OpenMPI/4.0.3-iccifort-2020.1.217 - x - - - - OpenMPI/4.0.3-GCC-9.3.0 - x x x x x OpenMPI/3.1.4-GCC-8.3.0-ucx - x - - - - OpenMPI/3.1.4-GCC-8.3.0 x x x x x x"}, {"location": "available_software/detail/OpenMolcas/", "title": "OpenMolcas", "text": ""}, {"location": "available_software/detail/OpenMolcas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenMolcas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenMolcas, load one of these modules using a module load command like:

          module load OpenMolcas/21.06-iomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenMolcas/21.06-iomkl-2021a x x x x x x OpenMolcas/21.06-intel-2021a - x x - x x"}, {"location": "available_software/detail/OpenPGM/", "title": "OpenPGM", "text": ""}, {"location": "available_software/detail/OpenPGM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenPGM, load one of these modules using a module load command like:

          module load OpenPGM/5.2.122-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-12.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-9.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenPIV/", "title": "OpenPIV", "text": ""}, {"location": "available_software/detail/OpenPIV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenPIV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenPIV, load one of these modules using a module load command like:

          module load OpenPIV/0.21.8-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenPIV/0.21.8-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenSSL/", "title": "OpenSSL", "text": ""}, {"location": "available_software/detail/OpenSSL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenSSL, load one of these modules using a module load command like:

          module load OpenSSL/1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSSL/1.1 x x x x x x"}, {"location": "available_software/detail/OpenSees/", "title": "OpenSees", "text": ""}, {"location": "available_software/detail/OpenSees/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSees installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenSees, load one of these modules using a module load command like:

          module load OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel - x x - x x OpenSees/3.2.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenSlide-Java/", "title": "OpenSlide-Java", "text": ""}, {"location": "available_software/detail/OpenSlide-Java/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSlide-Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenSlide-Java, load one of these modules using a module load command like:

          module load OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/OpenSlide/", "title": "OpenSlide", "text": ""}, {"location": "available_software/detail/OpenSlide/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OpenSlide installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OpenSlide, load one of these modules using a module load command like:

          module load OpenSlide/3.4.1-GCCcore-12.3.0-largefiles\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OpenSlide/3.4.1-GCCcore-12.3.0-largefiles x x x x x x OpenSlide/3.4.1-GCCcore-11.3.0-largefiles x - x - x - OpenSlide/3.4.1-GCCcore-11.2.0 x x x - x x OpenSlide/3.4.1-GCCcore-10.3.0-largefiles x x x - x x"}, {"location": "available_software/detail/Optuna/", "title": "Optuna", "text": ""}, {"location": "available_software/detail/Optuna/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Optuna installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Optuna, load one of these modules using a module load command like:

          module load Optuna/3.1.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Optuna/3.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/OrthoFinder/", "title": "OrthoFinder", "text": ""}, {"location": "available_software/detail/OrthoFinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which OrthoFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using OrthoFinder, load one of these modules using a module load command like:

          module load OrthoFinder/2.5.5-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty OrthoFinder/2.5.5-foss-2023a x x x x x x OrthoFinder/2.5.4-foss-2020b - x x x x x OrthoFinder/2.5.2-foss-2020b - x x x x x OrthoFinder/2.3.11-intel-2019b-Python-3.7.4 - x x - x x OrthoFinder/2.3.8-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Osi/", "title": "Osi", "text": ""}, {"location": "available_software/detail/Osi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Osi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Osi, load one of these modules using a module load command like:

          module load Osi/0.108.9-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Osi/0.108.9-GCC-12.3.0 x x x x x x Osi/0.108.8-GCC-12.2.0 x x x x x x Osi/0.108.7-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PASA/", "title": "PASA", "text": ""}, {"location": "available_software/detail/PASA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PASA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PASA, load one of these modules using a module load command like:

          module load PASA/2.5.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PASA/2.5.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/PBGZIP/", "title": "PBGZIP", "text": ""}, {"location": "available_software/detail/PBGZIP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PBGZIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PBGZIP, load one of these modules using a module load command like:

          module load PBGZIP/20160804-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PBGZIP/20160804-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PCRE/", "title": "PCRE", "text": ""}, {"location": "available_software/detail/PCRE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PCRE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PCRE, load one of these modules using a module load command like:

          module load PCRE/8.45-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PCRE/8.45-GCCcore-12.3.0 x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x PCRE/8.45-GCCcore-11.3.0 x x x x x x PCRE/8.45-GCCcore-11.2.0 x x x x x x PCRE/8.44-GCCcore-10.3.0 x x x x x x PCRE/8.44-GCCcore-10.2.0 x x x x x x PCRE/8.44-GCCcore-9.3.0 x x x x x x PCRE/8.43-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/PCRE2/", "title": "PCRE2", "text": ""}, {"location": "available_software/detail/PCRE2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PCRE2, load one of these modules using a module load command like:

          module load PCRE2/10.42-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PCRE2/10.42-GCCcore-12.3.0 x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x PCRE2/10.40-GCCcore-11.3.0 x x x x x x PCRE2/10.37-GCCcore-11.2.0 x x x x x x PCRE2/10.36-GCCcore-10.3.0 x x x x x x PCRE2/10.36 - x x - x - PCRE2/10.35-GCCcore-10.2.0 x x x x x x PCRE2/10.34-GCCcore-9.3.0 - x x - x x PCRE2/10.33-GCCcore-8.3.0 x x x - x x PCRE2/10.32 - - x - x -"}, {"location": "available_software/detail/PEAR/", "title": "PEAR", "text": ""}, {"location": "available_software/detail/PEAR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PEAR, load one of these modules using a module load command like:

          module load PEAR/0.9.11-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PEAR/0.9.11-GCCcore-9.3.0 - x x - x x PEAR/0.9.11-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/PETSc/", "title": "PETSc", "text": ""}, {"location": "available_software/detail/PETSc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PETSc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PETSc, load one of these modules using a module load command like:

          module load PETSc/3.18.4-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PETSc/3.18.4-intel-2021b x x x x x x PETSc/3.17.4-foss-2022a x x x x x x PETSc/3.15.1-foss-2021a - x x - x x PETSc/3.14.4-foss-2020b - x x x x x PETSc/3.12.4-intel-2019b-Python-3.7.4 - - x - x - PETSc/3.12.4-intel-2019b-Python-2.7.16 - x x - x x PETSc/3.12.4-foss-2020a-Python-3.8.2 - x x - x x PETSc/3.12.4-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/PHYLIP/", "title": "PHYLIP", "text": ""}, {"location": "available_software/detail/PHYLIP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PHYLIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PHYLIP, load one of these modules using a module load command like:

          module load PHYLIP/3.697-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PHYLIP/3.697-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/PICRUSt2/", "title": "PICRUSt2", "text": ""}, {"location": "available_software/detail/PICRUSt2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PICRUSt2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PICRUSt2, load one of these modules using a module load command like:

          module load PICRUSt2/2.5.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PICRUSt2/2.5.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/PLAMS/", "title": "PLAMS", "text": ""}, {"location": "available_software/detail/PLAMS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLAMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLAMS, load one of these modules using a module load command like:

          module load PLAMS/1.5.1-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLAMS/1.5.1-intel-2022a x x x x x x"}, {"location": "available_software/detail/PLINK/", "title": "PLINK", "text": ""}, {"location": "available_software/detail/PLINK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLINK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLINK, load one of these modules using a module load command like:

          module load PLINK/2.00a3.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLINK/2.00a3.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PLUMED/", "title": "PLUMED", "text": ""}, {"location": "available_software/detail/PLUMED/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLUMED, load one of these modules using a module load command like:

          module load PLUMED/2.9.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLUMED/2.9.0-foss-2023a x x x x x x PLUMED/2.9.0-foss-2022b x x x x x x PLUMED/2.8.1-foss-2022a x x x x x x PLUMED/2.7.3-foss-2021b x x x - x x PLUMED/2.7.2-foss-2021a x x x x x x PLUMED/2.6.2-intelcuda-2020b - - - - x - PLUMED/2.6.2-intel-2020b - x x - x - PLUMED/2.6.2-foss-2020b - x x x x x PLUMED/2.6.0-iomkl-2020a-Python-3.8.2 - x - - - - PLUMED/2.6.0-intel-2020a-Python-3.8.2 - x x - x x PLUMED/2.6.0-foss-2020a-Python-3.8.2 - x x - x x PLUMED/2.5.3-intel-2019b-Python-3.7.4 - x x - x x PLUMED/2.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PLY/", "title": "PLY", "text": ""}, {"location": "available_software/detail/PLY/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PLY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PLY, load one of these modules using a module load command like:

          module load PLY/3.11-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PLY/3.11-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PMIx/", "title": "PMIx", "text": ""}, {"location": "available_software/detail/PMIx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PMIx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PMIx, load one of these modules using a module load command like:

          module load PMIx/4.2.6-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PMIx/4.2.6-GCCcore-13.2.0 x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x PMIx/4.1.2-GCCcore-11.3.0 x x x x x x PMIx/4.1.0-GCCcore-11.2.0 x x x x x x PMIx/3.2.3-GCCcore-10.3.0 x x x x x x PMIx/3.1.5-GCCcore-10.2.0 x x x x x x PMIx/3.1.5-GCCcore-9.3.0 x x x x x x PMIx/3.1.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/POT/", "title": "POT", "text": ""}, {"location": "available_software/detail/POT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which POT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using POT, load one of these modules using a module load command like:

          module load POT/0.9.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty POT/0.9.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/POV-Ray/", "title": "POV-Ray", "text": ""}, {"location": "available_software/detail/POV-Ray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which POV-Ray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using POV-Ray, load one of these modules using a module load command like:

          module load POV-Ray/3.7.0.8-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty POV-Ray/3.7.0.8-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/PPanGGOLiN/", "title": "PPanGGOLiN", "text": ""}, {"location": "available_software/detail/PPanGGOLiN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PPanGGOLiN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PPanGGOLiN, load one of these modules using a module load command like:

          module load PPanGGOLiN/1.1.136-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PPanGGOLiN/1.1.136-foss-2021b x x x - x x"}, {"location": "available_software/detail/PRANK/", "title": "PRANK", "text": ""}, {"location": "available_software/detail/PRANK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PRANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PRANK, load one of these modules using a module load command like:

          module load PRANK/170427-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PRANK/170427-GCC-10.2.0 - x x x x x PRANK/170427-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/PRINSEQ/", "title": "PRINSEQ", "text": ""}, {"location": "available_software/detail/PRINSEQ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PRINSEQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PRINSEQ, load one of these modules using a module load command like:

          module load PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0 x x x - x x PRINSEQ/0.20.4-foss-2020b-Perl-5.32.0 - x x x x -"}, {"location": "available_software/detail/PRISMS-PF/", "title": "PRISMS-PF", "text": ""}, {"location": "available_software/detail/PRISMS-PF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PRISMS-PF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PRISMS-PF, load one of these modules using a module load command like:

          module load PRISMS-PF/2.2-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PRISMS-PF/2.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/PROJ/", "title": "PROJ", "text": ""}, {"location": "available_software/detail/PROJ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PROJ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PROJ, load one of these modules using a module load command like:

          module load PROJ/9.2.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PROJ/9.2.0-GCCcore-12.3.0 x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x PROJ/9.0.0-GCCcore-11.3.0 x x x x x x PROJ/8.1.0-GCCcore-11.2.0 x x x x x x PROJ/8.0.1-GCCcore-10.3.0 x x x x x x PROJ/7.2.1-GCCcore-10.2.0 - x x x x x PROJ/7.0.0-GCCcore-9.3.0 - x x - x x PROJ/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pandoc/", "title": "Pandoc", "text": ""}, {"location": "available_software/detail/Pandoc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pandoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pandoc, load one of these modules using a module load command like:

          module load Pandoc/2.13\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pandoc/2.13 - x x x x x"}, {"location": "available_software/detail/Pango/", "title": "Pango", "text": ""}, {"location": "available_software/detail/Pango/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pango installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pango, load one of these modules using a module load command like:

          module load Pango/1.50.14-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pango/1.50.14-GCCcore-12.3.0 x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x Pango/1.50.7-GCCcore-11.3.0 x x x x x x Pango/1.48.8-GCCcore-11.2.0 x x x x x x Pango/1.48.5-GCCcore-10.3.0 x x x x x x Pango/1.47.0-GCCcore-10.2.0 x x x x x x Pango/1.44.7-GCCcore-9.3.0 - x x - x x Pango/1.44.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/ParMETIS/", "title": "ParMETIS", "text": ""}, {"location": "available_software/detail/ParMETIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParMETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParMETIS, load one of these modules using a module load command like:

          module load ParMETIS/4.0.3-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParMETIS/4.0.3-iimpi-2020a - x x - x x ParMETIS/4.0.3-iimpi-2019b - x x - x x ParMETIS/4.0.3-gompi-2022a x x x x x x ParMETIS/4.0.3-gompi-2021a - x x - x x ParMETIS/4.0.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParMGridGen/", "title": "ParMGridGen", "text": ""}, {"location": "available_software/detail/ParMGridGen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParMGridGen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParMGridGen, load one of these modules using a module load command like:

          module load ParMGridGen/1.0-iimpi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParMGridGen/1.0-iimpi-2019b - x x - x x ParMGridGen/1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParaView/", "title": "ParaView", "text": ""}, {"location": "available_software/detail/ParaView/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParaView installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParaView, load one of these modules using a module load command like:

          module load ParaView/5.11.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParaView/5.11.2-foss-2023a x x x x x x ParaView/5.10.1-foss-2022a-mpi x x x x x x ParaView/5.9.1-intel-2021a-mpi - x x - x x ParaView/5.9.1-foss-2021b-mpi x x x x x x ParaView/5.9.1-foss-2021a-mpi x x x x x x ParaView/5.8.1-intel-2020b-mpi - x - - - - ParaView/5.8.1-foss-2020b-mpi x x x x x x ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi - x x - x x ParaView/5.6.2-foss-2019b-Python-3.7.4-mpi x x x - x x ParaView/5.4.1-foss-2019b-Python-2.7.16-mpi - x x - x x"}, {"location": "available_software/detail/ParmEd/", "title": "ParmEd", "text": ""}, {"location": "available_software/detail/ParmEd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ParmEd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ParmEd, load one of these modules using a module load command like:

          module load ParmEd/3.2.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ParmEd/3.2.0-intel-2020a-Python-3.8.2 - x x - x x ParmEd/3.2.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Parsl/", "title": "Parsl", "text": ""}, {"location": "available_software/detail/Parsl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Parsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Parsl, load one of these modules using a module load command like:

          module load Parsl/2023.7.17-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Parsl/2023.7.17-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PartitionFinder/", "title": "PartitionFinder", "text": ""}, {"location": "available_software/detail/PartitionFinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PartitionFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PartitionFinder, load one of these modules using a module load command like:

          module load PartitionFinder/2.1.1-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PartitionFinder/2.1.1-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/Perl-bundle-CPAN/", "title": "Perl-bundle-CPAN", "text": ""}, {"location": "available_software/detail/Perl-bundle-CPAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Perl-bundle-CPAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

          module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Perl/", "title": "Perl", "text": ""}, {"location": "available_software/detail/Perl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Perl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Perl, load one of these modules using a module load command like:

          module load Perl/5.38.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Perl/5.38.0-GCCcore-13.2.0 x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x Perl/5.34.1-GCCcore-11.3.0-minimal x x x x x x Perl/5.34.1-GCCcore-11.3.0 x x x x x x Perl/5.34.0-GCCcore-11.2.0-minimal x x x x x x Perl/5.34.0-GCCcore-11.2.0 x x x x x x Perl/5.32.1-GCCcore-10.3.0-minimal x x x x x x Perl/5.32.1-GCCcore-10.3.0 x x x x x x Perl/5.32.0-GCCcore-10.2.0-minimal x x x x x x Perl/5.32.0-GCCcore-10.2.0 x x x x x x Perl/5.30.2-GCCcore-9.3.0-minimal x x x x x x Perl/5.30.2-GCCcore-9.3.0 x x x x x x Perl/5.30.0-GCCcore-8.3.0-minimal x x x x x x Perl/5.30.0-GCCcore-8.3.0 x x x x x x Perl/5.28.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Phenoflow/", "title": "Phenoflow", "text": ""}, {"location": "available_software/detail/Phenoflow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Phenoflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Phenoflow, load one of these modules using a module load command like:

          module load Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/PhyloPhlAn/", "title": "PhyloPhlAn", "text": ""}, {"location": "available_software/detail/PhyloPhlAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PhyloPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PhyloPhlAn, load one of these modules using a module load command like:

          module load PhyloPhlAn/3.0.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PhyloPhlAn/3.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Pillow-SIMD/", "title": "Pillow-SIMD", "text": ""}, {"location": "available_software/detail/Pillow-SIMD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pillow-SIMD, load one of these modules using a module load command like:

          module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x Pillow-SIMD/9.5.0-GCCcore-12.2.0 x x x x x x Pillow-SIMD/9.2.0-GCCcore-11.3.0 x x x x x x Pillow-SIMD/8.2.0-GCCcore-10.3.0 x x x - x x Pillow-SIMD/7.1.2-GCCcore-10.2.0 x x x x x x Pillow-SIMD/6.0.x.post0-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/Pillow/", "title": "Pillow", "text": ""}, {"location": "available_software/detail/Pillow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pillow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pillow, load one of these modules using a module load command like:

          module load Pillow/10.2.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pillow/10.2.0-GCCcore-13.2.0 x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x Pillow/9.1.1-GCCcore-11.3.0 x x x x x x Pillow/8.3.2-GCCcore-11.2.0 x x x x x x Pillow/8.3.1-GCCcore-11.2.0 x x x - x x Pillow/8.2.0-GCCcore-10.3.0 x x x x x x Pillow/8.0.1-GCCcore-10.2.0 x x x x x x Pillow/7.0.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x Pillow/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pilon/", "title": "Pilon", "text": ""}, {"location": "available_software/detail/Pilon/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pilon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pilon, load one of these modules using a module load command like:

          module load Pilon/1.23-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pilon/1.23-Java-11 x x x x x x Pilon/1.23-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Pint/", "title": "Pint", "text": ""}, {"location": "available_software/detail/Pint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pint, load one of these modules using a module load command like:

          module load Pint/0.22-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pint/0.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PnetCDF/", "title": "PnetCDF", "text": ""}, {"location": "available_software/detail/PnetCDF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PnetCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PnetCDF, load one of these modules using a module load command like:

          module load PnetCDF/1.12.3-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PnetCDF/1.12.3-gompi-2022a x - x - x - PnetCDF/1.12.3-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Porechop/", "title": "Porechop", "text": ""}, {"location": "available_software/detail/Porechop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Porechop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Porechop, load one of these modules using a module load command like:

          module load Porechop/0.2.4-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Porechop/0.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PostgreSQL/", "title": "PostgreSQL", "text": ""}, {"location": "available_software/detail/PostgreSQL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PostgreSQL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PostgreSQL, load one of these modules using a module load command like:

          module load PostgreSQL/16.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x PostgreSQL/14.4-GCCcore-11.3.0 x x x x x x PostgreSQL/13.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/Primer3/", "title": "Primer3", "text": ""}, {"location": "available_software/detail/Primer3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Primer3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Primer3, load one of these modules using a module load command like:

          module load Primer3/2.5.0-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Primer3/2.5.0-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/ProBiS/", "title": "ProBiS", "text": ""}, {"location": "available_software/detail/ProBiS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ProBiS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ProBiS, load one of these modules using a module load command like:

          module load ProBiS/20230403-gompi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ProBiS/20230403-gompi-2022b x x x x x x"}, {"location": "available_software/detail/ProtHint/", "title": "ProtHint", "text": ""}, {"location": "available_software/detail/ProtHint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ProtHint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ProtHint, load one of these modules using a module load command like:

          module load ProtHint/2.6.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ProtHint/2.6.0-GCC-11.3.0 x x x x x x ProtHint/2.6.0-GCC-11.2.0 x x x x x x ProtHint/2.6.0-GCC-10.2.0 x x x x x x ProtHint/2.4.0-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/PsiCLASS/", "title": "PsiCLASS", "text": ""}, {"location": "available_software/detail/PsiCLASS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PsiCLASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PsiCLASS, load one of these modules using a module load command like:

          module load PsiCLASS/1.0.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PsiCLASS/1.0.3-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/PuLP/", "title": "PuLP", "text": ""}, {"location": "available_software/detail/PuLP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PuLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PuLP, load one of these modules using a module load command like:

          module load PuLP/2.8.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PuLP/2.8.0-foss-2023a x x x x x x PuLP/2.7.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/PyBerny/", "title": "PyBerny", "text": ""}, {"location": "available_software/detail/PyBerny/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyBerny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyBerny, load one of these modules using a module load command like:

          module load PyBerny/0.6.3-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyBerny/0.6.3-foss-2022b x x x x x x PyBerny/0.6.3-foss-2022a - x x x x x PyBerny/0.6.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyCairo/", "title": "PyCairo", "text": ""}, {"location": "available_software/detail/PyCairo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyCairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyCairo, load one of these modules using a module load command like:

          module load PyCairo/1.21.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyCairo/1.21.0-GCCcore-11.3.0 x x x x x x PyCairo/1.20.1-GCCcore-11.2.0 x x x x x x PyCairo/1.20.1-GCCcore-10.3.0 x x x x x x PyCairo/1.20.0-GCCcore-10.2.0 - x x x x x PyCairo/1.18.2-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/PyCalib/", "title": "PyCalib", "text": ""}, {"location": "available_software/detail/PyCalib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyCalib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyCalib, load one of these modules using a module load command like:

          module load PyCalib/20230531-gfbf-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyCalib/20230531-gfbf-2022b x x x x x x PyCalib/0.1.0.dev0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyCheMPS2/", "title": "PyCheMPS2", "text": ""}, {"location": "available_software/detail/PyCheMPS2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyCheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyCheMPS2, load one of these modules using a module load command like:

          module load PyCheMPS2/1.8.12-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyCheMPS2/1.8.12-foss-2022b x x x x x x PyCheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/PyFoam/", "title": "PyFoam", "text": ""}, {"location": "available_software/detail/PyFoam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyFoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyFoam, load one of these modules using a module load command like:

          module load PyFoam/2020.5-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyFoam/2020.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyGEOS/", "title": "PyGEOS", "text": ""}, {"location": "available_software/detail/PyGEOS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyGEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyGEOS, load one of these modules using a module load command like:

          module load PyGEOS/0.8-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyGEOS/0.8-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyGObject/", "title": "PyGObject", "text": ""}, {"location": "available_software/detail/PyGObject/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyGObject installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyGObject, load one of these modules using a module load command like:

          module load PyGObject/3.42.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyGObject/3.42.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyInstaller/", "title": "PyInstaller", "text": ""}, {"location": "available_software/detail/PyInstaller/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyInstaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyInstaller, load one of these modules using a module load command like:

          module load PyInstaller/6.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyInstaller/6.3.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/PyKeOps/", "title": "PyKeOps", "text": ""}, {"location": "available_software/detail/PyKeOps/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyKeOps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyKeOps, load one of these modules using a module load command like:

          module load PyKeOps/2.0-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyKeOps/2.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/PyMC/", "title": "PyMC", "text": ""}, {"location": "available_software/detail/PyMC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMC, load one of these modules using a module load command like:

          module load PyMC/5.9.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMC/5.9.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/PyMC3/", "title": "PyMC3", "text": ""}, {"location": "available_software/detail/PyMC3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMC3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMC3, load one of these modules using a module load command like:

          module load PyMC3/3.11.1-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMC3/3.11.1-intel-2021b x x x - x x PyMC3/3.11.1-intel-2020b - - x - x x PyMC3/3.11.1-fosscuda-2020b - - - - x - PyMC3/3.8-intel-2019b-Python-3.7.4 - - x - x x PyMC3/3.8-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyMDE/", "title": "PyMDE", "text": ""}, {"location": "available_software/detail/PyMDE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMDE, load one of these modules using a module load command like:

          module load PyMDE/0.1.18-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMDE/0.1.18-foss-2022a-CUDA-11.7.0 x - x - x - PyMDE/0.1.18-foss-2022a x x x x x x"}, {"location": "available_software/detail/PyMOL/", "title": "PyMOL", "text": ""}, {"location": "available_software/detail/PyMOL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyMOL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyMOL, load one of these modules using a module load command like:

          module load PyMOL/2.5.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyMOL/2.5.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/PyOD/", "title": "PyOD", "text": ""}, {"location": "available_software/detail/PyOD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyOD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyOD, load one of these modules using a module load command like:

          module load PyOD/0.8.7-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyOD/0.8.7-intel-2020b - x x - x x PyOD/0.8.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenCL/", "title": "PyOpenCL", "text": ""}, {"location": "available_software/detail/PyOpenCL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyOpenCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyOpenCL, load one of these modules using a module load command like:

          module load PyOpenCL/2023.1.4-foss-2023a\n
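
          For the -CUDA-suffixed builds (available on the GPU clusters only, see the table below), you can quickly check which OpenCL platforms are visible once the module is loaded. A minimal sketch, to be run on a GPU node; the one-liner only uses PyOpenCL's standard platform query:

          module load PyOpenCL/2023.1.4-foss-2022a-CUDA-11.7.0
          python -c "import pyopencl as cl; print([p.name for p in cl.get_platforms()])"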

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyOpenCL/2023.1.4-foss-2023a x x x x x x PyOpenCL/2023.1.4-foss-2022a-CUDA-11.7.0 x - - - x - PyOpenCL/2023.1.4-foss-2022a x x x x x x PyOpenCL/2021.2.13-foss-2021b-CUDA-11.4.1 x - - - x - PyOpenCL/2021.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenGL/", "title": "PyOpenGL", "text": ""}, {"location": "available_software/detail/PyOpenGL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyOpenGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyOpenGL, load one of these modules using a module load command like:

          module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.2.0 x x x - x x PyOpenGL/3.1.5-GCCcore-10.3.0 - x x - x x PyOpenGL/3.1.5-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/PyPy/", "title": "PyPy", "text": ""}, {"location": "available_software/detail/PyPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyPy, load one of these modules using a module load command like:

          module load PyPy/7.3.12-3.10\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyPy/7.3.12-3.10 x x x x x x"}, {"location": "available_software/detail/PyQt5/", "title": "PyQt5", "text": ""}, {"location": "available_software/detail/PyQt5/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyQt5, load one of these modules using a module load command like:

          module load PyQt5/5.15.7-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyQt5/5.15.7-GCCcore-12.2.0 x x x x x x PyQt5/5.15.5-GCCcore-11.3.0 x x x x x x PyQt5/5.15.4-GCCcore-11.2.0 x x x x x x PyQt5/5.15.4-GCCcore-10.3.0 - x x - x x PyQt5/5.15.1-GCCcore-10.2.0 x x x x x x PyQt5/5.15.1-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyQtGraph/", "title": "PyQtGraph", "text": ""}, {"location": "available_software/detail/PyQtGraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyQtGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyQtGraph, load one of these modules using a module load command like:

          module load PyQtGraph/0.13.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyQtGraph/0.13.3-foss-2022a x x x x x x PyQtGraph/0.12.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/PyRETIS/", "title": "PyRETIS", "text": ""}, {"location": "available_software/detail/PyRETIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyRETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyRETIS, load one of these modules using a module load command like:

          module load PyRETIS/2.5.0-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyRETIS/2.5.0-intel-2020b - x x - x x PyRETIS/2.5.0-intel-2020a-Python-3.8.2 - - x - x x PyRETIS/2.5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyRe/", "title": "PyRe", "text": ""}, {"location": "available_software/detail/PyRe/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyRe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyRe, load one of these modules using a module load command like:

          module load PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4 - x - - - x PyRe/5.0.3-20190221-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PySCF/", "title": "PySCF", "text": ""}, {"location": "available_software/detail/PySCF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PySCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PySCF, load one of these modules using a module load command like:

          module load PySCF/2.4.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PySCF/2.4.0-foss-2022b x x x x x x PySCF/2.1.1-foss-2022a - x x x x x PySCF/1.7.6-gomkl-2021a x x x - x x PySCF/1.7.6-foss-2021a x x x - x x"}, {"location": "available_software/detail/PyStan/", "title": "PyStan", "text": ""}, {"location": "available_software/detail/PyStan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyStan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyStan, load one of these modules using a module load command like:

          module load PyStan/2.19.1.1-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyStan/2.19.1.1-intel-2020b - x x - x x"}, {"location": "available_software/detail/PyTables/", "title": "PyTables", "text": ""}, {"location": "available_software/detail/PyTables/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTables installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTables, load one of these modules using a module load command like:

          module load PyTables/3.8.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTables/3.8.0-foss-2022a x x x x x x PyTables/3.6.1-intel-2020b - x x - x x PyTables/3.6.1-intel-2020a-Python-3.8.2 x x x x x x PyTables/3.6.1-fosscuda-2020b - - - - x - PyTables/3.6.1-foss-2021b x x x x x x PyTables/3.6.1-foss-2021a x x x x x x PyTables/3.6.1-foss-2020b - x x x x x PyTables/3.6.1-foss-2020a-Python-3.8.2 - x x - x x PyTables/3.6.1-foss-2019b-Python-3.7.4 - x x - x x PyTables/3.5.2-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/PyTensor/", "title": "PyTensor", "text": ""}, {"location": "available_software/detail/PyTensor/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTensor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTensor, load one of these modules using a module load command like:

          module load PyTensor/2.17.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTensor/2.17.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/PyTorch-Geometric/", "title": "PyTorch-Geometric", "text": ""}, {"location": "available_software/detail/PyTorch-Geometric/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch-Geometric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch-Geometric, load one of these modules using a module load command like:

          module load PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1 - - - - x - PyTorch-Geometric/1.7.0-foss-2020b-numba-0.53.1 - x x - x x PyTorch-Geometric/1.6.3-fosscuda-2020b - - - - x - PyTorch-Geometric/1.4.2-foss-2019b-Python-3.7.4-PyTorch-1.4.0 - x x - x x PyTorch-Geometric/1.3.2-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PyTorch-Ignite/", "title": "PyTorch-Ignite", "text": ""}, {"location": "available_software/detail/PyTorch-Ignite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch-Ignite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch-Ignite, load one of these modules using a module load command like:

          module load PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/PyTorch-Lightning/", "title": "PyTorch-Lightning", "text": ""}, {"location": "available_software/detail/PyTorch-Lightning/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch-Lightning installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch-Lightning, load one of these modules using a module load command like:

          module load PyTorch-Lightning/2.1.3-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch-Lightning/2.1.3-foss-2023a x x x x x x PyTorch-Lightning/2.1.2-foss-2022b x x x x x x PyTorch-Lightning/1.8.4-foss-2022a-CUDA-11.7.0 x - - - x - PyTorch-Lightning/1.8.4-foss-2022a x x x x x x PyTorch-Lightning/1.7.7-foss-2022a-CUDA-11.7.0 - - x - - - PyTorch-Lightning/1.5.9-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch-Lightning/1.5.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/PyTorch/", "title": "PyTorch", "text": ""}, {"location": "available_software/detail/PyTorch/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyTorch, load one of these modules using a module load command like:

          module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\n
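
          If you need GPU support, pick one of the -CUDA builds from the table below and check that PyTorch actually sees a GPU from within your job. A minimal sketch, not tied to a particular cluster:

          module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1
          python -c "import torch; print(torch.__version__, torch.cuda.is_available())"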

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyTorch/2.1.2-foss-2023a-CUDA-12.1.1 x - x - x - PyTorch/2.1.2-foss-2023a x x x x x x PyTorch/1.13.1-foss-2022b x x x x x x PyTorch/1.13.1-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.1-foss-2022a-CUDA-11.7.0 - - x - x - PyTorch/1.12.1-foss-2022a x x x x - x PyTorch/1.12.1-foss-2021b - x x x x x PyTorch/1.12.0-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.0-foss-2022a x x x x x x PyTorch/1.11.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-fosscuda-2020b x - - - - - PyTorch/1.10.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-foss-2021a x x x x x x PyTorch/1.9.0-fosscuda-2020b x - - - - - PyTorch/1.8.1-fosscuda-2020b x - - - - - PyTorch/1.7.1-fosscuda-2020b x - - - x - PyTorch/1.7.1-foss-2020b - x x x x x PyTorch/1.6.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.4.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.3.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyVCF/", "title": "PyVCF", "text": ""}, {"location": "available_software/detail/PyVCF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyVCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyVCF, load one of these modules using a module load command like:

          module load PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16 - - x - x - PyVCF/0.6.8-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/PyVCF3/", "title": "PyVCF3", "text": ""}, {"location": "available_software/detail/PyVCF3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyVCF3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyVCF3, load one of these modules using a module load command like:

          module load PyVCF3/1.0.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyVCF3/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyWBGT/", "title": "PyWBGT", "text": ""}, {"location": "available_software/detail/PyWBGT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyWBGT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyWBGT, load one of these modules using a module load command like:

          module load PyWBGT/1.0.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyWBGT/1.0.0-foss-2022a x x x x x x PyWBGT/1.0.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyWavelets/", "title": "PyWavelets", "text": ""}, {"location": "available_software/detail/PyWavelets/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyWavelets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyWavelets, load one of these modules using a module load command like:

          module load PyWavelets/1.1.1-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyWavelets/1.1.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyYAML/", "title": "PyYAML", "text": ""}, {"location": "available_software/detail/PyYAML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyYAML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyYAML, load one of these modules using a module load command like:

          module load PyYAML/6.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyYAML/6.0-GCCcore-12.3.0 x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x PyYAML/6.0-GCCcore-11.3.0 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0 x x x x x x PyYAML/5.4.1-GCCcore-10.3.0 x x x x x x PyYAML/5.3.1-GCCcore-10.2.0 x x x x x x PyYAML/5.3-GCCcore-9.3.0 x x x x x x PyYAML/5.1.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/PyZMQ/", "title": "PyZMQ", "text": ""}, {"location": "available_software/detail/PyZMQ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PyZMQ, load one of these modules using a module load command like:

          module load PyZMQ/25.1.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x PyZMQ/24.0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PycURL/", "title": "PycURL", "text": ""}, {"location": "available_software/detail/PycURL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which PycURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using PycURL, load one of these modules using a module load command like:

          module load PycURL/7.45.2-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty PycURL/7.45.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Pychopper/", "title": "Pychopper", "text": ""}, {"location": "available_software/detail/Pychopper/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pychopper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pychopper, load one of these modules using a module load command like:

          module load Pychopper/2.3.1-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pychopper/2.3.1-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Pyomo/", "title": "Pyomo", "text": ""}, {"location": "available_software/detail/Pyomo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pyomo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pyomo, load one of these modules using a module load command like:

          module load Pyomo/6.4.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pyomo/6.4.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/Pysam/", "title": "Pysam", "text": ""}, {"location": "available_software/detail/Pysam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Pysam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Pysam, load one of these modules using a module load command like:

          module load Pysam/0.22.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Pysam/0.22.0-GCC-12.3.0 x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x Pysam/0.19.1-GCC-11.3.0 x x x x x x Pysam/0.18.0-GCC-11.2.0 x x x - x x Pysam/0.17.0-GCC-11.2.0-Python-2.7.18 x x x x x x Pysam/0.17.0-GCC-11.2.0 x x x - x x Pysam/0.16.0.1-iccifort-2020.4.304 - x x x x x Pysam/0.16.0.1-iccifort-2020.1.217 - x x - x x Pysam/0.16.0.1-GCC-10.3.0 x x x x x x Pysam/0.16.0.1-GCC-10.2.0-Python-2.7.18 - x x x x x Pysam/0.16.0.1-GCC-10.2.0 x x x x x x Pysam/0.16.0.1-GCC-9.3.0 - x x - x x Pysam/0.16.0.1-GCC-8.3.0 - x x - x x Pysam/0.15.3-iccifort-2019.5.281 - x x - x x Pysam/0.15.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Python-bundle-PyPI/", "title": "Python-bundle-PyPI", "text": ""}, {"location": "available_software/detail/Python-bundle-PyPI/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Python-bundle-PyPI, load one of these modules using a module load command like:

          module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Python/", "title": "Python", "text": ""}, {"location": "available_software/detail/Python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Python, load one of these modules using a module load command like:

          module load Python/3.11.5-GCCcore-13.2.0\n
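
          If you need Python packages that are not already provided by a module, a common pattern is to load a central Python module and install the extras into your own virtual environment. A minimal sketch; the environment location (here under the usual $VSC_DATA directory) and the package name are only placeholders:

          module load Python/3.11.5-GCCcore-13.2.0
          python -m venv $VSC_DATA/venvs/myenv        # create the environment once
          source $VSC_DATA/venvs/myenv/bin/activate   # activate it in every new shell or job
          pip install --upgrade pip
          pip install some-package                    # replace with the package(s) you need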

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Python/3.11.5-GCCcore-13.2.0 x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x Python/3.10.4-GCCcore-11.3.0-bare x x x x x x Python/3.10.4-GCCcore-11.3.0 x x x x x x Python/3.9.6-GCCcore-11.2.0-bare x x x x x x Python/3.9.6-GCCcore-11.2.0 x x x x x x Python/3.9.5-GCCcore-10.3.0-bare x x x x x x Python/3.9.5-GCCcore-10.3.0 x x x x x x Python/3.8.6-GCCcore-10.2.0 x x x x x x Python/3.8.2-GCCcore-9.3.0 x x x x x x Python/3.7.4-GCCcore-8.3.0 x x x x x x Python/3.7.2-GCCcore-8.2.0 - x - - - - Python/2.7.18-GCCcore-12.3.0 x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.3.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0 x x x x x x Python/2.7.18-GCCcore-10.3.0-bare x x x x x x Python/2.7.18-GCCcore-10.2.0 x x x x x x Python/2.7.18-GCCcore-9.3.0 x x x x x x Python/2.7.16-GCCcore-8.3.0 x x x - x x Python/2.7.15-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/QCA/", "title": "QCA", "text": ""}, {"location": "available_software/detail/QCA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QCA, load one of these modules using a module load command like:

          module load QCA/2.3.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QCA/2.3.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QCxMS/", "title": "QCxMS", "text": ""}, {"location": "available_software/detail/QCxMS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QCxMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QCxMS, load one of these modules using a module load command like:

          module load QCxMS/5.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QCxMS/5.0.3 x x x x x x"}, {"location": "available_software/detail/QD/", "title": "QD", "text": ""}, {"location": "available_software/detail/QD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QD, load one of these modules using a module load command like:

          module load QD/2.3.17-NVHPC-21.2-20160110\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QD/2.3.17-NVHPC-21.2-20160110 x - x - x -"}, {"location": "available_software/detail/QGIS/", "title": "QGIS", "text": ""}, {"location": "available_software/detail/QGIS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QGIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QGIS, load one of these modules using a module load command like:

          module load QGIS/3.28.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QGIS/3.28.1-foss-2021b x x x x x x"}, {"location": "available_software/detail/QIIME2/", "title": "QIIME2", "text": ""}, {"location": "available_software/detail/QIIME2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QIIME2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QIIME2, load one of these modules using a module load command like:

          module load QIIME2/2023.5.1-foss-2022a\n
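
          After loading the module, QIIME 2's own info command is a quick way to confirm the installation and list the available plugins. A minimal sketch:

          module load QIIME2/2023.5.1-foss-2022a
          qiime info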

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QIIME2/2023.5.1-foss-2022a x x x x x x QIIME2/2022.11 x x x x x x QIIME2/2021.8 - - - - - x QIIME2/2020.11 - x x - x x QIIME2/2020.8 - x x - x x QIIME2/2019.7 - - - - - x"}, {"location": "available_software/detail/QScintilla/", "title": "QScintilla", "text": ""}, {"location": "available_software/detail/QScintilla/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QScintilla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QScintilla, load one of these modules using a module load command like:

          module load QScintilla/2.11.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QScintilla/2.11.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QUAST/", "title": "QUAST", "text": ""}, {"location": "available_software/detail/QUAST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QUAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QUAST, load one of these modules using a module load command like:

          module load QUAST/5.2.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QUAST/5.2.0-foss-2022a x x x x x x QUAST/5.0.2-foss-2020b-Python-2.7.18 - x x x x x QUAST/5.0.2-foss-2020b - x x x x x QUAST/5.0.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qhull/", "title": "Qhull", "text": ""}, {"location": "available_software/detail/Qhull/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qhull installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qhull, load one of these modules using a module load command like:

          module load Qhull/2020.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qhull/2020.2-GCCcore-12.3.0 x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x Qhull/2020.2-GCCcore-11.3.0 x x x x x x Qhull/2020.2-GCCcore-11.2.0 x x x x x x Qhull/2020.2-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/Qt5/", "title": "Qt5", "text": ""}, {"location": "available_software/detail/Qt5/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qt5, load one of these modules using a module load command like:

          module load Qt5/5.15.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qt5/5.15.10-GCCcore-12.3.0 x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x Qt5/5.15.5-GCCcore-11.3.0 x x x x x x Qt5/5.15.2-GCCcore-11.2.0 x x x x x x Qt5/5.15.2-GCCcore-10.3.0 x x x x x x Qt5/5.14.2-GCCcore-10.2.0 x x x x x x Qt5/5.14.1-GCCcore-9.3.0 - x x - x x Qt5/5.13.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Qt5Webkit/", "title": "Qt5Webkit", "text": ""}, {"location": "available_software/detail/Qt5Webkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qt5Webkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qt5Webkit, load one of these modules using a module load command like:

          module load Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtKeychain/", "title": "QtKeychain", "text": ""}, {"location": "available_software/detail/QtKeychain/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QtKeychain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QtKeychain, load one of these modules using a module load command like:

          module load QtKeychain/0.13.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QtKeychain/0.13.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtPy/", "title": "QtPy", "text": ""}, {"location": "available_software/detail/QtPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QtPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QtPy, load one of these modules using a module load command like:

          module load QtPy/2.3.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QtPy/2.3.0-GCCcore-11.3.0 x x x x x x QtPy/2.2.1-GCCcore-11.2.0 x x x - x x QtPy/1.9.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Qtconsole/", "title": "Qtconsole", "text": ""}, {"location": "available_software/detail/Qtconsole/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qtconsole installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qtconsole, load one of these modules using a module load command like:

          module load Qtconsole/5.4.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qtconsole/5.4.0-GCCcore-11.3.0 x x x x x x Qtconsole/5.3.2-GCCcore-11.2.0 x x x - x x Qtconsole/5.0.2-foss-2020b - x - - - - Qtconsole/5.0.2-GCCcore-10.2.0 - - x x x x"}, {"location": "available_software/detail/QuPath/", "title": "QuPath", "text": ""}, {"location": "available_software/detail/QuPath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QuPath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QuPath, load one of these modules using a module load command like:

          module load QuPath/0.5.0-GCCcore-12.3.0-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QuPath/0.5.0-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/Qualimap/", "title": "Qualimap", "text": ""}, {"location": "available_software/detail/Qualimap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qualimap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qualimap, load one of these modules using a module load command like:

          module load Qualimap/2.2.1-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qualimap/2.2.1-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/QuantumESPRESSO/", "title": "QuantumESPRESSO", "text": ""}, {"location": "available_software/detail/QuantumESPRESSO/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QuantumESPRESSO, load one of these modules using a module load command like:

          module load QuantumESPRESSO/7.0-intel-2021b\n
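
          Quantum ESPRESSO is an MPI code, so its executables (e.g. pw.x) are normally started through the MPI launcher that comes with the toolchain. A minimal sketch, assuming a hypothetical input file scf.in and that you run inside a job that already has the cores you requested:

          module load QuantumESPRESSO/7.0-intel-2021b
          mpirun pw.x -in scf.in > scf.out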

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QuantumESPRESSO/7.0-intel-2021b x x x - x x QuantumESPRESSO/6.5-intel-2019b - x x - x x"}, {"location": "available_software/detail/QuickFF/", "title": "QuickFF", "text": ""}, {"location": "available_software/detail/QuickFF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which QuickFF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using QuickFF, load one of these modules using a module load command like:

          module load QuickFF/2.2.7-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty QuickFF/2.2.7-intel-2020a-Python-3.8.2 x x x x x x QuickFF/2.2.4-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qwt/", "title": "Qwt", "text": ""}, {"location": "available_software/detail/Qwt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Qwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Qwt, load one of these modules using a module load command like:

          module load Qwt/6.2.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Qwt/6.2.0-GCCcore-11.2.0 x x x x x x Qwt/6.2.0-GCCcore-10.3.0 - x x - x x Qwt/6.1.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/R-INLA/", "title": "R-INLA", "text": ""}, {"location": "available_software/detail/R-INLA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R-INLA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R-INLA, load one of these modules using a module load command like:

          module load R-INLA/24.01.18-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R-INLA/24.01.18-foss-2023a x x x x x x R-INLA/21.05.02-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/R-bundle-Bioconductor/", "title": "R-bundle-Bioconductor", "text": ""}, {"location": "available_software/detail/R-bundle-Bioconductor/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R-bundle-Bioconductor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

          module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n
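
          The bundle name encodes the R version and toolchain it was built for (here R 4.3.2 on foss-2023a), so stick to that combination. With these installations, loading the bundle normally pulls in the matching R module as a dependency; a minimal sketch that then lists a few of the bundled libraries:

          module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2
          module list                                             # the matching R module should now be loaded too
          Rscript -e 'head(rownames(installed.packages()), 20)'   # sample of the libraries the bundle provides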

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x R-bundle-Bioconductor/3.15-foss-2022a-R-4.2.1 x x x x x x R-bundle-Bioconductor/3.15-foss-2021b-R-4.2.0 x x x x x x R-bundle-Bioconductor/3.14-foss-2021b-R-4.1.2 x x x x x x R-bundle-Bioconductor/3.13-foss-2021a-R-4.1.0 - x x - x x R-bundle-Bioconductor/3.12-foss-2020b-R-4.0.3 x x x x x x R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0 - x x - x x R-bundle-Bioconductor/3.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/R-bundle-CRAN/", "title": "R-bundle-CRAN", "text": ""}, {"location": "available_software/detail/R-bundle-CRAN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R-bundle-CRAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R-bundle-CRAN, load one of these modules using a module load command like:

          module load R-bundle-CRAN/2023.12-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R-bundle-CRAN/2023.12-foss-2023a x x x x x x"}, {"location": "available_software/detail/R/", "title": "R", "text": ""}, {"location": "available_software/detail/R/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R, load one of these modules using a module load command like:

          module load R/4.3.2-gfbf-2023a\n
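
          For non-interactive use, e.g. inside a job script, load an R module and run your script with Rscript. A minimal sketch; my_analysis.R is a placeholder for your own script:

          module load R/4.3.2-gfbf-2023a
          Rscript --vanilla my_analysis.R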

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R/4.3.2-gfbf-2023a x x x x x x R/4.2.2-foss-2022b x x x x x x R/4.2.1-foss-2022a x x x x x x R/4.2.0-foss-2021b x x x x x x R/4.1.2-foss-2021b x x x x x x R/4.1.0-foss-2021a x x x x x x R/4.0.5-fosscuda-2020b - - - - x - R/4.0.5-foss-2020b - x x x x x R/4.0.4-fosscuda-2020b - - - - x - R/4.0.4-foss-2020b - x x x x x R/4.0.3-fosscuda-2020b - - - - x - R/4.0.3-foss-2020b x x x x x x R/4.0.0-foss-2020a - x x - x x R/3.6.3-foss-2020a - - x - x x R/3.6.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/R2jags/", "title": "R2jags", "text": ""}, {"location": "available_software/detail/R2jags/#available-modules", "title": "Available modules", "text": "

          The overview below shows which R2jags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using R2jags, load one of these modules using a module load command like:

          module load R2jags/0.7-1-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty R2jags/0.7-1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/RASPA2/", "title": "RASPA2", "text": ""}, {"location": "available_software/detail/RASPA2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RASPA2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RASPA2, load one of these modules using a module load command like:

          module load RASPA2/2.0.41-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RASPA2/2.0.41-foss-2020b - x x x x x"}, {"location": "available_software/detail/RAxML-NG/", "title": "RAxML-NG", "text": ""}, {"location": "available_software/detail/RAxML-NG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RAxML-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RAxML-NG, load one of these modules using a module load command like:

          module load RAxML-NG/1.2.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RAxML-NG/1.2.0-GCC-12.3.0 x x x x x x RAxML-NG/1.0.3-GCC-10.2.0 - x x - x - RAxML-NG/0.9.0-gompi-2019b - x x - x x RAxML-NG/0.9.0-GCC-8.3.0 - - x - x -"}, {"location": "available_software/detail/RAxML/", "title": "RAxML", "text": ""}, {"location": "available_software/detail/RAxML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RAxML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RAxML, load one of these modules using a module load command like:

          module load RAxML/8.2.12-iimpi-2021b-hybrid-avx2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RAxML/8.2.12-iimpi-2021b-hybrid-avx2 x x x - x x RAxML/8.2.12-iimpi-2019b-hybrid-avx2 - x x - x x"}, {"location": "available_software/detail/RDFlib/", "title": "RDFlib", "text": ""}, {"location": "available_software/detail/RDFlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RDFlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RDFlib, load one of these modules using a module load command like:

          module load RDFlib/6.2.0-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RDFlib/6.2.0-GCCcore-10.3.0 x x x - x x RDFlib/5.0.0-GCCcore-10.2.0 - x x - x x RDFlib/4.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RDKit/", "title": "RDKit", "text": ""}, {"location": "available_software/detail/RDKit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RDKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RDKit, load one of these modules using a module load command like:

          module load RDKit/2022.09.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RDKit/2022.09.4-foss-2022a x x x x x x RDKit/2022.03.5-foss-2021b x x x - x x RDKit/2020.09.3-foss-2019b-Python-3.7.4 - x x - x x RDKit/2020.03.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/RDP-Classifier/", "title": "RDP-Classifier", "text": ""}, {"location": "available_software/detail/RDP-Classifier/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RDP-Classifier installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RDP-Classifier, load one of these modules using a module load command like:

          module load RDP-Classifier/2.13-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RDP-Classifier/2.13-Java-11 x x x - x x RDP-Classifier/2.12-Java-1.8 - - - - - x"}, {"location": "available_software/detail/RE2/", "title": "RE2", "text": ""}, {"location": "available_software/detail/RE2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RE2, load one of these modules using a module load command like:

          module load RE2/2023-08-01-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RE2/2023-08-01-GCCcore-12.3.0 x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x RE2/2022-06-01-GCCcore-11.3.0 x x x x x x RE2/2022-02-01-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/RLCard/", "title": "RLCard", "text": ""}, {"location": "available_software/detail/RLCard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RLCard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RLCard, load one of these modules using a module load command like:

          module load RLCard/1.0.9-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RLCard/1.0.9-foss-2022a x x x - x x"}, {"location": "available_software/detail/RMBlast/", "title": "RMBlast", "text": ""}, {"location": "available_software/detail/RMBlast/#available-modules", "title": "Available modules", "text": "

          The overview below shows which RMBlast installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using RMBlast, load one of these modules using a module load command like:

          module load RMBlast/2.11.0-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RMBlast/2.11.0-gompi-2020b x x x x x x"}, {"location": "available_software/detail/RNA-Bloom/", "title": "RNA-Bloom", "text": ""}, {"location": "available_software/detail/RNA-Bloom/#available-modules", "title": "Available modules", "text": "

The overview below shows which RNA-Bloom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RNA-Bloom, load one of these modules using a module load command like:

          module load RNA-Bloom/2.0.1-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RNA-Bloom/2.0.1-GCC-12.3.0 x x x x x x RNA-Bloom/1.2.3-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ROOT/", "title": "ROOT", "text": ""}, {"location": "available_software/detail/ROOT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ROOT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ROOT, load one of these modules using a module load command like:

          module load ROOT/6.26.06-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ROOT/6.26.06-foss-2022a x x x x x x ROOT/6.24.06-foss-2021b x x x x x x ROOT/6.20.04-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/RSEM/", "title": "RSEM", "text": ""}, {"location": "available_software/detail/RSEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which RSEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RSEM, load one of these modules using a module load command like:

          module load RSEM/1.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RSEM/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/RSeQC/", "title": "RSeQC", "text": ""}, {"location": "available_software/detail/RSeQC/#available-modules", "title": "Available modules", "text": "

The overview below shows which RSeQC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RSeQC, load one of these modules using a module load command like:

          module load RSeQC/4.0.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RSeQC/4.0.0-foss-2021b x x x - x x RSeQC/4.0.0-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/RStudio-Server/", "title": "RStudio-Server", "text": ""}, {"location": "available_software/detail/RStudio-Server/#available-modules", "title": "Available modules", "text": "

The overview below shows which RStudio-Server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RStudio-Server, load one of these modules using a module load command like:

          module load RStudio-Server/2022.02.0-443-rhel-x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RStudio-Server/2022.02.0-443-rhel-x86_64 x x x x x - RStudio-Server/1.3.959-foss-2020a-Java-11-R-4.0.0 - - - - - x"}, {"location": "available_software/detail/RTG-Tools/", "title": "RTG-Tools", "text": ""}, {"location": "available_software/detail/RTG-Tools/#available-modules", "title": "Available modules", "text": "

The overview below shows which RTG-Tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RTG-Tools, load one of these modules using a module load command like:

          module load RTG-Tools/3.12.1-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RTG-Tools/3.12.1-Java-11 x x x x x x"}, {"location": "available_software/detail/Racon/", "title": "Racon", "text": ""}, {"location": "available_software/detail/Racon/#available-modules", "title": "Available modules", "text": "

The overview below shows which Racon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Racon, load one of these modules using a module load command like:

          module load Racon/1.5.0-GCCcore-12.3.0\n
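A hedged sketch of a typical polishing run: Racon takes the reads, their overlaps/alignments against a draft assembly, and the draft itself, and writes the polished sequences to standard output. All file names below are placeholders, and the overlaps file is assumed to have been produced beforehand with a mapper of your choice.

module load Racon/1.5.0-GCCcore-12.3.0
# Polish a draft assembly (reads.fastq, overlaps.paf and draft_assembly.fasta are hypothetical inputs)
racon -t 8 reads.fastq overlaps.paf draft_assembly.fasta > polished_assembly.fasta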

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Racon/1.5.0-GCCcore-12.3.0 x x x x x x Racon/1.5.0-GCCcore-11.3.0 x x x x x x Racon/1.5.0-GCCcore-11.2.0 x x x - x x Racon/1.4.21-GCCcore-10.3.0 x x x - x x Racon/1.4.21-GCCcore-10.2.0 - x x x x x Racon/1.4.13-GCCcore-9.3.0 - x x - x x Racon/1.4.13-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RagTag/", "title": "RagTag", "text": ""}, {"location": "available_software/detail/RagTag/#available-modules", "title": "Available modules", "text": "

The overview below shows which RagTag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RagTag, load one of these modules using a module load command like:

          module load RagTag/2.0.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RagTag/2.0.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/Ragout/", "title": "Ragout", "text": ""}, {"location": "available_software/detail/Ragout/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ragout installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Ragout, load one of these modules using a module load command like:

          module load Ragout/2.3-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ragout/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/RapidJSON/", "title": "RapidJSON", "text": ""}, {"location": "available_software/detail/RapidJSON/#available-modules", "title": "Available modules", "text": "

The overview below shows which RapidJSON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RapidJSON, load one of these modules using a module load command like:

          module load RapidJSON/1.1.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.3.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-9.3.0 x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Raven/", "title": "Raven", "text": ""}, {"location": "available_software/detail/Raven/#available-modules", "title": "Available modules", "text": "

The overview below shows which Raven installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Raven, load one of these modules using a module load command like:

          module load Raven/1.8.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Raven/1.8.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/Ray-project/", "title": "Ray-project", "text": ""}, {"location": "available_software/detail/Ray-project/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ray-project installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Ray-project, load one of these modules using a module load command like:

          module load Ray-project/1.13.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ray-project/1.13.0-foss-2021b x x x - x x Ray-project/1.13.0-foss-2021a x x x - x x Ray-project/0.8.4-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Ray/", "title": "Ray", "text": ""}, {"location": "available_software/detail/Ray/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Ray, load one of these modules using a module load command like:

          module load Ray/0.8.4-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ray/0.8.4-foss-2019b-Python-3.7.4 - x - - - -"}, {"location": "available_software/detail/ReFrame/", "title": "ReFrame", "text": ""}, {"location": "available_software/detail/ReFrame/#available-modules", "title": "Available modules", "text": "

The overview below shows which ReFrame installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ReFrame, load one of these modules using a module load command like:

          module load ReFrame/4.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ReFrame/4.2.0 x x x x x x ReFrame/3.11.2 - x x x x x ReFrame/3.11.1 - x x - x x ReFrame/3.9.1 - x x - x x ReFrame/3.5.2 - x x - x x"}, {"location": "available_software/detail/Redis/", "title": "Redis", "text": ""}, {"location": "available_software/detail/Redis/#available-modules", "title": "Available modules", "text": "

The overview below shows which Redis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Redis, load one of these modules using a module load command like:

          module load Redis/7.0.8-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Redis/7.0.8-GCC-11.3.0 x x x x x x Redis/6.2.6-GCC-11.2.0 x x x - x x Redis/6.2.6-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/RegTools/", "title": "RegTools", "text": ""}, {"location": "available_software/detail/RegTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which RegTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RegTools, load one of these modules using a module load command like:

          module load RegTools/1.0.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RegTools/1.0.0-foss-2022b x x x x x x RegTools/0.5.2-foss-2021b x x x x x x RegTools/0.5.2-foss-2020b - x x x x x RegTools/0.4.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/RepeatMasker/", "title": "RepeatMasker", "text": ""}, {"location": "available_software/detail/RepeatMasker/#available-modules", "title": "Available modules", "text": "

The overview below shows which RepeatMasker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RepeatMasker, load one of these modules using a module load command like:

          module load RepeatMasker/4.1.2-p1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RepeatMasker/4.1.2-p1-foss-2020b x x x x x x"}, {"location": "available_software/detail/ResistanceGA/", "title": "ResistanceGA", "text": ""}, {"location": "available_software/detail/ResistanceGA/#available-modules", "title": "Available modules", "text": "

The overview below shows which ResistanceGA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ResistanceGA, load one of these modules using a module load command like:

          module load ResistanceGA/4.2-5-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ResistanceGA/4.2-5-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/RevBayes/", "title": "RevBayes", "text": ""}, {"location": "available_software/detail/RevBayes/#available-modules", "title": "Available modules", "text": "

The overview below shows which RevBayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RevBayes, load one of these modules using a module load command like:

          module load RevBayes/1.2.1-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RevBayes/1.2.1-gompi-2022a x x x x x x RevBayes/1.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Rgurobi/", "title": "Rgurobi", "text": ""}, {"location": "available_software/detail/Rgurobi/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rgurobi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Rgurobi, load one of these modules using a module load command like:

          module load Rgurobi/9.5.0-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Rgurobi/9.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/RheoTool/", "title": "RheoTool", "text": ""}, {"location": "available_software/detail/RheoTool/#available-modules", "title": "Available modules", "text": "

The overview below shows which RheoTool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RheoTool, load one of these modules using a module load command like:

          module load RheoTool/5.0-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RheoTool/5.0-foss-2019b x x x - x x"}, {"location": "available_software/detail/Rmath/", "title": "Rmath", "text": ""}, {"location": "available_software/detail/Rmath/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rmath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Rmath, load one of these modules using a module load command like:

          module load Rmath/4.3.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Rmath/4.3.2-foss-2023a x x x x x x Rmath/4.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/RnBeads/", "title": "RnBeads", "text": ""}, {"location": "available_software/detail/RnBeads/#available-modules", "title": "Available modules", "text": "

The overview below shows which RnBeads installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using RnBeads, load one of these modules using a module load command like:

          module load RnBeads/2.6.0-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty RnBeads/2.6.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/Roary/", "title": "Roary", "text": ""}, {"location": "available_software/detail/Roary/#available-modules", "title": "Available modules", "text": "

The overview below shows which Roary installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Roary, load one of these modules using a module load command like:

          module load Roary/3.13.0-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Roary/3.13.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/Ruby/", "title": "Ruby", "text": ""}, {"location": "available_software/detail/Ruby/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ruby installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Ruby, load one of these modules using a module load command like:

          module load Ruby/3.0.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Ruby/3.0.1-GCCcore-11.2.0 x x x x x x Ruby/3.0.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Rust/", "title": "Rust", "text": ""}, {"location": "available_software/detail/Rust/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rust installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Rust, load one of these modules using a module load command like:

          module load Rust/1.75.0-GCCcore-12.3.0\n
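To verify that the toolchain works after loading the module, you can compile a small throwaway program with rustc. This is only a sketch; the file name is arbitrary.

module load Rust/1.75.0-GCCcore-12.3.0
# Write a minimal Rust program to a temporary source file and compile it with rustc
cat > hello.rs << 'EOF'
fn main() {
    println!("Rust toolchain is working");
}
EOF
rustc hello.rs -o hello
./hello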

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Rust/1.75.0-GCCcore-12.3.0 x x x x x x Rust/1.75.0-GCCcore-12.2.0 x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x Rust/1.65.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-10.3.0 x x x - x x Rust/1.56.0-GCCcore-11.2.0 x x x - x x Rust/1.54.0-GCCcore-11.2.0 x x x x x x Rust/1.52.1-GCCcore-10.3.0 x x x x x x Rust/1.52.1-GCCcore-10.2.0 - - x - x - Rust/1.42.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SAMtools/", "title": "SAMtools", "text": ""}, {"location": "available_software/detail/SAMtools/#available-modules", "title": "Available modules", "text": "

The overview below shows which SAMtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SAMtools, load one of these modules using a module load command like:

          module load SAMtools/1.18-GCC-12.3.0\n
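A minimal sketch of a common SAMtools workflow, assuming you already have an alignment file; sample.bam is a hypothetical input and the thread count should match your job request.

module load SAMtools/1.18-GCC-12.3.0
# Sort a BAM file by coordinate and index the result (sample.bam is a placeholder for your own alignments)
samtools sort -@ 4 -o sample.sorted.bam sample.bam
samtools index sample.sorted.bam
# Print basic alignment statistics
samtools flagstat sample.sorted.bam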

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SAMtools/1.18-GCC-12.3.0 x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x SAMtools/1.16.1-GCC-11.3.0 x x x x x x SAMtools/1.15-GCC-11.2.0 x x x - x x SAMtools/1.14-GCC-11.2.0 x x x x x x SAMtools/1.13-GCC-11.3.0 x x x x x x SAMtools/1.13-GCC-10.3.0 x x x - x x SAMtools/1.11-GCC-10.2.0 x x x x x x SAMtools/1.10-iccifort-2019.5.281 - x x - x x SAMtools/1.10-GCC-9.3.0 - x x - x x SAMtools/1.10-GCC-8.3.0 - x x - x x SAMtools/0.1.20-intel-2019b - x x - x x SAMtools/0.1.20-GCC-12.3.0 x x x x x x SAMtools/0.1.20-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SBCL/", "title": "SBCL", "text": ""}, {"location": "available_software/detail/SBCL/#available-modules", "title": "Available modules", "text": "

The overview below shows which SBCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SBCL, load one of these modules using a module load command like:

          module load SBCL/2.2.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SBCL/2.2.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/SCENIC/", "title": "SCENIC", "text": ""}, {"location": "available_software/detail/SCENIC/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCENIC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SCENIC, load one of these modules using a module load command like:

          module load SCENIC/1.2.4-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCENIC/1.2.4-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/SCGid/", "title": "SCGid", "text": ""}, {"location": "available_software/detail/SCGid/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCGid installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SCGid, load one of these modules using a module load command like:

          module load SCGid/0.9b0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCGid/0.9b0-foss-2021b x x x - x x"}, {"location": "available_software/detail/SCOTCH/", "title": "SCOTCH", "text": ""}, {"location": "available_software/detail/SCOTCH/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SCOTCH, load one of these modules using a module load command like:

          module load SCOTCH/7.0.3-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCOTCH/7.0.3-gompi-2023a x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x SCOTCH/7.0.1-gompi-2022a x x x x x x SCOTCH/6.1.2-iimpi-2021b x x x x x x SCOTCH/6.1.2-gompi-2021b x x x x x x SCOTCH/6.1.0-iimpi-2021a - x x - x x SCOTCH/6.1.0-iimpi-2020b - x - - - - SCOTCH/6.1.0-gompi-2021a x x x x x x SCOTCH/6.1.0-gompi-2020b x x x x x x SCOTCH/6.0.9-iimpi-2020a - x x - x x SCOTCH/6.0.9-iimpi-2019b - x x - x x SCOTCH/6.0.9-gompi-2020a - x x - x x SCOTCH/6.0.9-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SCons/", "title": "SCons", "text": ""}, {"location": "available_software/detail/SCons/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCons installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SCons, load one of these modules using a module load command like:

          module load SCons/4.5.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCons/4.5.2-GCCcore-12.3.0 x x x x x x SCons/4.4.0-GCCcore-11.3.0 - - x - x - SCons/4.2.0-GCCcore-11.2.0 x x x - x x SCons/4.1.0.post1-GCCcore-10.3.0 - x x - x x SCons/4.1.0.post1-GCCcore-10.2.0 - x x - x x SCons/3.1.2-GCCcore-9.3.0 - x x - x x SCons/3.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SCopeLoomR/", "title": "SCopeLoomR", "text": ""}, {"location": "available_software/detail/SCopeLoomR/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCopeLoomR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SCopeLoomR, load one of these modules using a module load command like:

          module load SCopeLoomR/0.13.0-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SCopeLoomR/0.13.0-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/SDL2/", "title": "SDL2", "text": ""}, {"location": "available_software/detail/SDL2/#available-modules", "title": "Available modules", "text": "

The overview below shows which SDL2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SDL2, load one of these modules using a module load command like:

          module load SDL2/2.28.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SDL2/2.28.2-GCCcore-12.3.0 x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x SDL2/2.0.20-GCCcore-11.2.0 x x x x x x SDL2/2.0.14-GCCcore-10.3.0 - x x - x x SDL2/2.0.14-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SDSL/", "title": "SDSL", "text": ""}, {"location": "available_software/detail/SDSL/#available-modules", "title": "Available modules", "text": "

The overview below shows which SDSL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SDSL, load one of these modules using a module load command like:

          module load SDSL/2.1.1-20191211-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SDSL/2.1.1-20191211-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SEACells/", "title": "SEACells", "text": ""}, {"location": "available_software/detail/SEACells/#available-modules", "title": "Available modules", "text": "

The overview below shows which SEACells installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SEACells, load one of these modules using a module load command like:

          module load SEACells/20230731-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SEACells/20230731-foss-2021a x x x x x x"}, {"location": "available_software/detail/SECAPR/", "title": "SECAPR", "text": ""}, {"location": "available_software/detail/SECAPR/#available-modules", "title": "Available modules", "text": "

The overview below shows which SECAPR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SECAPR, load one of these modules using a module load command like:

          module load SECAPR/1.1.15-foss-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SECAPR/1.1.15-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/SELFIES/", "title": "SELFIES", "text": ""}, {"location": "available_software/detail/SELFIES/#available-modules", "title": "Available modules", "text": "

The overview below shows which SELFIES installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SELFIES, load one of these modules using a module load command like:

          module load SELFIES/2.1.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SELFIES/2.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/SEPP/", "title": "SEPP", "text": ""}, {"location": "available_software/detail/SEPP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SEPP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SEPP, load one of these modules using a module load command like:

          module load SEPP/4.5.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SEPP/4.5.1-foss-2022a x x x x x x SEPP/4.5.1-foss-2021b x x x - x x SEPP/4.4.0-foss-2020b - x x x x x SEPP/4.3.10-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SHAP/", "title": "SHAP", "text": ""}, {"location": "available_software/detail/SHAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SHAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SHAP, load one of these modules using a module load command like:

          module load SHAP/0.42.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SHAP/0.42.1-foss-2019b-Python-3.7.4 x x x - x x SHAP/0.41.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SISSO%2B%2B/", "title": "SISSO++", "text": ""}, {"location": "available_software/detail/SISSO%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which SISSO++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SISSO++, load one of these modules using a module load command like:

          module load SISSO++/1.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SISSO++/1.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/SISSO/", "title": "SISSO", "text": ""}, {"location": "available_software/detail/SISSO/#available-modules", "title": "Available modules", "text": "

The overview below shows which SISSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SISSO, load one of these modules using a module load command like:

          module load SISSO/3.1-20220324-iimpi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SISSO/3.1-20220324-iimpi-2021b x x x - x x SISSO/3.0.2-iimpi-2021b x x x - x x"}, {"location": "available_software/detail/SKESA/", "title": "SKESA", "text": ""}, {"location": "available_software/detail/SKESA/#available-modules", "title": "Available modules", "text": "

The overview below shows which SKESA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SKESA, load one of these modules using a module load command like:

          module load SKESA/2.4.0-gompi-2021b_saute.1.3.0_1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SKESA/2.4.0-gompi-2021b_saute.1.3.0_1 x x x - x x"}, {"location": "available_software/detail/SLATEC/", "title": "SLATEC", "text": ""}, {"location": "available_software/detail/SLATEC/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLATEC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SLATEC, load one of these modules using a module load command like:

          module load SLATEC/4.1-GCC-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SLATEC/4.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SLEPc/", "title": "SLEPc", "text": ""}, {"location": "available_software/detail/SLEPc/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLEPc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SLEPc, load one of these modules using a module load command like:

          module load SLEPc/3.18.2-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SLEPc/3.18.2-intel-2021b x x x x x x SLEPc/3.17.2-foss-2022a x x x x x x SLEPc/3.15.1-foss-2021a - x x - x x SLEPc/3.12.2-intel-2019b-Python-3.7.4 - - x - x - SLEPc/3.12.2-intel-2019b-Python-2.7.16 - x x - x x SLEPc/3.12.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SLiM/", "title": "SLiM", "text": ""}, {"location": "available_software/detail/SLiM/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLiM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SLiM, load one of these modules using a module load command like:

          module load SLiM/3.4-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SLiM/3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/SMAP/", "title": "SMAP", "text": ""}, {"location": "available_software/detail/SMAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SMAP, load one of these modules using a module load command like:

          module load SMAP/4.6.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SMAP/4.6.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/SMC%2B%2B/", "title": "SMC++", "text": ""}, {"location": "available_software/detail/SMC%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which SMC++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SMC++, load one of these modules using a module load command like:

          module load SMC++/1.15.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SMC++/1.15.4-foss-2022a x x x - x x"}, {"location": "available_software/detail/SMV/", "title": "SMV", "text": ""}, {"location": "available_software/detail/SMV/#available-modules", "title": "Available modules", "text": "

The overview below shows which SMV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SMV, load one of these modules using a module load command like:

          module load SMV/6.7.17-iccifort-2020.4.304\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SMV/6.7.17-iccifort-2020.4.304 - x x - x x"}, {"location": "available_software/detail/SNAP-ESA-python/", "title": "SNAP-ESA-python", "text": ""}, {"location": "available_software/detail/SNAP-ESA-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which SNAP-ESA-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SNAP-ESA-python, load one of these modules using a module load command like:

          module load SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18 x x x x x - SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-1.8-Python-2.7.18 x x x x - x"}, {"location": "available_software/detail/SNAP-ESA/", "title": "SNAP-ESA", "text": ""}, {"location": "available_software/detail/SNAP-ESA/#available-modules", "title": "Available modules", "text": "

The overview below shows which SNAP-ESA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SNAP-ESA, load one of these modules using a module load command like:

          module load SNAP-ESA/9.0.0-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SNAP-ESA/9.0.0-Java-11 x x x x x x SNAP-ESA/9.0.0-Java-1.8 x x x x - x"}, {"location": "available_software/detail/SNAP/", "title": "SNAP", "text": ""}, {"location": "available_software/detail/SNAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SNAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SNAP, load one of these modules using a module load command like:

          module load SNAP/2.0.1-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SNAP/2.0.1-GCC-12.2.0 x x x x x x SNAP/2.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/SOAPdenovo-Trans/", "title": "SOAPdenovo-Trans", "text": ""}, {"location": "available_software/detail/SOAPdenovo-Trans/#available-modules", "title": "Available modules", "text": "

The overview below shows which SOAPdenovo-Trans installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SOAPdenovo-Trans, load one of these modules using a module load command like:

          module load SOAPdenovo-Trans/1.0.5-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SOAPdenovo-Trans/1.0.5-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/SPAdes/", "title": "SPAdes", "text": ""}, {"location": "available_software/detail/SPAdes/#available-modules", "title": "Available modules", "text": "

The overview below shows which SPAdes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SPAdes, load one of these modules using a module load command like:

          module load SPAdes/3.15.5-GCC-11.3.0\n
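A sketch of a basic paired-end assembly run; the read files and output directory below are placeholders, and the thread/memory settings should match what you request for your job.

module load SPAdes/3.15.5-GCC-11.3.0
# Assemble paired-end reads (reads_1.fastq.gz and reads_2.fastq.gz are hypothetical inputs)
spades.py -1 reads_1.fastq.gz -2 reads_2.fastq.gz -t 8 -m 32 -o spades_output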

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SPAdes/3.15.5-GCC-11.3.0 x x x x x x SPAdes/3.15.4-GCC-12.3.0 x x x x x x SPAdes/3.15.4-GCC-12.2.0 x x x x x x SPAdes/3.15.3-GCC-11.2.0 x x x - x x SPAdes/3.15.2-GCC-10.2.0-Python-2.7.18 - x x x x x SPAdes/3.15.2-GCC-10.2.0 - x x x x x SPAdes/3.14.1-GCC-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SPM/", "title": "SPM", "text": ""}, {"location": "available_software/detail/SPM/#available-modules", "title": "Available modules", "text": "

The overview below shows which SPM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SPM, load one of these modules using a module load command like:

          module load SPM/12.5_r7771-MATLAB-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SPM/12.5_r7771-MATLAB-2021b x x x - x x"}, {"location": "available_software/detail/SPOTPY/", "title": "SPOTPY", "text": ""}, {"location": "available_software/detail/SPOTPY/#available-modules", "title": "Available modules", "text": "

The overview below shows which SPOTPY installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SPOTPY, load one of these modules using a module load command like:

          module load SPOTPY/1.5.14-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SPOTPY/1.5.14-intel-2021b x x x - x x"}, {"location": "available_software/detail/SQLite/", "title": "SQLite", "text": ""}, {"location": "available_software/detail/SQLite/#available-modules", "title": "Available modules", "text": "

The overview below shows which SQLite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SQLite, load one of these modules using a module load command like:

          module load SQLite/3.43.1-GCCcore-13.2.0\n
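Once the module is loaded, the sqlite3 command-line shell is available. Below is a small self-contained sketch using a throwaway database file.

module load SQLite/3.43.1-GCCcore-13.2.0
# Create a small database, insert a row and query it back (results.db is a throwaway file)
sqlite3 results.db "CREATE TABLE IF NOT EXISTS runs(id INTEGER PRIMARY KEY, label TEXT);"
sqlite3 results.db "INSERT INTO runs(label) VALUES ('test-run');"
sqlite3 results.db "SELECT id, label FROM runs;"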

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SQLite/3.43.1-GCCcore-13.2.0 x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x SQLite/3.38.3-GCCcore-11.3.0 x x x x x x SQLite/3.36-GCCcore-11.2.0 x x x x x x SQLite/3.35.4-GCCcore-10.3.0 x x x x x x SQLite/3.33.0-GCCcore-10.2.0 x x x x x x SQLite/3.31.1-GCCcore-9.3.0 x x x x x x SQLite/3.29.0-GCCcore-8.3.0 x x x x x x SQLite/3.27.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/SRA-Toolkit/", "title": "SRA-Toolkit", "text": ""}, {"location": "available_software/detail/SRA-Toolkit/#available-modules", "title": "Available modules", "text": "

The overview below shows which SRA-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SRA-Toolkit, load one of these modules using a module load command like:

          module load SRA-Toolkit/3.0.3-gompi-2022a\n
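A hedged sketch of downloading a public run and converting it to FASTQ; the accession is only an example, and this requires outbound network access from the node you run it on.

module load SRA-Toolkit/3.0.3-gompi-2022a
# Download an example run and convert it to FASTQ (SRR000001 is a placeholder accession)
prefetch SRR000001
fasterq-dump SRR000001 --outdir fastq --threads 4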

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SRA-Toolkit/3.0.3-gompi-2022a x x x x x x SRA-Toolkit/3.0.0-gompi-2021b x x x x x x SRA-Toolkit/3.0.0-centos_linux64 x x x - x x SRA-Toolkit/2.10.9-gompi-2020b - x x - x x SRA-Toolkit/2.10.8-gompi-2020a - x x - x x SRA-Toolkit/2.10.4-gompi-2019b - x x - x x SRA-Toolkit/2.9.6-1-centos_linux64 - x x - x x"}, {"location": "available_software/detail/SRPRISM/", "title": "SRPRISM", "text": ""}, {"location": "available_software/detail/SRPRISM/#available-modules", "title": "Available modules", "text": "

The overview below shows which SRPRISM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SRPRISM, load one of these modules using a module load command like:

          module load SRPRISM/3.1.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SRPRISM/3.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SRST2/", "title": "SRST2", "text": ""}, {"location": "available_software/detail/SRST2/#available-modules", "title": "Available modules", "text": "

The overview below shows which SRST2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SRST2, load one of these modules using a module load command like:

          module load SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SSPACE_Basic/", "title": "SSPACE_Basic", "text": ""}, {"location": "available_software/detail/SSPACE_Basic/#available-modules", "title": "Available modules", "text": "

The overview below shows which SSPACE_Basic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SSPACE_Basic, load one of these modules using a module load command like:

          module load SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18 - x x - x -"}, {"location": "available_software/detail/SSW/", "title": "SSW", "text": ""}, {"location": "available_software/detail/SSW/#available-modules", "title": "Available modules", "text": "

The overview below shows which SSW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SSW, load one of these modules using a module load command like:

          module load SSW/1.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SSW/1.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/STACEY/", "title": "STACEY", "text": ""}, {"location": "available_software/detail/STACEY/#available-modules", "title": "Available modules", "text": "

The overview below shows which STACEY installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using STACEY, load one of these modules using a module load command like:

          module load STACEY/1.2.5-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STACEY/1.2.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/STAR/", "title": "STAR", "text": ""}, {"location": "available_software/detail/STAR/#available-modules", "title": "Available modules", "text": "

The overview below shows which STAR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using STAR, load one of these modules using a module load command like:

          module load STAR/2.7.11a-GCC-12.3.0\n
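A sketch of a paired-end alignment, assuming a STAR genome index has already been generated; all paths below are placeholders.

module load STAR/2.7.11a-GCC-12.3.0
# Align gzipped paired-end reads against a pre-built genome index (star_index and the read files are hypothetical)
STAR --runThreadN 8 \
     --genomeDir star_index \
     --readFilesIn reads_1.fastq.gz reads_2.fastq.gz \
     --readFilesCommand zcat \
     --outSAMtype BAM SortedByCoordinate \
     --outFileNamePrefix sample_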

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STAR/2.7.11a-GCC-12.3.0 x x x x x x STAR/2.7.10b-GCC-11.3.0 x x x x x x STAR/2.7.9a-GCC-11.2.0 x x x x x x STAR/2.7.6a-GCC-10.2.0 - x x x x x STAR/2.7.4a-GCC-9.3.0 - x x - x - STAR/2.7.3a-GCC-8.3.0 - x x - x - STAR/2.7.2b-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/STREAM/", "title": "STREAM", "text": ""}, {"location": "available_software/detail/STREAM/#available-modules", "title": "Available modules", "text": "

The overview below shows which STREAM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using STREAM, load one of these modules using a module load command like:

          module load STREAM/5.10-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STREAM/5.10-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/STRique/", "title": "STRique", "text": ""}, {"location": "available_software/detail/STRique/#available-modules", "title": "Available modules", "text": "

The overview below shows which STRique installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using STRique, load one of these modules using a module load command like:

          module load STRique/0.4.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty STRique/0.4.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/SUNDIALS/", "title": "SUNDIALS", "text": ""}, {"location": "available_software/detail/SUNDIALS/#available-modules", "title": "Available modules", "text": "

The overview below shows which SUNDIALS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SUNDIALS, load one of these modules using a module load command like:

          module load SUNDIALS/6.6.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SUNDIALS/6.6.0-foss-2023a x x x x x x SUNDIALS/6.2.0-intel-2021b x x x - x x SUNDIALS/5.7.0-intel-2020b - x x x x x SUNDIALS/5.7.0-fosscuda-2020b - - - - x - SUNDIALS/5.7.0-foss-2020b - x x x x x SUNDIALS/5.1.0-intel-2019b - x x - x x SUNDIALS/5.1.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/SUPPA/", "title": "SUPPA", "text": ""}, {"location": "available_software/detail/SUPPA/#available-modules", "title": "Available modules", "text": "

The overview below shows which SUPPA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SUPPA, load one of these modules using a module load command like:

          module load SUPPA/2.3-20231005-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SUPPA/2.3-20231005-foss-2022b x x x x x x"}, {"location": "available_software/detail/SVIM/", "title": "SVIM", "text": ""}, {"location": "available_software/detail/SVIM/#available-modules", "title": "Available modules", "text": "

The overview below shows which SVIM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SVIM, load one of these modules using a module load command like:

          module load SVIM/2.0.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SVIM/2.0.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SWIG/", "title": "SWIG", "text": ""}, {"location": "available_software/detail/SWIG/#available-modules", "title": "Available modules", "text": "

The overview below shows which SWIG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using SWIG, load one of these modules using a module load command like:

          module load SWIG/4.1.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SWIG/4.1.1-GCCcore-12.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.2.0 x x x x x x SWIG/4.0.2-GCCcore-10.3.0 x x x x x x SWIG/4.0.2-GCCcore-10.2.0 x x x x x x SWIG/4.0.1-GCCcore-9.3.0 x x x x x x SWIG/4.0.1-GCCcore-8.3.0 - x x - x x SWIG/3.0.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Sabre/", "title": "Sabre", "text": ""}, {"location": "available_software/detail/Sabre/#available-modules", "title": "Available modules", "text": "

The overview below shows which Sabre installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Sabre, load one of these modules using a module load command like:

          module load Sabre/2013-09-28-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sabre/2013-09-28-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/Sailfish/", "title": "Sailfish", "text": ""}, {"location": "available_software/detail/Sailfish/#available-modules", "title": "Available modules", "text": "

The overview below shows which Sailfish installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Sailfish, load one of these modules using a module load command like:

          module load Sailfish/0.10.1-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sailfish/0.10.1-gompi-2019b - x - - - x"}, {"location": "available_software/detail/Salmon/", "title": "Salmon", "text": ""}, {"location": "available_software/detail/Salmon/#available-modules", "title": "Available modules", "text": "

The overview below shows which Salmon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Salmon, load one of these modules using a module load command like:

          module load Salmon/1.9.0-GCC-11.3.0\n
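A sketch of quantifying paired-end reads against an existing Salmon index; the index and read paths are placeholders, and the thread count should match your job request.

module load Salmon/1.9.0-GCC-11.3.0
# Quantify transcript abundance against a pre-built index (salmon_index and the read files are hypothetical)
salmon quant -i salmon_index -l A \
    -1 reads_1.fastq.gz -2 reads_2.fastq.gz \
    -p 8 -o salmon_quant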

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Salmon/1.9.0-GCC-11.3.0 x x x x x x Salmon/1.4.0-gompi-2020b - x x x x x Salmon/1.1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Sambamba/", "title": "Sambamba", "text": ""}, {"location": "available_software/detail/Sambamba/#available-modules", "title": "Available modules", "text": "

The overview below shows which Sambamba installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Sambamba, load one of these modules using a module load command like:

          module load Sambamba/1.0.1-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sambamba/1.0.1-GCC-11.3.0 x x x x x x Sambamba/0.8.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Satsuma2/", "title": "Satsuma2", "text": ""}, {"location": "available_software/detail/Satsuma2/#available-modules", "title": "Available modules", "text": "

The overview below shows which Satsuma2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using Satsuma2, load one of these modules using a module load command like:

          module load Satsuma2/20220304-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Satsuma2/20220304-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/ScaFaCoS/", "title": "ScaFaCoS", "text": ""}, {"location": "available_software/detail/ScaFaCoS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ScaFaCoS, load one of these modules using a module load command like:

          module load ScaFaCoS/1.0.1-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ScaFaCoS/1.0.1-intel-2020a - x x - x x ScaFaCoS/1.0.1-foss-2021b x x x - x x ScaFaCoS/1.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/ScaLAPACK/", "title": "ScaLAPACK", "text": ""}, {"location": "available_software/detail/ScaLAPACK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ScaLAPACK, load one of these modules using a module load command like:

          module load ScaLAPACK/2.2.0-gompi-2023b-fb\n
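
          ScaLAPACK is a library rather than an end-user tool: after loading the module you link your own MPI code against it. To see exactly which paths and environment variables a given module sets (handy when writing a Makefile), you can inspect it with module show:

          # list the environment changes (library/include paths, etc.) made by this module
          module show ScaLAPACK/2.2.0-gompi-2023b-fb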

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022a-fb x x x x x x ScaLAPACK/2.1.0-iimpi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompic-2020b x - - - x - ScaLAPACK/2.1.0-gompi-2021b-fb x x x x x x ScaLAPACK/2.1.0-gompi-2021a-fb x x x x x x ScaLAPACK/2.1.0-gompi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompi-2020b x x x x x x ScaLAPACK/2.1.0-gompi-2020a - x x - x x ScaLAPACK/2.0.2-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SciPy-bundle/", "title": "SciPy-bundle", "text": ""}, {"location": "available_software/detail/SciPy-bundle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SciPy-bundle, load one of these modules using a module load command like:

          module load SciPy-bundle/2023.11-gfbf-2023b\n
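
          Loading a SciPy-bundle module also makes the matching Python available, so a quick sanity check is to import the bundled packages and print their versions; a minimal sketch:

          # confirm the bundled packages import cleanly and report their versions
          python -c "import numpy, scipy, pandas; print(numpy.__version__, scipy.__version__, pandas.__version__)"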

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SciPy-bundle/2023.11-gfbf-2023b x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x SciPy-bundle/2022.05-intel-2022a x x x x x x SciPy-bundle/2022.05-foss-2022a x x x x x x SciPy-bundle/2021.10-intel-2021b x x x x x x SciPy-bundle/2021.10-foss-2021b-Python-2.7.18 x x x x x x SciPy-bundle/2021.10-foss-2021b x x x x x x SciPy-bundle/2021.05-intel-2021a - x x - x x SciPy-bundle/2021.05-gomkl-2021a x x x x x x SciPy-bundle/2021.05-foss-2021a x x x x x x SciPy-bundle/2020.11-intelcuda-2020b - - - - x - SciPy-bundle/2020.11-intel-2020b - x x - x x SciPy-bundle/2020.11-fosscuda-2020b x - - - x - SciPy-bundle/2020.11-foss-2020b-Python-2.7.18 - x x x x x SciPy-bundle/2020.11-foss-2020b x x x x x x SciPy-bundle/2020.03-iomkl-2020a-Python-3.8.2 - x - - - - SciPy-bundle/2020.03-intel-2020a-Python-3.8.2 x x x x x x SciPy-bundle/2020.03-intel-2020a-Python-2.7.18 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-3.8.2 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-2.7.18 - - x - x x SciPy-bundle/2019.10-intel-2019b-Python-3.7.4 - x x - x x SciPy-bundle/2019.10-intel-2019b-Python-2.7.16 - x x - x x SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 x x x - x x SciPy-bundle/2019.10-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Seaborn/", "title": "Seaborn", "text": ""}, {"location": "available_software/detail/Seaborn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Seaborn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Seaborn, load one of these modules using a module load command like:

          module load Seaborn/0.13.2-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Seaborn/0.13.2-gfbf-2023a x x x x x x Seaborn/0.12.2-foss-2022b x x x x x x Seaborn/0.12.1-foss-2022a x x x x x x Seaborn/0.11.2-foss-2021b x x x x x x Seaborn/0.11.2-foss-2021a x x x x x x Seaborn/0.11.1-intel-2020b - x x - x x Seaborn/0.11.1-fosscuda-2020b x - - - x - Seaborn/0.11.1-foss-2020b - x x x x x Seaborn/0.10.1-intel-2020b - x x - x x Seaborn/0.10.1-intel-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.1-foss-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.0-intel-2019b-Python-3.7.4 - x x - x x Seaborn/0.10.0-foss-2019b-Python-3.7.4 - x x - x x Seaborn/0.9.1-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SemiBin/", "title": "SemiBin", "text": ""}, {"location": "available_software/detail/SemiBin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SemiBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SemiBin, load one of these modules using a module load command like:

          module load SemiBin/2.0.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SemiBin/2.0.2-foss-2022a-CUDA-11.7.0 x - x - x - SemiBin/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Sentence-Transformers/", "title": "Sentence-Transformers", "text": ""}, {"location": "available_software/detail/Sentence-Transformers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sentence-Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sentence-Transformers, load one of these modules using a module load command like:

          module load Sentence-Transformers/2.2.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sentence-Transformers/2.2.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/SentencePiece/", "title": "SentencePiece", "text": ""}, {"location": "available_software/detail/SentencePiece/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SentencePiece installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SentencePiece, load one of these modules using a module load command like:

          module load SentencePiece/0.1.99-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SentencePiece/0.1.99-GCC-12.2.0 x x x x x x SentencePiece/0.1.97-GCC-11.3.0 x x x x x x SentencePiece/0.1.96-GCC-10.3.0 x x x - x x SentencePiece/0.1.85-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/SeqAn/", "title": "SeqAn", "text": ""}, {"location": "available_software/detail/SeqAn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeqAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeqAn, load one of these modules using a module load command like:

          module load SeqAn/2.4.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeqAn/2.4.0-GCCcore-11.2.0 x x x - x x SeqAn/2.4.0-GCCcore-10.2.0 - x x x x x SeqAn/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SeqKit/", "title": "SeqKit", "text": ""}, {"location": "available_software/detail/SeqKit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeqKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeqKit, load one of these modules using a module load command like:

          module load SeqKit/2.1.0\n
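
          SeqKit is a single command-line binary; as a small example (the read file name is hypothetical), you can print basic statistics for a FASTA/FASTQ file, gzipped or not:

          # summarise sequence counts and length statistics (input file name is a placeholder)
          seqkit stats reads.fq.gz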

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeqKit/2.1.0 - x x - x x"}, {"location": "available_software/detail/SeqLib/", "title": "SeqLib", "text": ""}, {"location": "available_software/detail/SeqLib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeqLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeqLib, load one of these modules using a module load command like:

          module load SeqLib/1.2.0-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeqLib/1.2.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/Serf/", "title": "Serf", "text": ""}, {"location": "available_software/detail/Serf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Serf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Serf, load one of these modules using a module load command like:

          module load Serf/1.3.9-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Serf/1.3.9-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Seurat/", "title": "Seurat", "text": ""}, {"location": "available_software/detail/Seurat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Seurat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Seurat, load one of these modules using a module load command like:

          module load Seurat/4.3.0-foss-2022a-R-4.2.1\n
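
          Seurat is an R package, and the module pulls in the corresponding R as a dependency. A minimal check from the shell that the package loads could look like this:

          # load the Seurat R package and print its version
          Rscript -e 'library(Seurat); print(packageVersion("Seurat"))'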

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Seurat/4.3.0-foss-2022a-R-4.2.1 x x x x x x Seurat/4.3.0-foss-2021b-R-4.1.2 x x x - x x Seurat/4.2.0-foss-2022a-R-4.2.1 x x x - x x Seurat/4.0.1-foss-2020b-R-4.0.3 - x x x x x Seurat/3.1.5-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/SeuratData/", "title": "SeuratData", "text": ""}, {"location": "available_software/detail/SeuratData/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeuratData installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeuratData, load one of these modules using a module load command like:

          module load SeuratData/20210514-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeuratData/20210514-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/SeuratDisk/", "title": "SeuratDisk", "text": ""}, {"location": "available_software/detail/SeuratDisk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeuratDisk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeuratDisk, load one of these modules using a module load command like:

          module load SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/SeuratWrappers/", "title": "SeuratWrappers", "text": ""}, {"location": "available_software/detail/SeuratWrappers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SeuratWrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SeuratWrappers, load one of these modules using a module load command like:

          module load SeuratWrappers/20210528-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SeuratWrappers/20210528-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/Shapely/", "title": "Shapely", "text": ""}, {"location": "available_software/detail/Shapely/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Shapely installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Shapely, load one of these modules using a module load command like:

          module load Shapely/2.0.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Shapely/2.0.1-gfbf-2023a x x x x x x Shapely/2.0.1-foss-2022b x x x x x x Shapely/1.8a1-iccifort-2020.4.304 - x x x x x Shapely/1.8a1-GCC-10.3.0 x - - - x - Shapely/1.8a1-GCC-10.2.0 - x x x x x Shapely/1.8.2-foss-2022a x x x x x x Shapely/1.8.2-foss-2021b x x x x x x Shapely/1.8.1.post1-GCC-11.2.0 x x x - x x Shapely/1.7.1-GCC-9.3.0-Python-3.8.2 - x x - x x Shapely/1.7.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Shasta/", "title": "Shasta", "text": ""}, {"location": "available_software/detail/Shasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Shasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Shasta, load one of these modules using a module load command like:

          module load Shasta/0.8.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Shasta/0.8.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Short-Pair/", "title": "Short-Pair", "text": ""}, {"location": "available_software/detail/Short-Pair/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Short-Pair installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Short-Pair, load one of these modules using a module load command like:

          module load Short-Pair/20170125-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Short-Pair/20170125-foss-2021b x x x - x x"}, {"location": "available_software/detail/SiNVICT/", "title": "SiNVICT", "text": ""}, {"location": "available_software/detail/SiNVICT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SiNVICT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SiNVICT, load one of these modules using a module load command like:

          module load SiNVICT/1.0-20180817-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SiNVICT/1.0-20180817-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/Sibelia/", "title": "Sibelia", "text": ""}, {"location": "available_software/detail/Sibelia/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sibelia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sibelia, load one of these modules using a module load command like:

          module load Sibelia/3.0.7-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sibelia/3.0.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimNIBS/", "title": "SimNIBS", "text": ""}, {"location": "available_software/detail/SimNIBS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimNIBS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimNIBS, load one of these modules using a module load command like:

          module load SimNIBS/3.2.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimNIBS/3.2.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimPEG/", "title": "SimPEG", "text": ""}, {"location": "available_software/detail/SimPEG/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimPEG, load one of these modules using a module load command like:

          module load SimPEG/0.18.1-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimPEG/0.18.1-intel-2021b x x x - x x SimPEG/0.18.1-foss-2021b x x x - x x SimPEG/0.14.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SimpleElastix/", "title": "SimpleElastix", "text": ""}, {"location": "available_software/detail/SimpleElastix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimpleElastix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimpleElastix, load one of these modules using a module load command like:

          module load SimpleElastix/1.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimpleElastix/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SimpleITK/", "title": "SimpleITK", "text": ""}, {"location": "available_software/detail/SimpleITK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SimpleITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SimpleITK, load one of these modules using a module load command like:

          module load SimpleITK/2.1.1.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SimpleITK/2.1.1.2-foss-2022a x x x x x x SimpleITK/2.1.0-fosscuda-2020b x - - - x - SimpleITK/2.1.0-foss-2020b - x x x x x SimpleITK/1.2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SlamDunk/", "title": "SlamDunk", "text": ""}, {"location": "available_software/detail/SlamDunk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SlamDunk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SlamDunk, load one of these modules using a module load command like:

          module load SlamDunk/0.4.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SlamDunk/0.4.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/Sniffles/", "title": "Sniffles", "text": ""}, {"location": "available_software/detail/Sniffles/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Sniffles installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Sniffles, load one of these modules using a module load command like:

          module load Sniffles/2.0.7-GCC-11.3.0\n
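
          As a rough sketch of a structural-variant call with Sniffles 2 (assuming a coordinate-sorted, indexed long-read BAM; the file names are hypothetical and the options are not a vetted set):

          # call structural variants from a long-read alignment and write them to a VCF
          sniffles --input mapped.sorted.bam --vcf variants.vcf --threads 8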

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Sniffles/2.0.7-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/SoX/", "title": "SoX", "text": ""}, {"location": "available_software/detail/SoX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SoX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SoX, load one of these modules using a module load command like:

          module load SoX/14.4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SoX/14.4.2-GCCcore-11.3.0 x x x x x x SoX/14.4.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Spark/", "title": "Spark", "text": ""}, {"location": "available_software/detail/Spark/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Spark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Spark, load one of these modules using a module load command like:

          module load Spark/3.5.0-foss-2023a\n
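
          The Spark module provides the standard launcher scripts. For many workloads the simplest setup is to run Spark in local mode inside a single job, sized to the cores you requested; a minimal sketch (the script name is hypothetical):

          # run a PySpark script in local mode with 8 executor threads inside this job
          spark-submit --master "local[8]" my_analysis.py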

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Spark/3.5.0-foss-2023a x x x x x x Spark/3.2.1-foss-2021b x x x - x x Spark/3.1.1-fosscuda-2020b - - - - x - Spark/2.4.5-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/SpatialDE/", "title": "SpatialDE", "text": ""}, {"location": "available_software/detail/SpatialDE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SpatialDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SpatialDE, load one of these modules using a module load command like:

          module load SpatialDE/1.1.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SpatialDE/1.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Spyder/", "title": "Spyder", "text": ""}, {"location": "available_software/detail/Spyder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Spyder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Spyder, load one of these modules using a module load command like:

          module load Spyder/4.1.5-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Spyder/4.1.5-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SqueezeMeta/", "title": "SqueezeMeta", "text": ""}, {"location": "available_software/detail/SqueezeMeta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SqueezeMeta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SqueezeMeta, load one of these modules using a module load command like:

          module load SqueezeMeta/1.5.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SqueezeMeta/1.5.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Squidpy/", "title": "Squidpy", "text": ""}, {"location": "available_software/detail/Squidpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Squidpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Squidpy, load one of these modules using a module load command like:

          module load Squidpy/1.2.2-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Squidpy/1.2.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Stacks/", "title": "Stacks", "text": ""}, {"location": "available_software/detail/Stacks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Stacks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Stacks, load one of these modules using a module load command like:

          module load Stacks/2.53-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Stacks/2.53-iccifort-2019.5.281 - x x - x - Stacks/2.5-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/Stata/", "title": "Stata", "text": ""}, {"location": "available_software/detail/Stata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Stata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Stata, load one of these modules using a module load command like:

          module load Stata/15\n
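
          In a job script Stata is normally run in batch mode; a minimal sketch (the do-file name is hypothetical, and the exact binary may be stata, stata-se or stata-mp depending on the licensed flavour):

          # run a do-file non-interactively; the log is written to myscript.log in the working directory
          stata -b do myscript.do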

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Stata/15 - x x x x x"}, {"location": "available_software/detail/Statistics-R/", "title": "Statistics-R", "text": ""}, {"location": "available_software/detail/Statistics-R/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Statistics-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Statistics-R, load one of these modules using a module load command like:

          module load Statistics-R/0.34-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Statistics-R/0.34-foss-2020a - x x - x x"}, {"location": "available_software/detail/StringTie/", "title": "StringTie", "text": ""}, {"location": "available_software/detail/StringTie/#available-modules", "title": "Available modules", "text": "

          The overview below shows which StringTie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using StringTie, load one of these modules using a module load command like:

          module load StringTie/2.2.1-GCC-11.2.0-Python-2.7.18\n
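
          A rough sketch of a typical StringTie assembly step (the BAM, GTF and output names are hypothetical):

          # assemble transcripts from a sorted alignment, guided by a reference annotation, using 8 threads
          stringtie aligned.sorted.bam -G reference.gtf -o transcripts.gtf -p 8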

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty StringTie/2.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x StringTie/2.2.1-GCC-11.2.0 x x x x x x StringTie/2.1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/Structure/", "title": "Structure", "text": ""}, {"location": "available_software/detail/Structure/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Structure installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Structure, load one of these modules using a module load command like:

          module load Structure/2.3.4-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Structure/2.3.4-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/Structure_threader/", "title": "Structure_threader", "text": ""}, {"location": "available_software/detail/Structure_threader/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Structure_threader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Structure_threader, load one of these modules using a module load command like:

          module load Structure_threader/1.3.10-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Structure_threader/1.3.10-foss-2022b x x x x x x"}, {"location": "available_software/detail/SuAVE-biomat/", "title": "SuAVE-biomat", "text": ""}, {"location": "available_software/detail/SuAVE-biomat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuAVE-biomat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SuAVE-biomat, load one of these modules using a module load command like:

          module load SuAVE-biomat/2.0.0-20230815-intel-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuAVE-biomat/2.0.0-20230815-intel-2023a x x x x x x"}, {"location": "available_software/detail/Subread/", "title": "Subread", "text": ""}, {"location": "available_software/detail/Subread/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Subread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Subread, load one of these modules using a module load command like:

          module load Subread/2.0.3-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Subread/2.0.3-GCC-9.3.0 - x x - x - Subread/2.0.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Subversion/", "title": "Subversion", "text": ""}, {"location": "available_software/detail/Subversion/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Subversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Subversion, load one of these modules using a module load command like:

          module load Subversion/1.14.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Subversion/1.14.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/SuiteSparse/", "title": "SuiteSparse", "text": ""}, {"location": "available_software/detail/SuiteSparse/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuiteSparse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SuiteSparse, load one of these modules using a module load command like:

          module load SuiteSparse/7.1.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuiteSparse/7.1.0-foss-2023a x x x x x x SuiteSparse/5.13.0-foss-2022b-METIS-5.1.0 x x x x x x SuiteSparse/5.13.0-foss-2022a-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-intel-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021a-METIS-5.1.0 x x x x x x SuiteSparse/5.8.1-foss-2020b-METIS-5.1.0 x x x x x x SuiteSparse/5.7.1-intel-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.7.1-foss-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-intel-2019b-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-foss-2019b-METIS-5.1.0 x x x - x x"}, {"location": "available_software/detail/SuperLU/", "title": "SuperLU", "text": ""}, {"location": "available_software/detail/SuperLU/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuperLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SuperLU, load one of these modules using a module load command like:

          module load SuperLU/5.2.2-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuperLU/5.2.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/SuperLU_DIST/", "title": "SuperLU_DIST", "text": ""}, {"location": "available_software/detail/SuperLU_DIST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which SuperLU_DIST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using SuperLU_DIST, load one of these modules using a module load command like:

          module load SuperLU_DIST/8.1.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty SuperLU_DIST/8.1.0-foss-2022a x - - x - - SuperLU_DIST/5.4.0-intel-2020a-trisolve-merge - x x - x x"}, {"location": "available_software/detail/Szip/", "title": "Szip", "text": ""}, {"location": "available_software/detail/Szip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Szip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Szip, load one of these modules using a module load command like:

          module load Szip/2.1.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Szip/2.1.1-GCCcore-12.3.0 x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x Szip/2.1.1-GCCcore-11.3.0 x x x x x x Szip/2.1.1-GCCcore-11.2.0 x x x x x x Szip/2.1.1-GCCcore-10.3.0 x x x x x x Szip/2.1.1-GCCcore-10.2.0 x x x x x x Szip/2.1.1-GCCcore-9.3.0 x x x x x x Szip/2.1.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/TALON/", "title": "TALON", "text": ""}, {"location": "available_software/detail/TALON/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TALON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TALON, load one of these modules using a module load command like:

          module load TALON/5.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TALON/5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/TAMkin/", "title": "TAMkin", "text": ""}, {"location": "available_software/detail/TAMkin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TAMkin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TAMkin, load one of these modules using a module load command like:

          module load TAMkin/1.2.6-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TAMkin/1.2.6-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/TCLAP/", "title": "TCLAP", "text": ""}, {"location": "available_software/detail/TCLAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TCLAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TCLAP, load one of these modules using a module load command like:

          module load TCLAP/1.2.4-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TCLAP/1.2.4-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/TELEMAC-MASCARET/", "title": "TELEMAC-MASCARET", "text": ""}, {"location": "available_software/detail/TELEMAC-MASCARET/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TELEMAC-MASCARET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TELEMAC-MASCARET, load one of these modules using a module load command like:

          module load TELEMAC-MASCARET/8p3r1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TELEMAC-MASCARET/8p3r1-foss-2021b x x x - x x"}, {"location": "available_software/detail/TEtranscripts/", "title": "TEtranscripts", "text": ""}, {"location": "available_software/detail/TEtranscripts/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TEtranscripts installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TEtranscripts, load one of these modules using a module load command like:

          module load TEtranscripts/2.2.0-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TEtranscripts/2.2.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/TOBIAS/", "title": "TOBIAS", "text": ""}, {"location": "available_software/detail/TOBIAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TOBIAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TOBIAS, load one of these modules using a module load command like:

          module load TOBIAS/0.12.12-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TOBIAS/0.12.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/TOPAS/", "title": "TOPAS", "text": ""}, {"location": "available_software/detail/TOPAS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TOPAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TOPAS, load one of these modules using a module load command like:

          module load TOPAS/3.9-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TOPAS/3.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/TRF/", "title": "TRF", "text": ""}, {"location": "available_software/detail/TRF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TRF, load one of these modules using a module load command like:

          module load TRF/4.09.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TRF/4.09.1-GCCcore-11.3.0 x x x x x x TRF/4.09.1-GCCcore-11.2.0 x x x - x x TRF/4.09.1-GCCcore-10.2.0 x x x x x x TRF/4.09-linux64 - - - - - x"}, {"location": "available_software/detail/TRUST4/", "title": "TRUST4", "text": ""}, {"location": "available_software/detail/TRUST4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TRUST4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TRUST4, load one of these modules using a module load command like:

          module load TRUST4/1.0.6-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TRUST4/1.0.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Tcl/", "title": "Tcl", "text": ""}, {"location": "available_software/detail/Tcl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tcl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Tcl, load one of these modules using a module load command like:

          module load Tcl/8.6.13-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tcl/8.6.13-GCCcore-13.2.0 x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x Tcl/8.6.12-GCCcore-11.3.0 x x x x x x Tcl/8.6.11-GCCcore-11.2.0 x x x x x x Tcl/8.6.11-GCCcore-10.3.0 x x x x x x Tcl/8.6.10-GCCcore-10.2.0 x x x x x x Tcl/8.6.10-GCCcore-9.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/TensorFlow/", "title": "TensorFlow", "text": ""}, {"location": "available_software/detail/TensorFlow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TensorFlow, load one of these modules using a module load command like:

          module load TensorFlow/2.13.0-foss-2023a\n
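
          The plain foss builds run on CPU only; for GPU nodes, load one of the -CUDA-suffixed versions from the list below instead (e.g. TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0). A quick check of which GPUs TensorFlow can see:

          # list the GPU devices visible to TensorFlow (prints an empty list on CPU-only builds or nodes)
          python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"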

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TensorFlow/2.13.0-foss-2023a x x x x x x TensorFlow/2.13.0-foss-2022b x x x x x x TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0 x - x - x - TensorFlow/2.11.0-foss-2022a x x x x x x TensorFlow/2.8.4-foss-2021b - - - x - - TensorFlow/2.7.1-foss-2021b-CUDA-11.4.1 x - - - x - TensorFlow/2.7.1-foss-2021b x x x x x x TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1 x - - - x - TensorFlow/2.6.0-foss-2021a x x x x x x TensorFlow/2.5.3-foss-2021a x x x - x x TensorFlow/2.5.0-fosscuda-2020b x - - - x - TensorFlow/2.5.0-foss-2020b - x x x x x TensorFlow/2.4.1-fosscuda-2020b x - - - x - TensorFlow/2.4.1-foss-2020b x x x x x x TensorFlow/2.3.1-foss-2020a-Python-3.8.2 - x x - x x TensorFlow/2.2.3-foss-2020b - x x x x x TensorFlow/2.2.2-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.2.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.1.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/1.15.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Theano/", "title": "Theano", "text": ""}, {"location": "available_software/detail/Theano/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Theano installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Theano, load one of these modules using a module load command like:

          module load Theano/1.1.2-intel-2021b-PyMC\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Theano/1.1.2-intel-2021b-PyMC x x x - x x Theano/1.1.2-intel-2020b-PyMC - - x - x x Theano/1.1.2-fosscuda-2020b-PyMC x - - - x - Theano/1.1.2-foss-2020b-PyMC - x x x x x Theano/1.0.4-intel-2019b-Python-3.7.4 - - x - x x Theano/1.0.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Tk/", "title": "Tk", "text": ""}, {"location": "available_software/detail/Tk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Tk, load one of these modules using a module load command like:

          module load Tk/8.6.13-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tk/8.6.13-GCCcore-12.3.0 x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x Tk/8.6.12-GCCcore-11.3.0 x x x x x x Tk/8.6.11-GCCcore-11.2.0 x x x x x x Tk/8.6.11-GCCcore-10.3.0 x x x x x x Tk/8.6.10-GCCcore-10.2.0 x x x x x x Tk/8.6.10-GCCcore-9.3.0 x x x x x x Tk/8.6.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Tkinter/", "title": "Tkinter", "text": ""}, {"location": "available_software/detail/Tkinter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tkinter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Tkinter, load one of these modules using a module load command like:

          module load Tkinter/3.11.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x Tkinter/3.10.4-GCCcore-11.3.0 x x x x x x Tkinter/3.9.6-GCCcore-11.2.0 x x x x x x Tkinter/3.9.5-GCCcore-10.3.0 x x x x x x Tkinter/3.8.6-GCCcore-10.2.0 x x x x x x Tkinter/3.8.2-GCCcore-9.3.0 x x x x x x Tkinter/3.7.4-GCCcore-8.3.0 - x x - x x Tkinter/2.7.18-GCCcore-10.2.0 - x x x x x Tkinter/2.7.18-GCCcore-9.3.0 - x x - x x Tkinter/2.7.16-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Togl/", "title": "Togl", "text": ""}, {"location": "available_software/detail/Togl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Togl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Togl, load one of these modules using a module load command like:

          module load Togl/2.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Togl/2.0-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Tombo/", "title": "Tombo", "text": ""}, {"location": "available_software/detail/Tombo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Tombo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Tombo, load one of these modules using a module load command like:

          module load Tombo/1.5.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Tombo/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/TopHat/", "title": "TopHat", "text": ""}, {"location": "available_software/detail/TopHat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TopHat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TopHat, load one of these modules using a module load command like:

          module load TopHat/2.1.2-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TopHat/2.1.2-iimpi-2020a - x x - x x TopHat/2.1.2-gompi-2020a - x x - x x TopHat/2.1.2-GCC-11.3.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-11.2.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/TransDecoder/", "title": "TransDecoder", "text": ""}, {"location": "available_software/detail/TransDecoder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TransDecoder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TransDecoder, load one of these modules using a module load command like:

          module load TransDecoder/5.5.0-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TransDecoder/5.5.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/TranscriptClean/", "title": "TranscriptClean", "text": ""}, {"location": "available_software/detail/TranscriptClean/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TranscriptClean installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TranscriptClean, load one of these modules using a module load command like:

          module load TranscriptClean/2.0.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TranscriptClean/2.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/Transformers/", "title": "Transformers", "text": ""}, {"location": "available_software/detail/Transformers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Transformers, load one of these modules using a module load command like:

          module load Transformers/4.30.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Transformers/4.30.2-foss-2022b x x x x x x Transformers/4.24.0-foss-2022a x x x x x x Transformers/4.21.1-foss-2021b x x x - x x Transformers/4.20.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/TreeMix/", "title": "TreeMix", "text": ""}, {"location": "available_software/detail/TreeMix/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TreeMix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TreeMix, load one of these modules using a module load command like:

          module load TreeMix/1.13-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TreeMix/1.13-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Trilinos/", "title": "Trilinos", "text": ""}, {"location": "available_software/detail/Trilinos/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trilinos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Trilinos, load one of these modules using a module load command like:

          module load Trilinos/12.12.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trilinos/12.12.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Trim_Galore/", "title": "Trim_Galore", "text": ""}, {"location": "available_software/detail/Trim_Galore/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trim_Galore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Trim_Galore, load one of these modules using a module load command like:

          module load Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18 - x x x x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-3.7.4 - x x - x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Trimmomatic/", "title": "Trimmomatic", "text": ""}, {"location": "available_software/detail/Trimmomatic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trimmomatic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Trimmomatic, load one of these modules using a module load command like:

          module load Trimmomatic/0.39-Java-11\n
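
          Trimmomatic ships as a Java archive; a common pattern is to call the jar through the EBROOTTRIMMOMATIC variable set by the module. The jar file name, read file names and trimming steps below are assumptions for illustration only:

          # paired-end adapter/quality trimming with 4 threads (jar path and file names are placeholders)
          java -jar $EBROOTTRIMMOMATIC/trimmomatic-0.39.jar PE -threads 4 \
              reads_1.fq.gz reads_2.fq.gz \
              out_1P.fq.gz out_1U.fq.gz out_2P.fq.gz out_2U.fq.gz \
              SLIDINGWINDOW:4:20 MINLEN:36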

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trimmomatic/0.39-Java-11 x x x x x x Trimmomatic/0.38-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Trinity/", "title": "Trinity", "text": ""}, {"location": "available_software/detail/Trinity/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trinity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Trinity, load one of these modules using a module load command like:

          module load Trinity/2.15.1-foss-2022a\n
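
          A rough sketch of a de novo assembly run with Trinity (read file names are hypothetical; match --CPU and --max_memory to what your job actually requested):

          # de novo transcriptome assembly from paired-end FASTQ files
          Trinity --seqType fq --left reads_1.fq.gz --right reads_2.fq.gz \
              --CPU 8 --max_memory 20G --output trinity_out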

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trinity/2.15.1-foss-2022a x x x x x x Trinity/2.10.0-foss-2019b-Python-3.7.4 - x x - x x Trinity/2.9.1-foss-2019b-Python-2.7.16 - x x - x x Trinity/2.8.5-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/Triton/", "title": "Triton", "text": ""}, {"location": "available_software/detail/Triton/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Triton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Triton, load one of these modules using a module load command like:

          module load Triton/1.1.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Triton/1.1.1-foss-2022a-CUDA-11.7.0 - - x - - -"}, {"location": "available_software/detail/Trycycler/", "title": "Trycycler", "text": ""}, {"location": "available_software/detail/Trycycler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Trycycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using Trycycler, load one of these modules using a module load command like:

          module load Trycycler/0.3.3-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Trycycler/0.3.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/TurboVNC/", "title": "TurboVNC", "text": ""}, {"location": "available_software/detail/TurboVNC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which TurboVNC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using TurboVNC, load one of these modules using a module load command like:

          module load TurboVNC/2.2.6-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty TurboVNC/2.2.6-GCCcore-11.2.0 x x x x x x TurboVNC/2.2.3-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/UCC/", "title": "UCC", "text": ""}, {"location": "available_software/detail/UCC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCC, load one of these modules using a module load command like:

          module load UCC/1.2.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCC/1.2.0-GCCcore-13.2.0 x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x UCC/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/UCLUST/", "title": "UCLUST", "text": ""}, {"location": "available_software/detail/UCLUST/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCLUST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCLUST, load one of these modules using a module load command like:

          module load UCLUST/1.2.22q-i86linux64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCLUST/1.2.22q-i86linux64 - x x - x x"}, {"location": "available_software/detail/UCX-CUDA/", "title": "UCX-CUDA", "text": ""}, {"location": "available_software/detail/UCX-CUDA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCX-CUDA, load one of these modules using a module load command like:

          module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - UCX-CUDA/1.12.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - UCX-CUDA/1.11.2-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - UCX-CUDA/1.10.0-GCCcore-10.3.0-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/UCX/", "title": "UCX", "text": ""}, {"location": "available_software/detail/UCX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UCX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UCX, load one of these modules using a module load command like:

          module load UCX/1.15.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UCX/1.15.0-GCCcore-13.2.0 x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x UCX/1.12.1-GCCcore-11.3.0 x x x x x x UCX/1.11.2-GCCcore-11.2.0 x x x x x x UCX/1.10.0-GCCcore-10.3.0 x x x x x x UCX/1.9.0-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - UCX/1.9.0-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x UCX/1.9.0-GCCcore-10.2.0 x x x x x x UCX/1.8.0-GCCcore-9.3.0 x x x x x x UCX/1.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UDUNITS/", "title": "UDUNITS", "text": ""}, {"location": "available_software/detail/UDUNITS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UDUNITS, load one of these modules using a module load command like:

          module load UDUNITS/2.2.28-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-10.3.0 x x x x x x UDUNITS/2.2.26-foss-2020a - x x - x x UDUNITS/2.2.26-GCCcore-10.2.0 x x x x x x UDUNITS/2.2.26-GCCcore-9.3.0 - x x - x x UDUNITS/2.2.26-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UFL/", "title": "UFL", "text": ""}, {"location": "available_software/detail/UFL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UFL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UFL, load one of these modules using a module load command like:

          module load UFL/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UFL/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UMI-tools/", "title": "UMI-tools", "text": ""}, {"location": "available_software/detail/UMI-tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UMI-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UMI-tools, load one of these modules using a module load command like:

          module load UMI-tools/1.0.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UMI-tools/1.0.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UQTk/", "title": "UQTk", "text": ""}, {"location": "available_software/detail/UQTk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UQTk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UQTk, load one of these modules using a module load command like:

          module load UQTk/3.1.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UQTk/3.1.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/USEARCH/", "title": "USEARCH", "text": ""}, {"location": "available_software/detail/USEARCH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which USEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using USEARCH, load one of these modules using a module load command like:

          module load USEARCH/11.0.667-i86linux32\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty USEARCH/11.0.667-i86linux32 x x x x x x"}, {"location": "available_software/detail/UnZip/", "title": "UnZip", "text": ""}, {"location": "available_software/detail/UnZip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UnZip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UnZip, load one of these modules using a module load command like:

          module load UnZip/6.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UnZip/6.0-GCCcore-13.2.0 x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x UnZip/6.0-GCCcore-11.3.0 x x x x x x UnZip/6.0-GCCcore-11.2.0 x x x x x x UnZip/6.0-GCCcore-10.3.0 x x x x x x UnZip/6.0-GCCcore-10.2.0 x x x x x x UnZip/6.0-GCCcore-9.3.0 x x x x x x"}, {"location": "available_software/detail/UniFrac/", "title": "UniFrac", "text": ""}, {"location": "available_software/detail/UniFrac/#available-modules", "title": "Available modules", "text": "

          The overview below shows which UniFrac installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using UniFrac, load one of these modules using a module load command like:

          module load UniFrac/1.3.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty UniFrac/1.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Unicycler/", "title": "Unicycler", "text": ""}, {"location": "available_software/detail/Unicycler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Unicycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Unicycler, load one of these modules using a module load command like:

          module load Unicycler/0.4.8-gompi-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Unicycler/0.4.8-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Unidecode/", "title": "Unidecode", "text": ""}, {"location": "available_software/detail/Unidecode/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Unidecode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Unidecode, load one of these modules using a module load command like:

          module load Unidecode/1.3.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Unidecode/1.3.6-GCCcore-11.3.0 x x x x x x Unidecode/1.1.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VASP/", "title": "VASP", "text": ""}, {"location": "available_software/detail/VASP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VASP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VASP, load one of these modules using a module load command like:

          module load VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-gomkl-2023a x x x x x x VASP/6.4.2-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.2-gomkl-2021a - x x x x x VASP/6.4.2-foss-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-foss-2023a x x x x x x VASP/6.4.2-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.4.1-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.1-gomkl-2021a - x x x x x VASP/6.4.1-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.3.1-gomkl-2021a-VASPsol-20210413-vtst-184-Wannier90-3.1.0 x x x x x x VASP/6.3.1-gomkl-2021a - x x x x x VASP/6.3.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.3.0-gomkl-2021a-VASPsol-20210413 - x x x x x VASP/6.2.1-gomkl-2021a - x x x x x VASP/6.2.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.2.0-intel-2020a - x x - x x VASP/6.2.0-gomkl-2020a - x x x x x VASP/6.2.0-foss-2020a - x x - x x VASP/6.1.2-intel-2020a - x x - x x VASP/6.1.2-gomkl-2020a - x x x x x VASP/6.1.2-foss-2020a - x x - x x VASP/5.4.4-iomkl-2020b-vtst-176-mt-20180516 x x x x x x VASP/5.4.4-intel-2019b-mt-20180516-ncl - x x - x x VASP/5.4.4-intel-2019b-mt-20180516 - x x - x x"}, {"location": "available_software/detail/VBZ-Compression/", "title": "VBZ-Compression", "text": ""}, {"location": "available_software/detail/VBZ-Compression/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VBZ-Compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VBZ-Compression, load one of these modules using a module load command like:

          module load VBZ-Compression/1.0.3-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VBZ-Compression/1.0.3-gompi-2022a x x x x x x VBZ-Compression/1.0.1-gompi-2020b - - x x x x"}, {"location": "available_software/detail/VCFtools/", "title": "VCFtools", "text": ""}, {"location": "available_software/detail/VCFtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VCFtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VCFtools, load one of these modules using a module load command like:

          module load VCFtools/0.1.16-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VCFtools/0.1.16-iccifort-2019.5.281 - x x - x x VCFtools/0.1.16-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/VEP/", "title": "VEP", "text": ""}, {"location": "available_software/detail/VEP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VEP, load one of these modules using a module load command like:

          module load VEP/107-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VEP/107-GCC-11.3.0 x x x - x x VEP/105-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/VESTA/", "title": "VESTA", "text": ""}, {"location": "available_software/detail/VESTA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VESTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VESTA, load one of these modules using a module load command like:

          module load VESTA/3.5.8-gtk3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VESTA/3.5.8-gtk3 x x x - x x"}, {"location": "available_software/detail/VMD/", "title": "VMD", "text": ""}, {"location": "available_software/detail/VMD/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VMD, load one of these modules using a module load command like:

          module load VMD/1.9.4a51-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VMD/1.9.4a51-foss-2020b - x x x x x"}, {"location": "available_software/detail/VMTK/", "title": "VMTK", "text": ""}, {"location": "available_software/detail/VMTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VMTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VMTK, load one of these modules using a module load command like:

          module load VMTK/1.4.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VMTK/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VSCode/", "title": "VSCode", "text": ""}, {"location": "available_software/detail/VSCode/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VSCode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VSCode, load one of these modules using a module load command like:

          module load VSCode/1.85.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VSCode/1.85.0 x x x x x x"}, {"location": "available_software/detail/VSEARCH/", "title": "VSEARCH", "text": ""}, {"location": "available_software/detail/VSEARCH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VSEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VSEARCH, load one of these modules using a module load command like:

          module load VSEARCH/2.22.1-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VSEARCH/2.22.1-GCC-11.3.0 x x x x x x VSEARCH/2.18.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/VTK/", "title": "VTK", "text": ""}, {"location": "available_software/detail/VTK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VTK, load one of these modules using a module load command like:

          module load VTK/9.2.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VTK/9.2.2-foss-2022a x x x x x x VTK/9.2.0.rc2-foss-2022a x x x - x x VTK/9.1.0-foss-2021b x x x - x x VTK/9.0.1-fosscuda-2020b x - - - x - VTK/9.0.1-foss-2021a - x x - x x VTK/9.0.1-foss-2020b - x x x x x VTK/8.2.0-foss-2020a-Python-3.8.2 - x x - x x VTK/8.2.0-foss-2019b-Python-3.7.4 - x x - x x VTK/8.2.0-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/VTune/", "title": "VTune", "text": ""}, {"location": "available_software/detail/VTune/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VTune installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VTune, load one of these modules using a module load command like:

          module load VTune/2019_update2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VTune/2019_update2 - - - - - x"}, {"location": "available_software/detail/Vala/", "title": "Vala", "text": ""}, {"location": "available_software/detail/Vala/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Vala installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Vala, load one of these modules using a module load command like:

          module load Vala/0.52.4-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Vala/0.52.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Valgrind/", "title": "Valgrind", "text": ""}, {"location": "available_software/detail/Valgrind/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Valgrind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Valgrind, load one of these modules using a module load command like:

          module load Valgrind/3.20.0-gompi-2022a\n
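          Once a Valgrind module is loaded, running an application under it only requires prefixing the command. A minimal, hypothetical example follows; ./my_app is a placeholder for your own executable, ideally built with debug symbols using a toolchain that matches the loaded module.

          module load Valgrind/3.20.0-gompi-2022a
          valgrind --leak-check=full --track-origins=yes ./my_app   # ./my_app is a placeholder for your own binary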

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Valgrind/3.20.0-gompi-2022a x x x - x x Valgrind/3.19.0-gompi-2022a x x x - x x Valgrind/3.18.1-iimpi-2021b x x x - x x Valgrind/3.18.1-gompi-2021b x x x - x x Valgrind/3.17.0-gompi-2021a x x x - x x"}, {"location": "available_software/detail/VarScan/", "title": "VarScan", "text": ""}, {"location": "available_software/detail/VarScan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VarScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VarScan, load one of these modules using a module load command like:

          module load VarScan/2.4.4-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VarScan/2.4.4-Java-11 x x x - x x"}, {"location": "available_software/detail/Velvet/", "title": "Velvet", "text": ""}, {"location": "available_software/detail/Velvet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Velvet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Velvet, load one of these modules using a module load command like:

          module load Velvet/1.2.10-foss-2023a-mt-kmer_191\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Velvet/1.2.10-foss-2023a-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-11.2.0-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-8.3.0-mt-kmer_191 - x x - x x"}, {"location": "available_software/detail/VirSorter2/", "title": "VirSorter2", "text": ""}, {"location": "available_software/detail/VirSorter2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VirSorter2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VirSorter2, load one of these modules using a module load command like:

          module load VirSorter2/2.2.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VirSorter2/2.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/VisPy/", "title": "VisPy", "text": ""}, {"location": "available_software/detail/VisPy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which VisPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using VisPy, load one of these modules using a module load command like:

          module load VisPy/0.12.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty VisPy/0.12.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Voro%2B%2B/", "title": "Voro++", "text": ""}, {"location": "available_software/detail/Voro%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Voro++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Voro++, load one of these modules using a module load command like:

          module load Voro++/0.4.6-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Voro++/0.4.6-intel-2019b - x x - x x Voro++/0.4.6-foss-2019b - x x - x x Voro++/0.4.6-GCCcore-11.2.0 x x x - x x Voro++/0.4.6-GCCcore-10.3.0 - x x - x x Voro++/0.4.6-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/WFA2/", "title": "WFA2", "text": ""}, {"location": "available_software/detail/WFA2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WFA2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WFA2, load one of these modules using a module load command like:

          module load WFA2/2.3.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WFA2/2.3.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/WHAM/", "title": "WHAM", "text": ""}, {"location": "available_software/detail/WHAM/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WHAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WHAM, load one of these modules using a module load command like:

          module load WHAM/2.0.10.2-intel-2020a-kj_mol\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WHAM/2.0.10.2-intel-2020a-kj_mol - x x - x x WHAM/2.0.10.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/WIEN2k/", "title": "WIEN2k", "text": ""}, {"location": "available_software/detail/WIEN2k/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WIEN2k installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WIEN2k, load one of these modules using a module load command like:

          module load WIEN2k/21.1-intel-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WIEN2k/21.1-intel-2021a - x x - x x WIEN2k/19.2-intel-2020b - x x x x x"}, {"location": "available_software/detail/WPS/", "title": "WPS", "text": ""}, {"location": "available_software/detail/WPS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WPS, load one of these modules using a module load command like:

          module load WPS/4.1-intel-2019b-dmpar\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WPS/4.1-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/WRF/", "title": "WRF", "text": ""}, {"location": "available_software/detail/WRF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WRF, load one of these modules using a module load command like:

          module load WRF/4.1.3-intel-2019b-dmpar\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WRF/4.1.3-intel-2019b-dmpar - x x - x x WRF/3.9.1.1-intel-2020b-dmpar - x x x x x WRF/3.8.0-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/Wannier90/", "title": "Wannier90", "text": ""}, {"location": "available_software/detail/Wannier90/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Wannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Wannier90, load one of these modules using a module load command like:

          module load Wannier90/3.1.0-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Wannier90/3.1.0-intel-2022a - - x - x x Wannier90/3.1.0-intel-2020b - x x x x x Wannier90/3.1.0-intel-2020a - x x - x x Wannier90/3.1.0-gomkl-2023a x x x x x x Wannier90/3.1.0-gomkl-2021a x x x x x x Wannier90/3.1.0-foss-2023a x x x x x x Wannier90/3.1.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Wayland/", "title": "Wayland", "text": ""}, {"location": "available_software/detail/Wayland/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Wayland installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Wayland, load one of these modules using a module load command like:

          module load Wayland/1.22.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Wayland/1.22.0-GCCcore-12.3.0 x x x x x x Wayland/1.21.0-GCCcore-11.2.0 x x x x x x Wayland/1.20.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Waylandpp/", "title": "Waylandpp", "text": ""}, {"location": "available_software/detail/Waylandpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Waylandpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Waylandpp, load one of these modules using a module load command like:

          module load Waylandpp/1.0.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Waylandpp/1.0.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/WebKitGTK%2B/", "title": "WebKitGTK+", "text": ""}, {"location": "available_software/detail/WebKitGTK%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WebKitGTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WebKitGTK+, load one of these modules using a module load command like:

          module load WebKitGTK+/2.37.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WebKitGTK+/2.37.1-GCC-11.2.0 x x x x x x WebKitGTK+/2.27.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/WhatsHap/", "title": "WhatsHap", "text": ""}, {"location": "available_software/detail/WhatsHap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WhatsHap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WhatsHap, load one of these modules using a module load command like:

          module load WhatsHap/1.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WhatsHap/1.7-foss-2022a x x x x x x WhatsHap/1.4-foss-2021b x x x - x x WhatsHap/1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/Winnowmap/", "title": "Winnowmap", "text": ""}, {"location": "available_software/detail/Winnowmap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Winnowmap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Winnowmap, load one of these modules using a module load command like:

          module load Winnowmap/1.0-GCC-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Winnowmap/1.0-GCC-8.3.0 - x - - - x"}, {"location": "available_software/detail/WisecondorX/", "title": "WisecondorX", "text": ""}, {"location": "available_software/detail/WisecondorX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which WisecondorX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using WisecondorX, load one of these modules using a module load command like:

          module load WisecondorX/1.1.6-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty WisecondorX/1.1.6-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/X11/", "title": "X11", "text": ""}, {"location": "available_software/detail/X11/#available-modules", "title": "Available modules", "text": "

          The overview below shows which X11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using X11, load one of these modules using a module load command like:

          module load X11/20230603-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty X11/20230603-GCCcore-12.3.0 x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x X11/20220504-GCCcore-11.3.0 x x x x x x X11/20210802-GCCcore-11.2.0 x x x x x x X11/20210518-GCCcore-10.3.0 x x x x x x X11/20201008-GCCcore-10.2.0 x x x x x x X11/20200222-GCCcore-9.3.0 x x x x x x X11/20190717-GCCcore-8.3.0 x x x - x x X11/20190311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/XCFun/", "title": "XCFun", "text": ""}, {"location": "available_software/detail/XCFun/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XCFun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XCFun, load one of these modules using a module load command like:

          module load XCFun/2.1.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XCFun/2.1.1-GCCcore-12.2.0 x x x x x x XCFun/2.1.1-GCCcore-11.3.0 - x x x x x XCFun/2.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/XCrySDen/", "title": "XCrySDen", "text": ""}, {"location": "available_software/detail/XCrySDen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XCrySDen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XCrySDen, load one of these modules using a module load command like:

          module load XCrySDen/1.6.2-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XCrySDen/1.6.2-intel-2022a x x x - x x XCrySDen/1.6.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/XGBoost/", "title": "XGBoost", "text": ""}, {"location": "available_software/detail/XGBoost/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XGBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XGBoost, load one of these modules using a module load command like:

          module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\n
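          Note that only the -CUDA-11.7.0 variant provides GPU support, and the table below shows it is only installed on accelgor. A quick smoke test after loading, assuming (as is typical for these EasyBuild installations) that the module puts the xgboost Python package on your Python path:

          module load XGBoost/1.7.2-foss-2022a
          python -c "import xgboost; print(xgboost.__version__)"   # should print 1.7.2 if the bindings are found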

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XGBoost/1.7.2-foss-2022a-CUDA-11.7.0 x - - - - - XGBoost/1.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/XML-Compile/", "title": "XML-Compile", "text": ""}, {"location": "available_software/detail/XML-Compile/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XML-Compile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XML-Compile, load one of these modules using a module load command like:

          module load XML-Compile/1.63-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XML-Compile/1.63-GCCcore-12.2.0 x x x x x x XML-Compile/1.63-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/XML-LibXML/", "title": "XML-LibXML", "text": ""}, {"location": "available_software/detail/XML-LibXML/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XML-LibXML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XML-LibXML, load one of these modules using a module load command like:

          module load XML-LibXML/2.0208-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.3.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.2.0 x x x x x x XML-LibXML/2.0206-GCCcore-10.2.0 - x x x x x XML-LibXML/2.0205-GCCcore-9.3.0 - x x - x x XML-LibXML/2.0201-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/XZ/", "title": "XZ", "text": ""}, {"location": "available_software/detail/XZ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XZ, load one of these modules using a module load command like:

          module load XZ/5.4.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XZ/5.4.4-GCCcore-13.2.0 x x x x x x XZ/5.4.2-GCCcore-12.3.0 x x x x x x XZ/5.2.7-GCCcore-12.2.0 x x x x x x XZ/5.2.5-GCCcore-11.3.0 x x x x x x XZ/5.2.5-GCCcore-11.2.0 x x x x x x XZ/5.2.5-GCCcore-10.3.0 x x x x x x XZ/5.2.5-GCCcore-10.2.0 x x x x x x XZ/5.2.5-GCCcore-9.3.0 x x x x x x XZ/5.2.4-GCCcore-8.3.0 x x x x x x XZ/5.2.4-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Xerces-C%2B%2B/", "title": "Xerces-C++", "text": ""}, {"location": "available_software/detail/Xerces-C%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Xerces-C++, load one of these modules using a module load command like:

          module load Xerces-C++/3.2.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/XlsxWriter/", "title": "XlsxWriter", "text": ""}, {"location": "available_software/detail/XlsxWriter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which XlsxWriter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using XlsxWriter, load one of these modules using a module load command like:

          module load XlsxWriter/3.1.9-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty XlsxWriter/3.1.9-GCCcore-13.2.0 x x x x x x XlsxWriter/3.1.3-GCCcore-12.3.0 x x x x x x XlsxWriter/3.1.2-GCCcore-12.2.0 x x x x x x XlsxWriter/3.0.8-GCCcore-11.3.0 x x x x x x XlsxWriter/3.0.2-GCCcore-11.2.0 x x x x x x XlsxWriter/1.4.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Xvfb/", "title": "Xvfb", "text": ""}, {"location": "available_software/detail/Xvfb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Xvfb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Xvfb, load one of these modules using a module load command like:

          module load Xvfb/21.1.8-GCCcore-12.3.0\n
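          Xvfb is typically used to give GUI-dependent tools a virtual display in batch jobs, where no X server is available. A minimal sketch (the display number :99 and the screen geometry are arbitrary choices, and the GUI command itself is left as a placeholder):

          module load Xvfb/21.1.8-GCCcore-12.3.0
          Xvfb :99 -screen 0 1280x1024x24 &   # start a virtual framebuffer X server in the background
          export DISPLAY=:99                  # point X clients at the virtual display
          # ... run your GUI-dependent command here ...
          kill %1                             # stop the virtual display when done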

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x Xvfb/21.1.3-GCCcore-11.3.0 x x x x x x Xvfb/1.20.13-GCCcore-11.2.0 x x x x x x Xvfb/1.20.11-GCCcore-10.3.0 x x x x x x Xvfb/1.20.9-GCCcore-10.2.0 x x x x x x Xvfb/1.20.9-GCCcore-9.3.0 - x x - x x Xvfb/1.20.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/YACS/", "title": "YACS", "text": ""}, {"location": "available_software/detail/YACS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which YACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using YACS, load one of these modules using a module load command like:

          module load YACS/0.1.8-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty YACS/0.1.8-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/YANK/", "title": "YANK", "text": ""}, {"location": "available_software/detail/YANK/#available-modules", "title": "Available modules", "text": "

          The overview below shows which YANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using YANK, load one of these modules using a module load command like:

          module load YANK/0.25.2-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty YANK/0.25.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/YAXT/", "title": "YAXT", "text": ""}, {"location": "available_software/detail/YAXT/#available-modules", "title": "Available modules", "text": "

          The overview below shows which YAXT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using YAXT, load one of these modules using a module load command like:

          module load YAXT/0.9.1-gompi-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty YAXT/0.9.1-gompi-2021a x x x - x x YAXT/0.6.2-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/Yambo/", "title": "Yambo", "text": ""}, {"location": "available_software/detail/Yambo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Yambo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Yambo, load one of these modules using a module load command like:

          module load Yambo/5.1.2-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Yambo/5.1.2-intel-2021b x x x x x x"}, {"location": "available_software/detail/Yasm/", "title": "Yasm", "text": ""}, {"location": "available_software/detail/Yasm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Yasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Yasm, load one of these modules using a module load command like:

          module load Yasm/1.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Yasm/1.3.0-GCCcore-12.3.0 x x x x x x Yasm/1.3.0-GCCcore-12.2.0 x x x x x x Yasm/1.3.0-GCCcore-11.3.0 x x x x x x Yasm/1.3.0-GCCcore-11.2.0 x x x x x x Yasm/1.3.0-GCCcore-10.3.0 x x x x x x Yasm/1.3.0-GCCcore-10.2.0 x x x x x x Yasm/1.3.0-GCCcore-9.3.0 - x x - x x Yasm/1.3.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Z3/", "title": "Z3", "text": ""}, {"location": "available_software/detail/Z3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Z3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Z3, load one of these modules using a module load command like:

          module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n
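          After loading, the z3 solver can be exercised directly from the shell. The snippet below is just an illustrative smoke test that feeds a small, arbitrary SMT-LIB problem on standard input:

          module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3
          echo '(declare-const x Int) (assert (> x 3)) (check-sat) (get-model)' | z3 -in   # expects "sat" plus a model for x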

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x Z3/4.10.2-GCCcore-11.3.0 x x x x x x Z3/4.8.12-GCCcore-11.2.0 x x x x x x Z3/4.8.11-GCCcore-10.3.0 x x x x x x Z3/4.8.10-GCCcore-10.2.0 - x x x x x Z3/4.8.9-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/Zeo%2B%2B/", "title": "Zeo++", "text": ""}, {"location": "available_software/detail/Zeo%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Zeo++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Zeo++, load one of these modules using a module load command like:

          module load Zeo++/0.3-intel-compilers-2023.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Zeo++/0.3-intel-compilers-2023.1.0 x x x x x x"}, {"location": "available_software/detail/ZeroMQ/", "title": "ZeroMQ", "text": ""}, {"location": "available_software/detail/ZeroMQ/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ZeroMQ, load one of these modules using a module load command like:

          module load ZeroMQ/4.3.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-12.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-10.3.0 x x x x x x ZeroMQ/4.3.3-GCCcore-10.2.0 x x x x x x ZeroMQ/4.3.2-GCCcore-9.3.0 x x x x x x ZeroMQ/4.3.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zip/", "title": "Zip", "text": ""}, {"location": "available_software/detail/Zip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Zip, load one of these modules using a module load command like:

          module load Zip/3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Zip/3.0-GCCcore-12.3.0 x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x Zip/3.0-GCCcore-11.3.0 x x x x x x Zip/3.0-GCCcore-11.2.0 x x x x x x Zip/3.0-GCCcore-10.3.0 x x x x x x Zip/3.0-GCCcore-10.2.0 x x x x x x Zip/3.0-GCCcore-9.3.0 - x x - x x Zip/3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zopfli/", "title": "Zopfli", "text": ""}, {"location": "available_software/detail/Zopfli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which Zopfli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using Zopfli, load one of these modules using a module load command like:

          module load Zopfli/1.0.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty Zopfli/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/adjustText/", "title": "adjustText", "text": ""}, {"location": "available_software/detail/adjustText/#available-modules", "title": "Available modules", "text": "

          The overview below shows which adjustText installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using adjustText, load one of these modules using a module load command like:

          module load adjustText/0.7.3-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty adjustText/0.7.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/aiohttp/", "title": "aiohttp", "text": ""}, {"location": "available_software/detail/aiohttp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which aiohttp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using aiohttp, load one of these modules using a module load command like:

          module load aiohttp/3.8.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty aiohttp/3.8.5-GCCcore-12.3.0 x x x x - x aiohttp/3.8.5-GCCcore-12.2.0 x x x x x x aiohttp/3.8.3-GCCcore-11.3.0 x x x x x x aiohttp/3.8.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/alevin-fry/", "title": "alevin-fry", "text": ""}, {"location": "available_software/detail/alevin-fry/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alevin-fry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alevin-fry, load one of these modules using a module load command like:

          module load alevin-fry/0.4.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alevin-fry/0.4.3-GCCcore-11.2.0 - x - - - -"}, {"location": "available_software/detail/alleleCount/", "title": "alleleCount", "text": ""}, {"location": "available_software/detail/alleleCount/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alleleCount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alleleCount, load one of these modules using a module load command like:

          module load alleleCount/4.3.0-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alleleCount/4.3.0-GCC-12.2.0 x x x x x x alleleCount/4.2.1-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/alleleIntegrator/", "title": "alleleIntegrator", "text": ""}, {"location": "available_software/detail/alleleIntegrator/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alleleIntegrator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alleleIntegrator, load one of these modules using a module load command like:

          module load alleleIntegrator/0.8.8-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alleleIntegrator/0.8.8-foss-2022b-R-4.2.2 x x x x x x alleleIntegrator/0.8.8-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/alsa-lib/", "title": "alsa-lib", "text": ""}, {"location": "available_software/detail/alsa-lib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which alsa-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using alsa-lib, load one of these modules using a module load command like:

          module load alsa-lib/1.2.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty alsa-lib/1.2.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/anadama2/", "title": "anadama2", "text": ""}, {"location": "available_software/detail/anadama2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which anadama2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using anadama2, load one of these modules using a module load command like:

          module load anadama2/0.10.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty anadama2/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/angsd/", "title": "angsd", "text": ""}, {"location": "available_software/detail/angsd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which angsd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using angsd, load one of these modules using a module load command like:

          module load angsd/0.940-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty angsd/0.940-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/anndata/", "title": "anndata", "text": ""}, {"location": "available_software/detail/anndata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which anndata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using anndata, load one of these modules using a module load command like:

          module load anndata/0.10.5.post1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty anndata/0.10.5.post1-foss-2023a x x x x x x anndata/0.9.2-foss-2021a x x x x x x anndata/0.8.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ant/", "title": "ant", "text": ""}, {"location": "available_software/detail/ant/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ant, load one of these modules using a module load command like:

          module load ant/1.10.12-Java-17\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ant/1.10.12-Java-17 x x x x x x ant/1.10.12-Java-11 x x x x x x ant/1.10.11-Java-11 x x x - x x ant/1.10.9-Java-11 x x x x x x ant/1.10.8-Java-11 - x x - x x ant/1.10.7-Java-11 - x x - x x ant/1.10.6-Java-1.8 - x x - x x"}, {"location": "available_software/detail/antiSMASH/", "title": "antiSMASH", "text": ""}, {"location": "available_software/detail/antiSMASH/#available-modules", "title": "Available modules", "text": "

          The overview below shows which antiSMASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using antiSMASH, load one of these modules using a module load command like:

          module load antiSMASH/6.0.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty antiSMASH/6.0.1-foss-2020b - x x x x x antiSMASH/5.2.0-foss-2020b - x x x x x antiSMASH/5.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/anvio/", "title": "anvio", "text": ""}, {"location": "available_software/detail/anvio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which anvio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using anvio, load one of these modules using a module load command like:

          module load anvio/8-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty anvio/8-foss-2022b x x x x x x anvio/6.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/any2fasta/", "title": "any2fasta", "text": ""}, {"location": "available_software/detail/any2fasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which any2fasta installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using any2fasta, load one of these modules using a module load command like:

          module load any2fasta/0.4.2-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty any2fasta/0.4.2-GCCcore-10.2.0 - x x - x x any2fasta/0.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/apex/", "title": "apex", "text": ""}, {"location": "available_software/detail/apex/#available-modules", "title": "Available modules", "text": "

          The overview below shows which apex installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using apex, load one of these modules using a module load command like:

          module load apex/20210420-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty apex/20210420-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/archspec/", "title": "archspec", "text": ""}, {"location": "available_software/detail/archspec/#available-modules", "title": "Available modules", "text": "

          The overview below shows which archspec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using archspec, load one of these modules using a module load command like:

          module load archspec/0.1.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty archspec/0.1.3-GCCcore-11.2.0 x x x - x x archspec/0.1.2-GCCcore-10.3.0 - x x - x x archspec/0.1.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x archspec/0.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/argtable/", "title": "argtable", "text": ""}, {"location": "available_software/detail/argtable/#available-modules", "title": "Available modules", "text": "

          The overview below shows which argtable installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using argtable, load one of these modules using a module load command like:

          module load argtable/2.13-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty argtable/2.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/aria2/", "title": "aria2", "text": ""}, {"location": "available_software/detail/aria2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which aria2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using aria2, load one of these modules using a module load command like:

          module load aria2/1.35.0-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty aria2/1.35.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/arpack-ng/", "title": "arpack-ng", "text": ""}, {"location": "available_software/detail/arpack-ng/#available-modules", "title": "Available modules", "text": "

          The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using arpack-ng, load one of these modules using a module load command like:

          module load arpack-ng/3.9.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty arpack-ng/3.9.0-foss-2023a x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x arpack-ng/3.8.0-foss-2022a x x x x x x arpack-ng/3.8.0-foss-2021b x x x x x x arpack-ng/3.8.0-foss-2021a x x x x x x arpack-ng/3.7.0-intel-2020a - x x - x x arpack-ng/3.7.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/arrow-R/", "title": "arrow-R", "text": ""}, {"location": "available_software/detail/arrow-R/#available-modules", "title": "Available modules", "text": "

          The overview below shows which arrow-R installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using arrow-R, load one of these modules using a module load command like:

          module load arrow-R/14.0.0.2-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty arrow-R/14.0.0.2-foss-2023a-R-4.3.2 x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x arrow-R/8.0.0-foss-2022a-R-4.2.1 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.2.0 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.1.2 x x x x x x arrow-R/6.0.0.2-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/arrow/", "title": "arrow", "text": ""}, {"location": "available_software/detail/arrow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which arrow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using arrow, load one of these modules using a module load command like:

          module load arrow/0.17.1-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty arrow/0.17.1-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-atk/", "title": "at-spi2-atk", "text": ""}, {"location": "available_software/detail/at-spi2-atk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using at-spi2-atk, load one of these modules using a module load command like:

          module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-10.3.0 x x x - x x at-spi2-atk/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-atk/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-core/", "title": "at-spi2-core", "text": ""}, {"location": "available_software/detail/at-spi2-core/#available-modules", "title": "Available modules", "text": "

          The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using at-spi2-core, load one of these modules using a module load command like:

          module load at-spi2-core/2.49.90-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty at-spi2-core/2.49.90-GCCcore-12.3.0 x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x at-spi2-core/2.44.1-GCCcore-11.3.0 x x x x x x at-spi2-core/2.40.3-GCCcore-11.2.0 x x x x x x at-spi2-core/2.40.2-GCCcore-10.3.0 x x x - x x at-spi2-core/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-core/2.34.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/atools/", "title": "atools", "text": ""}, {"location": "available_software/detail/atools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which atools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using atools, load one of these modules using a module load command like:

          module load atools/1.5.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty atools/1.5.1-GCCcore-11.2.0 x x x - x x atools/1.4.6-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/attr/", "title": "attr", "text": ""}, {"location": "available_software/detail/attr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which attr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using attr, load one of these modules using a module load command like:

          module load attr/2.5.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty attr/2.5.1-GCCcore-11.3.0 x x x x x x attr/2.5.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/attrdict/", "title": "attrdict", "text": ""}, {"location": "available_software/detail/attrdict/#available-modules", "title": "Available modules", "text": "

          The overview below shows which attrdict installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using attrdict, load one of these modules using a module load command like:

          module load attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/attrdict3/", "title": "attrdict3", "text": ""}, {"location": "available_software/detail/attrdict3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which attrdict3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using attrdict3, load one of these modules using a module load command like:

          module load attrdict3/2.0.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty attrdict3/2.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/augur/", "title": "augur", "text": ""}, {"location": "available_software/detail/augur/#available-modules", "title": "Available modules", "text": "

          The overview below shows which augur installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using augur, load one of these modules using a module load command like:

          module load augur/7.0.2-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty augur/7.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/autopep8/", "title": "autopep8", "text": ""}, {"location": "available_software/detail/autopep8/#available-modules", "title": "Available modules", "text": "

          The overview below shows which autopep8 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using autopep8, load one of these modules using a module load command like:

          module load autopep8/2.0.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty autopep8/2.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/awscli/", "title": "awscli", "text": ""}, {"location": "available_software/detail/awscli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which awscli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using awscli, load one of these modules using a module load command like:

          module load awscli/2.11.21-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty awscli/2.11.21-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/babl/", "title": "babl", "text": ""}, {"location": "available_software/detail/babl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which babl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using babl, load one of these modules using a module load command like:

          module load babl/0.1.86-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty babl/0.1.86-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/bam-readcount/", "title": "bam-readcount", "text": ""}, {"location": "available_software/detail/bam-readcount/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bam-readcount installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bam-readcount, load one of these modules using a module load command like:

          module load bam-readcount/0.8.0-GCC-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bam-readcount/0.8.0-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/bamFilters/", "title": "bamFilters", "text": ""}, {"location": "available_software/detail/bamFilters/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bamFilters installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bamFilters, load one of these modules using a module load command like:

          module load bamFilters/2022-06-30-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bamFilters/2022-06-30-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/barrnap/", "title": "barrnap", "text": ""}, {"location": "available_software/detail/barrnap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which barrnap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using barrnap, load one of these modules using a module load command like:

          module load barrnap/0.9-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty barrnap/0.9-gompi-2021b x x x - x x barrnap/0.9-gompi-2020b - x x x x x"}, {"location": "available_software/detail/basemap/", "title": "basemap", "text": ""}, {"location": "available_software/detail/basemap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which basemap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using basemap, load one of these modules using a module load command like:

          module load basemap/1.3.9-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty basemap/1.3.9-foss-2023a x x x x x x basemap/1.2.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/bcbio-gff/", "title": "bcbio-gff", "text": ""}, {"location": "available_software/detail/bcbio-gff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcbio-gff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bcbio-gff, load one of these modules using a module load command like:

          module load bcbio-gff/0.7.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcbio-gff/0.7.0-foss-2022b x x x x x x bcbio-gff/0.7.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/bcgTree/", "title": "bcgTree", "text": ""}, {"location": "available_software/detail/bcgTree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcgTree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bcgTree, load one of these modules using a module load command like:

          module load bcgTree/1.2.0-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcgTree/1.2.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/bcl-convert/", "title": "bcl-convert", "text": ""}, {"location": "available_software/detail/bcl-convert/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcl-convert installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bcl-convert, load one of these modules using a module load command like:

          module load bcl-convert/4.0.3-2el7.x86_64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcl-convert/4.0.3-2el7.x86_64 x x x - x x"}, {"location": "available_software/detail/bcl2fastq2/", "title": "bcl2fastq2", "text": ""}, {"location": "available_software/detail/bcl2fastq2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bcl2fastq2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bcl2fastq2, load one of these modules using a module load command like:

          module load bcl2fastq2/2.20.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bcl2fastq2/2.20.0-GCC-11.2.0 x x x - x x bcl2fastq2/2.20.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/beagle-lib/", "title": "beagle-lib", "text": ""}, {"location": "available_software/detail/beagle-lib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which beagle-lib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using beagle-lib, load one of these modules using a module load command like:

          module load beagle-lib/4.0.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty beagle-lib/4.0.0-GCC-11.3.0 x x x x x x beagle-lib/3.1.2-gcccuda-2019b x - - - x - beagle-lib/3.1.2-GCC-11.3.0 x x x - x x beagle-lib/3.1.2-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/binutils/", "title": "binutils", "text": ""}, {"location": "available_software/detail/binutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which binutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using binutils, load one of these modules using a module load command like:

          module load binutils/2.40-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty binutils/2.40-GCCcore-13.2.0 x x x x x x binutils/2.40-GCCcore-12.3.0 x x x x x x binutils/2.40 x x x x x x binutils/2.39-GCCcore-12.2.0 x x x x x x binutils/2.39 x x x x x x binutils/2.38-GCCcore-11.3.0 x x x x x x binutils/2.38 x x x x x x binutils/2.37-GCCcore-11.2.0 x x x x x x binutils/2.37 x x x x x x binutils/2.36.1-GCCcore-10.3.0 x x x x x x binutils/2.36.1 x x x x x x binutils/2.35-GCCcore-10.2.0 x x x x x x binutils/2.35 x x x x x x binutils/2.34-GCCcore-9.3.0 x x x x x x binutils/2.34 x x x x x x binutils/2.32-GCCcore-8.3.0 x x x x x x binutils/2.32 x x x x x x binutils/2.31.1-GCCcore-8.2.0 - x - - - - binutils/2.31.1 - x - - - x binutils/2.30 - - - - - x binutils/2.28 x x x x x x"}, {"location": "available_software/detail/biobakery-workflows/", "title": "biobakery-workflows", "text": ""}, {"location": "available_software/detail/biobakery-workflows/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biobakery-workflows installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using biobakery-workflows, load one of these modules using a module load command like:

          module load biobakery-workflows/3.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biobakery-workflows/3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/biobambam2/", "title": "biobambam2", "text": ""}, {"location": "available_software/detail/biobambam2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biobambam2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using biobambam2, load one of these modules using a module load command like:

          module load biobambam2/2.0.185-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biobambam2/2.0.185-GCC-12.3.0 x x x x x x biobambam2/2.0.87-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/biogeme/", "title": "biogeme", "text": ""}, {"location": "available_software/detail/biogeme/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biogeme installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using biogeme, load one of these modules using a module load command like:

          module load biogeme/3.2.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biogeme/3.2.10-foss-2022a x x x - x x biogeme/3.2.6-foss-2022a x x x - x x"}, {"location": "available_software/detail/biom-format/", "title": "biom-format", "text": ""}, {"location": "available_software/detail/biom-format/#available-modules", "title": "Available modules", "text": "

          The overview below shows which biom-format installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using biom-format, load one of these modules using a module load command like:

          module load biom-format/2.1.15-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty biom-format/2.1.15-foss-2022b x x x x x x biom-format/2.1.14-foss-2022a x x x x x x biom-format/2.1.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/bmtagger/", "title": "bmtagger", "text": ""}, {"location": "available_software/detail/bmtagger/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bmtagger installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bmtagger, load one of these modules using a module load command like:

          module load bmtagger/3.101-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bmtagger/3.101-gompi-2020b - x x x x x"}, {"location": "available_software/detail/bokeh/", "title": "bokeh", "text": ""}, {"location": "available_software/detail/bokeh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bokeh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bokeh, load one of these modules using a module load command like:

          module load bokeh/3.2.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bokeh/3.2.2-foss-2023a x x x x x x bokeh/2.4.3-foss-2022a x x x x x x bokeh/2.4.2-foss-2021b x x x x x x bokeh/2.4.1-foss-2021a x x x - x x bokeh/2.2.3-intel-2020b - x x - x x bokeh/2.2.3-fosscuda-2020b x - - - x - bokeh/2.2.3-foss-2020b - x x x x x bokeh/2.0.2-intel-2020a-Python-3.8.2 - x x - x x bokeh/2.0.2-foss-2020a-Python-3.8.2 - x x - x x bokeh/1.4.0-intel-2019b-Python-3.7.4 - x x - x x bokeh/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/boto3/", "title": "boto3", "text": ""}, {"location": "available_software/detail/boto3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which boto3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using boto3, load one of these modules using a module load command like:

          module load boto3/1.34.10-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty boto3/1.34.10-GCCcore-12.2.0 x x x x x x boto3/1.26.163-GCCcore-12.2.0 x x x x x x boto3/1.20.13-GCCcore-11.2.0 x x x - x x boto3/1.20.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/bpp/", "title": "bpp", "text": ""}, {"location": "available_software/detail/bpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bpp, load one of these modules using a module load command like:

          module load bpp/4.4.0-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bpp/4.4.0-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/btllib/", "title": "btllib", "text": ""}, {"location": "available_software/detail/btllib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which btllib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using btllib, load one of these modules using a module load command like:

          module load btllib/1.7.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty btllib/1.7.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/build/", "title": "build", "text": ""}, {"location": "available_software/detail/build/#available-modules", "title": "Available modules", "text": "

          The overview below shows which build installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using build, load one of these modules using a module load command like:

          module load build/0.10.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty build/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/buildenv/", "title": "buildenv", "text": ""}, {"location": "available_software/detail/buildenv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which buildenv installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using buildenv, load one of these modules using a module load command like:

          module load buildenv/default-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty buildenv/default-intel-2019b - x x - x x buildenv/default-foss-2019b - x x - x x"}, {"location": "available_software/detail/buildingspy/", "title": "buildingspy", "text": ""}, {"location": "available_software/detail/buildingspy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which buildingspy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using buildingspy, load one of these modules using a module load command like:

          module load buildingspy/4.0.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty buildingspy/4.0.0-foss-2022a x x x - x x"}, {"location": "available_software/detail/bwa-meth/", "title": "bwa-meth", "text": ""}, {"location": "available_software/detail/bwa-meth/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bwa-meth installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bwa-meth, load one of these modules using a module load command like:

          module load bwa-meth/0.2.6-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bwa-meth/0.2.6-GCC-11.3.0 x x x x x x bwa-meth/0.2.2-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/bwidget/", "title": "bwidget", "text": ""}, {"location": "available_software/detail/bwidget/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bwidget installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bwidget, load one of these modules using a module load command like:

          module load bwidget/1.9.15-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bwidget/1.9.15-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/bx-python/", "title": "bx-python", "text": ""}, {"location": "available_software/detail/bx-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bx-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bx-python, load one of these modules using a module load command like:

          module load bx-python/0.10.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bx-python/0.10.0-foss-2023a x x x x x x bx-python/0.9.0-foss-2022a x x x x x x bx-python/0.8.13-foss-2021b x x x - x x bx-python/0.8.9-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/bzip2/", "title": "bzip2", "text": ""}, {"location": "available_software/detail/bzip2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which bzip2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using bzip2, load one of these modules using a module load command like:

          module load bzip2/1.0.8-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty bzip2/1.0.8-GCCcore-13.2.0 x x x x x x bzip2/1.0.8-GCCcore-12.3.0 x x x x x x bzip2/1.0.8-GCCcore-12.2.0 x x x x x x bzip2/1.0.8-GCCcore-11.3.0 x x x x x x bzip2/1.0.8-GCCcore-11.2.0 x x x x x x bzip2/1.0.8-GCCcore-10.3.0 x x x x x x bzip2/1.0.8-GCCcore-10.2.0 x x x x x x bzip2/1.0.8-GCCcore-9.3.0 x x x x x x bzip2/1.0.8-GCCcore-8.3.0 x x x x x x bzip2/1.0.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/c-ares/", "title": "c-ares", "text": ""}, {"location": "available_software/detail/c-ares/#available-modules", "title": "Available modules", "text": "

          The overview below shows which c-ares installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using c-ares, load one of these modules using a module load command like:

          module load c-ares/1.18.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty c-ares/1.18.1-GCCcore-11.2.0 x x x x x x c-ares/1.17.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/cURL/", "title": "cURL", "text": ""}, {"location": "available_software/detail/cURL/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cURL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cURL, load one of these modules using a module load command like:

          module load cURL/8.3.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cURL/8.3.0-GCCcore-13.2.0 x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x cURL/7.83.0-GCCcore-11.3.0 x x x x x x cURL/7.78.0-GCCcore-11.2.0 x x x x x x cURL/7.76.0-GCCcore-10.3.0 x x x x x x cURL/7.72.0-GCCcore-10.2.0 x x x x x x cURL/7.69.1-GCCcore-9.3.0 x x x x x x cURL/7.66.0-GCCcore-8.3.0 x x x x x x cURL/7.63.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/cairo/", "title": "cairo", "text": ""}, {"location": "available_software/detail/cairo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cairo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cairo, load one of these modules using a module load command like:

          module load cairo/1.17.8-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cairo/1.17.8-GCCcore-12.3.0 x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x cairo/1.17.4-GCCcore-11.3.0 x x x x x x cairo/1.16.0-GCCcore-11.2.0 x x x x x x cairo/1.16.0-GCCcore-10.3.0 x x x x x x cairo/1.16.0-GCCcore-10.2.0 x x x x x x cairo/1.16.0-GCCcore-9.3.0 x x x x x x cairo/1.16.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/canu/", "title": "canu", "text": ""}, {"location": "available_software/detail/canu/#available-modules", "title": "Available modules", "text": "

          The overview below shows which canu installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using canu, load one of these modules using a module load command like:

          module load canu/2.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty canu/2.2-GCCcore-11.2.0 x x x - x x canu/2.2-GCCcore-10.3.0 - x x - x x canu/2.1.1-GCCcore-10.2.0 - x x - x x canu/1.9-GCCcore-8.3.0-Java-11 - - x - x -"}, {"location": "available_software/detail/carputils/", "title": "carputils", "text": ""}, {"location": "available_software/detail/carputils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which carputils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using carputils, load one of these modules using a module load command like:

          module load carputils/20210513-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty carputils/20210513-foss-2020b - x x x x x carputils/20200915-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ccache/", "title": "ccache", "text": ""}, {"location": "available_software/detail/ccache/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ccache installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ccache, load one of these modules using a module load command like:

          module load ccache/4.6.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ccache/4.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/cctbx-base/", "title": "cctbx-base", "text": ""}, {"location": "available_software/detail/cctbx-base/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cctbx-base installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cctbx-base, load one of these modules using a module load command like:

          module load cctbx-base/2023.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cctbx-base/2023.5-foss-2022a - - x - x - cctbx-base/2020.8-fosscuda-2020b x - - - x - cctbx-base/2020.8-foss-2020b x x x x x x"}, {"location": "available_software/detail/cdbfasta/", "title": "cdbfasta", "text": ""}, {"location": "available_software/detail/cdbfasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cdbfasta installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cdbfasta, load one of these modules using a module load command like:

          module load cdbfasta/0.99-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cdbfasta/0.99-iccifort-2019.5.281 - x x - x - cdbfasta/0.99-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/cdo-bindings/", "title": "cdo-bindings", "text": ""}, {"location": "available_software/detail/cdo-bindings/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cdo-bindings installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cdo-bindings, load one of these modules using a module load command like:

          module load cdo-bindings/1.5.7-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cdo-bindings/1.5.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/cdsapi/", "title": "cdsapi", "text": ""}, {"location": "available_software/detail/cdsapi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cdsapi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cdsapi, load one of these modules using a module load command like:

          module load cdsapi/0.5.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cdsapi/0.5.1-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/cell2location/", "title": "cell2location", "text": ""}, {"location": "available_software/detail/cell2location/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cell2location installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cell2location, load one of these modules using a module load command like:

          module load cell2location/0.05-alpha-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cell2location/0.05-alpha-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/cffi/", "title": "cffi", "text": ""}, {"location": "available_software/detail/cffi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cffi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cffi, load one of these modules using a module load command like:

          module load cffi/1.15.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cffi/1.15.1-GCCcore-13.2.0 x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x cffi/1.15.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/chemprop/", "title": "chemprop", "text": ""}, {"location": "available_software/detail/chemprop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which chemprop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using chemprop, load one of these modules using a module load command like:

          module load chemprop/1.5.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty chemprop/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - chemprop/1.5.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/chewBBACA/", "title": "chewBBACA", "text": ""}, {"location": "available_software/detail/chewBBACA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which chewBBACA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using chewBBACA, load one of these modules using a module load command like:

          module load chewBBACA/2.5.5-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty chewBBACA/2.5.5-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/cicero/", "title": "cicero", "text": ""}, {"location": "available_software/detail/cicero/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cicero installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cicero, load one of these modules using a module load command like:

          module load cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3 x x x x x x cicero/1.3.4.11-foss-2020b-R-4.0.3-Monocle3 - x x x x x"}, {"location": "available_software/detail/cimfomfa/", "title": "cimfomfa", "text": ""}, {"location": "available_software/detail/cimfomfa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cimfomfa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cimfomfa, load one of these modules using a module load command like:

          module load cimfomfa/22.273-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cimfomfa/22.273-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/code-cli/", "title": "code-cli", "text": ""}, {"location": "available_software/detail/code-cli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which code-cli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using code-cli, load one of these modules using a module load command like:

          module load code-cli/1.85.1-x64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty code-cli/1.85.1-x64 x x x x x x"}, {"location": "available_software/detail/code-server/", "title": "code-server", "text": ""}, {"location": "available_software/detail/code-server/#available-modules", "title": "Available modules", "text": "

          The overview below shows which code-server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using code-server, load one of these modules using a module load command like:

          module load code-server/4.9.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty code-server/4.9.1 x x x x x x"}, {"location": "available_software/detail/colossalai/", "title": "colossalai", "text": ""}, {"location": "available_software/detail/colossalai/#available-modules", "title": "Available modules", "text": "

          The overview below shows which colossalai installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using colossalai, load one of these modules using a module load command like:

          module load colossalai/0.1.8-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty colossalai/0.1.8-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/conan/", "title": "conan", "text": ""}, {"location": "available_software/detail/conan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which conan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using conan, load one of these modules using a module load command like:

          module load conan/1.60.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty conan/1.60.2-GCCcore-12.3.0 x x x x x x conan/1.58.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/configurable-http-proxy/", "title": "configurable-http-proxy", "text": ""}, {"location": "available_software/detail/configurable-http-proxy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which configurable-http-proxy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using configurable-http-proxy, load one of these modules using a module load command like:

          module load configurable-http-proxy/4.5.5-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty configurable-http-proxy/4.5.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/cooler/", "title": "cooler", "text": ""}, {"location": "available_software/detail/cooler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cooler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cooler, load one of these modules using a module load command like:

          module load cooler/0.9.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cooler/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/coverage/", "title": "coverage", "text": ""}, {"location": "available_software/detail/coverage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which coverage installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using coverage, load one of these modules using a module load command like:

          module load coverage/7.2.7-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty coverage/7.2.7-GCCcore-11.3.0 x x x x x x coverage/5.5-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/cppy/", "title": "cppy", "text": ""}, {"location": "available_software/detail/cppy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cppy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cppy, load one of these modules using a module load command like:

          module load cppy/1.2.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cppy/1.2.1-GCCcore-12.3.0 x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x cppy/1.2.1-GCCcore-11.3.0 x x x x x x cppy/1.1.0-GCCcore-11.2.0 x x x x x x cppy/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/cpu_features/", "title": "cpu_features", "text": ""}, {"location": "available_software/detail/cpu_features/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cpu_features installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cpu_features, load one of these modules using a module load command like:

          module load cpu_features/0.6.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cpu_features/0.6.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/cryoDRGN/", "title": "cryoDRGN", "text": ""}, {"location": "available_software/detail/cryoDRGN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cryoDRGN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cryoDRGN, load one of these modules using a module load command like:

          module load cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1 x - - - x - cryoDRGN/0.3.5-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/cryptography/", "title": "cryptography", "text": ""}, {"location": "available_software/detail/cryptography/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cryptography installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cryptography, load one of these modules using a module load command like:

          module load cryptography/41.0.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cryptography/41.0.5-GCCcore-13.2.0 x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/cuDNN/", "title": "cuDNN", "text": ""}, {"location": "available_software/detail/cuDNN/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cuDNN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cuDNN, load one of these modules using a module load command like:

          module load cuDNN/8.9.2.26-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cuDNN/8.9.2.26-CUDA-12.1.1 x - x - x - cuDNN/8.4.1.50-CUDA-11.7.0 x - x - x - cuDNN/8.2.2.26-CUDA-11.4.1 x - - - x - cuDNN/8.2.1.32-CUDA-11.3.1 x x x - x x cuDNN/8.0.4.30-CUDA-11.1.1 x - - - x x"}, {"location": "available_software/detail/cuTENSOR/", "title": "cuTENSOR", "text": ""}, {"location": "available_software/detail/cuTENSOR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cuTENSOR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cuTENSOR, load one of these modules using a module load command like:

          module load cuTENSOR/1.2.2.5-CUDA-11.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cuTENSOR/1.2.2.5-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/cutadapt/", "title": "cutadapt", "text": ""}, {"location": "available_software/detail/cutadapt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which cutadapt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cutadapt, load one of these modules using a module load command like:

          module load cutadapt/4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cutadapt/4.2-GCCcore-11.3.0 x x x x x x cutadapt/3.5-GCCcore-11.2.0 x x x - x x cutadapt/3.4-GCCcore-10.2.0 - x x x x x cutadapt/2.10-GCCcore-9.3.0-Python-3.8.2 - x x - x x cutadapt/2.7-GCCcore-8.3.0-Python-3.7.4 - x x - x x cutadapt/1.18-GCCcore-8.3.0-Python-2.7.16 - x x - x x cutadapt/1.18-GCCcore-8.3.0 - x x - x x cutadapt/1.18-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/cuteSV/", "title": "cuteSV", "text": ""}, {"location": "available_software/detail/cuteSV/#available-modules", "title": "Available modules", "text": "

The overview below shows which cuteSV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cuteSV, load one of these modules using a module load command like:

          module load cuteSV/2.0.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cuteSV/2.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/cython-blis/", "title": "cython-blis", "text": ""}, {"location": "available_software/detail/cython-blis/#available-modules", "title": "Available modules", "text": "

The overview below shows which cython-blis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using cython-blis, load one of these modules using a module load command like:

          module load cython-blis/0.9.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty cython-blis/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dask/", "title": "dask", "text": ""}, {"location": "available_software/detail/dask/#available-modules", "title": "Available modules", "text": "

The overview below shows which dask installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dask, load one of these modules using a module load command like:

          module load dask/2023.12.1-foss-2023a\n
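To confirm that the loaded dask module is the one Python actually imports, a minimal sketch (assuming the foss-based build brings its matching Python along, as EasyBuild modules do) is:

# load a dask build together with its matching Python
module load dask/2023.12.1-foss-2023a
# print the dask version that Python imports from the module
python -c "import dask; print(dask.__version__)"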

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dask/2023.12.1-foss-2023a x x x x x x dask/2022.10.0-foss-2022a x x x x x x dask/2022.1.0-foss-2021b x x x x x x dask/2021.9.1-foss-2021a x x x - x x dask/2021.2.0-intel-2020b - x x - x x dask/2021.2.0-fosscuda-2020b x - - - x - dask/2021.2.0-foss-2020b - x x x x x dask/2.18.1-intel-2020a-Python-3.8.2 - x x - x x dask/2.18.1-foss-2020a-Python-3.8.2 - x x - x x dask/2.8.0-intel-2019b-Python-3.7.4 - x x - x x dask/2.8.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dbus-glib/", "title": "dbus-glib", "text": ""}, {"location": "available_software/detail/dbus-glib/#available-modules", "title": "Available modules", "text": "

The overview below shows which dbus-glib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dbus-glib, load one of these modules using a module load command like:

          module load dbus-glib/0.112-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dbus-glib/0.112-GCCcore-11.2.0 x x x x x x dbus-glib/0.112-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/dclone/", "title": "dclone", "text": ""}, {"location": "available_software/detail/dclone/#available-modules", "title": "Available modules", "text": "

The overview below shows which dclone installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dclone, load one of these modules using a module load command like:

          module load dclone/2.3-0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dclone/2.3-0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/deal.II/", "title": "deal.II", "text": ""}, {"location": "available_software/detail/deal.II/#available-modules", "title": "Available modules", "text": "

The overview below shows which deal.II installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using deal.II, load one of these modules using a module load command like:

          module load deal.II/9.3.3-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty deal.II/9.3.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/decona/", "title": "decona", "text": ""}, {"location": "available_software/detail/decona/#available-modules", "title": "Available modules", "text": "

The overview below shows which decona installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using decona, load one of these modules using a module load command like:

          module load decona/0.1.2-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty decona/0.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepTools/", "title": "deepTools", "text": ""}, {"location": "available_software/detail/deepTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which deepTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using deepTools, load one of these modules using a module load command like:

          module load deepTools/3.5.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty deepTools/3.5.1-foss-2021b x x x - x x deepTools/3.3.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepdiff/", "title": "deepdiff", "text": ""}, {"location": "available_software/detail/deepdiff/#available-modules", "title": "Available modules", "text": "

The overview below shows which deepdiff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using deepdiff, load one of these modules using a module load command like:

          module load deepdiff/6.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty deepdiff/6.7.1-GCCcore-12.3.0 x x x x x x deepdiff/6.7.1-GCCcore-12.2.0 x x x x x x deepdiff/5.8.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/detectron2/", "title": "detectron2", "text": ""}, {"location": "available_software/detail/detectron2/#available-modules", "title": "Available modules", "text": "

The overview below shows which detectron2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using detectron2, load one of these modules using a module load command like:

          module load detectron2/0.6-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty detectron2/0.6-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/devbio-napari/", "title": "devbio-napari", "text": ""}, {"location": "available_software/detail/devbio-napari/#available-modules", "title": "Available modules", "text": "

The overview below shows which devbio-napari installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using devbio-napari, load one of these modules using a module load command like:

          module load devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0 x - - - x - devbio-napari/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dicom2nifti/", "title": "dicom2nifti", "text": ""}, {"location": "available_software/detail/dicom2nifti/#available-modules", "title": "Available modules", "text": "

The overview below shows which dicom2nifti installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dicom2nifti, load one of these modules using a module load command like:

          module load dicom2nifti/2.3.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dicom2nifti/2.3.0-fosscuda-2020b x - - - x - dicom2nifti/2.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/dijitso/", "title": "dijitso", "text": ""}, {"location": "available_software/detail/dijitso/#available-modules", "title": "Available modules", "text": "

The overview below shows which dijitso installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dijitso, load one of these modules using a module load command like:

          module load dijitso/2019.1.0-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dijitso/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dill/", "title": "dill", "text": ""}, {"location": "available_software/detail/dill/#available-modules", "title": "Available modules", "text": "

The overview below shows which dill installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dill, load one of these modules using a module load command like:

          module load dill/0.3.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dill/0.3.7-GCCcore-12.3.0 x x x x x x dill/0.3.7-GCCcore-12.2.0 x x x x x x dill/0.3.6-GCCcore-11.3.0 x x x x x x dill/0.3.4-GCCcore-11.2.0 x x x x x x dill/0.3.4-GCCcore-10.3.0 x x x - x x dill/0.3.3-GCCcore-10.2.0 - x x x x x dill/0.3.3-GCCcore-9.3.0 - x x - - x"}, {"location": "available_software/detail/dlib/", "title": "dlib", "text": ""}, {"location": "available_software/detail/dlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which dlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dlib, load one of these modules using a module load command like:

          module load dlib/19.22-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dlib/19.22-foss-2021a-CUDA-11.3.1 - - - - x - dlib/19.22-foss-2021a - x x - x x"}, {"location": "available_software/detail/dm-haiku/", "title": "dm-haiku", "text": ""}, {"location": "available_software/detail/dm-haiku/#available-modules", "title": "Available modules", "text": "

The overview below shows which dm-haiku installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dm-haiku, load one of these modules using a module load command like:

          module load dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/dm-tree/", "title": "dm-tree", "text": ""}, {"location": "available_software/detail/dm-tree/#available-modules", "title": "Available modules", "text": "

The overview below shows which dm-tree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dm-tree, load one of these modules using a module load command like:

          module load dm-tree/0.1.8-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dm-tree/0.1.8-GCCcore-11.3.0 x x x x x x dm-tree/0.1.6-GCCcore-10.3.0 x x x x x x dm-tree/0.1.5-GCCcore-10.2.0 x x x x x x dm-tree/0.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/dorado/", "title": "dorado", "text": ""}, {"location": "available_software/detail/dorado/#available-modules", "title": "Available modules", "text": "

The overview below shows which dorado installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dorado, load one of these modules using a module load command like:

          module load dorado/0.5.1-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dorado/0.5.1-foss-2022a-CUDA-11.7.0 x - x - x - dorado/0.3.1-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.3.0-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.1.1-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/double-conversion/", "title": "double-conversion", "text": ""}, {"location": "available_software/detail/double-conversion/#available-modules", "title": "Available modules", "text": "

The overview below shows which double-conversion installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using double-conversion, load one of these modules using a module load command like:

          module load double-conversion/3.3.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x double-conversion/3.2.0-GCCcore-11.3.0 x x x x x x double-conversion/3.1.5-GCCcore-11.2.0 x x x x x x double-conversion/3.1.5-GCCcore-10.3.0 x x x x x x double-conversion/3.1.5-GCCcore-10.2.0 x x x x x x double-conversion/3.1.5-GCCcore-9.3.0 - x x - x x double-conversion/3.1.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/drmaa-python/", "title": "drmaa-python", "text": ""}, {"location": "available_software/detail/drmaa-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which drmaa-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using drmaa-python, load one of these modules using a module load command like:

          module load drmaa-python/0.7.9-GCCcore-12.2.0-slurm\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty drmaa-python/0.7.9-GCCcore-12.2.0-slurm x x x x x x"}, {"location": "available_software/detail/dtcwt/", "title": "dtcwt", "text": ""}, {"location": "available_software/detail/dtcwt/#available-modules", "title": "Available modules", "text": "

The overview below shows which dtcwt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dtcwt, load one of these modules using a module load command like:

          module load dtcwt/0.12.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dtcwt/0.12.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/duplex-tools/", "title": "duplex-tools", "text": ""}, {"location": "available_software/detail/duplex-tools/#available-modules", "title": "Available modules", "text": "

The overview below shows which duplex-tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using duplex-tools, load one of these modules using a module load command like:

          module load duplex-tools/0.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty duplex-tools/0.3.3-foss-2022a x x x x x x duplex-tools/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dynesty/", "title": "dynesty", "text": ""}, {"location": "available_software/detail/dynesty/#available-modules", "title": "Available modules", "text": "

The overview below shows which dynesty installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using dynesty, load one of these modules using a module load command like:

          module load dynesty/2.1.3-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty dynesty/2.1.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/eSpeak-NG/", "title": "eSpeak-NG", "text": ""}, {"location": "available_software/detail/eSpeak-NG/#available-modules", "title": "Available modules", "text": "

The overview below shows which eSpeak-NG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using eSpeak-NG, load one of these modules using a module load command like:

          module load eSpeak-NG/1.50-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty eSpeak-NG/1.50-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ebGSEA/", "title": "ebGSEA", "text": ""}, {"location": "available_software/detail/ebGSEA/#available-modules", "title": "Available modules", "text": "

The overview below shows which ebGSEA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ebGSEA, load one of these modules using a module load command like:

          module load ebGSEA/0.1.0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ebGSEA/0.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ecCodes/", "title": "ecCodes", "text": ""}, {"location": "available_software/detail/ecCodes/#available-modules", "title": "Available modules", "text": "

The overview below shows which ecCodes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ecCodes, load one of these modules using a module load command like:

          module load ecCodes/2.24.2-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ecCodes/2.24.2-gompi-2021b x x x x x x ecCodes/2.22.1-gompi-2021a x x x - x x ecCodes/2.15.0-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/edlib/", "title": "edlib", "text": ""}, {"location": "available_software/detail/edlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which edlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using edlib, load one of these modules using a module load command like:

          module load edlib/1.3.9-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty edlib/1.3.9-GCC-11.3.0 x x x x x x edlib/1.3.9-GCC-11.2.0 x x x - x x edlib/1.3.9-GCC-10.3.0 x x x - x x edlib/1.3.9-GCC-10.2.0 - x x x x x edlib/1.3.8.post2-iccifort-2020.1.217-Python-3.8.2 - x x - x - edlib/1.3.8.post1-iccifort-2019.5.281-Python-3.7.4 - x x - x - edlib/1.3.8.post1-GCC-9.3.0-Python-3.8.2 - x x - x x edlib/1.3.8.post1-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/eggnog-mapper/", "title": "eggnog-mapper", "text": ""}, {"location": "available_software/detail/eggnog-mapper/#available-modules", "title": "Available modules", "text": "

The overview below shows which eggnog-mapper installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using eggnog-mapper, load one of these modules using a module load command like:

          module load eggnog-mapper/2.1.10-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty eggnog-mapper/2.1.10-foss-2020b x x x x x x eggnog-mapper/2.1.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/einops/", "title": "einops", "text": ""}, {"location": "available_software/detail/einops/#available-modules", "title": "Available modules", "text": "

The overview below shows which einops installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using einops, load one of these modules using a module load command like:

          module load einops/0.4.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty einops/0.4.1-GCCcore-11.3.0 x x x x x x einops/0.4.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/elfutils/", "title": "elfutils", "text": ""}, {"location": "available_software/detail/elfutils/#available-modules", "title": "Available modules", "text": "

The overview below shows which elfutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using elfutils, load one of these modules using a module load command like:

          module load elfutils/0.187-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty elfutils/0.187-GCCcore-11.3.0 x x x x x x elfutils/0.185-GCCcore-11.2.0 x x x x x x elfutils/0.185-GCCcore-10.3.0 x x x x x x elfutils/0.185-GCCcore-8.3.0 x - - - x - elfutils/0.183-GCCcore-10.2.0 - - x x x -"}, {"location": "available_software/detail/elprep/", "title": "elprep", "text": ""}, {"location": "available_software/detail/elprep/#available-modules", "title": "Available modules", "text": "

The overview below shows which elprep installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using elprep, load one of these modules using a module load command like:

          module load elprep/5.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty elprep/5.1.1 - x x - x -"}, {"location": "available_software/detail/enchant-2/", "title": "enchant-2", "text": ""}, {"location": "available_software/detail/enchant-2/#available-modules", "title": "Available modules", "text": "

The overview below shows which enchant-2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using enchant-2, load one of these modules using a module load command like:

          module load enchant-2/2.3.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty enchant-2/2.3.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/epiScanpy/", "title": "epiScanpy", "text": ""}, {"location": "available_software/detail/epiScanpy/#available-modules", "title": "Available modules", "text": "

The overview below shows which epiScanpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using epiScanpy, load one of these modules using a module load command like:

          module load epiScanpy/0.4.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty epiScanpy/0.4.0-foss-2022a x x x x x x epiScanpy/0.3.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/exiv2/", "title": "exiv2", "text": ""}, {"location": "available_software/detail/exiv2/#available-modules", "title": "Available modules", "text": "

The overview below shows which exiv2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using exiv2, load one of these modules using a module load command like:

          module load exiv2/0.27.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty exiv2/0.27.5-GCCcore-11.2.0 x x x x x x exiv2/0.27.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/expat/", "title": "expat", "text": ""}, {"location": "available_software/detail/expat/#available-modules", "title": "Available modules", "text": "

The overview below shows which expat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using expat, load one of these modules using a module load command like:

          module load expat/2.5.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty expat/2.5.0-GCCcore-13.2.0 x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x expat/2.4.8-GCCcore-11.3.0 x x x x x x expat/2.4.1-GCCcore-11.2.0 x x x x x x expat/2.2.9-GCCcore-10.3.0 x x x x x x expat/2.2.9-GCCcore-10.2.0 x x x x x x expat/2.2.9-GCCcore-9.3.0 x x x x x x expat/2.2.7-GCCcore-8.3.0 x x x x x x expat/2.2.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/expecttest/", "title": "expecttest", "text": ""}, {"location": "available_software/detail/expecttest/#available-modules", "title": "Available modules", "text": "

The overview below shows which expecttest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using expecttest, load one of these modules using a module load command like:

          module load expecttest/0.1.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty expecttest/0.1.5-GCCcore-12.3.0 x x x x x x expecttest/0.1.3-GCCcore-12.2.0 x x x x x x expecttest/0.1.3-GCCcore-11.3.0 x x x x x x expecttest/0.1.3-GCCcore-11.2.0 x x x x x x expecttest/0.1.3-GCCcore-10.3.0 x x x x x x expecttest/0.1.3-GCCcore-10.2.0 x - - - - -"}, {"location": "available_software/detail/fasta-reader/", "title": "fasta-reader", "text": ""}, {"location": "available_software/detail/fasta-reader/#available-modules", "title": "Available modules", "text": "

The overview below shows which fasta-reader installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fasta-reader, load one of these modules using a module load command like:

          module load fasta-reader/3.0.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fasta-reader/3.0.2-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/fastahack/", "title": "fastahack", "text": ""}, {"location": "available_software/detail/fastahack/#available-modules", "title": "Available modules", "text": "

The overview below shows which fastahack installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fastahack, load one of these modules using a module load command like:

          module load fastahack/1.0.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fastahack/1.0.0-GCCcore-11.3.0 x x x x x x fastahack/1.0.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/fastai/", "title": "fastai", "text": ""}, {"location": "available_software/detail/fastai/#available-modules", "title": "Available modules", "text": "

The overview below shows which fastai installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fastai, load one of these modules using a module load command like:

          module load fastai/2.7.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fastai/2.7.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/fastp/", "title": "fastp", "text": ""}, {"location": "available_software/detail/fastp/#available-modules", "title": "Available modules", "text": "

The overview below shows which fastp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fastp, load one of these modules using a module load command like:

          module load fastp/0.23.2-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fastp/0.23.2-GCC-11.2.0 x x x - x x fastp/0.20.1-iccifort-2020.1.217 - x x - x - fastp/0.20.0-iccifort-2019.5.281 - x - - - - fastp/0.20.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/fermi-lite/", "title": "fermi-lite", "text": ""}, {"location": "available_software/detail/fermi-lite/#available-modules", "title": "Available modules", "text": "

The overview below shows which fermi-lite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fermi-lite, load one of these modules using a module load command like:

          module load fermi-lite/20190320-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fermi-lite/20190320-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/festival/", "title": "festival", "text": ""}, {"location": "available_software/detail/festival/#available-modules", "title": "Available modules", "text": "

The overview below shows which festival installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using festival, load one of these modules using a module load command like:

          module load festival/2.5.0-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty festival/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/fetchMG/", "title": "fetchMG", "text": ""}, {"location": "available_software/detail/fetchMG/#available-modules", "title": "Available modules", "text": "

The overview below shows which fetchMG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fetchMG, load one of these modules using a module load command like:

          module load fetchMG/1.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fetchMG/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ffnvcodec/", "title": "ffnvcodec", "text": ""}, {"location": "available_software/detail/ffnvcodec/#available-modules", "title": "Available modules", "text": "

The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using ffnvcodec, load one of these modules using a module load command like:

          module load ffnvcodec/12.0.16.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ffnvcodec/12.0.16.0 x x x x x x ffnvcodec/11.1.5.2 x x x x x x"}, {"location": "available_software/detail/file/", "title": "file", "text": ""}, {"location": "available_software/detail/file/#available-modules", "title": "Available modules", "text": "

The overview below shows which file installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using file, load one of these modules using a module load command like:

          module load file/5.43-GCCcore-11.3.0\n
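Once loaded, the file command is available directly on the command line; a small usage sketch (the path below is just an example):

# load the file module
module load file/5.43-GCCcore-11.3.0
# identify the type of an arbitrary file, e.g. a system binary
file /bin/ls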

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty file/5.43-GCCcore-11.3.0 x x x x x x file/5.41-GCCcore-11.2.0 x x x x x x file/5.39-GCCcore-10.2.0 - x x x x x file/5.38-GCCcore-9.3.0 - x x - x x file/5.38-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/filevercmp/", "title": "filevercmp", "text": ""}, {"location": "available_software/detail/filevercmp/#available-modules", "title": "Available modules", "text": "

The overview below shows which filevercmp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using filevercmp, load one of these modules using a module load command like:

          module load filevercmp/20191210-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty filevercmp/20191210-GCCcore-11.3.0 x x x x x x filevercmp/20191210-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/finder/", "title": "finder", "text": ""}, {"location": "available_software/detail/finder/#available-modules", "title": "Available modules", "text": "

The overview below shows which finder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using finder, load one of these modules using a module load command like:

          module load finder/1.1.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty finder/1.1.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/flair-NLP/", "title": "flair-NLP", "text": ""}, {"location": "available_software/detail/flair-NLP/#available-modules", "title": "Available modules", "text": "

The overview below shows which flair-NLP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using flair-NLP, load one of these modules using a module load command like:

          module load flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1 x - - - x - flair-NLP/0.11.3-foss-2021a x x x - x x"}, {"location": "available_software/detail/flatbuffers-python/", "title": "flatbuffers-python", "text": ""}, {"location": "available_software/detail/flatbuffers-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using flatbuffers-python, load one of these modules using a module load command like:

          module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers-python/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.3.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-10.3.0 x x x x x x flatbuffers-python/1.12-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/flatbuffers/", "title": "flatbuffers", "text": ""}, {"location": "available_software/detail/flatbuffers/#available-modules", "title": "Available modules", "text": "

The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using flatbuffers, load one of these modules using a module load command like:

          module load flatbuffers/23.5.26-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers/2.0.7-GCCcore-11.3.0 x x x x x x flatbuffers/2.0.0-GCCcore-11.2.0 x x x x x x flatbuffers/2.0.0-GCCcore-10.3.0 x x x x x x flatbuffers/1.12.0-GCCcore-10.2.0 x x x x x x flatbuffers/1.12.0-GCCcore-9.3.0 - x x - x x flatbuffers/1.12.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flex/", "title": "flex", "text": ""}, {"location": "available_software/detail/flex/#available-modules", "title": "Available modules", "text": "

The overview below shows which flex installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using flex, load one of these modules using a module load command like:

          module load flex/2.6.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flex/2.6.4-GCCcore-13.2.0 x x x x x x flex/2.6.4-GCCcore-12.3.0 x x x x x x flex/2.6.4-GCCcore-12.2.0 x x x x x x flex/2.6.4-GCCcore-11.3.0 x x x x x x flex/2.6.4-GCCcore-11.2.0 x x x x x x flex/2.6.4-GCCcore-10.3.0 x x x x x x flex/2.6.4-GCCcore-10.2.0 x x x x x x flex/2.6.4-GCCcore-9.3.0 x x x x x x flex/2.6.4-GCCcore-8.3.0 x x x x x x flex/2.6.4-GCCcore-8.2.0 - x - - - - flex/2.6.4 x x x x x x flex/2.6.3 x x x x x x flex/2.5.39-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flit/", "title": "flit", "text": ""}, {"location": "available_software/detail/flit/#available-modules", "title": "Available modules", "text": "

The overview below shows which flit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using flit, load one of these modules using a module load command like:

          module load flit/3.9.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flit/3.9.0-GCCcore-13.2.0 x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/flowFDA/", "title": "flowFDA", "text": ""}, {"location": "available_software/detail/flowFDA/#available-modules", "title": "Available modules", "text": "

The overview below shows which flowFDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using flowFDA, load one of these modules using a module load command like:

          module load flowFDA/0.99-20220602-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty flowFDA/0.99-20220602-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/fmt/", "title": "fmt", "text": ""}, {"location": "available_software/detail/fmt/#available-modules", "title": "Available modules", "text": "

The overview below shows which fmt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fmt, load one of these modules using a module load command like:

          module load fmt/10.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fmt/10.1.0-GCCcore-12.3.0 x x x x x x fmt/8.1.1-GCCcore-11.2.0 x x x - x x fmt/7.1.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/fontconfig/", "title": "fontconfig", "text": ""}, {"location": "available_software/detail/fontconfig/#available-modules", "title": "Available modules", "text": "

The overview below shows which fontconfig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fontconfig, load one of these modules using a module load command like:

          module load fontconfig/2.14.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x fontconfig/2.14.0-GCCcore-11.3.0 x x x x x x fontconfig/2.13.94-GCCcore-11.2.0 x x x x x x fontconfig/2.13.93-GCCcore-10.3.0 x x x x x x fontconfig/2.13.92-GCCcore-10.2.0 x x x x x x fontconfig/2.13.92-GCCcore-9.3.0 x x x x x x fontconfig/2.13.1-GCCcore-8.3.0 x x x - x x fontconfig/2.13.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/foss/", "title": "foss", "text": ""}, {"location": "available_software/detail/foss/#available-modules", "title": "Available modules", "text": "

The overview below shows which foss installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using foss, load one of these modules using a module load command like:

          module load foss/2023b\n
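The foss toolchain bundles the GCC compilers, OpenMPI and the usual math libraries, so after loading it you can compile and run MPI code. A minimal sketch, where hello_mpi.c is a hypothetical source file of your own:

# load the toolchain: compilers, MPI and math libraries become available
module load foss/2023a
# compile a hypothetical MPI program with the MPI compiler wrapper
mpicc -O2 -o hello_mpi hello_mpi.c
# run it on 4 processes (on a compute node, inside a job)
mpirun -np 4 ./hello_mpi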

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty foss/2023b x x x x x x foss/2023a x x x x x x foss/2022b x x x x x x foss/2022a x x x x x x foss/2021b x x x x x x foss/2021a x x x x x x foss/2020b x x x x x x foss/2020a - x x - x x foss/2019b x x x - x x"}, {"location": "available_software/detail/fosscuda/", "title": "fosscuda", "text": ""}, {"location": "available_software/detail/fosscuda/#available-modules", "title": "Available modules", "text": "

The overview below shows which fosscuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fosscuda, load one of these modules using a module load command like:

          module load fosscuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fosscuda/2020b x - - - x -"}, {"location": "available_software/detail/freebayes/", "title": "freebayes", "text": ""}, {"location": "available_software/detail/freebayes/#available-modules", "title": "Available modules", "text": "

The overview below shows which freebayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using freebayes, load one of these modules using a module load command like:

          module load freebayes/1.3.5-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freebayes/1.3.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/freeglut/", "title": "freeglut", "text": ""}, {"location": "available_software/detail/freeglut/#available-modules", "title": "Available modules", "text": "

The overview below shows which freeglut installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using freeglut, load one of these modules using a module load command like:

          module load freeglut/3.2.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freeglut/3.2.2-GCCcore-11.3.0 x x x x x x freeglut/3.2.1-GCCcore-11.2.0 x x x x x x freeglut/3.2.1-GCCcore-10.3.0 - x x - x x freeglut/3.2.1-GCCcore-10.2.0 - x x x x x freeglut/3.2.1-GCCcore-9.3.0 - x x - x x freeglut/3.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/freetype-py/", "title": "freetype-py", "text": ""}, {"location": "available_software/detail/freetype-py/#available-modules", "title": "Available modules", "text": "

The overview below shows which freetype-py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using freetype-py, load one of these modules using a module load command like:

          module load freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/freetype/", "title": "freetype", "text": ""}, {"location": "available_software/detail/freetype/#available-modules", "title": "Available modules", "text": "

The overview below shows which freetype installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using freetype, load one of these modules using a module load command like:

          module load freetype/2.13.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty freetype/2.13.2-GCCcore-13.2.0 x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x freetype/2.12.1-GCCcore-11.3.0 x x x x x x freetype/2.11.0-GCCcore-11.2.0 x x x x x x freetype/2.10.4-GCCcore-10.3.0 x x x x x x freetype/2.10.3-GCCcore-10.2.0 x x x x x x freetype/2.10.1-GCCcore-9.3.0 x x x x x x freetype/2.10.1-GCCcore-8.3.0 x x x - x x freetype/2.9.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/fsom/", "title": "fsom", "text": ""}, {"location": "available_software/detail/fsom/#available-modules", "title": "Available modules", "text": "

The overview below shows which fsom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using fsom, load one of these modules using a module load command like:

          module load fsom/20151117-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty fsom/20151117-GCCcore-11.3.0 x x x x x x fsom/20141119-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/funannotate/", "title": "funannotate", "text": ""}, {"location": "available_software/detail/funannotate/#available-modules", "title": "Available modules", "text": "

The overview below shows which funannotate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using funannotate, load one of these modules using a module load command like:

          module load funannotate/1.8.13-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty funannotate/1.8.13-foss-2021b x x x x x x"}, {"location": "available_software/detail/g2clib/", "title": "g2clib", "text": ""}, {"location": "available_software/detail/g2clib/#available-modules", "title": "Available modules", "text": "

The overview below shows which g2clib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using g2clib, load one of these modules using a module load command like:

          module load g2clib/1.6.0-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty g2clib/1.6.0-GCCcore-9.3.0 - x x - x x g2clib/1.6.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2lib/", "title": "g2lib", "text": ""}, {"location": "available_software/detail/g2lib/#available-modules", "title": "Available modules", "text": "

The overview below shows which g2lib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using g2lib, load one of these modules using a module load command like:

          module load g2lib/3.1.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty g2lib/3.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2log/", "title": "g2log", "text": ""}, {"location": "available_software/detail/g2log/#available-modules", "title": "Available modules", "text": "

The overview below shows which g2log installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using g2log, load one of these modules using a module load command like:

          module load g2log/1.0-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty g2log/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/garnett/", "title": "garnett", "text": ""}, {"location": "available_software/detail/garnett/#available-modules", "title": "Available modules", "text": "

The overview below shows which garnett installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using garnett, load one of these modules using a module load command like:

          module load garnett/0.1.20-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty garnett/0.1.20-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/gawk/", "title": "gawk", "text": ""}, {"location": "available_software/detail/gawk/#available-modules", "title": "Available modules", "text": "

The overview below shows which gawk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gawk, load one of these modules using a module load command like:

          module load gawk/5.1.0-GCC-10.2.0\n
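After loading, gawk is available on the command line; a tiny sketch to verify the interpreter works:

# load the gawk module
module load gawk/5.1.0-GCC-10.2.0
# run a trivial AWK program
gawk 'BEGIN { print "gawk is working" }'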

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gawk/5.1.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/gbasis/", "title": "gbasis", "text": ""}, {"location": "available_software/detail/gbasis/#available-modules", "title": "Available modules", "text": "

The overview below shows which gbasis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gbasis, load one of these modules using a module load command like:

          module load gbasis/20210904-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gbasis/20210904-intel-2022a x x x x x x"}, {"location": "available_software/detail/gc/", "title": "gc", "text": ""}, {"location": "available_software/detail/gc/#available-modules", "title": "Available modules", "text": "

The overview below shows which gc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gc, load one of these modules using a module load command like:

          module load gc/8.2.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gc/8.2.0-GCCcore-11.2.0 x x x x x x gc/8.0.4-GCCcore-10.3.0 - x x - x x gc/7.6.12-GCCcore-9.3.0 - x x - x x gc/7.6.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gcccuda/", "title": "gcccuda", "text": ""}, {"location": "available_software/detail/gcccuda/#available-modules", "title": "Available modules", "text": "

The overview below shows which gcccuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gcccuda, load one of these modules using a module load command like:

          module load gcccuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gcccuda/2020b x x x x x x gcccuda/2019b x - - - x -"}, {"location": "available_software/detail/gcloud/", "title": "gcloud", "text": ""}, {"location": "available_software/detail/gcloud/#available-modules", "title": "Available modules", "text": "

The overview below shows which gcloud installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gcloud, load one of these modules using a module load command like:

          module load gcloud/382.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gcloud/382.0.0 - x x - x x"}, {"location": "available_software/detail/gcsfs/", "title": "gcsfs", "text": ""}, {"location": "available_software/detail/gcsfs/#available-modules", "title": "Available modules", "text": "

The overview below shows which gcsfs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gcsfs, load one of these modules using a module load command like:

          module load gcsfs/2023.12.2.post1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gcsfs/2023.12.2.post1-foss-2023a x x x x x x"}, {"location": "available_software/detail/gdbm/", "title": "gdbm", "text": ""}, {"location": "available_software/detail/gdbm/#available-modules", "title": "Available modules", "text": "

The overview below shows which gdbm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using gdbm, load one of these modules using a module load command like:

          module load gdbm/1.18.1-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gdbm/1.18.1-foss-2020a - x x - x x"}, {"location": "available_software/detail/gdc-client/", "title": "gdc-client", "text": ""}, {"location": "available_software/detail/gdc-client/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gdc-client installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gdc-client, load one of these modules using a module load command like:

          module load gdc-client/1.6.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gdc-client/1.6.0-GCCcore-10.2.0 x x x x - x"}, {"location": "available_software/detail/gengetopt/", "title": "gengetopt", "text": ""}, {"location": "available_software/detail/gengetopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gengetopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gengetopt, load one of these modules using a module load command like:

          module load gengetopt/2.23-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gengetopt/2.23-GCCcore-10.2.0 - x x x x x gengetopt/2.23-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/genomepy/", "title": "genomepy", "text": ""}, {"location": "available_software/detail/genomepy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which genomepy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using genomepy, load one of these modules using a module load command like:

          module load genomepy/0.15.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty genomepy/0.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/genozip/", "title": "genozip", "text": ""}, {"location": "available_software/detail/genozip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which genozip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using genozip, load one of these modules using a module load command like:

          module load genozip/13.0.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty genozip/13.0.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/gensim/", "title": "gensim", "text": ""}, {"location": "available_software/detail/gensim/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gensim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gensim, load one of these modules using a module load command like:

          module load gensim/4.2.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gensim/4.2.0-foss-2021a x x x - x x gensim/3.8.3-intel-2020b - x x - x x gensim/3.8.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/geopandas/", "title": "geopandas", "text": ""}, {"location": "available_software/detail/geopandas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which geopandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using geopandas, load one of these modules using a module load command like:

          module load geopandas/0.12.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty geopandas/0.12.2-foss-2022b x x x x x x geopandas/0.8.1-intel-2019b-Python-3.7.4 - - x - x x geopandas/0.8.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/gettext/", "title": "gettext", "text": ""}, {"location": "available_software/detail/gettext/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gettext installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gettext, load one of these modules using a module load command like:

          module load gettext/0.22-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gettext/0.22-GCCcore-13.2.0 x x x x x x gettext/0.22 x x x x x x gettext/0.21.1-GCCcore-12.3.0 x x x x x x gettext/0.21.1-GCCcore-12.2.0 x x x x x x gettext/0.21.1 x x x x x x gettext/0.21-GCCcore-11.3.0 x x x x x x gettext/0.21-GCCcore-11.2.0 x x x x x x gettext/0.21-GCCcore-10.3.0 x x x x x x gettext/0.21-GCCcore-10.2.0 x x x x x x gettext/0.21 x x x x x x gettext/0.20.1-GCCcore-9.3.0 x x x x x x gettext/0.20.1-GCCcore-8.3.0 x x x - x x gettext/0.20.1 x x x x x x gettext/0.19.8.1-GCCcore-8.2.0 - x - - - - gettext/0.19.8.1 x x x x x x"}, {"location": "available_software/detail/gexiv2/", "title": "gexiv2", "text": ""}, {"location": "available_software/detail/gexiv2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gexiv2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gexiv2, load one of these modules using a module load command like:

          module load gexiv2/0.12.2-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gexiv2/0.12.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/gfbf/", "title": "gfbf", "text": ""}, {"location": "available_software/detail/gfbf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gfbf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gfbf, load one of these modules using a module load command like:

          module load gfbf/2023b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gfbf/2023b x x x x x x gfbf/2023a x x x x x x gfbf/2022b x x x x x x"}, {"location": "available_software/detail/gffread/", "title": "gffread", "text": ""}, {"location": "available_software/detail/gffread/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gffread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gffread, load one of these modules using a module load command like:

          module load gffread/0.12.7-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gffread/0.12.7-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/gffutils/", "title": "gffutils", "text": ""}, {"location": "available_software/detail/gffutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gffutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gffutils, load one of these modules using a module load command like:

          module load gffutils/0.12-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gffutils/0.12-foss-2022b x x x x x x"}, {"location": "available_software/detail/gflags/", "title": "gflags", "text": ""}, {"location": "available_software/detail/gflags/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gflags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gflags, load one of these modules using a module load command like:

          module load gflags/2.2.2-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gflags/2.2.2-GCCcore-12.2.0 x x x x x x gflags/2.2.2-GCCcore-11.3.0 x x x x x x gflags/2.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/giflib/", "title": "giflib", "text": ""}, {"location": "available_software/detail/giflib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which giflib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using giflib, load one of these modules using a module load command like:

          module load giflib/5.2.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty giflib/5.2.1-GCCcore-12.3.0 x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x giflib/5.2.1-GCCcore-11.3.0 x x x x x x giflib/5.2.1-GCCcore-11.2.0 x x x x x x giflib/5.2.1-GCCcore-10.3.0 x x x x x x giflib/5.2.1-GCCcore-10.2.0 x x x x x x giflib/5.2.1-GCCcore-9.3.0 - x x - x x giflib/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/git-lfs/", "title": "git-lfs", "text": ""}, {"location": "available_software/detail/git-lfs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which git-lfs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using git-lfs, load one of these modules using a module load command like:

          module load git-lfs/3.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty git-lfs/3.2.0 x x x - x x"}, {"location": "available_software/detail/git/", "title": "git", "text": ""}, {"location": "available_software/detail/git/#available-modules", "title": "Available modules", "text": "

          The overview below shows which git installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using git, load one of these modules using a module load command like:

          module load git/2.42.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty git/2.42.0-GCCcore-13.2.0 x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x git/2.36.0-GCCcore-11.3.0-nodocs x x x x x x git/2.33.1-GCCcore-11.2.0-nodocs x x x x x x git/2.32.0-GCCcore-10.3.0-nodocs x x x x x x git/2.28.0-GCCcore-10.2.0-nodocs x x x x x x git/2.23.0-GCCcore-9.3.0-nodocs x x x x x x git/2.23.0-GCCcore-8.3.0-nodocs - x x - x x git/2.23.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glew/", "title": "glew", "text": ""}, {"location": "available_software/detail/glew/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glew installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glew, load one of these modules using a module load command like:

          module load glew/2.2.0-GCCcore-12.3.0-osmesa\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glew/2.2.0-GCCcore-12.3.0-osmesa x x x x x x glew/2.2.0-GCCcore-12.2.0-egl x x x x x x glew/2.2.0-GCCcore-11.2.0-osmesa x x x x x x glew/2.2.0-GCCcore-11.2.0-egl x x x x x x glew/2.1.0-GCCcore-10.2.0 x x x x x x glew/2.1.0-GCCcore-9.3.0 - x x - x x glew/2.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glib-networking/", "title": "glib-networking", "text": ""}, {"location": "available_software/detail/glib-networking/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glib-networking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glib-networking, load one of these modules using a module load command like:

          module load glib-networking/2.72.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glib-networking/2.72.1-GCCcore-11.2.0 x x x x x x glib-networking/2.68.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/glibc/", "title": "glibc", "text": ""}, {"location": "available_software/detail/glibc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glibc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glibc, load one of these modules using a module load command like:

          module load glibc/2.30-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glibc/2.30-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/glog/", "title": "glog", "text": ""}, {"location": "available_software/detail/glog/#available-modules", "title": "Available modules", "text": "

          The overview below shows which glog installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using glog, load one of these modules using a module load command like:

          module load glog/0.6.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty glog/0.6.0-GCCcore-12.2.0 x x x x x x glog/0.6.0-GCCcore-11.3.0 x x x x x x glog/0.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmpy2/", "title": "gmpy2", "text": ""}, {"location": "available_software/detail/gmpy2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gmpy2, load one of these modules using a module load command like:

          module load gmpy2/2.1.5-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gmpy2/2.1.5-GCC-12.3.0 x x x x x x gmpy2/2.1.5-GCC-12.2.0 x x x x x x gmpy2/2.1.2-intel-compilers-2022.1.0 x x x x x x gmpy2/2.1.2-intel-compilers-2021.4.0 x x x x x x gmpy2/2.1.2-GCC-11.3.0 x x x x x x gmpy2/2.1.2-GCC-11.2.0 x x x - x x gmpy2/2.1.0b5-GCC-10.2.0 - x x x x x gmpy2/2.1.0b5-GCC-9.3.0 - x x - x x gmpy2/2.1.0b4-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmsh/", "title": "gmsh", "text": ""}, {"location": "available_software/detail/gmsh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gmsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gmsh, load one of these modules using a module load command like:

          module load gmsh/4.5.6-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gmsh/4.5.6-intel-2019b-Python-2.7.16 - x x - x x gmsh/4.5.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/gnuplot/", "title": "gnuplot", "text": ""}, {"location": "available_software/detail/gnuplot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gnuplot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gnuplot, load one of these modules using a module load command like:

          module load gnuplot/5.4.8-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x gnuplot/5.4.4-GCCcore-11.3.0 x x x x x x gnuplot/5.4.2-GCCcore-11.2.0 x x x x x x gnuplot/5.4.2-GCCcore-10.3.0 x x x x x x gnuplot/5.4.1-GCCcore-10.2.0 x x x x x x gnuplot/5.2.8-GCCcore-9.3.0 - x x - x x gnuplot/5.2.8-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/goalign/", "title": "goalign", "text": ""}, {"location": "available_software/detail/goalign/#available-modules", "title": "Available modules", "text": "

          The overview below shows which goalign installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using goalign, load one of these modules using a module load command like:

          module load goalign/0.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty goalign/0.3.2 - - x - x -"}, {"location": "available_software/detail/gobff/", "title": "gobff", "text": ""}, {"location": "available_software/detail/gobff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gobff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gobff, load one of these modules using a module load command like:

          module load gobff/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gobff/2020b - x - - - -"}, {"location": "available_software/detail/gomkl/", "title": "gomkl", "text": ""}, {"location": "available_software/detail/gomkl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gomkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gomkl, load one of these modules using a module load command like:

          module load gomkl/2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gomkl/2023a x x x x x x gomkl/2021a x x x x x x gomkl/2020a - x x x x x"}, {"location": "available_software/detail/gompi/", "title": "gompi", "text": ""}, {"location": "available_software/detail/gompi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gompi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gompi, load one of these modules using a module load command like:

          module load gompi/2023b\n
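
          gompi is the GCC + OpenMPI toolchain, so loading it also puts the MPI compiler wrappers and the mpirun launcher on your PATH. As a minimal sketch (hello.c is only an illustrative file name, not something shipped with this documentation), an MPI program could then be compiled and run like this:

          mpicc hello.c -o hello\n
          mpirun -np 4 ./hello\n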

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gompi/2023b x x x x x x gompi/2023a x x x x x x gompi/2022b x x x x x x gompi/2022a x x x x x x gompi/2021b x x x x x x gompi/2021a x x x x x x gompi/2020b x x x x x x gompi/2020a - x x x x x gompi/2019b x x x x x x"}, {"location": "available_software/detail/gompic/", "title": "gompic", "text": ""}, {"location": "available_software/detail/gompic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gompic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gompic, load one of these modules using a module load command like:

          module load gompic/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gompic/2020b x x - - x x"}, {"location": "available_software/detail/googletest/", "title": "googletest", "text": ""}, {"location": "available_software/detail/googletest/#available-modules", "title": "Available modules", "text": "

          The overview below shows which googletest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using googletest, load one of these modules using a module load command like:

          module load googletest/1.13.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty googletest/1.13.0-GCCcore-12.3.0 x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x googletest/1.11.0-GCCcore-11.3.0 x x x x x x googletest/1.11.0-GCCcore-11.2.0 x x x - x x googletest/1.10.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gotree/", "title": "gotree", "text": ""}, {"location": "available_software/detail/gotree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gotree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gotree, load one of these modules using a module load command like:

          module load gotree/0.4.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gotree/0.4.0 - - x - x -"}, {"location": "available_software/detail/gperf/", "title": "gperf", "text": ""}, {"location": "available_software/detail/gperf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gperf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gperf, load one of these modules using a module load command like:

          module load gperf/3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gperf/3.1-GCCcore-12.3.0 x x x x x x gperf/3.1-GCCcore-12.2.0 x x x x x x gperf/3.1-GCCcore-11.3.0 x x x x x x gperf/3.1-GCCcore-11.2.0 x x x x x x gperf/3.1-GCCcore-10.3.0 x x x x x x gperf/3.1-GCCcore-10.2.0 x x x x x x gperf/3.1-GCCcore-9.3.0 x x x x x x gperf/3.1-GCCcore-8.3.0 x x x - x x gperf/3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/gperftools/", "title": "gperftools", "text": ""}, {"location": "available_software/detail/gperftools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gperftools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gperftools, load one of these modules using a module load command like:

          module load gperftools/2.14-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gperftools/2.14-GCCcore-12.2.0 x x x x x x gperftools/2.10-GCCcore-11.3.0 x x x x x x gperftools/2.9.1-GCCcore-10.3.0 x x x - x x gperftools/2.7.90-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gpustat/", "title": "gpustat", "text": ""}, {"location": "available_software/detail/gpustat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gpustat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gpustat, load one of these modules using a module load command like:

          module load gpustat/0.6.0-gcccuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gpustat/0.6.0-gcccuda-2020b - - - - x -"}, {"location": "available_software/detail/graphite2/", "title": "graphite2", "text": ""}, {"location": "available_software/detail/graphite2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which graphite2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using graphite2, load one of these modules using a module load command like:

          module load graphite2/1.3.14-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty graphite2/1.3.14-GCCcore-12.3.0 x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x graphite2/1.3.14-GCCcore-11.3.0 x x x x x x graphite2/1.3.14-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/graphviz-python/", "title": "graphviz-python", "text": ""}, {"location": "available_software/detail/graphviz-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which graphviz-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using graphviz-python, load one of these modules using a module load command like:

          module load graphviz-python/0.20.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty graphviz-python/0.20.1-GCCcore-12.3.0 x x x x x x graphviz-python/0.20.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/grid/", "title": "grid", "text": ""}, {"location": "available_software/detail/grid/#available-modules", "title": "Available modules", "text": "

          The overview below shows which grid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using grid, load one of these modules using a module load command like:

          module load grid/20220610-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty grid/20220610-intel-2022a x x x x x x"}, {"location": "available_software/detail/groff/", "title": "groff", "text": ""}, {"location": "available_software/detail/groff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which groff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using groff, load one of these modules using a module load command like:

          module load groff/1.22.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty groff/1.22.4-GCCcore-12.3.0 x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x groff/1.22.4-GCCcore-11.3.0 x x x x x x groff/1.22.4-GCCcore-11.2.0 x x x x x x groff/1.22.4-GCCcore-10.3.0 x x x x x x groff/1.22.4-GCCcore-10.2.0 x x x x x x groff/1.22.4-GCCcore-9.3.0 x x x x x x groff/1.22.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/gzip/", "title": "gzip", "text": ""}, {"location": "available_software/detail/gzip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which gzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using gzip, load one of these modules using a module load command like:

          module load gzip/1.13-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty gzip/1.13-GCCcore-13.2.0 x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x gzip/1.12-GCCcore-11.3.0 x x x x x x gzip/1.10-GCCcore-11.2.0 x x x x x x gzip/1.10-GCCcore-10.3.0 x x x x x x gzip/1.10-GCCcore-10.2.0 x x x x x x gzip/1.10-GCCcore-9.3.0 - x x x x x gzip/1.10-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/h5netcdf/", "title": "h5netcdf", "text": ""}, {"location": "available_software/detail/h5netcdf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which h5netcdf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using h5netcdf, load one of these modules using a module load command like:

          module load h5netcdf/1.2.0-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty h5netcdf/1.2.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/h5py/", "title": "h5py", "text": ""}, {"location": "available_software/detail/h5py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which h5py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using h5py, load one of these modules using a module load command like:

          module load h5py/3.9.0-foss-2023a\n
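
          As a quick sanity check after loading one of these modules (a minimal sketch; it assumes you use the Python interpreter provided by the module's toolchain), you can verify that the package is importable:

          python -c 'import h5py; print(h5py.__version__)'\n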

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty h5py/3.9.0-foss-2023a x x x x x x h5py/3.8.0-foss-2022b x x x x x x h5py/3.7.0-intel-2022a x x x x x x h5py/3.7.0-foss-2022a x x x x x x h5py/3.6.0-intel-2021b x x x - x x h5py/3.6.0-foss-2021b x x x x x x h5py/3.2.1-gomkl-2021a x x x - x x h5py/3.2.1-foss-2021a x x x x x x h5py/3.1.0-intel-2020b - x x - x x h5py/3.1.0-fosscuda-2020b x - - - x - h5py/3.1.0-foss-2020b x x x x x x h5py/2.10.0-intel-2020a-Python-3.8.2 x x x x x x h5py/2.10.0-intel-2020a-Python-2.7.18 - x x - x x h5py/2.10.0-intel-2019b-Python-3.7.4 - x x - x x h5py/2.10.0-foss-2020a-Python-3.8.2 - x x - x x h5py/2.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/harmony/", "title": "harmony", "text": ""}, {"location": "available_software/detail/harmony/#available-modules", "title": "Available modules", "text": "

          The overview below shows which harmony installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using harmony, load one of these modules using a module load command like:

          module load harmony/1.0.0-20200224-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty harmony/1.0.0-20200224-foss-2020a-R-4.0.0 - x x - x x harmony/0.1.0-20210528-foss-2020b-R-4.0.3 - x x - x x"}, {"location": "available_software/detail/hatchling/", "title": "hatchling", "text": ""}, {"location": "available_software/detail/hatchling/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hatchling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hatchling, load one of these modules using a module load command like:

          module load hatchling/1.18.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hatchling/1.18.0-GCCcore-13.2.0 x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/help2man/", "title": "help2man", "text": ""}, {"location": "available_software/detail/help2man/#available-modules", "title": "Available modules", "text": "

          The overview below shows which help2man installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using help2man, load one of these modules using a module load command like:

          module load help2man/1.49.3-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty help2man/1.49.3-GCCcore-13.2.0 x x x x x x help2man/1.49.3-GCCcore-12.3.0 x x x x x x help2man/1.49.2-GCCcore-12.2.0 x x x x x x help2man/1.49.2-GCCcore-11.3.0 x x x x x x help2man/1.48.3-GCCcore-11.2.0 x x x x x x help2man/1.48.3-GCCcore-10.3.0 x x x x x x help2man/1.47.16-GCCcore-10.2.0 x x x x x x help2man/1.47.12-GCCcore-9.3.0 x x x x x x help2man/1.47.8-GCCcore-8.3.0 x x x x x x help2man/1.47.7-GCCcore-8.2.0 - x - - - - help2man/1.47.4 - x - - - -"}, {"location": "available_software/detail/hierfstat/", "title": "hierfstat", "text": ""}, {"location": "available_software/detail/hierfstat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hierfstat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hierfstat, load one of these modules using a module load command like:

          module load hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/hifiasm/", "title": "hifiasm", "text": ""}, {"location": "available_software/detail/hifiasm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hifiasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hifiasm, load one of these modules using a module load command like:

          module load hifiasm/0.19.7-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hifiasm/0.19.7-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/hiredis/", "title": "hiredis", "text": ""}, {"location": "available_software/detail/hiredis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hiredis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hiredis, load one of these modules using a module load command like:

          module load hiredis/1.0.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hiredis/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/histolab/", "title": "histolab", "text": ""}, {"location": "available_software/detail/histolab/#available-modules", "title": "Available modules", "text": "

          The overview below shows which histolab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using histolab, load one of these modules using a module load command like:

          module load histolab/0.4.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty histolab/0.4.1-foss-2021b x x x - x x histolab/0.4.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/hmmlearn/", "title": "hmmlearn", "text": ""}, {"location": "available_software/detail/hmmlearn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hmmlearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hmmlearn, load one of these modules using a module load command like:

          module load hmmlearn/0.3.0-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hmmlearn/0.3.0-gfbf-2023a x x x x x x hmmlearn/0.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/horton/", "title": "horton", "text": ""}, {"location": "available_software/detail/horton/#available-modules", "title": "Available modules", "text": "

          The overview below shows which horton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using horton, load one of these modules using a module load command like:

          module load horton/2.1.1-intel-2020a-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty horton/2.1.1-intel-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/how_are_we_stranded_here/", "title": "how_are_we_stranded_here", "text": ""}, {"location": "available_software/detail/how_are_we_stranded_here/#available-modules", "title": "Available modules", "text": "

          The overview below shows which how_are_we_stranded_here installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using how_are_we_stranded_here, load one of these modules using a module load command like:

          module load how_are_we_stranded_here/1.0.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty how_are_we_stranded_here/1.0.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/humann/", "title": "humann", "text": ""}, {"location": "available_software/detail/humann/#available-modules", "title": "Available modules", "text": "

          The overview below shows which humann installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using humann, load one of these modules using a module load command like:

          module load humann/3.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty humann/3.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/hunspell/", "title": "hunspell", "text": ""}, {"location": "available_software/detail/hunspell/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hunspell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hunspell, load one of these modules using a module load command like:

          module load hunspell/1.7.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hunspell/1.7.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/hwloc/", "title": "hwloc", "text": ""}, {"location": "available_software/detail/hwloc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hwloc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hwloc, load one of these modules using a module load command like:

          module load hwloc/2.9.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hwloc/2.9.2-GCCcore-13.2.0 x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x hwloc/2.7.1-GCCcore-11.3.0 x x x x x x hwloc/2.5.0-GCCcore-11.2.0 x x x x x x hwloc/2.4.1-GCCcore-10.3.0 x x x x x x hwloc/2.2.0-GCCcore-10.2.0 x x x x x x hwloc/2.2.0-GCCcore-9.3.0 x x x x x x hwloc/1.11.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/hyperopt/", "title": "hyperopt", "text": ""}, {"location": "available_software/detail/hyperopt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hyperopt, load one of these modules using a module load command like:

          module load hyperopt/0.2.5-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hyperopt/0.2.5-fosscuda-2020b - - - - x - hyperopt/0.2.4-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/hypothesis/", "title": "hypothesis", "text": ""}, {"location": "available_software/detail/hypothesis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which hypothesis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using hypothesis, load one of these modules using a module load command like:

          module load hypothesis/6.90.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x hypothesis/6.46.7-GCCcore-11.3.0 x x x x x x hypothesis/6.14.6-GCCcore-11.2.0 x x x x x x hypothesis/6.13.1-GCCcore-10.3.0 x x x x x x hypothesis/5.41.5-GCCcore-10.2.0 x x x x x x hypothesis/5.41.2-GCCcore-10.2.0 x x x x x x hypothesis/4.57.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x hypothesis/4.44.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/iccifort/", "title": "iccifort", "text": ""}, {"location": "available_software/detail/iccifort/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iccifort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using iccifort, load one of these modules using a module load command like:

          module load iccifort/2020.4.304\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iccifort/2020.4.304 x x x x x x iccifort/2020.1.217 x x x x x x iccifort/2019.5.281 - x x - x x"}, {"location": "available_software/detail/iccifortcuda/", "title": "iccifortcuda", "text": ""}, {"location": "available_software/detail/iccifortcuda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iccifortcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using iccifortcuda, load one of these modules using a module load command like:

          module load iccifortcuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iccifortcuda/2020b - - - - x - iccifortcuda/2020a - - - - x - iccifortcuda/2019b - - - - x -"}, {"location": "available_software/detail/ichorCNA/", "title": "ichorCNA", "text": ""}, {"location": "available_software/detail/ichorCNA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ichorCNA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using ichorCNA, load one of these modules using a module load command like:

          module load ichorCNA/0.3.2-20191219-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ichorCNA/0.3.2-20191219-foss-2020a - x x - x x"}, {"location": "available_software/detail/idemux/", "title": "idemux", "text": ""}, {"location": "available_software/detail/idemux/#available-modules", "title": "Available modules", "text": "

          The overview below shows which idemux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using idemux, load one of these modules using a module load command like:

          module load idemux/0.1.6-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty idemux/0.1.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/igraph/", "title": "igraph", "text": ""}, {"location": "available_software/detail/igraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using igraph, load one of these modules using a module load command like:

          module load igraph/0.10.10-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty igraph/0.10.10-foss-2023a x x x x x x igraph/0.10.3-foss-2022a x x x x x x igraph/0.9.5-foss-2021b x x x x x x igraph/0.9.4-foss-2021a x x x x x x igraph/0.9.1-fosscuda-2020b - - - - x - igraph/0.9.1-foss-2020b - x x x x x igraph/0.8.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/igvShiny/", "title": "igvShiny", "text": ""}, {"location": "available_software/detail/igvShiny/#available-modules", "title": "Available modules", "text": "

          The overview below shows which igvShiny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using igvShiny, load one of these modules using a module load command like:

          module load igvShiny/20240112-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty igvShiny/20240112-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/iibff/", "title": "iibff", "text": ""}, {"location": "available_software/detail/iibff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iibff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using iibff, load one of these modules using a module load command like:

          module load iibff/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iibff/2020b - x - - - -"}, {"location": "available_software/detail/iimpi/", "title": "iimpi", "text": ""}, {"location": "available_software/detail/iimpi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iimpi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using iimpi, load one of these modules using a module load command like:

          module load iimpi/2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iimpi/2023a x x x x x x iimpi/2022b x x x x x x iimpi/2022a x x x x x x iimpi/2021b x x x x x x iimpi/2021a - x x - x x iimpi/2020b x x x x x x iimpi/2020a x x x x x x iimpi/2019b - x x - x x"}, {"location": "available_software/detail/iimpic/", "title": "iimpic", "text": ""}, {"location": "available_software/detail/iimpic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iimpic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using iimpic, load one of these modules using a module load command like:

          module load iimpic/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iimpic/2020b - - - - x - iimpic/2020a - - - - x - iimpic/2019b - - - - x -"}, {"location": "available_software/detail/imagecodecs/", "title": "imagecodecs", "text": ""}, {"location": "available_software/detail/imagecodecs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imagecodecs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using imagecodecs, load one of these modules using a module load command like:

          module load imagecodecs/2022.9.26-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imagecodecs/2022.9.26-foss-2022a x x x x x x"}, {"location": "available_software/detail/imageio/", "title": "imageio", "text": ""}, {"location": "available_software/detail/imageio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imageio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using imageio, load one of these modules using a module load command like:

          module load imageio/2.22.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imageio/2.22.2-foss-2022a x x x x x x imageio/2.13.5-foss-2021b x x x x x x imageio/2.10.5-foss-2021a x x x - x x imageio/2.9.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/imbalanced-learn/", "title": "imbalanced-learn", "text": ""}, {"location": "available_software/detail/imbalanced-learn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imbalanced-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using imbalanced-learn, load one of these modules using a module load command like:

          module load imbalanced-learn/0.10.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imbalanced-learn/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/imgaug/", "title": "imgaug", "text": ""}, {"location": "available_software/detail/imgaug/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imgaug installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using imgaug, load one of these modules using a module load command like:

          module load imgaug/0.4.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imgaug/0.4.0-foss-2021b x x x - x x imgaug/0.4.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/imkl-FFTW/", "title": "imkl-FFTW", "text": ""}, {"location": "available_software/detail/imkl-FFTW/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imkl-FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using imkl-FFTW, load one of these modules using a module load command like:

          module load imkl-FFTW/2023.1.0-iimpi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imkl-FFTW/2023.1.0-iimpi-2023a x x x x x x imkl-FFTW/2022.2.1-iimpi-2022b x x x x x x imkl-FFTW/2022.1.0-iimpi-2022a x x x x x x imkl-FFTW/2021.4.0-iimpi-2021b x x x x x x"}, {"location": "available_software/detail/imkl/", "title": "imkl", "text": ""}, {"location": "available_software/detail/imkl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using imkl, load one of these modules using a module load command like:

          module load imkl/2023.1.0-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imkl/2023.1.0-gompi-2023a - - x - x x imkl/2023.1.0 x x x x x x imkl/2022.2.1 x x x x x x imkl/2022.1.0 x x x x x x imkl/2021.4.0 x x x x x x imkl/2021.2.0-iompi-2021a x x x x x x imkl/2021.2.0-iimpi-2021a - x x - x x imkl/2021.2.0-gompi-2021a x - x - x x imkl/2020.4.304-iompi-2020b x - x x x x imkl/2020.4.304-iimpic-2020b - - - - x - imkl/2020.4.304-iimpi-2020b - - x x x x imkl/2020.4.304-NVHPC-21.2 - - x - x - imkl/2020.1.217-iimpic-2020a - - - - x - imkl/2020.1.217-iimpi-2020a x - x - x x imkl/2020.1.217-gompi-2020a - - x - x x imkl/2020.0.166-iompi-2020a - x - - - - imkl/2020.0.166-iimpi-2020b x x - x - - imkl/2020.0.166-iimpi-2020a - x - - - - imkl/2020.0.166-gompi-2023a x x - x - - imkl/2020.0.166-gompi-2020a - x - - - - imkl/2019.5.281-iimpic-2019b - - - - x - imkl/2019.5.281-iimpi-2019b - x x - x x imkl/2018.4.274-iompi-2020b - x - x - - imkl/2018.4.274-iompi-2020a - x - - - - imkl/2018.4.274-iimpi-2020b - x - x - - imkl/2018.4.274-iimpi-2020a x x - x - - imkl/2018.4.274-iimpi-2019b - x - - - - imkl/2018.4.274-gompi-2021a - x - x - - imkl/2018.4.274-gompi-2020a - x - x - - imkl/2018.4.274-NVHPC-21.2 x - - - - -"}, {"location": "available_software/detail/impi/", "title": "impi", "text": ""}, {"location": "available_software/detail/impi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which impi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using impi, load one of these modules using a module load command like:

          module load impi/2021.9.0-intel-compilers-2023.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty impi/2021.9.0-intel-compilers-2023.1.0 x x x x x x impi/2021.7.1-intel-compilers-2022.2.1 x x x x x x impi/2021.6.0-intel-compilers-2022.1.0 x x x x x x impi/2021.4.0-intel-compilers-2021.4.0 x x x x x x impi/2021.2.0-intel-compilers-2021.2.0 - x x - x x impi/2019.9.304-iccifortcuda-2020b - - - - x - impi/2019.9.304-iccifort-2020.4.304 x x x x x x impi/2019.9.304-iccifort-2020.1.217 x x x x x x impi/2019.9.304-iccifort-2019.5.281 - x x - x x impi/2019.7.217-iccifortcuda-2020a - - - - x - impi/2019.7.217-iccifort-2020.1.217 - x x - x x impi/2019.7.217-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/imutils/", "title": "imutils", "text": ""}, {"location": "available_software/detail/imutils/#available-modules", "title": "Available modules", "text": "

          The overview below shows which imutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using imutils, load one of these modules using a module load command like:

          module load imutils/0.5.4-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty imutils/0.5.4-fosscuda-2020b x - - - x - imutils/0.5.4-foss-2022a-CUDA-11.7.0 x - x - x -"}, {"location": "available_software/detail/inferCNV/", "title": "inferCNV", "text": ""}, {"location": "available_software/detail/inferCNV/#available-modules", "title": "Available modules", "text": "

          The overview below shows which inferCNV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using inferCNV, load one of these modules using a module load command like:

          module load inferCNV/1.12.0-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty inferCNV/1.12.0-foss-2022a-R-4.2.1 x x x x x x inferCNV/1.12.0-foss-2021b-R-4.2.0 x x x - x x inferCNV/1.3.3-foss-2020b x x x x x x"}, {"location": "available_software/detail/infercnvpy/", "title": "infercnvpy", "text": ""}, {"location": "available_software/detail/infercnvpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which infercnvpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using infercnvpy, load one of these modules using a module load command like:

          module load infercnvpy/0.4.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty infercnvpy/0.4.2-foss-2022a x x x x x x infercnvpy/0.4.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/inflection/", "title": "inflection", "text": ""}, {"location": "available_software/detail/inflection/#available-modules", "title": "Available modules", "text": "

          The overview below shows which inflection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using inflection, load one of these modules using a module load command like:

          module load inflection/1.3.5-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty inflection/1.3.5-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/intel-compilers/", "title": "intel-compilers", "text": ""}, {"location": "available_software/detail/intel-compilers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intel-compilers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using intel-compilers, load one of these modules using a module load command like:

          module load intel-compilers/2023.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intel-compilers/2023.1.0 x x x x x x intel-compilers/2022.2.1 x x x x x x intel-compilers/2022.1.0 x x x x x x intel-compilers/2021.4.0 x x x x x x intel-compilers/2021.2.0 x x x x x x"}, {"location": "available_software/detail/intel/", "title": "intel", "text": ""}, {"location": "available_software/detail/intel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using intel, load one of these modules using a module load command like:

          module load intel/2023a\n
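
          A hedged sketch of compiling a C source file with the oneAPI C compiler (icx) bundled with the intel/2023a toolchain; hello.c is a placeholder file name:

          module load intel/2023a
          icx -O2 hello.c -o hello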

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intel/2023a x x x x x x intel/2022b x x x x x x intel/2022a x x x x x x intel/2021b x x x x x x intel/2021a - x x - x x intel/2020b - x x x x x intel/2020a x x x x x x intel/2019b - x x - x x"}, {"location": "available_software/detail/intelcuda/", "title": "intelcuda", "text": ""}, {"location": "available_software/detail/intelcuda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intelcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using intelcuda, load one of these modules using a module load command like:

          module load intelcuda/2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intelcuda/2020b - - - - x - intelcuda/2020a - - - - x - intelcuda/2019b - - - - x -"}, {"location": "available_software/detail/intervaltree-python/", "title": "intervaltree-python", "text": ""}, {"location": "available_software/detail/intervaltree-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intervaltree-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using intervaltree-python, load one of these modules using a module load command like:

          module load intervaltree-python/3.1.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intervaltree-python/3.1.0-GCCcore-11.3.0 x x x x x x intervaltree-python/3.1.0-GCCcore-11.2.0 x x x - x x intervaltree-python/3.1.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/intervaltree/", "title": "intervaltree", "text": ""}, {"location": "available_software/detail/intervaltree/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intervaltree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using intervaltree, load one of these modules using a module load command like:

          module load intervaltree/0.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intervaltree/0.1-GCCcore-11.3.0 x x x x x x intervaltree/0.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/intltool/", "title": "intltool", "text": ""}, {"location": "available_software/detail/intltool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which intltool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using intltool, load one of these modules using a module load command like:

          module load intltool/0.51.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty intltool/0.51.0-GCCcore-12.3.0 x x x x x x intltool/0.51.0-GCCcore-12.2.0 x x x x x x intltool/0.51.0-GCCcore-11.3.0 x x x x x x intltool/0.51.0-GCCcore-11.2.0 x x x x x x intltool/0.51.0-GCCcore-10.3.0 x x x x x x intltool/0.51.0-GCCcore-10.2.0 x x x x x x intltool/0.51.0-GCCcore-9.3.0 x x x x x x intltool/0.51.0-GCCcore-8.3.0 x x x - x x intltool/0.51.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/iodata/", "title": "iodata", "text": ""}, {"location": "available_software/detail/iodata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iodata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using iodata, load one of these modules using a module load command like:

          module load iodata/1.0.0a2-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iodata/1.0.0a2-intel-2022a x x x x x x"}, {"location": "available_software/detail/iomkl/", "title": "iomkl", "text": ""}, {"location": "available_software/detail/iomkl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iomkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using iomkl, load one of these modules using a module load command like:

          module load iomkl/2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iomkl/2021a x x x x x x iomkl/2020b x x x x x x iomkl/2020a - x - - - -"}, {"location": "available_software/detail/iompi/", "title": "iompi", "text": ""}, {"location": "available_software/detail/iompi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which iompi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using iompi, load one of these modules using a module load command like:

          module load iompi/2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty iompi/2021a x x x x x x iompi/2020b x x x x x x iompi/2020a - x - - - -"}, {"location": "available_software/detail/isoCirc/", "title": "isoCirc", "text": ""}, {"location": "available_software/detail/isoCirc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which isoCirc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using isoCirc, load one of these modules using a module load command like:

          module load isoCirc/1.0.4-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty isoCirc/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/jax/", "title": "jax", "text": ""}, {"location": "available_software/detail/jax/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jax installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jax, load one of these modules using a module load command like:

          module load jax/0.3.25-foss-2022a-CUDA-11.7.0\n
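
          A quick sanity check that a CUDA-enabled jax build actually sees the GPUs, which only makes sense inside a job on a cluster where the CUDA builds are installed (accelgor or joltik, per the table below):

          module load jax/0.3.25-foss-2022a-CUDA-11.7.0
          python -c 'import jax; print(jax.devices())'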

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jax/0.3.25-foss-2022a-CUDA-11.7.0 x - - - x - jax/0.3.25-foss-2022a x x x x x x jax/0.3.23-foss-2021b-CUDA-11.4.1 x - - - x - jax/0.3.9-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.3.9-foss-2021a x x x x x x jax/0.2.24-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.2.24-foss-2021a - x x - x x jax/0.2.19-fosscuda-2020b x - - - x - jax/0.2.19-foss-2020b x x x x x x"}, {"location": "available_software/detail/jbigkit/", "title": "jbigkit", "text": ""}, {"location": "available_software/detail/jbigkit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jbigkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jbigkit, load one of these modules using a module load command like:

          module load jbigkit/2.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jbigkit/2.1-GCCcore-13.2.0 x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x jbigkit/2.1-GCCcore-11.3.0 x x x x x x jbigkit/2.1-GCCcore-11.2.0 x x x x x x jbigkit/2.1-GCCcore-10.3.0 x x x x x x jbigkit/2.1-GCCcore-10.2.0 x - x x x x jbigkit/2.1-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/jemalloc/", "title": "jemalloc", "text": ""}, {"location": "available_software/detail/jemalloc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jemalloc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jemalloc, load one of these modules using a module load command like:

          module load jemalloc/5.3.0-GCCcore-11.3.0\n
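
          A hedged sketch of swapping in jemalloc as the allocator of an existing binary via LD_PRELOAD; $EBROOTJEMALLOC is the installation prefix that EasyBuild-generated modules normally export, and ./my_app is a placeholder:

          module load jemalloc/5.3.0-GCCcore-11.3.0
          LD_PRELOAD=$EBROOTJEMALLOC/lib/libjemalloc.so ./my_app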

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jemalloc/5.3.0-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.2.0 x x x x x x jemalloc/5.2.1-GCCcore-10.3.0 x x x - x x jemalloc/5.2.1-GCCcore-10.2.0 - x x x x x jemalloc/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/jobcli/", "title": "jobcli", "text": ""}, {"location": "available_software/detail/jobcli/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jobcli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jobcli, load one of these modules using a module load command like:

          module load jobcli/0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jobcli/0.0 - x - - - -"}, {"location": "available_software/detail/joypy/", "title": "joypy", "text": ""}, {"location": "available_software/detail/joypy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which joypy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using joypy, load one of these modules using a module load command like:

          module load joypy/0.2.4-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty joypy/0.2.4-intel-2020b - x x - x x joypy/0.2.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/json-c/", "title": "json-c", "text": ""}, {"location": "available_software/detail/json-c/#available-modules", "title": "Available modules", "text": "

          The overview below shows which json-c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using json-c, load one of these modules using a module load command like:

          module load json-c/0.16-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty json-c/0.16-GCCcore-12.3.0 x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x json-c/0.15-GCCcore-10.3.0 - x x - x x json-c/0.15-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/jupyter-contrib-nbextensions/", "title": "jupyter-contrib-nbextensions", "text": ""}, {"location": "available_software/detail/jupyter-contrib-nbextensions/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jupyter-contrib-nbextensions installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jupyter-contrib-nbextensions, load one of these modules using a module load command like:

          module load jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server-proxy/", "title": "jupyter-server-proxy", "text": ""}, {"location": "available_software/detail/jupyter-server-proxy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jupyter-server-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jupyter-server-proxy, load one of these modules using a module load command like:

          module load jupyter-server-proxy/3.2.2-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jupyter-server-proxy/3.2.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server/", "title": "jupyter-server", "text": ""}, {"location": "available_software/detail/jupyter-server/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jupyter-server, load one of these modules using a module load command like:

          module load jupyter-server/2.7.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x jupyter-server/2.7.0-GCCcore-12.2.0 x x x x x x jupyter-server/1.21.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jxrlib/", "title": "jxrlib", "text": ""}, {"location": "available_software/detail/jxrlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which jxrlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using jxrlib, load one of these modules using a module load command like:

          module load jxrlib/1.1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty jxrlib/1.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/kallisto/", "title": "kallisto", "text": ""}, {"location": "available_software/detail/kallisto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kallisto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kallisto, load one of these modules using a module load command like:

          module load kallisto/0.48.0-gompi-2022a\n
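
          A minimal sketch of a typical kallisto workflow (the FASTA and FASTQ file names are placeholders, not files referenced by this documentation):

          module load kallisto/0.48.0-gompi-2022a
          kallisto index -i transcripts.idx transcripts.fa
          kallisto quant -i transcripts.idx -o quant_out reads_1.fastq.gz reads_2.fastq.gz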

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kallisto/0.48.0-gompi-2022a x x x x x x kallisto/0.46.1-intel-2020a - x - - - - kallisto/0.46.1-iimpi-2020b - x x x x x kallisto/0.46.1-iimpi-2020a - x x - x x kallisto/0.46.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/kb-python/", "title": "kb-python", "text": ""}, {"location": "available_software/detail/kb-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kb-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kb-python, load one of these modules using a module load command like:

          module load kb-python/0.27.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kb-python/0.27.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/kim-api/", "title": "kim-api", "text": ""}, {"location": "available_software/detail/kim-api/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kim-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kim-api, load one of these modules using a module load command like:

          module load kim-api/2.3.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kim-api/2.3.0-GCCcore-11.2.0 x x x - x x kim-api/2.2.1-GCCcore-10.3.0 - x x - x x kim-api/2.1.3-intel-2020a - x x - x x kim-api/2.1.3-intel-2019b - x x - x x kim-api/2.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/kineto/", "title": "kineto", "text": ""}, {"location": "available_software/detail/kineto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kineto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kineto, load one of these modules using a module load command like:

          module load kineto/0.4.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kineto/0.4.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/kma/", "title": "kma", "text": ""}, {"location": "available_software/detail/kma/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kma, load one of these modules using a module load command like:

          module load kma/1.2.22-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kma/1.2.22-intel-2019b - x x - x x"}, {"location": "available_software/detail/kneaddata/", "title": "kneaddata", "text": ""}, {"location": "available_software/detail/kneaddata/#available-modules", "title": "Available modules", "text": "

          The overview below shows which kneaddata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using kneaddata, load one of these modules using a module load command like:

          module load kneaddata/0.12.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty kneaddata/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/krbalancing/", "title": "krbalancing", "text": ""}, {"location": "available_software/detail/krbalancing/#available-modules", "title": "Available modules", "text": "

          The overview below shows which krbalancing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using krbalancing, load one of these modules using a module load command like:

          module load krbalancing/0.5.0b0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty krbalancing/0.5.0b0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/lancet/", "title": "lancet", "text": ""}, {"location": "available_software/detail/lancet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lancet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using lancet, load one of these modules using a module load command like:

          module load lancet/1.1.0-iccifort-2019.5.281\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lancet/1.1.0-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/lavaan/", "title": "lavaan", "text": ""}, {"location": "available_software/detail/lavaan/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lavaan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using lavaan, load one of these modules using a module load command like:

          module load lavaan/0.6-9-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lavaan/0.6-9-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/leafcutter/", "title": "leafcutter", "text": ""}, {"location": "available_software/detail/leafcutter/#available-modules", "title": "Available modules", "text": "

          The overview below shows which leafcutter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using leafcutter, load one of these modules using a module load command like:

          module load leafcutter/0.2.9-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty leafcutter/0.2.9-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/legacy-job-wrappers/", "title": "legacy-job-wrappers", "text": ""}, {"location": "available_software/detail/legacy-job-wrappers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which legacy-job-wrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using legacy-job-wrappers, load one of these modules using a module load command like:

          module load legacy-job-wrappers/0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty legacy-job-wrappers/0.0 - x x - x -"}, {"location": "available_software/detail/leidenalg/", "title": "leidenalg", "text": ""}, {"location": "available_software/detail/leidenalg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which leidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using leidenalg, load one of these modules using a module load command like:

          module load leidenalg/0.10.2-foss-2023a\n
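
          A minimal sketch of community detection with leidenalg, assuming the python-igraph dependency is pulled in by this module (typical for these builds, but not stated on this page):

          module load leidenalg/0.10.2-foss-2023a
          python -c "import igraph as ig, leidenalg as la; g = ig.Graph.Famous('Zachary'); print(len(la.find_partition(g, la.ModularityVertexPartition)), 'communities')"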

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty leidenalg/0.10.2-foss-2023a x x x x x x leidenalg/0.9.1-foss-2022a x x x x x x leidenalg/0.8.8-foss-2021b x x x x x x leidenalg/0.8.7-foss-2021a x x x x x x leidenalg/0.8.3-fosscuda-2020b - - - - x - leidenalg/0.8.3-foss-2020b - x x x x x leidenalg/0.8.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/lftp/", "title": "lftp", "text": ""}, {"location": "available_software/detail/lftp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lftp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using lftp, load one of these modules using a module load command like:

          module load lftp/4.9.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lftp/4.9.2-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/libBigWig/", "title": "libBigWig", "text": ""}, {"location": "available_software/detail/libBigWig/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libBigWig, load one of these modules using a module load command like:

          module load libBigWig/0.4.4-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libBigWig/0.4.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libFLAME/", "title": "libFLAME", "text": ""}, {"location": "available_software/detail/libFLAME/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libFLAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libFLAME, load one of these modules using a module load command like:

          module load libFLAME/5.2.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libFLAME/5.2.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/libGLU/", "title": "libGLU", "text": ""}, {"location": "available_software/detail/libGLU/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libGLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libGLU, load one of these modules using a module load command like:

          module load libGLU/9.0.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libGLU/9.0.3-GCCcore-12.3.0 x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x libGLU/9.0.2-GCCcore-11.3.0 x x x x x x libGLU/9.0.2-GCCcore-11.2.0 x x x x x x libGLU/9.0.1-GCCcore-10.3.0 x x x x x x libGLU/9.0.1-GCCcore-10.2.0 x x x x x x libGLU/9.0.1-GCCcore-9.3.0 - x x - x x libGLU/9.0.1-GCCcore-8.3.0 x x x - x x libGLU/9.0.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libRmath/", "title": "libRmath", "text": ""}, {"location": "available_software/detail/libRmath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libRmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libRmath, load one of these modules using a module load command like:

          module load libRmath/4.1.0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libRmath/4.1.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libaec/", "title": "libaec", "text": ""}, {"location": "available_software/detail/libaec/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libaec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libaec, load one of these modules using a module load command like:

          module load libaec/1.0.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libaec/1.0.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libaio/", "title": "libaio", "text": ""}, {"location": "available_software/detail/libaio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libaio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libaio, load one of these modules using a module load command like:

          module load libaio/0.3.113-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libaio/0.3.113-GCCcore-12.3.0 x x x x x x libaio/0.3.112-GCCcore-11.3.0 x x x x x x libaio/0.3.112-GCCcore-11.2.0 x x x x x x libaio/0.3.112-GCCcore-10.3.0 x x x - x x libaio/0.3.112-GCCcore-10.2.0 - x x x x x libaio/0.3.111-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libarchive/", "title": "libarchive", "text": ""}, {"location": "available_software/detail/libarchive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libarchive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libarchive, load one of these modules using a module load command like:

          module load libarchive/3.7.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libarchive/3.7.2-GCCcore-13.2.0 x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x libarchive/3.6.1-GCCcore-11.3.0 x x x x x x libarchive/3.5.1-GCCcore-11.2.0 x x x x x x libarchive/3.5.1-GCCcore-10.3.0 x x x x x x libarchive/3.5.1-GCCcore-8.3.0 x - - - x - libarchive/3.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libavif/", "title": "libavif", "text": ""}, {"location": "available_software/detail/libavif/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libavif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libavif, load one of these modules using a module load command like:

          module load libavif/0.11.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libavif/0.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libcdms/", "title": "libcdms", "text": ""}, {"location": "available_software/detail/libcdms/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libcdms installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libcdms, load one of these modules using a module load command like:

          module load libcdms/3.1.2-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libcdms/3.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/libcerf/", "title": "libcerf", "text": ""}, {"location": "available_software/detail/libcerf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libcerf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libcerf, load one of these modules using a module load command like:

          module load libcerf/2.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libcerf/2.3-GCCcore-12.3.0 x x x x x x libcerf/2.1-GCCcore-11.3.0 x x x x x x libcerf/1.17-GCCcore-11.2.0 x x x x x x libcerf/1.17-GCCcore-10.3.0 x x x x x x libcerf/1.14-GCCcore-10.2.0 x x x x x x libcerf/1.13-GCCcore-9.3.0 - x x - x x libcerf/1.13-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libcint/", "title": "libcint", "text": ""}, {"location": "available_software/detail/libcint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libcint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libcint, load one of these modules using a module load command like:

          module load libcint/5.5.0-gfbf-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libcint/5.5.0-gfbf-2022b x x x x x x libcint/5.1.6-foss-2022a - x x x x x libcint/4.4.0-gomkl-2021a x x x - x x libcint/4.4.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/libdap/", "title": "libdap", "text": ""}, {"location": "available_software/detail/libdap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdap, load one of these modules using a module load command like:

          module load libdap/3.20.7-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdap/3.20.7-GCCcore-10.3.0 - x x - x x libdap/3.20.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libde265/", "title": "libde265", "text": ""}, {"location": "available_software/detail/libde265/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libde265 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libde265, load one of these modules using a module load command like:

          module load libde265/1.0.11-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libde265/1.0.11-GCC-11.3.0 x x x x x x libde265/1.0.8-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libdeflate/", "title": "libdeflate", "text": ""}, {"location": "available_software/detail/libdeflate/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdeflate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdeflate, load one of these modules using a module load command like:

          module load libdeflate/1.19-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdeflate/1.19-GCCcore-13.2.0 x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x libdeflate/1.10-GCCcore-11.3.0 x x x x x x libdeflate/1.8-GCCcore-11.2.0 x x x x x x libdeflate/1.7-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libdrm/", "title": "libdrm", "text": ""}, {"location": "available_software/detail/libdrm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdrm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdrm, load one of these modules using a module load command like:

          module load libdrm/2.4.115-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdrm/2.4.115-GCCcore-12.3.0 x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x libdrm/2.4.110-GCCcore-11.3.0 x x x x x x libdrm/2.4.107-GCCcore-11.2.0 x x x x x x libdrm/2.4.106-GCCcore-10.3.0 x x x x x x libdrm/2.4.102-GCCcore-10.2.0 x x x x x x libdrm/2.4.100-GCCcore-9.3.0 - x x - x x libdrm/2.4.99-GCCcore-8.3.0 x x x - x x libdrm/2.4.97-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libdrs/", "title": "libdrs", "text": ""}, {"location": "available_software/detail/libdrs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libdrs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libdrs, load one of these modules using a module load command like:

          module load libdrs/3.1.2-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libdrs/3.1.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/libepoxy/", "title": "libepoxy", "text": ""}, {"location": "available_software/detail/libepoxy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libepoxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libepoxy, load one of these modules using a module load command like:

          module load libepoxy/1.5.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x libepoxy/1.5.10-GCCcore-11.3.0 x x x x x x libepoxy/1.5.8-GCCcore-11.2.0 x x x x x x libepoxy/1.5.8-GCCcore-10.3.0 x x x - x x libepoxy/1.5.4-GCCcore-10.2.0 x x x x x x libepoxy/1.5.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libev/", "title": "libev", "text": ""}, {"location": "available_software/detail/libev/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libev installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libev, load one of these modules using a module load command like:

          module load libev/4.33-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libev/4.33-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libevent/", "title": "libevent", "text": ""}, {"location": "available_software/detail/libevent/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libevent installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libevent, load one of these modules using a module load command like:

          module load libevent/2.1.12-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libevent/2.1.12-GCCcore-13.2.0 x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x libevent/2.1.12-GCCcore-11.3.0 x x x x x x libevent/2.1.12-GCCcore-11.2.0 x x x x x x libevent/2.1.12-GCCcore-10.3.0 x x x x x x libevent/2.1.12-GCCcore-10.2.0 x x x x x x libevent/2.1.12 - x x - x x libevent/2.1.11-GCCcore-9.3.0 x x x x x x libevent/2.1.11-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libfabric/", "title": "libfabric", "text": ""}, {"location": "available_software/detail/libfabric/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libfabric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libfabric, load one of these modules using a module load command like:

          module load libfabric/1.19.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libfabric/1.19.0-GCCcore-13.2.0 x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x libfabric/1.15.1-GCCcore-11.3.0 x x x x x x libfabric/1.13.2-GCCcore-11.2.0 x x x x x x libfabric/1.12.1-GCCcore-10.3.0 x x x x x x libfabric/1.11.0-GCCcore-10.2.0 x x x x x x libfabric/1.11.0-GCCcore-9.3.0 - x x x x x"}, {"location": "available_software/detail/libffi/", "title": "libffi", "text": ""}, {"location": "available_software/detail/libffi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libffi, load one of these modules using a module load command like:

          module load libffi/3.4.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libffi/3.4.4-GCCcore-13.2.0 x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x libffi/3.4.2-GCCcore-11.3.0 x x x x x x libffi/3.4.2-GCCcore-11.2.0 x x x x x x libffi/3.3-GCCcore-10.3.0 x x x x x x libffi/3.3-GCCcore-10.2.0 x x x x x x libffi/3.3-GCCcore-9.3.0 x x x x x x libffi/3.2.1-GCCcore-8.3.0 x x x x x x libffi/3.2.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgcrypt/", "title": "libgcrypt", "text": ""}, {"location": "available_software/detail/libgcrypt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgcrypt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgcrypt, load one of these modules using a module load command like:

          module load libgcrypt/1.9.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgcrypt/1.9.3-GCCcore-11.2.0 x x x x x x libgcrypt/1.9.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgd/", "title": "libgd", "text": ""}, {"location": "available_software/detail/libgd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgd, load one of these modules using a module load command like:

          module load libgd/2.3.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgd/2.3.3-GCCcore-12.3.0 x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x libgd/2.3.3-GCCcore-11.3.0 x x x x x x libgd/2.3.3-GCCcore-11.2.0 x x x x x x libgd/2.3.1-GCCcore-10.3.0 x x x x x x libgd/2.3.0-GCCcore-10.2.0 x x x x x x libgd/2.3.0-GCCcore-9.3.0 - x x - x x libgd/2.2.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libgeotiff/", "title": "libgeotiff", "text": ""}, {"location": "available_software/detail/libgeotiff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgeotiff, load one of these modules using a module load command like:

          module load libgeotiff/1.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x libgeotiff/1.7.1-GCCcore-11.3.0 x x x x x x libgeotiff/1.7.0-GCCcore-11.2.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.3.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.2.0 - x x x x x libgeotiff/1.5.1-GCCcore-9.3.0 - x x - x x libgeotiff/1.5.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libgit2/", "title": "libgit2", "text": ""}, {"location": "available_software/detail/libgit2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgit2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgit2, load one of these modules using a module load command like:

          module load libgit2/1.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgit2/1.7.1-GCCcore-12.3.0 x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x libgit2/1.4.3-GCCcore-11.3.0 x x x x x x libgit2/1.1.1-GCCcore-11.2.0 x x x x x x libgit2/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/libglvnd/", "title": "libglvnd", "text": ""}, {"location": "available_software/detail/libglvnd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libglvnd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libglvnd, load one of these modules using a module load command like:

          module load libglvnd/1.6.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x libglvnd/1.4.0-GCCcore-11.3.0 x x x x x x libglvnd/1.3.3-GCCcore-11.2.0 x x x x x x libglvnd/1.3.3-GCCcore-10.3.0 x x x x x x libglvnd/1.3.2-GCCcore-10.2.0 x x x x x x libglvnd/1.2.0-GCCcore-9.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgpg-error/", "title": "libgpg-error", "text": ""}, {"location": "available_software/detail/libgpg-error/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgpg-error installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgpg-error, load one of these modules using a module load command like:

          module load libgpg-error/1.42-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgpg-error/1.42-GCCcore-11.2.0 x x x x x x libgpg-error/1.42-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgpuarray/", "title": "libgpuarray", "text": ""}, {"location": "available_software/detail/libgpuarray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libgpuarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libgpuarray, load one of these modules using a module load command like:

          module load libgpuarray/0.7.6-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libgpuarray/0.7.6-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/libharu/", "title": "libharu", "text": ""}, {"location": "available_software/detail/libharu/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libharu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libharu, load one of these modules using a module load command like:

          module load libharu/2.3.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libharu/2.3.0-foss-2021b x x x - x x libharu/2.3.0-GCCcore-10.3.0 - x x - x x libharu/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libheif/", "title": "libheif", "text": ""}, {"location": "available_software/detail/libheif/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libheif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libheif, load one of these modules using a module load command like:

          module load libheif/1.16.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libheif/1.16.2-GCC-11.3.0 x x x x x x libheif/1.12.0-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libiconv/", "title": "libiconv", "text": ""}, {"location": "available_software/detail/libiconv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libiconv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libiconv, load one of these modules using a module load command like:

          module load libiconv/1.17-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libiconv/1.17-GCCcore-13.2.0 x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x libiconv/1.17-GCCcore-11.3.0 x x x x x x libiconv/1.16-GCCcore-11.2.0 x x x x x x libiconv/1.16-GCCcore-10.3.0 x x x x x x libiconv/1.16-GCCcore-10.2.0 x x x x x x libiconv/1.16-GCCcore-9.3.0 x x x x x x libiconv/1.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libidn/", "title": "libidn", "text": ""}, {"location": "available_software/detail/libidn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libidn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libidn, load one of these modules using a module load command like:

          module load libidn/1.38-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libidn/1.38-GCCcore-11.2.0 x x x x x x libidn/1.36-GCCcore-10.3.0 - x x - x x libidn/1.35-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/libidn2/", "title": "libidn2", "text": ""}, {"location": "available_software/detail/libidn2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libidn2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libidn2, load one of these modules using a module load command like:

          module load libidn2/2.3.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libidn2/2.3.2-GCCcore-11.2.0 x x x x x x libidn2/2.3.0-GCCcore-10.3.0 - x x x x x libidn2/2.3.0-GCCcore-10.2.0 x x x x x x libidn2/2.3.0-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/libjpeg-turbo/", "title": "libjpeg-turbo", "text": ""}, {"location": "available_software/detail/libjpeg-turbo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using libjpeg-turbo, load one of these modules using a module load command like:

          module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x libjpeg-turbo/2.1.3-GCCcore-11.3.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-11.2.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-10.3.0 x x x x x x libjpeg-turbo/2.0.5-GCCcore-10.2.0 x x x x x x libjpeg-turbo/2.0.4-GCCcore-9.3.0 - x x - x x libjpeg-turbo/2.0.3-GCCcore-8.3.0 x x x - x x libjpeg-turbo/2.0.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libjxl/", "title": "libjxl", "text": ""}, {"location": "available_software/detail/libjxl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libjxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libjxl, load one of these modules using a module load command like:

          module load libjxl/0.8.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libjxl/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libleidenalg/", "title": "libleidenalg", "text": ""}, {"location": "available_software/detail/libleidenalg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libleidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libleidenalg, load one of these modules using a module load command like:

          module load libleidenalg/0.11.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libleidenalg/0.11.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/libmad/", "title": "libmad", "text": ""}, {"location": "available_software/detail/libmad/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libmad, load one of these modules using a module load command like:

          module load libmad/0.15.1b-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmad/0.15.1b-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmatheval/", "title": "libmatheval", "text": ""}, {"location": "available_software/detail/libmatheval/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmatheval installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libmatheval, load one of these modules using a module load command like:

          module load libmatheval/1.1.11-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmatheval/1.1.11-GCCcore-9.3.0 - x x - x x libmatheval/1.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libmaus2/", "title": "libmaus2", "text": ""}, {"location": "available_software/detail/libmaus2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmaus2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libmaus2, load one of these modules using a module load command like:

          module load libmaus2/2.0.813-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmaus2/2.0.813-GCC-12.3.0 x x x x x x libmaus2/2.0.499-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmypaint/", "title": "libmypaint", "text": ""}, {"location": "available_software/detail/libmypaint/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libmypaint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libmypaint, load one of these modules using a module load command like:

          module load libmypaint/1.6.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libmypaint/1.6.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/libobjcryst/", "title": "libobjcryst", "text": ""}, {"location": "available_software/detail/libobjcryst/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libobjcryst, load one of these modules using a module load command like:

          module load libobjcryst/2021.1.2-intel-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libobjcryst/2021.1.2-intel-2020a - - - - - x libobjcryst/2021.1.2-foss-2021b x x x - x x libobjcryst/2017.2.3-intel-2020a - x x - x x"}, {"location": "available_software/detail/libogg/", "title": "libogg", "text": ""}, {"location": "available_software/detail/libogg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libogg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libogg, load one of these modules using a module load command like:

          module load libogg/1.3.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libogg/1.3.5-GCCcore-12.3.0 x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x libogg/1.3.5-GCCcore-11.3.0 x x x x x x libogg/1.3.5-GCCcore-11.2.0 x x x x x x libogg/1.3.4-GCCcore-10.3.0 x x x x x x libogg/1.3.4-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libopus/", "title": "libopus", "text": ""}, {"location": "available_software/detail/libopus/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libopus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libopus, load one of these modules using a module load command like:

          module load libopus/1.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libopus/1.4-GCCcore-12.3.0 x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x libopus/1.3.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libpciaccess/", "title": "libpciaccess", "text": ""}, {"location": "available_software/detail/libpciaccess/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libpciaccess, load one of these modules using a module load command like:

          module load libpciaccess/0.17-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libpciaccess/0.17-GCCcore-13.2.0 x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x libpciaccess/0.16-GCCcore-11.3.0 x x x x x x libpciaccess/0.16-GCCcore-11.2.0 x x x x x x libpciaccess/0.16-GCCcore-10.3.0 x x x x x x libpciaccess/0.16-GCCcore-10.2.0 x x x x x x libpciaccess/0.16-GCCcore-9.3.0 x x x x x x libpciaccess/0.14-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libpng/", "title": "libpng", "text": ""}, {"location": "available_software/detail/libpng/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libpng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libpng, load one of these modules using a module load command like:

          module load libpng/1.6.40-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libpng/1.6.40-GCCcore-13.2.0 x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x libpng/1.6.37-GCCcore-11.3.0 x x x x x x libpng/1.6.37-GCCcore-11.2.0 x x x x x x libpng/1.6.37-GCCcore-10.3.0 x x x x x x libpng/1.6.37-GCCcore-10.2.0 x x x x x x libpng/1.6.37-GCCcore-9.3.0 x x x x x x libpng/1.6.37-GCCcore-8.3.0 x x x - x x libpng/1.6.36-GCCcore-8.2.0 - x - - - - libpng/1.2.58 - x x x x x"}, {"location": "available_software/detail/libpsl/", "title": "libpsl", "text": ""}, {"location": "available_software/detail/libpsl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libpsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libpsl, load one of these modules using a module load command like:

          module load libpsl/0.21.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libpsl/0.21.1-GCCcore-11.2.0 x x x x x x libpsl/0.21.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libreadline/", "title": "libreadline", "text": ""}, {"location": "available_software/detail/libreadline/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libreadline installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libreadline, load one of these modules using a module load command like:

          module load libreadline/8.2-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libreadline/8.2-GCCcore-13.2.0 x x x x x x libreadline/8.2-GCCcore-12.3.0 x x x x x x libreadline/8.2-GCCcore-12.2.0 x x x x x x libreadline/8.1.2-GCCcore-11.3.0 x x x x x x libreadline/8.1-GCCcore-11.2.0 x x x x x x libreadline/8.1-GCCcore-10.3.0 x x x x x x libreadline/8.0-GCCcore-10.2.0 x x x x x x libreadline/8.0-GCCcore-9.3.0 x x x x x x libreadline/8.0-GCCcore-8.3.0 x x x x x x libreadline/8.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/librosa/", "title": "librosa", "text": ""}, {"location": "available_software/detail/librosa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which librosa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using librosa, load one of these modules using a module load command like:

          module load librosa/0.7.2-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty librosa/0.7.2-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/librsvg/", "title": "librsvg", "text": ""}, {"location": "available_software/detail/librsvg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which librsvg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using librsvg, load one of these modules using a module load command like:

          module load librsvg/2.51.2-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty librsvg/2.51.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/librttopo/", "title": "librttopo", "text": ""}, {"location": "available_software/detail/librttopo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which librttopo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using librttopo, load one of these modules using a module load command like:

          module load librttopo/1.1.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty librttopo/1.1.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libsigc%2B%2B/", "title": "libsigc++", "text": ""}, {"location": "available_software/detail/libsigc%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libsigc++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libsigc++, load one of these modules using a module load command like:

          module load libsigc++/2.10.8-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libsigc++/2.10.8-GCCcore-10.3.0 - x x - x x libsigc++/2.10.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsndfile/", "title": "libsndfile", "text": ""}, {"location": "available_software/detail/libsndfile/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libsndfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libsndfile, load one of these modules using a module load command like:

          module load libsndfile/1.2.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x libsndfile/1.1.0-GCCcore-11.3.0 x x x x x x libsndfile/1.0.31-GCCcore-11.2.0 x x x x x x libsndfile/1.0.31-GCCcore-10.3.0 x x x x x x libsndfile/1.0.28-GCCcore-10.2.0 x x x x x x libsndfile/1.0.28-GCCcore-9.3.0 - x x - x x libsndfile/1.0.28-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsodium/", "title": "libsodium", "text": ""}, {"location": "available_software/detail/libsodium/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libsodium installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libsodium, load one of these modules using a module load command like:

          module load libsodium/1.0.18-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libsodium/1.0.18-GCCcore-12.3.0 x x x x x x libsodium/1.0.18-GCCcore-12.2.0 x x x x x x libsodium/1.0.18-GCCcore-11.3.0 x x x x x x libsodium/1.0.18-GCCcore-11.2.0 x x x x x x libsodium/1.0.18-GCCcore-10.3.0 x x x x x x libsodium/1.0.18-GCCcore-10.2.0 x x x x x x libsodium/1.0.18-GCCcore-9.3.0 x x x x x x libsodium/1.0.18-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libspatialindex/", "title": "libspatialindex", "text": ""}, {"location": "available_software/detail/libspatialindex/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libspatialindex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libspatialindex, load one of these modules using a module load command like:

          module load libspatialindex/1.9.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libspatialindex/1.9.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libspatialite/", "title": "libspatialite", "text": ""}, {"location": "available_software/detail/libspatialite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libspatialite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libspatialite, load one of these modules using a module load command like:

          module load libspatialite/5.0.1-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libspatialite/5.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libtasn1/", "title": "libtasn1", "text": ""}, {"location": "available_software/detail/libtasn1/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libtasn1 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libtasn1, load one of these modules using a module load command like:

          module load libtasn1/4.18.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libtasn1/4.18.0-GCCcore-11.2.0 x x x x x x libtasn1/4.17.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libtirpc/", "title": "libtirpc", "text": ""}, {"location": "available_software/detail/libtirpc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libtirpc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libtirpc, load one of these modules using a module load command like:

          module load libtirpc/1.3.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x libtirpc/1.3.2-GCCcore-11.3.0 x x x x x x libtirpc/1.3.2-GCCcore-11.2.0 x x x x x x libtirpc/1.3.2-GCCcore-10.3.0 x x x x x x libtirpc/1.3.1-GCCcore-10.2.0 - x x x x x libtirpc/1.2.6-GCCcore-9.3.0 - - x - x x libtirpc/1.2.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libtool/", "title": "libtool", "text": ""}, {"location": "available_software/detail/libtool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libtool, load one of these modules using a module load command like:

          module load libtool/2.4.7-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libtool/2.4.7-GCCcore-13.2.0 x x x x x x libtool/2.4.7-GCCcore-12.3.0 x x x x x x libtool/2.4.7-GCCcore-12.2.0 x x x x x x libtool/2.4.7-GCCcore-11.3.0 x x x x x x libtool/2.4.7 x x x x x x libtool/2.4.6-GCCcore-11.2.0 x x x x x x libtool/2.4.6-GCCcore-10.3.0 x x x x x x libtool/2.4.6-GCCcore-10.2.0 x x x x x x libtool/2.4.6-GCCcore-9.3.0 x x x x x x libtool/2.4.6-GCCcore-8.3.0 x x x x x x libtool/2.4.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libunistring/", "title": "libunistring", "text": ""}, {"location": "available_software/detail/libunistring/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libunistring installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libunistring, load one of these modules using a module load command like:

          module load libunistring/1.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libunistring/1.0-GCCcore-11.2.0 x x x x x x libunistring/0.9.10-GCCcore-10.3.0 x x x - x x libunistring/0.9.10-GCCcore-9.3.0 - x x - x x libunistring/0.9.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libunwind/", "title": "libunwind", "text": ""}, {"location": "available_software/detail/libunwind/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libunwind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libunwind, load one of these modules using a module load command like:

          module load libunwind/1.6.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libunwind/1.6.2-GCCcore-12.3.0 x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x libunwind/1.6.2-GCCcore-11.3.0 x x x x x x libunwind/1.5.0-GCCcore-11.2.0 x x x x x x libunwind/1.4.0-GCCcore-10.3.0 x x x x x x libunwind/1.4.0-GCCcore-10.2.0 x x x x x x libunwind/1.3.1-GCCcore-9.3.0 - x x - x x libunwind/1.3.1-GCCcore-8.3.0 x x x - x x libunwind/1.3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libvdwxc/", "title": "libvdwxc", "text": ""}, {"location": "available_software/detail/libvdwxc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libvdwxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libvdwxc, load one of these modules using a module load command like:

          module load libvdwxc/0.4.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libvdwxc/0.4.0-foss-2021b x x x - x x libvdwxc/0.4.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/libvorbis/", "title": "libvorbis", "text": ""}, {"location": "available_software/detail/libvorbis/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libvorbis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libvorbis, load one of these modules using a module load command like:

          module load libvorbis/1.3.7-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x libvorbis/1.3.7-GCCcore-11.3.0 x x x x x x libvorbis/1.3.7-GCCcore-11.2.0 x x x x x x libvorbis/1.3.7-GCCcore-10.3.0 x x x x x x libvorbis/1.3.7-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libvori/", "title": "libvori", "text": ""}, {"location": "available_software/detail/libvori/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libvori installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libvori, load one of these modules using a module load command like:

          module load libvori/220621-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libvori/220621-GCCcore-12.3.0 x x x x x x libvori/220621-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/libwebp/", "title": "libwebp", "text": ""}, {"location": "available_software/detail/libwebp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libwebp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libwebp, load one of these modules using a module load command like:

          module load libwebp/1.3.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libwebp/1.3.1-GCCcore-12.3.0 x x x x x x libwebp/1.3.1-GCCcore-12.2.0 x x x x x x libwebp/1.2.4-GCCcore-11.3.0 x x x x x x libwebp/1.2.0-GCCcore-11.2.0 x x x x x x libwebp/1.2.0-GCCcore-10.3.0 x x x - x x libwebp/1.1.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libwpe/", "title": "libwpe", "text": ""}, {"location": "available_software/detail/libwpe/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libwpe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libwpe, load one of these modules using a module load command like:

          module load libwpe/1.13.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libwpe/1.13.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libxc/", "title": "libxc", "text": ""}, {"location": "available_software/detail/libxc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxc, load one of these modules using a module load command like:

          module load libxc/6.2.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxc/6.2.2-GCC-12.3.0 x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x libxc/5.2.3-intel-compilers-2022.1.0 x x x x x x libxc/5.2.3-GCC-11.3.0 x x x x x x libxc/5.1.6-intel-compilers-2021.4.0 x x x x x x libxc/5.1.6-GCC-11.2.0 x x x - x x libxc/5.1.5-intel-compilers-2021.2.0 - x x - x x libxc/5.1.5-GCC-10.3.0 x x x x x x libxc/5.1.2-GCC-10.2.0 - x x x x x libxc/4.3.4-iccifort-2020.4.304 - x x x x x libxc/4.3.4-iccifort-2020.1.217 - x x - x x libxc/4.3.4-iccifort-2019.5.281 - x x - x x libxc/4.3.4-GCC-10.2.0 - x x x x x libxc/4.3.4-GCC-9.3.0 - x x - x x libxc/4.3.4-GCC-8.3.0 - x x - x x libxc/3.0.1-iomkl-2020a - x - - - - libxc/3.0.1-intel-2020a - x x - x x libxc/3.0.1-intel-2019b - x - - - - libxc/3.0.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/libxml%2B%2B/", "title": "libxml++", "text": ""}, {"location": "available_software/detail/libxml%2B%2B/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libxml++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxml++, load one of these modules using a module load command like:

          module load libxml++/2.42.1-GCC-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxml++/2.42.1-GCC-10.3.0 - x x - x x libxml++/2.40.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxml2/", "title": "libxml2", "text": ""}, {"location": "available_software/detail/libxml2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libxml2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxml2, load one of these modules using a module load command like:

          module load libxml2/2.11.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxml2/2.11.5-GCCcore-13.2.0 x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x libxml2/2.9.13-GCCcore-11.3.0 x x x x x x libxml2/2.9.10-GCCcore-11.2.0 x x x x x x libxml2/2.9.10-GCCcore-10.3.0 x x x x x x libxml2/2.9.10-GCCcore-10.2.0 x x x x x x libxml2/2.9.10-GCCcore-9.3.0 x x x x x x libxml2/2.9.9-GCCcore-8.3.0 x x x x x x libxml2/2.9.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libxslt/", "title": "libxslt", "text": ""}, {"location": "available_software/detail/libxslt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libxslt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxslt, load one of these modules using a module load command like:

          module load libxslt/1.1.38-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxslt/1.1.38-GCCcore-13.2.0 x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x libxslt/1.1.34-GCCcore-11.3.0 x x x x x x libxslt/1.1.34-GCCcore-11.2.0 x x x x x x libxslt/1.1.34-GCCcore-10.3.0 x x x x x x libxslt/1.1.34-GCCcore-10.2.0 x x x x x x libxslt/1.1.34-GCCcore-9.3.0 - x x - x x libxslt/1.1.34-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxsmm/", "title": "libxsmm", "text": ""}, {"location": "available_software/detail/libxsmm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libxsmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libxsmm, load one of these modules using a module load command like:

          module load libxsmm/1.17-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libxsmm/1.17-GCC-12.3.0 x x x x x x libxsmm/1.17-GCC-12.2.0 x x x x x x libxsmm/1.17-GCC-11.3.0 x x x x x x libxsmm/1.16.2-GCC-10.3.0 - x x x x x libxsmm/1.16.1-iccifort-2020.4.304 - x x - x - libxsmm/1.16.1-iccifort-2020.1.217 - x x - x x libxsmm/1.16.1-iccifort-2019.5.281 - x - - - - libxsmm/1.16.1-GCC-10.2.0 - x x x x x libxsmm/1.16.1-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/libyaml/", "title": "libyaml", "text": ""}, {"location": "available_software/detail/libyaml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libyaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libyaml, load one of these modules using a module load command like:

          module load libyaml/0.2.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libyaml/0.2.5-GCCcore-12.3.0 x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x libyaml/0.2.5-GCCcore-11.3.0 x x x x x x libyaml/0.2.5-GCCcore-11.2.0 x x x x x x libyaml/0.2.5-GCCcore-10.3.0 x x x x x x libyaml/0.2.5-GCCcore-10.2.0 x x x x x x libyaml/0.2.2-GCCcore-9.3.0 x x x x x x libyaml/0.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libzip/", "title": "libzip", "text": ""}, {"location": "available_software/detail/libzip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which libzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using libzip, load one of these modules using a module load command like:

          module load libzip/1.7.3-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty libzip/1.7.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/lifelines/", "title": "lifelines", "text": ""}, {"location": "available_software/detail/lifelines/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lifelines installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lifelines, load one of these modules using a module load command like:

          module load lifelines/0.27.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lifelines/0.27.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/likwid/", "title": "likwid", "text": ""}, {"location": "available_software/detail/likwid/#available-modules", "title": "Available modules", "text": "

          The overview below shows which likwid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using likwid, load one of these modules using a module load command like:

          module load likwid/5.0.1-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty likwid/5.0.1-GCCcore-8.3.0 - - x - x -"}, {"location": "available_software/detail/lmoments3/", "title": "lmoments3", "text": ""}, {"location": "available_software/detail/lmoments3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lmoments3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lmoments3, load one of these modules using a module load command like:

          module load lmoments3/1.0.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lmoments3/1.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/longread_umi/", "title": "longread_umi", "text": ""}, {"location": "available_software/detail/longread_umi/#available-modules", "title": "Available modules", "text": "

          The overview below shows which longread_umi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using longread_umi, load one of these modules using a module load command like:

          module load longread_umi/0.3.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty longread_umi/0.3.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/loomR/", "title": "loomR", "text": ""}, {"location": "available_software/detail/loomR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which loomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using loomR, load one of these modules using a module load command like:

          module load loomR/0.2.0-20180425-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty loomR/0.2.0-20180425-foss-2023a-R-4.3.2 x x x x x x loomR/0.2.0-20180425-foss-2022b-R-4.2.2 x x x x x x loomR/0.2.0-20180425-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/loompy/", "title": "loompy", "text": ""}, {"location": "available_software/detail/loompy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which loompy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using loompy, load one of these modules using a module load command like:

          module load loompy/3.0.7-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty loompy/3.0.7-intel-2021b x x x - x x loompy/3.0.7-foss-2022a x x x x x x loompy/3.0.7-foss-2021b x x x - x x loompy/3.0.7-foss-2021a x x x x x x loompy/3.0.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/louvain/", "title": "louvain", "text": ""}, {"location": "available_software/detail/louvain/#available-modules", "title": "Available modules", "text": "

          The overview below shows which louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using louvain, load one of these modules using a module load command like:

          module load louvain/0.8.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty louvain/0.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/lpsolve/", "title": "lpsolve", "text": ""}, {"location": "available_software/detail/lpsolve/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lpsolve installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lpsolve, load one of these modules using a module load command like:

          module load lpsolve/5.5.2.11-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lpsolve/5.5.2.11-GCC-11.2.0 x x x x x x lpsolve/5.5.2.11-GCC-10.2.0 x x x x x x lpsolve/5.5.2.5-iccifort-2019.5.281 - x x - x x lpsolve/5.5.2.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/lxml/", "title": "lxml", "text": ""}, {"location": "available_software/detail/lxml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lxml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lxml, load one of these modules using a module load command like:

          module load lxml/4.9.3-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lxml/4.9.3-GCCcore-13.2.0 x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x lxml/4.9.2-GCCcore-12.2.0 x x x x x x lxml/4.9.1-GCCcore-11.3.0 x x x x x x lxml/4.6.3-GCCcore-11.2.0 x x x x x x lxml/4.6.3-GCCcore-10.3.0 x x x x x x lxml/4.6.2-GCCcore-10.2.0 x x x x x x lxml/4.5.2-GCCcore-9.3.0 - x x - x x lxml/4.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/lz4/", "title": "lz4", "text": ""}, {"location": "available_software/detail/lz4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which lz4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using lz4, load one of these modules using a module load command like:

          module load lz4/1.9.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty lz4/1.9.4-GCCcore-13.2.0 x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x lz4/1.9.3-GCCcore-11.3.0 x x x x x x lz4/1.9.3-GCCcore-11.2.0 x x x x x x lz4/1.9.3-GCCcore-10.3.0 x x x x x x lz4/1.9.2-GCCcore-10.2.0 x x x x x x lz4/1.9.2-GCCcore-9.3.0 - x x x x x lz4/1.9.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/maeparser/", "title": "maeparser", "text": ""}, {"location": "available_software/detail/maeparser/#available-modules", "title": "Available modules", "text": "

          The overview below shows which maeparser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using maeparser, load one of these modules using a module load command like:

          module load maeparser/1.3.0-iimpi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty maeparser/1.3.0-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/magma/", "title": "magma", "text": ""}, {"location": "available_software/detail/magma/#available-modules", "title": "Available modules", "text": "

          The overview below shows which magma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using magma, load one of these modules using a module load command like:

          module load magma/2.7.2-foss-2023a-CUDA-12.1.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty magma/2.7.2-foss-2023a-CUDA-12.1.1 x - x - x - magma/2.6.2-foss-2022a-CUDA-11.7.0 x - x - x - magma/2.6.1-foss-2021a-CUDA-11.3.1 x - - - x - magma/2.5.4-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/mahotas/", "title": "mahotas", "text": ""}, {"location": "available_software/detail/mahotas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mahotas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mahotas, load one of these modules using a module load command like:

          module load mahotas/1.4.13-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mahotas/1.4.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/make/", "title": "make", "text": ""}, {"location": "available_software/detail/make/#available-modules", "title": "Available modules", "text": "

          The overview below shows which make installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using make, load one of these modules using a module load command like:

          module load make/4.4.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty make/4.4.1-GCCcore-13.2.0 x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x make/4.3-GCCcore-12.2.0 - x x - x - make/4.3-GCCcore-11.3.0 x x x - x - make/4.3-GCCcore-11.2.0 x x - x - - make/4.3-GCCcore-10.3.0 x x x - x x make/4.3-GCCcore-10.2.0 x x - - - - make/4.3-GCCcore-9.3.0 - x x - x x make/4.2.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/makedepend/", "title": "makedepend", "text": ""}, {"location": "available_software/detail/makedepend/#available-modules", "title": "Available modules", "text": "

          The overview below shows which makedepend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using makedepend, load one of these modules using a module load command like:

          module load makedepend/1.0.6-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty makedepend/1.0.6-GCCcore-10.3.0 - x x - x x makedepend/1.0.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/makeinfo/", "title": "makeinfo", "text": ""}, {"location": "available_software/detail/makeinfo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which makeinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using makeinfo, load one of these modules using a module load command like:

          module load makeinfo/7.0.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty makeinfo/7.0.3-GCCcore-12.3.0 x x x x x x makeinfo/6.7-GCCcore-10.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.3.0 - x x - x x makeinfo/6.7-GCCcore-10.2.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.2.0 - x x x x x makeinfo/6.7-GCCcore-9.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-9.3.0 - x x - x x makeinfo/6.7-GCCcore-8.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/manta/", "title": "manta", "text": ""}, {"location": "available_software/detail/manta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which manta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using manta, load one of these modules using a module load command like:

          module load manta/1.6.0-gompi-2020a-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty manta/1.6.0-gompi-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/mapDamage/", "title": "mapDamage", "text": ""}, {"location": "available_software/detail/mapDamage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mapDamage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mapDamage, load one of these modules using a module load command like:

          module load mapDamage/2.2.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mapDamage/2.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/matplotlib/", "title": "matplotlib", "text": ""}, {"location": "available_software/detail/matplotlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which matplotlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using matplotlib, load one of these modules using a module load command like:

          module load matplotlib/3.7.2-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
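          To verify which matplotlib you actually picked up, a short command-line check can help; a minimal sketch, assuming the matplotlib module loads a compatible Python module as a dependency:

          module load matplotlib/3.7.2-gfbf-2023a
          python -c 'import matplotlib; print(matplotlib.__version__)'   # expected to print 3.7.2 while this module is loaded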

          accelgor doduo donphan gallade joltik skitty matplotlib/3.7.2-gfbf-2023a x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x matplotlib/3.5.2-intel-2022a x x x x x x matplotlib/3.5.2-foss-2022a x x x x x x matplotlib/3.5.2-foss-2021b x - x - x - matplotlib/3.4.3-intel-2021b x x x - x x matplotlib/3.4.3-foss-2021b x x x x x x matplotlib/3.4.2-gomkl-2021a x x x x x x matplotlib/3.4.2-foss-2021a x x x x x x matplotlib/3.3.3-intel-2020b - x x - x x matplotlib/3.3.3-fosscuda-2020b x - - - x - matplotlib/3.3.3-foss-2020b x x x x x x matplotlib/3.2.1-intel-2020a-Python-3.8.2 x x x x x x matplotlib/3.2.1-foss-2020a-Python-3.8.2 - x x - x x matplotlib/3.1.1-intel-2019b-Python-3.7.4 - x x - x x matplotlib/3.1.1-foss-2019b-Python-3.7.4 - x x - x x matplotlib/2.2.5-intel-2020a-Python-2.7.18 - x x - x x matplotlib/2.2.5-foss-2020b-Python-2.7.18 - x x x x x matplotlib/2.2.4-intel-2019b-Python-2.7.16 - x x - x x matplotlib/2.2.4-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/maturin/", "title": "maturin", "text": ""}, {"location": "available_software/detail/maturin/#available-modules", "title": "Available modules", "text": "

          The overview below shows which maturin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using maturin, load one of these modules using a module load command like:

          module load maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x maturin/1.4.0-GCCcore-12.2.0-Rust-1.75.0 x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x maturin/1.1.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/mauveAligner/", "title": "mauveAligner", "text": ""}, {"location": "available_software/detail/mauveAligner/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mauveAligner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mauveAligner, load one of these modules using a module load command like:

          module load mauveAligner/4736-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mauveAligner/4736-gompi-2020a - x x - x x"}, {"location": "available_software/detail/maze/", "title": "maze", "text": ""}, {"location": "available_software/detail/maze/#available-modules", "title": "Available modules", "text": "

          The overview below shows which maze installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using maze, load one of these modules using a module load command like:

          module load maze/20170124-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty maze/20170124-foss-2020b - x x x x x"}, {"location": "available_software/detail/mcu/", "title": "mcu", "text": ""}, {"location": "available_software/detail/mcu/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mcu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using mcu, load one of these modules using a module load command like:

          module load mcu/2021-04-06-gomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mcu/2021-04-06-gomkl-2021a x x x - x x"}, {"location": "available_software/detail/medImgProc/", "title": "medImgProc", "text": ""}, {"location": "available_software/detail/medImgProc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which medImgProc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using medImgProc, load one of these modules using a module load command like:

          module load medImgProc/2.5.7-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty medImgProc/2.5.7-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/medaka/", "title": "medaka", "text": ""}, {"location": "available_software/detail/medaka/#available-modules", "title": "Available modules", "text": "

          The overview below shows which medaka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using medaka, load one of these modules using a module load command like:

          module load medaka/1.11.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty medaka/1.11.3-foss-2022a x x x x x x medaka/1.9.1-foss-2022a x x x x x x medaka/1.8.1-foss-2022a x x x x x x medaka/1.6.0-foss-2021b x x x - x x medaka/1.4.3-foss-2020b - x x x x x medaka/1.4.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.2.6-foss-2019b-Python-3.7.4 - x - - - - medaka/1.1.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.1.1-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/meshalyzer/", "title": "meshalyzer", "text": ""}, {"location": "available_software/detail/meshalyzer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which meshalyzer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using meshalyzer, load one of these modules using a module load command like:

          module load meshalyzer/20200308-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty meshalyzer/20200308-foss-2020a-Python-3.8.2 - x x - x x meshalyzer/2.2-foss-2020b - x x x x x meshalyzer/2.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/meshtool/", "title": "meshtool", "text": ""}, {"location": "available_software/detail/meshtool/#available-modules", "title": "Available modules", "text": "

          The overview below shows which meshtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using meshtool, load one of these modules using a module load command like:

          module load meshtool/16-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty meshtool/16-GCC-10.2.0 - x x x x x meshtool/16-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/meson-python/", "title": "meson-python", "text": ""}, {"location": "available_software/detail/meson-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which meson-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using meson-python, load one of these modules using a module load command like:

          module load meson-python/0.15.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty meson-python/0.15.0-GCCcore-13.2.0 x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/metaWRAP/", "title": "metaWRAP", "text": ""}, {"location": "available_software/detail/metaWRAP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which metaWRAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using metaWRAP, load one of these modules using a module load command like:

          module load metaWRAP/1.3-foss-2020b-Python-2.7.18\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty metaWRAP/1.3-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/metaerg/", "title": "metaerg", "text": ""}, {"location": "available_software/detail/metaerg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which metaerg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using metaerg, load one of these modules using a module load command like:

          module load metaerg/1.2.3-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty metaerg/1.2.3-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/methylpy/", "title": "methylpy", "text": ""}, {"location": "available_software/detail/methylpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which methylpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

          To start using methylpy, load one of these modules using a module load command like:

          module load methylpy/1.2.9-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty methylpy/1.2.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/mgen/", "title": "mgen", "text": ""}, {"location": "available_software/detail/mgen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mgen, load one of these modules using a module load command like:

          module load mgen/1.2.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mgen/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/mgltools/", "title": "mgltools", "text": ""}, {"location": "available_software/detail/mgltools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mgltools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mgltools, load one of these modules using a module load command like:

          module load mgltools/1.5.7\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mgltools/1.5.7 x x x - x x"}, {"location": "available_software/detail/mhcnuggets/", "title": "mhcnuggets", "text": ""}, {"location": "available_software/detail/mhcnuggets/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mhcnuggets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mhcnuggets, load one of these modules using a module load command like:

          module load mhcnuggets/2.3-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mhcnuggets/2.3-fosscuda-2020b - - - - x - mhcnuggets/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/microctools/", "title": "microctools", "text": ""}, {"location": "available_software/detail/microctools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which microctools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using microctools, load one of these modules using a module load command like:

          module load microctools/0.1.0-20201209-foss-2020b-R-4.0.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty microctools/0.1.0-20201209-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/minibar/", "title": "minibar", "text": ""}, {"location": "available_software/detail/minibar/#available-modules", "title": "Available modules", "text": "

          The overview below shows which minibar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using minibar, load one of these modules using a module load command like:

          module load minibar/20200326-iccifort-2020.1.217-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty minibar/20200326-iccifort-2020.1.217-Python-3.8.2 - x x - x - minibar/20200326-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/minimap2/", "title": "minimap2", "text": ""}, {"location": "available_software/detail/minimap2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which minimap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using minimap2, load one of these modules using a module load command like:

          module load minimap2/2.26-GCCcore-12.3.0\n
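
          Once the module is loaded, a typical invocation could look like this (a minimal sketch; reference.fa, reads.fastq and alignments.sam are placeholder file names, and any of the versions listed below can be loaded instead):

          # map long reads to a reference and write SAM output
          minimap2 -a reference.fa reads.fastq > alignments.sam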

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty minimap2/2.26-GCCcore-12.3.0 x x x x x x minimap2/2.26-GCCcore-12.2.0 x x x x x x minimap2/2.24-GCCcore-11.3.0 x x x x x x minimap2/2.24-GCCcore-11.2.0 x x x - x x minimap2/2.22-GCCcore-11.2.0 x x x - x x minimap2/2.20-GCCcore-10.3.0 x x x - x x minimap2/2.20-GCCcore-10.2.0 - x x - x x minimap2/2.18-GCCcore-10.2.0 - x x x x x minimap2/2.17-GCCcore-9.3.0 - x x - x x minimap2/2.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/minizip/", "title": "minizip", "text": ""}, {"location": "available_software/detail/minizip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which minizip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using minizip, load one of these modules using a module load command like:

          module load minizip/1.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty minizip/1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/misha/", "title": "misha", "text": ""}, {"location": "available_software/detail/misha/#available-modules", "title": "Available modules", "text": "

          The overview below shows which misha installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using misha, load one of these modules using a module load command like:

          module load misha/4.0.10-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty misha/4.0.10-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/mkl-service/", "title": "mkl-service", "text": ""}, {"location": "available_software/detail/mkl-service/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mkl-service installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mkl-service, load one of these modules using a module load command like:

          module load mkl-service/2.3.0-intel-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mkl-service/2.3.0-intel-2021b x x x - x x mkl-service/2.3.0-intel-2020b - - x - x x mkl-service/2.3.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/mm-common/", "title": "mm-common", "text": ""}, {"location": "available_software/detail/mm-common/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mm-common installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mm-common, load one of these modules using a module load command like:

          module load mm-common/1.0.4-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mm-common/1.0.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/molmod/", "title": "molmod", "text": ""}, {"location": "available_software/detail/molmod/#available-modules", "title": "Available modules", "text": "

          The overview below shows which molmod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using molmod, load one of these modules using a module load command like:

          module load molmod/1.4.5-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty molmod/1.4.5-intel-2020a-Python-3.8.2 x x x x x x molmod/1.4.5-intel-2019b-Python-3.7.4 - x x - x x molmod/1.4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/mongolite/", "title": "mongolite", "text": ""}, {"location": "available_software/detail/mongolite/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mongolite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mongolite, load one of these modules using a module load command like:

          module load mongolite/2.3.0-foss-2020b-R-4.0.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mongolite/2.3.0-foss-2020b-R-4.0.4 - x x x x x mongolite/2.3.0-foss-2020b-R-4.0.3 - x x x x x mongolite/2.3.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/monitor/", "title": "monitor", "text": ""}, {"location": "available_software/detail/monitor/#available-modules", "title": "Available modules", "text": "

          The overview below shows which monitor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using monitor, load one of these modules using a module load command like:

          module load monitor/1.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty monitor/1.1.2 - x x - x -"}, {"location": "available_software/detail/mosdepth/", "title": "mosdepth", "text": ""}, {"location": "available_software/detail/mosdepth/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mosdepth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mosdepth, load one of these modules using a module load command like:

          module load mosdepth/0.3.3-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mosdepth/0.3.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/motionSegmentation/", "title": "motionSegmentation", "text": ""}, {"location": "available_software/detail/motionSegmentation/#available-modules", "title": "Available modules", "text": "

          The overview below shows which motionSegmentation installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using motionSegmentation, load one of these modules using a module load command like:

          module load motionSegmentation/2.7.9-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty motionSegmentation/2.7.9-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/mpath/", "title": "mpath", "text": ""}, {"location": "available_software/detail/mpath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mpath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mpath, load one of these modules using a module load command like:

          module load mpath/1.1.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mpath/1.1.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/mpi4py/", "title": "mpi4py", "text": ""}, {"location": "available_software/detail/mpi4py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mpi4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mpi4py, load one of these modules using a module load command like:

          module load mpi4py/3.1.4-gompi-2023a\n
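
          Once the module is loaded, a quick way to check that MPI works is to run a one-liner under mpirun (a minimal sketch; the process count of 4 is arbitrary, and mpirun is assumed to come from the gompi toolchain's Open MPI):

          # each MPI process prints its own rank (0..3)
          mpirun -n 4 python -c "from mpi4py import MPI; print(MPI.COMM_WORLD.Get_rank())"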

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mpi4py/3.1.4-gompi-2023a x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x"}, {"location": "available_software/detail/mrcfile/", "title": "mrcfile", "text": ""}, {"location": "available_software/detail/mrcfile/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mrcfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mrcfile, load one of these modules using a module load command like:

          module load mrcfile/1.3.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mrcfile/1.3.0-fosscuda-2020b x - - - x - mrcfile/1.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/muParser/", "title": "muParser", "text": ""}, {"location": "available_software/detail/muParser/#available-modules", "title": "Available modules", "text": "

          The overview below shows which muParser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using muParser, load one of these modules using a module load command like:

          module load muParser/2.3.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty muParser/2.3.4-GCCcore-12.3.0 x x x x x x muParser/2.3.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/mujoco-py/", "title": "mujoco-py", "text": ""}, {"location": "available_software/detail/mujoco-py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mujoco-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mujoco-py, load one of these modules using a module load command like:

          module load mujoco-py/2.3.7-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mujoco-py/2.3.7-foss-2023a x x x x x x mujoco-py/2.1.2.14-foss-2021b x x x x x x"}, {"location": "available_software/detail/multichoose/", "title": "multichoose", "text": ""}, {"location": "available_software/detail/multichoose/#available-modules", "title": "Available modules", "text": "

          The overview below shows which multichoose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using multichoose, load one of these modules using a module load command like:

          module load multichoose/1.0.3-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty multichoose/1.0.3-GCCcore-11.3.0 x x x x x x multichoose/1.0.3-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/mygene/", "title": "mygene", "text": ""}, {"location": "available_software/detail/mygene/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mygene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mygene, load one of these modules using a module load command like:

          module load mygene/3.2.2-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mygene/3.2.2-foss-2022b x x x x x x mygene/3.2.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/mysqlclient/", "title": "mysqlclient", "text": ""}, {"location": "available_software/detail/mysqlclient/#available-modules", "title": "Available modules", "text": "

          The overview below shows which mysqlclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using mysqlclient, load one of these modules using a module load command like:

          module load mysqlclient/2.1.1-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty mysqlclient/2.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/n2v/", "title": "n2v", "text": ""}, {"location": "available_software/detail/n2v/#available-modules", "title": "Available modules", "text": "

          The overview below shows which n2v installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using n2v, load one of these modules using a module load command like:

          module load n2v/0.3.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty n2v/0.3.2-foss-2022a-CUDA-11.7.0 x - - - x - n2v/0.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/nanocompore/", "title": "nanocompore", "text": ""}, {"location": "available_software/detail/nanocompore/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nanocompore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nanocompore, load one of these modules using a module load command like:

          module load nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/nanofilt/", "title": "nanofilt", "text": ""}, {"location": "available_software/detail/nanofilt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nanofilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nanofilt, load one of these modules using a module load command like:

          module load nanofilt/2.6.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanofilt/2.6.0-intel-2020a-Python-3.8.2 - x x - x x nanofilt/2.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanoget/", "title": "nanoget", "text": ""}, {"location": "available_software/detail/nanoget/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nanoget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nanoget, load one of these modules using a module load command like:

          module load nanoget/1.18.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanoget/1.18.1-foss-2022a x x x x x x nanoget/1.18.1-foss-2021a x x x x x x nanoget/1.15.0-intel-2020b - x x - x x nanoget/1.12.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanomath/", "title": "nanomath", "text": ""}, {"location": "available_software/detail/nanomath/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nanomath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nanomath, load one of these modules using a module load command like:

          module load nanomath/1.3.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanomath/1.3.0-foss-2022a x x x x x x nanomath/1.2.1-foss-2021a x x x x x x nanomath/1.2.0-intel-2020b - x x - x x nanomath/0.23.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanopolish/", "title": "nanopolish", "text": ""}, {"location": "available_software/detail/nanopolish/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nanopolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nanopolish, load one of these modules using a module load command like:

          module load nanopolish/0.14.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nanopolish/0.14.0-foss-2022a x x x x x x nanopolish/0.13.3-foss-2020b - x x x x x nanopolish/0.13.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/napari/", "title": "napari", "text": ""}, {"location": "available_software/detail/napari/#available-modules", "title": "Available modules", "text": "

          The overview below shows which napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using napari, load one of these modules using a module load command like:

          module load napari/0.4.18-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty napari/0.4.18-foss-2022a x x x x x x napari/0.4.15-foss-2021b x x x - x x"}, {"location": "available_software/detail/ncbi-vdb/", "title": "ncbi-vdb", "text": ""}, {"location": "available_software/detail/ncbi-vdb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncbi-vdb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ncbi-vdb, load one of these modules using a module load command like:

          module load ncbi-vdb/3.0.2-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncbi-vdb/3.0.2-gompi-2022a x x x x x x ncbi-vdb/3.0.0-gompi-2021b x x x x x x ncbi-vdb/2.11.2-gompi-2021b x x x x x x ncbi-vdb/2.10.9-gompi-2020b - x x x x x ncbi-vdb/2.10.7-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ncdf4/", "title": "ncdf4", "text": ""}, {"location": "available_software/detail/ncdf4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncdf4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ncdf4, load one of these modules using a module load command like:

          module load ncdf4/1.17-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncdf4/1.17-foss-2021a-R-4.1.0 - x x - x x ncdf4/1.17-foss-2020b-R-4.0.3 x x x x x x ncdf4/1.17-foss-2020a-R-4.0.0 - x x - x x ncdf4/1.17-foss-2019b - x x - x x"}, {"location": "available_software/detail/ncolor/", "title": "ncolor", "text": ""}, {"location": "available_software/detail/ncolor/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncolor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ncolor, load one of these modules using a module load command like:

          module load ncolor/1.2.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncolor/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/ncurses/", "title": "ncurses", "text": ""}, {"location": "available_software/detail/ncurses/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncurses installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ncurses, load one of these modules using a module load command like:

          module load ncurses/6.4-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncurses/6.4-GCCcore-13.2.0 x x x x x x ncurses/6.4-GCCcore-12.3.0 x x x x x x ncurses/6.4 x x x x x x ncurses/6.3-GCCcore-12.2.0 x x x x x x ncurses/6.3-GCCcore-11.3.0 x x x x x x ncurses/6.3 x x x x x x ncurses/6.2-GCCcore-11.2.0 x x x x x x ncurses/6.2-GCCcore-10.3.0 x x x x x x ncurses/6.2-GCCcore-10.2.0 x x x x x x ncurses/6.2-GCCcore-9.3.0 x x x x x x ncurses/6.2 x x x x x x ncurses/6.1-GCCcore-8.3.0 x x x x x x ncurses/6.1-GCCcore-8.2.0 - x - - - - ncurses/6.1 x x x x x x ncurses/6.0 x x x x x x"}, {"location": "available_software/detail/ncview/", "title": "ncview", "text": ""}, {"location": "available_software/detail/ncview/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ncview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ncview, load one of these modules using a module load command like:

          module load ncview/2.1.7-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ncview/2.1.7-intel-2019b - x x - x x"}, {"location": "available_software/detail/netCDF-C%2B%2B4/", "title": "netCDF-C++4", "text": ""}, {"location": "available_software/detail/netCDF-C%2B%2B4/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netCDF-C++4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using netCDF-C++4, load one of these modules using a module load command like:

          module load netCDF-C++4/4.3.1-iimpi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netCDF-C++4/4.3.1-iimpi-2020b - x x x x x netCDF-C++4/4.3.1-iimpi-2019b - x x - x x netCDF-C++4/4.3.1-gompi-2021b x x x - x x netCDF-C++4/4.3.1-gompi-2021a - x x - x x netCDF-C++4/4.3.1-gompi-2020a - x x - x x"}, {"location": "available_software/detail/netCDF-Fortran/", "title": "netCDF-Fortran", "text": ""}, {"location": "available_software/detail/netCDF-Fortran/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using netCDF-Fortran, load one of these modules using a module load command like:

          module load netCDF-Fortran/4.6.0-iimpi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netCDF-Fortran/4.6.0-iimpi-2022a - - x - x x netCDF-Fortran/4.6.0-gompi-2022a x - x - x - netCDF-Fortran/4.5.3-iimpi-2021b x x x x x x netCDF-Fortran/4.5.3-iimpi-2020b - x x x x x netCDF-Fortran/4.5.3-gompi-2021b x x x x x x netCDF-Fortran/4.5.3-gompi-2021a - x x - x x netCDF-Fortran/4.5.2-iimpi-2020a - x x - x x netCDF-Fortran/4.5.2-iimpi-2019b - x x - x x netCDF-Fortran/4.5.2-gompi-2020a - x x - x x netCDF-Fortran/4.5.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/netCDF/", "title": "netCDF", "text": ""}, {"location": "available_software/detail/netCDF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using netCDF, load one of these modules using a module load command like:

          module load netCDF/4.9.2-gompi-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netCDF/4.9.2-gompi-2023a x x x x x x netCDF/4.9.0-iimpi-2022a - - x - x x netCDF/4.9.0-gompi-2022b x x x x x x netCDF/4.9.0-gompi-2022a x x x x x x netCDF/4.8.1-iimpi-2021b x x x x x x netCDF/4.8.1-gompi-2021b x x x x x x netCDF/4.8.0-iimpi-2021a - x x - x x netCDF/4.8.0-gompi-2021a x x x x x x netCDF/4.7.4-iimpi-2020b - x x x x x netCDF/4.7.4-iimpi-2020a - x x - x x netCDF/4.7.4-gompic-2020b - - - - x - netCDF/4.7.4-gompi-2020b x x x x x x netCDF/4.7.4-gompi-2020a - x x - x x netCDF/4.7.1-iimpi-2019b - x x - x x netCDF/4.7.1-gompi-2019b x x x - x x"}, {"location": "available_software/detail/netcdf4-python/", "title": "netcdf4-python", "text": ""}, {"location": "available_software/detail/netcdf4-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which netcdf4-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using netcdf4-python, load one of these modules using a module load command like:

          module load netcdf4-python/1.6.4-foss-2023a\n
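
          After loading the module, a quick sanity check could be to import the Python bindings (a minimal sketch that only verifies the netCDF4 package is importable):

          python -c "import netCDF4; print(netCDF4.__version__)"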

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty netcdf4-python/1.6.4-foss-2023a x x x x x x netcdf4-python/1.6.1-foss-2022a x x x x x x netcdf4-python/1.5.7-intel-2021b x x x - x x netcdf4-python/1.5.7-foss-2021b x x x x x x netcdf4-python/1.5.7-foss-2021a x x x x x x netcdf4-python/1.5.5.1-intel-2020b - x x - x x netcdf4-python/1.5.5.1-fosscuda-2020b - - - - x - netcdf4-python/1.5.3-intel-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-intel-2019b-Python-3.7.4 - x x - x x netcdf4-python/1.5.3-foss-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nettle/", "title": "nettle", "text": ""}, {"location": "available_software/detail/nettle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nettle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nettle, load one of these modules using a module load command like:

          module load nettle/3.9.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nettle/3.9.1-GCCcore-12.3.0 x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x nettle/3.8-GCCcore-11.3.0 x x x x x x nettle/3.7.3-GCCcore-11.2.0 x x x x x x nettle/3.7.2-GCCcore-10.3.0 x x x x x x nettle/3.6-GCCcore-10.2.0 x x x x x x nettle/3.6-GCCcore-9.3.0 - x x - x x nettle/3.5.1-GCCcore-8.3.0 x x x - x x nettle/3.4.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/networkx/", "title": "networkx", "text": ""}, {"location": "available_software/detail/networkx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which networkx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using networkx, load one of these modules using a module load command like:

          module load networkx/3.1-gfbf-2023a\n
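
          Once loaded, the package can be used directly from Python (a minimal sketch on a small built-in example graph):

          # shortest path between the end points of a 5-node path graph
          python -c "import networkx as nx; G = nx.path_graph(5); print(nx.shortest_path(G, 0, 4))"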

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty networkx/3.1-gfbf-2023a x x x x x x networkx/3.0-gfbf-2022b x x x x x x networkx/3.0-foss-2022b x x x x x x networkx/2.8.4-intel-2022a x x x x x x networkx/2.8.4-foss-2022a x x x x x x networkx/2.6.3-foss-2021b x x x x x x networkx/2.5.1-foss-2021a x x x x x x networkx/2.5-fosscuda-2020b x - - - x - networkx/2.5-foss-2020b - x x x x x networkx/2.4-intel-2020a-Python-3.8.2 - x x - x x networkx/2.4-intel-2019b-Python-3.7.4 - x x - x x networkx/2.4-foss-2020a-Python-3.8.2 - x x - x x networkx/2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nghttp2/", "title": "nghttp2", "text": ""}, {"location": "available_software/detail/nghttp2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nghttp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nghttp2, load one of these modules using a module load command like:

          module load nghttp2/1.48.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nghttp2/1.48.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nghttp3/", "title": "nghttp3", "text": ""}, {"location": "available_software/detail/nghttp3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nghttp3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nghttp3, load one of these modules using a module load command like:

          module load nghttp3/0.6.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nghttp3/0.6.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/nglview/", "title": "nglview", "text": ""}, {"location": "available_software/detail/nglview/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nglview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nglview, load one of these modules using a module load command like:

          module load nglview/2.7.7-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nglview/2.7.7-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/ngtcp2/", "title": "ngtcp2", "text": ""}, {"location": "available_software/detail/ngtcp2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ngtcp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ngtcp2, load one of these modules using a module load command like:

          module load ngtcp2/0.7.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ngtcp2/0.7.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nichenetr/", "title": "nichenetr", "text": ""}, {"location": "available_software/detail/nichenetr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nichenetr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nichenetr, load one of these modules using a module load command like:

          module load nichenetr/2.0.4-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nichenetr/2.0.4-foss-2022b-R-4.2.2 x x x x x x nichenetr/1.1.1-20230223-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/nlohmann_json/", "title": "nlohmann_json", "text": ""}, {"location": "available_software/detail/nlohmann_json/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nlohmann_json, load one of these modules using a module load command like:

          module load nlohmann_json/3.11.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x nlohmann_json/3.10.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/nnU-Net/", "title": "nnU-Net", "text": ""}, {"location": "available_software/detail/nnU-Net/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nnU-Net installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nnU-Net, load one of these modules using a module load command like:

          module load nnU-Net/1.7.0-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nnU-Net/1.7.0-fosscuda-2020b x - - - x - nnU-Net/1.7.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/nodejs/", "title": "nodejs", "text": ""}, {"location": "available_software/detail/nodejs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nodejs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nodejs, load one of these modules using a module load command like:

          module load nodejs/18.17.1-GCCcore-12.3.0\n
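
          After loading the module, the node interpreter is on your PATH (a minimal sketch, just evaluating a one-line script to confirm the interpreter runs):

          node -e "console.log('running node', process.version)"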

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nodejs/18.17.1-GCCcore-12.3.0 x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x nodejs/16.15.1-GCCcore-11.3.0 x x x x x x nodejs/14.17.6-GCCcore-11.2.0 x x x x x x nodejs/14.17.0-GCCcore-10.3.0 x x x x x x nodejs/12.19.0-GCCcore-10.2.0 x x x x x x nodejs/12.16.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/noise/", "title": "noise", "text": ""}, {"location": "available_software/detail/noise/#available-modules", "title": "Available modules", "text": "

          The overview below shows which noise installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using noise, load one of these modules using a module load command like:

          module load noise/1.2.2-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty noise/1.2.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/nsync/", "title": "nsync", "text": ""}, {"location": "available_software/detail/nsync/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nsync installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nsync, load one of these modules using a module load command like:

          module load nsync/1.26.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nsync/1.26.0-GCCcore-12.3.0 x x x x x x nsync/1.26.0-GCCcore-12.2.0 x x x x x x nsync/1.25.0-GCCcore-11.3.0 x x x x x x nsync/1.24.0-GCCcore-11.2.0 x x x x x x nsync/1.24.0-GCCcore-10.3.0 x x x x x x nsync/1.24.0-GCCcore-10.2.0 x x x x x x nsync/1.24.0-GCCcore-9.3.0 - x x - x x nsync/1.24.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ntCard/", "title": "ntCard", "text": ""}, {"location": "available_software/detail/ntCard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ntCard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ntCard, load one of these modules using a module load command like:

          module load ntCard/1.2.2-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ntCard/1.2.2-GCC-12.3.0 x x x x x x ntCard/1.2.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/num2words/", "title": "num2words", "text": ""}, {"location": "available_software/detail/num2words/#available-modules", "title": "Available modules", "text": "

          The overview below shows which num2words installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using num2words, load one of these modules using a module load command like:

          module load num2words/0.5.10-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty num2words/0.5.10-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/numactl/", "title": "numactl", "text": ""}, {"location": "available_software/detail/numactl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which numactl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using numactl, load one of these modules using a module load command like:

          module load numactl/2.0.16-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty numactl/2.0.16-GCCcore-13.2.0 x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x numactl/2.0.14-GCCcore-11.3.0 x x x x x x numactl/2.0.14-GCCcore-11.2.0 x x x x x x numactl/2.0.14-GCCcore-10.3.0 x x x x x x numactl/2.0.13-GCCcore-10.2.0 x x x x x x numactl/2.0.13-GCCcore-9.3.0 x x x x x x numactl/2.0.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/numba/", "title": "numba", "text": ""}, {"location": "available_software/detail/numba/#available-modules", "title": "Available modules", "text": "

          The overview below shows which numba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using numba, load one of these modules using a module load command like:

          module load numba/0.58.1-foss-2023a\n
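
          Once the module is loaded, a quick way to confirm that JIT compilation works is to compile a trivial function (a minimal sketch; numba's njit is applied here to a throwaway lambda):

          # compiles the lambda and prints 42
          python -c "from numba import njit; print(njit(lambda x: x + 1)(41))"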

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty numba/0.58.1-foss-2023a x x x x x x numba/0.58.1-foss-2022b x x x x x x numba/0.56.4-foss-2022a-CUDA-11.7.0 x - x - x - numba/0.56.4-foss-2022a x x x x x x numba/0.54.1-intel-2021b x x x - x x numba/0.54.1-foss-2021b-CUDA-11.4.1 x - - - x - numba/0.54.1-foss-2021b x x x x x x numba/0.53.1-fosscuda-2020b - - - - x - numba/0.53.1-foss-2021a x x x x x x numba/0.53.1-foss-2020b - x x x x x numba/0.52.0-intel-2020b - x x - x x numba/0.52.0-fosscuda-2020b - - - - x - numba/0.52.0-foss-2020b - x x x x x numba/0.50.0-intel-2020a-Python-3.8.2 - x x - x x numba/0.50.0-foss-2020a-Python-3.8.2 - x x - x x numba/0.47.0-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/numexpr/", "title": "numexpr", "text": ""}, {"location": "available_software/detail/numexpr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which numexpr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using numexpr, load one of these modules using a module load command like:

          module load numexpr/2.7.1-intel-2020a-Python-3.8.2\n
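
          After loading the module, expressions over NumPy arrays can be evaluated with numexpr (a minimal sketch; it assumes NumPy is available through the module's dependencies):

          # element-wise evaluation of 2*a + 1 over a small array
          python -c "import numpy as np, numexpr as ne; a = np.arange(5); print(ne.evaluate('2*a + 1'))"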

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty numexpr/2.7.1-intel-2020a-Python-3.8.2 x x x x x x numexpr/2.7.1-intel-2019b-Python-2.7.16 - x - - - x numexpr/2.7.1-foss-2020a-Python-3.8.2 - x x - x x numexpr/2.7.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nvtop/", "title": "nvtop", "text": ""}, {"location": "available_software/detail/nvtop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which nvtop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using nvtop, load one of these modules using a module load command like:

          module load nvtop/1.2.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty nvtop/1.2.1-GCCcore-10.3.0 x - - - - -"}, {"location": "available_software/detail/olaFlow/", "title": "olaFlow", "text": ""}, {"location": "available_software/detail/olaFlow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which olaFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using olaFlow, load one of these modules using a module load command like:

          module load olaFlow/20210820-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty olaFlow/20210820-foss-2021b x x x - x x"}, {"location": "available_software/detail/olego/", "title": "olego", "text": ""}, {"location": "available_software/detail/olego/#available-modules", "title": "Available modules", "text": "

          The overview below shows which olego installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using olego, load one of these modules using a module load command like:

          module load olego/1.1.9-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty olego/1.1.9-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/onedrive/", "title": "onedrive", "text": ""}, {"location": "available_software/detail/onedrive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which onedrive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using onedrive, load one of these modules using a module load command like:

          module load onedrive/2.4.21-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty onedrive/2.4.21-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/ont-fast5-api/", "title": "ont-fast5-api", "text": ""}, {"location": "available_software/detail/ont-fast5-api/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ont-fast5-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using ont-fast5-api, load one of these modules using a module load command like:

          module load ont-fast5-api/4.1.1-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ont-fast5-api/4.1.1-foss-2022b x x x x x x ont-fast5-api/4.1.1-foss-2022a x x x x x x ont-fast5-api/4.0.2-foss-2021b x x x - x x ont-fast5-api/4.0.0-foss-2021a x x x - x x ont-fast5-api/3.3.0-fosscuda-2020b - - - - x - ont-fast5-api/3.3.0-foss-2020b - x x x x x ont-fast5-api/3.3.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/openCARP/", "title": "openCARP", "text": ""}, {"location": "available_software/detail/openCARP/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openCARP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using openCARP, load one of these modules using a module load command like:

          module load openCARP/6.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openCARP/6.0-foss-2020b - x x x x x openCARP/3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/openkim-models/", "title": "openkim-models", "text": ""}, {"location": "available_software/detail/openkim-models/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openkim-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using openkim-models, load one of these modules using a module load command like:

          module load openkim-models/20190725-intel-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openkim-models/20190725-intel-2019b - x x - x x openkim-models/20190725-foss-2019b - x x - x x"}, {"location": "available_software/detail/openpyxl/", "title": "openpyxl", "text": ""}, {"location": "available_software/detail/openpyxl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openpyxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using openpyxl, load one of these modules using a module load command like:

          module load openpyxl/3.1.2-GCCcore-13.2.0\n
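
          Once loaded, openpyxl can be used to write spreadsheets directly from Python (a minimal sketch; example.xlsx is a placeholder output file name):

          # create a workbook, set one cell and save it
          python -c "from openpyxl import Workbook; wb = Workbook(); wb.active['A1'] = 'hello'; wb.save('example.xlsx')"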

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openpyxl/3.1.2-GCCcore-13.2.0 x x x x x x openpyxl/3.1.2-GCCcore-12.3.0 x x x x x x openpyxl/3.1.2-GCCcore-12.2.0 x x x x x x openpyxl/3.0.10-GCCcore-11.3.0 x x x x x x openpyxl/3.0.9-GCCcore-11.2.0 x x x x x x openpyxl/3.0.7-GCCcore-10.3.0 x x x x x x openpyxl/2.6.4-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/openslide-python/", "title": "openslide-python", "text": ""}, {"location": "available_software/detail/openslide-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which openslide-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using openslide-python, load one of these modules using a module load command like:

          module load openslide-python/1.2.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty openslide-python/1.2.0-GCCcore-11.3.0 x - x - x - openslide-python/1.1.2-GCCcore-11.2.0 x x x - x x openslide-python/1.1.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/orca/", "title": "orca", "text": ""}, {"location": "available_software/detail/orca/#available-modules", "title": "Available modules", "text": "

          The overview below shows which orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using orca, load one of these modules using a module load command like:

          module load orca/1.3.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty orca/1.3.1-GCCcore-10.2.0 - x - - - - orca/1.3.0-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/p11-kit/", "title": "p11-kit", "text": ""}, {"location": "available_software/detail/p11-kit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which p11-kit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using p11-kit, load one of these modules using a module load command like:

          module load p11-kit/0.24.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty p11-kit/0.24.1-GCCcore-11.2.0 x x x x x x p11-kit/0.24.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/p4est/", "title": "p4est", "text": ""}, {"location": "available_software/detail/p4est/#available-modules", "title": "Available modules", "text": "

          The overview below shows which p4est installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using p4est, load one of these modules using a module load command like:

          module load p4est/2.8-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty p4est/2.8-foss-2021a - x x - x x"}, {"location": "available_software/detail/p7zip/", "title": "p7zip", "text": ""}, {"location": "available_software/detail/p7zip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which p7zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using p7zip, load one of these modules using a module load command like:

          module load p7zip/17.03-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty p7zip/17.03-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/pIRS/", "title": "pIRS", "text": ""}, {"location": "available_software/detail/pIRS/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pIRS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using pIRS, load one of these modules using a module load command like:

          module load pIRS/2.0.2-gompi-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pIRS/2.0.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/packmol/", "title": "packmol", "text": ""}, {"location": "available_software/detail/packmol/#available-modules", "title": "Available modules", "text": "

          The overview below shows which packmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using packmol, load one of these modules using a module load command like:

          module load packmol/v20.2.2-iccifort-2020.1.217\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty packmol/v20.2.2-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/pagmo/", "title": "pagmo", "text": ""}, {"location": "available_software/detail/pagmo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pagmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

          To start using pagmo, load one of these modules using a module load command like:

          module load pagmo/2.17.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pagmo/2.17.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/pairtools/", "title": "pairtools", "text": ""}, {"location": "available_software/detail/pairtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pairtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pairtools, load one of these modules using a module load command like:

          module load pairtools/0.3.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pairtools/0.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/panaroo/", "title": "panaroo", "text": ""}, {"location": "available_software/detail/panaroo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which panaroo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using panaroo, load one of these modules using a module load command like:

          module load panaroo/1.2.8-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty panaroo/1.2.8-foss-2020b - x x x x x"}, {"location": "available_software/detail/pandas/", "title": "pandas", "text": ""}, {"location": "available_software/detail/pandas/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pandas, load one of these modules using a module load command like:

          module load pandas/1.1.2-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pandas/1.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel-fastq-dump/", "title": "parallel-fastq-dump", "text": ""}, {"location": "available_software/detail/parallel-fastq-dump/#available-modules", "title": "Available modules", "text": "

          The overview below shows which parallel-fastq-dump installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using parallel-fastq-dump, load one of these modules using a module load command like:

          module load parallel-fastq-dump/0.6.7-gompi-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty parallel-fastq-dump/0.6.7-gompi-2022a x x x x x x parallel-fastq-dump/0.6.7-gompi-2020b - x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-SRA-Toolkit-3.0.0-Python-3.8.2 x x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel/", "title": "parallel", "text": ""}, {"location": "available_software/detail/parallel/#available-modules", "title": "Available modules", "text": "

          The overview below shows which parallel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using parallel, load one of these modules using a module load command like:

          module load parallel/20230722-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
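          As an illustrative sketch (the input names sample1-sample3 are hypothetical), GNU parallel fans a command out over several inputs once the module is loaded:

          module load parallel/20230722-GCCcore-12.2.0
          parallel echo 'processing {}' ::: sample1 sample2 sample3    # runs one command per input, in parallel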

          accelgor doduo donphan gallade joltik skitty parallel/20230722-GCCcore-12.2.0 x x x x x x parallel/20220722-GCCcore-11.3.0 x x x x x x parallel/20210722-GCCcore-11.2.0 - x x x x x parallel/20210622-GCCcore-10.3.0 - x x x x x parallel/20210322-GCCcore-10.2.0 - x x x x x parallel/20200522-GCCcore-9.3.0 - x x - x x parallel/20190922-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/parasail/", "title": "parasail", "text": ""}, {"location": "available_software/detail/parasail/#available-modules", "title": "Available modules", "text": "

          The overview below shows which parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using parasail, load one of these modules using a module load command like:

          module load parasail/2.6-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty parasail/2.6-GCC-11.3.0 x x x x x x parasail/2.5-GCC-11.2.0 x x x - x x parasail/2.4.3-GCC-10.3.0 x x x - x x parasail/2.4.3-GCC-10.2.0 - - x - x - parasail/2.4.2-iccifort-2020.1.217 - x x - x x parasail/2.4.1-intel-2019b - x x - x x parasail/2.4.1-foss-2019b - x - - - - parasail/2.4.1-GCC-8.3.0 - - x - x x"}, {"location": "available_software/detail/patchelf/", "title": "patchelf", "text": ""}, {"location": "available_software/detail/patchelf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which patchelf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using patchelf, load one of these modules using a module load command like:

          module load patchelf/0.18.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty patchelf/0.18.0-GCCcore-13.2.0 x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x patchelf/0.17.2-GCCcore-12.2.0 x x x x x x patchelf/0.15.0-GCCcore-11.3.0 x x x x x x patchelf/0.13-GCCcore-11.2.0 x x x x x x patchelf/0.12-GCCcore-10.3.0 - x x - x x patchelf/0.12-GCCcore-9.3.0 - x x - x x patchelf/0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pauvre/", "title": "pauvre", "text": ""}, {"location": "available_software/detail/pauvre/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pauvre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pauvre, load one of these modules using a module load command like:

          module load pauvre/0.1924-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pauvre/0.1924-intel-2020b - x x - x x pauvre/0.1923-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pblat/", "title": "pblat", "text": ""}, {"location": "available_software/detail/pblat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pblat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pblat, load one of these modules using a module load command like:

          module load pblat/2.5.1-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pblat/2.5.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/pdsh/", "title": "pdsh", "text": ""}, {"location": "available_software/detail/pdsh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pdsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pdsh, load one of these modules using a module load command like:

          module load pdsh/2.34-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
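          For example, assuming password-less SSH access to the target hosts (node01 and node02 are placeholder names), pdsh runs a command on several hosts in parallel:

          module load pdsh/2.34-GCCcore-12.3.0
          pdsh -w node01,node02 uptime    # run uptime on both hosts; each output line is prefixed with the host name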

          accelgor doduo donphan gallade joltik skitty pdsh/2.34-GCCcore-12.3.0 x x x x x x pdsh/2.34-GCCcore-12.2.0 x x x x x x pdsh/2.34-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/peakdetect/", "title": "peakdetect", "text": ""}, {"location": "available_software/detail/peakdetect/#available-modules", "title": "Available modules", "text": "

          The overview below shows which peakdetect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using peakdetect, load one of these modules using a module load command like:

          module load peakdetect/1.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty peakdetect/1.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/petsc4py/", "title": "petsc4py", "text": ""}, {"location": "available_software/detail/petsc4py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which petsc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using petsc4py, load one of these modules using a module load command like:

          module load petsc4py/3.17.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty petsc4py/3.17.4-foss-2022a x x x x x x petsc4py/3.15.0-foss-2021a - x x - x x petsc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pftoolsV3/", "title": "pftoolsV3", "text": ""}, {"location": "available_software/detail/pftoolsV3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pftoolsV3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pftoolsV3, load one of these modules using a module load command like:

          module load pftoolsV3/3.2.11-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pftoolsV3/3.2.11-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phonemizer/", "title": "phonemizer", "text": ""}, {"location": "available_software/detail/phonemizer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phonemizer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using phonemizer, load one of these modules using a module load command like:

          module load phonemizer/2.2.1-gompi-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phonemizer/2.2.1-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/phonopy/", "title": "phonopy", "text": ""}, {"location": "available_software/detail/phonopy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phonopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using phonopy, load one of these modules using a module load command like:

          module load phonopy/2.7.1-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phonopy/2.7.1-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/phototonic/", "title": "phototonic", "text": ""}, {"location": "available_software/detail/phototonic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phototonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using phototonic, load one of these modules using a module load command like:

          module load phototonic/2.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phototonic/2.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phyluce/", "title": "phyluce", "text": ""}, {"location": "available_software/detail/phyluce/#available-modules", "title": "Available modules", "text": "

          The overview below shows which phyluce installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using phyluce, load one of these modules using a module load command like:

          module load phyluce/1.7.3-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty phyluce/1.7.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/picard/", "title": "picard", "text": ""}, {"location": "available_software/detail/picard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which picard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using picard, load one of these modules using a module load command like:

          module load picard/2.25.1-Java-11\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty picard/2.25.1-Java-11 x x x x x x picard/2.25.0-Java-11 - x x x x x picard/2.21.6-Java-11 - x x - x x picard/2.21.1-Java-11 - - x - x x picard/2.18.27-Java-1.8 - - - - - x"}, {"location": "available_software/detail/pigz/", "title": "pigz", "text": ""}, {"location": "available_software/detail/pigz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pigz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pigz, load one of these modules using a module load command like:

          module load pigz/2.8-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
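          A small sketch (data.txt is a placeholder input file): pigz is a multi-threaded drop-in replacement for gzip, which pays off on multi-core nodes:

          module load pigz/2.8-GCCcore-12.3.0
          pigz -p 8 data.txt     # compress data.txt to data.txt.gz using 8 threads
          pigz -d data.txt.gz    # decompress it again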

          accelgor doduo donphan gallade joltik skitty pigz/2.8-GCCcore-12.3.0 x x x x x x pigz/2.7-GCCcore-11.3.0 x x x x x x pigz/2.6-GCCcore-11.2.0 x x x - x x pigz/2.6-GCCcore-10.2.0 - x x x x x pigz/2.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pixman/", "title": "pixman", "text": ""}, {"location": "available_software/detail/pixman/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pixman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pixman, load one of these modules using a module load command like:

          module load pixman/0.42.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pixman/0.42.2-GCCcore-12.3.0 x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x pixman/0.40.0-GCCcore-11.3.0 x x x x x x pixman/0.40.0-GCCcore-11.2.0 x x x x x x pixman/0.40.0-GCCcore-10.3.0 x x x x x x pixman/0.40.0-GCCcore-10.2.0 x x x x x x pixman/0.38.4-GCCcore-9.3.0 x x x x x x pixman/0.38.4-GCCcore-8.3.0 x x x - x x pixman/0.38.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/pkg-config/", "title": "pkg-config", "text": ""}, {"location": "available_software/detail/pkg-config/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pkg-config installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pkg-config, load one of these modules using a module load command like:

          module load pkg-config/0.29.2-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
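          A minimal sketch of typical usage, assuming a library module that ships a .pc file (zlib is only an example) is loaded alongside pkg-config:

          module load pkg-config/0.29.2-GCCcore-12.2.0
          pkg-config --cflags --libs zlib    # print compiler and linker flags for zlib, if its .pc file is on PKG_CONFIG_PATH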

          accelgor doduo donphan gallade joltik skitty pkg-config/0.29.2-GCCcore-12.2.0 x x x x x x pkg-config/0.29.2-GCCcore-11.3.0 x x x x x x pkg-config/0.29.2-GCCcore-11.2.0 x x x x x x pkg-config/0.29.2-GCCcore-10.3.0 x x x x x x pkg-config/0.29.2-GCCcore-10.2.0 x x x x x x pkg-config/0.29.2-GCCcore-9.3.0 x x x x x x pkg-config/0.29.2-GCCcore-8.3.0 x x x - x x pkg-config/0.29.2-GCCcore-8.2.0 - x - - - - pkg-config/0.29.2 x x x - x x"}, {"location": "available_software/detail/pkgconf/", "title": "pkgconf", "text": ""}, {"location": "available_software/detail/pkgconf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pkgconf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pkgconf, load one of these modules using a module load command like:

          module load pkgconf/2.0.3-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x pkgconf/1.8.0-GCCcore-11.3.0 x x x x x x pkgconf/1.8.0-GCCcore-11.2.0 x x x x x x pkgconf/1.8.0 x x x x x x"}, {"location": "available_software/detail/pkgconfig/", "title": "pkgconfig", "text": ""}, {"location": "available_software/detail/pkgconfig/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pkgconfig, load one of these modules using a module load command like:

          module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.2.0-python x x x x x x pkgconfig/1.5.4-GCCcore-10.3.0-python x x x x x x pkgconfig/1.5.1-GCCcore-10.2.0-python x x x x x x pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x pkgconfig/1.5.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/plot1cell/", "title": "plot1cell", "text": ""}, {"location": "available_software/detail/plot1cell/#available-modules", "title": "Available modules", "text": "

          The overview below shows which plot1cell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using plot1cell, load one of these modules using a module load command like:

          module load plot1cell/0.0.1-foss-2022b-R-4.2.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty plot1cell/0.0.1-foss-2022b-R-4.2.2 x x x x x x plot1cell/0.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/plotly-orca/", "title": "plotly-orca", "text": ""}, {"location": "available_software/detail/plotly-orca/#available-modules", "title": "Available modules", "text": "

          The overview below shows which plotly-orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using plotly-orca, load one of these modules using a module load command like:

          module load plotly-orca/1.3.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty plotly-orca/1.3.1-GCCcore-10.2.0 - x x x x x plotly-orca/1.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/plotly.py/", "title": "plotly.py", "text": ""}, {"location": "available_software/detail/plotly.py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which plotly.py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using plotly.py, load one of these modules using a module load command like:

          module load plotly.py/5.16.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty plotly.py/5.16.0-GCCcore-12.3.0 x x x x x x plotly.py/5.13.1-GCCcore-12.2.0 x x x x x x plotly.py/5.12.0-GCCcore-11.3.0 x x x x x x plotly.py/5.10.0-GCCcore-11.3.0 x x x - x x plotly.py/5.4.0-GCCcore-11.2.0 x x x - x x plotly.py/5.1.0-GCCcore-10.3.0 x x x - x x plotly.py/4.14.3-GCCcore-10.2.0 - x x x x x plotly.py/4.8.1-GCCcore-9.3.0 - x x - x x plotly.py/4.4.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/pocl/", "title": "pocl", "text": ""}, {"location": "available_software/detail/pocl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pocl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pocl, load one of these modules using a module load command like:

          module load pocl/4.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pocl/4.0-GCC-12.3.0 x x x x x x pocl/3.0-GCC-11.3.0 x x x - x x pocl/1.8-GCC-11.3.0-CUDA-11.7.0 x - - - x - pocl/1.8-GCC-11.3.0 x x x x x x pocl/1.8-GCC-11.2.0 x x x - x x pocl/1.6-gcccuda-2020b - - - - x - pocl/1.6-GCC-10.2.0 - x x x x x pocl/1.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/pod5-file-format/", "title": "pod5-file-format", "text": ""}, {"location": "available_software/detail/pod5-file-format/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pod5-file-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pod5-file-format, load one of these modules using a module load command like:

          module load pod5-file-format/0.1.8-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pod5-file-format/0.1.8-foss-2022a x x x x x x"}, {"location": "available_software/detail/poetry/", "title": "poetry", "text": ""}, {"location": "available_software/detail/poetry/#available-modules", "title": "Available modules", "text": "

          The overview below shows which poetry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using poetry, load one of these modules using a module load command like:

          module load poetry/1.7.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
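          As a sketch of a typical workflow (the project name myproject and the dependency requests are arbitrary examples), poetry scaffolds a Python project and manages its dependencies:

          module load poetry/1.7.1-GCCcore-12.3.0
          poetry new myproject     # create a new project skeleton with a pyproject.toml
          cd myproject
          poetry add requests      # add a dependency and write a lock file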

          accelgor doduo donphan gallade joltik skitty poetry/1.7.1-GCCcore-12.3.0 x x x x x x poetry/1.6.1-GCCcore-13.2.0 x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/polars/", "title": "polars", "text": ""}, {"location": "available_software/detail/polars/#available-modules", "title": "Available modules", "text": "

          The overview below shows which polars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using polars, load one of these modules using a module load command like:

          module load polars/0.15.6-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty polars/0.15.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/poppler/", "title": "poppler", "text": ""}, {"location": "available_software/detail/poppler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which poppler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using poppler, load one of these modules using a module load command like:

          module load poppler/23.09.0-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty poppler/23.09.0-GCC-12.3.0 x x x x x x poppler/22.01.0-GCC-11.2.0 x x x - x x poppler/21.06.1-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/popscle/", "title": "popscle", "text": ""}, {"location": "available_software/detail/popscle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which popscle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using popscle, load one of these modules using a module load command like:

          module load popscle/0.1-beta-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty popscle/0.1-beta-foss-2019b - x x - x x popscle/0.1-beta-20210505-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/porefoam/", "title": "porefoam", "text": ""}, {"location": "available_software/detail/porefoam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which porefoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using porefoam, load one of these modules using a module load command like:

          module load porefoam/2021-09-21-foss-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty porefoam/2021-09-21-foss-2020a - x x - x x"}, {"location": "available_software/detail/powerlaw/", "title": "powerlaw", "text": ""}, {"location": "available_software/detail/powerlaw/#available-modules", "title": "Available modules", "text": "

          The overview below shows which powerlaw installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using powerlaw, load one of these modules using a module load command like:

          module load powerlaw/1.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty powerlaw/1.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/pplacer/", "title": "pplacer", "text": ""}, {"location": "available_software/detail/pplacer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pplacer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pplacer, load one of these modules using a module load command like:

          module load pplacer/1.1.alpha19\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pplacer/1.1.alpha19 x x x x x x"}, {"location": "available_software/detail/preseq/", "title": "preseq", "text": ""}, {"location": "available_software/detail/preseq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which preseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using preseq, load one of these modules using a module load command like:

          module load preseq/3.2.0-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty preseq/3.2.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/presto/", "title": "presto", "text": ""}, {"location": "available_software/detail/presto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which presto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using presto, load one of these modules using a module load command like:

          module load presto/1.0.0-20230501-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty presto/1.0.0-20230501-foss-2023a-R-4.3.2 x x x x x x presto/1.0.0-20230113-foss-2022a-R-4.2.1 x x x x x x presto/1.0.0-20200718-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/pretty-yaml/", "title": "pretty-yaml", "text": ""}, {"location": "available_software/detail/pretty-yaml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pretty-yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pretty-yaml, load one of these modules using a module load command like:

          module load pretty-yaml/21.10.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pretty-yaml/21.10.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/prodigal/", "title": "prodigal", "text": ""}, {"location": "available_software/detail/prodigal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which prodigal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using prodigal, load one of these modules using a module load command like:

          module load prodigal/2.6.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
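          For instance (genome.fna is a placeholder input FASTA file), prodigal predicts genes in a genome assembly:

          module load prodigal/2.6.3-GCCcore-12.3.0
          prodigal -i genome.fna -o genes.gbk -a proteins.faa    # gene calls to genes.gbk, protein translations to proteins.faa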

          accelgor doduo donphan gallade joltik skitty prodigal/2.6.3-GCCcore-12.3.0 x x x x x x prodigal/2.6.3-GCCcore-12.2.0 x x x x x x prodigal/2.6.3-GCCcore-11.3.0 x x x x x x prodigal/2.6.3-GCCcore-11.2.0 x x x x x x prodigal/2.6.3-GCCcore-10.2.0 x x x x x x prodigal/2.6.3-GCCcore-9.3.0 - x x - x x prodigal/2.6.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/prokka/", "title": "prokka", "text": ""}, {"location": "available_software/detail/prokka/#available-modules", "title": "Available modules", "text": "

          The overview below shows which prokka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using prokka, load one of these modules using a module load command like:

          module load prokka/1.14.5-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty prokka/1.14.5-gompi-2020b - x x x x x prokka/1.14.5-gompi-2019b - x x - x x"}, {"location": "available_software/detail/protobuf-python/", "title": "protobuf-python", "text": ""}, {"location": "available_software/detail/protobuf-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using protobuf-python, load one of these modules using a module load command like:

          module load protobuf-python/4.24.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x protobuf-python/4.23.0-GCCcore-12.2.0 x x x x x x protobuf-python/3.19.4-GCCcore-11.3.0 x x x x x x protobuf-python/3.17.3-GCCcore-11.2.0 x x x x x x protobuf-python/3.17.3-GCCcore-10.3.0 x x x x x x protobuf-python/3.14.0-GCCcore-10.2.0 x x x x x x protobuf-python/3.13.0-foss-2020a-Python-3.8.2 - x x - x x protobuf-python/3.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/protobuf/", "title": "protobuf", "text": ""}, {"location": "available_software/detail/protobuf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which protobuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using protobuf, load one of these modules using a module load command like:

          module load protobuf/24.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty protobuf/24.0-GCCcore-12.3.0 x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x protobuf/3.19.4-GCCcore-11.3.0 x x x x x x protobuf/3.17.3-GCCcore-11.2.0 x x x x x x protobuf/3.17.3-GCCcore-10.3.0 x x x x x x protobuf/3.14.0-GCCcore-10.2.0 x x x x x x protobuf/3.13.0-GCCcore-9.3.0 - x x - x x protobuf/3.10.0-GCCcore-8.3.0 - x x - x x protobuf/2.5.0-GCCcore-10.2.0 - x x - x x protobuf/2.5.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/psutil/", "title": "psutil", "text": ""}, {"location": "available_software/detail/psutil/#available-modules", "title": "Available modules", "text": "

          The overview below shows which psutil installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using psutil, load one of these modules using a module load command like:

          module load psutil/5.9.5-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
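          A minimal sketch, assuming the module brings a matching Python interpreter along as a dependency (as these Python-package modules typically do):

          module load psutil/5.9.5-GCCcore-12.2.0
          python -c 'import psutil; print(psutil.cpu_count(), psutil.virtual_memory().total)'    # logical core count and total RAM in bytes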

          accelgor doduo donphan gallade joltik skitty psutil/5.9.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/psycopg2/", "title": "psycopg2", "text": ""}, {"location": "available_software/detail/psycopg2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which psycopg2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using psycopg2, load one of these modules using a module load command like:

          module load psycopg2/2.9.6-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty psycopg2/2.9.6-GCCcore-11.3.0 x x x x x x psycopg2/2.9.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pugixml/", "title": "pugixml", "text": ""}, {"location": "available_software/detail/pugixml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pugixml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pugixml, load one of these modules using a module load command like:

          module load pugixml/1.12.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pugixml/1.12.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pullseq/", "title": "pullseq", "text": ""}, {"location": "available_software/detail/pullseq/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pullseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pullseq, load one of these modules using a module load command like:

          module load pullseq/1.0.2-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pullseq/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/purge_dups/", "title": "purge_dups", "text": ""}, {"location": "available_software/detail/purge_dups/#available-modules", "title": "Available modules", "text": "

          The overview below shows which purge_dups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using purge_dups, load one of these modules using a module load command like:

          module load purge_dups/1.2.5-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty purge_dups/1.2.5-foss-2021b x x x - x x"}, {"location": "available_software/detail/pv/", "title": "pv", "text": ""}, {"location": "available_software/detail/pv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pv, load one of these modules using a module load command like:

          module load pv/1.7.24-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
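          A short sketch (input.tar.gz is a placeholder archive): pv shows a progress bar for data flowing through a pipe:

          module load pv/1.7.24-GCCcore-12.3.0
          pv input.tar.gz | tar -xzf -    # extract the archive while displaying throughput and progress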

          accelgor doduo donphan gallade joltik skitty pv/1.7.24-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/py-cpuinfo/", "title": "py-cpuinfo", "text": ""}, {"location": "available_software/detail/py-cpuinfo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which py-cpuinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using py-cpuinfo, load one of these modules using a module load command like:

          module load py-cpuinfo/9.0.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
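          Assuming a matching Python is loaded along with the module, py-cpuinfo can be run directly as a Python module to inspect the CPU of the node you are on; a minimal sketch:

          module load py-cpuinfo/9.0.0-GCCcore-12.2.0
          python -m cpuinfo    # report vendor, model name, flags and clock speed of the current CPU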

          accelgor doduo donphan gallade joltik skitty py-cpuinfo/9.0.0-GCCcore-12.2.0 x x x x x x py-cpuinfo/9.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/py3Dmol/", "title": "py3Dmol", "text": ""}, {"location": "available_software/detail/py3Dmol/#available-modules", "title": "Available modules", "text": "

          The overview below shows which py3Dmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using py3Dmol, load one of these modules using a module load command like:

          module load py3Dmol/2.0.1.post1-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty py3Dmol/2.0.1.post1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pyBigWig/", "title": "pyBigWig", "text": ""}, {"location": "available_software/detail/pyBigWig/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyBigWig, load one of these modules using a module load command like:

          module load pyBigWig/0.3.18-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyBigWig/0.3.18-foss-2022a x x x x x x pyBigWig/0.3.18-foss-2021b x x x - x x pyBigWig/0.3.18-GCCcore-10.2.0 - x x x x x pyBigWig/0.3.17-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/pyEGA3/", "title": "pyEGA3", "text": ""}, {"location": "available_software/detail/pyEGA3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyEGA3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyEGA3, load one of these modules using a module load command like:

          module load pyEGA3/5.0.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyEGA3/5.0.2-GCCcore-12.3.0 x x x x x x pyEGA3/4.0.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/pyGenomeTracks/", "title": "pyGenomeTracks", "text": ""}, {"location": "available_software/detail/pyGenomeTracks/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyGenomeTracks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyGenomeTracks, load one of these modules using a module load command like:

          module load pyGenomeTracks/3.8-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyGenomeTracks/3.8-foss-2022a x x x x x x pyGenomeTracks/3.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/pySCENIC/", "title": "pySCENIC", "text": ""}, {"location": "available_software/detail/pySCENIC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pySCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pySCENIC, load one of these modules using a module load command like:

          module load pySCENIC/0.10.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pySCENIC/0.10.3-intel-2020a-Python-3.8.2 - x x - x x pySCENIC/0.10.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyWannier90/", "title": "pyWannier90", "text": ""}, {"location": "available_software/detail/pyWannier90/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyWannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyWannier90, load one of these modules using a module load command like:

          module load pyWannier90/2021-12-07-gomkl-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyWannier90/2021-12-07-gomkl-2021a x x x - x x pyWannier90/2021-12-07-foss-2021a x x x - x x"}, {"location": "available_software/detail/pybedtools/", "title": "pybedtools", "text": ""}, {"location": "available_software/detail/pybedtools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pybedtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pybedtools, load one of these modules using a module load command like:

          module load pybedtools/0.9.0-GCC-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pybedtools/0.9.0-GCC-12.2.0 x x x x x x pybedtools/0.9.0-GCC-11.3.0 x x x x x x pybedtools/0.8.2-GCC-11.2.0-Python-2.7.18 x x x x x x pybedtools/0.8.2-GCC-11.2.0 x x x - x x pybedtools/0.8.2-GCC-10.2.0-Python-2.7.18 - x x x x x pybedtools/0.8.2-GCC-10.2.0 - x x x x x pybedtools/0.8.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/pybind11/", "title": "pybind11", "text": ""}, {"location": "available_software/detail/pybind11/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pybind11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pybind11, load one of these modules using a module load command like:

          module load pybind11/2.11.1-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pybind11/2.11.1-GCCcore-13.2.0 x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x pybind11/2.9.2-GCCcore-11.3.0 x x x x x x pybind11/2.7.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x pybind11/2.7.1-GCCcore-11.2.0 x x x x x x pybind11/2.6.2-GCCcore-10.3.0 x x x x x x pybind11/2.6.0-GCCcore-10.2.0 x x x x x x pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2 x x x x x x pybind11/2.4.3-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycocotools/", "title": "pycocotools", "text": ""}, {"location": "available_software/detail/pycocotools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pycocotools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pycocotools, load one of these modules using a module load command like:

          module load pycocotools/2.0.4-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pycocotools/2.0.4-foss-2021a x x x - x x pycocotools/2.0.1-foss-2019b-Python-3.7.4 - x x - x x pycocotools/2.0.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycodestyle/", "title": "pycodestyle", "text": ""}, {"location": "available_software/detail/pycodestyle/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pycodestyle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pycodestyle, load one of these modules using a module load command like:

          module load pycodestyle/2.11.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pycodestyle/2.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/pydantic/", "title": "pydantic", "text": ""}, {"location": "available_software/detail/pydantic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pydantic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pydantic, load one of these modules using a module load command like:

          module load pydantic/2.5.3-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pydantic/2.5.3-GCCcore-12.3.0 x x x x x x pydantic/2.5.3-GCCcore-12.2.0 x x x x x x pydantic/1.10.13-GCCcore-12.3.0 x x x x x x pydantic/1.10.4-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/pydicom/", "title": "pydicom", "text": ""}, {"location": "available_software/detail/pydicom/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pydicom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pydicom, load one of these modules using a module load command like:

          module load pydicom/2.3.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pydicom/2.3.0-GCCcore-11.3.0 x x x x x x pydicom/2.2.2-GCCcore-10.3.0 x x x - x x pydicom/2.1.2-GCCcore-10.2.0 x x x x x x pydicom/1.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pydot/", "title": "pydot", "text": ""}, {"location": "available_software/detail/pydot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pydot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pydot, load one of these modules using a module load command like:

          module load pydot/1.4.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)
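          A minimal sketch of building a graph with pydot from Python (the node names a and b are arbitrary), assuming a matching Python comes in with the module:

          module load pydot/1.4.2-GCCcore-11.3.0
          python -c 'import pydot; g = pydot.Dot(graph_type="digraph"); g.add_edge(pydot.Edge("a", "b")); print(g.to_string())'    # prints the graph in DOT format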

          accelgor doduo donphan gallade joltik skitty pydot/1.4.2-GCCcore-11.3.0 x x x x x x pydot/1.4.2-GCCcore-11.2.0 x x x x x x pydot/1.4.2-GCCcore-10.3.0 x x x x x x pydot/1.4.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/pyfaidx/", "title": "pyfaidx", "text": ""}, {"location": "available_software/detail/pyfaidx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyfaidx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyfaidx, load one of these modules using a module load command like:

          module load pyfaidx/0.7.2.1-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x pyfaidx/0.7.1-GCCcore-11.3.0 x x x x x x pyfaidx/0.7.0-GCCcore-11.2.0 x x x - x x pyfaidx/0.6.3.1-GCCcore-10.3.0 x x x - x x pyfaidx/0.5.9.5-GCCcore-10.2.0 - x x x x x pyfaidx/0.5.9.5-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyfasta/", "title": "pyfasta", "text": ""}, {"location": "available_software/detail/pyfasta/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyfasta, load one of these modules using a module load command like:

          module load pyfasta/0.5.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyfasta/0.5.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygmo/", "title": "pygmo", "text": ""}, {"location": "available_software/detail/pygmo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pygmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pygmo, load one of these modules using a module load command like:

          module load pygmo/2.16.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pygmo/2.16.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygraphviz/", "title": "pygraphviz", "text": ""}, {"location": "available_software/detail/pygraphviz/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pygraphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pygraphviz, load one of these modules using a module load command like:

          module load pygraphviz/1.11-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pygraphviz/1.11-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pyiron/", "title": "pyiron", "text": ""}, {"location": "available_software/detail/pyiron/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyiron installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyiron, load one of these modules using a module load command like:

          module load pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2 x x x x x x pyiron/0.2.6-hpcugent-2022c-intel-2020a-Python-3.8.2 - - - - - x pyiron/0.2.6-hpcugent-2022b-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2022-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2021-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2020-intel-2020a-Python-3.8.2 - x x - x -"}, {"location": "available_software/detail/pymatgen/", "title": "pymatgen", "text": ""}, {"location": "available_software/detail/pymatgen/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pymatgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pymatgen, load one of these modules using a module load command like:

          module load pymatgen/2022.9.21-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pymatgen/2022.9.21-foss-2022a x x x - x x pymatgen/2022.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/pymbar/", "title": "pymbar", "text": ""}, {"location": "available_software/detail/pymbar/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pymbar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pymbar, load one of these modules using a module load command like:

          module load pymbar/3.0.3-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pymbar/3.0.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pymca/", "title": "pymca", "text": ""}, {"location": "available_software/detail/pymca/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pymca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pymca, load one of these modules using a module load command like:

          module load pymca/5.6.3-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pymca/5.6.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/pyobjcryst/", "title": "pyobjcryst", "text": ""}, {"location": "available_software/detail/pyobjcryst/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyobjcryst, load one of these modules using a module load command like:

          module load pyobjcryst/2.2.1-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyobjcryst/2.2.1-intel-2020a-Python-3.8.2 - - - - - x pyobjcryst/2.2.1-foss-2021b x x x - x x pyobjcryst/2.1.0.post2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyodbc/", "title": "pyodbc", "text": ""}, {"location": "available_software/detail/pyodbc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyodbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyodbc, load one of these modules using a module load command like:

          module load pyodbc/4.0.39-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyodbc/4.0.39-foss-2022b x x x x x x"}, {"location": "available_software/detail/pyparsing/", "title": "pyparsing", "text": ""}, {"location": "available_software/detail/pyparsing/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyparsing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyparsing, load one of these modules using a module load command like:

          module load pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/pyproj/", "title": "pyproj", "text": ""}, {"location": "available_software/detail/pyproj/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyproj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyproj, load one of these modules using a module load command like:

          module load pyproj/3.6.0-GCCcore-12.3.0\n
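
          For example, once the module is loaded you could run a quick coordinate transformation to check that the PROJ data files are found (a minimal sketch; the coordinates are arbitrary placeholders):

          module load pyproj/3.6.0-GCCcore-12.3.0
          # transform a WGS84 lon/lat pair to Web Mercator
          python -c "from pyproj import Transformer; t = Transformer.from_crs('EPSG:4326', 'EPSG:3857', always_xy=True); print(t.transform(4.4, 51.2))"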

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyproj/3.6.0-GCCcore-12.3.0 x x x x x x pyproj/3.5.0-GCCcore-12.2.0 x x x x x x pyproj/3.4.0-GCCcore-11.3.0 x x x x x x pyproj/3.3.1-GCCcore-11.2.0 x x x - x x pyproj/3.0.1-GCCcore-10.2.0 - x x x x x pyproj/2.6.1.post1-GCCcore-9.3.0-Python-3.8.2 - x x - x x pyproj/2.4.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyro-api/", "title": "pyro-api", "text": ""}, {"location": "available_software/detail/pyro-api/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyro-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyro-api, load one of these modules using a module load command like:

          module load pyro-api/0.1.2-fosscuda-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyro-api/0.1.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pyro-ppl/", "title": "pyro-ppl", "text": ""}, {"location": "available_software/detail/pyro-ppl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyro-ppl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyro-ppl, load one of these modules using a module load command like:

          module load pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0 x - x - x - pyro-ppl/1.8.4-foss-2022a x x x x x x pyro-ppl/1.5.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pysamstats/", "title": "pysamstats", "text": ""}, {"location": "available_software/detail/pysamstats/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pysamstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pysamstats, load one of these modules using a module load command like:

          module load pysamstats/1.1.2-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pysamstats/1.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pysndfx/", "title": "pysndfx", "text": ""}, {"location": "available_software/detail/pysndfx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pysndfx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pysndfx, load one of these modules using a module load command like:

          module load pysndfx/0.3.6-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pysndfx/0.3.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyspoa/", "title": "pyspoa", "text": ""}, {"location": "available_software/detail/pyspoa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pyspoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pyspoa, load one of these modules using a module load command like:

          module load pyspoa/0.0.9-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pyspoa/0.0.9-GCC-11.3.0 x x x x x x pyspoa/0.0.8-GCC-11.2.0 x x x - x x pyspoa/0.0.8-GCC-10.3.0 x x x - x x pyspoa/0.0.8-GCC-10.2.0 - x x x x x pyspoa/0.0.4-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pytest-flakefinder/", "title": "pytest-flakefinder", "text": ""}, {"location": "available_software/detail/pytest-flakefinder/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pytest-flakefinder, load one of these modules using a module load command like:

          module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pytest-rerunfailures/", "title": "pytest-rerunfailures", "text": ""}, {"location": "available_software/detail/pytest-rerunfailures/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pytest-rerunfailures, load one of these modules using a module load command like:

          module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x pytest-rerunfailures/12.0-GCCcore-12.2.0 x x x x x x pytest-rerunfailures/11.1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-shard/", "title": "pytest-shard", "text": ""}, {"location": "available_software/detail/pytest-shard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pytest-shard, load one of these modules using a module load command like:

          module load pytest-shard/0.1.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x pytest-shard/0.1.2-GCCcore-12.2.0 x x x x x x pytest-shard/0.1.2-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-xdist/", "title": "pytest-xdist", "text": ""}, {"location": "available_software/detail/pytest-xdist/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest-xdist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pytest-xdist, load one of these modules using a module load command like:

          module load pytest-xdist/3.3.1-GCCcore-12.3.0\n
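
          pytest-xdist adds the -n option to pytest so tests can run across multiple worker processes; a minimal sketch (tests/ is a placeholder for your own test suite):

          module load pytest-xdist/3.3.1-GCCcore-12.3.0
          # run the test suite on 4 worker processes; -n auto would use all available cores
          pytest -n 4 tests/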

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest-xdist/3.3.1-GCCcore-12.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.2.0 x - x - x - pytest-xdist/2.3.0-GCCcore-10.3.0 x x x x x x pytest-xdist/2.3.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/pytest/", "title": "pytest", "text": ""}, {"location": "available_software/detail/pytest/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pytest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pytest, load one of these modules using a module load command like:

          module load pytest/7.4.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pytest/7.4.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pythermalcomfort/", "title": "pythermalcomfort", "text": ""}, {"location": "available_software/detail/pythermalcomfort/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pythermalcomfort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pythermalcomfort, load one of these modules using a module load command like:

          module load pythermalcomfort/2.8.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pythermalcomfort/2.8.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-Levenshtein/", "title": "python-Levenshtein", "text": ""}, {"location": "available_software/detail/python-Levenshtein/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-Levenshtein, load one of these modules using a module load command like:

          module load python-Levenshtein/0.12.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-Levenshtein/0.12.1-foss-2020b - x x x x x python-Levenshtein/0.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-igraph/", "title": "python-igraph", "text": ""}, {"location": "available_software/detail/python-igraph/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-igraph, load one of these modules using a module load command like:

          module load python-igraph/0.11.4-foss-2023a\n
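
          After loading the module, a short interactive check could use one of igraph's built-in example graphs (a minimal sketch):

          module load python-igraph/0.11.4-foss-2023a
          # build the Zachary karate club graph and print its number of vertices and edges
          python -c "import igraph; g = igraph.Graph.Famous('Zachary'); print(g.vcount(), g.ecount())"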

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-igraph/0.11.4-foss-2023a x x x x x x python-igraph/0.10.3-foss-2022a x x x x x x python-igraph/0.9.8-foss-2021b x x x x x x python-igraph/0.9.6-foss-2021a x x x x x x python-igraph/0.9.0-fosscuda-2020b - - - - x - python-igraph/0.9.0-foss-2020b - x x x x x python-igraph/0.8.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/python-irodsclient/", "title": "python-irodsclient", "text": ""}, {"location": "available_software/detail/python-irodsclient/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-irodsclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-irodsclient, load one of these modules using a module load command like:

          module load python-irodsclient/1.1.4-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-irodsclient/1.1.4-GCCcore-11.2.0 x x x - x x python-irodsclient/1.1.4-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-isal/", "title": "python-isal", "text": ""}, {"location": "available_software/detail/python-isal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-isal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-isal, load one of these modules using a module load command like:

          module load python-isal/1.1.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-isal/1.1.0-GCCcore-11.3.0 x x x x x x python-isal/0.11.1-GCCcore-11.2.0 x x x - x x python-isal/0.11.1-GCCcore-10.2.0 - x x x x x python-isal/0.11.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-louvain/", "title": "python-louvain", "text": ""}, {"location": "available_software/detail/python-louvain/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-louvain, load one of these modules using a module load command like:

          module load python-louvain/0.16-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-louvain/0.16-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-parasail/", "title": "python-parasail", "text": ""}, {"location": "available_software/detail/python-parasail/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-parasail, load one of these modules using a module load command like:

          module load python-parasail/1.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-parasail/1.3.3-foss-2022a x x x x x x python-parasail/1.2.4-fosscuda-2020b - - - - x - python-parasail/1.2.4-foss-2021b x x x - x x python-parasail/1.2.4-foss-2021a x x x - x x python-parasail/1.2.2-intel-2020a-Python-3.8.2 - x x - x x python-parasail/1.2-intel-2019b-Python-3.7.4 - x x - x x python-parasail/1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-telegram-bot/", "title": "python-telegram-bot", "text": ""}, {"location": "available_software/detail/python-telegram-bot/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-telegram-bot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-telegram-bot, load one of these modules using a module load command like:

          module load python-telegram-bot/20.0a0-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-telegram-bot/20.0a0-GCCcore-10.2.0 x x x - x x"}, {"location": "available_software/detail/python-weka-wrapper3/", "title": "python-weka-wrapper3", "text": ""}, {"location": "available_software/detail/python-weka-wrapper3/#available-modules", "title": "Available modules", "text": "

          The overview below shows which python-weka-wrapper3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using python-weka-wrapper3, load one of these modules using a module load command like:

          module load python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pythran/", "title": "pythran", "text": ""}, {"location": "available_software/detail/pythran/#available-modules", "title": "Available modules", "text": "

          The overview below shows which pythran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using pythran, load one of these modules using a module load command like:

          module load pythran/0.9.4.post1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty pythran/0.9.4.post1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qcat/", "title": "qcat", "text": ""}, {"location": "available_software/detail/qcat/#available-modules", "title": "Available modules", "text": "

          The overview below shows which qcat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using qcat, load one of these modules using a module load command like:

          module load qcat/1.1.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty qcat/1.1.0-intel-2020a-Python-3.8.2 - x x - x x qcat/1.1.0-intel-2019b-Python-3.7.4 - x x - x x qcat/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qnorm/", "title": "qnorm", "text": ""}, {"location": "available_software/detail/qnorm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which qnorm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using qnorm, load one of these modules using a module load command like:

          module load qnorm/0.8.1-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty qnorm/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/rMATS-turbo/", "title": "rMATS-turbo", "text": ""}, {"location": "available_software/detail/rMATS-turbo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rMATS-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rMATS-turbo, load one of these modules using a module load command like:

          module load rMATS-turbo/4.1.1-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rMATS-turbo/4.1.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/radian/", "title": "radian", "text": ""}, {"location": "available_software/detail/radian/#available-modules", "title": "Available modules", "text": "

          The overview below shows which radian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using radian, load one of these modules using a module load command like:

          module load radian/0.6.9-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty radian/0.6.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/rasterio/", "title": "rasterio", "text": ""}, {"location": "available_software/detail/rasterio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rasterio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rasterio, load one of these modules using a module load command like:

          module load rasterio/1.3.8-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rasterio/1.3.8-foss-2022b x x x x x x rasterio/1.2.10-foss-2021b x x x - x x rasterio/1.1.7-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rasterstats/", "title": "rasterstats", "text": ""}, {"location": "available_software/detail/rasterstats/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rasterstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rasterstats, load one of these modules using a module load command like:

          module load rasterstats/0.15.0-foss-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rasterstats/0.15.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rclone/", "title": "rclone", "text": ""}, {"location": "available_software/detail/rclone/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rclone, load one of these modules using a module load command like:

          module load rclone/1.65.2\n
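
          rclone is a command-line tool rather than a library; a minimal sketch (myremote is a placeholder for a remote you have set up with rclone config):

          module load rclone/1.65.2
          rclone version
          # list configured remotes, then list files on one of them
          rclone listremotes
          rclone ls myremote:some/path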

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rclone/1.65.2 x x x x x x"}, {"location": "available_software/detail/re2c/", "title": "re2c", "text": ""}, {"location": "available_software/detail/re2c/#available-modules", "title": "Available modules", "text": "

          The overview below shows which re2c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using re2c, load one of these modules using a module load command like:

          module load re2c/3.1-GCCcore-12.3.0\n
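
          re2c is a lexer generator; a minimal sketch of generating C code from a specification (lexer.re is a placeholder for your own input file):

          module load re2c/3.1-GCCcore-12.3.0
          re2c --version
          # generate a C scanner from the re2c specification
          re2c -o lexer.c lexer.re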

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty re2c/3.1-GCCcore-12.3.0 x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x re2c/2.2-GCCcore-11.3.0 x x x x x x re2c/2.2-GCCcore-11.2.0 x x x x x x re2c/2.1.1-GCCcore-10.3.0 x x x x x x re2c/2.0.3-GCCcore-10.2.0 x x x x x x re2c/1.3-GCCcore-9.3.0 - x x - x x re2c/1.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/redis-py/", "title": "redis-py", "text": ""}, {"location": "available_software/detail/redis-py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which redis-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using redis-py, load one of these modules using a module load command like:

          module load redis-py/4.5.1-foss-2022a\n
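
          redis-py is the Python client for Redis; a minimal sketch that only constructs a client object (actually issuing commands would additionally require a running Redis server, here assumed to be on localhost):

          module load redis-py/4.5.1-foss-2022a
          # construct a client; no connection is made until a command is sent
          python -c "import redis; r = redis.Redis(host='localhost', port=6379, db=0); print(r)"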

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty redis-py/4.5.1-foss-2022a x x x x x x redis-py/4.3.3-foss-2021b x x x - x x redis-py/4.3.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/regionmask/", "title": "regionmask", "text": ""}, {"location": "available_software/detail/regionmask/#available-modules", "title": "Available modules", "text": "

          The overview below shows which regionmask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using regionmask, load one of these modules using a module load command like:

          module load regionmask/0.10.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty regionmask/0.10.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/request/", "title": "request", "text": ""}, {"location": "available_software/detail/request/#available-modules", "title": "Available modules", "text": "

          The overview below shows which request installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using request, load one of these modules using a module load command like:

          module load request/2.88.1-fosscuda-2020b-nodejs-12.19.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty request/2.88.1-fosscuda-2020b-nodejs-12.19.0 - - - - x -"}, {"location": "available_software/detail/rethinking/", "title": "rethinking", "text": ""}, {"location": "available_software/detail/rethinking/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rethinking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rethinking, load one of these modules using a module load command like:

          module load rethinking/2.40-20230914-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rethinking/2.40-20230914-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/rgdal/", "title": "rgdal", "text": ""}, {"location": "available_software/detail/rgdal/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rgdal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rgdal, load one of these modules using a module load command like:

          module load rgdal/1.5-23-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rgdal/1.5-23-foss-2021a-R-4.1.0 - x x - x x rgdal/1.5-23-foss-2020b-R-4.0.4 - x x x x x rgdal/1.5-16-foss-2020a-R-4.0.0 - x x - x x rgdal/1.4-8-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rgeos/", "title": "rgeos", "text": ""}, {"location": "available_software/detail/rgeos/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rgeos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rgeos, load one of these modules using a module load command like:

          module load rgeos/0.5-5-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rgeos/0.5-5-foss-2021a-R-4.1.0 - x x - x x rgeos/0.5-5-foss-2020a-R-4.0.0 - x x - x x rgeos/0.5-2-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rickflow/", "title": "rickflow", "text": ""}, {"location": "available_software/detail/rickflow/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rickflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rickflow, load one of these modules using a module load command like:

          module load rickflow/0.7.0-intel-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rickflow/0.7.0-intel-2019b-Python-3.7.4 - x x - x x rickflow/0.7.0-20200529-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rioxarray/", "title": "rioxarray", "text": ""}, {"location": "available_software/detail/rioxarray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rioxarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rioxarray, load one of these modules using a module load command like:

          module load rioxarray/0.11.1-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rioxarray/0.11.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/rjags/", "title": "rjags", "text": ""}, {"location": "available_software/detail/rjags/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rjags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rjags, load one of these modules using a module load command like:

          module load rjags/4-13-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rjags/4-13-foss-2022a-R-4.2.1 x x x x x x rjags/4-13-foss-2021b-R-4.2.0 x x x - x x rjags/4-10-foss-2020b-R-4.0.3 x x x x x x"}, {"location": "available_software/detail/rmarkdown/", "title": "rmarkdown", "text": ""}, {"location": "available_software/detail/rmarkdown/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rmarkdown installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rmarkdown, load one of these modules using a module load command like:

          module load rmarkdown/2.20-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rmarkdown/2.20-foss-2021a-R-4.1.0 - x x x x x"}, {"location": "available_software/detail/rpy2/", "title": "rpy2", "text": ""}, {"location": "available_software/detail/rpy2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rpy2, load one of these modules using a module load command like:

          module load rpy2/3.5.10-foss-2022a\n
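
          rpy2 lets Python drive an embedded R session; a quick check after loading the module might look like this (a minimal sketch, assuming the module's R dependency is loaded automatically along with it, as is usual for these modules):

          module load rpy2/3.5.10-foss-2022a
          # evaluate a small R expression from Python; should print 55
          python -c "import rpy2.robjects as ro; print(ro.r('sum(1:10)')[0])"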

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rpy2/3.5.10-foss-2022a x x x x x x rpy2/3.4.5-foss-2021b x x x x x x rpy2/3.4.5-foss-2021a x x x x x x rpy2/3.2.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rstanarm/", "title": "rstanarm", "text": ""}, {"location": "available_software/detail/rstanarm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rstanarm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rstanarm, load one of these modules using a module load command like:

          module load rstanarm/2.19.3-foss-2019b-R-3.6.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rstanarm/2.19.3-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rstudio/", "title": "rstudio", "text": ""}, {"location": "available_software/detail/rstudio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which rstudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using rstudio, load one of these modules using a module load command like:

          module load rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0 - x - - - -"}, {"location": "available_software/detail/ruamel.yaml/", "title": "ruamel.yaml", "text": ""}, {"location": "available_software/detail/ruamel.yaml/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ruamel.yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ruamel.yaml, load one of these modules using a module load command like:

          module load ruamel.yaml/0.17.32-GCCcore-12.3.0\n
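
          ruamel.yaml is a YAML parser/emitter that preserves comments and ordering; a minimal sketch that dumps a small mapping (the example data is arbitrary):

          module load ruamel.yaml/0.17.32-GCCcore-12.3.0
          # dump a small mapping to stdout using the round-trip dumper
          python -c "import sys; from ruamel.yaml import YAML; YAML().dump({'cluster': 'doduo', 'cores': 96}, sys.stdout)"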

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ruamel.yaml/0.17.32-GCCcore-12.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ruffus/", "title": "ruffus", "text": ""}, {"location": "available_software/detail/ruffus/#available-modules", "title": "Available modules", "text": "

          The overview below shows which ruffus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using ruffus, load one of these modules using a module load command like:

          module load ruffus/2.8.4-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty ruffus/2.8.4-foss-2021b x x x x x x"}, {"location": "available_software/detail/s3fs/", "title": "s3fs", "text": ""}, {"location": "available_software/detail/s3fs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which s3fs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using s3fs, load one of these modules using a module load command like:

          module load s3fs/2023.12.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty s3fs/2023.12.2-foss-2023a x x x x x x"}, {"location": "available_software/detail/samblaster/", "title": "samblaster", "text": ""}, {"location": "available_software/detail/samblaster/#available-modules", "title": "Available modules", "text": "

          The overview below shows which samblaster installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using samblaster, load one of these modules using a module load command like:

          module load samblaster/0.1.26-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty samblaster/0.1.26-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/samclip/", "title": "samclip", "text": ""}, {"location": "available_software/detail/samclip/#available-modules", "title": "Available modules", "text": "

          The overview below shows which samclip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using samclip, load one of these modules using a module load command like:

          module load samclip/0.4.0-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty samclip/0.4.0-GCCcore-11.2.0 x x x - x x samclip/0.4.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/sansa/", "title": "sansa", "text": ""}, {"location": "available_software/detail/sansa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sansa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sansa, load one of these modules using a module load command like:

          module load sansa/0.0.7-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sansa/0.0.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/sbt/", "title": "sbt", "text": ""}, {"location": "available_software/detail/sbt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sbt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sbt, load one of these modules using a module load command like:

          module load sbt/1.3.13-Java-1.8\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sbt/1.3.13-Java-1.8 - - x - x -"}, {"location": "available_software/detail/scArches/", "title": "scArches", "text": ""}, {"location": "available_software/detail/scArches/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scArches installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scArches, load one of these modules using a module load command like:

          module load scArches/0.5.6-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scArches/0.5.6-foss-2021a-CUDA-11.3.1 x - - - x - scArches/0.5.6-foss-2021a x x x x x x"}, {"location": "available_software/detail/scCODA/", "title": "scCODA", "text": ""}, {"location": "available_software/detail/scCODA/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scCODA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scCODA, load one of these modules using a module load command like:

          module load scCODA/0.1.9-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scCODA/0.1.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/scGeneFit/", "title": "scGeneFit", "text": ""}, {"location": "available_software/detail/scGeneFit/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scGeneFit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scGeneFit, load one of these modules using a module load command like:

          module load scGeneFit/1.0.2-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scGeneFit/1.0.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/scHiCExplorer/", "title": "scHiCExplorer", "text": ""}, {"location": "available_software/detail/scHiCExplorer/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scHiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scHiCExplorer, load one of these modules using a module load command like:

          module load scHiCExplorer/7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scHiCExplorer/7-foss-2022a x x x x x x"}, {"location": "available_software/detail/scPred/", "title": "scPred", "text": ""}, {"location": "available_software/detail/scPred/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scPred installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scPred, load one of these modules using a module load command like:

          module load scPred/1.9.2-foss-2021b-R-4.1.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scPred/1.9.2-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/scVelo/", "title": "scVelo", "text": ""}, {"location": "available_software/detail/scVelo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scVelo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scVelo, load one of these modules using a module load command like:

          module load scVelo/0.2.5-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scVelo/0.2.5-foss-2022a x x x x x x scVelo/0.2.3-foss-2021a - x x - x x scVelo/0.1.24-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scanpy/", "title": "scanpy", "text": ""}, {"location": "available_software/detail/scanpy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scanpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scanpy, load one of these modules using a module load command like:

          module load scanpy/1.9.8-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scanpy/1.9.8-foss-2023a x x x x x x scanpy/1.9.1-foss-2022a x x x x x x scanpy/1.9.1-foss-2021b x x x x x x scanpy/1.8.2-foss-2021b x x x x x x scanpy/1.8.1-foss-2021a x x x x x x scanpy/1.8.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/sceasy/", "title": "sceasy", "text": ""}, {"location": "available_software/detail/sceasy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sceasy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using sceasy, load one of these modules using a module load command like:

          module load sceasy/0.0.7-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sceasy/0.0.7-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/scib-metrics/", "title": "scib-metrics", "text": ""}, {"location": "available_software/detail/scib-metrics/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scib-metrics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scib-metrics, load one of these modules using a module load command like:

          module load scib-metrics/0.3.3-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scib-metrics/0.3.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scib/", "title": "scib", "text": ""}, {"location": "available_software/detail/scib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scib, load one of these modules using a module load command like:

          module load scib/1.1.3-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scib/1.1.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-bio/", "title": "scikit-bio", "text": ""}, {"location": "available_software/detail/scikit-bio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-bio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-bio, load one of these modules using a module load command like:

          module load scikit-bio/0.5.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-bio/0.5.7-foss-2022a x x x x x x scikit-bio/0.5.7-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-build/", "title": "scikit-build", "text": ""}, {"location": "available_software/detail/scikit-build/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-build, load one of these modules using a module load command like:

          module load scikit-build/0.17.6-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x scikit-build/0.17.2-GCCcore-12.2.0 x x x x x x scikit-build/0.15.0-GCCcore-11.3.0 x x x x x x scikit-build/0.11.1-fosscuda-2020b x - - - x - scikit-build/0.11.1-foss-2020b - x x x x x scikit-build/0.11.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/scikit-extremes/", "title": "scikit-extremes", "text": ""}, {"location": "available_software/detail/scikit-extremes/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-extremes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-extremes, load one of these modules using a module load command like:

          module load scikit-extremes/2022.4.10-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-extremes/2022.4.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/scikit-image/", "title": "scikit-image", "text": ""}, {"location": "available_software/detail/scikit-image/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-image installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-image, load one of these modules using a module load command like:

          module load scikit-image/0.19.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-image/0.19.3-foss-2022a x x x x x x scikit-image/0.19.1-foss-2021b x x x x x x scikit-image/0.18.3-foss-2021a x x x - x x scikit-image/0.18.1-fosscuda-2020b x - - - x - scikit-image/0.18.1-foss-2020b - x x x x x scikit-image/0.17.1-foss-2020a-Python-3.8.2 - x x - x x scikit-image/0.16.2-intel-2019b-Python-3.7.4 - x x - x x scikit-image/0.16.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scikit-learn/", "title": "scikit-learn", "text": ""}, {"location": "available_software/detail/scikit-learn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-learn, load one of these modules using a module load command like:

          module load scikit-learn/1.4.0-gfbf-2023b\n
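
          A short end-to-end check for scikit-learn could train a small classifier on a built-in dataset (a minimal sketch; the model and dataset choice are arbitrary):

          module load scikit-learn/1.4.0-gfbf-2023b
          # fit a logistic regression on the iris dataset and print the training accuracy
          python -c "from sklearn.datasets import load_iris; from sklearn.linear_model import LogisticRegression; X, y = load_iris(return_X_y=True); print(LogisticRegression(max_iter=200).fit(X, y).score(X, y))"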

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-learn/1.4.0-gfbf-2023b x x x x x x scikit-learn/1.3.2-gfbf-2023b x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x scikit-learn/1.2.1-gfbf-2022b x x x x x x scikit-learn/1.1.2-intel-2022a x x x x x x scikit-learn/1.1.2-foss-2022a x x x x x x scikit-learn/1.0.1-intel-2021b x x x - x x scikit-learn/1.0.1-foss-2021b x x x x x x scikit-learn/0.24.2-foss-2021a x x x x x x scikit-learn/0.23.2-intel-2020b - x x - x x scikit-learn/0.23.2-fosscuda-2020b x - - - x - scikit-learn/0.23.2-foss-2020b - x x x x x scikit-learn/0.23.1-intel-2020a-Python-3.8.2 x x x x x x scikit-learn/0.23.1-foss-2020a-Python-3.8.2 - x x - x x scikit-learn/0.21.3-intel-2019b-Python-3.7.4 - x x - x x scikit-learn/0.21.3-foss-2019b-Python-3.7.4 x x x - x x scikit-learn/0.20.4-intel-2019b-Python-2.7.16 - x x - x x scikit-learn/0.20.4-foss-2021b-Python-2.7.18 x x x x x x scikit-learn/0.20.4-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/scikit-misc/", "title": "scikit-misc", "text": ""}, {"location": "available_software/detail/scikit-misc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-misc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-misc, load one of these modules using a module load command like:

          module load scikit-misc/0.1.4-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-misc/0.1.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-optimize/", "title": "scikit-optimize", "text": ""}, {"location": "available_software/detail/scikit-optimize/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scikit-optimize installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scikit-optimize, load one of these modules using a module load command like:

          module load scikit-optimize/0.9.0-foss-2021a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scikit-optimize/0.9.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/scipy/", "title": "scipy", "text": ""}, {"location": "available_software/detail/scipy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scipy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scipy, load one of these modules using a module load command like:

          module load scipy/1.4.1-foss-2019b-Python-3.7.4\n
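
          A quick numerical check after loading the module (a minimal sketch; the function being minimised is just an example):

          module load scipy/1.4.1-foss-2019b-Python-3.7.4
          # minimise a simple 1-D function; the result should be close to x = 2
          python -c "from scipy.optimize import minimize_scalar; print(minimize_scalar(lambda x: (x - 2)**2).x)"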

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scipy/1.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scrublet/", "title": "scrublet", "text": ""}, {"location": "available_software/detail/scrublet/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scrublet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scrublet, load one of these modules using a module load command like:

          module load scrublet/0.2.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scrublet/0.2.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/scvi-tools/", "title": "scvi-tools", "text": ""}, {"location": "available_software/detail/scvi-tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which scvi-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using scvi-tools, load one of these modules using a module load command like:

          module load scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1 x - - - x - scvi-tools/0.16.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/segemehl/", "title": "segemehl", "text": ""}, {"location": "available_software/detail/segemehl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which segemehl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using segemehl, load one of these modules using a module load command like:

          module load segemehl/0.3.4-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty segemehl/0.3.4-GCC-11.2.0 x x x x x x segemehl/0.3.4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/segmentation-models/", "title": "segmentation-models", "text": ""}, {"location": "available_software/detail/segmentation-models/#available-modules", "title": "Available modules", "text": "

          The overview below shows which segmentation-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using segmentation-models, load one of these modules using a module load command like:

          module load segmentation-models/1.0.1-foss-2019b-Python-3.7.4\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty segmentation-models/1.0.1-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/semla/", "title": "semla", "text": ""}, {"location": "available_software/detail/semla/#available-modules", "title": "Available modules", "text": "

          The overview below shows which semla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using semla, load one of these modules using a module load command like:

          module load semla/1.1.6-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty semla/1.1.6-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/seqtk/", "title": "seqtk", "text": ""}, {"location": "available_software/detail/seqtk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which seqtk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using seqtk, load one of these modules using a module load command like:

          module load seqtk/1.4-GCC-12.3.0\n
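
          seqtk is a command-line toolkit for FASTA/FASTQ files; a minimal sketch (reads.fastq is a placeholder for your own data):

          module load seqtk/1.4-GCC-12.3.0
          # convert FASTQ to FASTA
          seqtk seq -a reads.fastq > reads.fasta
          # subsample 10000 reads with a fixed random seed
          seqtk sample -s100 reads.fastq 10000 > subsample.fastq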

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty seqtk/1.4-GCC-12.3.0 x x x x x x seqtk/1.3-GCC-11.2.0 x x x - x x seqtk/1.3-GCC-10.2.0 - x x x x x seqtk/1.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/setuptools-rust/", "title": "setuptools-rust", "text": ""}, {"location": "available_software/detail/setuptools-rust/#available-modules", "title": "Available modules", "text": "

          The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

          To start using setuptools-rust, load one of these modules using a module load command like:

          module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/setuptools/", "title": "setuptools", "text": ""}, {"location": "available_software/detail/setuptools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which setuptools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using setuptools, load one of these modules using a module load command like:

          module load setuptools/64.0.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty setuptools/64.0.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/sf/", "title": "sf", "text": ""}, {"location": "available_software/detail/sf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using sf, load one of these modules using a module load command like:

          module load sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/shovill/", "title": "shovill", "text": ""}, {"location": "available_software/detail/shovill/#available-modules", "title": "Available modules", "text": "

          The overview below shows which shovill installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using shovill, load one of these modules using a module load command like:

          module load shovill/1.1.0-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty shovill/1.1.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/silhouetteRank/", "title": "silhouetteRank", "text": ""}, {"location": "available_software/detail/silhouetteRank/#available-modules", "title": "Available modules", "text": "

          The overview below shows which silhouetteRank installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using silhouetteRank, load one of these modules using a module load command like:

          module load silhouetteRank/1.0.5.13-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty silhouetteRank/1.0.5.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/silx/", "title": "silx", "text": ""}, {"location": "available_software/detail/silx/#available-modules", "title": "Available modules", "text": "

          The overview below shows which silx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using silx, load one of these modules using a module load command like:

          module load silx/0.14.0-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty silx/0.14.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/slepc4py/", "title": "slepc4py", "text": ""}, {"location": "available_software/detail/slepc4py/#available-modules", "title": "Available modules", "text": "

          The overview below shows which slepc4py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using slepc4py, load one of these modules using a module load command like:

          module load slepc4py/3.17.2-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty slepc4py/3.17.2-foss-2022a x x x x x x slepc4py/3.15.1-foss-2021a - x x - x x slepc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/slow5tools/", "title": "slow5tools", "text": ""}, {"location": "available_software/detail/slow5tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which slow5tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using slow5tools, load one of these modules using a module load command like:

          module load slow5tools/0.4.0-gompi-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty slow5tools/0.4.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/slurm-drmaa/", "title": "slurm-drmaa", "text": ""}, {"location": "available_software/detail/slurm-drmaa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which slurm-drmaa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using slurm-drmaa, load one of these modules using a module load command like:

          module load slurm-drmaa/1.1.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty slurm-drmaa/1.1.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/smfishHmrf/", "title": "smfishHmrf", "text": ""}, {"location": "available_software/detail/smfishHmrf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which smfishHmrf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using smfishHmrf, load one of these modules using a module load command like:

          module load smfishHmrf/1.3.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty smfishHmrf/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/smithwaterman/", "title": "smithwaterman", "text": ""}, {"location": "available_software/detail/smithwaterman/#available-modules", "title": "Available modules", "text": "

          The overview below shows which smithwaterman installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using smithwaterman, load one of these modules using a module load command like:

          module load smithwaterman/20160702-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty smithwaterman/20160702-GCCcore-11.3.0 x x x x x x smithwaterman/20160702-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/smooth-topk/", "title": "smooth-topk", "text": ""}, {"location": "available_software/detail/smooth-topk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which smooth-topk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using smooth-topk, load one of these modules using a module load command like:

          module load smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1 x - - - x - smooth-topk/1.0-20210817-foss-2021a - x x - x x"}, {"location": "available_software/detail/snakemake/", "title": "snakemake", "text": ""}, {"location": "available_software/detail/snakemake/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snakemake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using snakemake, load one of these modules using a module load command like:

          module load snakemake/8.4.2-foss-2023a\n
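
          A minimal sketch, assuming you are on one of the clusters marked with an x in the overview below; the --version flag is standard snakemake command-line behaviour and is assumed here rather than taken from this listing:

          module load snakemake/8.4.2-foss-2023a   # version string copied from the listing below
          snakemake --version                      # should report 8.4.2 once the module is loaded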

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snakemake/8.4.2-foss-2023a x x x x x x snakemake/7.32.3-foss-2022b x x x x x x snakemake/7.22.0-foss-2022a x x x x x x snakemake/7.18.2-foss-2021b x x x - x x snakemake/6.10.0-foss-2021b x x x - x x snakemake/6.1.0-foss-2020b - x x x x x snakemake/5.26.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/snappy/", "title": "snappy", "text": ""}, {"location": "available_software/detail/snappy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snappy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using snappy, load one of these modules using a module load command like:

          module load snappy/1.1.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snappy/1.1.10-GCCcore-12.3.0 x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x snappy/1.1.9-GCCcore-11.3.0 x x x x x x snappy/1.1.9-GCCcore-11.2.0 x x x x x x snappy/1.1.8-GCCcore-10.3.0 x x x x x x snappy/1.1.8-GCCcore-10.2.0 x x x x x x snappy/1.1.8-GCCcore-9.3.0 - x x - x x snappy/1.1.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/snippy/", "title": "snippy", "text": ""}, {"location": "available_software/detail/snippy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snippy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using snippy, load one of these modules using a module load command like:

          module load snippy/4.6.0-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snippy/4.6.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/snp-sites/", "title": "snp-sites", "text": ""}, {"location": "available_software/detail/snp-sites/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snp-sites installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using snp-sites, load one of these modules using a module load command like:

          module load snp-sites/2.5.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snp-sites/2.5.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/snpEff/", "title": "snpEff", "text": ""}, {"location": "available_software/detail/snpEff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which snpEff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using snpEff, load one of these modules using a module load command like:

          module load snpEff/5.0e-GCCcore-10.2.0-Java-13\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty snpEff/5.0e-GCCcore-10.2.0-Java-13 - x x - x x"}, {"location": "available_software/detail/solo/", "title": "solo", "text": ""}, {"location": "available_software/detail/solo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which solo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using solo, load one of these modules using a module load command like:

          module load solo/1.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty solo/1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/sonic/", "title": "sonic", "text": ""}, {"location": "available_software/detail/sonic/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sonic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using sonic, load one of these modules using a module load command like:

          module load sonic/20180202-gompi-2020a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sonic/20180202-gompi-2020a - x x - x x"}, {"location": "available_software/detail/spaCy/", "title": "spaCy", "text": ""}, {"location": "available_software/detail/spaCy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spaCy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using spaCy, load one of these modules using a module load command like:

          module load spaCy/3.4.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spaCy/3.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/spaln/", "title": "spaln", "text": ""}, {"location": "available_software/detail/spaln/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spaln installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using spaln, load one of these modules using a module load command like:

          module load spaln/2.4.13f-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spaln/2.4.13f-GCC-11.3.0 x x x x x x spaln/2.4.12-GCC-11.2.0 x x x x x x spaln/2.4.12-GCC-10.2.0 x x x x x x spaln/2.4.03-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/sparse-neighbors-search/", "title": "sparse-neighbors-search", "text": ""}, {"location": "available_software/detail/sparse-neighbors-search/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sparse-neighbors-search installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using sparse-neighbors-search, load one of these modules using a module load command like:

          module load sparse-neighbors-search/0.7-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sparse-neighbors-search/0.7-foss-2022a x x x x x x"}, {"location": "available_software/detail/sparsehash/", "title": "sparsehash", "text": ""}, {"location": "available_software/detail/sparsehash/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sparsehash installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using sparsehash, load one of these modules using a module load command like:

          module load sparsehash/2.0.4-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sparsehash/2.0.4-GCCcore-12.3.0 x x x x x x sparsehash/2.0.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/spatialreg/", "title": "spatialreg", "text": ""}, {"location": "available_software/detail/spatialreg/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spatialreg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using spatialreg, load one of these modules using a module load command like:

          module load spatialreg/1.1-8-foss-2021a-R-4.1.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spatialreg/1.1-8-foss-2021a-R-4.1.0 - x x - x x spatialreg/1.1-5-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/speech_tools/", "title": "speech_tools", "text": ""}, {"location": "available_software/detail/speech_tools/#available-modules", "title": "Available modules", "text": "

          The overview below shows which speech_tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using speech_tools, load one of these modules using a module load command like:

          module load speech_tools/2.5.0-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty speech_tools/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/spglib-python/", "title": "spglib-python", "text": ""}, {"location": "available_software/detail/spglib-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spglib-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using spglib-python, load one of these modules using a module load command like:

          module load spglib-python/2.0.0-intel-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spglib-python/2.0.0-intel-2022a x x x x x x spglib-python/2.0.0-foss-2022a x x x x x x spglib-python/1.16.3-intel-2021b x x x - x x spglib-python/1.16.3-foss-2021b x x x - x x spglib-python/1.16.1-gomkl-2021a x x x x x x spglib-python/1.16.0-intel-2020a-Python-3.8.2 x x x x x x spglib-python/1.16.0-fosscuda-2020b - - - - x - spglib-python/1.16.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/spoa/", "title": "spoa", "text": ""}, {"location": "available_software/detail/spoa/#available-modules", "title": "Available modules", "text": "

          The overview below shows which spoa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using spoa, load one of these modules using a module load command like:

          module load spoa/4.0.7-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty spoa/4.0.7-GCC-11.3.0 x x x x x x spoa/4.0.7-GCC-11.2.0 x x x - x x spoa/4.0.7-GCC-10.3.0 x x x - x x spoa/4.0.7-GCC-10.2.0 - x x x x x spoa/4.0.0-GCC-8.3.0 - x x - x x spoa/3.4.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/stardist/", "title": "stardist", "text": ""}, {"location": "available_software/detail/stardist/#available-modules", "title": "Available modules", "text": "

          The overview below shows which stardist installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using stardist, load one of these modules using a module load command like:

          module load stardist/0.8.3-foss-2021b-CUDA-11.4.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty stardist/0.8.3-foss-2021b-CUDA-11.4.1 x - - - x - stardist/0.8.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/stars/", "title": "stars", "text": ""}, {"location": "available_software/detail/stars/#available-modules", "title": "Available modules", "text": "

          The overview below shows which stars installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using stars, load one of these modules using a module load command like:

          module load stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/statsmodels/", "title": "statsmodels", "text": ""}, {"location": "available_software/detail/statsmodels/#available-modules", "title": "Available modules", "text": "

          The overview below shows which statsmodels installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using statsmodels, load one of these modules using a module load command like:

          module load statsmodels/0.14.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty statsmodels/0.14.1-gfbf-2023a x x x x x x statsmodels/0.14.0-gfbf-2022b x x x x x x statsmodels/0.13.1-intel-2021b x x x - x x statsmodels/0.13.1-foss-2022a x x x x x x statsmodels/0.13.1-foss-2021b x x x x x x statsmodels/0.12.2-foss-2021a x x x x x x statsmodels/0.12.1-intel-2020b - x x - x x statsmodels/0.12.1-fosscuda-2020b - - - - x - statsmodels/0.12.1-foss-2020b - x x x x x statsmodels/0.11.1-intel-2020a-Python-3.8.2 - x x - x x statsmodels/0.11.0-intel-2019b-Python-3.7.4 - x x - x x statsmodels/0.11.0-foss-2019b-Python-3.7.4 - x x - x x statsmodels/0.9.0-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/suave/", "title": "suave", "text": ""}, {"location": "available_software/detail/suave/#available-modules", "title": "Available modules", "text": "

          The overview below shows which suave installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using suave, load one of these modules using a module load command like:

          module load suave/20160529-foss-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty suave/20160529-foss-2020b - x x x x x"}, {"location": "available_software/detail/supernova/", "title": "supernova", "text": ""}, {"location": "available_software/detail/supernova/#available-modules", "title": "Available modules", "text": "

          The overview below shows which supernova installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using supernova, load one of these modules using a module load command like:

          module load supernova/2.0.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty supernova/2.0.1 - - - - - x"}, {"location": "available_software/detail/swissknife/", "title": "swissknife", "text": ""}, {"location": "available_software/detail/swissknife/#available-modules", "title": "Available modules", "text": "

          The overview below shows which swissknife installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using swissknife, load one of these modules using a module load command like:

          module load swissknife/1.80-GCCcore-8.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty swissknife/1.80-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/sympy/", "title": "sympy", "text": ""}, {"location": "available_software/detail/sympy/#available-modules", "title": "Available modules", "text": "

          The overview below shows which sympy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using sympy, load one of these modules using a module load command like:

          module load sympy/1.12-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty sympy/1.12-gfbf-2023a x x x x x x sympy/1.12-gfbf-2022b x x x x x x sympy/1.11.1-intel-2022a x x x x x x sympy/1.11.1-foss-2022a x x x - x x sympy/1.10.1-intel-2022a x x x x x x sympy/1.10.1-foss-2022a x x x - x x sympy/1.9-intel-2021b x x x x x x sympy/1.9-foss-2021b x x x - x x sympy/1.7.1-foss-2020b - x x x x x sympy/1.6.2-foss-2020a-Python-3.8.2 - x x - x x sympy/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/synapseclient/", "title": "synapseclient", "text": ""}, {"location": "available_software/detail/synapseclient/#available-modules", "title": "Available modules", "text": "

          The overview below shows which synapseclient installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using synapseclient, load one of these modules using a module load command like:

          module load synapseclient/3.0.0-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty synapseclient/3.0.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/synthcity/", "title": "synthcity", "text": ""}, {"location": "available_software/detail/synthcity/#available-modules", "title": "Available modules", "text": "

          The overview below shows which synthcity installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using synthcity, load one of these modules using a module load command like:

          module load synthcity/0.2.4-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty synthcity/0.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/tMAE/", "title": "tMAE", "text": ""}, {"location": "available_software/detail/tMAE/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tMAE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tMAE, load one of these modules using a module load command like:

          module load tMAE/1.0.0-foss-2020b-R-4.0.3\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tMAE/1.0.0-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/tabixpp/", "title": "tabixpp", "text": ""}, {"location": "available_software/detail/tabixpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tabixpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tabixpp, load one of these modules using a module load command like:

          module load tabixpp/1.1.2-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tabixpp/1.1.2-GCC-11.3.0 x x x x x x tabixpp/1.1.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/task-spooler/", "title": "task-spooler", "text": ""}, {"location": "available_software/detail/task-spooler/#available-modules", "title": "Available modules", "text": "

          The overview below shows which task-spooler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using task-spooler, load one of these modules using a module load command like:

          module load task-spooler/1.0.2-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty task-spooler/1.0.2-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/taxator-tk/", "title": "taxator-tk", "text": ""}, {"location": "available_software/detail/taxator-tk/#available-modules", "title": "Available modules", "text": "

          The overview below shows which taxator-tk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using taxator-tk, load one of these modules using a module load command like:

          module load taxator-tk/1.3.3-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty taxator-tk/1.3.3-gompi-2020b - x - - - - taxator-tk/1.3.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/tbb/", "title": "tbb", "text": ""}, {"location": "available_software/detail/tbb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tbb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tbb, load one of these modules using a module load command like:

          module load tbb/2021.5.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tbb/2021.5.0-GCCcore-11.3.0 x x x x x x tbb/2020.3-GCCcore-11.2.0 x x x x x x tbb/2020.3-GCCcore-10.3.0 - x x - x x tbb/2020.3-GCCcore-10.2.0 - x x x x x tbb/2020.1-GCCcore-9.3.0 - x x - x x tbb/2019_U9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tbl2asn/", "title": "tbl2asn", "text": ""}, {"location": "available_software/detail/tbl2asn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tbl2asn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tbl2asn, load one of these modules using a module load command like:

          module load tbl2asn/20220427-linux64\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tbl2asn/20220427-linux64 - x x x x x tbl2asn/25.8-linux64 - - - - - x"}, {"location": "available_software/detail/tcsh/", "title": "tcsh", "text": ""}, {"location": "available_software/detail/tcsh/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tcsh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tcsh, load one of these modules using a module load command like:

          module load tcsh/6.24.10-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tcsh/6.24.10-GCCcore-12.3.0 x x x x x x tcsh/6.22.04-GCCcore-10.3.0 x - - - x - tcsh/6.22.03-GCCcore-10.2.0 - x x x x x tcsh/6.22.02-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tensorboard/", "title": "tensorboard", "text": ""}, {"location": "available_software/detail/tensorboard/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tensorboard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tensorboard, load one of these modules using a module load command like:

          module load tensorboard/2.10.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tensorboard/2.10.0-foss-2022a x x x x x x tensorboard/2.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/tensorboardX/", "title": "tensorboardX", "text": ""}, {"location": "available_software/detail/tensorboardX/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tensorboardX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tensorboardX, load one of these modules using a module load command like:

          module load tensorboardX/2.6.2.2-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tensorboardX/2.6.2.2-foss-2023a x x x x x x tensorboardX/2.6.2.2-foss-2022b x x x x x x tensorboardX/2.5.1-foss-2022a x x x x x x tensorboardX/2.2-fosscuda-2020b-PyTorch-1.7.1 - - - - x - tensorboardX/2.2-foss-2020b-PyTorch-1.7.1 - x x x x x tensorboardX/2.1-fosscuda-2020b-PyTorch-1.7.1 - - - - x -"}, {"location": "available_software/detail/tensorflow-probability/", "title": "tensorflow-probability", "text": ""}, {"location": "available_software/detail/tensorflow-probability/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tensorflow-probability installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tensorflow-probability, load one of these modules using a module load command like:

          module load tensorflow-probability/0.19.0-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tensorflow-probability/0.19.0-foss-2022a x x x x x x tensorflow-probability/0.14.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/texinfo/", "title": "texinfo", "text": ""}, {"location": "available_software/detail/texinfo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which texinfo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using texinfo, load one of these modules using a module load command like:

          module load texinfo/6.7-GCCcore-9.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty texinfo/6.7-GCCcore-9.3.0 - x x - x x texinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/texlive/", "title": "texlive", "text": ""}, {"location": "available_software/detail/texlive/#available-modules", "title": "Available modules", "text": "

          The overview below shows which texlive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using texlive, load one of these modules using a module load command like:

          module load texlive/20230313-GCC-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty texlive/20230313-GCC-12.3.0 x x x x x x texlive/20210324-GCC-11.2.0 - x x - x x"}, {"location": "available_software/detail/tidymodels/", "title": "tidymodels", "text": ""}, {"location": "available_software/detail/tidymodels/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tidymodels installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tidymodels, load one of these modules using a module load command like:

          module load tidymodels/1.1.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tidymodels/1.1.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/time/", "title": "time", "text": ""}, {"location": "available_software/detail/time/#available-modules", "title": "Available modules", "text": "

          The overview below shows which time installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using time, load one of these modules using a module load command like:

          module load time/1.9-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty time/1.9-GCCcore-10.2.0 - x x x x x time/1.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/timm/", "title": "timm", "text": ""}, {"location": "available_software/detail/timm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which timm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using timm, load one of these modules using a module load command like:

          module load timm/0.9.2-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty timm/0.9.2-foss-2022a-CUDA-11.7.0 x - - - x - timm/0.6.13-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/tmux/", "title": "tmux", "text": ""}, {"location": "available_software/detail/tmux/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tmux installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tmux, load one of these modules using a module load command like:

          module load tmux/3.2a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tmux/3.2a - x x - x x"}, {"location": "available_software/detail/tokenizers/", "title": "tokenizers", "text": ""}, {"location": "available_software/detail/tokenizers/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tokenizers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tokenizers, load one of these modules using a module load command like:

          module load tokenizers/0.13.3-GCCcore-12.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tokenizers/0.13.3-GCCcore-12.2.0 x x x x x x tokenizers/0.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/torchaudio/", "title": "torchaudio", "text": ""}, {"location": "available_software/detail/torchaudio/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchaudio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using torchaudio, load one of these modules using a module load command like:

          module load torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0 x - x - x - torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchtext/", "title": "torchtext", "text": ""}, {"location": "available_software/detail/torchtext/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchtext installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using torchtext, load one of these modules using a module load command like:

          module load torchtext/0.14.1-foss-2022a-PyTorch-1.12.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchtext/0.14.1-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchvf/", "title": "torchvf", "text": ""}, {"location": "available_software/detail/torchvf/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchvf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using torchvf, load one of these modules using a module load command like:

          module load torchvf/0.1.3-foss-2022a-CUDA-11.7.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchvf/0.1.3-foss-2022a-CUDA-11.7.0 x - - - x - torchvf/0.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/torchvision/", "title": "torchvision", "text": ""}, {"location": "available_software/detail/torchvision/#available-modules", "title": "Available modules", "text": "

          The overview below shows which torchvision installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using torchvision, load one of these modules using a module load command like:

          module load torchvision/0.14.1-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty torchvision/0.14.1-foss-2022b x x x x x x torchvision/0.13.1-foss-2022a-CUDA-11.7.0 x - x - x - torchvision/0.13.1-foss-2022a x x x x x x torchvision/0.11.3-foss-2021a - x x - x x torchvision/0.11.1-foss-2021a-CUDA-11.3.1 x - - - x - torchvision/0.11.1-foss-2021a - x x - x x torchvision/0.8.2-fosscuda-2020b-PyTorch-1.7.1 x - - - x - torchvision/0.8.2-foss-2020b-PyTorch-1.7.1 - x x x x x torchvision/0.7.0-foss-2019b-Python-3.7.4-PyTorch-1.6.0 - - x - x x"}, {"location": "available_software/detail/tornado/", "title": "tornado", "text": ""}, {"location": "available_software/detail/tornado/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tornado installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tornado, load one of these modules using a module load command like:

          module load tornado/6.3.2-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tornado/6.3.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/tqdm/", "title": "tqdm", "text": ""}, {"location": "available_software/detail/tqdm/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tqdm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tqdm, load one of these modules using a module load command like:

          module load tqdm/4.66.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tqdm/4.66.1-GCCcore-12.3.0 x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x tqdm/4.64.0-GCCcore-11.3.0 x x x x x x tqdm/4.62.3-GCCcore-11.2.0 x x x x x x tqdm/4.61.2-GCCcore-10.3.0 x x x x x x tqdm/4.60.0-GCCcore-10.2.0 - x x - x x tqdm/4.56.2-GCCcore-10.2.0 x x x x x x tqdm/4.47.0-GCCcore-9.3.0 x x x x x x tqdm/4.41.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/treatSens/", "title": "treatSens", "text": ""}, {"location": "available_software/detail/treatSens/#available-modules", "title": "Available modules", "text": "

          The overview below shows which treatSens installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using treatSens, load one of these modules using a module load command like:

          module load treatSens/3.0-20201002-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty treatSens/3.0-20201002-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/trimAl/", "title": "trimAl", "text": ""}, {"location": "available_software/detail/trimAl/#available-modules", "title": "Available modules", "text": "

          The overview below shows which trimAl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using trimAl, load one of these modules using a module load command like:

          module load trimAl/1.4.1-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty trimAl/1.4.1-GCCcore-12.3.0 x x x x x x trimAl/1.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/tsne/", "title": "tsne", "text": ""}, {"location": "available_software/detail/tsne/#available-modules", "title": "Available modules", "text": "

          The overview below shows which tsne installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using tsne, load one of these modules using a module load command like:

          module load tsne/0.1.8-intel-2019b-Python-2.7.16\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty tsne/0.1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/typing-extensions/", "title": "typing-extensions", "text": ""}, {"location": "available_software/detail/typing-extensions/#available-modules", "title": "Available modules", "text": "

          The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using typing-extensions, load one of these modules using a module load command like:

          module load typing-extensions/4.9.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.9.0-GCCcore-12.2.0 x x x x x x typing-extensions/4.8.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.3.0-GCCcore-11.3.0 x x x x x x typing-extensions/3.10.0.2-GCCcore-11.2.0 x x x x x x typing-extensions/3.10.0.0-GCCcore-10.3.0 x x x x x x typing-extensions/3.7.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/umap-learn/", "title": "umap-learn", "text": ""}, {"location": "available_software/detail/umap-learn/#available-modules", "title": "Available modules", "text": "

          The overview below shows which umap-learn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using umap-learn, load one of these modules using a module load command like:

          module load umap-learn/0.5.5-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty umap-learn/0.5.5-foss-2023a x x x x x x umap-learn/0.5.3-foss-2022a x x x x x x umap-learn/0.5.3-foss-2021a x x x x x x umap-learn/0.4.6-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/umi4cPackage/", "title": "umi4cPackage", "text": ""}, {"location": "available_software/detail/umi4cPackage/#available-modules", "title": "Available modules", "text": "

          The overview below shows which umi4cPackage installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using umi4cPackage, load one of these modules using a module load command like:

          module load umi4cPackage/20200116-foss-2020a-R-4.0.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty umi4cPackage/20200116-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/uncertainties/", "title": "uncertainties", "text": ""}, {"location": "available_software/detail/uncertainties/#available-modules", "title": "Available modules", "text": "

          The overview below shows which uncertainties installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using uncertainties, load one of these modules using a module load command like:

          module load uncertainties/3.1.7-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty uncertainties/3.1.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/uncertainty-calibration/", "title": "uncertainty-calibration", "text": ""}, {"location": "available_software/detail/uncertainty-calibration/#available-modules", "title": "Available modules", "text": "

          The overview below shows which uncertainty-calibration installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using uncertainty-calibration, load one of these modules using a module load command like:

          module load uncertainty-calibration/0.0.9-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty uncertainty-calibration/0.0.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/unimap/", "title": "unimap", "text": ""}, {"location": "available_software/detail/unimap/#available-modules", "title": "Available modules", "text": "

          The overview below shows which unimap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using unimap, load one of these modules using a module load command like:

          module load unimap/0.1-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty unimap/0.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/unixODBC/", "title": "unixODBC", "text": ""}, {"location": "available_software/detail/unixODBC/#available-modules", "title": "Available modules", "text": "

          The overview below shows which unixODBC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using unixODBC, load one of these modules using a module load command like:

          module load unixODBC/2.3.11-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty unixODBC/2.3.11-foss-2022b x x x x x x"}, {"location": "available_software/detail/utf8proc/", "title": "utf8proc", "text": ""}, {"location": "available_software/detail/utf8proc/#available-modules", "title": "Available modules", "text": "

          The overview below shows which utf8proc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using utf8proc, load one of these modules using a module load command like:

          module load utf8proc/2.8.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x utf8proc/2.7.0-GCCcore-11.3.0 x x x x x x utf8proc/2.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/util-linux/", "title": "util-linux", "text": ""}, {"location": "available_software/detail/util-linux/#available-modules", "title": "Available modules", "text": "

          The overview below shows which util-linux installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using util-linux, load one of these modules using a module load command like:

          module load util-linux/2.39-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty util-linux/2.39-GCCcore-12.3.0 x x x x x x util-linux/2.38.1-GCCcore-12.2.0 x x x x x x util-linux/2.38-GCCcore-11.3.0 x x x x x x util-linux/2.37-GCCcore-11.2.0 x x x x x x util-linux/2.36-GCCcore-10.3.0 x x x x x x util-linux/2.36-GCCcore-10.2.0 x x x x x x util-linux/2.35-GCCcore-9.3.0 x x x x x x util-linux/2.34-GCCcore-8.3.0 x x x - x x util-linux/2.33-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/vConTACT2/", "title": "vConTACT2", "text": ""}, {"location": "available_software/detail/vConTACT2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vConTACT2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vConTACT2, load one of these modules using a module load command like:

          module load vConTACT2/0.11.3-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vConTACT2/0.11.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/vaeda/", "title": "vaeda", "text": ""}, {"location": "available_software/detail/vaeda/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vaeda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vaeda, load one of these modules using a module load command like:

          module load vaeda/0.0.30-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vaeda/0.0.30-foss-2022a x x x x x x"}, {"location": "available_software/detail/vbz_compression/", "title": "vbz_compression", "text": ""}, {"location": "available_software/detail/vbz_compression/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vbz_compression installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vbz_compression, load one of these modules using a module load command like:

          module load vbz_compression/1.0.1-gompi-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vbz_compression/1.0.1-gompi-2020b - x - - - -"}, {"location": "available_software/detail/vcflib/", "title": "vcflib", "text": ""}, {"location": "available_software/detail/vcflib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vcflib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vcflib, load one of these modules using a module load command like:

          module load vcflib/1.0.9-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vcflib/1.0.9-foss-2022a-R-4.2.1 x x x x x x vcflib/1.0.2-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/velocyto/", "title": "velocyto", "text": ""}, {"location": "available_software/detail/velocyto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which velocyto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using velocyto, load one of these modules using a module load command like:

          module load velocyto/0.17.17-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty velocyto/0.17.17-intel-2020a-Python-3.8.2 - x x - x x velocyto/0.17.17-foss-2022a x x x x x x"}, {"location": "available_software/detail/virtualenv/", "title": "virtualenv", "text": ""}, {"location": "available_software/detail/virtualenv/#available-modules", "title": "Available modules", "text": "

          The overview below shows which virtualenv installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using virtualenv, load one of these modules using a module load command like:

          module load virtualenv/20.24.6-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/vispr/", "title": "vispr", "text": ""}, {"location": "available_software/detail/vispr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vispr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vispr, load one of these modules using a module load command like:

          module load vispr/0.4.14-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vispr/0.4.14-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessce-python/", "title": "vitessce-python", "text": ""}, {"location": "available_software/detail/vitessce-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vitessce-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vitessce-python, load one of these modules using a module load command like:

          module load vitessce-python/20230222-foss-2022a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vitessce-python/20230222-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessceR/", "title": "vitessceR", "text": ""}, {"location": "available_software/detail/vitessceR/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vitessceR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vitessceR, load one of these modules using a module load command like:

          module load vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/vsc-mympirun/", "title": "vsc-mympirun", "text": ""}, {"location": "available_software/detail/vsc-mympirun/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vsc-mympirun installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vsc-mympirun, load one of these modules using a module load command like:

          module load vsc-mympirun/5.3.1\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vsc-mympirun/5.3.1 x x x x x x vsc-mympirun/5.3.0 x x x x x x vsc-mympirun/5.2.11 x x x x x x vsc-mympirun/5.2.10 x x x - x x vsc-mympirun/5.2.9 x x x - x x vsc-mympirun/5.2.7 x x x - x x vsc-mympirun/5.2.6 x x x - x x vsc-mympirun/5.2.5 - x - - - - vsc-mympirun/5.2.4 - x - - - - vsc-mympirun/5.2.3 - x - - - - vsc-mympirun/5.2.2 - x - - - - vsc-mympirun/5.2.0 - x - - - - vsc-mympirun/5.1.0 - x - - - -"}, {"location": "available_software/detail/vt/", "title": "vt", "text": ""}, {"location": "available_software/detail/vt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which vt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using vt, load one of these modules using a module load command like:

          module load vt/0.57721-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty vt/0.57721-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/wandb/", "title": "wandb", "text": ""}, {"location": "available_software/detail/wandb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wandb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wandb, load one of these modules using a module load command like:

          module load wandb/0.13.6-GCC-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wandb/0.13.6-GCC-11.3.0 x x x - x x wandb/0.13.4-GCCcore-11.3.0 - - x - x -"}, {"location": "available_software/detail/waves2Foam/", "title": "waves2Foam", "text": ""}, {"location": "available_software/detail/waves2Foam/#available-modules", "title": "Available modules", "text": "

          The overview below shows which waves2Foam installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using waves2Foam, load one of these modules using a module load command like:

          module load waves2Foam/20200703-foss-2019b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty waves2Foam/20200703-foss-2019b - x x - x x"}, {"location": "available_software/detail/wget/", "title": "wget", "text": ""}, {"location": "available_software/detail/wget/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wget installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wget, load one of these modules using a module load command like:

          module load wget/1.21.1-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wget/1.21.1-GCCcore-10.3.0 - x x x x x wget/1.20.3-GCCcore-10.2.0 x x x x x x wget/1.20.3-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/wgsim/", "title": "wgsim", "text": ""}, {"location": "available_software/detail/wgsim/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wgsim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wgsim, load one of these modules using a module load command like:

          module load wgsim/20111017-GCC-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wgsim/20111017-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/worker/", "title": "worker", "text": ""}, {"location": "available_software/detail/worker/#available-modules", "title": "Available modules", "text": "

          The overview below shows which worker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using worker, load one of these modules using a module load command like:

          module load worker/1.6.13-iimpi-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty worker/1.6.13-iimpi-2022b x x x x x x worker/1.6.13-iimpi-2021b x x x - x x worker/1.6.12-foss-2021b x x x - x x worker/1.6.11-intel-2019b - x x - x x"}, {"location": "available_software/detail/wpebackend-fdo/", "title": "wpebackend-fdo", "text": ""}, {"location": "available_software/detail/wpebackend-fdo/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wpebackend-fdo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wpebackend-fdo, load one of these modules using a module load command like:

          module load wpebackend-fdo/1.13.1-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wpebackend-fdo/1.13.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/wrapt/", "title": "wrapt", "text": ""}, {"location": "available_software/detail/wrapt/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wrapt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wrapt, load one of these modules using a module load command like:

          module load wrapt/1.15.0-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wrapt/1.15.0-gfbf-2023a x x x x x x wrapt/1.15.0-foss-2022b x x x x x x wrapt/1.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/wrf-python/", "title": "wrf-python", "text": ""}, {"location": "available_software/detail/wrf-python/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wrf-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wrf-python, load one of these modules using a module load command like:

          module load wrf-python/1.3.4.1-foss-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wrf-python/1.3.4.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/wtdbg2/", "title": "wtdbg2", "text": ""}, {"location": "available_software/detail/wtdbg2/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wtdbg2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wtdbg2, load one of these modules using a module load command like:

          module load wtdbg2/2.5-GCCcore-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wtdbg2/2.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/wxPython/", "title": "wxPython", "text": ""}, {"location": "available_software/detail/wxPython/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wxPython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wxPython, load one of these modules using a module load command like:

          module load wxPython/4.2.0-foss-2021b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wxPython/4.2.0-foss-2021b x x x x x x wxPython/4.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/wxWidgets/", "title": "wxWidgets", "text": ""}, {"location": "available_software/detail/wxWidgets/#available-modules", "title": "Available modules", "text": "

          The overview below shows which wxWidgets installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using wxWidgets, load one of these modules using a module load command like:

          module load wxWidgets/3.2.0-GCC-11.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty wxWidgets/3.2.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/x264/", "title": "x264", "text": ""}, {"location": "available_software/detail/x264/#available-modules", "title": "Available modules", "text": "

          The overview below shows which x264 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using x264, load one of these modules using a module load command like:

          module load x264/20230226-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty x264/20230226-GCCcore-12.3.0 x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x x264/20220620-GCCcore-11.3.0 x x x x x x x264/20210613-GCCcore-11.2.0 x x x x x x x264/20210414-GCCcore-10.3.0 x x x x x x x264/20201026-GCCcore-10.2.0 x x x x x x x264/20191217-GCCcore-9.3.0 - x x - x x x264/20190925-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/x265/", "title": "x265", "text": ""}, {"location": "available_software/detail/x265/#available-modules", "title": "Available modules", "text": "

          The overview below shows which x265 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using x265, load one of these modules using a module load command like:

          module load x265/3.5-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty x265/3.5-GCCcore-12.3.0 x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x x265/3.5-GCCcore-11.3.0 x x x x x x x265/3.5-GCCcore-11.2.0 x x x x x x x265/3.5-GCCcore-10.3.0 x x x x x x x265/3.3-GCCcore-10.2.0 x x x x x x x265/3.3-GCCcore-9.3.0 - x x - x x x265/3.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/xESMF/", "title": "xESMF", "text": ""}, {"location": "available_software/detail/xESMF/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xESMF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using xESMF, load one of these modules using a module load command like:

          module load xESMF/0.3.0-intel-2020b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xESMF/0.3.0-intel-2020b - x x - x x xESMF/0.3.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/xarray/", "title": "xarray", "text": ""}, {"location": "available_software/detail/xarray/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xarray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using xarray, load one of these modules using a module load command like:

          module load xarray/2023.9.0-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xarray/2023.9.0-gfbf-2023a x x x x x x xarray/2023.4.2-gfbf-2022b x x x x x x xarray/2022.6.0-foss-2022a x x x x x x xarray/0.20.1-intel-2021b x x x - x x xarray/0.20.1-foss-2021b x x x x x x xarray/0.19.0-foss-2021a x x x x x x xarray/0.16.2-intel-2020b - x x - x x xarray/0.16.2-fosscuda-2020b - - - - x - xarray/0.16.1-foss-2020a-Python-3.8.2 - x x - x x xarray/0.15.1-intel-2019b-Python-3.7.4 - x x - x x xarray/0.15.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/xorg-macros/", "title": "xorg-macros", "text": ""}, {"location": "available_software/detail/xorg-macros/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using xorg-macros, load one of these modules using a module load command like:

          module load xorg-macros/1.20.0-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-10.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-10.2.0 x x x x x x xorg-macros/1.19.2-GCCcore-9.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/xprop/", "title": "xprop", "text": ""}, {"location": "available_software/detail/xprop/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xprop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using xprop, load one of these modules using a module load command like:

          module load xprop/1.2.5-GCCcore-10.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xprop/1.2.5-GCCcore-10.2.0 - x x x x x xprop/1.2.4-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/xproto/", "title": "xproto", "text": ""}, {"location": "available_software/detail/xproto/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xproto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using xproto, load one of these modules using a module load command like:

          module load xproto/7.0.31-GCCcore-10.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xproto/7.0.31-GCCcore-10.3.0 - x x - x x xproto/7.0.31-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/xtb/", "title": "xtb", "text": ""}, {"location": "available_software/detail/xtb/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xtb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using xtb, load one of these modules using a module load command like:

          module load xtb/6.6.1-gfbf-2023a\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xtb/6.6.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/xxd/", "title": "xxd", "text": ""}, {"location": "available_software/detail/xxd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which xxd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using xxd, load one of these modules using a module load command like:

          module load xxd/9.0.2112-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty xxd/9.0.2112-GCCcore-12.3.0 x x x x x x xxd/9.0.1696-GCCcore-12.2.0 x x x x x x xxd/8.2.4220-GCCcore-11.3.0 x x x x x x xxd/8.2.4220-GCCcore-11.2.0 x x x - x x xxd/8.2.4220-GCCcore-10.3.0 - - - x - - xxd/8.2.4220-GCCcore-10.2.0 - - - x - -"}, {"location": "available_software/detail/yaff/", "title": "yaff", "text": ""}, {"location": "available_software/detail/yaff/#available-modules", "title": "Available modules", "text": "

          The overview below shows which yaff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using yaff, load one of these modules using a module load command like:

          module load yaff/1.6.0-intel-2020a-Python-3.8.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty yaff/1.6.0-intel-2020a-Python-3.8.2 x x x x x x yaff/1.6.0-intel-2019b-Python-3.7.4 - x x - x x yaff/1.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/yaml-cpp/", "title": "yaml-cpp", "text": ""}, {"location": "available_software/detail/yaml-cpp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which yaml-cpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using yaml-cpp, load one of these modules using a module load command like:

          module load yaml-cpp/0.7.0-GCCcore-12.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty yaml-cpp/0.7.0-GCCcore-12.3.0 x x x x x x yaml-cpp/0.7.0-GCCcore-11.2.0 x x x - x x yaml-cpp/0.6.3-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/zUMIs/", "title": "zUMIs", "text": ""}, {"location": "available_software/detail/zUMIs/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zUMIs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zUMIs, load one of these modules using a module load command like:

          module load zUMIs/2.9.7-foss-2023a-R-4.3.2\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zUMIs/2.9.7-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/zarr/", "title": "zarr", "text": ""}, {"location": "available_software/detail/zarr/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zarr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zarr, load one of these modules using a module load command like:

          module load zarr/2.16.0-foss-2022b\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zarr/2.16.0-foss-2022b x x x x x x zarr/2.13.3-foss-2022a x x x x x x zarr/2.13.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/zfp/", "title": "zfp", "text": ""}, {"location": "available_software/detail/zfp/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zfp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zfp, load one of these modules using a module load command like:

          module load zfp/1.0.0-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zfp/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib-ng/", "title": "zlib-ng", "text": ""}, {"location": "available_software/detail/zlib-ng/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zlib-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zlib-ng, load one of these modules using a module load command like:

          module load zlib-ng/2.0.7-GCCcore-11.3.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zlib-ng/2.0.7-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib/", "title": "zlib", "text": ""}, {"location": "available_software/detail/zlib/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zlib, load one of these modules using a module load command like:

          module load zlib/1.2.13-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zlib/1.2.13-GCCcore-13.2.0 x x x x x x zlib/1.2.13-GCCcore-12.3.0 x x x x x x zlib/1.2.13 x x x x x x zlib/1.2.12-GCCcore-12.2.0 x x x x x x zlib/1.2.12-GCCcore-11.3.0 x x x x x x zlib/1.2.12 x x x x x x zlib/1.2.11-GCCcore-11.2.0 x x x x x x zlib/1.2.11-GCCcore-10.3.0 x x x x x x zlib/1.2.11-GCCcore-10.2.0 x x x x x x zlib/1.2.11-GCCcore-9.3.0 x x x x x x zlib/1.2.11-GCCcore-8.3.0 x x x x x x zlib/1.2.11-GCCcore-8.2.0 - x - - - - zlib/1.2.11 x x x x x x"}, {"location": "available_software/detail/zstd/", "title": "zstd", "text": ""}, {"location": "available_software/detail/zstd/#available-modules", "title": "Available modules", "text": "

          The overview below shows which zstd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

          To start using zstd, load one of these modules using a module load command like:

          module load zstd/1.5.5-GCCcore-13.2.0\n

          (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

          accelgor doduo donphan gallade joltik skitty zstd/1.5.5-GCCcore-13.2.0 x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x zstd/1.5.2-GCCcore-11.3.0 x x x x x x zstd/1.5.0-GCCcore-11.2.0 x x x x x x zstd/1.4.9-GCCcore-10.3.0 x x x x x x zstd/1.4.5-GCCcore-10.2.0 x x x x x x zstd/1.4.4-GCCcore-9.3.0 - x x x x x zstd/1.4.4-GCCcore-8.3.0 x - - - x -"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          You can also check whether a specific piece of software, a compiler, or an application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
          "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": "

          Everyone can get access to and use the HPC-UGent supercomputing infrastructure and services. The conditions that apply depend on your affiliation.

          "}, {"location": "sites/hpc_policies/#access-for-staff-and-academics", "title": "Access for staff and academics", "text": ""}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-flemish-university-associations", "title": "Researchers and staff affiliated with Flemish university associations", "text": "
          • Includes externally funded researchers registered in the personnel database (FWO, SBO, VIB, IMEC, etc.).

          • Includes researchers from all VSC partners.

          • Usage is free of charge.

          • Use your account credentials at your affiliated university to request a VSC-id and connect.

          • See Getting a HPC Account.

          "}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-other-flemish-or-federal-research-institutes", "title": "Researchers and staff affiliated with other Flemish or federal research institutes", "text": "
          • Includes researchers from e.g. INBO, ILVO, RBINS, etc.

          • HPC-UGent promotes using the Tier-1 services of the VSC.

          • HPC-UGent can act as a liaison.

          "}, {"location": "sites/hpc_policies/#students", "title": "Students", "text": "
          • Students (Bachelor or Master) enrolled at an institution mentioned above can also use HPC-UGent.

          • Same conditions apply, free of charge for all Flemish university associations.

          • Use your university account credentials to request a VSC-id and connect.

          "}, {"location": "sites/hpc_policies/#access-for-industry", "title": "Access for industry", "text": "

          Researchers and developers from industry can use the services and infrastructure tailored to industry from VSC.

          "}, {"location": "sites/hpc_policies/#our-offer", "title": "Our offer", "text": "
          • VSC has a dedicated service geared towards industry.

          • HPC-UGent can act as a liaison to the VSC services.

          "}, {"location": "sites/hpc_policies/#research-partnership", "title": "Research partnership:", "text": "
          • Interested in collaborating in supercomputing with a UGent research group?

          • We can help you look for a collaborative partner. Contact hpc@ugent.be.

          "}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
          $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

          You can also check whether a specific piece of software, a compiler, or an application (e.g., LAMMPS) is installed on the HPC:

          $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

          Since you may not know the exact capitalisation of the module name, we performed a case-insensitive search using the \"-i\" option.

          "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          You can also check whether a specific piece of software, a compiler, or an application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
          "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

          (more info soon)

          "}]} \ No newline at end of file diff --git a/HPC/Gent/Linux/sitemap.xml.gz b/HPC/Gent/Linux/sitemap.xml.gz index 87200a63eff..499a5f2e800 100644 Binary files a/HPC/Gent/Linux/sitemap.xml.gz and b/HPC/Gent/Linux/sitemap.xml.gz differ diff --git a/HPC/Gent/Linux/useful_linux_commands/index.html b/HPC/Gent/Linux/useful_linux_commands/index.html index 63a59037a62..c5339f4e63f 100644 --- a/HPC/Gent/Linux/useful_linux_commands/index.html +++ b/HPC/Gent/Linux/useful_linux_commands/index.html @@ -1496,7 +1496,7 @@

          How to get started with shell scr
          nano foo
           

          or use the following commands:

          -
          echo "echo Hello! This is my hostname:" > foo
          +
          echo "echo 'Hello! This is my hostname:'" > foo
           echo hostname >> foo
           

          The easiest ways to run a script is by starting the interpreter and pass @@ -1521,7 +1521,9 @@

          How to get started with shell scr /bin/bash

          We edit our script and change it with this information:

          -
          #!/bin/bash echo \"Hello! This is my hostname:\" hostname
          +
          #!/bin/bash
          +echo "Hello! This is my hostname:"
          +hostname
           

          Note that the "shebang" must be the first line of your script! Now the operating system knows which program should be started to run the diff --git a/HPC/Gent/Windows/search/search_index.json b/HPC/Gent/Windows/search/search_index.json index 342b95a53b1..2a845351a98 100644 --- a/HPC/Gent/Windows/search/search_index.json +++ b/HPC/Gent/Windows/search/search_index.json @@ -1 +1 @@ -{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the HPC-UGent documentation", "text": "

          Use the menu on the left to navigate, or use the search box on the top right.

          You are viewing documentation intended for people using Windows.

          Use the OS dropdown in the top bar to switch to a different operating system.

          Quick links

          • Getting Started | Getting Access
          • Recording of HPC-UGent intro
          • Linux Tutorial
          • Hardware overview
          • Migration of cluster and login nodes to RHEL9 (starting Sept'24)
          • FAQ | Troubleshooting | Best practices | Known issues

          If you find any problems in this documentation, please report them by mail to hpc@ugent.be or open a pull request.

          If you still have any questions, you can contact the HPC-UGent team.

          "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": "

          New users should consult the Introduction to HPC to get started, which is a great resource for learning the basics, troubleshooting, and looking up specifics.

          If you want to use software that's not yet installed on the HPC, send us a software installation request.

          Overview of HPC-UGent Tier-2 infrastructure

          "}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

          An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.
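
          As a minimal sketch of such a scaling test (scaling_test.sh is a hypothetical job script name), you could submit the same job with increasing core counts and compare the reported walltimes:

          # submit the same (hypothetical) job script with 4, 8 and 16 cores and compare walltimes\nqsub -l nodes=1:ppn=4 scaling_test.sh\nqsub -l nodes=1:ppn=8 scaling_test.sh\nqsub -l nodes=1:ppn=16 scaling_test.sh\n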

          See also: Running batch jobs.

          "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

          When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

          Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

          Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.
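
          For example, you can inspect what these bundles contain and which versions exist with module spider (the exact versions shown will differ per cluster):

          module spider SciPy-bundle\nmodule spider R-bundle-Bioconductor\n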

          If the package or library you want is not available, send us a software installation request.

          "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

          Modules each come with a suffix that describes the toolchain used to install them.

          Examples:

          • AlphaFold/2.2.2-foss-2021a

          • tqdm/4.61.2-GCCcore-10.3.0

          • Python/3.9.5-GCCcore-10.3.0

          • matplotlib/3.4.2-foss-2021a

          Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

          The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.
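
          As an illustration (a sketch using the example modules listed above, which all belong to the 2021a toolchain generation), such a compatible set can be loaded together:

          # foss-2021a is built on top of GCCcore-10.3.0, so these modules are compatible\nmodule load Python/3.9.5-GCCcore-10.3.0 tqdm/4.61.2-GCCcore-10.3.0\nmodule load matplotlib/3.4.2-foss-2021a AlphaFold/2.2.2-foss-2021a\n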

          You can use module avail [search_text] to see which versions on which toolchains are available to use.

          If you need something that's not available yet, you can request it through a software installation request.

          It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

          "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

          When incompatible modules are loaded, you might encounter an error like this:

          Lmod has detected the following error: A different version of the 'GCC' module\nis already loaded (see output of 'ml').\n

          You should load another foss module, one that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

          Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

          An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

          See also: How do I choose the job modules?

          "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

          The 72 hour walltime limit will not be extended. However, you can work around this barrier:

          • Check that all available resources are being used. See also:
            • How many cores/nodes should I request?.
            • My job is slow.
            • My job isn't using any GPUs.
          • Use a faster cluster.
          • Divide the job into more parallel processes.
          • Divide the job into shorter processes, which you can submit as separate jobs.
          • Use the built-in checkpointing of your software.
          "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

          Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

          When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

          Try requesting a bit more memory than your proportional share, and see if that solves the issue.
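
          A hypothetical sketch of such a request in a job script is shown below; check Specifying memory requirements for the exact resource options supported on your cluster:

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n# ask for more memory than the default proportional share (see Specifying memory requirements for the supported options)\n#PBS -l mem=16gb\n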

          See also: Specifying memory requirements.

          "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

          When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

          It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

          See also: Running interactive jobs.

          "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

          Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

          Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fosscuda toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

          See also: HPC-UGent GPU clusters.

          "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

          There are a few possible causes why a job can perform worse than expected.

          Is your job using all the available cores you've requested? You can test this by increasing and decreasing the core count: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

          Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

          Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example how to do this: The job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
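
          A minimal sketch of that pattern in a job script (my_input, my_program and my_output are hypothetical names; my_program could come from a loaded module):

          #!/bin/bash\n#PBS -l nodes=1:ppn=8\n# stage input from the slower $VSC_DATA to the faster $VSC_SCRATCH\ncp -r $VSC_DATA/my_input $VSC_SCRATCH/\ncd $VSC_SCRATCH\n# run the actual computation on the fast filesystem\nmy_program my_input > my_output\n# copy the results back to $VSC_DATA when done\ncp my_output $VSC_DATA/\n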

          "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

          Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

          To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
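
          A minimal sketch of an MPI job script that follows this advice (mpi_program is a hypothetical executable), submitted with qsub:

          #!/bin/bash\n#PBS -l nodes=2:ppn=8\n# load the wrapper that sets up MPI correctly for this infrastructure\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\n# mympirun determines the number of processes and the host list itself\nmympirun ./mpi_program\n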

          See also: Multi core jobs/Parallel Computing and Mympirun.

          "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

          For example, we have a simple script (./hello.sh):

          #!/bin/bash \necho \"hello world\"\n

          And we run it like mympirun ./hello.sh --output output.txt.

          To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

          mympirun --output output.txt ./hello.sh\n
          "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

          See the explanation about how jobs get prioritized in When will my job start.

          "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

          When trying to create files, errors like this can occur:

          No space left on device\n

          The error \"No space left on device\" can mean two different things:

          • all available storage quota on the file system in question has been used;
          • the inode limit has been reached on that file system.

          An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

          Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
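
          For instance (a sketch; many_small_files/ is a hypothetical directory name), packing a directory with many small files into a single archive frees up inodes:

          tar czf many_small_files.tar.gz many_small_files/\n# verify the archive before removing the originals\ntar tzf many_small_files.tar.gz > /dev/null && rm -r many_small_files/\n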

          If the problem persists, feel free to contact support.

          "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

          NO. You are not allowed to share your VSC account with anyone else, it is strictly personal.

          See https://helpdesk.ugent.be/account/en/regels.php.

          If you want to share data, there are alternatives (like a shared directories in VO space, see Virtual organisations).

          "}, {"location": "FAQ/#can-i-share-my-data-with-other-hpc-users", "title": "Can I share my data with other HPC users?", "text": "

          Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

          $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc40000 mygroup      40 Apr 12 15:00 dataset.txt\n
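
          A chmod-based alternative is sketched below; note that this grants read access to everyone in the file's group rather than to a single user:

          chmod g+r dataset.txt\n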

          For more information about chmod or setfacl, see Linux tutorial.

          "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

          Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

          "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

          Please fill out the details about the software and why you need it in this form: https://www.ugent.be/hpc/en/support/software-installation-request. When submitting the form, a mail will be sent to hpc@ugent.be containing all the provided information. The HPC team will look into your request as soon as possible and contact you when the installation is done or if further information is required.

          If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
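
          A minimal sketch of such a manual installation (some-package and the chosen paths are hypothetical; the Python module version is only an example):

          # load a Python module first, then create an isolated virtual environment\nmodule load Python/3.9.5-GCCcore-10.3.0\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\npip install some-package\n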

          "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

          On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

          MacOS & Linux (on Windows, only the second part is shown):

          @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

          Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

          "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

          A Virtual Organisation consists of a number of members and moderators. A moderator can:

          • Manage the VO members (but can't access/remove their data on the system).

          • See how much storage each member has used, and set limits per member.

          • Request additional storage for the VO.

          One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

          See also: Virtual Organisations.

          "}, {"location": "FAQ/#my-ugent-shared-drives-dont-show-up", "title": "My UGent shared drives don't show up", "text": "

          After mounting the UGent shared drives with kinit your_email@ugent.be, you might not see an entry with your username when listing ls /UGent. This is normal: try ls /UGent/your_username or cd /UGent/your_username, and you should be able to access the drives. Be sure to use your UGent username and not your VSC username here.

          See also: Your UGent home drive and shares.

          "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

          Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

          du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

          The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

          The egrep command will only let entries that match with the specified regular expression [0-9]{3}M|[0-9]G through, which corresponds with files that consume more than 100 MB.

          "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

          By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

          You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

          "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

          When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

          sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

          A lot of tasks can be performed without sudo, including installing software in your own account.

          Installing software

          • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
          • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
          "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

          Who can I contact?

          • General questions regarding HPC-UGent and VSC: hpc@ugent.be

          • HPC-UGent Tier-2: hpc@ugent.be

          • VSC Tier-1 compute: compute@vscentrum.be

          • VSC Tier-1 cloud: cloud@vscentrum.be

          "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

          Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

          "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

          The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

          "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

          Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

          module load hod\n
          "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

          The hod modules are constructed such that they can be used on the HPC-UGent infrastructure login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

          As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

          For example, this will work as expected:

          $ module swap cluster/donphan\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

          Note that modules named hanythingondemand/* are also available. However, these should not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

          "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

          The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

          $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

By defining these environment variables, you do not have to specify --hod-module and --workdir yourself when using hod batch or hod create, even though these options are strictly required.

          If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
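
For example, a minimal sketch of using a different parent working directory (the path shown is illustrative):

# use a custom parent working directory for both hod batch and hod create\nexport HOD_BATCH_WORKDIR=$VSC_SCRATCH/hod_experiments\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/hod_experiments\n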

          Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

          "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

After HOD clusters terminate, their local working directory and cluster information are typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

          These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

          You should occasionally clean this up using hod clean:

$ module list\nCurrently Loaded Modulefiles:\n  1) cluster/doduo(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        123456         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/123456 for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/donphan\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.donphan.gent.vsc &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.donphan.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
          Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

          "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

          If you have any questions, or are experiencing problems using HOD, you have a couple of options:

          • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

          • Contact the HPC-UGent team via hpc@ugent.be

          • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

          "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

          Note

          To run a MATLAB program on the HPC-UGent infrastructure you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

Compiling MATLAB programs is only possible on the interactive debug cluster, not on the HPC-UGent login nodes, where resource limits w.r.t. memory and max. number of processes are too strict.

          "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

          The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

          Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

Only a limited number of MATLAB sessions can be active at the same time because there are only a limited number of MATLAB research licenses available on the UGent MATLAB license server. If each job needed a license, licenses would quickly run out.

          "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

Compiling MATLAB code can only be done from the login nodes, because only login nodes can access the MATLAB license server; cluster workernodes cannot.

          To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

          $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

          After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

          To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

          First, we copy the magicsquare.m example that comes with MATLAB to example.m:

          cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

          To compile a MATLAB program, use mcc -mv:

mcc -mv example.m\nOpening log file:  /user/home/gent/vsc400/vsc40000/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/home/gent/vsc400/vsc40000/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/home/gent/vsc400/vsc40000/readme.txt\".\nGenerating file \"run_example.sh\".\n
          "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

          To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

          It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

          For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.
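
Written out as a command you can run after loading the MATLAB module (directory names as in the example above):

# compile example.m; also search the examplelib directory for MATLAB files,\n# and bundle everything under datafiles/ into the resulting executable\nmcc -mv example.m -I examplelib -a datafiles\n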

          "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

          If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

          export _JAVA_OPTIONS=\"-Xmx64M\"\n

The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it tries to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

          Another possible issue is that the heap size is too small. This could result in errors like:

          Error: Out of memory\n

A possible solution is to set the maximum heap size to a larger value:

          export _JAVA_OPTIONS=\"-Xmx512M\"\n
          "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

          MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers explicitly, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

          You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

          parpool.m
          % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

          See also the parpool documentation.

          "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

          MATLAB_LOG_DIR=<OUTPUT_DIR>\n

          where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

# create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\n$ export MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

          You should remove the directory at the end of your job script:

          rm -rf $MATLAB_LOG_DIR\n
          "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

          The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

          So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

          "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

          All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

          jobscript.sh
          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
          "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

VNC is still available at the UGent site, but we encourage our users to replace VNC with the X2Go client. Please see Graphical applications with X2Go for more information.

          Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

          Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

          "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

First, log in to the login node (see First time connection to the HPC infrastructure), then start vncserver with:

$ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'gligar07.gastly.os:6 (vsc40000)' desktop is gligar07.gastly.os:6\n\nCreating default startup script /user/home/gent/vsc400/vsc40000/.vnc/xstartup\nCreating default config /user/home/gent/vsc400/vsc40000/.vnc/config\nStarting applications specified in /user/home/gent/vsc400/vsc40000/.vnc/xstartup\nLog file is /user/home/gent/vsc400/vsc40000/.vnc/gligar07.gastly.os:6.log\n

          When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account you can!

          Note down the details in bold: the hostname (in the example: gligar07.gastly.os) and the (partial) port number (in the example: 6).

It's important to remember that VNC sessions are permanent: they survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (like with the terminal equivalents screen or tmux). It also means you don't have to start vncserver each time you want to connect.

          "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

          You can get a list of running VNC servers on a node with

          $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

          This only displays the running VNC servers on the login node you run the command on.

          To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

          $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/gligar07.gastly.os:6.pid\n.vnc/gligar08.gastly.os:8.pid\n

This shows that there is a VNC server running on gligar07.gastly.os on port 5906 and another one running on gligar08.gastly.os on port 5908 (see also Determining the source/destination port).

          "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

The VNC server runs on a specific login node (in the example above, on gligar07.gastly.os).

          In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

          Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

          To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

          The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

          "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

          The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

          The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

          So, in our running example, both the source and destination ports are 5906.
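
As a quick sanity check, you can let the shell do the arithmetic for the running example:

# destination port = 5900 + the partial port number noted down earlier\necho $((5900 + 6))   # prints 5906\n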

          "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

          In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.ugent.be (see Setting up the SSH tunnel(s)).

          If the login node you end up on is a different one than the one where your VNC server is running (i.e., gligar08.gastly.os rather than gligar07.gastly.os in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

          In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

          To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

Now we have a chicken-and-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to gligar07.gastly.os, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

          In practice, if you pick a random number between $10000$ and $30000$, you have a good chance that the port will not be used yet.

          We will proceed with $12345$ as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than $1025$).
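
A minimal sketch for generating a random port number in the suggested range (assuming a bash shell):

# pick a random intermediate port between 10000 and 29999\necho $((10000 + RANDOM % 20000))\n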

          "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcugentbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.ugent.be", "text": "

First, we will set up the SSH tunnel from our workstation to login.hpc.ugent.be.

          Use the settings specified in the sections above:

          • source port: the port on which the VNC server is running (see Determining the source/destination port);

          • destination host: localhost;

          • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

For detailed information on how to configure PuTTY to set up the SSH tunnel by entering these settings in the corresponding fields, see the SSH tunnel section.

          With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.
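
If you are working from a Linux or macOS workstation rather than with PuTTY, the same first tunnel can be set up with a plain ssh command; this is a minimal sketch, assuming vsc40000 is your VSC account and 12345 is the intermediate port you picked:

ssh -L 5906:localhost:12345 vsc40000@login.hpc.ugent.be\n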

          Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

          "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

          Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

          netstat -an | grep -i listen | grep tcp | grep 12345\n

          If you see no matching lines, then the port you picked is still available, and you can continue.

          If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

          $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
          "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

          In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.ugent.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (gligar07.gastly.os in our running example, see Starting a VNC server).

          To do this, run the following command:

          $ ssh -L 12345:localhost:5906 gligar07.gastly.os\n$ hostname\ngligar07.gastly.os\n

          With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (gligar07.gastly.os).

Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (gligar07.gastly.os) in the command shown above!

          As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

          "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

          You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. You can download the latest version by clicking the top-most folder that has a version number in it that doesn't also have beta in the version. Then download a file that looks like TurboVNC64-2.1.2.exe (the version number can be different, but the 64 should be in the filename) and execute it.

Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

When prompted for a password, use the password you used to set up the VNC server.

          When prompted for default or empty panel, choose default.

          If you have an empty panel, you can reset your settings with the following commands:

          xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
          "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

          The VNC server can be killed by running

          vncserver -kill :6\n

          where 6 is the port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

          "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).
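
A minimal sketch of that full sequence, assuming 6 is your display number (replace it with your own):

# stop the running VNC server\nvncserver -kill :6\n# remove the stored VNC password\nrm ~/.vnc/passwd\n# start a new VNC server; you will be prompted to choose a new password\nvncserver -geometry 1920x1080 -localhost\n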

          "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

          All users of AUGent can request an account on the HPC, which is part of the Flemish Supercomputing Centre (VSC).

          See HPC policies for more information on who is entitled to an account.

The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish university associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

          There are two methods for connecting to HPC-UGent infrastructure:

          • Using a terminal to connect via SSH.
          • Using the web portal

          The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of utilizing the HPC-UGent web portal by reading Using the HPC-UGent web portal.

The HPC-UGent infrastructure clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the HPC. Access to the HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

          "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
          • an SSH public/private key pair can be seen as a lock and a key

          • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

          • the SSH private key is like a physical key: you don't hand it out to other people.

          • anyone who has the key (and the optional password) can unlock the door and log in to the account.

          • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

          Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). A typical Windows environment does not come with pre-installed software to connect and run command-line executables on a HPC. Some tools need to be installed on your Windows machine first, before we can start the actual work.

          "}, {"location": "account/#get-putty-a-free-telnetssh-client", "title": "Get PuTTY: A free telnet/SSH client", "text": "

We recommend using the PuTTY tools package, which is freely available.

You do not need to install PuTTY: you can simply download the PuTTY and PuTTYgen executables and run them. This can be useful in situations where you do not have the required permissions to install software on the computer you are using. Alternatively, an installation package is also available.

You can download PuTTY from the official address: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html. You probably want the 64-bit version. If you can install software on your computer, you can use the \"Package files\"; if not, you can download and use putty.exe and puttygen.exe from the \"Alternative binary files\" section.

          The PuTTY package consists of several components, but we'll only use two:

          1. PuTTY: the Telnet and SSH client itself (to login, see Open a terminal)

          2. PuTTYgen: an RSA and DSA key generation utility (to generate a key pair, see Generate a public/private key pair)

          "}, {"location": "account/#generating-a-publicprivate-key-pair", "title": "Generating a public/private key pair", "text": "

          Before requesting a VSC account, you need to generate a pair of ssh keys. You need 2 keys, a public and a private key. You can visualise the public key as a lock to which only you have the key (your private key). You can send a copy of your lock to anyone without any problems, because only you can open it, as long as you keep your private key secure. To generate a public/private key pair, you can use the PuTTYgen key generator.

Start PuTTYgen.exe and follow these steps:

          1. In Parameters (at the bottom of the window), choose \"RSA\" and set the number of bits in the key to 4096.

          2. Click on Generate. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field Public key for pasting into OpenSSH authorized_keys file.

3. Next, it is advised to fill in the Key comment field to make the key easier to identify afterwards.

          4. Next, you should specify a passphrase in the Key passphrase field and retype it in the Confirm passphrase field. Remember, the passphrase protects the private key against unauthorised use, so it is best to choose one that is not too easy to guess but that you can still remember. Using a passphrase is not required, but we recommend you to use a good passphrase unless you are certain that your computer's hard disk is encrypted with a decent password. (If you are not sure your disk is encrypted, it probably isn't.)

          5. Save both the public and private keys in a folder on your personal computer (We recommend to create and put them in the folder \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\") with the buttons Save public key and Save private key. We recommend using the name \"id_rsa.pub\" for the public key, and \"id_rsa.ppk\" for the private key.

6. Finally, save an \"OpenSSH\" version of your private key (in particular for later \"X2Go\" usage, see x2go) by entering the \"Conversions\" menu and selecting \"Export OpenSSH key\" (do not select the \"force new file format\" variant). Save the file in the same location as in the previous step with filename \"id_rsa\". (If there is no \"Conversions\" menu, you must update your \"puttygen\" version. If you want to do this conversion afterwards, you can start by loading an existing \"id_rsa.ppk\" and only perform this conversion/export step.)

          If you use another program to generate a key pair, please remember that they need to be in the OpenSSH format to access the HPC clusters.
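
If you are generating the key pair on a Linux or macOS machine instead of with PuTTYgen, a minimal sketch using OpenSSH (which produces keys in the required OpenSSH format; the file name is just the conventional default) is:

# generate a 4096-bit RSA key pair; you will be prompted for a passphrase\nssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa\n# the public key to upload afterwards is ~/.ssh/id_rsa.pub\n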

          "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

It is possible to set up an SSH agent in Windows. This is an optional configuration that helps you keep all your SSH keys (if you have several) stored in the same key ring, so you do not have to type the SSH key password each time. The SSH agent is also necessary to enable SSH hops with key forwarding from Windows.

Pageant is the SSH authentication agent used in Windows. This agent is available from the PuTTY installation package https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html or as a stand-alone binary package.

After the installation, just start the Pageant application in Windows; this will start the agent in the background. The agent icon will be visible in the Windows panel.

          At this point the agent does not contain any private key. You should include the private key(s) generated in the previous section Generating a public/private key pair.

          1. Click on Add key

          2. Select the private key file generated in Generating a public/private key pair (\"id_rsa.ppk\" by default).

          3. Enter the same SSH key password used to generate the key. After this step the new key will be included in Pageant to manage the SSH connections.

4. You can see the SSH key(s) available in the key ring by clicking on View Keys.

          5. You can change PuTTY setup to use the SSH agent. Open PuTTY and check Connection > SSH > Auth > Allow agent forwarding.

Now you can connect to the login nodes as usual. The SSH agent will know which SSH key should be used, and you do not have to type the SSH key password each time; this is done automatically by Pageant.

          It is also possible to use WinSCP with Pageant, see https://winscp.net/eng/docs/ui_pageant for more details.

          "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

          Visit https://account.vscentrum.be/

          You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

          Select \"UGent\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

          Click Confirm

          You will now be taken to the authentication page of your institute.

          You will now have to log in with CAS using your UGent account.

          You either have a login name of maximum 8 characters, or a (non-UGent) email address if you are an external user. In case of problems with your UGent password, please visit: https://password.ugent.be/. After logging in, you may be requested to share your information. Click \"Yes, continue\".

          After you log in using your UGent login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

          This file should have been stored in the directory \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\"

          After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

          "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

          Within one day, you should receive a Welcome e-mail with your VSC account details.

          Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc40000\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

          Now, you can start using the HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

          "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

          In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

          1. Create a new public/private SSH key pair from Putty. Repeat the process described in section\u00a0Generate a public/private key pair.

          2. Go to https://account.vscentrum.be/django/account/edit

          3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

          4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

          5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

          "}, {"location": "account/#computation-workflow-on-the-hpc", "title": "Computation Workflow on the HPC", "text": "

          A typical Computation workflow will be:

          1. Connect to the HPC

          2. Transfer your files to the HPC

          3. Compile your code and test it

          4. Create a job script

          5. Submit your job

          6. Wait while

            1. your job gets into the queue

            2. your job gets executed

            3. your job finishes

          7. Move your results

          We'll take you through the different tasks one by one in the following chapters.

          "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

          AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

          See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

          "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

This chapter focuses specifically on the use of AlphaFold on the HPC-UGent infrastructure. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

          • AlphaFold website: https://alphafold.com/
          • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
          • AlphaFold FAQ: https://alphafold.com/faq
          • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
          • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
          • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
            • recording available on YouTube
            • slides available here (PDF)
            • see also https://www.vscentrum.be/alphafold
          "}, {"location": "alphafold/#using-alphafold-on-hpc-ugent-infrastructure", "title": "Using AlphaFold on HPC-UGent infrastructure", "text": "

          Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

          $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

          To use AlphaFold, you should load a particular module, for example:

          module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

          We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

          Warning

          When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

          Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

          $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

          The directories located there indicate when the data was downloaded, so that this leaves room for providing updated datasets later.

As of writing this documentation, the latest version is 20230310.

          Info

          The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

          The AlphaFold installations we provide have been modified a bit to facilitate the usage on HPC-UGent infrastructure.

          "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

          The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

          export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

          Use newest version

          Do not forget to replace 20230310 with a more up to date version if available.

          "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

          AlphaFold provides a script called run_alphafold.py

          A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

          The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

          For more information about the script and options see this section in the official README.

          READ README

          It is strongly advised to read the official README provided by DeepMind before continuing.

          "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

          The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

          Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

          Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
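
For example, a minimal sketch of overriding these core counts in your job script (the values shown are purely illustrative; the defaults are 4 and 8):

# run hhblits on 2 cores and jackhmmer on 4 cores\nexport ALPHAFOLD_HHBLITS_N_CPU=2\nexport ALPHAFOLD_JACKHMMER_N_CPU=4\n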

          Info

          Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

          "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

          Using --db_preset=full_dbs, the following runtime data was collected:

          • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
          • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
          • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
          • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

          This highlights a couple of important attention points:

          • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
          • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
          • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

          With --db_preset=casp14, it is clearly more demanding:

          • On doduo, with 24 cores (1 node): still running after 48h...
          • On joltik, 1 V100 GPU + 8 cores: 4h 48min

          This highlights the difference between CPU and GPU performance even more.

          "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

          The following example comes from the official Examples section in the Alphafold README. The run command is slightly different (see above: Running AlphaFold).

          Do not forget to set up the environment (see above: Setting up the environment).

          "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

          Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

          >sequence_name\n<SEQUENCE>\n

          Then run the following command in the same directory:

          alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

          See AlphaFold output, for information about the outputs.

          Info

          For more scenarios see the example section in the official README.

          "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

          The following two example job scripts can be used as a starting point for running AlphaFold.

          The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

          To run the job scripts you need to create a file named T1050.fasta with the following content:

          >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
          source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

          "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

          Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

          Swap to the joltik GPU before submitting it:

          module swap cluster/joltik\n
          AlphaFold-gpu-joltik.sh
          #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
          "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

          Jobscript that runs AlphaFold on CPU using 24 cores on one node.

          AlphaFold-cpu-doduo.sh
#!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

In case of problems or questions, don't hesitate to contact us at hpc@ugent.be.

          "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

          Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

          One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

          For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

This documentation only covers aspects of using Apptainer on the HPC-UGent infrastructure.

          "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to prevent the use of Apptainer from impacting other users on the system.

          The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

          If these limitations are a problem for you, please let us know via hpc@ugent.be.

          "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

          All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

          "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

Creating new Apptainer/Singularity images or converting Docker images requires, by default, admin privileges, which are obviously not available on the HPC-UGent infrastructure. However, if you use the --fakeroot option, you can create new Apptainer/Singularity images or convert Docker images without them.

Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of building an Apptainer/Singularity container image:

# avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# mv container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
          "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

          For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

          We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

          "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

          Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

          Create a job script like:

          #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

Create an example my_script.sh (the ~/my_script.sh referenced in the job script above):

          #!/bin/bash\n\n# prime factors\nfactor 1234567\n
          "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

          cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
          #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

          You can download linear_regression.py from the official Tensorflow repository.

          "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

          It is also possible to execute MPI jobs within a container, but the following requirements apply:

          • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

          • Use modules within the container (install the environment-modules or lmod package in your container)

          • Load the required module(s) before apptainer execution.

          • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

          Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

          cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

          For example to compile an MPI example:

          module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

          Example MPI job script:

          #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
          "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
          1. Before starting, you should always check:

            • Are there any errors in the script?

            • Are the required modules loaded?

            • Is the correct executable used?

2. Check your compute requirements upfront, and request the correct resources in your batch job script.

            • Number of requested cores

            • Amount of requested memory

            • Requested network type

3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

          4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network. A minimal job script sketch illustrating this is shown after this list.

          6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted the job from with cd $PBS_O_WORKDIR is the first thing that needs to be done. You will have your default environment, so don't forget to load the software with module load. A minimal job script sketch illustrating this (and point 5) follows this list.

          7. Submit your job and wait (be patient) ...

          8. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

          9. The runtime is limited by the maximum walltime of the queues.

          10. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

          11. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

          12. And above all, do not hesitate to contact the HPC staff at hpc@ugent.be. We're here to help you.
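
          As referenced in point 6, a minimal job script sketch illustrating points 5 and 6 (the module name, input file and executable name are placeholders):

          #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:00:00\n\ncd $PBS_O_WORKDIR                     # directory the job was submitted from\nmodule load foss                      # load the software you need (example module)\n\ncp input.dat $VSC_SCRATCH_NODE/       # work on the fast local scratch (placeholder input)\ncd $VSC_SCRATCH_NODE\n./my_program input.dat > output.dat   # placeholder executable\ncp output.dat $PBS_O_WORKDIR/         # copy results back before the job ends\n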

          "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

          All nodes in the HPC cluster are running the \"RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty)\" operating system, which is a specific version of Red Hat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the HPC first must be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). It also means that you first have to install all the required external software packages on the HPC.

          Most commonly used compilers are already pre-installed on the HPC and can be used straight away. Many popular external software packages that are regularly used in the scientific community are also pre-installed.

          "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-hpc", "title": "Check the pre-installed software on the HPC", "text": "

          To check all the available modules and their version numbers that are pre-installed on the HPC, enter:

          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          Or, when you want to check whether specific software, a compiler or an application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

          When your required application is not available on the HPC, please contact the HPC staff. Be aware of potential \"License Costs\"; \"Open Source\" software is often preferred.

          "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

          To port a software program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., Red Hat Enterprise Linux on our HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portably\" you wrote your code.

          In the simplest case, the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way that depends upon its detailed hardware, software and setup: with device drivers for particular devices, using installed operating system and supporting software components, and using specific directories.

          In some cases, software usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

          Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

          Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

          Porting your code to the RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty) platform is the responsibility of the end-user.

          "}, {"location": "compiling_your_software/#compiling-and-building-on-the-hpc", "title": "Compiling and building on the HPC", "text": "

          Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

          All the HPC nodes run the same version of the Operating System, i.e. RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

          A typical process looks like:

          1. Copy your software to the login node of the HPC;

          2. Start an interactive session on a compute node;

          3. Compile it;

          4. Test it locally;

          5. Generate your job scripts;

          6. Test it on the HPC;

          7. Run it (in parallel).

          We assume you've copied your software to the HPC. The next step is to request your private compute node.

          $ qsub -I\nqsub: waiting for job 123456 to start\n
          "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

          Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

          We now list the directory and explore the contents of the \"hello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

          hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include <stdio.h>\n#include <unistd.h>   /* for sleep() */\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\nreturn 0;\n}\n

          The \"hello.c\" program is a simple source file, written in C. It prints \"Hello #<num>\" 500 times, and waits one second between two printouts.

          We first need to compile this C-file into an executable with the gcc-compiler.

          First, check the command line options for \"gcc\" (the GNU C compiler), then compile the program. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

          $ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc40000 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc40000  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc40000  130 Sep 16 11:39 hello.pbs*\n

          A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos or syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant, so that a code change that produces a warning does not go unnoticed.

          Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

          $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

          It seems to work; now run it on the HPC:

          qsub hello.pbs\n

          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          List the directory and explore the contents of the \"mpihello.c\" program:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

          mpihello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\nreturn 0;\n}\n

          The \"mpihello.c\" program is a simple source file, written in C with MPI library calls.

          Now check the command line options for \"mpicc\" (the GNU C compiler with MPI extensions), then compile and list the contents of the directory again:

          mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

          A new file \"mpihello\" has been created. Note that this program has \"execute\" rights.

          Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work; now run it on the HPC.

          qsub mpihello.pbs\n
          "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

          We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

          cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

          We will compile this C/MPI file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

          module purge\nmodule load intel\n

          Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

          mpiicc -o mpihello mpihello.c\nls -l\n

          Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

          $ ./mpihello\nHello World from Node 0.\n

          It seems to work; now run it on the HPC.

          qsub mpihello.pbs\n

          Note: The AUGent only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

          Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Below is an overview of the C, C++ and Fortran compilers.

                    Sequential Program        Parallel Program (with MPI)
                    GNU        Intel          GNU        Intel
          C         gcc        icc            mpicc      mpiicc
          C++       g++        icpc           mpicxx     mpiicpc
          Fortran   gfortran   ifort          mpif90     mpiifort
          "}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

          Before you can really start using the HPC clusters, there are several things you need to do or know:

          1. You need to log on to the cluster using an SSH client to one of the login nodes or by using the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

          2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

          3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

          4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

          "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

          Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

          VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

          All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

          • Use a VPN connection to connect to the UGent network (recommended). See https://helpdesk.ugent.be/vpn/en/ for more information.

          • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your UGent account.

            • While this web connection is active, new SSH sessions can be started.

            • Active SSH sessions will remain active even when this web page is closed.

          • Contact your HPC support team (via hpc@ugent.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

          Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

          ssh_exchange_identification: read: Connection reset by peer\n
          "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

          The remaining content in this chapter is primarily focused on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

          If you have any issues connecting to the HPC after you've followed these steps, see Issues connecting to login node to troubleshoot.

          "}, {"location": "connecting/#open-a-terminal", "title": "Open a Terminal", "text": "

          You've generated a public/private key pair with PuTTYgen and have an approved account on the VSC clusters. The next step is to set up the connection to (one of the login nodes of) the HPC.

          In the screenshots, we show the setup for user \"vsc20167\"

          to the HPC cluster via the login node \"login.hpc.ugent.be\".

          1. Start the PuTTY executable putty.exe in your directory C:\\Program Files (x86)\\PuTTY and the configuration screen will pop up. As you will often use the PuTTY tool, we recommend adding a shortcut on your desktop.

          2. Within the category <Session>, in the field <Host Name>, enter the name of the login node of the cluster (i.e., \"login.hpc.ugent.be\") you want to connect to.

          3. In the category Connection > Data, in the field Auto-login username, put in <vsc40000>, which is your VSC username that you have received by e-mail after your request was approved.

          4. In the category Connection > SSH > Auth, in the field Private key file for authentication click on Browse and select the private key (i.e., \"id_rsa.ppk\") that you generated and saved above.

          5. In the category Connection > SSH > X11, click the Enable X11 Forwarding checkbox.

          6. Now go back to <Session>, and fill in \"hpcugent\" in the Saved Sessions field and press Save to store the session information.

          7. Now, pressing Open will open a terminal window and ask for your passphrase.

          8. If this is your first time connecting, you will be asked to verify the authenticity of the login node. Please see section\u00a0Warning message when first connecting to new host on how to do this.

          9. After entering your correct passphrase, you will be connected to the login-node of the HPC.

          10. To check you can now \"Print the Working Directory\" (pwd) and check the name of the computer, where you have logged in (hostname):

            $ pwd\n/user/home/gent/vsc400/vsc40000\n$ hostname -f\ngligar07.gastly.os\n
          11. For future PuTTY sessions, just select your saved session (i.e. \"hpcugent\") from the list, Load it and press Open.

          Congratulations, you're on the HPC infrastructure now! To find out where you have landed, you can print the current working directory:

          $ pwd\n/user/home/gent/vsc400/vsc40000\n

          Your new private home directory is \"/user/home/gent/vsc400/vsc40000\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the HPC.

          $ cd /apps/gent/tutorials\n$ ls\nIntro-HPC/\n

          This directory currently contains all training material for the Introduction to the HPC. More relevant training material to work with the HPC can always be added later in this directory.

          You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands:

          As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

          $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

          This directory contains:

          1. This HPC Tutorial (in either a Mac, Linux or Windows version).

          2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

          cd examples\n

          Tip

          Typing cd ex followed by Tab (the Tab key) will complete it to the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

          Tip

          For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

          The first action is to copy the contents of the HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          Go to your home directory, check your own private examples directory, ...\u00a0and start working.

          cd\nls -l\n

          Upon connecting you will see a login message containing your last login time stamp and a basic overview of the current cluster utilisation.

          Last login: Thu Mar 18 13:15:09 2021 from gligarha02.gastly.os\n\n STEVIN HPC-UGent infrastructure status on Mon, 19 Feb 2024 10:00:01\n      cluster         - full - free -  part - total - running - queued\n                        nodes  nodes   free   nodes   jobs      jobs\n -------------------------------------------------------------------------\n           skitty          39      0     26      68      1839     5588\n           joltik           6      0      1      10        29       18\n            doduo          22      0     75     128      1397    11933\n         accelgor           4      3      2       9        18        1\n          donphan           0      0     16      16        16       13\n          gallade           2      0      5      16        19      136\n\n\nFor a full view of the current loads and queues see:\nhttps://hpc.ugent.be/clusterstate/\nUpdates on current system status and planned maintenance can be found on https://www.ugent.be/hpc/en/infrastructure/status\n

          You can exit the connection at any time by entering:

          $ exit\nlogout\nConnection to login.hpc.ugent.be closed.\n

          tip: Setting your Language right

          You may encounter a warning message similar to the following one when connecting:

          perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
          or any other error message complaining about the locale.

          This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

          LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

          A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.

          Note

          If you try to set a non-supported locale, then it will be automatically set to the default. Currently the default is en_US.UTF-8 or en_US, depending on whether your original (non-supported) locale was UTF-8 or not.
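
          For example, on a Linux client you could explicitly set a supported UTF-8 locale in your local shell before connecting (assuming en_US.UTF-8 is installed on your machine):

          export LANG=en_US.UTF-8\nexport LC_ALL=en_US.UTF-8\n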

          "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

          Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back.

          "}, {"location": "connecting/#winscp", "title": "WinSCP", "text": "

          To transfer files to and from the cluster, we recommend the use of WinSCP, a graphical file management tool which can transfer files using secure protocols such as SFTP and SCP. WinSCP is freely available from http://www.winscp.net.

          To transfer your files using WinSCP,

          1. Open the program

          2. The Login menu is shown automatically (if it is closed, click New Session to open it again). Fill in the necessary fields under Session

            1. Click New Site.

            2. Enter \"login.hpc.ugent.be\" in the Host name field.

            3. Enter your \"vsc-account\" in the User name field.

            4. Select SCP as the file protocol.

            5. Note that the password field remains empty.

            1. Click Advanced....

            2. Click SSH > Authentication.

            3. Select your private key in the field Private key file.

          3. Press the Save button, to save the session under Session > Sites for future access.

          4. Finally, when clicking on Login, you will be asked for your key passphrase.

          The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

          Make sure the fingerprint in the alert matches one of the following:

          - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

          If it does, press Yes, if it doesn't, please contact hpc@ugent.be.

          Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

          Now, try out whether you can transfer an arbitrary file from your local machine to the HPC and back.

          "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

          See the section on rsync in chapter 5 of the Linux intro manual.

          "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

          It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

          For instance, if you want to switch to the login node named gligar07.gastly.os, you can use the following command while you are connected to the gligar08.gastly.os login node on the HPC:

          ssh gligar07.gastly.os\n
          This is also possible the other way around.

          If you want to find out which login host you are connected to, you can use the hostname command.

          $ hostname\ngligar07.gastly.os\n$ ssh gligar08.gastly.os\n\n$ hostname\ngligar08.gastly.os\n

          Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or on other online sources); a minimal tmux sketch follows the links below:

          • screen
          • tmux
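
          As a minimal tmux sketch (the session name is just an example):

          tmux new -s mysession      # start a new named session on the login node\n# ... work, then detach with Ctrl-b d, or simply lose the connection ...\ntmux ls                    # after reconnecting, list existing sessions\ntmux attach -t mysession   # re-attach to your session\n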
          "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

          It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high availability setup, users should add their cron scripts on the same login node to avoid any duplication of cron job scripts.

          To create a new cron script, first log in to an HPC-UGent login node as usual with your VSC user account (see section Connecting).

          Check whether any cron script is already set on the current login node with:

          crontab -l\n

          At this point you can add or edit (with the vi editor) any cron script by running the command:

          crontab -e\n
          "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
           15 5 * * * ~/runscript.sh >& ~/job.out\n

          where runscript.sh has these lines in this example:

          runscript.sh
          #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

          In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.
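
          For reference, the five scheduling fields of a crontab entry are minute, hour, day of month, month and day of week, followed by the command; the example above thus reads:

          # m  h  dom mon dow  command\n  15 5  *   *   *    ~/runscript.sh >& ~/job.out\n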

          Please note that you should log in to the same login node to edit your previously created crontab tasks. If that is not the case, you can always jump from one login node to another with:

          ssh gligar07    # or gligar08\n
          "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

          You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

          EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

          "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

          For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

          • applying custom patches to the software that only you or your group are using

          • evaluating new software versions prior to requesting a central software installation

          • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

          "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

          Before you use EasyBuild, you need to configure it:

          "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

          This is where EasyBuild can find software sources:

          export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
          • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

          • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

          "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

          This is the directory in which EasyBuild will build software. To have good performance, this needs to be on a fast filesystem.

          export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

          On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
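
          For example (only do this inside a job on a compute node, since /dev/shm uses RAM):

          export EASYBUILD_BUILDPATH=/dev/shm/$USER\n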

          "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

          This is where EasyBuild will install the software (and accompanying modules) to.

          For example, to let it use $VSC_DATA/easybuild, use:

          export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

          Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

          Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

          To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.

          "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

          Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception; for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

          module load EasyBuild\n
          "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

          EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

          $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

          For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

          eb example-1.2.1-foss-2024a.eb --robot\n
          "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

          To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

          To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

          eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

          To try to install example v1.2.5 with a different compiler toolchain:

          eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
          "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

          To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

          "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

          To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

          module use $EASYBUILD_INSTALLPATH/modules/all\n

          It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or you want to load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux
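
          A sketch of such a block in your .bashrc, combining the configuration from the previous sections:

          # EasyBuild configuration (sketch)\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n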

          "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

          As HPC system administrators, we often observe that the HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

          Users often tend to run their jobs without specifying specific PBS job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can increase the run time of your application, and it can also block HPC resources for other users.

          Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

          There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

          Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

          Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

          This chapter shows you how to measure:

          1. Walltime
          2. Memory usage
          3. CPU usage
          4. Disk (storage) needs
          5. Network bottlenecks

          First, we allocate a compute node and move to our relevant directory:

          qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

          One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

          The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

          Test the time command:

          $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

          It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

          It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

          The walltime can be specified in a job script as:

          #PBS -l walltime=3:00:00:00\n

          or on the command line

          qsub -l walltime=3:00:00:00\n

          It is recommended to always specify the walltime for a job.

          "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

          In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

          "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

          The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the options \"-m\" to see the results expressed in Mega-Bytes and the \"-t\" option to get totals.

          $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

          It is important to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

          It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

          On the UGent clusters, there is no swap space available for jobs, you can only use physical memory, even though \"free\" will show swap.

          "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

          To monitor the memory consumption of a running application, you can use the \"top\" or the \"htop\" command.

          top

          provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

          htop

          is similar to top, but shows the CPU-utilisation for all the CPUs in the machine and allows to scroll the list vertically and horizontally to see all processes and their full command lines.
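
          For example, to restrict the listing to your own processes (an optional but convenient flag of top):

          top -u $USER\n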

          "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

          Once you gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

          The maximum amount of physical memory used by the job per node can be specified in a job script as:

          #PBS -l mem=4gb\n

          or on the command line

          qsub -l mem=4gb\n
          "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

          Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are decently specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

          "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

          The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

          The /proc/cpuinfo file stores info about your CPU architecture, like the number of CPUs, threads, cores, information about CPU caches, CPU family, model and much more. So, if you want to detect how many cores are available on a specific machine:

          $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

          Or if you want to see it in a more readable format, execute:

          $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
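
          If you only need a core count, you can also count them directly; note that nproc reports the processing units available to your process, which inside a job may be limited to the cores you requested:

          nproc\ngrep -c processor /proc/cpuinfo\n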

          Note

          Unless you want information of the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

          In order to specify the number of nodes and the number of processors per node in your job script, use:

          #PBS -l nodes=N:ppn=M\n

          or with equivalent parameters on the command line

          qsub -l nodes=N:ppn=M\n

          This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

          You can also use this statement in your job script:

          #PBS -l nodes=N:ppn=all\n

          to request all cores of a node, or

          #PBS -l nodes=N:ppn=half\n

          to request half of them.

          Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

          This could also be monitored with the htop command:

          htop\n
          Example output:
            1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

          The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the CPU utilisation per processor with top and htop.

          If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by htop found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the HPC) would appreciate it if you used the CPU resources that are assigned to you to the fullest and made sure that no CPUs in your node are left unutilised without reason.

          But how can you maximise?

          1. Configure your software. (e.g., to exactly use the available amount of processors in a node)
          2. Develop your parallel program in a smart way.
          3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
          4. Correct your request for CPUs in your job script.
          "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

          On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

          The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

          The load averages differ from CPU percentage in two significant ways:

          1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
          2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
          "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

          What is the \"optimal load\" rule of thumb?

          The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load shall be between 0.7 and 1.0 per processor.

          In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

          Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time, might be more than one per processor.

          The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

          1. When you are running computational intensive applications, one application per processor will generate the optimal load.
          2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

          The optimal number of applications on a machine could be empirically calculated by performing a number of stress tests, whilst checking the highest throughput. There is, however, currently no way on the HPC to dynamically specify the maximum number of applications that shall run per core. The HPC scheduler will not launch more than one process per core.

          How the cores are spread out over CPUs does not matter as far as the load is concerned: two quad-cores perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. It's all eight cores for these purposes.

          "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

          The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

          The uptime command will show us the average load

          $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

          Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

          $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
          You can also read it in the htop command.

          "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

          It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the HPC) would appreciate it if you used the CPU resources that are assigned to you to the fullest and made sure that no CPUs in your node are left unutilised without reason.

          But how can you maximise?

          1. Profile your software to improve its performance.
          2. Configure your software (e.g., to exactly use the available amount of processors in a node).
          3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
          4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
          5. Correct your request for CPUs in your job script.

          And then check again.

          "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

          Some programs generate intermediate or output files, the size of which may also be a useful metric.

          Remember that your available disk space on the HPC online storage is limited, and that you have environment variables available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

          It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to section How much disk space do I get? on Quotas to check your quota and tools to find which files consumed the \"quota\".
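
          For example, to get a quick overview of how much space your directories take up (standard du options):

          du -sh $VSC_SCRATCH/*\ndu -h --max-depth=1 $VSC_DATA\n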

          Several actions can be taken, to avoid storage problems:

          1. Be aware of all the files that are generated by your program. Also check out the hidden files.
          2. Check your quota consumption regularly.
          3. Clean up your files regularly.
          4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, you can move your files once to the VSC_DATA directories.
          5. Make sure your programs clean up their temporary files after execution.
          6. Move your output results to your own computer regularly.
          7. Anyone can request more disk space from the HPC staff, but you will have to duly justify your request.
          "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

          Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is mostly an indication that they lose a lot of time with inter-process communication.

          Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend to request nodes with an \"InfiniBand\" network. The InfiniBand is a specialised high bandwidth, low latency network that enables large parallel jobs to run as efficiently as possible.

          The parameter to add in your job script would be:

          #PBS -l ib\n

          If for some other reasons, a user is fine with the gigabit Ethernet network, he can specify:

          #PBS -l gbe\n
          "}, {"location": "getting_started/", "title": "Getting Started", "text": "

          Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the HPC-UGent infrastructure and submitting your very first job. We'll also walk you through the process step by step using a practical example.

          In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

          Before proceeding, read the introduction to HPC to gain an understanding of the HPC-UGent infrastructure and related terminology.

          "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

          To get access to the HPC-UGent infrastructure, visit Getting an HPC Account.

          If you have not used Linux before, now would be a good time to follow our Linux Tutorial.

          "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
          1. Connect to the login nodes
          2. Transfer your files to the HPC-UGent infrastructure
          3. Optional: compile your code and test it
          4. Create a job script and submit your job
          5. Wait for job to be executed
          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

          "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

          There are two options to connect:

          • Using a terminal to connect via SSH (for power users) (see First Time connection to the HPC-UGent infrastructure)
          • Using the web portal

          If you are using Windows, it is recommended to use the web portal.

          The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

          See shell access when using the web portal, or connection to the HPC-UGent infrastructure when using a terminal.

          Make sure you can get shell access to the HPC-UGent infrastructure before proceeding with the next steps.

          Info

          If you run into problems, see the connection issues section on the troubleshooting page.

          "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

          Now that you can log in, it is time to transfer files from your local computer to your home directory on the HPC-UGent infrastructure.

          Download the tensorflow_mnist.py and run.sh example scripts to your computer (from here).

          The HPC-UGent web portal provides a file browser that allows uploading files. For more information see the file browser section.

          Upload both files (run.sh and tensorflow_mnist.py) to your home directory and go back to your shell.

          Info

          As an alternative, you can use WinSCP (see our section)

          When running ls in your session on the HPC-UGent infrastructure, you should see the two files listed in your home directory (~):

          $ ls ~\nrun.sh tensorflow_mnist.py\n

          If you do not see these files, make sure you uploaded them to your home directory.

          "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

          Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

          A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

          Our job script looks like this:

          run.sh

          #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
          As you can see, this job script will run the Python script named tensorflow_mnist.py.

          The jobs you submit are by default executed on cluster/doduo; you can swap to another cluster by issuing the following command.

          module swap cluster/donphan\n

          Tip

          When submitting jobs with a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

          To get a list of all clusters and their hardware, see https://www.ugent.be/hpc/en/infrastructure.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

          $ qsub run.sh\n123456\n

          This command returns a job identifier (123456) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.
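
          For example, the job ID can be passed to the standard Torque commands to inspect or cancel this specific job (a small sketch; replace 123456 with your own job ID):

          $ qstat 123456\n$ qdel 123456\n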

          Make sure you understand what the module command does

          Note that the module commands only modify environment variables. For instance, running module swap cluster/donphan will update your shell environment so that qsub submits a job to the donphan cluster, but your active shell session is still running on the login node.

          It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are executed: they will still run on the login node you are on.

          When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like donphan).

          For detailed information about module commands, read the running batch jobs chapter.

          "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

          Your job is put into a queue before being executed, so it may take a while before it actually starts (see when will my job start? for the scheduling policy).

          You can get an overview of the active jobs using the qstat command:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:00  Q donphan\n

          Eventually, after entering qstat again you should see that your job has started running:

          $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:01  R donphan\n

          If you don't see your job in the output of the qstat command anymore, your job has likely completed.

          Read this section on how to interpret the output.

          "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

          When your job finishes it generates 2 output files:

          • One for normal output messages (stdout output channel).
          • One for warning and error messages (stderr output channel).

          By default, these are located in the directory where you issued qsub.

          Info

          For more information about the stdout and stderr output channels, see this section.

          In our example when running ls in the current directory you should see 2 new files:

          • run.sh.o123456, containing normal output messages produced by job 123456;
          • run.sh.e123456, containing errors and warnings produced by job 123456.

          Info

          run.sh.e123456 should be empty (no errors or warnings).

          Use your own job ID

          Replace 123456 with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.
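
          For instance, a simple wildcard lists both output files of this job script at once (the job ID shown is the one from our example):

          $ ls run.sh.*\nrun.sh.e123456  run.sh.o123456\n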

          When examining the contents of run.sh.o123456 you will see something like this:

          Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

          Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

          Warning

          When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

          For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

          "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
          • Running interactive jobs
          • Running jobs with input/output data
          • Multi core jobs/Parallel Computing
          • Interactive and debug cluster

          For more examples see Program examples and Job script examples

          "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

          module swap cluster/joltik\n

          To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

          module swap cluster/accelgor\n

          Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

          "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

          To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).
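
          For example, a minimal sketch of requesting an interactive session with a single GPU (the number of cores and the walltime are just illustrative values):

          $ module swap cluster/joltik\n$ qsub -I -l nodes=1:ppn=8:gpus=1 -l walltime=2:00:00\n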

          Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@ugent.be.

          "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

          See https://www.ugent.be/hpc/en/infrastructure.

          "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

          There are 2 main ways to ask for GPUs as part of a job:

          • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z notation is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want to run with full control or in multi-node cases like MPI jobs. If you do not specify the number of GPUs and just use -l gpus, you get 1 GPU by default. Both notations are illustrated in the example after this list.

          • As a resource of its own, via --gpus X. In this case, however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.
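
          To make the first option concrete, both notations below request a single GPU on a single node and give the same result (the ppn value is just an example):

          ## as a node property\n#PBS -l nodes=1:ppn=8:gpus=1\n\n## as a separate resource request (single node, default number of cores per GPU)\n#PBS -l gpus=1\n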

          Some background:

          • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

          • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

          "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

          Some important attention points:

          • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

          • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

          • For parallel work, we are working on a wurker wrapper in the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e. it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

          • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

          "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

          Use module avail to check for centrally installed software.

          The subsections below only cover a couple of installed software packages, more are available.

          "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

          Please consult module avail GROMACS for a list of installed versions.

          "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

          Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

          Please consult module avail Horovod for a list of installed versions.

          Horovod supports TensorFlow, Keras, PyTorch and MXNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; we are not sure whether it handles placement and other aspects correctly.)
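
          As a rough sketch of what such a job script could look like (the Horovod module version, the GPU count and train.py are placeholders for your own choices, not a verified recipe):

          #!/bin/bash\n#PBS -l nodes=2:ppn=all:gpus=4\n#PBS -l walltime=12:0:0\n\n## replace -version- with a version listed by 'module avail Horovod'\nmodule load Horovod/-version-\nmodule load vsc-mympirun\n\ncd $PBS_O_WORKDIR\nmypmirun python train.py\n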

          At least for simple TensorFlow benchmarks, it looks like Horovod is a bit faster than the usual autodetect multi-GPU TensorFlow without Horovod, but it comes at the cost of the code modifications needed to use Horovod.

          "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

          Please consult module avail PyTorch for a list of installed versions.

          "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

          Please consult module avail TensorFlow for a list of installed versions.

          Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

          "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
          #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
          "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

          Please consult module avail AlphaFold for a list of installed versions.

          For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

          "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

          In case of questions or problems, please contact the HPC-UGent team via hpc@ugent.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

          "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

          The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

          This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

          Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor) jobs on this cluster should normally start more or less immediately. The tradeoff is that performance must not be an issue for the submitted jobs. This means that typical workloads for this cluster should be limited to:

          • Interactive jobs (see chapter\u00a0Running interactive jobs)

          • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

          • Jobs requiring few resources

          • Debugging programs

          • Testing and debugging job scripts

          "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

          To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

          module swap cluster/donphan\n

          Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

          "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

          Some limits are in place for this cluster:

          • each user may have at most 5 jobs in the queue (both running and waiting to run);

          • at most 3 jobs per user can be running at the same time;

          • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

          In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.
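
          For example, a donphan job that stays well within these per-user limits could request its resources like this (the values are purely illustrative):

          #PBS -l nodes=1:ppn=4\n#PBS -l mem=8gb\n#PBS -l walltime=1:00:00\n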

          Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

          "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

          Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

          All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

          "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

          \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

          While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

          A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

          The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

          Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

          Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

          "}, {"location": "introduction/#what-is-the-hpc-ugent-infrastructure", "title": "What is the HPC-UGent infrastructure?", "text": "

          The HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

          The HPC-UGent infrastructure relies on parallel-processing technology to offer UGent researchers an extremely fast solution for all their data processing needs.

          The HPC currently consists of:

          a set of different compute clusters. For an up to date list of all clusters and their hardware, see https://vscdocumentation.readthedocs.io/en/latest/gent/tier2_hardware.html.

          Job management and job scheduling are performed by Slurm with a Torque frontend. We advise users to adhere to Torque commands mentioned in this document.

          "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

          The HPC infrastructure is not a magic computer that automatically:

          1. runs your PC-applications much faster for bigger problems;

          2. develops your applications;

          3. solves your bugs;

          4. does your thinking;

          5. ...

          6. allows you to play games even faster.

          The HPC does not replace your desktop computer.

          "}, {"location": "introduction/#is-the-hpc-a-solution-for-my-computational-needs", "title": "Is the HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

          Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

          It is also possible to run programs that require user interaction (pushing buttons, entering input data, etc.) on the HPC. Although technically possible, the use of the HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the HPC staff can unveil whether the HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

          "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

          In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

          Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

          "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

          Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

          Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

          The two parallel programming paradigms most used in HPC are:

          • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

          • MPI for distributed memory systems (multiprocessing): on multiple nodes

          Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

          "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

          Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

          It is perfectly possible to also run purely sequential programs on the HPC.

          Running your sequential programs on the most modern and fastest computers in the HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.
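
          For example, a simple parameter sweep can be set up by submitting the same job script several times, each time with a different value passed in through an environment variable (sweep_job.sh and PARAM are hypothetical names; qsub -v passes environment variables on to the job):

          for value in 0.1 0.5 1.0; do\n    qsub -v PARAM=${value} sweep_job.sh\ndone\n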

          "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

          You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

          For the most common programming languages, a compiler is available on RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). Supported and common programming languages on the HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

          Supported and commonly used compilers are GCC and Intel.

          Additional software can be installed \"on demand\". Please contact the HPC staff to see whether the HPC can handle your specific requirements.

          "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

          All nodes in the HPC cluster run under RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty), which is a specific version of Red Hat Enterprise Linux. This means that all programs (executables) should be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

          Users can connect from any computer in the UGent network to the HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the HPC.

          A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

          "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

          A typical workflow looks like:

          1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

          2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

          3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

          4. Create a job script and submit your job (see Running batch jobs)

          5. Get some coffee and be patient:

            1. Your job gets into the queue

            2. Your job gets executed

            3. Your job finishes

          6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

          "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

          When you think that the HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting an HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications, which will help you to transfer and run your programs on the HPC cluster.

          Do not hesitate to contact the HPC staff for any help.

          1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

          "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

          This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

          • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

          • -m/-M: the -m option will send emails to your email address registered with VSC. Only if you want emails at some other address should you use the -M option.

          • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

          • To use a situational parameter, remove one '#' at the beginning of the line.

          simple_jobscript.sh
          #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submmitted\n\n[commands]\n
          "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

          Here's an example of a single-core job script:

          single_core.sh
          #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
          1. Using #PBS header lines, we specify the resource requirements for the job; see Appendix B for a list of these options.

          2. A module for Python 3.6 is loaded, see also section Modules.

          3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

          4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

          5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a file with a unique name in $VSC_DATA. For a list of possible storage locations, see subsection Pre-defined user directories.

          "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

          Here's an example of a multi-core job script that uses mympirun:

          multi_core.sh
          #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

          An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

          "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

          If you want to run a job but are not sure it will finish before the walltime runs out, and you want to copy data back before that happens, you have to stop the main command before the walltime runs out and copy the data back.

          This can be done with the timeout command. This command sets a limit of time a program can run for, and when this limit is exceeded, it kills the program. Here's an example job script using timeout:

          timeout.sh
          #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minute,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

          The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

          example_program.sh
          #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
          "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

          A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plain text. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code execution with text and visual outputs makes it a useful tool for data analysis, machine learning and educational purposes.

          "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

          Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

          After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

          When the job hosting your Jupyter notebook starts running, the status will first change to Starting:

          and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

          This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

          "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

          A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is finding the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

          To find the appropriate modules, it is recommended to use the shell within the web portal, under Clusters > >_login Shell Access.

          We can see all available versions of the SciPy-bundle module by using module avail SciPy-bundle:

          $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

          Not all modules will work for every notebook; we need to use the one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

          Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that that module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

          $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

          The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

          It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
          This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

          If we use a different SciPy-bundle module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (for more info on these errors, see here).

          $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

          Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.

          "}, {"location": "known_issues/", "title": "Known issues", "text": "

          This page provides details on a couple of known problems, and the workarounds that are available for them.

          If you have any questions related to these issues, please contact the HPC-UGent team.

          • Operation not permitted error for MPI applications
          "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

          When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

          Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

          This error means that an internal problem has occurred in OpenMPI.

          "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

          This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

          It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

          "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

          We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

          "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

          A workaround has been implemented in mympirun (version 5.4.0).

          Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

          module load vsc-mympirun\n

          and launch your MPI application using the mympirun command.
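
          For example, after loading the module as shown above (mpi_program is a placeholder for your own MPI executable):

          mympirun ./mpi_program\n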

          For more information, see the mympirun documentation.

          "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

          If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

          export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
          "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

          We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

          "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

          There are two important motivations to engage in parallel programming.

          1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

          2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

          On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that you have, in principle, the ability to split up your computations into groups and run each group on its own core.

          There are multiple ways to achieve parallel programming. The list below gives a (non-exhaustive) overview of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

          • Raw threads (pthreads, boost::threading, ...) -- available language bindings: threading libraries exist for all common programming languages. Limitations: threads are limited to shared memory systems; they are more often used on single-node systems than for HPC. Thread management is hard.

          • OpenMP -- available language bindings: Fortran/C/C++. Limitations: limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelised by simple insertion of compiler directives. Under the hood, threads are used. Hybrid approaches exist which use OpenMP to parallelise the workload on each node and MPI (see below) for communication between nodes.

          • Lightweight threads with clever scheduling (Intel TBB, Intel Cilk Plus) -- available language bindings: C/C++. Limitations: limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler, enabling the programmer to focus on the parallelisation itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes.

          • MPI -- available language bindings: Fortran/C/C++, Python. Limitations: applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling the communication.

          • Global Arrays library -- available language bindings: C/C++, Python. Limitations: mimics a global address space on distributed memory systems by distributing arrays over many nodes and using one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

          Tip

          You can request more nodes/cores by adding following line to your run script.

          #PBS -l nodes=2:ppn=10\n
          This queues a job that claims 2 nodes and 10 cores.

          Warning

          Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

          Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

          The advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

          Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

          Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

          Go to the example directory:

          cd ~/examples/Multi-core-jobs-Parallel-Computing\n

          Note

          If the example directory is not yet present, copy it to your home directory:

          cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          Study the example first:

          T_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 1;\n}\n

          And compile it (whilst including the thread library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Now, run it on the cluster and check the output:

          $ qsub T_hello.pbs\n123456\n$ more T_hello.pbs.o123456\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

          Tip

          If you plan engaging in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

          OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

          An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

          Here is the general code structure of an OpenMP program:

          #include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

          "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

          By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

          "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

          Parallelising for loops is really simple (see code below). By default, loop iteration counters in OpenMP loop constructs (in this case the i variable) in the for loop are set to private variables.

          omp1.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

          And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

          $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

          Now run it in the cluster and check the result again.

          $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
          "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

          Using OpenMP you can specify something called a \"critical\" section of code. This is code that is performed by all threads, but is only performed one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, and you don't have to worry about things like other threads writing to that global variable at the same time (a collision).

          omp2.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          Compile it (with OpenMP support enabled via the -fopenmp flag) and run and test it on the login node:

          $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

          Now run it on the cluster and check the result again.

          $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

          Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). We already used this paradigm in the code example above, where the \"critical\" directive was used to accomplish it. The reduction pattern is so common that OpenMP has a specific reduction clause that allows you to implement it more easily.

          omp3.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

          Compile it (with OpenMP support enabled via the -fopenmp flag) and run and test it on the login node:

          $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

          Now run it on the cluster and check the result again.

          $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
          "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

          There are a host of other directives you can issue using OpenMP.

          Some other clauses of interest are listed below; a short illustrative sketch combining a few of them follows the list:

          1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

          2. nowait: threads will not wait until everybody is finished

          3. schedule(type, chunk) allows you to specify how tasks are spawned out to threads in a for loop. There are three types of scheduling you can specify

          4. if: allows you to parallelise only if a certain condition is met

          5. ...\u00a0and a host of others
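
          Below is a minimal, illustrative sketch (not part of the original examples; the file name omp_clauses.c is hypothetical) that combines the schedule, nowait and barrier clauses mentioned in the list above:

          omp_clauses.c
          /*\n * Illustrative sketch (not from the tutorial): combines schedule, nowait and barrier.\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(void)\n{\nint i;\n\n#pragma omp parallel\n{\n// iterations are handed out to threads dynamically, in chunks of 10\n#pragma omp for schedule(dynamic, 10) nowait\nfor (i=0; i<100; ++i)\n{\n// some work per iteration ...\n}\n\n// because of 'nowait', threads do not wait at the end of the loop;\n// the explicit barrier below synchronises the whole team again\n#pragma omp barrier\n\n// only one thread prints the summary line\n#pragma omp single\nprintf(\"All %d threads passed the barrier.\\n\", omp_get_num_threads());\n}\nreturn 0;\n}\n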

          Tip

          If you plan to engage in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming, by Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation. 2005.

          "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

          The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

          In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

          The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

          The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where the processes can run on cores within a single CPU, on multiple CPUs within a single machine, or even across multiple machines (as long as they are networked together).

          One context where MPI particularly shines is its ability to easily take advantage not just of multiple cores on a single machine, but also of clusters of several machines. Even if you don't have a dedicated cluster, you could still use MPI to run your program in parallel across any collection of computers, as long as they are networked together.

          Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

          Study the MPI program and the PBS file:

          mpi_hello.c
          /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
          mpi_hello.pbs
          #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

          and compile it:

          $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

          mpiicc is a wrapper around the Intel C compiler icc to compile MPI programs (see the chapter on compilation for details).

          Run the parallel program:

          $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc40000 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc40000 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc40000    0 Sep 16 14:22 mpi_hello.o123456\n-rw------- 1 vsc40000  697 Sep 16 14:22 mpi_hello.o123456\n-rw-r--r-- 1 vsc40000  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o123456\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

          The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different executables to be started in the same MPI job. Each process has its own rank, knows the total number of processes in the world, and has the ability to communicate with the other processes, either with point-to-point (send/receive) communication or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

          MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or to N, where N is the total number of processors available, or to something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it does not need to be recompiled for each size variation, although runtime decisions might still vary depending on the absolute amount of concurrency available.

          Tip

          mpirun does not always do the optimal core pinning and requires a few extra arguments to be the most efficient possible on a given system. At Ghent we have a wrapper around mpirun called mympirun. See the Mympirun chapter for more information.

          You will generally just start an MPI program on the cluster by using mympirun instead of mpirun -n <nr of cores> <--other settings> <--other optimisations>.

          Tip

          If you plan to engage in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

          "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

          A frequently occurring characteristic of scientific computation is its focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

          Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or with (ii) different input files.

          These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. Users want to run their job once for each instance of the parameter values.

          One option would be to launch a lot of separate small jobs (one per parameter instance) on the cluster, but this is not a good idea: the cluster scheduler isn't meant to deal with huge numbers of small jobs. Such a flood of small jobs creates a lot of overhead and can slow down the whole cluster. It is better to bundle those jobs into larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

          The \"Worker framework\" has been developed to address this issue.

          It can handle many small jobs determined by:

          parameter variations

          i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

          job arrays

          i.e., each individual job gets a unique numeric identifier.

          Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

          However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

          "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/par_sweep\n

          Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

          $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

          For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

          par_sweep/weather
          #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

          A job script that would run this as a job for the first parameters (p01) would then look like:

          par_sweep/weather_p01.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

          When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

          To submit the job, the user would use:

           $ qsub weather_p01.pbs\n
          However, the user wants to run this program for many parameter instances, e.g., they want to run the program on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, exported from an RDBMS, or just written by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

          $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

          It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.

          In order to make our PBS file generic, it can be modified as follows:

          par_sweep/weather.pbs
          #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

          Note that:

          1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

          2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

          3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

          The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., just over 3 hours; we request 4 hours to be on the safe side.

          The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

          $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n123456\n

          Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

          Warning

          When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

          module swap env/slurm/donphan\n

          instead of

          module swap cluster/donphan\n
          We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

          "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

          First go to the right directory:

          cd ~/examples/Multi-job-submission/job_array\n

          As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

          The following bash script would submit these jobs all one by one:

          #!/bin/bash\nfor i in \`seq 1 100\`; do\nqsub -o output$i -i input$i myprog.pbs\ndone\n

          This, as mentioned before, would put a heavy burden on the job scheduler.

          Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

          Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

          The details are

          1. a job is submitted for each number in the range;

          2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

          3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

          The job could have been submitted using:

          qsub -t 1-100 my_prog.pbs\n

          The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

          To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

          A typical job script for use with job arrays would look like this:

          job_array/job_array.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

          Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

          $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file #99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

          For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory, in files output_1.dat, output_2.dat, ..., output_100.dat.

          job_array/test_set
          #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

          Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

          job_array/test_set.pbs
          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

          Note that

          1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

          2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

          The job is now submitted as follows:

          $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n123456\n

          The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

          Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

          $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n123456  test_set.pbs  vsc40000          0 Q\n

          You can now check the generated output files:

          $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
          "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

          Often, an embarrassingly parallel computation can be abstracted to three simple steps:

          1. a preparation phase in which the data is split up into smaller, more manageable chunks;

          2. on these chunks, the same algorithm is applied independently (these are the work items); and

          3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

          The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

          cd ~/examples/Multi-job-submission/map_reduce\n

          The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

          First study the scripts:

          map_reduce/pre.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
          map_reduce/post.sh
          #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

          Then one can submit a MapReduce style job as follows:

          $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n123456\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

          Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

          "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

          The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

          The \"Worker Framework\" will be effective when

          1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

          2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

          "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

          Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log123456, assuming the job's ID is 123456. To keep an eye on the progress, one can use:

          tail -f run.pbs.log123456\n

          Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

          watch -n 60 wsummarize run.pbs.log123456\n

          This will summarise the log file every 60 seconds.

          "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

          Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 weather -t $temperature  -p $pressure  -v $volume\n

          Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
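
          As a minimal sketch of that idea (the extra duration column and its values are hypothetical; they are not part of the example files), the \"data.csv\" file could simply gain a fourth column:

          temperature, pressure, volume, duration\n293, 1.0e5, 107, 00:10:00\n294, 1.0e5, 106, 00:20:00\n...\n

          and the corresponding line in the job script would then read that column like any other parameter:

          timedrun -t $duration weather -t $temperature  -p $pressure  -v $volume\n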

          Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

          "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

          Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"123456\".

          wresume -jobid 123456\n

          This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

          wresume -l walltime=1:30:00 -jobid 123456\n

          Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate at all, either successfully or with a failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

          wresume -jobid 123456 -retry\n

          By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

          "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

          This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

          $ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
          "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

          When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

          To check for the available versions of worker, use the following command:

          $ module avail worker\n
          1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

          "}, {"location": "mympirun/", "title": "Mympirun", "text": "

          mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

          In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

          "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

          Before using mympirun, we first need to load its module:

          module load vsc-mympirun\n

          As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

          The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

          For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

          "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

          There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the sourcecode of mpi_hello is available in the vsc-mympirun repository).

          By default, mympirun starts one process per core on every node you were assigned. So if you were assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

          "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

          This is the most commonly used option for controlling the number of processes.

          The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

          $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
          "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

          There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses double the number of processes it normally would; and --multi, which does the same as --double but takes a multiplier (instead of the implied factor 2 with --double).
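
          For illustration, these options could be used roughly as follows (the exact syntax is best checked in the README referenced below):

          mympirun --universe 16 ./mpi_hello   # (assumed syntax) start exactly 16 processes in total\nmympirun --double ./mpi_hello        # (assumed syntax) start 2 processes per core\nmympirun --multi 3 ./mpi_hello       # (assumed syntax) start 3 processes per core\n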

          See vsc-mympirun README for a detailed explanation of these options.

          "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

          You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

          $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
          "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

          In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC HPC infrastructure.

          "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

          There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

          • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

            • see also http://openfoam.com/history/
          • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

            • see also https://openfoam.org/download/history/
          • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

          Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

          "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

          The best practices outlined here focus specifically on the use of OpenFOAM on the VSC HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

          • OpenFOAM websites:

            • https://openfoam.com

            • https://openfoam.org

            • http://wikki.gridcore.se/foam-extend

          • OpenFOAM user guides:

            • https://www.openfoam.com/documentation/user-guide

            • https://cfd.direct/openfoam/user-guide/

          • OpenFOAM C++ source code guide: https://cpp.openfoam.org

          • tutorials: https://wiki.openfoam.com/Tutorials

          • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

          Other useful OpenFOAM documentation:

          • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

          • http://www.dicat.unige.it/guerrero/openfoam.html

          "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

          To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

          "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

          First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

          $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

          To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

          To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

          module load OpenFOAM/11-foss-2023a\n
          "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

          OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

          source $FOAM_BASH\n
          "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

          If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

          source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

          Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.

          "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

          If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

          unset FOAM_SIGFPE\n

          Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise terminate the simulation. It does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are occurring.

          As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

          "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

          The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

          • generate the mesh;

          • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

          After running the simulation, some post-processing steps are typically performed:

          • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

          • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

          Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job that runs the actual simulation (on the HPC infrastructure or elsewhere), or as part of the job that runs the OpenFOAM simulation itself.

          Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

          One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

          For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

          "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

          For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

          "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

          When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.
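
          For example, with the interFoam solver that is also used in the example job script near the end of this chapter, a parallel run through mympirun looks like the first line below; the second line would instead run the same serial simulation once per core:

          mympirun interFoam -parallel   # correct: one simulation spread over N cores\nmympirun interFoam             # wrong: N independent serial runs\n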

          You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.

          "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

          It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

          See Basic usage for how to get started with mympirun.

          To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

          export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

          Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).

          "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

          To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

          Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

          number of processor directories = 4 is not equal to the number of processors = 16\n

          In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

          • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

          • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

          See Controlling number of processes to control the number of processes mympirun will start; starting fewer processes than there are cores available is interesting if you require more memory per core than is available by default.

          Note that the decomposition method being used (which is specified in system/decomposeParDict) has a significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
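
          As an illustration (only a fragment, with hypothetical values; the usual FoamFile header is omitted), a system/decomposeParDict prepared for a 16-core run could contain:

          // fragment of system/decomposeParDict (illustrative values)\nnumberOfSubdomains 16;\n\nmethod          scotch;\n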

          To visualise the processor domains, use the following command:

          mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

          and then load the VTK files generated in the VTK folder into ParaView.

          "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

          OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

          Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict); a short controlDict fragment illustrating these settings is shown after the list.

          • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc. keywords;

          • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

          • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

          • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid that OpenFOAM re-reads each of the system/*Dict files at every time step;

          • if the results per individual time step are large, consider setting writeCompression to true;
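
          A minimal sketch of a system/controlDict fragment applying these guidelines (all values are purely illustrative):

          // fragment of system/controlDict (illustrative values)\nwriteControl      timeStep;\nwriteInterval     100;     // write results only every 100 time steps\npurgeWrite        2;       // keep only the two most recent time directories\nrunTimeModifiable false;   // do not re-read the system/*Dict files every time step\nwriteCompression  true;    // compress the written results\n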

          For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

          For large parallel OpenFOAM simulations on the UGent Tier-2 clusters, consider using the alternative shared scratch filesystem $VSC_SCRATCH_ARCANINE (see Pre-defined user directories).

          These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple dozen processor cores.

          "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

          See https://cfd.direct/openfoam/user-guide/compiling-applications/.

          "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

          Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

          OpenFOAM_damBreak.sh
          #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not on available victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
          "}, {"location": "program_examples/", "title": "Program examples", "text": "

          If you have not done so already, copy our examples to your home directory by running the following command:

           cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          The ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

          Go to our examples:

          cd ~/examples/Program-examples\n

          Here, we have just put together a number of examples for your convenience. We made an effort to put comments inside the source files, so the source code files are (should be) self-explanatory.

          1. 01_Python

          2. 02_C_C++

          3. 03_Matlab

          4. 04_MPI_C

          5. 05a_OMP_C

          6. 05b_OMP_FORTRAN

          7. 06_NWChem

          8. 07_Wien2k

          9. 08_Gaussian

          10. 09_Fortran

          11. 10_PQS

          The two OMP directories above (05a_OMP_C and 05b_OMP_FORTRAN) contain the following examples:

          C file / Fortran file: Description

          • omp_hello.c / omp_hello.f: Hello world
          • omp_workshare1.c / omp_workshare1.f: Loop work-sharing
          • omp_workshare2.c / omp_workshare2.f: Sections work-sharing
          • omp_reduction.c / omp_reduction.f: Combined parallel loop reduction
          • omp_orphan.c / omp_orphan.f: Orphaned parallel loop reduction
          • omp_mm.c / omp_mm.f: Matrix multiply
          • omp_getEnvInfo.c / omp_getEnvInfo.f: Get and print environment information
          • omp_bug* / omp_bug*: Programs with bugs and their solution

          Compile by any of the following commands:

          C:

          icc -openmp omp_hello.c -o hello\npgcc -mp omp_hello.c -o hello\ngcc -fopenmp omp_hello.c -o hello\n

          Fortran:

          ifort -openmp omp_hello.f -o hello\npgf90 -mp omp_hello.f -o hello\ngfortran -fopenmp omp_hello.f -o hello\n

          Feel free to explore the examples.

          "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

          Remember to substitute the usernames, login nodes, file names, ... with your own.

          Login

          • Login: ssh vsc40000@login.hpc.ugent.be
          • Where am I?: hostname
          • Copy to HPC: scp foo.txt vsc40000@login.hpc.ugent.be:
          • Copy from HPC: scp vsc40000@login.hpc.ugent.be:foo.txt
          • Setup ftp session: sftp vsc40000@login.hpc.ugent.be

          Modules

          • List all available modules: module avail
          • List loaded modules: module list
          • Load module: module load example
          • Unload module: module unload example
          • Unload all modules: module purge
          • Help on use of module: module help

          Jobs (command: description)

          • qsub script.pbs: Submit job with job script script.pbs
          • qstat 12345: Status of job with ID 12345
          • qstat -n 12345: Show compute node of job with ID 12345
          • qdel 12345: Delete job with ID 12345
          • qstat: Status of all your jobs
          • qstat -na: Detailed status of your jobs + a list of nodes they are running on
          • qsub -I: Submit interactive job

          Disk quota

          • Check your disk quota: see https://account.vscentrum.be
          • Disk usage in current directory (.): du -h

          Worker Framework

          • Load worker module: module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/)
          • Submit parameter sweep: wsub -batch weather.pbs -data data.csv
          • Submit job array: wsub -t 1-100 -batch test_set.pbs
          • Submit job array with prolog and epilog: wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

          Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

          "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

          Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

          This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

          It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

          "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

          As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

          For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

          $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

          Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

          When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. Once they are no longer needed, the RHEL 8 login nodes will be shut down.

          "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

          To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

          This includes (per user):

          • max. of 2 CPU cores in use
          • max. 8 GB of memory in use

          For more intensive tasks you can use the interactive and debug clusters through the web portal.

          "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

          The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

          However, there will be an impact on the availability of software that is made available via modules.

          Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

          This includes all software installations on top of a compiler toolchain that is older than:

          • GCC(core)/12.3.0
          • foss/2023a
          • intel/2023a
          • gompi/2023a
          • iimpi/2023a
          • gfbf/2023a

          (or another toolchain with a year-based version older than 2023a)

          The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

          foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

          If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

          It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as its operating system, to make sure it still works. We will soon provide more RHEL 9 nodes on other clusters to test on.
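
          As a minimal sketch (the job script name myjob.pbs is just a placeholder), you could log in to a RHEL 9 login node with ssh login9 and then run:

          module swap cluster/shinx\nmodule avail    # check that the modules you need are still available\nqsub myjob.pbs\n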

          "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

          We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

          cluster migration start migration completed on skitty Monday 30 September 2024 joltik October 2024 accelgor November 2024 gallade December 2024 donphan February 2025 doduo (default cluster) February 2025 login nodes switch February 2025

          Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to the RHEL 9 login nodes will be done at the same time.

          We will keep this page up to date when more specific dates have been planned.

          Warning

          This planning is subject to change; some clusters may get migrated later than originally planned.

          Please check back regularly.

          "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

          If you have any questions related to the migration to the RHEL 9 operating system, please contact the HPC-UGent team.

          "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

          In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

          When you connect to the HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decide when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly, and this is only allowed on nodes where you have a job running. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the HPC the entire time.

          The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

          "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

          Software installation and maintenance on an HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the HPC that can easily activate or deactivate the software packages that you require for your program execution.

          "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

          The program environment on the HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

          All the software packages that are installed on the HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

          "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

          In order to administer the active software and their environment variables, the module system has been developed, which:

          1. Activates or deactivates software packages and their dependencies.

          2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

          3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

          4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

          5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

          This is all managed with the module command, which is explained in the next sections.

          There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

          "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

          A large number of software packages are installed on the HPC clusters. A list of all currently available software can be obtained by typing:

          module available\n

          It's also possible to execute module av or module avail, these are shorter to type and will do the same thing.

          This will give some output such as:

          module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

          You can also check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the HPC:

          module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

          This gives a full list of software packages that can be loaded.

          The casing of module names is important: lowercase and uppercase letters matter in module names.

          "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

          The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

          Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, an MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

          E.g., foss/2024a is the first version of the foss toolchain in 2024.

          The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

          "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

          To \"activate\" a software package, you load the corresponding module file using the module load command:

          module load example\n

          This will load the most recent version of example.

          For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

          However, you should specify a particular version to avoid surprises when newer versions are installed:

          module load secondexample/2.7-intel-2016b\n

          The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

          Modules need not be loaded one by one; the two module load commands can be combined as follows:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

          "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

          Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

          $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          You can also just use the ml command without arguments to list loaded modules.

          It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

          "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

          To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

          $ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

          To unload the secondexample module, you can also use ml -secondexample.

          Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

          "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

          In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

          module purge\n
          This is always safe: the cluster module (the module that specifies which cluster jobs will get submitted to) will not be unloaded (because it's a so-called \"sticky\" module).

          "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

          Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

          Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

          module load example\n

          rather than

          module load example/1.2.3\n

          Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

          Consider the following example modules:

          $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

          Let's now generate a version conflict with the example module, and see what happens.

          $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

          Note: A module swap command combines the appropriate module unload and module load commands.

          "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

          With the module spider command, you can search for modules:

          $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

          It's also possible to get detailed information about a specific module:

          $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \n\n    You will need to load all module(s) on any one of the lines below before the \"example/1.2.3\" module is available to load.\n\n        cluster/accelgor\n        cluster/doduo \n        cluster/donphan\n        cluster/gallade\n        cluster/joltik \n        cluster/skitty\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
          "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

          To get a list of all possible commands, type:

          module help\n

          Or to get more information about one specific module package:

          $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
          "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

          If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

          In each module command shown below, you can replace module with ml.

          First, load all modules you want to include in the collection:

          module load example/1.2.3 secondexample/2.7-intel-2016b\n

          Now store it in a collection using module save. In this example, the collection is named my-collection.

          module save my-collection\n

          Later, for example in a jobscript or a new session, you can load all these modules with module restore:

          module restore my-collection\n

          You can get a list of all your saved collections with the module savelist command:

          $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

          To get a list of all modules a collection will load, you can use the module describe command:

          $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

          To remove a collection, remove the corresponding file in $HOME/.lmod.d:

          rm $HOME/.lmod.d/my-collection\n
          "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

          To see how a module would change the environment, you can use the module show command:

          $ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets youwork more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

          It's also possible to use the ml show command instead: they are equivalent.

          Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

          You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

          If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

          "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

          To check the general system state, check https://www.ugent.be/hpc/en/infrastructure/status. This has information about scheduled downtime, status of the system, ...

          "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

          You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

          You can also get this information in text form (per cluster separately) with the pbsmon command:

          $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

          pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

          "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

          Usually, you will want to have your program running in batch mode, as opposed to interactively, as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

          As an example, we will run a Perl script, which you will find in the examples subdirectory on the HPC. When you received an account on the HPC, a subdirectory with examples was automatically generated for you.

          Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

          cd\ncp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

          First go to the directory with the first examples by entering the command:

          cd ~/examples/Running-batch-jobs\n

          Each time you want to execute a program on the HPC you'll need 2 things:

          The executable: the program to execute, together with its peripheral input files, databases and/or command options.

          A batch job script, which will define the computer resource requirements of the program and the required additional software packages, and which will start the actual executable. The HPC needs to know:

          1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

          Later on, you will have to define (or adapt) your own job scripts; a sketch of a typical job script header is shown below. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.
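
          As a sketch, a job script header covering the items above could look like the following; the resource values, job name, file names and program are purely illustrative:

          #!/bin/bash\n#PBS -N my_job                # job name (illustrative)\n#PBS -l nodes=1:ppn=1         # number of nodes and cores per node\n#PBS -l mem=2gb               # amount of memory\n#PBS -l walltime=1:00:00      # expected duration (wall time)\n#PBS -o my_job.out            # file for stdout (optional, a default name is used otherwise)\n#PBS -e my_job.err            # file for stderr (optional, a default name is used otherwise)\ncd $PBS_O_WORKDIR\n./my_program                  # the executable to start, with its arguments\n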

          List and check the contents with:

          $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc40000 609 Sep 11 10:25 fibo.pl\n

          In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

          1. The Perl script calculates the first 30 Fibonacci numbers.

          2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

          We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

          On the command line, you would run this using:

          $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

          Remark: Recall that you have now executed the Perl script locally on one of the login-nodes of the HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login-nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute-node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

          The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

          fibo.pbs
          #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

          This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

          $ qsub fibo.pbs\n123456\n

          The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"123456 \"); this is a unique identifier for the job and can be used to monitor and manage your job.

          Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

          To facilitate this, you can use a pre-defined module collection, which you can restore using module restore; see the section on Save and load collections of modules for more information.
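
          A minimal sketch of a job script that relies on such a collection (my-collection is the collection saved in the section mentioned above):

          #!/bin/bash\n#PBS -l walltime=1:00:00\n#PBS -l nodes=1:ppn=1\nmodule purge\nmodule restore my-collection\ncd $PBS_O_WORKDIR\n./my_program    # hypothetical executable\n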

          Your job is now waiting in the queue for a free workernode to start on.

          Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

          After your job was started, and ended, check the contents of the directory:

          $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc40000 vsc40000   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc40000 vsc40000    0 Feb 28 13:33 fibo.pbs.e123456\n-rw------- 1 vsc40000 vsc40000 1010 Feb 28 13:33 fibo.pbs.o123456\n-rwxrwxr-x 1 vsc40000 vsc40000  302 Feb 28 13:32 fibo.pl\n

          Explore the contents of the 2 new files:

          $ more fibo.pbs.o123456\n$ more fibo.pbs.e123456\n

          These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('123456' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script)

          "}, {"location": "running_batch_jobs/#when-will-my-job-start", "title": "When will my job start?", "text": "

          In practice it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires, and new jobs may be submitted by other users that are assigned a higher priority than your job(s).

          The HPC-UGent infrastructure clusters use a fair-share scheduling policy (see HPC Policies). There is no guarantee on when a job will start, since it depends on a number of factors. One of these factors is the priority of the job, which is determined by:

          • Historical use: the aim is to balance usage over users, so infrequent (in terms of total compute time used) users get a higher priority

          • Requested resources (amount of cores, walltime, memory, ...). The more resources you request, the more likely it is the job(s) will have to wait for a while until those resources become available.

          • Time waiting in queue: queued jobs get a higher priority over time.

          • User limits: this avoids having a single user use the entire cluster. This means that each user can only use a part of the cluster.

          • Whether or not you are a member of a Virtual Organisation (VO).

            Each VO gets assigned a fair share target, which has a big impact on the job priority. This is done to let the job scheduler balance usage across different research groups.

            If you are not a member of a specific VO, you are sharing a fair share target with all other users who are not in a specific VO (which implies being in the (hidden) default VO). This can have a (strong) negative impact on the priority of your jobs compared to the jobs of users who are in a specific VO.

            See Virtual Organisations for more information on how to join a VO, or request the creation of a new VO if there is none yet for your research group.

          Some other factors are how busy the cluster is, how many workernodes are active, the resources (e.g., number of cores, memory) provided by each workernode, ...

          It might be beneficial to request less resources (e.g., not requesting all cores in a workernode), since the scheduler often finds a \"gap\" to fit the job into more easily.

          Sometimes it happens that a couple of nodes are free while your job still does not start. Empty nodes are not necessarily available for your job(s). Just imagine that an N-node job (with a higher priority than your waiting job(s)) has to run. It is quite unlikely that N nodes become empty at the same moment to accommodate this job, so while fewer than N nodes are empty, they are kept free for that job even though they appear idle to you. The moment the Nth node becomes empty, the waiting N-node job will consume these N free nodes.

          "}, {"location": "running_batch_jobs/#specifying-the-cluster-on-which-to-run", "title": "Specifying the cluster on which to run", "text": "

          To use other clusters, you can swap the cluster module. This is a special module that changes which modules are available to you, and which cluster your jobs will be queued in.

          By default you are working on doduo. To switch to, e.g., donphan you need to redefine the environment so you get access to all modules installed on the donphan cluster, and to be able to submit jobs to the donphan scheduler so your jobs will start on donphan instead of the default doduo cluster.

          module swap cluster/donphan\n

          Note: the donphan modules may not work directly on the login nodes, because the login nodes do not have the same architecture as the donphan cluster. They do have the same architecture as the doduo cluster, which is why software works on the login nodes by default. See the section on Running software that is incompatible with host for more background and how to fix this.

          To list the available cluster modules, you can use the module avail cluster/ command:

          $ module avail cluster/\n--------------------------------------- /etc/modulefiles/vsc ----------------------------------------\n   cluster/accelgor (S)    cluster/doduo   (S,L)    cluster/gallade (S)    cluster/skitty  (S)\n   cluster/default         cluster/donphan (S)      cluster/joltik  (S)\n\n  Where:\n   S:  Module is Sticky, requires --force to unload or purge\n   L:  Module is loaded\n   D:  Default Module\n\nIf you need software that is not listed, \nrequest it via https://www.ugent.be/hpc/en/support/software-installation-request\n

          As indicated in the output above, each cluster module is a so-called sticky module, i.e., it will not be unloaded when module purge (see the section on purging modules) is used.

          The output of the various commands that interact with jobs (qsub, qstat, ...) depends on which cluster module is loaded.

          "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

          It is possible to submit jobs from within a job to a cluster different from the one the job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker; see also here.

          To submit jobs to the donphan cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/donphan instead of using module swap cluster/donphan. The latter command also activates the software modules that are installed specifically for donphan, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the donphan cluster. The same approach can be used to submit jobs to another cluster, of course.

          Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the doduo cluster, loading the cluster/doduo module corresponds to loading 3 different env/ modules:

          env/ module for doduo Purpose env/slurm/doduo Changes $SLURM_CLUSTERS which specifies the cluster where jobs are sent to. env/software/doduo Changes $MODULEPATH, which controls what software modules are available for loading. env/vsc/doduo Changes the set of $VSC_ environment variables that are specific to the doduo cluster

          We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they are doing, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

          We also recommend using a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
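
          Putting this together, a sketch of submitting a job to donphan without switching the software stack, and resetting the environment afterwards (the job script name is a placeholder):

          # send jobs to donphan, but keep using the current software stack\nmodule swap env/slurm/donphan\nqsub myjob.pbs\n# reset the environment to a sane state afterwards\nmodule swap cluster/doduo\n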

          "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

          Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

          qstat 12345\n

          To show on which compute nodes your job is running, at least, when it is running:

          qstat -n 12345\n

          To remove a job from the queue so that it will not run, or to stop a job that is already running:

          qdel 12345\n

          When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

          $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n123456 ....     mpi  vsc40000     0    Q short\n

          Here:

          Job ID the job's unique identifier

          Name the name of the job

          User the user that owns the job

          Time Use the elapsed walltime for the job

          Queue the queue the job is in

          The state S can be any of the following:

          State Meaning Q The job is queued and is waiting to start. R The job is currently running. E The job is currently exiting after having run. C The job is completed after having run. H The job has a user or system hold on it and will not be eligible to run until the hold is removed.

          User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.

          "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

          There is currently (since May 2019) no way to get an overall view of the state of the cluster queues for the HPC-UGent infrastructure, due to changes to the cluster resource management software (and also because a general overview is mostly meaningless since it doesn't give any indication of the resources requested by the queued jobs).

          "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

          Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

          It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

          "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

          The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

          qsub -l walltime=2:30:00 ...\n

          For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

          The maximum walltime for HPC-UGent clusters is 72 hours.

          If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
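
          A minimal sketch of this pattern using the standard timeout command (the margin, file names and destination are illustrative; see the section mentioned above for more details):

          #!/bin/bash\n#PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# stop the main program 15 minutes (900 seconds) before the walltime runs out\ntimeout 8100 ./my_program\n# use the remaining time to copy results back\ncp -r results $VSC_DATA/    # hypothetical results directory and destination\n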

          qsub -l mem=4gb ...\n

          The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

          The default memory reserved for a job on any given HPC-UGent cluster is the \"usable memory per node\" divided by the \"number of cores in a node\", multiplied by the number of requested processor cores (ppn). Jobs that do not define the memory, either as a command line option or as a memory directive in the job script, will get this default amount. Please note that using the default memory is recommended. For the \"usable memory per node\" and \"number of cores in a node\" values, please consult https://www.ugent.be/hpc/en/infrastructure.
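
          As a worked example using the node type from the pbsmon output shown earlier (36 cores and roughly 751 GB of usable memory per node; check the infrastructure page for the actual values of the cluster you use):

          # default memory per core: 751 GB / 36 cores = about 20.9 GB\nqsub -l nodes=1:ppn=2 ...    # gets about 2 x 20.9 GB = 41.7 GB of memory by default\n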

          qsub -l nodes=5:ppn=2 ...\n

          The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

          qsub -l nodes=1:westmere\n

          The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

          These options can either be specified on the command line, e.g.

          qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

          or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

          #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

          Note that the resources requested on the command line will override those specified in the PBS file.
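
          For example, submitting the modified fibo.pbs script as follows would override the mem=2gb directive in the script and request 4 GB of memory instead:

          qsub -l mem=4gb fibo.pbs\n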

          "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

          At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

          When you navigate to that directory and list its contents, you should see them:

          $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc40000  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc40000   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc40000   52 Sep 11 11:03 fibo.pbs.e123456\n-rw------- 1 vsc40000 1307 Sep 11 11:03 fibo.pbs.o123456\n

          In our case, our job has created both an output file ('fibo.pbs.o123456') and an error file ('fibo.pbs.e123456'), containing info written to stdout and stderr respectively.

          Inspect the generated output and error files:

          $ cat fibo.pbs.o123456\n...\n$ cat fibo.pbs.e123456\n...\n
          "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

          You can instruct the HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

          #PBS -m b \n#PBS -m e \n#PBS -m a\n

          or

          #PBS -m abe\n

          These options can also be specified on the command line. Try it and see what happens:

          qsub -m abe fibo.pbs\n

          The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

          qsub -m b -M john.smith@example.com fibo.pbs\n

          will send an e-mail to john.smith@example.com when the job begins.

          "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

          If you submit two jobs expecting them to run one after another (for example because the first generates a file the second needs), there might be a problem, as they might both be run at the same time.

          So the following example might go wrong:

          $ qsub job1.sh\n$ qsub job2.sh\n

          You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

          afterok means \"After OK\", or in other words, after the first job successfully completed.

          It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
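
          For example, a sketch of running a (hypothetical) cleanup job after the first job has finished, regardless of whether it succeeded:

          $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterany:$FIRST_ID cleanup.sh\n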

          1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

          "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

          Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

          Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line instead.

          Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the HPC-UGent infrastructure. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

          The syntax for qsub for submitting an interactive PBS job is:

          $ qsub -I <... pbs directives ...>\n
          "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

          Tip

          Find the code in \"~/examples/Running_interactive_jobs\"

          First of all, in order to know on which computer you're working, enter:

          $ hostname -f\ngligar07.gastly.os\n

          This means that you're now working on the login node gligar07.gastly.os of the cluster.

          The most basic way to start an interactive job is the following:

          $ qsub -I\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n

          There are two things of note here.

          1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

          2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

          In order to know on which compute-node you're working, enter again:

          $ hostname -f\nnode3501.doduo.gent.vsc\n

          Note that we are now working on the compute-node called \"node3501.doduo.gent.vsc\". This is the compute node, which was assigned to us by the scheduler after issuing the \"qsub -I\" command.

          Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

          $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

          You can exit the interactive session with:

          $ exit\n

          Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

          You can work for 3 hours by:

          qsub -I -l walltime=03:00:00\n

          If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.

          "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

          To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

          The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

          "}, {"location": "running_interactive_jobs/#install-xming", "title": "Install Xming", "text": "

          The first task is to install the Xming software.

          1. Download the Xming installer from the following address: http://www.straightrunning.com/XmingNotes/. Either download Xming from the Public Domain Releases (free) or from the Website Releases (after a donation) on the website.

          2. Run the Xming setup program on your Windows desktop.

          3. Keep the proposed default folders for the Xming installation.

          4. When selecting the components that need to be installed, make sure to select \"XLaunch wizard\" and \"Normal PuTTY Link SSH client\".

          5. We suggest creating a Desktop icon for Xming and XLaunch.

          6. And Install.

          And now we can run Xming:

          1. Select XLaunch from the Start Menu or by double-clicking the Desktop icon.

          2. Select Multiple Windows. This will open each application in a separate window.

          3. Select Start no client to make XLaunch wait for other programs (such as PuTTY).

          4. Select Clipboard to share the clipboard.

          5. Finally Save configuration into a file. You can keep the default filename and save it in your Xming installation directory.

          6. Now Xming is running in the background ... and you can launch a graphical application in your PuTTY terminal.

          7. Open a PuTTY terminal and connect to the HPC.

          8. In order to test the X-server, run \"xclock\". \"xclock\" is the standard GUI clock for the X Window System.

          xclock\n

          You should see the XWindow clock application appearing on your Windows machine. The \"xclock\" application runs on the login-node of the HPC, but is displayed on your Windows machine.

          You can close your clock and connect further to a compute node with again your X-forwarding enabled:

          $ qsub -I -X\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n$ hostname -f\nnode3501.doduo.gent.vsc\n$ xclock\n

          and you should see your clock again.

          "}, {"location": "running_interactive_jobs/#ssh-tunnel", "title": "SSH Tunnel", "text": "

          In order to work in client/server mode, it is often required to establish an SSH tunnel between your Windows desktop machine and the compute node your job is running on. PuTTY must have been installed on your computer, and you should be able to connect via SSH to the HPC cluster's login node.

          Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunnelling.

          There are several cases where this is useful:

          1. Running graphical applications on the cluster: The graphical program cannot directly communicate with the X Window server on your local system. In this case, the tunnelling is easy to set up as PuTTY will do it for you if you select the right options on the X11 settings page as explained on the page about text-mode access using PuTTY.

          2. Running a server application on the cluster that a client on the desktop connects to. One example of this scenario is ParaView in remote visualisation mode, with the interactive client on the desktop and the data processing and image rendering on the cluster. This scenario is explained on this page.

          3. Running clients on the cluster and a server on your desktop. In this case, the source port is a port on the cluster and the destination port is on the desktop.

          Procedure: A tunnel from a local client to a specific computer node on the cluster

          1. Log in on the login node via PuTTY.

          2. Start the server job, note the name of the compute node your job is running on (e.g., node3501.doduo.gent.vsc), as well as the port the server is listening on (e.g., \"54321\").

          3. Set up the tunnel:

            1. Close your current PuTTY session.

            2. In the \"Category\" pane, expand Connection>SSh, and select as show below:

            3. In the Source port field, enter the local port to use (e.g., 5555).

            4. In the Destination field, enter the compute node's name and the port, separated by a colon (e.g., node3501.doduo.gent.vsc:54321 as in the example above; these are the details you noted in the second step).

            5. Click the Add button.

            6. Click the Open button.

            7. The tunnel is now ready to use.
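
              If you connect from Linux or macOS with OpenSSH instead of PuTTY, an equivalent tunnel can be set up from the command line. This is only a sketch: <login node> is a placeholder for the login node you normally connect to, and the node name and ports are the example values from the steps above.

              # forward local port 5555 to port 54321 on the compute node, via the login node\nssh -L 5555:node3501.doduo.gent.vsc:54321 <login node>\n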

              "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

              We have developed a little interactive program that shows communication in two directions: it sends information to your local screen, but also asks you to click a button.

              Now run the message program:

              cd ~/examples/Running_interactive_jobs\n./message.py\n

              You should see the following message appearing.

              Click any button and see what happens.

              -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
              "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

              You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files, where your standard output and error messages will go, and where you can collect your results.

              "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

              First go to the directory:

              cd ~/examples/Running_jobs_with_input_output_data\n

              Note

              If the example directory is not yet present, copy it to your home directory:

              cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

              List and check the contents with:

              $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc40000   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc40000   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file3.py\n

              Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

              file1.py
              #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

              The code of the Python script is self-explanatory:

              1. In step 1, we write something to the file Hello.txt in the current directory.

              2. In step 2, we write some text to stdout.

              3. In step 3, we write to stderr.

              Check the contents of the first job script:

              file1a.pbs
              #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

              You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

              Submit it:

              qsub file1a.pbs\n

              After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

              $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc40000   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc40000  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc40000  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc40000   91 Sep 13 13:13 file1a.pbs.e123456\n-rw------- 1 vsc40000  105 Sep 13 13:13 file1a.pbs.o123456\n-rw-rw-r-- 1 vsc40000  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc40000  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file3.py*\n

              Some observations:

              1. The file Hello.txt was created in the current directory.

              2. The file file1a.pbs.o123456 contains all the text that was written to the standard output stream (\"stdout\").

              3. The file file1a.pbs.e123456 contains all the text that was written to the standard error stream (\"stderr\").

              Inspect their contents ...\u00a0and remove the files

              $ cat Hello.txt\n$ cat file1a.pbs.o123456\n$ cat file1a.pbs.e123456\n$ rm Hello.txt file1a.pbs.o123456 file1a.pbs.e123456\n

              Tip

              Type cat H and press the Tab key, and it will expand into cat Hello.txt.

              "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

              Check the contents of the job script and execute it.

              file1b.pbs
              #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

              Inspect the contents again ...\u00a0and remove the generated files:

              $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e123456\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o123456\n$ rm Hello.txt my_serial_job.*\n

              Here, the option \"-N\" was used to explicitly assign a name to the job. This overrides the JOBNAME variable and results in a different name for the stdout and stderr files. This name is also shown in the second column of the output of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.

              "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

              You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

              file1c.pbs
              #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
              "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

              The HPC cluster offers their users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

              "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

              Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

              The following locations are available:

              Long-term storage (slow filesystem, intended for smaller files):

              • $VSC_HOME: for your configuration files and other small files; see the section on your home directory. The default directory is user/Gent/xxx/vsc40000. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites.

              • $VSC_DATA: a bigger \"workspace\", for datasets, results, logfiles, etc.; see the section on your data directory. The default directory is data/Gent/xxx/vsc40000. The same file system is accessible from all sites.

              Fast temporary storage:

              • $VSC_SCRATCH_NODE: for temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content.

              • $VSC_SCRATCH: for temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Gent/xxx/vsc40000. This directory is cluster- or site-specific: on different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content.

              • $VSC_SCRATCH_SITE: currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space.

              • $VSC_SCRATCH_GLOBAL: currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space.

              • $VSC_SCRATCH_CLUSTER: the scratch filesystem closest to the cluster.

              • $VSC_SCRATCH_ARCANINE: a separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads.

              Since these directories are not necessarily mounted at the same locations on all sites, you should always (try to) use the environment variables that have been created for them.
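
              For example, to see where these variables point to on the cluster you are currently logged in to, you can simply print them:

              echo $VSC_HOME\necho $VSC_DATA\necho $VSC_SCRATCH\n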

              We elaborate more on the specific function of these locations in the following sections.

              Note: $VSC_SCRATCH_KYUKON and $VSC_SCRATCH are the same directories (\"kyukon\" is the name of the storage cluster where the default shared scratch filesystem is hosted).

              For documentation about VO directories, see the section on VO directories.

              "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

              Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

              The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

              The operating system also creates a few files and folders here to manage your account. Examples are:

              • .ssh/: this directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!

              • .bash_profile: when you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt.

              • .bashrc: this script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts.

              • .bash_history: this file contains the commands you typed at your shell prompt, in case you need them again."}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

              In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

              The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

              If you are running out of quota on your $VSC_DATA filesystem, you can join an existing VO, or request a new VO. See the section about virtual organisations on how to do this.

              "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

              To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

              You should remove any data from these systems once your processing has finished. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain forever, and may change them if this seems necessary for the healthy operation of the cluster.
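
              As an illustration, a job script could copy its results from the scratch space to your data directory at the end of the job and clean up afterwards; the directory name results_of_my_job is just a hypothetical example:

              # at the end of a job script: copy results to $VSC_DATA and clean up the scratch space\ncp -r $VSC_SCRATCH/results_of_my_job $VSC_DATA/\nrm -rf $VSC_SCRATCH/results_of_my_job\n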

              Each type of scratch has its own use:

              Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

              Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes, and is therefore often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

              Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

              Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

              "}, {"location": "running_jobs_with_input_output_data/#your-ugent-home-drive-and-shares", "title": "Your UGent home drive and shares", "text": "

              In order to access data on your UGent share(s), you need to stage-in the data and stage-out afterwards. On the login nodes, it is possible to access your UGent home drive and shares. To allow this, you need a Kerberos ticket. This requires that you first authenticate yourself with your UGent username and password by running:

              $ kinit yourugentusername@UGENT.BE\nPassword for yourugentusername@UGENT.BE:\n

              Now you should be able to access your files running

              $ ls /UGent/yourugentusername\nhome shares www\n

              Please note the shares will only be mounted when you access this folder. You should specify your complete username - tab completion will not work.

              If you want to use the UGent shares for longer than 24 hours, you should request a ticket valid for up to a week by running

              kinit yourugentusername@UGENT.BE -r 7\n

              You can verify your authentication ticket and expiry dates yourself by running klist

              $ klist\n...\nValid starting     Expires            Service principal\n14/07/20 15:19:13  15/07/20 01:19:13  krbtgt/UGENT.BE@UGENT.BE\n    renew until 21/07/20 15:19:13\n

              Your ticket is valid for 10 hours, but you can renew it before it expires.

              To renew your tickets, simply run

              kinit -R\n

              If you want your ticket to be renewed automatically up to the maximum expiry date, you can run

              krenew -b -K 60\n

              Each hour the process will check if your ticket should be renewed.

              We strongly advise disabling access to your shares once you no longer need them:

              kdestroy\n

              If you get an error \"Unknown credential cache type while getting default ccache\" (or similar) and you use conda, then please deactivate conda before you use the commands in this chapter.

              conda deactivate\n
              "}, {"location": "running_jobs_with_input_output_data/#ugent-shares-with-globus", "title": "UGent shares with globus", "text": "

              In order to access your UGent home and shares inside the globus endpoint, you first have to generate authentication credentials on the endpoint. To do that, you have to ssh to the globus endpoint from a login node. You will be prompted for your UGent username and password to authenticate:

              $ ssh globus\nUGent username:ugentusername\nPassword for ugentusername@UGENT.BE:\nShares are available in globus endpoint at /UGent/ugentusername/\nOverview of valid tickets:\nTicket cache: KEYRING:persistent:xxxxxxx:xxxxxxx\nDefault principal: ugentusername@UGENT.BE\n\nValid starting     Expires            Service principal\n29/07/20 15:56:43  30/07/20 01:56:43  krbtgt/UGENT.BE@UGENT.BE\n    renew until 05/08/20 15:56:40\nTickets will be automatically renewed for 1 week\nConnection to globus01 closed.\n

              Your shares will then be available at /UGent/ugentusername/ under the globus VSC tier2 endpoint. Tickets will be renewed automatically for 1 week, after which you'll need to run this again. We advise disabling access to your shares within globus once access is no longer needed:

              $ ssh globus01 destroy\nSuccesfully destroyed session\n
              "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

              Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files as well as the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

              To see a list of your current quota, visit the VSC accountpage: https://account.vscentrum.be. VO moderators can see a list of VO quota usage per member of their VO via https://account.vscentrum.be/django/vo/.

              The rules are:

              1. You will only receive a warning when you have reached the soft limit of either quota.

              2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files and for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

              3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

              We do realise that quota are often perceived as a nuisance, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. They also help to guarantee a fair use of all available resources for all users, and to ensure that each folder is used for its intended purpose.

              "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

              Tip

              Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

              In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

              1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

              2. repeat this action 30,000 times;

              3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

              Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the HPC.

              $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
              "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

              Tip

              Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

              In this exercise, you will

              1. Generate the file \"primes_1.txt\" again as in the previous exercise;

              2. open the file;

              3. read it line by line;

              4. calculate the average of primes in the line;

              5. count the number of primes found per line;

              6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

              Check the Python and the PBS file, and submit the job:

              $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
              "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

              The available disk space on the HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website (https://vscdocumentation.readthedocs.io/en/latest/hardware.html). As explained in the section on predefined quota, this implies that there are also limits to:

              • the amount of disk space; and

              • the number of files

              that can be made available to each individual HPC user.

              The quota of disk space and number of files for each HPC user is:

              • HOME: max. 3 GB disk space, max. 20000 files
              • DATA: max. 25 GB disk space, max. 100000 files
              • SCRATCH: max. 25 GB disk space, max. 100000 files

              Tip

              The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.

              Tip

              If you obtained your VSC account via UGent, you can get (significantly) more storage quota in the DATA and SCRATCH volumes by joining a Virtual Organisation (VO), see the section on virtual organisations for more information. In case of questions, contact hpc@ugent.be.

              "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

              You can consult your current storage quota usage on the HPC-UGent infrastructure shared filesystems via the VSC accountpage, see the \"Usage\" section at https://account.vscentrum.be .

              VO moderators can inspect storage quota for all VO members via https://account.vscentrum.be/django/vo/.

              To check your storage usage on the local scratch filesystems on VSC sites other than UGent, you can use the \"show_quota\" command (when logged into the login nodes of that VSC site).

              Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

              $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632\n

              This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

              If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

              $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

              If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

              $ du -s\n5632 .\n$ du -s -h\n

              If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

              $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

              Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

              $ du -h --max-depth 1 $VSC_HOME\n22M /user/home/gent/vsc400/vsc40000/dataset01\n36M /user/home/gent/vsc400/vsc40000/dataset02\n22M /user/home/gent/vsc400/vsc40000/dataset03\n3.5M /user/home/gent/vsc400/vsc40000/primes.txt\n24M /user/home/gent/vsc400/vsc40000/.cache\n
              "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

              Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

              Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

              To change the group of a directory and its underlying directories and files, you can use:

              chgrp -R groupname directory\n
              "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
              1. Get the group name you want to belong to.

              2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

              3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

              "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
              1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

              2. Fill out the group name. This cannot contain spaces.

              3. Put a description of your group in the \"Info\" field.

              4. You will now be a member and moderator of your newly created group.

              "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

              Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

              "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

              You can get details about the current state of groups on the HPC infrastructure with the following command (where example is the name of the group we want to inspect):

              $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

              We can see that the group's ID number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

              "}, {"location": "running_jobs_with_input_output_data/#virtual-organisations", "title": "Virtual Organisations", "text": "

              A Virtual Organisation (VO) is a special type of group. You can only be a member of one single VO at a time (or not be in a VO at all). Being in a VO allows for larger storage quota to be obtained (but these requests should be well-motivated).

              "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-vo", "title": "Joining an existing VO", "text": "
              1. Get the VO id of the research group you belong to (this id is formed by the letters gvo, followed by 5 digits).

              2. Go to https://account.vscentrum.be/django/vo/join and fill in the section named \"Join VO\". You will be asked to fill in the VO id and a message for the moderator of the VO, where you identify yourself. This should look something like in the image below.

              3. After clicking the submit button, a message will be sent to the moderator of the VO, who will either approve or deny the request.

              "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-vo", "title": "Creating a new VO", "text": "
              1. Go to https://account.vscentrum.be/django/vo/new and scroll down to the section \"Request new VO\". This should look something like in the image below.

              2. Fill in why you want to request a VO.

              3. Fill out both the internal and public VO name. These cannot contain spaces, and should be 8-10 characters long. For example, genome25 is a valid VO name.

              4. Fill out the rest of the form and press submit. This will send a message to the HPC administrators, who will then either approve or deny the request.

              5. If the request is approved, you will now be a member and moderator of your newly created VO.

              "}, {"location": "running_jobs_with_input_output_data/#requesting-more-storage-space", "title": "Requesting more storage space", "text": "

              If you're a moderator of a VO, you can request additional quota for the VO and its members.

              1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Request additional quota\". See the image below to see how this looks.

              2. Fill out how much additional storage you want. In the screenshot below, we're asking for 500 GiB extra space for VSC_DATA, and for 1 TiB extra space on VSC_SCRATCH_KYUKON.

              3. Add a comment explaining why you need additional storage space and submit the form.

              4. An HPC administrator will review your request and approve or deny it.

              "}, {"location": "running_jobs_with_input_output_data/#setting-per-member-vo-quota", "title": "Setting per-member VO quota", "text": "

              VO moderators can tweak how much of the VO quota each member can use. By default, this is set to 50% for each user, but the moderator can change this: it is possible to give a particular user more than half of the VO quota (for example 80%), or significantly less (for example 10%).

              Note that the total percentage can be above 100%: the percentages the moderator allocates per user are the maximum percentages of storage users can use.

              1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Manage per-member quota share\". See the image below to see how this looks.

              2. Fill out how much percent of the space you want each user to be able to use. Note that the total can be above 100%. In the screenshot below, there are four users. Alice and Bob can use up to 50% of the space, Carl can use up to 75% of the space, and Dave can only use 10% of the space. So in total, 185% of the space has been assigned, but of course only 100% can actually be used.

              "}, {"location": "running_jobs_with_input_output_data/#vo-directories", "title": "VO directories", "text": "

              When you're a member of a VO, there will be some additional directories on each of the shared filesystems available:

              VO scratch ($VSC_SCRATCH_VO): A directory on the shared scratch filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_SCRATCH directory (see the section on your scratch space).

              VO data ($VSC_DATA_VO): A directory on the shared data filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_DATA directory (see the section on your data directory).

              If you put _USER after each of these variable names, you can see your personal folder in these filesystems. For example: $VSC_DATA_VO_USER is your personal folder in your VO data filesystem (this is equivalent to $VSC_DATA_VO/$USER), and analogous for $VSC_SCRATCH_VO_USER.
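
              For example, if you are in a VO, you can check where your personal VO folders are located and create a subdirectory in the VO scratch space; the directory name my_dataset is just a hypothetical example:

              echo $VSC_DATA_VO_USER\necho $VSC_SCRATCH_VO_USER\nmkdir -p $VSC_SCRATCH_VO_USER/my_dataset\n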

              "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

              A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

              "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

              This section will explain how to create, activate, use and deactivate Python virtual environments.

              "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

              A Python virtual environment can be created with the following command:

              python -m venv myenv      # Create a new virtual environment named 'myenv'\n

              This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

              Warning

              When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

              "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

              To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

              source myenv/bin/activate                    # Activate the virtual environment\n
              "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

              After activating the virtual environment, you can install additional Python packages with pip install:

              pip install example_package1\npip install example_package2\n

              These packages will be scoped to the virtual environment and will not affect the system-wide Python installation, and are only available when the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

              It is now possible to run Python scripts that use the installed packages in the virtual environment.

              Tip

              When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

              Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

              To check if a package is available as a module, use:

              module av package_name\n

              Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

              module show module_name\n

              to check which extensions are included in a module (if any).

              "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

              Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

              example.py
              import example_package1\nimport example_package2\n...\n
              python example.py\n
              "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

              When you are done using the virtual environment, you can deactivate it. To do that, run:

              deactivate\n
              "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

              You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

              pytorch_poutyne.py
              import torch\nimport poutyne\n\n...\n

              We load a PyTorch package as a module and install Poutyne in a virtual environment:

              module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

              While the virtual environment is activated, we can run the script without any issues:

              python pytorch_poutyne.py\n

              Deactivate the virtual environment when you are done:

              deactivate\n
              "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

              To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

              module swap cluster/donphan\nqsub -I\n

              After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

              Naming a virtual environment

              When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

              python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
              "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

              This section will combine the concepts discussed in the previous sections to:

              1. Create a virtual environment on a specific cluster.
              2. Combine packages installed in the virtual environment with modules.
              3. Submit a job script that uses the virtual environment.

              The example script that we will run is the following:

              pytorch_poutyne.py
              import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

              First, we create a virtual environment on the donphan cluster:

              module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

              Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

              jobscript.pbs
              #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

              Next, we submit the job script:

              qsub jobscript.pbs\n

              Two files will be created in the directory where the job was submitted: python_job_example.o123456 and python_job_example.e123456, where 123456 is the id of your job. The .o file contains the output of the job.

              "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

              Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

              For example, if we create a virtual environment on the skitty cluster,

              module swap cluster/skitty\nqsub -I\npython -m venv myenv\n

              return to the login node by pressing CTRL+D and try to use the virtual environment:

              $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

              we are presented with the Illegal instruction error.

              "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

              When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

              python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

              Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.

              "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

              There are two main reasons why this error could occur.

              1. You have not loaded the Python module that was used to create the virtual environment.
              2. You loaded or unloaded modules while the virtual environment was activated.
              "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

              If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

              The following commands illustrate this issue:

              $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

              Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

              module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
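
              As a quick check, you can see which base Python a virtual environment refers to by inspecting the pyvenv.cfg file in its top-level directory:

              cat myenv/pyvenv.cfg    # the 'home' line points to the base Python the venv was created with\n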
              "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

              You must not load or unload modules while inside a virtual environment. Loading and unloading modules modifies the $PATH variable in the current shell. When a virtual environment is activated, it stores the $PATH variable of the shell at that moment. If you load or unload modules while the virtual environment is active and then deactivate it, the $PATH variable will be reset to the state that was stored when the environment was activated. Trying to use those modules afterwards will lead to errors:

              $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

              The solution is to only modify modules when not in a virtual environment.

              "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

              Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

              One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

              For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

              This documentation only covers aspects of using Singularity on the infrastructure.

              "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

              Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to avoid that the use of Singularity impacts other users on the system.

              The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

              In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

              If these limitations are a problem for you, please let us know via .

              "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

              All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.
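
              For example, assuming you have an image called myimage.sif in your scratch space (a hypothetical name), you can check that these filesystems are indeed visible from within the container:

              singularity exec $VSC_SCRATCH/myimage.sif ls $VSC_DATA\n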

              "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

              Creating new Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can create new Singularity images or convert Docker images.

              When you create Singularity images or convert Docker images, some restrictions apply:

              • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination.
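
              As a sketch of how this could look (the definition file name mydefinition.def is a hypothetical example), you could build the image in /tmp and move it to your scratch space afterwards:

              singularity build --fakeroot /tmp/myimage.sif mydefinition.def\nmv /tmp/myimage.sif $VSC_SCRATCH/\n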
              "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

              For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

              We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

              "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

              Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

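              A copy command along these lines can be used; first list the directory to see the exact image file name, since <image file> is just a placeholder:

              ls /apps/gent/tutorials/Singularity/\ncp /apps/gent/tutorials/Singularity/<image file> $VSC_SCRATCH/\n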

              Create a job script like:
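
              A minimal sketch of such a job script, assuming the image you copied to $VSC_SCRATCH (placeholder <image file>) and the myscript.sh created in the next step, could be:

              #!/bin/bash\n\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\nsingularity exec $VSC_SCRATCH/<image file> ./myscript.sh\n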

              Create an example myscript.sh:

              #!/bin/bash\n\n# prime factors\nfactor 1234567\n

              "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

              We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself

              Copy the Tensorflow testing image from /apps/gent/tutorials to $VSC_SCRATCH, in the same way as in the previous section.


              You can download linear_regression.py from the official Tensorflow repository.
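
              Once the image and the script are in place, running the example could look like this sketch, where <tensorflow image> is a placeholder for the actual image file name:

              singularity exec $VSC_SCRATCH/<tensorflow image> python linear_regression.py\n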

              "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

              It is also possible to execute MPI jobs within a container, but the following requirements apply:

              • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

              • Use modules within the container (install the environment-modules or lmod package in your container)

              • Load the required module(s) before singularity execution.

              • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

              Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

              ::: prompt :::

              For example, to compile an MPI example:

              ::: prompt :::
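
              As a sketch (the image and source file names are hypothetical), compiling inside the container could look like:

              $ singularity exec ./Debian8_UGentMPI.img mpicc mpi_hello.c -o mpi_hello\n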

              Example MPI job script:
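
              A possible sketch of such a job script (module name, image path and program name are assumptions):

              #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=1:00:00\n\nmodule load intel\ncd $VSC_SCRATCH\nmpirun singularity exec ./Debian8_UGentMPI.img ./mpi_hello\n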

              "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

              The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

              As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

              In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

              In order to prepare things, make a teaching request by contacting the HPC-UGent team with the following information (explained further below):

              • Title and nickname
              • Start and end date for your course or training
              • VSC-ids of all teachers/trainers
              • Participants based on UGent Course Code and/or list of VSC-ids
              • Optional information
                • Additional storage requirements
                  • Shared folder
                  • Groups folder for collaboration
                  • Quota
                • Reservation for resource requirements beyond the interactive cluster
                • Ticket number for specific software needed for your course/training
                • Details for a custom Interactive Application in the webportal

              In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

              Please make these requests well in advance, several weeks before the start of your course/workshop.

              "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

              The title of the course or training can be used in e.g. reporting.

              The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

              When choosing the nickname, try to make it unique, but uniqueness is neither enforced nor checked.

              "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

              The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

              The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

              • Course group and subgroups will be deactivated
              • Residual data in the course directories will be archived or deleted
              • Custom Interactive Applications will be disabled
              "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

              A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also members of this group).

              This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members, e.g. they have read/write access to specific folders, can manage subgroups, etc.

              Provide us with a list of all the VSC-ids of the teachers or trainers to identify the moderators.

              "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

              How the list of students or participants is managed depends on whether this is a UGent course or a training/workshop.

              "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

              Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

              The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

              Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

              A course group will be automatically created for your course, with the VSC accounts of all registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

              "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

              (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as members. Teachers/trainers will be able to add/remove VSC accounts from this course group, but students will have to follow the procedure to request a VSC account themselves. There will be no automation.

              "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

              For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

              This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

              Every course directory will always contain the folders:

              • input
                • ideally suited to distribute input data such as common datasets
                • moderators have read/write access
                • group members (students) only have read access
              • members
                • this directory contains a personal folder for every student in your course (members/vsc<01234>)
                • only this specific VSC-id will have read/write access to this folder
                • moderators have read access to this folder
              "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

              Optionally, we can also create these folders:

              • shared
                • this is a folder for sharing files between any and all group members
                • all group members and moderators have read/write access
                • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
              • groups
                • a number of groups/group_<01> folders are created under the groups folder
                • these folders are suitable if you want to let your students collaborate closely in smaller groups
                • each of these group_<01> folders is owned by a dedicated group
                • teachers are automatically made moderators of these dedicated groups
                • moderators can populate these groups with the VSC-ids of group members in the VSC accountpage, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
                • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

              If you need any of these additional folders, indicate this under the Optional storage requirements of your teaching request:

              • shared: yes
              • subgroups: <number of (sub)groups>
              "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

              There are 4 quota settings that you can adjust in your teaching request in case the defaults are not sufficient:

              • overall quota (defaults 10 GB volume and 20k files) are for the moderators and can be used for e.g. the input folder.
              • member quota (defaults 5 GB volume and 10k files) are per student/participant

              The course data usage is not counted towards any other quota (like VO quota). It depends solely on these settings.

              "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

              The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date, it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

              "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

              We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

              Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

              Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

              Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

              "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

              In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

              We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

              Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

              "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

              HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

              A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

              If you would like this for your course, provide more details in your teaching request, including:

              • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

              • which cluster you want to use

              • how many nodes/cores/GPUs are needed

              • which software modules you are loading

              • custom code you are launching (e.g. autostart a GUI)

              • required environment variables that you are setting

              • ...

              We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

              A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

              "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

              Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Since Torque is not widely used anymore, the HPC-UGent infrastructure stopped using it in the backend in 2021 in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, to avoid forcing researchers to learn other commands to submit and manage jobs.

              "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

              Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but its user interface is different and in some ways less user-friendly than Torque/PBS.

              "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

              Jobcli is a Python library that was developed by the HPC-UGent team to make it possible for the HPC-UGent infrastructure to use a Torque frontend with a Slurm backend. In addition, it adds some extra options to the Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

              "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

              Adding --help to a Torque command when using it on the HPC-UGent infrastructure will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

              For example:

              $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

              "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

              Adding --dryrun to a Torque command when using it on the HPC-UGent infrastructure will show the Slurm commands that jobcli generates for that Torque command, without actually executing the Slurm backend command.

              See also the examples below.

              "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

              Similarly to --dryrun, adding --debug to a Torque command when using it on the HPC-UGent infrastructure will show the Slurm commands that jobcli generates for that Torque command. However, in contrast to --dryrun, using --debug will actually run the Slurm backend command.

              See also the examples below.

              "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

              The following examples illustrate how the --dryrun and --debug options work, using an example job script.

              example.sh:

              #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
              "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

              Running the following command:

              $ qsub --dryrun example.sh -N example\n

              will generate this output:

              Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc40000/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#!/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
              This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque directives to Slurm directives. For example, the job name is the one we specified with the -N option in the command.

              With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related structures, like $PBS_JOBID, they are retained. Slurm is configured on the HPC-UGent infrastructure such that common PBS_* environment variables are defined in the job environment, next to the Slurm equivalents.

              "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

              Similarly to the --dryrun example, we start by running the following command:

              $ qsub --debug example.sh -N example\n

              which generates this output:

              DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
              The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

              "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

              Below is a list of the most common and useful directives.

              | Option | System type | Description | Example |\n| --- | --- | --- | --- |\n| -k | All | Send \"stdout\" and/or \"stderr\" to your home directory when the job runs | #PBS -k o or #PBS -k e or #PBS -k oe |\n| -l | All | Precedes a resource request, e.g., processors, wallclock | |\n| -M | All | Send e-mail messages to an alternative e-mail address | #PBS -M me@mymail.be |\n| -m | All | Send an e-mail when a job begins execution and/or ends or aborts | #PBS -m b or #PBS -m be or #PBS -m ba |\n| mem | Shared Memory | Specifies the amount of memory you need for a job | #PBS -l mem=90gb |\n| mpiprocs | Clusters | Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. | #PBS -l mpiprocs=4 |\n| -N | All | Give your job a unique name | #PBS -N galaxies1234 |\n| -ncpus | Shared Memory | The number of processors to use for a shared memory job | #PBS -l ncpus=4 |\n| -r | All | Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. | #PBS -r n or #PBS -r y |\n| select | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive. | #PBS -l select=2 |\n| -V | All | Make sure that the environment in which the job runs is the same as the environment in which it was submitted | #PBS -V |\n| Walltime | All | The maximum time a job can run before being stopped. If not used, a default of a few minutes is applied. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS. | #PBS -l walltime=12:00:00 |"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

              TORQUE-related environment variables in batch job scripts.

              # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

              IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

              When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

              | Variable | Description |\n| --- | --- |\n| PBS_ENVIRONMENT | set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job |\n| PBS_JOBID | the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat. |\n| PBS_JOBNAME | the job name supplied by the user |\n| PBS_NODEFILE | the name of the file that contains the list of nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc. |\n| PBS_QUEUE | the name of the queue from which the job is executed |\n| PBS_O_HOME | value of the HOME variable in the environment in which qsub was executed |\n| PBS_O_LANG | value of the LANG variable in the environment in which qsub was executed |\n| PBS_O_LOGNAME | value of the LOGNAME variable in the environment in which qsub was executed |\n| PBS_O_PATH | value of the PATH variable in the environment in which qsub was executed |\n| PBS_O_MAIL | value of the MAIL variable in the environment in which qsub was executed |\n| PBS_O_SHELL | value of the SHELL variable in the environment in which qsub was executed |\n| PBS_O_TZ | value of the TZ variable in the environment in which qsub was executed |\n| PBS_O_HOST | the name of the host upon which the qsub command is running |\n| PBS_O_QUEUE | the name of the original queue to which the job was submitted |\n| PBS_O_WORKDIR | the absolute path of the current working directory of the qsub command. This is the most useful one: use it in every job script. The first thing you do is cd $PBS_O_WORKDIR after defining the resource list, because PBS drops you into your $HOME directory. |\n| PBS_VERSION | version number of TORQUE, e.g., TORQUE-2.5.1 |\n| PBS_MOMPORT | active port for the mom daemon |\n| PBS_TASKNUM | number of tasks requested |\n| PBS_JOBCOOKIE | job cookie |\n| PBS_SERVER | server running TORQUE |"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

              Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this in the subsections below.

              "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

              When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

              To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

              Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
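
              For example, a quick way to run such a scaling test is a loop over increasing core counts (the job script name is hypothetical):

              for n in 2 4 8 16; do\n    qsub -l nodes=1:ppn=$n -N scaling_${n}cores job.sh\ndone\n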

              Other reasons why using more cores may not lead to a (significant) speedup include:

              • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the amount of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program in too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

              • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program can not be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload.

              • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, 1 thread/process will need to wait until the other one is finished using that resource. When each thread uses the same resource, it will definitely run slower than if it doesn't need to wait for other threads to finish.

              • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that threads in Python are implemented in a way that prevents multiple threads from running at the same time, because of the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can, even though processes are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

              • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

              • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).

              More info on running multi-core workloads on the HPC-UGent infrastructure can be found here.

              "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

              When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

              Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

              Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there are libraries that do this for you.

              Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

              An example of how you can make beneficial use of multiple nodes can be found here.

              You can also use MPI in Python; some useful packages that are also available on the HPC are:

              • mpi4py
              • Boost.MPI

              We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software, we strongly advise using our mympirun tool.
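
              A minimal sketch of an MPI job script using mympirun (module choices and program name are assumptions):

              #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=2:00:00\n\nmodule load vsc-mympirun\nmodule load intel  # toolchain your MPI program was built with (illustrative)\ncd $PBS_O_WORKDIR\nmympirun ./mpi_program\n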

              "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

              If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

              If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

              "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

              If you get from your job output an error message similar to this:

              =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

              This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.
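
              For example, to request a longer walltime you could add a directive like this to your job script (the value is just an illustration):

              #PBS -l walltime=48:00:00\n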

              "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

              Sometimes a job hangs at some point or stops writing to the disk. These errors are usually related to quota usage: you may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to the disk again, and then resubmit the jobs.

              Another option is to request extra quota for your VO from the VO moderator(s). See the section on Pre-defined user directories and Pre-defined quotas for more information about quotas and how to use the storage endpoints in an efficient way.

              "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

              If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

              If you have errors that look like:

              vsc40000@login.hpc.ugent.be: Permission denied\n

              or you are experiencing problems with connecting, here is a list of things to do that should help:

              1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

              2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

              3. Please double/triple check your VSC login ID. It should look something like vsc40000: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

              4. Did you previously connect to the HPC from another machine, but are now using a different one? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

              5. Make sure you are using the private key (not the public key) when trying to connect: If you followed the manual, the private key filename should end in .ppk (not in .pub).

              6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

              7. Please do not use someone else's private keys. You must never share your private key, they're called private for a good reason.

              If you are using PuTTY and get this error message:

              server unexpectedly closed network connection\n

              it is possible that the PuTTY version you are using is too old and doesn't support some required (security-related) features.

              Make sure you are using the latest PuTTY version if you are encountering problems connecting (see Get PuTTY). If that doesn't help, please contact hpc@ugent.be.

              If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@ugent.be and include the following information:

              Please create a log file of your SSH session by following the steps in this article and include it in the email.

              "}, {"location": "troubleshooting/#change-putty-private-key-for-a-saved-configuration", "title": "Change PuTTY private key for a saved configuration", "text": "
              1. Open PuTTY

              2. Single click on the saved configuration

              3. Then click Load button

              4. Expand SSH category (on the left panel) clicking on the \"+\" next to SSH

              5. Click on Auth under the SSH category

              6. On the right panel, click Browse button

              7. Then search your private key on your computer (with the extension \".ppk\")

              8. Go back to the top of category, and click Session

              9. On the right panel, click on Save button

              "}, {"location": "troubleshooting/#check-whether-your-private-key-in-putty-matches-the-public-key-on-the-accountpage", "title": "Check whether your private key in PuTTY matches the public key on the accountpage", "text": "

              Follow the instructions in Change PuTTY private key for a saved configuration until item 5, then:

              1. Single click on the textbox containing the path to your private key, then select all text (push Ctrl + a ), then copy the location of the private key (push Ctrl + c)

              2. Open PuTTYgen

              3. Enter menu item \"File\" and select \"Load Private key\"

              4. On the \"Load private key\" popup, click in the textbox next to \"File name:\", then paste the location of your private key (push Ctrl + v), then click Open

              5. Make sure that your Public key from the \"Public key for pasting into OpenSSH authorized_keys file\" textbox is in your \"Public keys\" section on the accountpage https://account.vscentrum.be. (Scroll down to the bottom of \"View Account\" tab, you will find there the \"Public keys\" section)

              "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

              If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

              You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

              - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

              Do not click \"Yes\" until you have verified the fingerprint. Do not press \"No\" in any case.

              If the fingerprint matches, click \"Yes\".

              If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@ugent.be.

              Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

              If you use X2Go client, you might get one of the following fingerprints:

              • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
              • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
              • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c

              If you get a message \"Host key for server changed\", do not click \"No\" until you verified the fingerprint.

              If the fingerprint matches, click \"No\", and in the next pop-up screen (\"if you accept the new host key...\"), press \"Yes\".

              If it doesn't, or you are in doubt, take a screenshot, press \"Yes\" and contact hpc@ugent.be.

              "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

              If you get errors like:

              $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

              or

              sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

              It's probably because you transferred the files from a Windows computer. See the section about dos2unix in Linux tutorial to fix this error.
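
              For example, to convert the script in place (the dos2unix tool is typically available on the login nodes):

              $ dos2unix fibo.pbs\n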

              "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "

              The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

              Make sure the fingerprint in the alert matches one of the following:

              - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

              If it does, type yes (or press Yes in PuTTY); if it doesn't, please contact hpc@ugent.be.

              Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

              If you use X2Go, you might get another fingerprint; in that case, make sure that the fingerprint displayed is one of the following:

              • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
              • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
              • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c
              "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

              To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

              Note

              Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

              "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

              If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

              Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

              You can check the amount of virtual memory (in Kb) that is available to you via the ulimit -v command in your job script.
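
              For example, adding a line like this near the top of your job script prints the current limit to the job output (a minimal illustration):

              echo \"Virtual memory limit (kB): $(ulimit -v)\"\n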

              "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

              See Generic resource requirements to set memory and other requirements, see Specifying memory requirements to finetune the amount of memory you request.

              "}, {"location": "troubleshooting/#module-conflicts", "title": "Module conflicts", "text": "

              Modules that are loaded together must use the same toolchain version or common dependencies. In the following example, we try to load a module that uses the intel-2018a toolchain together with one that uses the intel-2017a toolchain:

              $ module load Python/2.7.14-intel-2018a\n$ module load  HMMER/3.1b2-intel-2017a\nLmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). \nYou should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. \nUse 'ml avail HMMER' to get an overview of the available versions.\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be \nWhile processing the following module(s):\n\n    Module fullname          Module Filename\n    ---------------          ---------------\n    HMMER/3.1b2-intel-2017a  /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua\n

              This resulted in an error because we tried to load two modules with different versions of the intel toolchain.

              To fix this, check if there are other versions of the modules you want to load that have the same version of common dependencies. You can list all versions of a module with module avail: for HMMER, this command is module avail HMMER.

              As a rule of thumb, toolchains in the same row are compatible with each other:

              | GCCcore | GCC | gfbf/gompi | foss | GCCcore | intel-compilers | iimkl/iimpi | intel |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| GCCcore-13.2.0 | GCC-13.2.0 | gfbf-2023b/gompi-2023b | foss-2023b | GCCcore-13.2.0 | intel-compilers-2023.2.1 | iimkl-2023b/iimpi-2023b | intel-2023b |\n| GCCcore-12.3.0 | GCC-12.3.0 | gfbf-2023a/gompi-2023a | foss-2023a | GCCcore-12.3.0 | intel-compilers-2023.1.0 | iimkl-2023a/iimpi-2023a | intel-2023a |\n| GCCcore-12.2.0 | GCC-12.2.0 | gfbf-2022b/gompi-2022b | foss-2022b | GCCcore-12.2.0 | intel-compilers-2022.2.1 | iimkl-2022b/iimpi-2022b | intel-2022b |\n| GCCcore-11.3.0 | GCC-11.3.0 | gfbf-2022a/gompi-2022a | foss-2022a | GCCcore-11.3.0 | intel-compilers-2022.1.0 | iimkl-2022a/iimpi-2022a | intel-2022a |\n| GCCcore-11.2.0 | GCC-11.2.0 | gfbf-2021b/gompi-2021b | foss-2021b | GCCcore-11.2.0 | intel-compilers-2021.4.0 | iimkl-2021b/iimpi-2021b | intel-2021b |\n| GCCcore-10.3.0 | GCC-10.3.0 | gfbf-2021a/gompi-2021a | foss-2021a | GCCcore-10.3.0 | intel-compilers-2021.2.0 | iimkl-2021a/iimpi-2021a | intel-2021a |\n| GCCcore-10.2.0 | GCC-10.2.0 | gfbf-2020b/gompi-2020b | foss-2020b | GCCcore-10.2.0 | iccifort-2020.4.304 | iimkl-2020b/iimpi-2020b | intel-2020b |

              Example

              We could load the following modules together:

              ml XGBoost/1.7.2-foss-2022a\nml scikit-learn/1.1.2-foss-2022a\nml cURL/7.83.0-GCCcore-11.3.0\nml JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0\n

              Another common error is:

              $ module load cluster/donphan\nLmod has detected the following error: A different version of the 'cluster' module is already loaded (see output of 'ml').\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be\n

              This is because there can only be one cluster module active at a time. The correct command is module swap cluster/donphan. See also Specifying the cluster on which to run.

              "}, {"location": "troubleshooting/#illegal-instruction-error", "title": "Illegal instruction error", "text": ""}, {"location": "troubleshooting/#running-software-that-is-incompatible-with-host", "title": "Running software that is incompatible with host", "text": "

              When running software provided through modules (see Modules), you may run into errors like:

              $ module swap cluster/donphan\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n\n$ module load Python/3.10.8-GCCcore-12.2.0\n$ python\nPlease verify that both the operating system and the processor support\nIntel(R) MOVBE, F16C, FMA, BMI, LZCNT and AVX2 instructions.\n

              or errors like:

              $ python\nIllegal instruction\n

              When we swap to a different cluster, the available modules change so they work for that cluster. That means that if the cluster and the login nodes have a different CPU architecture, software loaded using modules might not work.

              If you want to test software on the login nodes, make sure the cluster/doduo module is loaded (with module swap cluster/doduo, see Specifying the cluster on which to run), since the login nodes and the doduo cluster have the same CPU architecture.

              If modules are already loaded and we then swap to a different cluster, all our modules will get reloaded. This means that all current modules will be unloaded and then loaded again, so they'll work on the newly loaded cluster. Here's an example of what that looks like:

              $ module load Python/3.10.8-GCCcore-12.2.0\n$ module swap cluster/donphan\n\nDue to MODULEPATH changes, the following have been reloaded:\n  1) GCCcore/12.2.0                   8) binutils/2.39-GCCcore-12.2.0\n  2) GMP/6.2.1-GCCcore-12.2.0         9) bzip2/1.0.8-GCCcore-12.2.0\n  3) OpenSSL/1.1                     10) libffi/3.4.4-GCCcore-12.2.0\n  4) Python/3.10.8-GCCcore-12.2.0    11) libreadline/8.2-GCCcore-12.2.0\n  5) SQLite/3.39.4-GCCcore-12.2.0    12) ncurses/6.3-GCCcore-12.2.0\n  6) Tcl/8.6.12-GCCcore-12.2.0       13) zlib/1.2.12-GCCcore-12.2.0\n  7) XZ/5.2.7-GCCcore-12.2.0\n\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n

              This might result in the same problems as mentioned above. When swapping to a different cluster, you can run module purge to unload all modules to avoid problems (see Purging all modules).
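
              For example, a sequence like this (the module version shown is illustrative) avoids carrying over incompatible modules:

              $ module purge\n$ module swap cluster/donphan\n$ module load Python/3.10.8-GCCcore-12.2.0\n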

              "}, {"location": "troubleshooting/#multi-job-submissions-on-a-non-default-cluster", "title": "Multi-job submissions on a non-default cluster", "text": "

              When using a tool that is made available via modules to submit jobs, for example Worker, you may run into the following error when targeting a non-default cluster:

              $  wsub\n/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction     (core dumped) ${PERL} ${DIR}/../lib/wsub.pl \"$@\"\n

              When executing the module swap cluster command, you are not only changing your session environment to submit to that specific cluster, but also to use the part of the central software stack that is specific to that cluster. In the case of the Worker example above, the latter implies that you are running the wsub command on top of a Perl installation that is optimized specifically for the CPUs of the workernodes of that cluster, which may not be compatible with the CPUs of the login nodes, triggering the Illegal instruction error.

              The cluster modules are split up into several env/* \"submodules\" to help deal with this problem. For example, by using module swap env/slurm/donphan instead of module swap cluster/donphan (starting from the default environment, the doduo cluster), you can update your environment to submit jobs to donphan, while still using the software installations that are specific to the doduo cluster (which are compatible with the login nodes since the doduo cluster workernodes have the same CPUs). The same goes for the other clusters as well of course.

              Tip

              To submit a Worker job to a specific cluster, like the donphan interactive cluster for instance, use:

              $ module swap env/slurm/donphan \n
              instead of
              $ module swap cluster/donphan \n

              We recommend using a module swap cluster command after submitting the jobs.

              This to \"reset\" your environment to a sane state, since only having a different env/slurm module loaded can also lead to some surprises if you're not paying close attention.

              "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

              All the HPC clusters run some variant of the \"Red Hat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

              vsc40000@ln01[203] $\n

              When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

              | Command | Description |\n| --- | --- |\n| ls | Shows you a list of files in the current directory |\n| cd | Change current working directory |\n| rm | Remove file or directory |\n| echo | Prints its parameters to the screen |\n| nano | Text editor |

              Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

              $ echo This is a test\nThis is a test\n

              Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

              More commands will be used in the rest of this text, and will be explained as needed. If not, you can usually get more information about a command, say the command \"ls\", by trying any of the following:

              $ ls --help \n$ man ls\n$ info ls\n

              (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

              "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

              In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

              Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

              Another very common scripting language is shell scripting, which is the kind of script we will focus on in this section.

              In the following examples, each line typically contains one command to be executed, although it is possible to put multiple commands on one line. A very simple example of a script may be:

              echo \"Hello! This is my hostname:\" \nhostname\n

              You can type both lines at your shell prompt, and the result will be the following:

              $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\ngligar07.gastly.os\n

              Suppose we want to call this script \"foo\". You open a new file for editing, name it \"foo\", and edit it with your favourite editor

              nano foo\n

              or use the following commands:

              echo \"echo Hello! This is my hostname:\" > foo\necho hostname >> foo\n

              The easiest way to run a script is to start the interpreter and pass the script as a parameter. In the case of our script, the interpreter is either \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

              $ bash foo\nHello! This is my hostname:\ngligar07.gastly.os\n

              Congratulations, you just created and started your first shell script!

              A more advanced way of executing your shell scripts is by making them executable on their own, without invoking the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to tell it in some way. The easiest way is by using the so-called \"shebang\" notation, explicitly created for this purpose: you put the following line at the top of your shell script: \"#!/path/to/your/interpreter\".

              You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

              $ which bash\n/bin/bash\n

              We edit our script and change it with this information:

              #!/bin/bash echo \\\"Hello! This is my hostname:\\\" hostname\n

              Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

              Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

              chmod +x foo\n

              Now you can start your script by simply executing it:

              $ ./foo\nHello! This is my hostname:\ngligar07.gastly.os\n

              The same technique can be used for all other scripting languages, like Perl and Python.

              Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...
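              For example, a commented version of the script from earlier might look like this (purely illustrative):

              #!/bin/bash\n# This script prints a greeting and the name of the machine it runs on.\necho \"Hello! This is my hostname:\"\nhostname   # prints the hostname of the current machine\n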

              "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg A process running in the background will be processed in the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Modify properties for users"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

              The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

              Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

              To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

              Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

              Through this web portal, you can:

              • browse through the files & directories in your VSC account, and inspect, manage or change them;

              • consult active jobs (across all HPC-UGent Tier-2 clusters);

              • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

              • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

              • open a terminal session directly in your web browser;

              More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

              "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

              All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

              "}, {"location": "web_portal/#login", "title": "Login", "text": "

              When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

              "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

              The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

              Please click \"Authorize\" here.

              This request will only be made once; you should not see it again afterwards.

              "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

              Once logged in, you should see this start page:

              This page includes a menu bar at the top, with buttons on the left providing access to the different features supported by the web portal, as well as a Help menu, your VSC account name, and a Log Out button on the top right, and the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

              If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

              "}, {"location": "web_portal/#features", "title": "Features", "text": "

              We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

              "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

              Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

              The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

              Here you can:

              • Click a directory in the tree view on the left to open it;

              • Use the buttons on the top to:

                • go to a specific subdirectory by typing in the path (via Go To...);

                • open the current directory in a terminal (shell) session (via Open in Terminal);

                • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

                • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

                • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

                • show the owner and permissions in the file listing (via Show Owner/Mode);

              • Double-click a directory in the file listing to open that directory;

              • Select one or more files and/or directories in the file listing, and:

                • use the View button to see the contents (use the button at the top right to close the resulting popup window);

                • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

                • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

                • use the Download button to download the selected files and directories from your VSC account to your local workstation;

                • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

                • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

                • use the Delete button to (permanently!) remove the selected files and directories;

              For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

              "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

              Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

              For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

              "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

              To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

              A new browser tab will be opened that shows all your current queued and/or running jobs:

              You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

              Jobs that are still queued or running can be deleted using the red button on the right.

              Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

              For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

              "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

              To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

              This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

              You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

              Don't forget to actually submit your job to the system via the green Submit button!

              "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

              In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

              "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

              Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

              Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

              To exit the shell session, type exit followed by Enter and then close the browser tab.

              Note that you cannot access a shell session after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

              "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

              To create a graphical desktop environment, use one of the desktop on ... node buttons under the Interactive Apps menu item. For example:

              You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode, the regular queueing times apply, depending on the requested resources.

              Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

              To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

              "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

              See dedicated page on Jupyter notebooks

              "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

              In case of problems with the web portal, it could help to restart the web server running in your VSC account.

              You can do this via the Restart Web Server button under the Help menu item:

              Of course, this only affects your own web portal session (not those of others).

              "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
              • ABAQUS for CAE course
              "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

              X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

              1. A graphical remote desktop that works well over low bandwidth connections.

              2. Copy/paste support from client to server and vice-versa.

              3. File sharing from client to server.

              4. Support for sound.

              5. Printer sharing from client to server.

              6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

              "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

              X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

              X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. This section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

              "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

              After the X2Go client installation just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

              There are two ways to connect to the login node:

              • Option A: A direct connection to \"login.hpc.ugent.be\". This is the simpler option; the system will decide which login node to use based on a load-balancing algorithm.

              • Option B: You can use the node \"login.hpc.ugent.be\" as an SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

              "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

              This is the easier way to set up X2Go: a direct connection to the login node.

              1. Include a session name. This will help you to identify the session if you have more than one; you can choose any name (in our example \"HPC login node\").

              2. Set the login hostname (In our case: \"login.hpc.ugent.be\")

              3. Set the Login name. In the example it is \"vsc40000\", but you must change it to your own VSC account.

              4. Set the SSH port (22 by default).

              5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

                1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

                  You should look for your private SSH key generated by puttygen and exported in \"OpenSSH\" format, as described in Generating a public/private key pair (by default \"id_rsa\", and not the \".ppk\" version). Choose that file and click on Open.

              6. Check \"Try autologin\" option.

              7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

                1. [optional]: Set a single application like Terminal instead of XFCE desktop. This option is much better than PuTTY because the X2Go client includes copy-pasting support.

              8. [optional]: Change the session icon.

              9. Click the OK button after these changes.

              "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

              This option is useful if you want to resume a previous session or if you want to set explicitly which login node to use. In this case you should include a few more options. Use the same setup as Option A but with these changes:

              1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

              2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"gligar07.gastly.os\")

              3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

                1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

                2. Set Host to \"login.hpc.ugent.be\" within \"Proxy Server\" section as well.

                3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key in the \"RSA/DSA key\" field within the \"Proxy Server\" section, as you did for the server configuration (the \"RSA/DSA key\" field must be set in both sections).

                4. Click the OK button after these changes.

              "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

              Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. You can terminate a session by logging out from within the open session, or by clicking the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click the \"pause\" icon.

              X2Go will keep the session open for you (but only if the login node is not rebooted).

              "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

              If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

              hostname\n

              This will give you the full hostname (like \"gligar07.gastly.os\", but the hostname in your situation may be slightly different). Use the same name to resume the session the next time: just enter this full hostname in the \"login hostname\" field of your X2Go session (see Option B: use the login node as SSH proxy).

              "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

              If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), It is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select the session and terminate it. Then finish the session, choose again XFCE session (or whatever you use), then you should have your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

              "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

              The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

              To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

              Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

              After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

              Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on XDMoD use: https://shieldon.ugent.be/xdmod/user_manual/index.php.

              "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

              TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

              Loads MNIST datasets and trains a neural network to recognize hand-written digits.

              Runtime: ~1 min. on 8 cores (Intel Skylake)

              See https://www.tensorflow.org/tutorials/quickstart/beginner

              "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

              Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

              These skills are important to the HPC-UGent infrastructure, which operates on Red Hat Enterprise Linux. For more information see introduction to HPC.

              The guide aims to make you familiar with the Linux command line environment quickly.

              The tutorial goes through the following steps:

              1. Getting Started
              2. Navigating
              3. Manipulating files and directories
              4. Uploading files
              5. Beyond the basics

              Do not forget Common pitfalls, as this can save you some troubleshooting.

              "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
              • More on the HPC infrastructure.
              • Cron Scripts: run scripts automatically at fixed times, dates, or intervals.
              "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

              Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

              "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

              To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

              First, it's important to make a distinction between two different output channels:

              1. stdout: standard output channel, for regular output

              2. stderr: standard error channel, for errors and warnings

              "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

              > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

              $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

              >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

              $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

              < feeds the contents of a file to a command as its standard input (as if it were piped or typed input). So you would use this to simulate typing into a terminal: command < somefile.txt is largely equivalent to cat somefile.txt | command.

              One common use might be to take the results of a long-running command and store the results in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in that file list when you are done:

              $ find . -name '*.txt' > files\n$ xargs grep banana < files\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

              To redirect the stderr output (warnings, messages), you can use 2>, just like >

              $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

              To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

              $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

              Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

              $ ls | wc -l\n    42\n

              A common pattern is to pipe the output of a command to less so you can examine or search the output:

              $ find . | less\n

              Or to look through your command history:

              $ history | less\n

              You can put multiple pipes in the same line. For example, which cp commands have we run?

              $ history | grep cp | less\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

              The shell will expand certain things, including:

              1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

              2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

              3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

              4. square brackets can be used to list a number of options for a particular character; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52 (see the examples below).
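              A few of these expansions in action, as a sketch (the filenames are hypothetical):

              $ ls t*txt              # wildcard: files starting with 't' and ending in 'txt'\n$ echo \"I am $USER\"     # variable expansion\n$ ls *.[oe][0-9]        # matches e.g. myjob.o5 or myjob.e7, but not myjob.o52\n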

              "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

              ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

              $ ps -fu $USER\n

              To see all the processes:

              $ ps -elf\n

              To see all the processes in a forest view, use:

              $ ps auxf\n

              The last two will spit out a lot of data, so get in the habit of piping it to less.

              pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

              pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.

              "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

              ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill will send a message (SIGTERM by default) to the process to ask it to stop.

              $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

              Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignored your signal, you can send it a different message (SIGKILL) which the OS will use to unceremoniously terminate the process:

              $ kill -9 1234\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

              top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

              To see only your processes, type u and your username after starting top (you can also do this with top -u $USER). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

              There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

              To exit top, use q (for 'quit').

              For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

              "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

              ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

              $ ulimit -a\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

              To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

              $ wc example.txt\n      90     468     3189   example.txt\n

              The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

              To only count the number of lines, use wc -l:

              $ wc -l example.txt\n      90    example.txt\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

              grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

              $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

              grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

              "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

              cut is used to pull fields out of files or piped streams. It's useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV file (comma-separated values, so -d ',': delimited by ,), you can use the following:

              $ cut -f 1 -d ',' mydata.csv\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

              sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

              $ sed 's/oldtext/newtext/g' myfile.txt\n

              By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
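              A more cautious sketch of in-place editing lets sed keep a backup copy of the original file (the .bak suffix is just a convention):

              $ sed -i.bak 's/oldtext/newtext/g' myfile.txt   # edits myfile.txt in place, keeps the original as myfile.txt.bak\n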

              "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

              awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

              First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

              $ awk '{print $4}' mydata.dat\n

              You can use -F ':' to change the delimiter (F for field separator).

              The next example is used to sum numbers from a field:

              $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

              The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do it for you. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

              However, there are some rules you need to abide by.

              Here is a very detailed guide should you need more information.

              "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

              The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can simply copy-paste it; you need not worry about it further. It is however very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

              #!/bin/sh\n
              #!/bin/bash\n
              #!/usr/bin/env bash\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

              Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

              if [ -d directory ] && [ -f file ]\nthen\n  mv file directory\nfi\n
              Or you only want to do something if a file exists:
              if [ -f filename ]\nthen\n  echo \"it exists\"\nfi\n
              Or only if a certain variable is bigger than one:
              if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
              Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or preceded by a semicolon). It is best to just copy this example and modify it.

              In the initial example, we used -d to test if a directory existed. There are several more checks.
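              A few of the more common checks, as a sketch (see man test for the full overview; the filenames are hypothetical):

              # true if somefile exists and is a regular file\nif [ -f somefile ]\nthen\n  echo \"somefile exists\"\nfi\n# other useful tests: -d (directory exists), -s (file exists and is not empty),\n# -r (file is readable), string equality with =, numeric comparisons with -eq/-gt/-lt\n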

              Another useful example is to test if a variable contains a value (so it's not empty):

              if [ -z $PBS_ARRAYID ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

              The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

              "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

              Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

              Let's look at a simple example:

              for i in 1 2 3\ndo\necho $i\ndone\n
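              A slightly more realistic sketch loops over files rather than numbers (the *.txt files are hypothetical):

              for f in *.txt\ndo\n  echo \"Processing $f\"\n  wc -l \"$f\"   # for example, count the lines in each file\ndone\n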

              "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

              Subcommands are used all the time in shell scripts. What they do is store the output of a command in a variable, which can then be used later, for example in a conditional or a loop.

              CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

              In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
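              A small sketch that combines a subcommand with a conditional:

              NFILES=$(ls | wc -l)   # number of files in the current directory\nif [ $NFILES -gt 100 ]\nthen\n  echo \"This directory contains $NFILES files, consider cleaning up.\"\nfi\n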

              "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

              Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

              Firstly a useful thing to know for debugging and testing is that you can run any command like this:

              command &> output.log   # one single output file, both output and errors\n

              If you add &> output.log at the end of any command, it will combine stdout and stderr, outputting it into a single file named output.log.

              If you want regular and error output separated you can use:

              command > output.log 2> output.err  # errors in a separate file\n

              this will write regular output to output.log and error output to output.err.

              You can then look for the errors with less or search for specific text with grep.

              In scripts, you can use:

              set -e\n

              This will tell the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failing command most likely causes the rest of the script to fail as well.
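              A minimal sketch of a script using set -e (the directory name is made up):

              #!/bin/bash\nset -e                      # abort the script as soon as any command fails\ncd /some/nonexistent/dir    # this command fails ...\necho \"never reached\"        # ... so this line is never executed\n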

              "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

              Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, a special variable $? holds the exit code of the command that just finished. A value other than zero signifies that something went wrong. So an example use case:

              command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

              "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

              If you have certain commands executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

              Examples include:

              • modifying your $PS1 (to tweak your shell prompt)

              • printing information about the current/jobs environment (echoing environment variables, etc.)

              • selecting a specific cluster to run on with module swap cluster/...

              Some recommendations:

              • Avoid using module load statements in your $HOME/.bashrc file

              • Don't directly edit your .bashrc file: if there's an error in your .bashrc file, you might not be able to log in again. To prevent that, use another file to test your changes, then copy them over once you have tested them (a small example is shown below).
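              Putting the recommendations together, a harmless sketch of what you might add to (a test copy of) your $HOME/.bashrc could be:

              # tweak the shell prompt to show user, host and current working directory\nexport PS1='\\u@\\h:\\w$ '\n# show which cluster module is currently loaded (module list prints to stderr)\nmodule list 2>&1 | grep cluster\n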

              "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

              When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

              "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
              #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
              "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

              The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

              This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

              #PBS -l nodes=1:ppn=1 # single-core\n

              For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

              #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

              We intend to submit it on the long queue:

              #PBS -q long\n

              We request a total running time of 48 hours (2 days).

              #PBS -l walltime=48:00:00\n

              We specify a desired name of our job:

              #PBS -N FreeSurfer_per_subject-time-longitudinal\n
              This specifies mail options:
              #PBS -m abe\n

              1. a means mail is sent when the job is aborted.

              2. b means mail is sent when the job begins.

              3. e means mail is sent when the job ends.

              Joins error output with regular output:

              #PBS -j oe\n

              All of these options can also be specified on the command-line and will overwrite any pragmas present in the script.

              "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
              1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

              2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

              3. How many files and directories are in /tmp?

              4. What's the name of the 5th file/directory in alphabetical order in /tmp?

              5. List all files that start with t in /tmp.

              6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

              7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

              "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

              This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

              "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

              If you receive an error message which contains something like the following:

              No such file or directory\n

              It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

              Try to figure out the correct location using ls, cd and the different $VSC_* variables (see the example below).
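              For example, to check where your files actually are, you can print and list the standard VSC locations (a sketch using the $VSC_* variables mentioned above):

              $ echo $VSC_HOME $VSC_DATA $VSC_SCRATCH   # print the paths these variables point to\n$ ls $VSC_DATA                            # list the contents of your data directory\n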

              "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

              Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

              $ cat some file\nNo such file or directory 'some'\n

              Spaces are permitted; however, they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

              $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

              This is especially error-prone if you are piping results of find:

              $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

              This can be worked around using the -print0 flag:

              $ find . -type f -print0 | xargs -0 cat\n...\n

              But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

              "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

              If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

              $ rm -r ~/$PROJETC/*\n
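              One defensive sketch is to let the shell refuse to run the command when the variable is unset or empty, using the ${VAR:?} expansion (standard bash behaviour; PROJECT is a hypothetical variable name):

              $ rm -r ~/\"${PROJECT:?PROJECT is not set}\"/*   # aborts with an error instead of expanding to ~/*\n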

              "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

              A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

              $ #rm -r ~/$POROJETC/*\n
              Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

              "}, {"location": "linux-tutorial/common_pitfalls/#copying-files-with-winscp", "title": "Copying files with WinSCP", "text": "

              After copying files from a windows machine, a file might look funny when looking at it on the cluster.

              $ cat script.sh\n#!/bin/bash^M\n#PBS -l nodes^M\n...\n

              Or you can get errors like:

              $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

              See section dos2unix to fix these errors with dos2unix.
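              In practice, the fix usually comes down to running dos2unix on the affected script before submitting it:

              $ dos2unix script.sh   # convert Windows (CRLF) line endings to Unix (LF)\n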

              "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
              $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

              Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

              $ chmod +x script_name.sh\n

              "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

              If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

              If you need help about a certain command, you should consult its so-called \"man page\":

              $ man command\n

              This will open the manual of this command. This manual contains detailed explanation of all the options the command has. Exiting the manual is done by pressing 'q'.

              Don't be afraid to contact hpc@ugent.be. They are here to help and will do so for even the smallest of problems!

              "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
              1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

              2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

              3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

              4. basic shell usage

              5. Bash for beginners

              6. MOOC

              Please don't hesitate to contact in case of questions or problems.

              "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

              To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

              You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

              Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

              "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

              To get help:

              1. use the documentation available on the system, through the help, info and man commands (use q to exit).
                help cd \ninfo ls \nman cp \n
              2. use Google

              3. contact hpc@ugent.be in case of problems or questions (even for basic things!)

              "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@ugent.be.

              "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

              The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

              You use the shell by executing commands, and hitting <enter>. For example:

              $ echo hello \nhello \n

              You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

              To go through previous commands, use <up> and <down>, rather than retyping them.

              "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

              A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

              $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

              "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

              If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

              "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

              At the prompt we also have access to shell variables, which have both a name and a value.

              They can be thought of as placeholders for things we need to remember.

              For example, to print the path to your home directory, we can use the shell variable named HOME:

              $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

              This prints the value of this variable.

              "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

              There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

              For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

              $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

              You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

              $ env | sort | grep VSC\n

But we can also define our own. This is done with the export command (note: by convention, variable names are always all-caps):

              $ export MYVARIABLE=\"value\"\n

It is important that you don't include spaces around the = sign. Also note the lack of a $ sign in front of the variable name.

              If we then do

              $ echo $MYVARIABLE\n

this will output value. Note that the quotes are not included; they were only used when defining the variable, to escape potential spaces in the value.
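
As a short illustration of why the quotes matter (the variable name GREETING is just an example here): a value containing spaces must be quoted when the variable is defined, but the quotes are not part of the value:

$ export GREETING=\"hello world\"\n$ echo $GREETING\nhello world\n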

              "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

              You can change what your prompt looks like by redefining the special-purpose variable $PS1.

For example, to include the current location in your prompt:

              $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

Note that ~ is a short representation of your home directory.

To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

              $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

              "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

              One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

              This may lead to surprising results, for example:

              $ export WORKDIR=/tmp/test \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

              To understand what's going on here, see the section on cd below.
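
As a concrete sketch of this pitfall (the misspelled WORKDRI is deliberate and purely illustrative): using an undefined variable with cd behaves like running cd without an argument, silently dropping you in your home directory:

$ export WORKDIR=/tmp/test\n$ cd $WORKDRI\n$ pwd\n/user/home/gent/vsc400/vsc40000\n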

              The moral here is: be very careful to not use empty variables unintentionally.

              Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

              The -e option will result in the script getting stopped if any command fails.

              The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)
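
A minimal sketch of how this looks at the top of a job script (the OUTPUT_DIR variable is hypothetical and only used for illustration):

#!/bin/bash\nset -e -u\n# with -u, the next line aborts the script if OUTPUT_DIR was never defined\necho \"Writing results to $OUTPUT_DIR\"\n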

              More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

              "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

              If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

              "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

              Basic information about the system you are logged into can be obtained in a variety of ways.

              We limit ourselves to determining the hostname:

              $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

              And querying some basic information about the Linux kernel:

              $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

              "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
              • Print the full path to your home directory
              • Determine the name of the environment variable to your personal scratch directory
• What's the name of the system you're logged into? Is it the same for everyone?
              • Figure out how to print the value of a variable without including a newline
              • How do you get help on using the man command?

The next chapter teaches you how to navigate.

              "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

              Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the HPC for a list of available locations.

              "}, {"location": "linux-tutorial/hpc_infrastructure/#vo-storage", "title": "VO storage", "text": "

              If you are a member of a (non-default) virtual organisation (VO), see section Virtual Organisations, you have access to additional directories (with more quota) on the data and scratch filesystems, which you can share with other members in the VO.

              "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

              Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

To figure out where your quota is being spent, the du (disk usage) command can come in useful:

              $ du -sh test\n59M test\n

              Do not (frequently) run du on directories where large amounts of data are stored, since that will:

              1. take a long time

              2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

              "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

              Software is provided through so-called environment modules.

              The most commonly used commands are:

              1. module avail: show all available modules

              2. module avail <software name>: show available modules for a specific software name

              3. module list: show list of loaded modules

              4. module load <module name>: load a particular module

              More information is available in section Modules.
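
For example, a typical sequence might look like this (the Python module shown is just an illustration; use module avail to see what is actually installed):

$ module avail Python\n$ module load Python/3.6.4-intel-2018a\n$ module list\n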

              "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.
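
As a minimal sketch of what this can look like (the resource requests and the command shown are placeholders, not a recommendation for your particular workload):

$ cat jobscript.sh\n#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\necho \"Hello from $(hostname)\"\n\n$ qsub jobscript.sh\n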

Detailed information is available in the section on submitting your job.

              "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

              Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

              Hint: python -c \"print(sum(range(1, 101)))\"

              • How many modules are available for Python version 3.6.4?
              • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
              • Which cluster modules are available?

              • What's the full path to your personal home/data/scratch directories?

              • Determine how large your personal directories are.
              • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
              "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

Being able to manage your data is an important part of using the HPC infrastructure. The bread-and-butter commands for doing this are covered here. They might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such frequently used commands be short to type.

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

              To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

              $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

              To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
              $ cp source target\n

              This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

              $ cp -r sourceDirectory target\n

A final, more advanced example:

              $ cp -a sourceDirectory target\n

Here we used the same cp command, but with the -a (archive) option, which tells cp to copy recursively while preserving timestamps, permissions, and other attributes.

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
              $ mkdir directory\n

              which will create a directory with the given name inside the current directory.

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
              $ mv source target\n

mv will move the source path to the destination path. It works for both directories and files.

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

              Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

              $ rm filename\n
rm will remove a file; rm -rf directory will remove a given directory and every file inside it. WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

              You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

              $ rmdir directory\n

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

              Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

              1. User - a particular user (account)

              2. Group - a particular group of users (may be user-specific group with only one member)

              3. Other - other users in the system

              The permission types are:

              1. Read - For files, this gives permission to read the contents of a file

2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add files to or remove files from the directory.

3. Execute - For files, this gives permission to execute the file as though it were a script. For directories, it allows users to open the directory and look at its contents.

              Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

              $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

Here, we see that articleTable.csv is a file (the line begins with -) that has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as all other users (r-- and r--).

The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx), so that user can look into the directory and add or remove files. Users in the group mygroup can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users have no permissions on the directory at all (---), so they cannot even look inside it.

              Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

              $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

              You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

              You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
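
A hedged example of that pattern, using find's -exec flag (adjust the name pattern and the permission change to your situation):

$ find Project_GoldenDragon -type f -name \"*.sh\" -exec chmod g+x {} \\;\n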

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

              However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

              $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

This will give the user otheruser permission to write to Project_GoldenDragon.

              Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

              Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

              See https://linux.die.net/man/1/setfacl for more information.

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

              Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

              $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

              $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

              Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

              $ unzip myfile.zip\n

              If we would like to make our own zip archive, we use zip:

              $ zip myfiles.zip myfile1 myfile2 myfile3\n

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

              Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

              You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

              $ tar -xf tarfile.tar\n

Often, you will find gzip-compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You could uncompress these using gunzip and then unpack them using tar, but tar knows how to open them directly using the -z option:

              $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

              Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

# cp, ln: <source(s)> <target>\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: <target> <source(s)>\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

If you use tar with the source files first, the first file will be overwritten, because it is interpreted as the target archive. You can control the order of the arguments of tar if that helps you remember:

              $ tar -c source1 source2 source3 -f tarfile.tar\n
              "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
              1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

              2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

              3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

              4. Remove the another/test directory with a single command.

              5. Rename test to test2. Move test2/hostname.txt to your home directory.

              6. Change the permission of test2 so only you can access it.

              7. Create an empty job script named job.sh, and make it executable.

              8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

The next chapter is on uploading files, which is especially important when using the HPC infrastructure.

              "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories, a very important skill.

              "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

To print the current directory, use pwd or $PWD:

              $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

              "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

              A very basic and commonly used command is ls, which can be used to list files and directories.

              In its basic usage, it just prints the names of files and directories in the current directory. For example:

              $ ls\nafile.txt some_directory \n

              When provided an argument, it can be used to list the contents of a directory:

              $ ls some_directory \none.txt two.txt\n

              A couple of commonly used options include:

              • detailed listing using ls -l:

                $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
              • To print the size information in human-readable form, use the -h flag:

                $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
              • also listing hidden files using the -a flag:

                $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
              • ordering files by the most recent change using -rt:

                $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

              If you try to use ls on a file that doesn't exist, you will get a clear error message:

              $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
              "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

              To change to a different directory, you can use the cd command:

              $ cd some_directory\n

              To change back to the previous directory you were in, there's a shortcut: cd -

              Using cd without an argument results in returning back to your home directory:

              $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

              "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

              The file command can be used to inspect what type of file you're dealing with:

              $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
              "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

An absolute filepath starts with / (or a variable whose value starts with /); this / is also called the root of the filesystem.

              Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

              A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

              Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

              There are two special relative paths worth mentioning:

              • . is a shorthand for the current directory
              • .. is a shorthand for the parent of the current directory

              You can also use .. when constructing relative paths, for example:

              $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
              "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

              Each file and directory has particular permissions set on it, which can be queried using ls -l.

              For example:

              $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

The -rw-rw-r-- string specifies both the type of file (first character: - for files, d for directories) and the permissions for user/group/others:

              1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read/write permissions (not execute)
              3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
              4. the 3rd part r-- indicates that other users only have read permissions

              The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

              1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
              2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

              See also the chmod command later in this manual.
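
You can inspect the current umask value with the umask command; the output below is only an illustration of a value that matches the defaults described above, and the actual value on the system may differ:

$ umask\n0002\n$ umask -S\nu=rwx,g=rwx,o=rx\n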

              "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

find will crawl a series of directories and list files matching the given criteria.

              For example, to look for the file named one.txt:

              $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * by adding double quotes, to prevent Bash from expanding it (for example into afile.txt) before find sees it:

              $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

              A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).

              "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
              • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
              • When was your home directory created or last changed?
              • Determine the name of the last changed file in /tmp.
              • See how home directories are organised. Can you access the home directory of other users?

              The next chapter will teach you how to interact with files and directories.

              "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

To transfer files to and from the HPC, see the section about transferring files in the HPC manual.

              "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

              After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

              For example, you may see an error when submitting a job script that was edited on Windows:

              sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

              To fix this problem, you should run the dos2unix command on the file:

              $ dos2unix filename\n
              "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

As we end up in the home directory when connecting, it would be convenient if we could easily access our data and scratch storage. To facilitate this, we will create symlinks to them in our home directory. The commands below create symbolic links (they're like \"shortcuts\" on your desktop, and they look like directories in WinSCP) pointing to the respective storage locations:

              $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
              "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ stands for the Control key, so ^O means Ctrl-O. The main commands are:

              1. Open (\"Read\"): ^R

              2. Save (\"Write Out\"): ^O

              3. Exit: ^X

              More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

              "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

              rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

You will need to run rsync from a computer where it is installed. Installing rsync is easiest on Linux: it comes pre-installed with a lot of distributions.

              For example, to copy a folder with lots of CSV files:

              $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section above).

              The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

              To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.
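
For example, combining it with the flags from the previous example:

$ rsync -rzvP testfolder vsc40000@login.hpc.ugent.be:data/\n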

              To copy files to your local computer, you can also use rsync:

              $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
This will copy the folder bioset and its contents from $VSC_DATA to a local folder named local_folder.

              See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

              "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
              1. Download the file /etc/hostname to your local computer.

              2. Upload a file to a subdirectory of your personal $VSC_DATA space.

              3. Create a file named hello.txt and edit it using nano.

Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

              "}, {"location": "2023/donphan-gallade/", "title": "New Tier-2 clusters: donphan and gallade", "text": "

              In April 2023, two new clusters were added to the HPC-UGent Tier-2 infrastructure: donphan and gallade.

              This page provides some important information regarding these clusters, and how they differ from the clusters they are replacing (slaking and kirlia, respectively).

              If you have any questions on using donphan or gallade, you can contact the HPC-UGent team.

              For software installation requests, please use the request form.

              "}, {"location": "2023/donphan-gallade/#donphan-debuginteractive-cluster", "title": "donphan: debug/interactive cluster", "text": "

              donphan is the new debug/interactive cluster.

              It replaces slaking, which will be retired on Monday 22 May 2023.

              It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the HPC-UGent web portal, etc.

              This cluster consists of 12 workernodes, each with:

              • 2x 18-core Intel Xeon Gold 6240 (Cascade Lake @ 2.6 GHz) processor;
              • one shared NVIDIA Ampere A2 GPU (16GB GPU memory)
              • ~738 GiB of RAM memory;
              • 1.6TB NVME local disk;
              • HDR-100 InfiniBand interconnect;
              • RHEL8 as operating system;

              To start using this cluster from a terminal session, first run:

              module swap cluster/donphan\n

              You can also start (interactive) sessions on donphan using the HPC-UGent web portal.

              "}, {"location": "2023/donphan-gallade/#differences-compared-to-slaking", "title": "Differences compared to slaking", "text": ""}, {"location": "2023/donphan-gallade/#cpus", "title": "CPUs", "text": "

              The most important difference between donphan and slaking workernodes is in the CPUs: while slaking workernodes featured Intel Haswell CPUs, which support SSE*, AVX, and AVX2 vector instructions, donphan features Intel Cascade Lake CPUs, which also support AVX-512 instructions, on top of SSE*, AVX, and AVX2.

              Although software that was built on a slaking workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) should still run on a donphan workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions.

              "}, {"location": "2023/donphan-gallade/#cluster-size", "title": "Cluster size", "text": "

              The donphan cluster is significantly bigger than slaking, both in terms of number of workernodes and number of cores per workernode, and hence the potential performance impact of oversubscribed cores (see below) is less likely to occur in practice.

              "}, {"location": "2023/donphan-gallade/#user-limits-and-oversubscription-on-donphan", "title": "User limits and oversubscription on donphan", "text": "

              By imposing strict user limits and using oversubscription on this cluster, we ensure that anyone can get a job running without having to wait in the queue, albeit with limited resources.

The user limits for donphan include:

• max. 5 jobs in queue
• max. 3 jobs running
• max. 8 cores in total for running jobs
• max. 27GB of memory in total for running jobs

The job scheduler is configured to allow oversubscription of the available cores, which means that jobs will continue to start even if all cores are already occupied by running jobs. While this prevents waiting time in the queue, it does imply that performance will degrade when all cores are occupied and additional jobs keep starting.

              "}, {"location": "2023/donphan-gallade/#shared-gpu-on-donphan-workernodes", "title": "Shared GPU on donphan workernodes", "text": "

              Each donphan workernode includes a single NVIDIA A2 GPU that can be used for light compute workloads, and to accelerate certain graphical tasks.

              This GPU is shared across all jobs running on the workernode, and does not need to be requested explicitly (it is always available, similar to the local disk of the workernode).
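
To quickly check that the shared GPU is visible from within your job or interactive session, you can run the standard NVIDIA nvidia-smi tool (assuming it is available on the workernode, as is typically the case on GPU-equipped nodes; the exact output will differ):

$ nvidia-smi\n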

              Warning

              Due to the shared nature of this GPU, you should assume that any data that is loaded in the GPU memory could potentially be accessed by other users, even after your processes have completed.

              There are no strong security guarantees regarding data protection when using this shared GPU!

              "}, {"location": "2023/donphan-gallade/#gallade-large-memory-cluster", "title": "gallade: large-memory cluster", "text": "

              gallade is the new large-memory cluster.

              It replaces kirlia, which will be retired on Monday 22 May 2023.

              This cluster consists of 12 workernodes, each with:

              • 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) processor;
              • ~940 GiB of RAM memory;
              • 1.5TB NVME local disk;
              • HDR-100 InfiniBand interconnect;
              • RHEL8 as operating system;

              To start using this cluster from a terminal session, first run:

              module swap cluster/gallade\n

              You can also start (interactive) sessions on gallade using the HPC-UGent web portal.

              "}, {"location": "2023/donphan-gallade/#differences-compared-to-kirlia", "title": "Differences compared to kirlia", "text": ""}, {"location": "2023/donphan-gallade/#cpus_1", "title": "CPUs", "text": "

The most important difference between gallade and kirlia workernodes is in the CPUs: while kirlia workernodes featured Intel Cascade Lake CPUs, which support AVX-512 vector instructions (next to SSE*, AVX, and AVX2), gallade features AMD Milan-X CPUs, which implement the Zen3 microarchitecture and hence do not support AVX-512 instructions (but do support SSE*, AVX, and AVX2).

              As a result, software that was built on a kirlia workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) may not work anymore on a gallade workernode, and will produce Illegal instruction errors.

Therefore, you may need to recompile software in order to use it on gallade. Even if software built on kirlia does still run on gallade, it is strongly recommended to recompile it anyway, since there may be significant performance benefits.

              "}, {"location": "2023/donphan-gallade/#memory-per-core", "title": "Memory per core", "text": "

Although gallade workernodes have significantly more RAM memory (~940 GiB) than kirlia workernodes had (~738 GiB), the average amount of memory per core is significantly lower on gallade than it was on kirlia, because a gallade workernode has 128 cores (so ~7.3 GiB per core on average), while a kirlia workernode had only 36 cores (so ~20.5 GiB per core on average).

It is important to take this aspect into account when submitting jobs to gallade, especially when requesting all cores via ppn=all. You may need to explicitly request more memory (see also here).

              "}, {"location": "2023/shinx/", "title": "New Tier-2 cluster: shinx", "text": "

              In October 2023, a new pilot cluster was added to the HPC-UGent Tier-2 infrastructure: shinx.

              This page provides some important information regarding this cluster, and how it differs from the clusters it is replacing (swalot and victini).

              If you have any questions on using shinx, you can contact the HPC-UGent team.

              For software installation requests, please use the request form.

              "}, {"location": "2023/shinx/#shinx-generic-cpu-cluster", "title": "shinx: generic CPU cluster", "text": "

              shinx is a new CPU-only cluster.

It replaces swalot, which was retired on Wednesday 01 November 2023, and victini, which was retired on Monday 05 February 2024.

              It is primarily for regular CPU compute use.

              This cluster consists of 48 workernodes, each with:

              • 2x 96-core AMD EPYC 9654 (Genoa @ 2.4 GHz) processor;
              • ~360 GiB of RAM memory;
              • 400GB local disk;
              • NDR-200 InfiniBand interconnect;
              • RHEL9 as operating system;

              To start using this cluster from a terminal session, first run:

              module swap cluster/shinx\n

              You can also start (interactive) sessions on shinx using the HPC-UGent web portal.

              "}, {"location": "2023/shinx/#differences-compared-to-swalot-and-victini", "title": "Differences compared to swalot and victini.", "text": ""}, {"location": "2023/shinx/#cpus", "title": "CPUs", "text": "

              The most important difference between shinx and swalot/victini workernodes is in the CPUs: while swalot and victini workernodes featured Intel CPUs, shinx workernodes have AMD Genoa CPUs.

              Although software that was built on a swalot or victini workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing on swalot).

              "}, {"location": "2023/shinx/#cluster-size", "title": "Cluster size", "text": "

              The shinx cluster is significantly bigger than swalot and victini in number of cores, and number of cores per workernode, but not in number of workernodes. In particular, requesting all cores via ppn=all might be something to reconsider.

The amount of available memory per core is 1.9 GiB, which is lower than on the swalot nodes (6.2 GiB per core) and the victini nodes (2.5 GiB per core).

              "}, {"location": "2023/shinx/#comparison-with-doduo", "title": "Comparison with doduo", "text": "

              As doduo is the current largest CPU cluster of the UGent Tier-2 infrastructure, and it is also based on AMD EPYC CPUs, we would like to point out that, roughly speaking, one shinx node is equal to 2 doduo nodes.

              Although software that was built on a doduo workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing from doduo).

              "}, {"location": "2023/shinx/#other-remarks", "title": "Other remarks", "text": "
• Possible issues with thread pinning: we have seen, especially on the Tier-1 dodrio cluster, that in certain cases thread pinning is invoked where it is not expected. A typical symptom is that all started processes are pinned to a single core. Always report this issue when it occurs. You can try to mitigate it yourself by setting export OMP_PROC_BIND=false, but always report it so we can keep track of the problem. It is not recommended to set this workaround unconditionally; use it only for the specific tools that are affected.
              "}, {"location": "2023/shinx/#shinx-pilot-phase-23102023-15072024", "title": "Shinx pilot phase (23/10/2023-15/07/2024)", "text": "

As usual with any pilot phase, you need to be a member of the gpilot group, and to start using this cluster run:

              module swap cluster/.shinx\n

Because the delivery time of the infiniband network is very long, we only expect to have all material by the end of February 2024. However, all the workernodes will already be delivered in the week of 20 October 2023.

              As such, we will have an extended pilot phase in 3 stages:

              "}, {"location": "2023/shinx/#stage-0-23102023-17112023", "title": "Stage 0: 23/10/2023-17/11/2023", "text": "
              • Minimal cluster to test software and nodes

                • Only 2 or 3 nodes available
                • FDR or EDR infiniband network
                • EL8 OS
              • Retirement of swalot cluster (as of 01 November 2023)

              • Racking of stage 1 nodes
              "}, {"location": "2023/shinx/#stage-1-01122023-01032024", "title": "Stage 1: 01/12/2023-01/03/2024", "text": "
              • 2/3 cluster size

                • 32 nodes (with max job size of 16 nodes)
                • EDR Infiniband
                • EL8 OS
• Retirement of victini (as of 05 February 2024)

              • Racking of last 16 nodes
              • Installation of NDR/NDR-200 infiniband network
              "}, {"location": "2023/shinx/#stage-2-19042024-15072024", "title": "Stage 2 (19/04/2024-15/07/2024)", "text": "
              • Full size cluster

                • 48 nodes (no job size limit)
                • NDR-200 Infiniband (single switch Infiniband topology)
                • EL9 OS
• We expect to plan a full Tier-2 downtime in May 2024 to clean up, refactor and renew the core networks (ethernet and infiniband) and some core services. It makes no sense to put shinx in production before that period, and the testing of the EL9 operating system will also take some time.

              "}, {"location": "2023/shinx/#stage-3-15072024-", "title": "Stage 3 (15/07/2024 - )", "text": "
              • Cluster in production using EL9 (starting with 9.4). Any user can now submit jobs.
              "}, {"location": "2023/shinx/#using-doduo-software", "title": "Using doduo software", "text": "

For benchmarking and/or compatibility testing, you can try to use the doduo software stack by adding the following line in the job script before the actual software is loaded:

              module swap env/software/doduo\n

We mainly expect problems with this in stage 2 of the pilot phase (and in the later production phase), due to the change in OS.

              "}, {"location": "available_software/", "title": "Available software (via modules)", "text": "

              This table gives an overview of all the available software on the different clusters.

              "}, {"location": "available_software/detail/ABAQUS/", "title": "ABAQUS", "text": ""}, {"location": "available_software/detail/ABAQUS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABAQUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ABAQUS, load one of these modules using a module load command like:

              module load ABAQUS/2023\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ABAQUS/2023 x x x x x x ABAQUS/2022-hotfix-2214 - x x - x x ABAQUS/2022 - x x - x x ABAQUS/2021-hotfix-2132 - x x - x x"}, {"location": "available_software/detail/ABINIT/", "title": "ABINIT", "text": ""}, {"location": "available_software/detail/ABINIT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABINIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ABINIT, load one of these modules using a module load command like:

              module load ABINIT/9.10.3-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ABINIT/9.10.3-intel-2022a - - x - x x ABINIT/9.4.1-intel-2020b - x x x x x ABINIT/9.2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/ABRA2/", "title": "ABRA2", "text": ""}, {"location": "available_software/detail/ABRA2/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABRA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ABRA2, load one of these modules using a module load command like:

              module load ABRA2/2.23-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ABRA2/2.23-GCC-10.2.0 - x x x x x ABRA2/2.23-GCC-9.3.0 - x x - x x ABRA2/2.22-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/ABRicate/", "title": "ABRicate", "text": ""}, {"location": "available_software/detail/ABRicate/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABRicate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ABRicate, load one of these modules using a module load command like:

              module load ABRicate/0.9.9-gompi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ABRicate/0.9.9-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ABySS/", "title": "ABySS", "text": ""}, {"location": "available_software/detail/ABySS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABySS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ABySS, load one of these modules using a module load command like:

              module load ABySS/2.3.7-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ABySS/2.3.7-foss-2023a x x x x x x ABySS/2.1.5-foss-2019b - x x - x x"}, {"location": "available_software/detail/ACTC/", "title": "ACTC", "text": ""}, {"location": "available_software/detail/ACTC/#available-modules", "title": "Available modules", "text": "

The overview below shows which ACTC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ACTC, load one of these modules using a module load command like:

              module load ACTC/1.1-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ACTC/1.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ADMIXTURE/", "title": "ADMIXTURE", "text": ""}, {"location": "available_software/detail/ADMIXTURE/#available-modules", "title": "Available modules", "text": "

The overview below shows which ADMIXTURE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ADMIXTURE, load one of these modules using a module load command like:

              module load ADMIXTURE/1.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ADMIXTURE/1.3.0 - x x - x x"}, {"location": "available_software/detail/AICSImageIO/", "title": "AICSImageIO", "text": ""}, {"location": "available_software/detail/AICSImageIO/#available-modules", "title": "Available modules", "text": "

The overview below shows which AICSImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using AICSImageIO, load one of these modules using a module load command like:

              module load AICSImageIO/4.14.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AICSImageIO/4.14.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/AMAPVox/", "title": "AMAPVox", "text": ""}, {"location": "available_software/detail/AMAPVox/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMAPVox installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using AMAPVox, load one of these modules using a module load command like:

              module load AMAPVox/1.9.4-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AMAPVox/1.9.4-Java-11 x x x - x x"}, {"location": "available_software/detail/AMICA/", "title": "AMICA", "text": ""}, {"location": "available_software/detail/AMICA/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMICA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using AMICA, load one of these modules using a module load command like:

              module load AMICA/2024.1.19-intel-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AMICA/2024.1.19-intel-2023a x x x x x x"}, {"location": "available_software/detail/AMOS/", "title": "AMOS", "text": ""}, {"location": "available_software/detail/AMOS/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMOS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using AMOS, load one of these modules using a module load command like:

              module load AMOS/3.1.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AMOS/3.1.0-foss-2023a x x x x x x AMOS/3.1.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/AMPtk/", "title": "AMPtk", "text": ""}, {"location": "available_software/detail/AMPtk/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMPtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using AMPtk, load one of these modules using a module load command like:

              module load AMPtk/1.5.4-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AMPtk/1.5.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/ANTLR/", "title": "ANTLR", "text": ""}, {"location": "available_software/detail/ANTLR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ANTLR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ANTLR, load one of these modules using a module load command like:

              module load ANTLR/2.7.7-GCCcore-10.3.0-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ANTLR/2.7.7-GCCcore-10.3.0-Java-11 - x x - x x ANTLR/2.7.7-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ANTs/", "title": "ANTs", "text": ""}, {"location": "available_software/detail/ANTs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ANTs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ANTs, load one of these modules using a module load command like:

              module load ANTs/2.3.2-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ANTs/2.3.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/APR-util/", "title": "APR-util", "text": ""}, {"location": "available_software/detail/APR-util/#available-modules", "title": "Available modules", "text": "

              The overview below shows which APR-util installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using APR-util, load one of these modules using a module load command like:

              module load APR-util/1.6.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty APR-util/1.6.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/APR/", "title": "APR", "text": ""}, {"location": "available_software/detail/APR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which APR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using APR, load one of these modules using a module load command like:

              module load APR/1.7.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty APR/1.7.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ARAGORN/", "title": "ARAGORN", "text": ""}, {"location": "available_software/detail/ARAGORN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ARAGORN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ARAGORN, load one of these modules using a module load command like:

              module load ARAGORN/1.2.41-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ARAGORN/1.2.41-foss-2021b x x x - x x ARAGORN/1.2.38-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/ASCAT/", "title": "ASCAT", "text": ""}, {"location": "available_software/detail/ASCAT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ASCAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ASCAT, load one of these modules using a module load command like:

              module load ASCAT/3.1.2-foss-2022b-R-4.2.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ASCAT/3.1.2-foss-2022b-R-4.2.2 x x x x x x ASCAT/3.1.2-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ASE/", "title": "ASE", "text": ""}, {"location": "available_software/detail/ASE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ASE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ASE, load one of these modules using a module load command like:

              module load ASE/3.22.1-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ASE/3.22.1-intel-2022a x x x x x x ASE/3.22.1-intel-2021b x x x - x x ASE/3.22.1-gomkl-2021a x x x x x x ASE/3.22.1-foss-2022a x x x x x x ASE/3.22.1-foss-2021b x x x - x x ASE/3.21.1-fosscuda-2020b - - - - x - ASE/3.21.1-foss-2020b - - x x x - ASE/3.20.1-intel-2020a-Python-3.8.2 x x x x x x ASE/3.20.1-fosscuda-2020b - - - - x - ASE/3.20.1-foss-2020b - x x x x x ASE/3.19.0-intel-2019b-Python-3.7.4 - x x - x x ASE/3.19.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ATK/", "title": "ATK", "text": ""}, {"location": "available_software/detail/ATK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ATK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ATK, load one of these modules using a module load command like:

              module load ATK/2.38.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ATK/2.38.0-GCCcore-12.3.0 x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x ATK/2.38.0-GCCcore-11.3.0 x x x x x x ATK/2.36.0-GCCcore-11.2.0 x x x x x x ATK/2.36.0-GCCcore-10.3.0 x x x - x x ATK/2.36.0-GCCcore-10.2.0 x x x x x x ATK/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/AUGUSTUS/", "title": "AUGUSTUS", "text": ""}, {"location": "available_software/detail/AUGUSTUS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AUGUSTUS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AUGUSTUS, load one of these modules using a module load command like:

              module load AUGUSTUS/3.4.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AUGUSTUS/3.4.0-foss-2021b x x x x x x AUGUSTUS/3.4.0-foss-2020b x x x x x x AUGUSTUS/3.3.3-intel-2019b - x x - x x AUGUSTUS/3.3.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/Abseil/", "title": "Abseil", "text": ""}, {"location": "available_software/detail/Abseil/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Abseil installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Abseil, load one of these modules using a module load command like:

              module load Abseil/20230125.3-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Abseil/20230125.3-GCCcore-12.3.0 x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/AdapterRemoval/", "title": "AdapterRemoval", "text": ""}, {"location": "available_software/detail/AdapterRemoval/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AdapterRemoval installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AdapterRemoval, load one of these modules using a module load command like:

              module load AdapterRemoval/2.3.3-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AdapterRemoval/2.3.3-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/Albumentations/", "title": "Albumentations", "text": ""}, {"location": "available_software/detail/Albumentations/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Albumentations installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Albumentations, load one of these modules using a module load command like:

              module load Albumentations/1.1.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Albumentations/1.1.0-foss-2021b x x x - x x Albumentations/1.1.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/AlphaFold/", "title": "AlphaFold", "text": ""}, {"location": "available_software/detail/AlphaFold/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AlphaFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AlphaFold, load one of these modules using a module load command like:

              module load AlphaFold/2.3.4-foss-2022a-ColabFold\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AlphaFold/2.3.4-foss-2022a-ColabFold - - x - x - AlphaFold/2.3.4-foss-2022a-CUDA-11.7.0-ColabFold x - - - x - AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0 x - - - x - AlphaFold/2.3.1-foss-2022a x x x x x x AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1 x - - - x - AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.2.2-foss-2021a - x x - x x AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.1.2-foss-2021a - x x - x x AlphaFold/2.1.1-fosscuda-2020b x - - - x - AlphaFold/2.0.0-fosscuda-2020b x - - - x - AlphaFold/2.0.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/AlphaPulldown/", "title": "AlphaPulldown", "text": ""}, {"location": "available_software/detail/AlphaPulldown/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AlphaPulldown installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AlphaPulldown, load one of these modules using a module load command like:

              module load AlphaPulldown/0.30.7-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AlphaPulldown/0.30.7-foss-2022a - - x - x - AlphaPulldown/0.30.4-fosscuda-2020b x - - - x - AlphaPulldown/0.30.4-foss-2020b x x x x x x"}, {"location": "available_software/detail/Altair-EDEM/", "title": "Altair-EDEM", "text": ""}, {"location": "available_software/detail/Altair-EDEM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Altair-EDEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Altair-EDEM, load one of these modules using a module load command like:

              module load Altair-EDEM/2021.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Altair-EDEM/2021.2 - x x - x -"}, {"location": "available_software/detail/Amber/", "title": "Amber", "text": ""}, {"location": "available_software/detail/Amber/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Amber installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Amber, load one of these modules using a module load command like:

              module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/AmberMini/", "title": "AmberMini", "text": ""}, {"location": "available_software/detail/AmberMini/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AmberMini installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AmberMini, load one of these modules using a module load command like:

              module load AmberMini/16.16.0-intel-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AmberMini/16.16.0-intel-2020a - x x - x x"}, {"location": "available_software/detail/AmberTools/", "title": "AmberTools", "text": ""}, {"location": "available_software/detail/AmberTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AmberTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AmberTools, load one of these modules using a module load command like:

              module load AmberTools/20-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AmberTools/20-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Anaconda3/", "title": "Anaconda3", "text": ""}, {"location": "available_software/detail/Anaconda3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Anaconda3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Anaconda3, load one of these modules using a module load command like:

              module load Anaconda3/2023.03-1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Anaconda3/2023.03-1 x x x x x x Anaconda3/2020.11 - x x - x - Anaconda3/2020.07 - x - - - - Anaconda3/2020.02 - x x - x -"}, {"location": "available_software/detail/Annocript/", "title": "Annocript", "text": ""}, {"location": "available_software/detail/Annocript/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Annocript installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Annocript, load one of these modules using a module load command like:

              module load Annocript/2.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Annocript/2.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ArchR/", "title": "ArchR", "text": ""}, {"location": "available_software/detail/ArchR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ArchR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ArchR, load one of these modules using a module load command like:

              module load ArchR/1.0.2-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ArchR/1.0.2-foss-2023a-R-4.3.2 x x x x x x ArchR/1.0.1-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Archive-Zip/", "title": "Archive-Zip", "text": ""}, {"location": "available_software/detail/Archive-Zip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Archive-Zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Archive-Zip, load one of these modules using a module load command like:

              module load Archive-Zip/1.68-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Archive-Zip/1.68-GCCcore-11.3.0 x x x - x x Archive-Zip/1.68-GCCcore-11.2.0 x x x - x x Archive-Zip/1.68-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Arlequin/", "title": "Arlequin", "text": ""}, {"location": "available_software/detail/Arlequin/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Arlequin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Arlequin, load one of these modules using a module load command like:

              module load Arlequin/3.5.2.2-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Arlequin/3.5.2.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Armadillo/", "title": "Armadillo", "text": ""}, {"location": "available_software/detail/Armadillo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Armadillo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Armadillo, load one of these modules using a module load command like:

              module load Armadillo/12.6.2-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Armadillo/12.6.2-foss-2023a x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/Arrow/", "title": "Arrow", "text": ""}, {"location": "available_software/detail/Arrow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Arrow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Arrow, load one of these modules using a module load command like:

              module load Arrow/14.0.1-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Arrow/14.0.1-gfbf-2023a x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x Arrow/8.0.0-foss-2022a x x x x x x Arrow/6.0.0-foss-2021b x x x x x x Arrow/6.0.0-foss-2021a - x x - x x Arrow/0.17.1-intel-2020b - x x - x x Arrow/0.17.1-intel-2020a-Python-3.8.2 - x x - x x Arrow/0.17.1-fosscuda-2020b - - - - x - Arrow/0.17.1-foss-2020a-Python-3.8.2 - x x - x x Arrow/0.16.0-intel-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/ArviZ/", "title": "ArviZ", "text": ""}, {"location": "available_software/detail/ArviZ/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ArviZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ArviZ, load one of these modules using a module load command like:

              module load ArviZ/0.16.1-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ArviZ/0.16.1-foss-2023a x x x x x x ArviZ/0.12.1-foss-2021a x x x x x x ArviZ/0.11.4-intel-2021b x x x - x x ArviZ/0.11.1-intel-2020b - x x - x x ArviZ/0.7.0-intel-2019b-Python-3.7.4 - x x - x x ArviZ/0.7.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Aspera-CLI/", "title": "Aspera-CLI", "text": ""}, {"location": "available_software/detail/Aspera-CLI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Aspera-CLI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Aspera-CLI, load one of these modules using a module load command like:

              module load Aspera-CLI/3.9.6.1467.159c5b1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Aspera-CLI/3.9.6.1467.159c5b1 - x x - x -"}, {"location": "available_software/detail/AutoDock-Vina/", "title": "AutoDock-Vina", "text": ""}, {"location": "available_software/detail/AutoDock-Vina/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AutoDock-Vina installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AutoDock-Vina, load one of these modules using a module load command like:

              module load AutoDock-Vina/1.2.3-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AutoDock-Vina/1.2.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/AutoGeneS/", "title": "AutoGeneS", "text": ""}, {"location": "available_software/detail/AutoGeneS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AutoGeneS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AutoGeneS, load one of these modules using a module load command like:

              module load AutoGeneS/1.0.4-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AutoGeneS/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/AutoMap/", "title": "AutoMap", "text": ""}, {"location": "available_software/detail/AutoMap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which AutoMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using AutoMap, load one of these modules using a module load command like:

              module load AutoMap/1.0-foss-2019b-20200324\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty AutoMap/1.0-foss-2019b-20200324 - x x - x x"}, {"location": "available_software/detail/Autoconf/", "title": "Autoconf", "text": ""}, {"location": "available_software/detail/Autoconf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Autoconf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Autoconf, load one of these modules using a module load command like:

              module load Autoconf/2.71-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Autoconf/2.71-GCCcore-13.2.0 x x x x x x Autoconf/2.71-GCCcore-12.3.0 x x x x x x Autoconf/2.71-GCCcore-12.2.0 x x x x x x Autoconf/2.71-GCCcore-11.3.0 x x x x x x Autoconf/2.71-GCCcore-11.2.0 x x x x x x Autoconf/2.71-GCCcore-10.3.0 x x x x x x Autoconf/2.71 x x x x x x Autoconf/2.69-GCCcore-10.2.0 x x x x x x Autoconf/2.69-GCCcore-9.3.0 x x x x x x Autoconf/2.69-GCCcore-8.3.0 x x x x x x Autoconf/2.69-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Automake/", "title": "Automake", "text": ""}, {"location": "available_software/detail/Automake/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Automake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Automake, load one of these modules using a module load command like:

              module load Automake/1.16.5-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Automake/1.16.5-GCCcore-13.2.0 x x x x x x Automake/1.16.5-GCCcore-12.3.0 x x x x x x Automake/1.16.5-GCCcore-12.2.0 x x x x x x Automake/1.16.5-GCCcore-11.3.0 x x x x x x Automake/1.16.5 x x x x x x Automake/1.16.4-GCCcore-11.2.0 x x x x x x Automake/1.16.3-GCCcore-10.3.0 x x x x x x Automake/1.16.2-GCCcore-10.2.0 x x x x x x Automake/1.16.1-GCCcore-9.3.0 x x x x x x Automake/1.16.1-GCCcore-8.3.0 x x x x x x Automake/1.16.1-GCCcore-8.2.0 - x - - - - Automake/1.15.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Autotools/", "title": "Autotools", "text": ""}, {"location": "available_software/detail/Autotools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Autotools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Autotools, load one of these modules using a module load command like:

              module load Autotools/20220317-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Autotools/20220317-GCCcore-13.2.0 x x x x x x Autotools/20220317-GCCcore-12.3.0 x x x x x x Autotools/20220317-GCCcore-12.2.0 x x x x x x Autotools/20220317-GCCcore-11.3.0 x x x x x x Autotools/20220317 x x x x x x Autotools/20210726-GCCcore-11.2.0 x x x x x x Autotools/20210128-GCCcore-10.3.0 x x x x x x Autotools/20200321-GCCcore-10.2.0 x x x x x x Autotools/20180311-GCCcore-9.3.0 x x x x x x Autotools/20180311-GCCcore-8.3.0 x x x x x x Autotools/20180311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Avogadro2/", "title": "Avogadro2", "text": ""}, {"location": "available_software/detail/Avogadro2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Avogadro2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Avogadro2, load one of these modules using a module load command like:

              module load Avogadro2/1.97.0-linux-x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Avogadro2/1.97.0-linux-x86_64 x x x - x x"}, {"location": "available_software/detail/BAMSurgeon/", "title": "BAMSurgeon", "text": ""}, {"location": "available_software/detail/BAMSurgeon/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BAMSurgeon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BAMSurgeon, load one of these modules using a module load command like:

              module load BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16 - x x - x -"}, {"location": "available_software/detail/BBMap/", "title": "BBMap", "text": ""}, {"location": "available_software/detail/BBMap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BBMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BBMap, load one of these modules using a module load command like:

              module load BBMap/39.01-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BBMap/39.01-GCC-12.2.0 x x x x x x BBMap/38.98-GCC-11.2.0 x x x - x x BBMap/38.87-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/BCFtools/", "title": "BCFtools", "text": ""}, {"location": "available_software/detail/BCFtools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BCFtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BCFtools, load one of these modules using a module load command like:

              module load BCFtools/1.18-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BCFtools/1.18-GCC-12.3.0 x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x BCFtools/1.15.1-GCC-11.3.0 x x x x x x BCFtools/1.14-GCC-11.2.0 x x x x x x BCFtools/1.12-GCC-10.3.0 x x x - x x BCFtools/1.12-GCC-10.2.0 - x x - x - BCFtools/1.11-GCC-10.2.0 x x x x x x BCFtools/1.10.2-iccifort-2019.5.281 - x x - x x BCFtools/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BDBag/", "title": "BDBag", "text": ""}, {"location": "available_software/detail/BDBag/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BDBag installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BDBag, load one of these modules using a module load command like:

              module load BDBag/1.6.3-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BDBag/1.6.3-intel-2021b x x x - x x"}, {"location": "available_software/detail/BEDOPS/", "title": "BEDOPS", "text": ""}, {"location": "available_software/detail/BEDOPS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BEDOPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BEDOPS, load one of these modules using a module load command like:

              module load BEDOPS/2.4.41-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BEDOPS/2.4.41-foss-2021b x x x x x x"}, {"location": "available_software/detail/BEDTools/", "title": "BEDTools", "text": ""}, {"location": "available_software/detail/BEDTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BEDTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BEDTools, load one of these modules using a module load command like:

              module load BEDTools/2.31.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BEDTools/2.31.0-GCC-12.3.0 x x x x x x BEDTools/2.30.0-GCC-12.2.0 x x x x x x BEDTools/2.30.0-GCC-11.3.0 x x x x x x BEDTools/2.30.0-GCC-11.2.0 x x x x x x BEDTools/2.30.0-GCC-10.2.0 - x x x x x BEDTools/2.29.2-GCC-9.3.0 - x x - x x BEDTools/2.29.2-GCC-8.3.0 - x x - x x BEDTools/2.19.1-GCC-8.3.0 - - - - - x"}, {"location": "available_software/detail/BLAST%2B/", "title": "BLAST+", "text": ""}, {"location": "available_software/detail/BLAST%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BLAST+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BLAST+, load one of these modules using a module load command like:

              module load BLAST+/2.14.1-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BLAST+/2.14.1-gompi-2023a x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x BLAST+/2.13.0-gompi-2022a x x x x x x BLAST+/2.12.0-gompi-2021b x x x x x x BLAST+/2.11.0-gompi-2021a - x x x x x BLAST+/2.11.0-gompi-2020b x x x x x x BLAST+/2.10.1-iimpi-2020a - x x - x x BLAST+/2.10.1-gompi-2020a - x x - x x BLAST+/2.9.0-iimpi-2019b - x x - x x BLAST+/2.9.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/BLAT/", "title": "BLAT", "text": ""}, {"location": "available_software/detail/BLAT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BLAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BLAT, load one of these modules using a module load command like:

              module load BLAT/3.7-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BLAT/3.7-GCC-11.3.0 x x x x x x BLAT/3.5-GCC-9.3.0 - x x - x - BLAT/3.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BLIS/", "title": "BLIS", "text": ""}, {"location": "available_software/detail/BLIS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BLIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BLIS, load one of these modules using a module load command like:

              module load BLIS/0.9.0-GCC-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BLIS/0.9.0-GCC-13.2.0 x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x BLIS/0.9.0-GCC-11.3.0 x x x x x x BLIS/0.8.1-GCC-11.2.0 x x x x x x BLIS/0.8.1-GCC-10.3.0 x x x x x x BLIS/0.8.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/BRAKER/", "title": "BRAKER", "text": ""}, {"location": "available_software/detail/BRAKER/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BRAKER installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BRAKER, load one of these modules using a module load command like:

              module load BRAKER/2.1.6-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BRAKER/2.1.6-foss-2021b x x x x x x BRAKER/2.1.6-foss-2020b x x x - x x BRAKER/2.1.5-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BSMAPz/", "title": "BSMAPz", "text": ""}, {"location": "available_software/detail/BSMAPz/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BSMAPz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BSMAPz, load one of these modules using a module load command like:

              module load BSMAPz/1.1.1-intel-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BSMAPz/1.1.1-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/BSseeker2/", "title": "BSseeker2", "text": ""}, {"location": "available_software/detail/BSseeker2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BSseeker2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BSseeker2, load one of these modules using a module load command like:

              module load BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16 - x - - - - BSseeker2/2.1.8-GCC-8.3.0-Python-2.7.16 - x - - - -"}, {"location": "available_software/detail/BUSCO/", "title": "BUSCO", "text": ""}, {"location": "available_software/detail/BUSCO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BUSCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BUSCO, load one of these modules using a module load command like:

              module load BUSCO/5.4.3-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BUSCO/5.4.3-foss-2021b x x x - x x BUSCO/5.1.2-foss-2020b - x x x x - BUSCO/4.1.2-foss-2020b - x x - x x BUSCO/4.0.6-foss-2020b - x x x x x BUSCO/4.0.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BUStools/", "title": "BUStools", "text": ""}, {"location": "available_software/detail/BUStools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BUStools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BUStools, load one of these modules using a module load command like:

              module load BUStools/0.43.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BUStools/0.43.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/BWA/", "title": "BWA", "text": ""}, {"location": "available_software/detail/BWA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BWA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BWA, load one of these modules using a module load command like:

              module load BWA/0.7.17-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BWA/0.7.17-iccifort-2019.5.281 - x - - - - BWA/0.7.17-GCCcore-12.3.0 x x x x x x BWA/0.7.17-GCCcore-12.2.0 x x x x x x BWA/0.7.17-GCCcore-11.3.0 x x x x x x BWA/0.7.17-GCCcore-11.2.0 x x x x x x BWA/0.7.17-GCC-10.2.0 - x x x x x BWA/0.7.17-GCC-9.3.0 - x x - x x BWA/0.7.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BamTools/", "title": "BamTools", "text": ""}, {"location": "available_software/detail/BamTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BamTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BamTools, load one of these modules using a module load command like:

              module load BamTools/2.5.2-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BamTools/2.5.2-GCC-12.3.0 x x x x x x BamTools/2.5.2-GCC-12.2.0 x x x x x x BamTools/2.5.2-GCC-11.3.0 x x x x x x BamTools/2.5.2-GCC-11.2.0 x x x x x x BamTools/2.5.1-iccifort-2019.5.281 - x x - x x BamTools/2.5.1-GCC-10.2.0 x x x x x x BamTools/2.5.1-GCC-9.3.0 - x x - x x BamTools/2.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bambi/", "title": "Bambi", "text": ""}, {"location": "available_software/detail/Bambi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bambi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Bambi, load one of these modules using a module load command like:

              module load Bambi/0.7.1-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bambi/0.7.1-intel-2021b x x x - x x"}, {"location": "available_software/detail/Bandage/", "title": "Bandage", "text": ""}, {"location": "available_software/detail/Bandage/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bandage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Bandage, load one of these modules using a module load command like:

              module load Bandage/0.9.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bandage/0.9.0-GCCcore-11.2.0 x x x - x x Bandage/0.8.1_Centos - x x x x x"}, {"location": "available_software/detail/BatMeth2/", "title": "BatMeth2", "text": ""}, {"location": "available_software/detail/BatMeth2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BatMeth2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BatMeth2, load one of these modules using a module load command like:

              module load BatMeth2/2.1-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BatMeth2/2.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/BayeScEnv/", "title": "BayeScEnv", "text": ""}, {"location": "available_software/detail/BayeScEnv/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BayeScEnv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BayeScEnv, load one of these modules using a module load command like:

              module load BayeScEnv/1.1-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BayeScEnv/1.1-iccifort-2019.5.281 - x - - - - BayeScEnv/1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/BayeScan/", "title": "BayeScan", "text": ""}, {"location": "available_software/detail/BayeScan/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BayeScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BayeScan, load one of these modules using a module load command like:

              module load BayeScan/2.1-intel-compilers-2021.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BayeScan/2.1-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/BayesAss3-SNPs/", "title": "BayesAss3-SNPs", "text": ""}, {"location": "available_software/detail/BayesAss3-SNPs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BayesAss3-SNPs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BayesAss3-SNPs, load one of these modules using a module load command like:

              module load BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/BayesPrism/", "title": "BayesPrism", "text": ""}, {"location": "available_software/detail/BayesPrism/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BayesPrism installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BayesPrism, load one of these modules using a module load command like:

              module load BayesPrism/2.0-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BayesPrism/2.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Bazel/", "title": "Bazel", "text": ""}, {"location": "available_software/detail/Bazel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bazel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Bazel, load one of these modules using a module load command like:

              module load Bazel/6.3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bazel/6.3.1-GCCcore-12.3.0 x x x x x x Bazel/6.3.1-GCCcore-12.2.0 x x x x x x Bazel/5.1.1-GCCcore-11.3.0 x x x x x x Bazel/4.2.2-GCCcore-11.2.0 - - - x - - Bazel/3.7.2-GCCcore-11.2.0 x x x x x x Bazel/3.7.2-GCCcore-10.3.0 x x x x x x Bazel/3.7.2-GCCcore-10.2.0 x x x x x x Bazel/3.6.0-GCCcore-9.3.0 - x x - x x Bazel/3.4.1-GCCcore-8.3.0 - - x - x x Bazel/2.0.0-GCCcore-10.2.0 - x x x x x Bazel/2.0.0-GCCcore-8.3.0 - x x - x x Bazel/0.29.1-GCCcore-8.3.0 - x x - x x Bazel/0.26.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Beast/", "title": "Beast", "text": ""}, {"location": "available_software/detail/Beast/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Beast installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Beast, load one of these modules using a module load command like:

              module load Beast/2.7.3-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Beast/2.7.3-GCC-11.3.0 x x x x x x Beast/2.6.4-GCC-10.2.0 - x x - x - Beast/1.10.5pre1-GCC-11.3.0 x x x - x x Beast/1.10.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/BeautifulSoup/", "title": "BeautifulSoup", "text": ""}, {"location": "available_software/detail/BeautifulSoup/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BeautifulSoup, load one of these modules using a module load command like:

              module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x BeautifulSoup/4.11.1-GCCcore-12.2.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.3.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.2.0 x x x - x x BeautifulSoup/4.10.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/BerkeleyGW/", "title": "BerkeleyGW", "text": ""}, {"location": "available_software/detail/BerkeleyGW/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BerkeleyGW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BerkeleyGW, load one of these modules using a module load command like:

              module load BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4 - x x - x x BerkeleyGW/2.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BiG-SCAPE/", "title": "BiG-SCAPE", "text": ""}, {"location": "available_software/detail/BiG-SCAPE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BiG-SCAPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BiG-SCAPE, load one of these modules using a module load command like:

              module load BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BigDFT/", "title": "BigDFT", "text": ""}, {"location": "available_software/detail/BigDFT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BigDFT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BigDFT, load one of these modules using a module load command like:

              module load BigDFT/1.9.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BigDFT/1.9.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/BinSanity/", "title": "BinSanity", "text": ""}, {"location": "available_software/detail/BinSanity/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BinSanity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BinSanity, load one of these modules using a module load command like:

              module load BinSanity/0.3.5-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BinSanity/0.3.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Bio-DB-HTS/", "title": "Bio-DB-HTS", "text": ""}, {"location": "available_software/detail/Bio-DB-HTS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bio-DB-HTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Bio-DB-HTS, load one of these modules using a module load command like:

              module load Bio-DB-HTS/3.01-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bio-DB-HTS/3.01-GCC-11.3.0 x x x - x x Bio-DB-HTS/3.01-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Bio-EUtilities/", "title": "Bio-EUtilities", "text": ""}, {"location": "available_software/detail/Bio-EUtilities/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bio-EUtilities installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Bio-EUtilities, load one of these modules using a module load command like:

              module load Bio-EUtilities/1.76-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bio-EUtilities/1.76-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bio-SearchIO-hmmer/", "title": "Bio-SearchIO-hmmer", "text": ""}, {"location": "available_software/detail/Bio-SearchIO-hmmer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bio-SearchIO-hmmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

              module load Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/BioPerl/", "title": "BioPerl", "text": ""}, {"location": "available_software/detail/BioPerl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which BioPerl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using BioPerl, load one of these modules using a module load command like:

              module load BioPerl/1.7.8-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty BioPerl/1.7.8-GCCcore-11.3.0 x x x x x x BioPerl/1.7.8-GCCcore-11.2.0 x x x x x x BioPerl/1.7.8-GCCcore-10.2.0 - x x x x x BioPerl/1.7.7-GCCcore-9.3.0 - x x - x x BioPerl/1.7.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Biopython/", "title": "Biopython", "text": ""}, {"location": "available_software/detail/Biopython/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Biopython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Biopython, load one of these modules using a module load command like:

              module load Biopython/1.83-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Biopython/1.83-foss-2023a x x x x x x Biopython/1.81-foss-2022b x x x x x x Biopython/1.79-foss-2022a x x x x x x Biopython/1.79-foss-2021b x x x x x x Biopython/1.79-foss-2021a x x x x x x Biopython/1.78-intel-2020b - x x - x x Biopython/1.78-intel-2020a-Python-3.8.2 - x x - x x Biopython/1.78-fosscuda-2020b x - - - x - Biopython/1.78-foss-2020b x x x x x x Biopython/1.78-foss-2020a-Python-3.8.2 - x x - x x Biopython/1.76-foss-2021b-Python-2.7.18 x x x x x x Biopython/1.76-foss-2020b-Python-2.7.18 - x x x x x Biopython/1.75-intel-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Bismark/", "title": "Bismark", "text": ""}, {"location": "available_software/detail/Bismark/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bismark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bismark, load one of these modules using a module load command like:

              module load Bismark/0.23.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bismark/0.23.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/Bison/", "title": "Bison", "text": ""}, {"location": "available_software/detail/Bison/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bison installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bison, load one of these modules using a module load command like:

              module load Bison/3.8.2-GCCcore-13.2.0\n
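
              As an illustration (not part of the generated overview), generating a parser from a grammar could look like the sketch below; parser.y is a placeholder grammar file:

              module load Bison/3.8.2-GCCcore-13.2.0
              bison -d parser.y    # writes parser.tab.c; -d also writes the token definitions to parser.tab.h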

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bison/3.8.2-GCCcore-13.2.0 x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x Bison/3.8.2-GCCcore-11.3.0 x x x x x x Bison/3.8.2 x x x x x x Bison/3.7.6-GCCcore-11.2.0 x x x x x x Bison/3.7.6-GCCcore-10.3.0 x x x x x x Bison/3.7.6 x x x - x - Bison/3.7.1-GCCcore-10.2.0 x x x x x x Bison/3.7.1 x x x - x - Bison/3.5.3-GCCcore-9.3.0 x x x x x x Bison/3.5.3 x x x - x - Bison/3.3.2-GCCcore-8.3.0 x x x x x x Bison/3.3.2 x x x x x x Bison/3.0.5-GCCcore-8.2.0 - x - - - - Bison/3.0.5 - x - - - x Bison/3.0.4 x x x x x x"}, {"location": "available_software/detail/Blender/", "title": "Blender", "text": ""}, {"location": "available_software/detail/Blender/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Blender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Blender, load one of these modules using a module load command like:

              module load Blender/3.5.0-linux-x86_64-CUDA-11.7.0\n
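
              As an illustrative sketch of headless use on a compute node (scene.blend and the output prefix are placeholders, not files provided by the module):

              module load Blender/3.5.0-linux-x86_64-CUDA-11.7.0
              blender -b scene.blend -o //render_ -f 1    # render frame 1 without starting the GUI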

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Blender/3.5.0-linux-x86_64-CUDA-11.7.0 x x x x x x Blender/3.3.1-linux-x86_64-CUDA-11.7.0 x - - - x - Blender/3.3.1-linux-x86_64 x x x - x x Blender/2.81-intel-2019b-Python-3.7.4 - x x - x x Blender/2.81-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Block/", "title": "Block", "text": ""}, {"location": "available_software/detail/Block/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Block installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Block, load one of these modules using a module load command like:

              module load Block/1.5.3-20200525-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Block/1.5.3-20200525-foss-2022b x x x x x x Block/1.5.3-20200525-foss-2022a - x x x x x"}, {"location": "available_software/detail/Blosc/", "title": "Blosc", "text": ""}, {"location": "available_software/detail/Blosc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Blosc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Blosc, load one of these modules using a module load command like:

              module load Blosc/1.21.3-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Blosc/1.21.3-GCCcore-11.3.0 x x x x x x Blosc/1.21.1-GCCcore-11.2.0 x x x x x x Blosc/1.21.0-GCCcore-10.3.0 x x x x x x Blosc/1.21.0-GCCcore-10.2.0 - x x x x x Blosc/1.17.1-GCCcore-9.3.0 x x x x x x Blosc/1.17.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Blosc2/", "title": "Blosc2", "text": ""}, {"location": "available_software/detail/Blosc2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Blosc2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Blosc2, load one of these modules using a module load command like:

              module load Blosc2/2.6.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Blosc2/2.6.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Bonito/", "title": "Bonito", "text": ""}, {"location": "available_software/detail/Bonito/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bonito installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bonito, load one of these modules using a module load command like:

              module load Bonito/0.4.0-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bonito/0.4.0-fosscuda-2020b - - - - x - Bonito/0.3.8-fosscuda-2020b - - - - x - Bonito/0.1.0-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/Bonnie%2B%2B/", "title": "Bonnie++", "text": ""}, {"location": "available_software/detail/Bonnie%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bonnie++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bonnie++, load one of these modules using a module load command like:

              module load Bonnie++/2.00a-GCC-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bonnie++/2.00a-GCC-10.3.0 - x - - - -"}, {"location": "available_software/detail/Boost.MPI/", "title": "Boost.MPI", "text": ""}, {"location": "available_software/detail/Boost.MPI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Boost.MPI, load one of these modules using a module load command like:

              module load Boost.MPI/1.81.0-gompi-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Boost.MPI/1.81.0-gompi-2022b x x x x x x Boost.MPI/1.79.0-gompi-2022a - x x x x x Boost.MPI/1.77.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Boost.Python-NumPy/", "title": "Boost.Python-NumPy", "text": ""}, {"location": "available_software/detail/Boost.Python-NumPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Boost.Python-NumPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Boost.Python-NumPy, load one of these modules using a module load command like:

              module load Boost.Python-NumPy/1.79.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Boost.Python-NumPy/1.79.0-foss-2022a - - x - x -"}, {"location": "available_software/detail/Boost.Python/", "title": "Boost.Python", "text": ""}, {"location": "available_software/detail/Boost.Python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Boost.Python, load one of these modules using a module load command like:

              module load Boost.Python/1.79.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Boost.Python/1.79.0-GCC-11.3.0 x x x x x x Boost.Python/1.77.0-GCC-11.2.0 x x x - x x Boost.Python/1.72.0-iimpi-2020a - x x - x x Boost.Python/1.71.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Boost/", "title": "Boost", "text": ""}, {"location": "available_software/detail/Boost/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Boost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Boost, load one of these modules using a module load command like:

              module load Boost/1.82.0-GCC-12.3.0\n
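
              As an illustration, compiling your own C++ code against Boost could look like the sketch below; main.cpp is a placeholder source file, and this assumes the module puts the Boost headers and libraries on the usual compiler search paths (the normal behaviour of these modules):

              module load Boost/1.82.0-GCC-12.3.0
              g++ -O2 main.cpp -o main -lboost_filesystem    # link against the Boost.Filesystem library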

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Boost/1.82.0-GCC-12.3.0 x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x Boost/1.79.0-GCC-11.3.0 x x x x x x Boost/1.79.0-GCC-11.2.0 x x x x x x Boost/1.77.0-intel-compilers-2021.4.0 x x x x x x Boost/1.77.0-GCC-11.2.0 x x x x x x Boost/1.76.0-intel-compilers-2021.2.0 - x x - x x Boost/1.76.0-GCC-10.3.0 x x x x x x Boost/1.75.0-GCC-11.2.0 x x x x x x Boost/1.74.0-iccifort-2020.4.304 - x x x x x Boost/1.74.0-GCC-10.2.0 x x x x x x Boost/1.72.0-iompi-2020a - x - - - - Boost/1.72.0-iimpi-2020a x x x x x x Boost/1.72.0-gompi-2020a - x x - x x Boost/1.71.0-iimpi-2019b - x x - x x Boost/1.71.0-gompi-2019b x x x - x x"}, {"location": "available_software/detail/Bottleneck/", "title": "Bottleneck", "text": ""}, {"location": "available_software/detail/Bottleneck/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bottleneck installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bottleneck, load one of these modules using a module load command like:

              module load Bottleneck/1.3.2-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bottleneck/1.3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Bowtie/", "title": "Bowtie", "text": ""}, {"location": "available_software/detail/Bowtie/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bowtie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bowtie, load one of these modules using a module load command like:

              module load Bowtie/1.3.1-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bowtie/1.3.1-GCC-11.3.0 x x x x x x Bowtie/1.3.1-GCC-11.2.0 x x x x x x Bowtie/1.3.0-GCC-10.2.0 - x x - x - Bowtie/1.2.3-iccifort-2019.5.281 - x - - - - Bowtie/1.2.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bowtie2/", "title": "Bowtie2", "text": ""}, {"location": "available_software/detail/Bowtie2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bowtie2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bowtie2, load one of these modules using a module load command like:

              module load Bowtie2/2.4.5-GCC-11.3.0\n
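
              As an illustrative sketch (reference.fa and reads.fq are placeholder input files), building an index and aligning single-end reads could look like:

              module load Bowtie2/2.4.5-GCC-11.3.0
              bowtie2-build reference.fa ref_index              # build the index once
              bowtie2 -x ref_index -U reads.fq -S aligned.sam   # align reads and write SAM output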

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bowtie2/2.4.5-GCC-11.3.0 x x x x x x Bowtie2/2.4.4-GCC-11.2.0 x x x - x x Bowtie2/2.4.2-GCC-10.2.0 - x x x x x Bowtie2/2.4.1-GCC-9.3.0 - x x - x x Bowtie2/2.3.5.1-iccifort-2019.5.281 - x - - - - Bowtie2/2.3.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bracken/", "title": "Bracken", "text": ""}, {"location": "available_software/detail/Bracken/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Bracken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Bracken, load one of these modules using a module load command like:

              module load Bracken/2.9-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Bracken/2.9-GCCcore-10.3.0 x x x x x x Bracken/2.7-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Brotli-python/", "title": "Brotli-python", "text": ""}, {"location": "available_software/detail/Brotli-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Brotli-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Brotli-python, load one of these modules using a module load command like:

              module load Brotli-python/1.0.9-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Brotli-python/1.0.9-GCCcore-11.3.0 x x x x x x Brotli-python/1.0.9-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Brotli/", "title": "Brotli", "text": ""}, {"location": "available_software/detail/Brotli/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Brotli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Brotli, load one of these modules using a module load command like:

              module load Brotli/1.1.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Brotli/1.1.0-GCCcore-13.2.0 x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x Brotli/1.0.9-GCCcore-11.3.0 x x x x x x Brotli/1.0.9-GCCcore-11.2.0 x x x x x x Brotli/1.0.9-GCCcore-10.3.0 x x x x x x Brotli/1.0.9-GCCcore-10.2.0 x - x x x x"}, {"location": "available_software/detail/Brunsli/", "title": "Brunsli", "text": ""}, {"location": "available_software/detail/Brunsli/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Brunsli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Brunsli, load one of these modules using a module load command like:

              module load Brunsli/0.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Brunsli/0.1-GCCcore-12.3.0 x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x Brunsli/0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CASPR/", "title": "CASPR", "text": ""}, {"location": "available_software/detail/CASPR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CASPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CASPR, load one of these modules using a module load command like:

              module load CASPR/20200730-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CASPR/20200730-foss-2022a x x x x x x"}, {"location": "available_software/detail/CCL/", "title": "CCL", "text": ""}, {"location": "available_software/detail/CCL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CCL, load one of these modules using a module load command like:

              module load CCL/1.12.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CCL/1.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/CD-HIT/", "title": "CD-HIT", "text": ""}, {"location": "available_software/detail/CD-HIT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CD-HIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CD-HIT, load one of these modules using a module load command like:

              module load CD-HIT/4.8.1-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CD-HIT/4.8.1-iccifort-2019.5.281 - x x - x x CD-HIT/4.8.1-GCC-12.2.0 x x x x x x CD-HIT/4.8.1-GCC-11.2.0 x x x - x x CD-HIT/4.8.1-GCC-10.2.0 - x x x x x CD-HIT/4.8.1-GCC-9.3.0 - x x - x x CD-HIT/4.8.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/CDAT/", "title": "CDAT", "text": ""}, {"location": "available_software/detail/CDAT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CDAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CDAT, load one of these modules using a module load command like:

              module load CDAT/8.2.1-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CDAT/8.2.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/CDBtools/", "title": "CDBtools", "text": ""}, {"location": "available_software/detail/CDBtools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CDBtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CDBtools, load one of these modules using a module load command like:

              module load CDBtools/0.99-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CDBtools/0.99-GCC-10.2.0 x x x - x x"}, {"location": "available_software/detail/CDO/", "title": "CDO", "text": ""}, {"location": "available_software/detail/CDO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CDO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CDO, load one of these modules using a module load command like:

              module load CDO/2.0.5-gompi-2021b\n
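
              As an illustration (input.nc is a placeholder NetCDF file), inspecting a file and computing yearly means could look like:

              module load CDO/2.0.5-gompi-2021b
              cdo sinfo input.nc                  # summary of variables, grid and time axis
              cdo yearmean input.nc yearmean.nc   # write yearly means to a new file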

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CDO/2.0.5-gompi-2021b x x x x x x CDO/1.9.10-gompi-2021a x x x - x x CDO/1.9.8-intel-2019b - x x - x x"}, {"location": "available_software/detail/CENSO/", "title": "CENSO", "text": ""}, {"location": "available_software/detail/CENSO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CENSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CENSO, load one of these modules using a module load command like:

              module load CENSO/1.2.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CENSO/1.2.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/CESM-deps/", "title": "CESM-deps", "text": ""}, {"location": "available_software/detail/CESM-deps/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CESM-deps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CESM-deps, load one of these modules using a module load command like:

              module load CESM-deps/2-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CESM-deps/2-foss-2021b x x x - x x"}, {"location": "available_software/detail/CFDEMcoupling/", "title": "CFDEMcoupling", "text": ""}, {"location": "available_software/detail/CFDEMcoupling/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CFDEMcoupling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CFDEMcoupling, load one of these modules using a module load command like:

              module load CFDEMcoupling/3.8.0-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CFDEMcoupling/3.8.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/CFITSIO/", "title": "CFITSIO", "text": ""}, {"location": "available_software/detail/CFITSIO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CFITSIO, load one of these modules using a module load command like:

              module load CFITSIO/4.3.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x CFITSIO/4.2.0-GCCcore-11.3.0 x x x x x x CFITSIO/4.1.0-GCCcore-11.3.0 x x x x x x CFITSIO/3.49-GCCcore-11.2.0 x x x x x x CFITSIO/3.47-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CGAL/", "title": "CGAL", "text": ""}, {"location": "available_software/detail/CGAL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CGAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CGAL, load one of these modules using a module load command like:

              module load CGAL/5.6-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CGAL/5.6-GCCcore-12.3.0 x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x CGAL/5.2-iimpi-2020b - x - - - - CGAL/5.2-gompi-2020b x x x x x x CGAL/4.14.3-iimpi-2021a - x x - x x CGAL/4.14.3-gompi-2022a x x x x x x CGAL/4.14.3-gompi-2021b x x x x x x CGAL/4.14.3-gompi-2021a x x x x x x CGAL/4.14.3-gompi-2020a-Python-3.8.2 - x x - x x CGAL/4.14.1-foss-2019b-Python-3.7.4 x x x - x x CGAL/4.14.1-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/CGmapTools/", "title": "CGmapTools", "text": ""}, {"location": "available_software/detail/CGmapTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CGmapTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CGmapTools, load one of these modules using a module load command like:

              module load CGmapTools/0.1.2-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CGmapTools/0.1.2-intel-2019b - x x - x x"}, {"location": "available_software/detail/CIRCexplorer2/", "title": "CIRCexplorer2", "text": ""}, {"location": "available_software/detail/CIRCexplorer2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CIRCexplorer2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CIRCexplorer2, load one of these modules using a module load command like:

              module load CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18 x x x x x x CIRCexplorer2/2.3.8-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CIRI-long/", "title": "CIRI-long", "text": ""}, {"location": "available_software/detail/CIRI-long/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CIRI-long installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CIRI-long, load one of these modules using a module load command like:

              module load CIRI-long/1.0.2-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CIRI-long/1.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/CIRIquant/", "title": "CIRIquant", "text": ""}, {"location": "available_software/detail/CIRIquant/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CIRIquant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CIRIquant, load one of these modules using a module load command like:

              module load CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CITE-seq-Count/", "title": "CITE-seq-Count", "text": ""}, {"location": "available_software/detail/CITE-seq-Count/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CITE-seq-Count installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CITE-seq-Count, load one of these modules using a module load command like:

              module load CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/CLEAR/", "title": "CLEAR", "text": ""}, {"location": "available_software/detail/CLEAR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CLEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CLEAR, load one of these modules using a module load command like:

              module load CLEAR/20210117-foss-2021b-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CLEAR/20210117-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CLHEP/", "title": "CLHEP", "text": ""}, {"location": "available_software/detail/CLHEP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CLHEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CLHEP, load one of these modules using a module load command like:

              module load CLHEP/2.4.6.4-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CLHEP/2.4.6.4-GCC-12.2.0 x x x x x x CLHEP/2.4.5.3-GCC-11.3.0 x x x x x x CLHEP/2.4.5.1-GCC-11.2.0 x x x x x x CLHEP/2.4.4.0-GCC-11.2.0 x x x x x x CLHEP/2.4.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/CMAverse/", "title": "CMAverse", "text": ""}, {"location": "available_software/detail/CMAverse/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CMAverse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CMAverse, load one of these modules using a module load command like:

              module load CMAverse/20220112-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CMAverse/20220112-foss-2021b x x x - x x"}, {"location": "available_software/detail/CMSeq/", "title": "CMSeq", "text": ""}, {"location": "available_software/detail/CMSeq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CMSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CMSeq, load one of these modules using a module load command like:

              module load CMSeq/1.0.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CMSeq/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CMake/", "title": "CMake", "text": ""}, {"location": "available_software/detail/CMake/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CMake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CMake, load one of these modules using a module load command like:

              module load CMake/3.27.6-GCCcore-13.2.0\n
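
              As an illustrative sketch, configuring and building a CMake-based project from its source directory could look like:

              module load CMake/3.27.6-GCCcore-13.2.0
              cmake -S . -B build        # configure into a separate ./build directory
              cmake --build build -j 4   # compile with 4 parallel jobs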

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CMake/3.27.6-GCCcore-13.2.0 x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x CMake/3.24.3-GCCcore-11.3.0 x x x x x x CMake/3.23.1-GCCcore-11.3.0 x x x x x x CMake/3.22.1-GCCcore-11.2.0 x x x x x x CMake/3.21.1-GCCcore-11.2.0 x x x x x x CMake/3.20.1-GCCcore-10.3.0 x x x x x x CMake/3.20.1-GCCcore-10.2.0 x - - - - - CMake/3.18.4-GCCcore-10.2.0 x x x x x x CMake/3.16.4-GCCcore-9.3.0 x x x x x x CMake/3.15.3-GCCcore-8.3.0 x x x x x x CMake/3.13.3-GCCcore-8.2.0 - x - - - - CMake/3.12.1 x x x x x x CMake/3.11.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/COLMAP/", "title": "COLMAP", "text": ""}, {"location": "available_software/detail/COLMAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which COLMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using COLMAP, load one of these modules using a module load command like:

              module load COLMAP/3.8-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty COLMAP/3.8-foss-2022b x x x x x x"}, {"location": "available_software/detail/CONCOCT/", "title": "CONCOCT", "text": ""}, {"location": "available_software/detail/CONCOCT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CONCOCT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CONCOCT, load one of these modules using a module load command like:

              module load CONCOCT/1.1.0-foss-2020b-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CONCOCT/1.1.0-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CP2K/", "title": "CP2K", "text": ""}, {"location": "available_software/detail/CP2K/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CP2K installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CP2K, load one of these modules using a module load command like:

              module load CP2K/2023.1-foss-2023a\n
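
              As an illustrative sketch (water.inp is a placeholder input file, and the exact binary name, e.g. cp2k.popt or cp2k.psmp, depends on how the loaded version was built), an MPI run inside a job could look like:

              module load CP2K/2023.1-foss-2023a
              mpirun cp2k.popt -i water.inp -o water.out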

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CP2K/2023.1-foss-2023a x x x x x x CP2K/2023.1-foss-2022b x x x x x x CP2K/2022.1-foss-2022a x x x x x x CP2K/9.1-foss-2022a x x x x x x CP2K/8.2-foss-2021a - x x x x - CP2K/8.1-foss-2020b - x x x x - CP2K/7.1-intel-2020a - x x - x x CP2K/7.1-foss-2020a - x x - x x CP2K/6.1-intel-2020a - x x - x x CP2K/5.1-iomkl-2020a - x - - - - CP2K/5.1-intel-2020a-O1 - x - - - - CP2K/5.1-intel-2020a - x x - x x CP2K/5.1-intel-2019b - x - - - - CP2K/5.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/CPC2/", "title": "CPC2", "text": ""}, {"location": "available_software/detail/CPC2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CPC2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CPC2, load one of these modules using a module load command like:

              module load CPC2/1.0.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CPC2/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CPLEX/", "title": "CPLEX", "text": ""}, {"location": "available_software/detail/CPLEX/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CPLEX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CPLEX, load one of these modules using a module load command like:

              module load CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4 x x x x x x"}, {"location": "available_software/detail/CPPE/", "title": "CPPE", "text": ""}, {"location": "available_software/detail/CPPE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CPPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CPPE, load one of these modules using a module load command like:

              module load CPPE/0.3.1-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CPPE/0.3.1-GCC-12.2.0 x x x x x x CPPE/0.3.1-GCC-11.3.0 - x x x x x"}, {"location": "available_software/detail/CREST/", "title": "CREST", "text": ""}, {"location": "available_software/detail/CREST/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CREST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CREST, load one of these modules using a module load command like:

              module load CREST/2.12-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CREST/2.12-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CRISPR-DAV/", "title": "CRISPR-DAV", "text": ""}, {"location": "available_software/detail/CRISPR-DAV/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CRISPR-DAV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CRISPR-DAV, load one of these modules using a module load command like:

              module load CRISPR-DAV/2.3.4-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CRISPR-DAV/2.3.4-foss-2020b - x x x x -"}, {"location": "available_software/detail/CRISPResso2/", "title": "CRISPResso2", "text": ""}, {"location": "available_software/detail/CRISPResso2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CRISPResso2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CRISPResso2, load one of these modules using a module load command like:

              module load CRISPResso2/2.2.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CRISPResso2/2.2.1-foss-2020b - x x x x x CRISPResso2/2.1.2-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CRYSTAL17/", "title": "CRYSTAL17", "text": ""}, {"location": "available_software/detail/CRYSTAL17/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CRYSTAL17 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CRYSTAL17, load one of these modules using a module load command like:

              module load CRYSTAL17/1.0.2-intel-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CRYSTAL17/1.0.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/CSBDeep/", "title": "CSBDeep", "text": ""}, {"location": "available_software/detail/CSBDeep/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CSBDeep installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CSBDeep, load one of these modules using a module load command like:

              module load CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0 x - - - x - CSBDeep/0.7.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CUDA/", "title": "CUDA", "text": ""}, {"location": "available_software/detail/CUDA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CUDA, load one of these modules using a module load command like:

              module load CUDA/12.1.1\n
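
              As a quick illustrative check (vecadd.cu is a placeholder source file; note that running the resulting GPU code still requires a node with a GPU), you could verify the compiler provided by the module and build a kernel:

              module load CUDA/12.1.1
              nvcc --version                 # show the CUDA toolkit version
              nvcc -O2 vecadd.cu -o vecadd   # compile a CUDA source file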

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CUDA/12.1.1 x - x - x - CUDA/11.7.0 x x x x x x CUDA/11.4.1 x - - - x - CUDA/11.3.1 x x x - x x CUDA/11.1.1-iccifort-2020.4.304 - - - - x - CUDA/11.1.1-GCC-10.2.0 x x x x x x CUDA/11.0.2-iccifort-2020.1.217 - - - - x - CUDA/10.1.243-iccifort-2019.5.281 - - - - x - CUDA/10.1.243-GCC-8.3.0 x - - - x -"}, {"location": "available_software/detail/CUDAcore/", "title": "CUDAcore", "text": ""}, {"location": "available_software/detail/CUDAcore/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CUDAcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CUDAcore, load one of these modules using a module load command like:

              module load CUDAcore/11.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CUDAcore/11.2.1 x - x - x - CUDAcore/11.1.1 x x x x x x CUDAcore/11.0.2 - - - - x -"}, {"location": "available_software/detail/CUnit/", "title": "CUnit", "text": ""}, {"location": "available_software/detail/CUnit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CUnit, load one of these modules using a module load command like:

              module load CUnit/2.1-3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CUnit/2.1-3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/CVXOPT/", "title": "CVXOPT", "text": ""}, {"location": "available_software/detail/CVXOPT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CVXOPT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CVXOPT, load one of these modules using a module load command like:

              module load CVXOPT/1.3.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CVXOPT/1.3.1-foss-2022a x x x x x x CVXOPT/1.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Calib/", "title": "Calib", "text": ""}, {"location": "available_software/detail/Calib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Calib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Calib, load one of these modules using a module load command like:

              module load Calib/0.3.4-GCC-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Calib/0.3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/Cantera/", "title": "Cantera", "text": ""}, {"location": "available_software/detail/Cantera/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cantera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cantera, load one of these modules using a module load command like:

              module load Cantera/3.0.0-foss-2023a\n
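
              As an illustrative check, assuming the module provides Cantera's Python interface (typical for these builds), loading the bundled GRI-Mech 3.0 mechanism could look like:

              module load Cantera/3.0.0-foss-2023a
              python -c "import cantera as ct; gas = ct.Solution('gri30.yaml'); print(gas.n_species)"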

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cantera/3.0.0-foss-2023a x x x x x x Cantera/2.6.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/CapnProto/", "title": "CapnProto", "text": ""}, {"location": "available_software/detail/CapnProto/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CapnProto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CapnProto, load one of these modules using a module load command like:

              module load CapnProto/1.0.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x CapnProto/0.9.1-GCCcore-11.2.0 x x x - x x CapnProto/0.8.0-GCCcore-9.3.0 - x x x - x"}, {"location": "available_software/detail/Cartopy/", "title": "Cartopy", "text": ""}, {"location": "available_software/detail/Cartopy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cartopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cartopy, load one of these modules using a module load command like:

              module load Cartopy/0.22.0-foss-2023a\n
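
              As a quick illustrative check (only a sketch) that the module and its map projections import correctly:

              module load Cartopy/0.22.0-foss-2023a
              python -c "import cartopy.crs as ccrs; print(ccrs.PlateCarree())"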

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cartopy/0.22.0-foss-2023a x x x x x x Cartopy/0.20.3-foss-2022a x x x x x x Cartopy/0.20.3-foss-2021b x x x x x x Cartopy/0.19.0.post1-intel-2020b - x x - x x Cartopy/0.19.0.post1-foss-2020b - x x x x x Cartopy/0.18.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Casanovo/", "title": "Casanovo", "text": ""}, {"location": "available_software/detail/Casanovo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Casanovo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Casanovo, load one of these modules using a module load command like:

              module load Casanovo/3.3.0-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Casanovo/3.3.0-foss-2022a-CUDA-11.7.0 x - - - x - Casanovo/3.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CatBoost/", "title": "CatBoost", "text": ""}, {"location": "available_software/detail/CatBoost/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CatBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CatBoost, load one of these modules using a module load command like:

              module load CatBoost/1.2-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CatBoost/1.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CatLearn/", "title": "CatLearn", "text": ""}, {"location": "available_software/detail/CatLearn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CatLearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CatLearn, load one of these modules using a module load command like:

              module load CatLearn/0.6.2-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CatLearn/0.6.2-intel-2022a x x x x x x"}, {"location": "available_software/detail/CatMAP/", "title": "CatMAP", "text": ""}, {"location": "available_software/detail/CatMAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CatMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CatMAP, load one of these modules using a module load command like:

              module load CatMAP/20220519-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CatMAP/20220519-foss-2022a x x x x x x"}, {"location": "available_software/detail/Catch2/", "title": "Catch2", "text": ""}, {"location": "available_software/detail/Catch2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Catch2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Catch2, load one of these modules using a module load command like:

              module load Catch2/2.13.9-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Catch2/2.13.9-GCCcore-13.2.0 x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Cbc/", "title": "Cbc", "text": ""}, {"location": "available_software/detail/Cbc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cbc, load one of these modules using a module load command like:

              module load Cbc/2.10.11-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cbc/2.10.11-foss-2023a x x x x x x Cbc/2.10.5-foss-2022b x x x x x x"}, {"location": "available_software/detail/CellBender/", "title": "CellBender", "text": ""}, {"location": "available_software/detail/CellBender/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CellBender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CellBender, load one of these modules using a module load command like:

              module load CellBender/0.3.1-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CellBender/0.3.1-foss-2022a-CUDA-11.7.0 x - x - x - CellBender/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellOracle/", "title": "CellOracle", "text": ""}, {"location": "available_software/detail/CellOracle/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CellOracle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CellOracle, load one of these modules using a module load command like:

              module load CellOracle/0.12.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CellOracle/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellProfiler/", "title": "CellProfiler", "text": ""}, {"location": "available_software/detail/CellProfiler/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CellProfiler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CellProfiler, load one of these modules using a module load command like:

              module load CellProfiler/4.2.4-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CellProfiler/4.2.4-foss-2021a x x x - x x"}, {"location": "available_software/detail/CellRanger-ATAC/", "title": "CellRanger-ATAC", "text": ""}, {"location": "available_software/detail/CellRanger-ATAC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CellRanger-ATAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CellRanger-ATAC, load one of these modules using a module load command like:

              module load CellRanger-ATAC/2.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CellRanger-ATAC/2.1.0 x x x x x x CellRanger-ATAC/2.0.0 - x x - x -"}, {"location": "available_software/detail/CellRanger/", "title": "CellRanger", "text": ""}, {"location": "available_software/detail/CellRanger/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CellRanger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CellRanger, load one of these modules using a module load command like:

              module load CellRanger/7.0.0\n
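
              As an illustrative sketch (the --id value and both paths are placeholders to be replaced with your own), a basic gene-expression count run inside a job could look like:

              module load CellRanger/7.0.0
              cellranger count --id=run1 \
                  --transcriptome=/path/to/refdata-gex \
                  --fastqs=/path/to/fastqs \
                  --localcores=16 --localmem=64    # cap CellRanger to the resources requested for the job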

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CellRanger/7.0.0 - x x x x x CellRanger/6.1.2 - x x - x x CellRanger/6.0.1 - x x - x - CellRanger/4.0.0 - - x - x - CellRanger/3.1.0 - - x - x -"}, {"location": "available_software/detail/CellRank/", "title": "CellRank", "text": ""}, {"location": "available_software/detail/CellRank/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CellRank installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CellRank, load one of these modules using a module load command like:

              module load CellRank/2.0.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CellRank/2.0.2-foss-2022a x x x x x x CellRank/1.4.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/CellTypist/", "title": "CellTypist", "text": ""}, {"location": "available_software/detail/CellTypist/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CellTypist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CellTypist, load one of these modules using a module load command like:

              module load CellTypist/1.6.2-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CellTypist/1.6.2-foss-2023a x x x x x x CellTypist/1.0.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Cellpose/", "title": "Cellpose", "text": ""}, {"location": "available_software/detail/Cellpose/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cellpose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cellpose, load one of these modules using a module load command like:

              module load Cellpose/2.2.2-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cellpose/2.2.2-foss-2022a-CUDA-11.7.0 x - - - x - Cellpose/2.2.2-foss-2022a x - x x x x"}, {"location": "available_software/detail/Centrifuge/", "title": "Centrifuge", "text": ""}, {"location": "available_software/detail/Centrifuge/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Centrifuge installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Centrifuge, load one of these modules using a module load command like:

              module load Centrifuge/1.0.4-beta-gompi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Centrifuge/1.0.4-beta-gompi-2020a - x x - x x"}, {"location": "available_software/detail/Cereal/", "title": "Cereal", "text": ""}, {"location": "available_software/detail/Cereal/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cereal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cereal, load one of these modules using a module load command like:

              module load Cereal/1.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cereal/1.3.0 x x x x x x"}, {"location": "available_software/detail/Ceres-Solver/", "title": "Ceres-Solver", "text": ""}, {"location": "available_software/detail/Ceres-Solver/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Ceres-Solver installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Ceres-Solver, load one of these modules using a module load command like:

              module load Ceres-Solver/2.2.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Ceres-Solver/2.2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Cgl/", "title": "Cgl", "text": ""}, {"location": "available_software/detail/Cgl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cgl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cgl, load one of these modules using a module load command like:

              module load Cgl/0.60.8-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cgl/0.60.8-foss-2023a x x x x x x Cgl/0.60.7-foss-2022b x x x x x x"}, {"location": "available_software/detail/CharLS/", "title": "CharLS", "text": ""}, {"location": "available_software/detail/CharLS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CharLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CharLS, load one of these modules using a module load command like:

              module load CharLS/2.4.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CharLS/2.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CheMPS2/", "title": "CheMPS2", "text": ""}, {"location": "available_software/detail/CheMPS2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CheMPS2, load one of these modules using a module load command like:

              module load CheMPS2/1.8.12-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CheMPS2/1.8.12-foss-2022b x x x x x x CheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/Check/", "title": "Check", "text": ""}, {"location": "available_software/detail/Check/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Check installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Check, load one of these modules using a module load command like:

              module load Check/0.15.2-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Check/0.15.2-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/CheckM/", "title": "CheckM", "text": ""}, {"location": "available_software/detail/CheckM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CheckM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CheckM, load one of these modules using a module load command like:

              module load CheckM/1.1.3-intel-2020a-Python-3.8.2\n
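
              As a hedged sketch (following the upstream CheckM documentation rather than this page), the lineage workflow scores a directory of genome bins and writes its report to an output directory; the directory names and the -x fa extension below are placeholders.

              module load CheckM/1.1.3-intel-2020a-Python-3.8.2\n# bins/ holds genome bins as .fa files; results go to checkm_output/ (placeholder names)\ncheckm lineage_wf -x fa bins/ checkm_output/\n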

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CheckM/1.1.3-intel-2020a-Python-3.8.2 - x x - x x CheckM/1.1.3-foss-2021b x x x - x x CheckM/1.1.2-intel-2019b-Python-3.7.4 - x x - x x CheckM/1.1.2-foss-2019b-Python-3.7.4 - x x - x x CheckM/1.0.18-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/Chimera/", "title": "Chimera", "text": ""}, {"location": "available_software/detail/Chimera/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Chimera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Chimera, load one of these modules using a module load command like:

              module load Chimera/1.16-linux_x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Chimera/1.16-linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Circlator/", "title": "Circlator", "text": ""}, {"location": "available_software/detail/Circlator/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Circlator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Circlator, load one of these modules using a module load command like:

              module load Circlator/1.5.5-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Circlator/1.5.5-foss-2023a x x x x x x"}, {"location": "available_software/detail/Circuitscape/", "title": "Circuitscape", "text": ""}, {"location": "available_software/detail/Circuitscape/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Circuitscape installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Circuitscape, load one of these modules using a module load command like:

              module load Circuitscape/5.12.3-Julia-1.7.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Circuitscape/5.12.3-Julia-1.7.2 x x x x x x"}, {"location": "available_software/detail/Clair3/", "title": "Clair3", "text": ""}, {"location": "available_software/detail/Clair3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Clair3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Clair3, load one of these modules using a module load command like:

              module load Clair3/1.0.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Clair3/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/Clang/", "title": "Clang", "text": ""}, {"location": "available_software/detail/Clang/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Clang installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Clang, load one of these modules using a module load command like:

              module load Clang/16.0.6-GCCcore-12.3.0\n
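
              A minimal sketch of what you can do once the module is loaded: compile and run a small C program with the Clang compiler (hello.c is a placeholder for your own source file).

              module load Clang/16.0.6-GCCcore-12.3.0\n# compile a C source file with Clang and run the result\nclang -O2 -o hello hello.c\n./hello\n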

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Clang/16.0.6-GCCcore-12.3.0 x x x x x x Clang/15.0.5-GCCcore-11.3.0 x x x x x x Clang/13.0.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - Clang/13.0.1-GCCcore-11.3.0 x x x x x x Clang/12.0.1-GCCcore-11.2.0 x x x x x x Clang/12.0.1-GCCcore-10.3.0 x x x x x x Clang/11.0.1-gcccuda-2020b - - - - x - Clang/11.0.1-GCCcore-10.2.0 - x x x x x Clang/10.0.0-GCCcore-9.3.0 - x x - x x Clang/9.0.1-GCCcore-8.3.0 - x x - x x Clang/9.0.1-GCC-8.3.0-CUDA-10.1.243 x - - - x -"}, {"location": "available_software/detail/Clp/", "title": "Clp", "text": ""}, {"location": "available_software/detail/Clp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Clp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Clp, load one of these modules using a module load command like:

              module load Clp/1.17.9-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Clp/1.17.9-foss-2023a x x x x x x Clp/1.17.8-foss-2022b x x x x x x Clp/1.17.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/Clustal-Omega/", "title": "Clustal-Omega", "text": ""}, {"location": "available_software/detail/Clustal-Omega/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Clustal-Omega installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Clustal-Omega, load one of these modules using a module load command like:

              module load Clustal-Omega/1.2.4-intel-compilers-2021.2.0\n
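
              As a usage sketch (options taken from the upstream Clustal Omega documentation, not from this page): align a FASTA file of unaligned sequences; the file names are placeholders.

              module load Clustal-Omega/1.2.4-intel-compilers-2021.2.0\n# multiple sequence alignment: unaligned FASTA in, aligned FASTA out (placeholder file names)\nclustalo -i sequences.fasta -o aligned.fasta -v\n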

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Clustal-Omega/1.2.4-intel-compilers-2021.2.0 - x x - x x Clustal-Omega/1.2.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/ClustalW2/", "title": "ClustalW2", "text": ""}, {"location": "available_software/detail/ClustalW2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ClustalW2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ClustalW2, load one of these modules using a module load command like:

              module load ClustalW2/2.1-intel-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ClustalW2/2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/CmdStanR/", "title": "CmdStanR", "text": ""}, {"location": "available_software/detail/CmdStanR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CmdStanR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CmdStanR, load one of these modules using a module load command like:

              module load CmdStanR/0.7.1-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CmdStanR/0.7.1-foss-2023a-R-4.3.2 x x x x x x CmdStanR/0.5.2-foss-2022a-R-4.2.1 x x x x x x CmdStanR/0.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/CodAn/", "title": "CodAn", "text": ""}, {"location": "available_software/detail/CodAn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CodAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CodAn, load one of these modules using a module load command like:

              module load CodAn/1.2-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CodAn/1.2-foss-2021b x x x x x x"}, {"location": "available_software/detail/CoinUtils/", "title": "CoinUtils", "text": ""}, {"location": "available_software/detail/CoinUtils/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CoinUtils, load one of these modules using a module load command like:

              module load CoinUtils/2.11.10-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CoinUtils/2.11.10-GCC-12.3.0 x x x x x x CoinUtils/2.11.9-GCC-12.2.0 x x x x x x CoinUtils/2.11.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/ColabFold/", "title": "ColabFold", "text": ""}, {"location": "available_software/detail/ColabFold/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ColabFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ColabFold, load one of these modules using a module load command like:

              module load ColabFold/1.5.2-foss-2022a-CUDA-11.7.0\n
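
              A hedged sketch of a structure prediction run (the colabfold_batch command and its arguments follow the upstream ColabFold documentation; the input FASTA and output directory are placeholders, and the CUDA build shown here only makes sense on a GPU node).

              module load ColabFold/1.5.2-foss-2022a-CUDA-11.7.0\n# predict structures for the sequences in query.fasta and write results to results/ (placeholders)\ncolabfold_batch query.fasta results/\n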

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ColabFold/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - ColabFold/1.5.2-foss-2022a - - x - x -"}, {"location": "available_software/detail/CompareM/", "title": "CompareM", "text": ""}, {"location": "available_software/detail/CompareM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CompareM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CompareM, load one of these modules using a module load command like:

              module load CompareM/0.1.2-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CompareM/0.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Compress-Raw-Zlib/", "title": "Compress-Raw-Zlib", "text": ""}, {"location": "available_software/detail/Compress-Raw-Zlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Compress-Raw-Zlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Compress-Raw-Zlib, load one of these modules using a module load command like:

              module load Compress-Raw-Zlib/2.202-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Compress-Raw-Zlib/2.202-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Concorde/", "title": "Concorde", "text": ""}, {"location": "available_software/detail/Concorde/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Concorde installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Concorde, load one of these modules using a module load command like:

              module load Concorde/20031219-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Concorde/20031219-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/CoordgenLibs/", "title": "CoordgenLibs", "text": ""}, {"location": "available_software/detail/CoordgenLibs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CoordgenLibs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CoordgenLibs, load one of these modules using a module load command like:

              module load CoordgenLibs/3.0.1-iimpi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CoordgenLibs/3.0.1-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/CopyKAT/", "title": "CopyKAT", "text": ""}, {"location": "available_software/detail/CopyKAT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CopyKAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CopyKAT, load one of these modules using a module load command like:

              module load CopyKAT/1.1.0-foss-2022b-R-4.2.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CopyKAT/1.1.0-foss-2022b-R-4.2.2 x x x x x x CopyKAT/1.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Coreutils/", "title": "Coreutils", "text": ""}, {"location": "available_software/detail/Coreutils/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Coreutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Coreutils, load one of these modules using a module load command like:

              module load Coreutils/8.32-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Coreutils/8.32-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CppUnit/", "title": "CppUnit", "text": ""}, {"location": "available_software/detail/CppUnit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CppUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CppUnit, load one of these modules using a module load command like:

              module load CppUnit/1.15.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CppUnit/1.15.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/CuPy/", "title": "CuPy", "text": ""}, {"location": "available_software/detail/CuPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which CuPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using CuPy, load one of these modules using a module load command like:

              module load CuPy/8.5.0-fosscuda-2020b\n
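
              As a quick smoke test (a sketch, not taken from this page): CuPy needs an NVIDIA GPU, so run it on a GPU node; the one-liner below just evaluates a small array expression on the device and copies the result back.

              module load CuPy/8.5.0-fosscuda-2020b\n# small GPU array computation: square the numbers 0..4 on the device\npython -c "import cupy as cp; print(cp.asnumpy(cp.arange(5) ** 2))"\n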

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty CuPy/8.5.0-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Cufflinks/", "title": "Cufflinks", "text": ""}, {"location": "available_software/detail/Cufflinks/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cufflinks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cufflinks, load one of these modules using a module load command like:

              module load Cufflinks/20190706-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cufflinks/20190706-GCC-11.2.0 x x x x x x Cufflinks/20190706-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Cython/", "title": "Cython", "text": ""}, {"location": "available_software/detail/Cython/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Cython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Cython, load one of these modules using a module load command like:

              module load Cython/3.0.8-GCCcore-12.2.0\n
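
              A minimal sketch of building a Cython extension in place and importing it (the hello module and its square function are placeholders; this assumes the Python that comes with the toolchain is on your path).

              module load Cython/3.0.8-GCCcore-12.2.0\n# write a trivial .pyx file, compile it in place, and call it from Python\necho 'def square(x): return x * x' > hello.pyx\ncythonize -i hello.pyx\npython -c "import hello; print(hello.square(7))"\n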

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Cython/3.0.8-GCCcore-12.2.0 x x x x x x Cython/3.0.7-GCCcore-12.3.0 x x x x x x Cython/0.29.33-GCCcore-11.3.0 x x x x x x Cython/0.29.22-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/DALI/", "title": "DALI", "text": ""}, {"location": "available_software/detail/DALI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DALI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DALI, load one of these modules using a module load command like:

              module load DALI/2.1.2-foss-2022b-R-4.2.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DALI/2.1.2-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/DAS_Tool/", "title": "DAS_Tool", "text": ""}, {"location": "available_software/detail/DAS_Tool/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DAS_Tool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DAS_Tool, load one of these modules using a module load command like:

              module load DAS_Tool/1.1.1-foss-2021b-R-4.1.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DAS_Tool/1.1.1-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/DB/", "title": "DB", "text": ""}, {"location": "available_software/detail/DB/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DB, load one of these modules using a module load command like:

              module load DB/18.1.40-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DB/18.1.40-GCCcore-12.2.0 x x x x x x DB/18.1.40-GCCcore-11.3.0 x x x x x x DB/18.1.40-GCCcore-11.2.0 x x x x x x DB/18.1.40-GCCcore-10.3.0 x x x x x x DB/18.1.40-GCCcore-10.2.0 x x x x x x DB/18.1.32-GCCcore-9.3.0 x x x x x x DB/18.1.32-GCCcore-8.3.0 x x x x x x DB/18.1.32-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/DBD-mysql/", "title": "DBD-mysql", "text": ""}, {"location": "available_software/detail/DBD-mysql/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DBD-mysql installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DBD-mysql, load one of these modules using a module load command like:

              module load DBD-mysql/4.050-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DBD-mysql/4.050-GCC-11.3.0 x x x x x x DBD-mysql/4.050-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/DBG2OLC/", "title": "DBG2OLC", "text": ""}, {"location": "available_software/detail/DBG2OLC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DBG2OLC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DBG2OLC, load one of these modules using a module load command like:

              module load DBG2OLC/20200724-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DBG2OLC/20200724-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/DB_File/", "title": "DB_File", "text": ""}, {"location": "available_software/detail/DB_File/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DB_File installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DB_File, load one of these modules using a module load command like:

              module load DB_File/1.858-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DB_File/1.858-GCCcore-11.3.0 x x x x x x DB_File/1.857-GCCcore-11.2.0 x x x x x x DB_File/1.855-GCCcore-10.2.0 - x x x x x DB_File/1.835-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/DBus/", "title": "DBus", "text": ""}, {"location": "available_software/detail/DBus/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DBus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DBus, load one of these modules using a module load command like:

              module load DBus/1.15.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DBus/1.15.4-GCCcore-12.3.0 x x x x x x DBus/1.15.2-GCCcore-12.2.0 x x x x x x DBus/1.14.0-GCCcore-11.3.0 x x x x x x DBus/1.13.18-GCCcore-11.2.0 x x x x x x DBus/1.13.18-GCCcore-10.3.0 x x x x x x DBus/1.13.18-GCCcore-10.2.0 x x x x x x DBus/1.13.12-GCCcore-9.3.0 - x x - x x DBus/1.13.12-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/DETONATE/", "title": "DETONATE", "text": ""}, {"location": "available_software/detail/DETONATE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DETONATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DETONATE, load one of these modules using a module load command like:

              module load DETONATE/1.11-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DETONATE/1.11-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/DFT-D3/", "title": "DFT-D3", "text": ""}, {"location": "available_software/detail/DFT-D3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DFT-D3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DFT-D3, load one of these modules using a module load command like:

              module load DFT-D3/3.2.0-intel-compilers-2021.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DFT-D3/3.2.0-intel-compilers-2021.2.0 - x x - x x DFT-D3/3.2.0-iccifort-2020.4.304 - x x x x x"}, {"location": "available_software/detail/DIA-NN/", "title": "DIA-NN", "text": ""}, {"location": "available_software/detail/DIA-NN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DIA-NN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DIA-NN, load one of these modules using a module load command like:

              module load DIA-NN/1.8.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DIA-NN/1.8.1 x x x - x x"}, {"location": "available_software/detail/DIALOGUE/", "title": "DIALOGUE", "text": ""}, {"location": "available_software/detail/DIALOGUE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DIALOGUE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DIALOGUE, load one of these modules using a module load command like:

              module load DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0 x x x x x x"}, {"location": "available_software/detail/DIAMOND/", "title": "DIAMOND", "text": ""}, {"location": "available_software/detail/DIAMOND/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DIAMOND installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DIAMOND, load one of these modules using a module load command like:

              module load DIAMOND/2.1.8-GCC-12.3.0\n
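
              As a usage sketch (options follow the upstream DIAMOND documentation; file names are placeholders): build a protein database once, then align query proteins against it.

              module load DIAMOND/2.1.8-GCC-12.3.0\n# build a DIAMOND database from reference proteins, then run a protein-vs-protein search\ndiamond makedb --in reference_proteins.fasta --db refdb\ndiamond blastp --db refdb --query queries.fasta --out matches.tsv\n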

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DIAMOND/2.1.8-GCC-12.3.0 x x x x x x DIAMOND/2.1.8-GCC-12.2.0 x x x x x x DIAMOND/2.1.0-GCC-11.3.0 x x x x x x DIAMOND/2.0.13-GCC-11.2.0 x x x x x x DIAMOND/2.0.11-GCC-10.3.0 - x x - x x DIAMOND/2.0.7-GCC-10.2.0 x x x x x x DIAMOND/2.0.6-GCC-10.2.0 - x - - - - DIAMOND/0.9.30-iccifort-2019.5.281 - x x - x x DIAMOND/0.9.30-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/DIANA/", "title": "DIANA", "text": ""}, {"location": "available_software/detail/DIANA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DIANA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DIANA, load one of these modules using a module load command like:

              module load DIANA/10.5\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DIANA/10.5 - x x - x - DIANA/10.4 - - x - x -"}, {"location": "available_software/detail/DIRAC/", "title": "DIRAC", "text": ""}, {"location": "available_software/detail/DIRAC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DIRAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DIRAC, load one of these modules using a module load command like:

              module load DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64 - x x - x - DIRAC/19.0-intel-2020a-Python-2.7.18-int64 - x x - x x"}, {"location": "available_software/detail/DL_POLY_Classic/", "title": "DL_POLY_Classic", "text": ""}, {"location": "available_software/detail/DL_POLY_Classic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DL_POLY_Classic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DL_POLY_Classic, load one of these modules using a module load command like:

              module load DL_POLY_Classic/1.10-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DL_POLY_Classic/1.10-intel-2019b - x x - x x DL_POLY_Classic/1.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/DMCfun/", "title": "DMCfun", "text": ""}, {"location": "available_software/detail/DMCfun/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DMCfun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DMCfun, load one of these modules using a module load command like:

              module load DMCfun/1.3.0-foss-2019b-R-3.6.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DMCfun/1.3.0-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/DOLFIN/", "title": "DOLFIN", "text": ""}, {"location": "available_software/detail/DOLFIN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DOLFIN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DOLFIN, load one of these modules using a module load command like:

              module load DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/DRAGMAP/", "title": "DRAGMAP", "text": ""}, {"location": "available_software/detail/DRAGMAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DRAGMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DRAGMAP, load one of these modules using a module load command like:

              module load DRAGMAP/1.3.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DRAGMAP/1.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/DROP/", "title": "DROP", "text": ""}, {"location": "available_software/detail/DROP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DROP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DROP, load one of these modules using a module load command like:

              module load DROP/1.1.0-foss-2020b-R-4.0.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DROP/1.1.0-foss-2020b-R-4.0.3 - x x x x x DROP/1.0.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/DUBStepR/", "title": "DUBStepR", "text": ""}, {"location": "available_software/detail/DUBStepR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DUBStepR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DUBStepR, load one of these modules using a module load command like:

              module load DUBStepR/1.2.0-foss-2021b-R-4.1.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DUBStepR/1.2.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Dakota/", "title": "Dakota", "text": ""}, {"location": "available_software/detail/Dakota/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Dakota installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Dakota, load one of these modules using a module load command like:

              module load Dakota/6.16.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Dakota/6.16.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Dalton/", "title": "Dalton", "text": ""}, {"location": "available_software/detail/Dalton/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Dalton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Dalton, load one of these modules using a module load command like:

              module load Dalton/2020.0-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Dalton/2020.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/DeepLoc/", "title": "DeepLoc", "text": ""}, {"location": "available_software/detail/DeepLoc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DeepLoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DeepLoc, load one of these modules using a module load command like:

              module load DeepLoc/2.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DeepLoc/2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Delly/", "title": "Delly", "text": ""}, {"location": "available_software/detail/Delly/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Delly installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Delly, load one of these modules using a module load command like:

              module load Delly/0.8.7-gompi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Delly/0.8.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/DendroPy/", "title": "DendroPy", "text": ""}, {"location": "available_software/detail/DendroPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DendroPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DendroPy, load one of these modules using a module load command like:

              module load DendroPy/4.6.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.2.0 x x x - x x DendroPy/4.5.2-GCCcore-10.2.0-Python-2.7.18 - x x x x x DendroPy/4.5.2-GCCcore-10.2.0 - x x x x x DendroPy/4.4.0-GCCcore-9.3.0 - x x - x x DendroPy/4.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/DensPart/", "title": "DensPart", "text": ""}, {"location": "available_software/detail/DensPart/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DensPart installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DensPart, load one of these modules using a module load command like:

              module load DensPart/20220603-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DensPart/20220603-intel-2022a x x x x x x"}, {"location": "available_software/detail/Deprecated/", "title": "Deprecated", "text": ""}, {"location": "available_software/detail/Deprecated/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Deprecated installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Deprecated, load one of these modules using a module load command like:

              module load Deprecated/1.2.13-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Deprecated/1.2.13-foss-2022a x x x x x x Deprecated/1.2.13-foss-2021a x x x x x x"}, {"location": "available_software/detail/DiCE-ML/", "title": "DiCE-ML", "text": ""}, {"location": "available_software/detail/DiCE-ML/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DiCE-ML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DiCE-ML, load one of these modules using a module load command like:

              module load DiCE-ML/0.9-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DiCE-ML/0.9-foss-2022a x x x x x x"}, {"location": "available_software/detail/Dice/", "title": "Dice", "text": ""}, {"location": "available_software/detail/Dice/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Dice installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Dice, load one of these modules using a module load command like:

              module load Dice/20240101-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Dice/20240101-foss-2022b x x x x x x Dice/20221025-foss-2022a - x x x x x"}, {"location": "available_software/detail/DoubletFinder/", "title": "DoubletFinder", "text": ""}, {"location": "available_software/detail/DoubletFinder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DoubletFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DoubletFinder, load one of these modules using a module load command like:

              module load DoubletFinder/2.0.3-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DoubletFinder/2.0.3-foss-2020a-R-4.0.0 - - x - x - DoubletFinder/2.0.3-20230819-foss-2022b-R-4.2.2 x x x x x x DoubletFinder/2.0.3-20230131-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Doxygen/", "title": "Doxygen", "text": ""}, {"location": "available_software/detail/Doxygen/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Doxygen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Doxygen, load one of these modules using a module load command like:

              module load Doxygen/1.9.7-GCCcore-12.3.0\n
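
              A minimal sketch of generating API documentation for a source tree: create a default configuration file (which you can edit first) and then run Doxygen on it, from the directory that holds your sources.

              module load Doxygen/1.9.7-GCCcore-12.3.0\n# generate a default Doxyfile, then build the documentation it describes\ndoxygen -g Doxyfile\ndoxygen Doxyfile\n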

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x Doxygen/1.9.4-GCCcore-11.3.0 x x x x x x Doxygen/1.9.1-GCCcore-11.2.0 x x x x x x Doxygen/1.9.1-GCCcore-10.3.0 x x x x x x Doxygen/1.8.20-GCCcore-10.2.0 x x x x x x Doxygen/1.8.17-GCCcore-9.3.0 x x x x x x Doxygen/1.8.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Dsuite/", "title": "Dsuite", "text": ""}, {"location": "available_software/detail/Dsuite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Dsuite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Dsuite, load one of these modules using a module load command like:

              module load Dsuite/20210718-intel-compilers-2021.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Dsuite/20210718-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/DualSPHysics/", "title": "DualSPHysics", "text": ""}, {"location": "available_software/detail/DualSPHysics/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DualSPHysics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DualSPHysics, load one of these modules using a module load command like:

              module load DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1 x - - - x -"}, {"location": "available_software/detail/DyMat/", "title": "DyMat", "text": ""}, {"location": "available_software/detail/DyMat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which DyMat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using DyMat, load one of these modules using a module load command like:

              module load DyMat/0.7-foss-2021b-2020-12-12\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty DyMat/0.7-foss-2021b-2020-12-12 x x x - x x"}, {"location": "available_software/detail/EDirect/", "title": "EDirect", "text": ""}, {"location": "available_software/detail/EDirect/#available-modules", "title": "Available modules", "text": "

              The overview below shows which EDirect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using EDirect, load one of these modules using a module load command like:

              module load EDirect/20.5.20231006-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty EDirect/20.5.20231006-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ELPA/", "title": "ELPA", "text": ""}, {"location": "available_software/detail/ELPA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ELPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ELPA, load one of these modules using a module load command like:

              module load ELPA/2021.05.001-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ELPA/2021.05.001-intel-2021b x x x - x x ELPA/2021.05.001-intel-2021a - x x - x x ELPA/2021.05.001-foss-2021b x x x - x x ELPA/2020.11.001-intel-2020b - x x x x x ELPA/2019.11.001-intel-2019b - x x - x x ELPA/2019.11.001-foss-2019b - x x - x x"}, {"location": "available_software/detail/EMBOSS/", "title": "EMBOSS", "text": ""}, {"location": "available_software/detail/EMBOSS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which EMBOSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using EMBOSS, load one of these modules using a module load command like:

              module load EMBOSS/6.6.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty EMBOSS/6.6.0-foss-2021b x x x - x x EMBOSS/6.6.0-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ESM-2/", "title": "ESM-2", "text": ""}, {"location": "available_software/detail/ESM-2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ESM-2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ESM-2, load one of these modules using a module load command like:

              module load ESM-2/2.0.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ESM-2/2.0.0-foss-2022b x x x x x x ESM-2/2.0.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/ESMF/", "title": "ESMF", "text": ""}, {"location": "available_software/detail/ESMF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ESMF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ESMF, load one of these modules using a module load command like:

              module load ESMF/8.2.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ESMF/8.2.0-foss-2021b x x x - x x ESMF/8.1.1-foss-2021a - x x - x x ESMF/8.0.1-intel-2020b - x x x x x ESMF/8.0.1-foss-2020a - x x - x x ESMF/8.0.0-intel-2019b - x x - x x"}, {"location": "available_software/detail/ESMPy/", "title": "ESMPy", "text": ""}, {"location": "available_software/detail/ESMPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ESMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ESMPy, load one of these modules using a module load command like:

              module load ESMPy/8.0.1-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ESMPy/8.0.1-intel-2020b - x x - x x ESMPy/8.0.1-foss-2020a-Python-3.8.2 - x x - x x ESMPy/8.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ETE/", "title": "ETE", "text": ""}, {"location": "available_software/detail/ETE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ETE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ETE, load one of these modules using a module load command like:

              module load ETE/3.1.3-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ETE/3.1.3-foss-2022b x x x x x x ETE/3.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/EUKulele/", "title": "EUKulele", "text": ""}, {"location": "available_software/detail/EUKulele/#available-modules", "title": "Available modules", "text": "

              The overview below shows which EUKulele installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using EUKulele, load one of these modules using a module load command like:

              module load EUKulele/2.0.6-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty EUKulele/2.0.6-foss-2022a x x x x x x EUKulele/1.0.4-foss-2020b - x x - x x"}, {"location": "available_software/detail/EasyBuild/", "title": "EasyBuild", "text": ""}, {"location": "available_software/detail/EasyBuild/#available-modules", "title": "Available modules", "text": "

              The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using EasyBuild, load one of these modules using a module load command like:

              module load EasyBuild/4.9.0\n
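
              A sketch of two read-only EasyBuild commands that are safe to try (they only query, they do not install anything): print the version of the loaded EasyBuild and search the known easyconfig files for a software name (TensorFlow is just an example search term).

              module load EasyBuild/4.9.0\n# show the EasyBuild version and search the available easyconfigs by name\neb --version\neb --search TensorFlow\n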

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty EasyBuild/4.9.0 x x x x x x EasyBuild/4.8.2 x x x x x x EasyBuild/4.8.1 x x x x x x EasyBuild/4.8.0 x x x x x x EasyBuild/4.7.1 x x x x x x EasyBuild/4.7.0 x x x x x x EasyBuild/4.6.2 x x x x x x EasyBuild/4.6.1 x x x x x x EasyBuild/4.6.0 x x x x x x EasyBuild/4.5.5 x x x x x x EasyBuild/4.5.4 x x x x x x EasyBuild/4.5.3 x x x x x x EasyBuild/4.5.2 x x x x x x EasyBuild/4.5.1 x x x x x x EasyBuild/4.5.0 x x x x x x EasyBuild/4.4.2 x x x x x x EasyBuild/4.4.1 x x x x x x EasyBuild/4.4.0 x x x x x x EasyBuild/4.3.4 x x x x x x EasyBuild/4.3.3 x x x x x x EasyBuild/4.3.2 x x x x x x EasyBuild/4.3.1 x x x x x x EasyBuild/4.3.0 x x x x x x EasyBuild/4.2.2 x x x x x x EasyBuild/4.2.1 x x x x x x EasyBuild/4.2.0 x x x x x x"}, {"location": "available_software/detail/Eigen/", "title": "Eigen", "text": ""}, {"location": "available_software/detail/Eigen/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Eigen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Eigen, load one of these modules using a module load command like:

              module load Eigen/3.4.0-GCCcore-13.2.0\n
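
              Eigen is a header-only C++ library, so using it mostly means compiling your own code against its headers. As a hedged sketch: the module normally puts the headers on the compiler search path, and the $EBROOTEIGEN variable mentioned in the comment is assumed to follow the usual EasyBuild convention; main.cpp is a placeholder for your own source file that includes <Eigen/Dense>.

              module load Eigen/3.4.0-GCCcore-13.2.0\n# compile your own C++ source against the Eigen headers; if they are not found automatically,\n# add an explicit include flag such as -I"$EBROOTEIGEN/include" (assumed EasyBuild variable)\ng++ -O2 main.cpp -o main\n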

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Eigen/3.4.0-GCCcore-13.2.0 x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x Eigen/3.4.0-GCCcore-11.3.0 x x x x x x Eigen/3.4.0-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-10.3.0 x x x x x x Eigen/3.3.9-GCCcore-10.2.0 - - x x x x Eigen/3.3.8-GCCcore-10.2.0 x x x x x x Eigen/3.3.7-GCCcore-9.3.0 x x x x x x Eigen/3.3.7 x x x x x x"}, {"location": "available_software/detail/Elk/", "title": "Elk", "text": ""}, {"location": "available_software/detail/Elk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Elk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Elk, load one of these modules using a module load command like:

              module load Elk/7.0.12-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Elk/7.0.12-foss-2020b - x x x x x"}, {"location": "available_software/detail/EpiSCORE/", "title": "EpiSCORE", "text": ""}, {"location": "available_software/detail/EpiSCORE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which EpiSCORE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using EpiSCORE, load one of these modules using a module load command like:

              module load EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Excel-Writer-XLSX/", "title": "Excel-Writer-XLSX", "text": ""}, {"location": "available_software/detail/Excel-Writer-XLSX/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Excel-Writer-XLSX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Excel-Writer-XLSX, load one of these modules using a module load command like:

              module load Excel-Writer-XLSX/1.09-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Excel-Writer-XLSX/1.09-foss-2020b - x x x x x"}, {"location": "available_software/detail/Exonerate/", "title": "Exonerate", "text": ""}, {"location": "available_software/detail/Exonerate/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Exonerate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Exonerate, load one of these modules using a module load command like:

              module load Exonerate/2.4.0-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Exonerate/2.4.0-iccifort-2019.5.281 - x x - x x Exonerate/2.4.0-GCC-12.2.0 x x x x x x Exonerate/2.4.0-GCC-11.2.0 x x x x x x Exonerate/2.4.0-GCC-10.2.0 x x x - x x Exonerate/2.4.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ExtremeLy/", "title": "ExtremeLy", "text": ""}, {"location": "available_software/detail/ExtremeLy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ExtremeLy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ExtremeLy, load one of these modules using a module load command like:

              module load ExtremeLy/2.3.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ExtremeLy/2.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/FALCON/", "title": "FALCON", "text": ""}, {"location": "available_software/detail/FALCON/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FALCON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FALCON, load one of these modules using a module load command like:

              module load FALCON/1.8.8-intel-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FALCON/1.8.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/FASTA/", "title": "FASTA", "text": ""}, {"location": "available_software/detail/FASTA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FASTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FASTA, load one of these modules using a module load command like:

              module load FASTA/36.3.8i-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FASTA/36.3.8i-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/FASTX-Toolkit/", "title": "FASTX-Toolkit", "text": ""}, {"location": "available_software/detail/FASTX-Toolkit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FASTX-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FASTX-Toolkit, load one of these modules using a module load command like:

              module load FASTX-Toolkit/0.0.14-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FASTX-Toolkit/0.0.14-GCC-11.3.0 x x x x x x FASTX-Toolkit/0.0.14-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/FDS/", "title": "FDS", "text": ""}, {"location": "available_software/detail/FDS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FDS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FDS, load one of these modules using a module load command like:

              module load FDS/6.8.0-intel-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FDS/6.8.0-intel-2022b x x x x x x FDS/6.7.9-intel-2022a x x x - x x FDS/6.7.7-intel-2021b x x x - x x FDS/6.7.6-intel-2020b - x x x x x FDS/6.7.5-intel-2020b - - x - x - FDS/6.7.5-intel-2020a - x x - x x FDS/6.7.4-intel-2020a - x x - x x"}, {"location": "available_software/detail/FEniCS/", "title": "FEniCS", "text": ""}, {"location": "available_software/detail/FEniCS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FEniCS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FEniCS, load one of these modules using a module load command like:

              module load FEniCS/2019.1.0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FEniCS/2019.1.0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FFAVES/", "title": "FFAVES", "text": ""}, {"location": "available_software/detail/FFAVES/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FFAVES installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FFAVES, load one of these modules using a module load command like:

              module load FFAVES/2022.11.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FFAVES/2022.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/FFC/", "title": "FFC", "text": ""}, {"location": "available_software/detail/FFC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FFC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FFC, load one of these modules using a module load command like:

              module load FFC/2019.1.0.post0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FFC/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FFTW.MPI/", "title": "FFTW.MPI", "text": ""}, {"location": "available_software/detail/FFTW.MPI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FFTW.MPI, load one of these modules using a module load command like:

              module load FFTW.MPI/3.3.10-gompi-2023b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FFTW.MPI/3.3.10-gompi-2023b x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x FFTW.MPI/3.3.10-gompi-2022a x x x x x x"}, {"location": "available_software/detail/FFTW/", "title": "FFTW", "text": ""}, {"location": "available_software/detail/FFTW/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FFTW, load one of these modules using a module load command like:

              module load FFTW/3.3.10-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FFTW/3.3.10-gompi-2021b x x x x x x FFTW/3.3.10-GCC-13.2.0 x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x FFTW/3.3.10-GCC-11.3.0 x x x x x x FFTW/3.3.9-intel-2021a - x x - x x FFTW/3.3.9-gompi-2021a x x x x x x FFTW/3.3.8-iomkl-2020a - x - - - - FFTW/3.3.8-intelcuda-2020b - - - - x - FFTW/3.3.8-intel-2020b - x x x x x FFTW/3.3.8-intel-2020a - x x - x x FFTW/3.3.8-intel-2019b - x x - x x FFTW/3.3.8-iimpi-2020b - x - - - - FFTW/3.3.8-gompic-2020b x - - - x - FFTW/3.3.8-gompi-2020b x x x x x x FFTW/3.3.8-gompi-2020a - x x - x x FFTW/3.3.8-gompi-2019b x x x - x x"}, {"location": "available_software/detail/FFmpeg/", "title": "FFmpeg", "text": ""}, {"location": "available_software/detail/FFmpeg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FFmpeg, load one of these modules using a module load command like:

              module load FFmpeg/6.0-GCCcore-12.3.0\n
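
              Once the module is loaded you can check which FFmpeg build you picked up; note that FFmpeg expects a single-dash flag here (standard FFmpeg behaviour, not specific to these modules):

              ffmpeg -version\n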

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FFmpeg/6.0-GCCcore-12.3.0 x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x FFmpeg/4.4.2-GCCcore-11.3.0 x x x x x x FFmpeg/4.3.2-GCCcore-11.2.0 x x x x x x FFmpeg/4.3.2-GCCcore-10.3.0 x x x x x x FFmpeg/4.3.1-GCCcore-10.2.0 x x x x x x FFmpeg/4.2.2-GCCcore-9.3.0 - x x - x x FFmpeg/4.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FIAT/", "title": "FIAT", "text": ""}, {"location": "available_software/detail/FIAT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FIAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FIAT, load one of these modules using a module load command like:

              module load FIAT/2019.1.0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FIAT/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FIGARO/", "title": "FIGARO", "text": ""}, {"location": "available_software/detail/FIGARO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FIGARO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FIGARO, load one of these modules using a module load command like:

              module load FIGARO/1.1.2-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FIGARO/1.1.2-intel-2020b - - x - x x"}, {"location": "available_software/detail/FLAC/", "title": "FLAC", "text": ""}, {"location": "available_software/detail/FLAC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FLAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FLAC, load one of these modules using a module load command like:

              module load FLAC/1.4.2-GCCcore-12.3.0\n
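
              After loading the module, the flac encoder should be on your PATH; a quick sanity check (using the standard flac command-line option) is:

              flac --version\n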

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FLAC/1.4.2-GCCcore-12.3.0 x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x FLAC/1.3.4-GCCcore-11.3.0 x x x x x x FLAC/1.3.3-GCCcore-11.2.0 x x x x x x FLAC/1.3.3-GCCcore-10.3.0 x x x x x x FLAC/1.3.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/FLAIR/", "title": "FLAIR", "text": ""}, {"location": "available_software/detail/FLAIR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FLAIR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FLAIR, load one of these modules using a module load command like:

              module load FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4 - x x - x - FLAIR/1.5-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FLANN/", "title": "FLANN", "text": ""}, {"location": "available_software/detail/FLANN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FLANN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FLANN, load one of these modules using a module load command like:

              module load FLANN/1.9.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FLANN/1.9.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/FLASH/", "title": "FLASH", "text": ""}, {"location": "available_software/detail/FLASH/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FLASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FLASH, load one of these modules using a module load command like:

              module load FLASH/2.2.00-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FLASH/2.2.00-foss-2020b - x x x x x FLASH/2.2.00-GCC-11.2.0 x x x - x x FLASH/1.2.11-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/FLTK/", "title": "FLTK", "text": ""}, {"location": "available_software/detail/FLTK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FLTK, load one of these modules using a module load command like:

              module load FLTK/1.3.5-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FLTK/1.3.5-GCCcore-10.2.0 - x x x x x FLTK/1.3.5-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/FLUENT/", "title": "FLUENT", "text": ""}, {"location": "available_software/detail/FLUENT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FLUENT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FLUENT, load one of these modules using a module load command like:

              module load FLUENT/2023R1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FLUENT/2023R1 x x x x x x FLUENT/2022R1 - x x - x x FLUENT/2021R2 x x x x x x FLUENT/2019R3 - x x - x x"}, {"location": "available_software/detail/FMM3D/", "title": "FMM3D", "text": ""}, {"location": "available_software/detail/FMM3D/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FMM3D installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FMM3D, load one of these modules using a module load command like:

              module load FMM3D/20211018-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FMM3D/20211018-foss-2020b - x x x x x"}, {"location": "available_software/detail/FMPy/", "title": "FMPy", "text": ""}, {"location": "available_software/detail/FMPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FMPy, load one of these modules using a module load command like:

              module load FMPy/0.3.2-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FMPy/0.3.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/FSL/", "title": "FSL", "text": ""}, {"location": "available_software/detail/FSL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FSL, load one of these modules using a module load command like:

              module load FSL/6.0.7.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FSL/6.0.7.2 x x x x x x FSL/6.0.5.1-foss-2021a - x x - x x FSL/6.0.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FabIO/", "title": "FabIO", "text": ""}, {"location": "available_software/detail/FabIO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FabIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FabIO, load one of these modules using a module load command like:

              module load FabIO/0.11.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FabIO/0.11.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Faiss/", "title": "Faiss", "text": ""}, {"location": "available_software/detail/Faiss/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Faiss installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Faiss, load one of these modules using a module load command like:

              module load Faiss/1.7.2-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Faiss/1.7.2-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/FastANI/", "title": "FastANI", "text": ""}, {"location": "available_software/detail/FastANI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FastANI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FastANI, load one of these modules using a module load command like:

              module load FastANI/1.34-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FastANI/1.34-GCC-12.3.0 x x x x x x FastANI/1.33-intel-compilers-2021.4.0 x x x - x x FastANI/1.33-iccifort-2020.4.304 - x x x x x FastANI/1.33-GCC-11.2.0 x x x - x x FastANI/1.33-GCC-10.2.0 - x x - x - FastANI/1.31-iccifort-2020.1.217 - x x - x x FastANI/1.3-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/FastME/", "title": "FastME", "text": ""}, {"location": "available_software/detail/FastME/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FastME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FastME, load one of these modules using a module load command like:

              module load FastME/2.1.6.3-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FastME/2.1.6.3-GCC-12.3.0 x x x x x x FastME/2.1.6.1-iccifort-2019.5.281 - x x - x x FastME/2.1.6.1-GCC-10.2.0 - x x x x x FastME/2.1.6.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastQC/", "title": "FastQC", "text": ""}, {"location": "available_software/detail/FastQC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FastQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FastQC, load one of these modules using a module load command like:

              module load FastQC/0.11.9-Java-11\n
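
              Once the module is loaded you can confirm which FastQC version is picked up (the --version flag is part of the regular FastQC command line):

              fastqc --version\n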

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FastQC/0.11.9-Java-11 x x x x x x"}, {"location": "available_software/detail/FastQ_Screen/", "title": "FastQ_Screen", "text": ""}, {"location": "available_software/detail/FastQ_Screen/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FastQ_Screen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FastQ_Screen, load one of these modules using a module load command like:

              module load FastQ_Screen/0.14.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FastQ_Screen/0.14.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/FastTree/", "title": "FastTree", "text": ""}, {"location": "available_software/detail/FastTree/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FastTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FastTree, load one of these modules using a module load command like:

              module load FastTree/2.1.11-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FastTree/2.1.11-GCCcore-12.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.2.0 x x x - x x FastTree/2.1.11-GCCcore-10.2.0 - x x x x x FastTree/2.1.11-GCCcore-9.3.0 - x x - x x FastTree/2.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastViromeExplorer/", "title": "FastViromeExplorer", "text": ""}, {"location": "available_software/detail/FastViromeExplorer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FastViromeExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FastViromeExplorer, load one of these modules using a module load command like:

              module load FastViromeExplorer/20180422-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FastViromeExplorer/20180422-foss-2019b - x x - x x"}, {"location": "available_software/detail/Fastaq/", "title": "Fastaq", "text": ""}, {"location": "available_software/detail/Fastaq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Fastaq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Fastaq, load one of these modules using a module load command like:

              module load Fastaq/3.17.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Fastaq/3.17.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Fiji/", "title": "Fiji", "text": ""}, {"location": "available_software/detail/Fiji/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Fiji installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Fiji, load one of these modules using a module load command like:

              module load Fiji/2.9.0-Java-1.8\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Fiji/2.9.0-Java-1.8 x x x - x x"}, {"location": "available_software/detail/Filtlong/", "title": "Filtlong", "text": ""}, {"location": "available_software/detail/Filtlong/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Filtlong installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Filtlong, load one of these modules using a module load command like:

              module load Filtlong/0.2.0-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Filtlong/0.2.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Fiona/", "title": "Fiona", "text": ""}, {"location": "available_software/detail/Fiona/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Fiona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Fiona, load one of these modules using a module load command like:

              module load Fiona/1.9.5-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Fiona/1.9.5-foss-2023a x x x x x x Fiona/1.9.2-foss-2022b x x x x x x Fiona/1.8.21-foss-2022a x x x x x x Fiona/1.8.21-foss-2021b x x x x x x Fiona/1.8.20-intel-2020b - x x - x x Fiona/1.8.20-foss-2020b - x x x x x Fiona/1.8.16-foss-2020a-Python-3.8.2 - x x - x x Fiona/1.8.13-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Flask/", "title": "Flask", "text": ""}, {"location": "available_software/detail/Flask/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Flask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Flask, load one of these modules using a module load command like:

              module load Flask/2.2.2-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Flask/2.2.2-GCCcore-11.3.0 x x x x x x Flask/2.0.2-GCCcore-11.2.0 x x x - x x Flask/1.1.4-GCCcore-10.3.0 x x x x x x Flask/1.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FlexiBLAS/", "title": "FlexiBLAS", "text": ""}, {"location": "available_software/detail/FlexiBLAS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FlexiBLAS, load one of these modules using a module load command like:

              module load FlexiBLAS/3.3.1-GCC-13.2.0\n
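
              If the installation ships the flexiblas helper tool (recent FlexiBLAS releases do, though this is not guaranteed for every version listed below), you can list the BLAS backends it can dispatch to with:

              flexiblas list\n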

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x FlexiBLAS/3.2.0-GCC-11.3.0 x x x x x x FlexiBLAS/3.0.4-GCC-11.2.0 x x x x x x FlexiBLAS/3.0.4-GCC-10.3.0 x x x x x x"}, {"location": "available_software/detail/Flye/", "title": "Flye", "text": ""}, {"location": "available_software/detail/Flye/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Flye installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Flye, load one of these modules using a module load command like:

              module load Flye/2.9.2-GCC-11.3.0\n
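
              After loading one of these modules you can check which Flye release is active (assuming the usual Flye command-line interface):

              flye --version\n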

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Flye/2.9.2-GCC-11.3.0 x x x x x x Flye/2.9-intel-compilers-2021.2.0 - x x - x x Flye/2.9-GCC-10.3.0 x x x x x - Flye/2.8.3-iccifort-2020.4.304 - x x - x - Flye/2.8.3-GCC-10.2.0 - x x - x - Flye/2.8.1-intel-2020a-Python-3.8.2 - x x - x x Flye/2.7-intel-2019b-Python-3.7.4 - x - - - - Flye/2.6-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FragGeneScan/", "title": "FragGeneScan", "text": ""}, {"location": "available_software/detail/FragGeneScan/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FragGeneScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FragGeneScan, load one of these modules using a module load command like:

              module load FragGeneScan/1.31-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FragGeneScan/1.31-GCCcore-11.3.0 x x x x x x FragGeneScan/1.31-GCCcore-11.2.0 x x x - x x FragGeneScan/1.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FreeBarcodes/", "title": "FreeBarcodes", "text": ""}, {"location": "available_software/detail/FreeBarcodes/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FreeBarcodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FreeBarcodes, load one of these modules using a module load command like:

              module load FreeBarcodes/3.0.a5-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FreeBarcodes/3.0.a5-foss-2021b x x x - x x"}, {"location": "available_software/detail/FreeFEM/", "title": "FreeFEM", "text": ""}, {"location": "available_software/detail/FreeFEM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FreeFEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FreeFEM, load one of these modules using a module load command like:

              module load FreeFEM/4.5-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FreeFEM/4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FreeImage/", "title": "FreeImage", "text": ""}, {"location": "available_software/detail/FreeImage/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FreeImage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FreeImage, load one of these modules using a module load command like:

              module load FreeImage/3.18.0-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FreeImage/3.18.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/FreeSurfer/", "title": "FreeSurfer", "text": ""}, {"location": "available_software/detail/FreeSurfer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FreeSurfer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FreeSurfer, load one of these modules using a module load command like:

              module load FreeSurfer/7.3.2-centos8_x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FreeSurfer/7.3.2-centos8_x86_64 x x x - x x FreeSurfer/7.2.0-centos8_x86_64 - x x - x x"}, {"location": "available_software/detail/FreeXL/", "title": "FreeXL", "text": ""}, {"location": "available_software/detail/FreeXL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FreeXL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FreeXL, load one of these modules using a module load command like:

              module load FreeXL/1.0.6-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FreeXL/1.0.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/FriBidi/", "title": "FriBidi", "text": ""}, {"location": "available_software/detail/FriBidi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FriBidi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FriBidi, load one of these modules using a module load command like:

              module load FriBidi/1.0.12-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x FriBidi/1.0.12-GCCcore-11.3.0 x x x x x x FriBidi/1.0.10-GCCcore-11.2.0 x x x x x x FriBidi/1.0.10-GCCcore-10.3.0 x x x x x x FriBidi/1.0.10-GCCcore-10.2.0 x x x x x x FriBidi/1.0.9-GCCcore-9.3.0 - x x - x x FriBidi/1.0.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FuSeq/", "title": "FuSeq", "text": ""}, {"location": "available_software/detail/FuSeq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FuSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FuSeq, load one of these modules using a module load command like:

              module load FuSeq/1.1.2-gompi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FuSeq/1.1.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/FusionCatcher/", "title": "FusionCatcher", "text": ""}, {"location": "available_software/detail/FusionCatcher/#available-modules", "title": "Available modules", "text": "

              The overview below shows which FusionCatcher installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using FusionCatcher, load one of these modules using a module load command like:

              module load FusionCatcher/1.30-foss-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty FusionCatcher/1.30-foss-2019b-Python-2.7.16 - x x - x x FusionCatcher/1.20-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/GAPPadder/", "title": "GAPPadder", "text": ""}, {"location": "available_software/detail/GAPPadder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GAPPadder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GAPPadder, load one of these modules using a module load command like:

              module load GAPPadder/20170601-foss-2021b-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GAPPadder/20170601-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/GATB-Core/", "title": "GATB-Core", "text": ""}, {"location": "available_software/detail/GATB-Core/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GATB-Core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GATB-Core, load one of these modules using a module load command like:

              module load GATB-Core/1.4.2-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GATB-Core/1.4.2-gompi-2022a x x x x x x"}, {"location": "available_software/detail/GATE/", "title": "GATE", "text": ""}, {"location": "available_software/detail/GATE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GATE, load one of these modules using a module load command like:

              module load GATE/9.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GATE/9.2-foss-2022a x x x x x x GATE/9.2-foss-2021b x x x x x x GATE/9.1-foss-2021b x x x x x x GATE/9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GATK/", "title": "GATK", "text": ""}, {"location": "available_software/detail/GATK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GATK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GATK, load one of these modules using a module load command like:

              module load GATK/4.4.0.0-GCCcore-12.3.0-Java-17\n
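
              After loading the module, the gatk wrapper script should be available; with GATK4 you can typically confirm the version with:

              gatk --version\n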

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GATK/4.4.0.0-GCCcore-12.3.0-Java-17 x x x x x x GATK/4.3.0.0-GCCcore-11.3.0-Java-11 x x x x x x GATK/4.2.0.0-GCCcore-10.2.0-Java-11 - x x x x x GATK/4.1.8.1-GCCcore-9.3.0-Java-1.8 - x x - x x"}, {"location": "available_software/detail/GBprocesS/", "title": "GBprocesS", "text": ""}, {"location": "available_software/detail/GBprocesS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GBprocesS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GBprocesS, load one of these modules using a module load command like:

              module load GBprocesS/4.0.0.post1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GBprocesS/4.0.0.post1-foss-2022a x x x x x x GBprocesS/2.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GCC/", "title": "GCC", "text": ""}, {"location": "available_software/detail/GCC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GCC, load one of these modules using a module load command like:

              module load GCC/13.2.0\n
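
              Once the module is loaded, a quick sanity check that the intended compiler is first in your PATH (standard GCC behaviour) is:

              gcc --version\n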

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GCC/13.2.0 x x x x x x GCC/12.3.0 x x x x x x GCC/12.2.0 x x x x x x GCC/11.3.0 x x x x x x GCC/11.2.0 x x x x x x GCC/10.3.0 x x x x x x GCC/10.2.0 x x x x x x GCC/9.3.0 - x x x x x GCC/8.3.0 x x x x x x"}, {"location": "available_software/detail/GCCcore/", "title": "GCCcore", "text": ""}, {"location": "available_software/detail/GCCcore/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GCCcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GCCcore, load one of these modules using a module load command like:

              module load GCCcore/13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GCCcore/13.2.0 x x x x x x GCCcore/12.3.0 x x x x x x GCCcore/12.2.0 x x x x x x GCCcore/11.3.0 x x x x x x GCCcore/11.2.0 x x x x x x GCCcore/10.3.0 x x x x x x GCCcore/10.2.0 x x x x x x GCCcore/9.3.0 x x x x x x GCCcore/8.3.0 x x x x x x GCCcore/8.2.0 - x - - - -"}, {"location": "available_software/detail/GConf/", "title": "GConf", "text": ""}, {"location": "available_software/detail/GConf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GConf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GConf, load one of these modules using a module load command like:

              module load GConf/3.2.6-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GConf/3.2.6-GCCcore-11.2.0 x x x x x x GConf/3.2.6-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GDAL/", "title": "GDAL", "text": ""}, {"location": "available_software/detail/GDAL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GDAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GDAL, load one of these modules using a module load command like:

              module load GDAL/3.7.1-foss-2023a\n
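
              After loading the module, the GDAL command-line utilities become available; gdalinfo --version is a quick way to confirm the build you picked up (standard GDAL tooling):

              gdalinfo --version\n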

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GDAL/3.7.1-foss-2023a x x x x x x GDAL/3.6.2-foss-2022b x x x x x x GDAL/3.5.0-foss-2022a x x x x x x GDAL/3.3.2-foss-2021b x x x x x x GDAL/3.3.0-foss-2021a x x x x x x GDAL/3.2.1-intel-2020b - x x - x x GDAL/3.2.1-fosscuda-2020b - - - - x - GDAL/3.2.1-foss-2020b - x x x x x GDAL/3.0.4-foss-2020a-Python-3.8.2 - x x - x x GDAL/3.0.2-intel-2019b-Python-3.7.4 - - x - x x GDAL/3.0.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDB/", "title": "GDB", "text": ""}, {"location": "available_software/detail/GDB/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GDB, load one of these modules using a module load command like:

              module load GDB/9.1-GCCcore-8.3.0-Python-3.7.4\n
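
              Once the module is loaded you can confirm the debugger version (standard GDB behaviour):

              gdb --version\n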

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GDB/9.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDCM/", "title": "GDCM", "text": ""}, {"location": "available_software/detail/GDCM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GDCM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GDCM, load one of these modules using a module load command like:

              module load GDCM/3.0.21-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GDCM/3.0.21-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/GDGraph/", "title": "GDGraph", "text": ""}, {"location": "available_software/detail/GDGraph/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GDGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GDGraph, load one of these modules using a module load command like:

              module load GDGraph/1.56-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GDGraph/1.56-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GDRCopy/", "title": "GDRCopy", "text": ""}, {"location": "available_software/detail/GDRCopy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GDRCopy, load one of these modules using a module load command like:

              module load GDRCopy/2.3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GDRCopy/2.3.1-GCCcore-12.3.0 x - x - x - GDRCopy/2.3-GCCcore-11.3.0 x x x - x x GDRCopy/2.3-GCCcore-11.2.0 x x x - x x GDRCopy/2.2-GCCcore-10.3.0 x - - - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x"}, {"location": "available_software/detail/GEGL/", "title": "GEGL", "text": ""}, {"location": "available_software/detail/GEGL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GEGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GEGL, load one of these modules using a module load command like:

              module load GEGL/0.4.30-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GEGL/0.4.30-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GEOS/", "title": "GEOS", "text": ""}, {"location": "available_software/detail/GEOS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GEOS, load one of these modules using a module load command like:

              module load GEOS/3.12.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GEOS/3.12.0-GCC-12.3.0 x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x GEOS/3.10.3-GCC-11.3.0 x x x x x x GEOS/3.9.1-iccifort-2020.4.304 - x x x x x GEOS/3.9.1-GCC-11.2.0 x x x x x x GEOS/3.9.1-GCC-10.3.0 x x x x x x GEOS/3.9.1-GCC-10.2.0 - x x x x x GEOS/3.8.1-GCC-9.3.0-Python-3.8.2 - x x - x x GEOS/3.8.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x GEOS/3.8.0-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GFF3-toolkit/", "title": "GFF3-toolkit", "text": ""}, {"location": "available_software/detail/GFF3-toolkit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GFF3-toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GFF3-toolkit, load one of these modules using a module load command like:

              module load GFF3-toolkit/2.1.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GFF3-toolkit/2.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/GIMP/", "title": "GIMP", "text": ""}, {"location": "available_software/detail/GIMP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GIMP, load one of these modules using a module load command like:

              module load GIMP/2.10.24-GCC-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GIMP/2.10.24-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/GL2PS/", "title": "GL2PS", "text": ""}, {"location": "available_software/detail/GL2PS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GL2PS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GL2PS, load one of these modules using a module load command like:

              module load GL2PS/1.4.2-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GL2PS/1.4.2-GCCcore-11.3.0 x x x x x x GL2PS/1.4.2-GCCcore-11.2.0 x x x x x x GL2PS/1.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLFW/", "title": "GLFW", "text": ""}, {"location": "available_software/detail/GLFW/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GLFW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GLFW, load one of these modules using a module load command like:

              module load GLFW/3.3.8-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GLFW/3.3.8-GCCcore-12.3.0 x x x x x x GLFW/3.3.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/GLIMPSE/", "title": "GLIMPSE", "text": ""}, {"location": "available_software/detail/GLIMPSE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GLIMPSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GLIMPSE, load one of these modules using a module load command like:

              module load GLIMPSE/2.0.0-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GLIMPSE/2.0.0-GCC-12.2.0 x x x x x x GLIMPSE/2.0.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GLM/", "title": "GLM", "text": ""}, {"location": "available_software/detail/GLM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GLM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GLM, load one of these modules using a module load command like:

              module load GLM/0.9.9.8-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GLM/0.9.9.8-GCCcore-10.2.0 x x x x x x GLM/0.9.9.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLPK/", "title": "GLPK", "text": ""}, {"location": "available_software/detail/GLPK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GLPK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GLPK, load one of these modules using a module load command like:

              module load GLPK/5.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GLPK/5.0-GCCcore-12.3.0 x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x GLPK/5.0-GCCcore-11.3.0 x x x x x x GLPK/5.0-GCCcore-11.2.0 x x x x x x GLPK/5.0-GCCcore-10.3.0 x x x x x x GLPK/4.65-GCCcore-10.2.0 x x x x x x GLPK/4.65-GCCcore-9.3.0 - x x - x x GLPK/4.65-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLib/", "title": "GLib", "text": ""}, {"location": "available_software/detail/GLib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using GLib, load one of these modules using a module load command like:

              module load GLib/2.77.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GLib/2.77.1-GCCcore-12.3.0 x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x GLib/2.72.1-GCCcore-11.3.0 x x x x x x GLib/2.69.1-GCCcore-11.2.0 x x x x x x GLib/2.68.2-GCCcore-10.3.0 x x x x x x GLib/2.66.1-GCCcore-10.2.0 x x x x x x GLib/2.64.1-GCCcore-9.3.0 x x x x x x GLib/2.62.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/GLibmm/", "title": "GLibmm", "text": ""}, {"location": "available_software/detail/GLibmm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GLibmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GLibmm, load one of these modules using a module load command like:

              module load GLibmm/2.66.4-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GLibmm/2.66.4-GCCcore-10.3.0 - x x - x x GLibmm/2.49.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GMAP-GSNAP/", "title": "GMAP-GSNAP", "text": ""}, {"location": "available_software/detail/GMAP-GSNAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GMAP-GSNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GMAP-GSNAP, load one of these modules using a module load command like:

              module load GMAP-GSNAP/2023-04-20-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GMAP-GSNAP/2023-04-20-GCC-12.2.0 x x x x x x GMAP-GSNAP/2023-02-17-GCC-11.3.0 x x x x x x GMAP-GSNAP/2019-09-12-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/GMP/", "title": "GMP", "text": ""}, {"location": "available_software/detail/GMP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GMP, load one of these modules using a module load command like:

              module load GMP/6.2.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GMP/6.2.1-GCCcore-12.3.0 x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x GMP/6.2.1-GCCcore-11.3.0 x x x x x x GMP/6.2.1-GCCcore-11.2.0 x x x x x x GMP/6.2.1-GCCcore-10.3.0 x x x x x x GMP/6.2.0-GCCcore-10.2.0 x x x x x x GMP/6.2.0-GCCcore-9.3.0 x x x x x x GMP/6.1.2-GCCcore-8.3.0 x x x x x x GMP/6.1.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/GOATOOLS/", "title": "GOATOOLS", "text": ""}, {"location": "available_software/detail/GOATOOLS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GOATOOLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GOATOOLS, load one of these modules using a module load command like:

              module load GOATOOLS/1.3.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GOATOOLS/1.3.1-foss-2022a x x x x x x GOATOOLS/1.3.1-foss-2021b x x x x x x GOATOOLS/1.1.6-foss-2020b - x x x x x"}, {"location": "available_software/detail/GObject-Introspection/", "title": "GObject-Introspection", "text": ""}, {"location": "available_software/detail/GObject-Introspection/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GObject-Introspection, load one of these modules using a module load command like:

              module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x GObject-Introspection/1.72.0-GCCcore-11.3.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-11.2.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-10.3.0 x x x x x x GObject-Introspection/1.66.1-GCCcore-10.2.0 x x x x x x GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x GObject-Introspection/1.63.1-GCCcore-8.3.0-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/GPAW-setups/", "title": "GPAW-setups", "text": ""}, {"location": "available_software/detail/GPAW-setups/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GPAW-setups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GPAW-setups, load one of these modules using a module load command like:

              module load GPAW-setups/0.9.20000\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GPAW-setups/0.9.20000 x x x x x x"}, {"location": "available_software/detail/GPAW/", "title": "GPAW", "text": ""}, {"location": "available_software/detail/GPAW/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GPAW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GPAW, load one of these modules using a module load command like:

              module load GPAW/22.8.0-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GPAW/22.8.0-intel-2022a x x x x x x GPAW/22.8.0-intel-2021b x x x - x x GPAW/22.8.0-foss-2021b x x x - x x GPAW/20.1.0-intel-2019b-Python-3.7.4 - x x - x x GPAW/20.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GPy/", "title": "GPy", "text": ""}, {"location": "available_software/detail/GPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GPy, load one of these modules using a module load command like:

              module load GPy/1.10.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GPy/1.10.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/GPyOpt/", "title": "GPyOpt", "text": ""}, {"location": "available_software/detail/GPyOpt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GPyOpt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GPyOpt, load one of these modules using a module load command like:

              module load GPyOpt/1.2.6-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GPyOpt/1.2.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/GPyTorch/", "title": "GPyTorch", "text": ""}, {"location": "available_software/detail/GPyTorch/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GPyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GPyTorch, load one of these modules using a module load command like:

              module load GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1 x - - - x - GPyTorch/1.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/GRASP-suite/", "title": "GRASP-suite", "text": ""}, {"location": "available_software/detail/GRASP-suite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GRASP-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GRASP-suite, load one of these modules using a module load command like:

              module load GRASP-suite/2023-05-09-Java-17\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GRASP-suite/2023-05-09-Java-17 x x x x x x"}, {"location": "available_software/detail/GRASS/", "title": "GRASS", "text": ""}, {"location": "available_software/detail/GRASS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GRASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GRASS, load one of these modules using a module load command like:

              module load GRASS/8.2.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GRASS/8.2.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/GROMACS/", "title": "GROMACS", "text": ""}, {"location": "available_software/detail/GROMACS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GROMACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GROMACS, load one of these modules using a module load command like:

              module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2 x - - - x - GROMACS/2021.3-foss-2021a-CUDA-11.3.1 x - - - x - GROMACS/2021.2-fosscuda-2020b x - - - x - GROMACS/2021-foss-2020b - x x x x x GROMACS/2020-foss-2019b - x x - x - GROMACS/2019.4-foss-2019b - x x - x - GROMACS/2019.3-foss-2019b - x x - x -"}, {"location": "available_software/detail/GSL/", "title": "GSL", "text": ""}, {"location": "available_software/detail/GSL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GSL, load one of these modules using a module load command like:

              module load GSL/2.7-intel-compilers-2021.4.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GSL/2.7-intel-compilers-2021.4.0 x x x - x x GSL/2.7-GCC-12.3.0 x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x GSL/2.7-GCC-11.3.0 x x x x x x GSL/2.7-GCC-11.2.0 x x x x x x GSL/2.7-GCC-10.3.0 x x x x x x GSL/2.6-iccifort-2020.4.304 - x x x x x GSL/2.6-iccifort-2020.1.217 - x x - x x GSL/2.6-iccifort-2019.5.281 - x x - x x GSL/2.6-GCC-10.2.0 x x x x x x GSL/2.6-GCC-9.3.0 - x x x x x GSL/2.6-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GST-plugins-bad/", "title": "GST-plugins-bad", "text": ""}, {"location": "available_software/detail/GST-plugins-bad/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GST-plugins-bad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GST-plugins-bad, load one of these modules using a module load command like:

              module load GST-plugins-bad/1.20.2-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GST-plugins-bad/1.20.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GST-plugins-base/", "title": "GST-plugins-base", "text": ""}, {"location": "available_software/detail/GST-plugins-base/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GST-plugins-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GST-plugins-base, load one of these modules using a module load command like:

              module load GST-plugins-base/1.20.2-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GST-plugins-base/1.20.2-GCC-11.3.0 x x x x x x GST-plugins-base/1.18.5-GCC-11.2.0 x x x x x x GST-plugins-base/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GStreamer/", "title": "GStreamer", "text": ""}, {"location": "available_software/detail/GStreamer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GStreamer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GStreamer, load one of these modules using a module load command like:

              module load GStreamer/1.20.2-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GStreamer/1.20.2-GCC-11.3.0 x x x x x x GStreamer/1.18.5-GCC-11.2.0 x x x x x x GStreamer/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTDB-Tk/", "title": "GTDB-Tk", "text": ""}, {"location": "available_software/detail/GTDB-Tk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GTDB-Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GTDB-Tk, load one of these modules using a module load command like:

              module load GTDB-Tk/2.3.2-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GTDB-Tk/2.3.2-foss-2023a x x x x x x GTDB-Tk/2.0.0-intel-2021b x x x - x x GTDB-Tk/1.7.0-intel-2020b - x x - x x GTDB-Tk/1.5.0-intel-2020b - x x - x x GTDB-Tk/1.3.0-intel-2020a-Python-3.8.2 - x x - x x GTDB-Tk/1.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GTK%2B/", "title": "GTK+", "text": ""}, {"location": "available_software/detail/GTK%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GTK+, load one of these modules using a module load command like:

              module load GTK+/3.24.23-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GTK+/3.24.23-GCCcore-10.2.0 x x x x x x GTK+/3.24.13-GCCcore-8.3.0 - x x - x x GTK+/2.24.33-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GTK2/", "title": "GTK2", "text": ""}, {"location": "available_software/detail/GTK2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GTK2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GTK2, load one of these modules using a module load command like:

              module load GTK2/2.24.33-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GTK2/2.24.33-GCCcore-11.3.0 x x x x x x GTK2/2.24.33-GCCcore-10.3.0 - - x - x -"}, {"location": "available_software/detail/GTK3/", "title": "GTK3", "text": ""}, {"location": "available_software/detail/GTK3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GTK3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GTK3, load one of these modules using a module load command like:

              module load GTK3/3.24.37-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GTK3/3.24.37-GCCcore-12.3.0 x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x GTK3/3.24.31-GCCcore-11.2.0 x x x x x x GTK3/3.24.29-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTK4/", "title": "GTK4", "text": ""}, {"location": "available_software/detail/GTK4/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GTK4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GTK4, load one of these modules using a module load command like:

              module load GTK4/4.7.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GTK4/4.7.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GTS/", "title": "GTS", "text": ""}, {"location": "available_software/detail/GTS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GTS, load one of these modules using a module load command like:

              module load GTS/0.7.6-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GTS/0.7.6-foss-2019b - x x - x x GTS/0.7.6-GCCcore-12.3.0 x x x x x x GTS/0.7.6-GCCcore-11.3.0 x x x x x x GTS/0.7.6-GCCcore-11.2.0 x x x x x x GTS/0.7.6-GCCcore-10.3.0 x x x x x x GTS/0.7.6-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/GUSHR/", "title": "GUSHR", "text": ""}, {"location": "available_software/detail/GUSHR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GUSHR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GUSHR, load one of these modules using a module load command like:

              module load GUSHR/2020-09-28-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GUSHR/2020-09-28-foss-2021b x x x x x x"}, {"location": "available_software/detail/GapFiller/", "title": "GapFiller", "text": ""}, {"location": "available_software/detail/GapFiller/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GapFiller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GapFiller, load one of these modules using a module load command like:

              module load GapFiller/2.1.2-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GapFiller/2.1.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Gaussian/", "title": "Gaussian", "text": ""}, {"location": "available_software/detail/Gaussian/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Gaussian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Gaussian, load one of these modules using a module load command like:

              module load Gaussian/g16_C.01-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Gaussian/g16_C.01-intel-2022a x x x x x x Gaussian/g16_C.01-intel-2019b - x x - x x Gaussian/g16_C.01-iimpi-2020b x x x x x x"}, {"location": "available_software/detail/Gblocks/", "title": "Gblocks", "text": ""}, {"location": "available_software/detail/Gblocks/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Gblocks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Gblocks, load one of these modules using a module load command like:

              module load Gblocks/0.91b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Gblocks/0.91b x x x x x x"}, {"location": "available_software/detail/Gdk-Pixbuf/", "title": "Gdk-Pixbuf", "text": ""}, {"location": "available_software/detail/Gdk-Pixbuf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Gdk-Pixbuf, load one of these modules using a module load command like:

              module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x Gdk-Pixbuf/2.42.8-GCCcore-11.3.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-11.2.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-10.3.0 x x x x x x Gdk-Pixbuf/2.40.0-GCCcore-10.2.0 x x x x x x Gdk-Pixbuf/2.38.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Geant4/", "title": "Geant4", "text": ""}, {"location": "available_software/detail/Geant4/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Geant4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Geant4, load one of these modules using a module load command like:

              module load Geant4/11.0.2-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Geant4/11.0.2-GCC-11.3.0 x x x x x x Geant4/11.0.2-GCC-11.2.0 x x x - x x Geant4/11.0.1-GCC-11.2.0 x x x x x x Geant4/10.7.1-GCC-11.2.0 x x x x x x Geant4/10.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/GeneMark-ET/", "title": "GeneMark-ET", "text": ""}, {"location": "available_software/detail/GeneMark-ET/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GeneMark-ET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GeneMark-ET, load one of these modules using a module load command like:

              module load GeneMark-ET/4.71-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GeneMark-ET/4.71-GCCcore-11.3.0 x x x x x x GeneMark-ET/4.71-GCCcore-11.2.0 x x x x x x GeneMark-ET/4.65-GCCcore-10.2.0 x x x x x x GeneMark-ET/4.57-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GenomeThreader/", "title": "GenomeThreader", "text": ""}, {"location": "available_software/detail/GenomeThreader/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GenomeThreader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GenomeThreader, load one of these modules using a module load command like:

              module load GenomeThreader/1.7.3-Linux_x86_64-64bit\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GenomeThreader/1.7.3-Linux_x86_64-64bit x x x x x x"}, {"location": "available_software/detail/GenomeWorks/", "title": "GenomeWorks", "text": ""}, {"location": "available_software/detail/GenomeWorks/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GenomeWorks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GenomeWorks, load one of these modules using a module load command like:

              module load GenomeWorks/2021.02.2-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GenomeWorks/2021.02.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Gerris/", "title": "Gerris", "text": ""}, {"location": "available_software/detail/Gerris/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Gerris installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Gerris, load one of these modules using a module load command like:

              module load Gerris/20131206-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Gerris/20131206-gompi-2023a x x x x x x"}, {"location": "available_software/detail/GetOrganelle/", "title": "GetOrganelle", "text": ""}, {"location": "available_software/detail/GetOrganelle/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GetOrganelle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GetOrganelle, load one of these modules using a module load command like:

              module load GetOrganelle/1.7.5.3-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GetOrganelle/1.7.5.3-foss-2021b x x x - x x GetOrganelle/1.7.4-pre2-foss-2020b - x x x x x GetOrganelle/1.7.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GffCompare/", "title": "GffCompare", "text": ""}, {"location": "available_software/detail/GffCompare/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GffCompare installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GffCompare, load one of these modules using a module load command like:

              module load GffCompare/0.12.6-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GffCompare/0.12.6-GCC-11.2.0 x x x x x x GffCompare/0.11.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Ghostscript/", "title": "Ghostscript", "text": ""}, {"location": "available_software/detail/Ghostscript/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Ghostscript, load one of these modules using a module load command like:

              module load Ghostscript/10.01.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x Ghostscript/9.56.1-GCCcore-11.3.0 x x x x x x Ghostscript/9.54.0-GCCcore-11.2.0 x x x x x x Ghostscript/9.54.0-GCCcore-10.3.0 x x x x x x Ghostscript/9.53.3-GCCcore-10.2.0 x x x x x x Ghostscript/9.52-GCCcore-9.3.0 - x x - x x Ghostscript/9.50-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GimmeMotifs/", "title": "GimmeMotifs", "text": ""}, {"location": "available_software/detail/GimmeMotifs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GimmeMotifs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GimmeMotifs, load one of these modules using a module load command like:

              module load GimmeMotifs/0.17.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GimmeMotifs/0.17.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Giotto-Suite/", "title": "Giotto-Suite", "text": ""}, {"location": "available_software/detail/Giotto-Suite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Giotto-Suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Giotto-Suite, load one of these modules using a module load command like:

              module load Giotto-Suite/3.0.1-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Giotto-Suite/3.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/GitPython/", "title": "GitPython", "text": ""}, {"location": "available_software/detail/GitPython/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GitPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GitPython, load one of these modules using a module load command like:

              module load GitPython/3.1.40-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GitPython/3.1.40-GCCcore-12.3.0 x x x x x x GitPython/3.1.31-GCCcore-12.2.0 x x x x x x GitPython/3.1.27-GCCcore-11.3.0 x x x x x x GitPython/3.1.24-GCCcore-11.2.0 x x x - x x GitPython/3.1.14-GCCcore-10.2.0 - x x x x x GitPython/3.1.9-GCCcore-9.3.0-Python-3.8.2 - x x - x x GitPython/3.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GlimmerHMM/", "title": "GlimmerHMM", "text": ""}, {"location": "available_software/detail/GlimmerHMM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GlimmerHMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GlimmerHMM, load one of these modules using a module load command like:

              module load GlimmerHMM/3.0.4c-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GlimmerHMM/3.0.4c-GCC-10.2.0 - x x x x x GlimmerHMM/3.0.4c-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GlobalArrays/", "title": "GlobalArrays", "text": ""}, {"location": "available_software/detail/GlobalArrays/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GlobalArrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GlobalArrays, load one of these modules using a module load command like:

              module load GlobalArrays/5.8-iomkl-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GlobalArrays/5.8-iomkl-2021a x x x x x x GlobalArrays/5.8-intel-2021a - x x - x x"}, {"location": "available_software/detail/GnuTLS/", "title": "GnuTLS", "text": ""}, {"location": "available_software/detail/GnuTLS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GnuTLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GnuTLS, load one of these modules using a module load command like:

              module load GnuTLS/3.7.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GnuTLS/3.7.3-GCCcore-11.2.0 x x x x x x GnuTLS/3.7.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Go/", "title": "Go", "text": ""}, {"location": "available_software/detail/Go/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Go installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Go, load one of these modules using a module load command like:

              module load Go/1.21.6\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Go/1.21.6 x x x x x x Go/1.21.2 x x x x x x Go/1.17.6 x x x - x x Go/1.17.3 - x x - x - Go/1.14 - - x - x -"}, {"location": "available_software/detail/Gradle/", "title": "Gradle", "text": ""}, {"location": "available_software/detail/Gradle/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Gradle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Gradle, load one of these modules using a module load command like:

              module load Gradle/8.6-Java-17\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Gradle/8.6-Java-17 x x x x x x"}, {"location": "available_software/detail/GraphMap/", "title": "GraphMap", "text": ""}, {"location": "available_software/detail/GraphMap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GraphMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GraphMap, load one of these modules using a module load command like:

              module load GraphMap/0.5.2-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GraphMap/0.5.2-foss-2019b - - x - x x"}, {"location": "available_software/detail/GraphMap2/", "title": "GraphMap2", "text": ""}, {"location": "available_software/detail/GraphMap2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GraphMap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GraphMap2, load one of these modules using a module load command like:

              module load GraphMap2/0.6.4-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GraphMap2/0.6.4-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphene/", "title": "Graphene", "text": ""}, {"location": "available_software/detail/Graphene/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Graphene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Graphene, load one of these modules using a module load command like:

              module load Graphene/1.10.8-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Graphene/1.10.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GraphicsMagick/", "title": "GraphicsMagick", "text": ""}, {"location": "available_software/detail/GraphicsMagick/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GraphicsMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GraphicsMagick, load one of these modules using a module load command like:

              module load GraphicsMagick/1.3.34-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GraphicsMagick/1.3.34-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphviz/", "title": "Graphviz", "text": ""}, {"location": "available_software/detail/Graphviz/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Graphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Graphviz, load one of these modules using a module load command like:

              module load Graphviz/8.1.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Graphviz/8.1.0-GCCcore-12.3.0 x x x x x x Graphviz/5.0.0-GCCcore-11.3.0 x x x x x x Graphviz/2.50.0-GCCcore-11.2.0 x x x x x x Graphviz/2.47.2-GCCcore-10.3.0 x x x x x x Graphviz/2.47.0-GCCcore-10.2.0-Java-11 - x x x x x Graphviz/2.42.2-foss-2019b-Python-3.7.4 - x x - x x Graphviz/2.42.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Greenlet/", "title": "Greenlet", "text": ""}, {"location": "available_software/detail/Greenlet/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Greenlet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Greenlet, load one of these modules using a module load command like:

              module load Greenlet/2.0.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Greenlet/2.0.2-foss-2022b x x x x x x Greenlet/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/GroIMP/", "title": "GroIMP", "text": ""}, {"location": "available_software/detail/GroIMP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which GroIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using GroIMP, load one of these modules using a module load command like:

              module load GroIMP/1.5-Java-1.8\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty GroIMP/1.5-Java-1.8 - x x - x x"}, {"location": "available_software/detail/Guile/", "title": "Guile", "text": ""}, {"location": "available_software/detail/Guile/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Guile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Guile, load one of these modules using a module load command like:

              module load Guile/3.0.7-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Guile/3.0.7-GCCcore-11.2.0 x x x x x x Guile/2.2.7-GCCcore-10.3.0 - x x - x x Guile/1.8.8-GCCcore-9.3.0 - x x - x x Guile/1.8.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Guppy/", "title": "Guppy", "text": ""}, {"location": "available_software/detail/Guppy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Guppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Guppy, load one of these modules using a module load command like:

              module load Guppy/6.5.7-gpu\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Guppy/6.5.7-gpu x - x - x - Guppy/6.5.7-cpu x x - x - x Guppy/6.4.6-gpu x - x - x - Guppy/6.4.6-cpu - x x x x x Guppy/6.4.2-gpu x - - - x - Guppy/6.4.2-cpu - x x - x x Guppy/6.3.8-gpu x - - - x - Guppy/6.3.8-cpu - x x - x x Guppy/6.3.7-gpu x - - - x - Guppy/6.3.7-cpu - x x - x x Guppy/6.1.7-gpu x - - - x - Guppy/6.1.7-cpu - x x - x x Guppy/6.1.2-gpu x - - - x - Guppy/6.1.2-cpu - x x - x x Guppy/6.0.1-gpu x - - - x - Guppy/6.0.1-cpu - x x - x x Guppy/5.0.16-gpu x - - - x - Guppy/5.0.16-cpu - x x - x - Guppy/5.0.15-gpu x - - - x - Guppy/5.0.15-cpu - x x - x x Guppy/5.0.14-gpu - - - - x - Guppy/5.0.14-cpu - x x - x x Guppy/5.0.11-gpu - - - - x - Guppy/5.0.11-cpu - x x - x x Guppy/5.0.7-gpu - - - - x - Guppy/5.0.7-cpu - x x - x x Guppy/4.4.1-cpu - x x - x - Guppy/4.2.2-cpu - x x - x - Guppy/4.0.15-cpu - x x - x - Guppy/3.5.2-cpu - - x - x -"}, {"location": "available_software/detail/Gurobi/", "title": "Gurobi", "text": ""}, {"location": "available_software/detail/Gurobi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Gurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Gurobi, load one of these modules using a module load command like:

              module load Gurobi/11.0.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Gurobi/11.0.0-GCCcore-12.3.0 x x x x x x Gurobi/9.5.2-GCCcore-11.3.0 x x x x x x Gurobi/9.5.0-GCCcore-11.2.0 x x x x x x Gurobi/9.1.1-GCCcore-10.2.0 - x x x x x Gurobi/9.1.0 - x x - x -"}, {"location": "available_software/detail/HAL/", "title": "HAL", "text": ""}, {"location": "available_software/detail/HAL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HAL, load one of these modules using a module load command like:

              module load HAL/2.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HAL/2.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/HDBSCAN/", "title": "HDBSCAN", "text": ""}, {"location": "available_software/detail/HDBSCAN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HDBSCAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HDBSCAN, load one of these modules using a module load command like:

              module load HDBSCAN/0.8.29-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HDBSCAN/0.8.29-foss-2022a x x x x x x"}, {"location": "available_software/detail/HDDM/", "title": "HDDM", "text": ""}, {"location": "available_software/detail/HDDM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HDDM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HDDM, load one of these modules using a module load command like:

              module load HDDM/0.7.5-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HDDM/0.7.5-intel-2019b-Python-3.7.4 - x - - - x HDDM/0.7.5-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/HDF/", "title": "HDF", "text": ""}, {"location": "available_software/detail/HDF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HDF, load one of these modules using a module load command like:

              module load HDF/4.2.16-2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x HDF/4.2.15-GCCcore-11.3.0 x x x x x x HDF/4.2.15-GCCcore-11.2.0 x x x x x x HDF/4.2.15-GCCcore-10.3.0 x x x x x x HDF/4.2.15-GCCcore-10.2.0 - x x x x x HDF/4.2.15-GCCcore-9.3.0 - - x - x x HDF/4.2.14-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/HDF5/", "title": "HDF5", "text": ""}, {"location": "available_software/detail/HDF5/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HDF5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HDF5, load one of these modules using a module load command like:

              module load HDF5/1.14.0-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HDF5/1.14.0-gompi-2023a x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x HDF5/1.13.1-gompi-2022a x x x - x x HDF5/1.12.2-iimpi-2022a x x x x x x HDF5/1.12.2-gompi-2022a x x x x x x HDF5/1.12.1-iimpi-2021b x x x x x x HDF5/1.12.1-gompi-2021b x x x x x x HDF5/1.10.8-gompi-2021b x x x - x x HDF5/1.10.7-iompi-2021a x x x x x x HDF5/1.10.7-iimpi-2021a - x x - x x HDF5/1.10.7-iimpi-2020b - x x x x x HDF5/1.10.7-gompic-2020b x - - - x - HDF5/1.10.7-gompi-2021a x x x x x x HDF5/1.10.7-gompi-2020b x x x x x x HDF5/1.10.6-iimpi-2020a x x x x x x HDF5/1.10.6-gompi-2020a - x x - x x HDF5/1.10.5-iimpi-2019b - x x - x x HDF5/1.10.5-gompi-2019b x x x - x x"}, {"location": "available_software/detail/HH-suite/", "title": "HH-suite", "text": ""}, {"location": "available_software/detail/HH-suite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HH-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HH-suite, load one of these modules using a module load command like:

              module load HH-suite/3.3.0-gompic-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HH-suite/3.3.0-gompic-2020b x - - - x - HH-suite/3.3.0-gompi-2022a x x x x x x HH-suite/3.3.0-gompi-2021b x - x - x - HH-suite/3.3.0-gompi-2021a x x x - x x HH-suite/3.3.0-gompi-2020b - x x x x x HH-suite/3.2.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/HISAT2/", "title": "HISAT2", "text": ""}, {"location": "available_software/detail/HISAT2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HISAT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HISAT2, load one of these modules using a module load command like:

              module load HISAT2/2.2.1-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HISAT2/2.2.1-gompi-2022a x x x x x x HISAT2/2.2.1-gompi-2021b x x x x x x HISAT2/2.2.1-gompi-2020b - x x x x x"}, {"location": "available_software/detail/HMMER/", "title": "HMMER", "text": ""}, {"location": "available_software/detail/HMMER/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HMMER installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HMMER, load one of these modules using a module load command like:

              module load HMMER/3.4-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HMMER/3.4-gompi-2023a x x x x x x HMMER/3.3.2-iimpi-2021b x x x - x x HMMER/3.3.2-iimpi-2020b - x x x x x HMMER/3.3.2-gompic-2020b x - - - x - HMMER/3.3.2-gompi-2022b x x x x x x HMMER/3.3.2-gompi-2022a x x x x x x HMMER/3.3.2-gompi-2021b x x x - x x HMMER/3.3.2-gompi-2021a x x x - x x HMMER/3.3.2-gompi-2020b x x x x x x HMMER/3.3.2-gompi-2020a - x x - x x HMMER/3.3.2-gompi-2019b - x x - x x HMMER/3.3.1-iimpi-2020a - x x - x x HMMER/3.3.1-gompi-2020a - x x - x x HMMER/3.2.1-iimpi-2019b - x x - x x HMMER/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/HMMER2/", "title": "HMMER2", "text": ""}, {"location": "available_software/detail/HMMER2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HMMER2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HMMER2, load one of these modules using a module load command like:

              module load HMMER2/2.3.2-GCC-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HMMER2/2.3.2-GCC-10.3.0 - x x - x x HMMER2/2.3.2-GCC-10.2.0 - x x x x x HMMER2/2.3.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HPL/", "title": "HPL", "text": ""}, {"location": "available_software/detail/HPL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HPL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HPL, load one of these modules using a module load command like:

              module load HPL/2.3-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HPL/2.3-intel-2019b - x x - x x HPL/2.3-iibff-2020b - x - - - - HPL/2.3-gobff-2020b - x - - - - HPL/2.3-foss-2023b x x x x x x HPL/2.3-foss-2019b - x x - x x HPL/2.0.15-intel-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/HTSeq/", "title": "HTSeq", "text": ""}, {"location": "available_software/detail/HTSeq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HTSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HTSeq, load one of these modules using a module load command like:

              module load HTSeq/2.0.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HTSeq/2.0.2-foss-2022a x x x x x x HTSeq/0.11.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/HTSlib/", "title": "HTSlib", "text": ""}, {"location": "available_software/detail/HTSlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HTSlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HTSlib, load one of these modules using a module load command like:

              module load HTSlib/1.18-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HTSlib/1.18-GCC-12.3.0 x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x HTSlib/1.15.1-GCC-11.3.0 x x x x x x HTSlib/1.14-GCC-11.2.0 x x x x x x HTSlib/1.12-GCC-10.3.0 x x x - x x HTSlib/1.12-GCC-10.2.0 - x x - x x HTSlib/1.11-GCC-10.2.0 x x x x x x HTSlib/1.10.2-iccifort-2019.5.281 - x x - x x HTSlib/1.10.2-GCC-9.3.0 - x x - x x HTSlib/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HTSplotter/", "title": "HTSplotter", "text": ""}, {"location": "available_software/detail/HTSplotter/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HTSplotter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HTSplotter, load one of these modules using a module load command like:

              module load HTSplotter/2.11-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HTSplotter/2.11-foss-2022b x x x x x x HTSplotter/0.15-foss-2022a x x x x x x"}, {"location": "available_software/detail/Hadoop/", "title": "Hadoop", "text": ""}, {"location": "available_software/detail/Hadoop/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Hadoop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Hadoop, load one of these modules using a module load command like:

              module load Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8 - - x - x - Hadoop/2.10.0-GCCcore-10.2.0-native - x - - - - Hadoop/2.10.0-GCCcore-8.3.0-native - x x - x x"}, {"location": "available_software/detail/HarfBuzz/", "title": "HarfBuzz", "text": ""}, {"location": "available_software/detail/HarfBuzz/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HarfBuzz, load one of these modules using a module load command like:

              module load HarfBuzz/5.3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x HarfBuzz/4.2.1-GCCcore-11.3.0 x x x x x x HarfBuzz/2.8.2-GCCcore-11.2.0 x x x x x x HarfBuzz/2.8.1-GCCcore-10.3.0 x x x x x x HarfBuzz/2.6.7-GCCcore-10.2.0 x x x x x x HarfBuzz/2.6.4-GCCcore-9.3.0 - x x - x x HarfBuzz/2.6.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/HiCExplorer/", "title": "HiCExplorer", "text": ""}, {"location": "available_software/detail/HiCExplorer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HiCExplorer, load one of these modules using a module load command like:

              module load HiCExplorer/3.7.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HiCExplorer/3.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/HiCMatrix/", "title": "HiCMatrix", "text": ""}, {"location": "available_software/detail/HiCMatrix/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HiCMatrix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HiCMatrix, load one of these modules using a module load command like:

              module load HiCMatrix/17-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HiCMatrix/17-foss-2022a x x x x x x"}, {"location": "available_software/detail/HighFive/", "title": "HighFive", "text": ""}, {"location": "available_software/detail/HighFive/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HighFive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HighFive, load one of these modules using a module load command like:

              module load HighFive/2.7.1-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HighFive/2.7.1-gompi-2023a x x x x x x"}, {"location": "available_software/detail/Highway/", "title": "Highway", "text": ""}, {"location": "available_software/detail/Highway/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Highway installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Highway, load one of these modules using a module load command like:

              module load Highway/1.0.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Highway/1.0.4-GCCcore-12.3.0 x x x x x x Highway/1.0.4-GCCcore-11.3.0 x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x Highway/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Horovod/", "title": "Horovod", "text": ""}, {"location": "available_software/detail/Horovod/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Horovod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Horovod, load one of these modules using a module load command like:

              module load Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0\n
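
              After loading a Horovod module, horovodrun can report which deep-learning frameworks and communication controllers it was built with, which is a useful sanity check before submitting a GPU job (no GPU is needed to run this command):

              horovodrun --check-build\n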

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Horovod/0.23.0-foss-2021a-CUDA-11.3.1-PyTorch-1.10.0 x - - - - - Horovod/0.22.0-fosscuda-2020b-PyTorch-1.8.1 x - - - - - Horovod/0.21.3-fosscuda-2020b-PyTorch-1.7.1 x - - - x - Horovod/0.21.1-fosscuda-2020b-TensorFlow-2.4.1 x - - - x -"}, {"location": "available_software/detail/HyPo/", "title": "HyPo", "text": ""}, {"location": "available_software/detail/HyPo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which HyPo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using HyPo, load one of these modules using a module load command like:

              module load HyPo/1.0.3-GCC-8.3.0-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty HyPo/1.0.3-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/Hybpiper/", "title": "Hybpiper", "text": ""}, {"location": "available_software/detail/Hybpiper/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Hybpiper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Hybpiper, load one of these modules using a module load command like:

              module load Hybpiper/2.1.6-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Hybpiper/2.1.6-foss-2022b x x x x x x"}, {"location": "available_software/detail/Hydra/", "title": "Hydra", "text": ""}, {"location": "available_software/detail/Hydra/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Hydra installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Hydra, load one of these modules using a module load command like:

              module load Hydra/1.1.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Hydra/1.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Hyperopt/", "title": "Hyperopt", "text": ""}, {"location": "available_software/detail/Hyperopt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Hyperopt, load one of these modules using a module load command like:

              module load Hyperopt/0.2.7-foss-2022a\n
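
              Hyperopt is a Python package, so a quick import test confirms that the module works with the Python it was built against (a minimal sketch, assuming you use the python made available via the module's toolchain):

              python -c 'import hyperopt; print(hyperopt.__version__)'\n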

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Hyperopt/0.2.7-foss-2022a x x x x x x Hyperopt/0.2.7-foss-2021a x x x - x x"}, {"location": "available_software/detail/Hypre/", "title": "Hypre", "text": ""}, {"location": "available_software/detail/Hypre/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Hypre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Hypre, load one of these modules using a module load command like:

              module load Hypre/2.25.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Hypre/2.25.0-foss-2022a x x x x x x Hypre/2.24.0-intel-2021b x x x x x x Hypre/2.21.0-foss-2021a - x x - x x Hypre/2.20.0-foss-2020b - x x x x x Hypre/2.18.2-intel-2019b - x x - x x Hypre/2.18.2-foss-2020a - x x - x x Hypre/2.18.2-foss-2019b x x x - x x"}, {"location": "available_software/detail/ICU/", "title": "ICU", "text": ""}, {"location": "available_software/detail/ICU/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ICU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ICU, load one of these modules using a module load command like:

              module load ICU/73.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ICU/73.2-GCCcore-12.3.0 x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x ICU/71.1-GCCcore-11.3.0 x x x x x x ICU/69.1-GCCcore-11.2.0 x x x x x x ICU/69.1-GCCcore-10.3.0 x x x x x x ICU/67.1-GCCcore-10.2.0 x x x x x x ICU/66.1-GCCcore-9.3.0 - x x - x x ICU/64.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/IDBA-UD/", "title": "IDBA-UD", "text": ""}, {"location": "available_software/detail/IDBA-UD/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IDBA-UD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IDBA-UD, load one of these modules using a module load command like:

              module load IDBA-UD/1.1.3-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IDBA-UD/1.1.3-GCC-11.2.0 x x x - x x IDBA-UD/1.1.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/IGMPlot/", "title": "IGMPlot", "text": ""}, {"location": "available_software/detail/IGMPlot/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IGMPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IGMPlot, load one of these modules using a module load command like:

              module load IGMPlot/2.4.2-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IGMPlot/2.4.2-iccifort-2019.5.281 - x - - - - IGMPlot/2.4.2-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/IGV/", "title": "IGV", "text": ""}, {"location": "available_software/detail/IGV/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IGV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IGV, load one of these modules using a module load command like:

              module load IGV/2.9.4-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IGV/2.9.4-Java-11 - x x - x x IGV/2.8.0-Java-11 - x x - x x"}, {"location": "available_software/detail/IOR/", "title": "IOR", "text": ""}, {"location": "available_software/detail/IOR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IOR, load one of these modules using a module load command like:

              module load IOR/3.2.1-gompi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IOR/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/IPython/", "title": "IPython", "text": ""}, {"location": "available_software/detail/IPython/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IPython, load one of these modules using a module load command like:

              module load IPython/8.14.0-GCCcore-12.3.0\n
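
              After loading an IPython module you can start an interactive shell with the ipython command, or first confirm which version is active (a minimal check):

              ipython --version\n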

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IPython/8.14.0-GCCcore-12.3.0 x x x x x x IPython/8.14.0-GCCcore-12.2.0 x x x x x x IPython/8.5.0-GCCcore-11.3.0 x x x x x x IPython/7.26.0-GCCcore-11.2.0 x x x x x x IPython/7.25.0-GCCcore-10.3.0 x x x x x x IPython/7.18.1-GCCcore-10.2.0 x x x x x x IPython/7.15.0-intel-2020a-Python-3.8.2 x x x x x x IPython/7.15.0-foss-2020a-Python-3.8.2 - x x - x x IPython/7.9.0-intel-2019b-Python-3.7.4 - x x - x x IPython/7.9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/IQ-TREE/", "title": "IQ-TREE", "text": ""}, {"location": "available_software/detail/IQ-TREE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IQ-TREE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IQ-TREE, load one of these modules using a module load command like:

              module load IQ-TREE/2.2.2.6-gompi-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IQ-TREE/2.2.2.6-gompi-2022b x x x x x x IQ-TREE/2.2.2.6-gompi-2022a x x x x x x IQ-TREE/2.2.2.3-gompi-2022a x x x x x x IQ-TREE/2.2.1-gompi-2021b x x x - x x IQ-TREE/1.6.12-intel-2019b - x x - x x"}, {"location": "available_software/detail/IRkernel/", "title": "IRkernel", "text": ""}, {"location": "available_software/detail/IRkernel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IRkernel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IRkernel, load one of these modules using a module load command like:

              module load IRkernel/1.2-foss-2021a-R-4.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IRkernel/1.2-foss-2021a-R-4.1.0 - x x - x x IRkernel/1.1-foss-2019b-R-3.6.2-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ISA-L/", "title": "ISA-L", "text": ""}, {"location": "available_software/detail/ISA-L/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ISA-L installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ISA-L, load one of these modules using a module load command like:

              module load ISA-L/2.30.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ISA-L/2.30.0-GCCcore-11.3.0 x x x x x x ISA-L/2.30.0-GCCcore-11.2.0 x x x - x x ISA-L/2.30.0-GCCcore-10.3.0 x x x - x x ISA-L/2.30.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ITK/", "title": "ITK", "text": ""}, {"location": "available_software/detail/ITK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ITK, load one of these modules using a module load command like:

              module load ITK/5.2.1-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ITK/5.2.1-fosscuda-2020b x - - - x - ITK/5.2.1-foss-2022a x x x x x x ITK/5.2.1-foss-2020b - x x x x x ITK/5.1.2-fosscuda-2020b - - - - x - ITK/5.0.1-foss-2019b-Python-3.7.4 - x x - x x ITK/4.13.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ImageMagick/", "title": "ImageMagick", "text": ""}, {"location": "available_software/detail/ImageMagick/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ImageMagick, load one of these modules using a module load command like:

              module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n
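
              ImageMagick 7 uses the magick command as its main entry point; after loading the module, a quick check of which installation is on your PATH could look like this (a minimal example):

              magick -version\n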

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x ImageMagick/7.1.0-37-GCCcore-11.3.0 x x x x x x ImageMagick/7.1.0-4-GCCcore-11.2.0 x x x x x x ImageMagick/7.0.11-14-GCCcore-10.3.0 x x x x x x ImageMagick/7.0.10-35-GCCcore-10.2.0 x x x x x x ImageMagick/7.0.10-1-GCCcore-9.3.0 - x x - x x ImageMagick/7.0.9-5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Imath/", "title": "Imath", "text": ""}, {"location": "available_software/detail/Imath/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Imath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Imath, load one of these modules using a module load command like:

              module load Imath/3.1.7-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Imath/3.1.7-GCCcore-12.3.0 x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x Imath/3.1.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Inferelator/", "title": "Inferelator", "text": ""}, {"location": "available_software/detail/Inferelator/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Inferelator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Inferelator, load one of these modules using a module load command like:

              module load Inferelator/0.6.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Inferelator/0.6.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/Infernal/", "title": "Infernal", "text": ""}, {"location": "available_software/detail/Infernal/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Infernal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Infernal, load one of these modules using a module load command like:

              module load Infernal/1.1.4-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Infernal/1.1.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/InterProScan/", "title": "InterProScan", "text": ""}, {"location": "available_software/detail/InterProScan/#available-modules", "title": "Available modules", "text": "

              The overview below shows which InterProScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using InterProScan, load one of these modules using a module load command like:

              module load InterProScan/5.62-94.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty InterProScan/5.62-94.0-foss-2022b x x x x x x InterProScan/5.52-86.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/IonQuant/", "title": "IonQuant", "text": ""}, {"location": "available_software/detail/IonQuant/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IonQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IonQuant, load one of these modules using a module load command like:

              module load IonQuant/1.10.12-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IonQuant/1.10.12-Java-11 x x x x x x"}, {"location": "available_software/detail/IsoQuant/", "title": "IsoQuant", "text": ""}, {"location": "available_software/detail/IsoQuant/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IsoQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IsoQuant, load one of these modules using a module load command like:

              module load IsoQuant/3.3.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IsoQuant/3.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/IsoSeq/", "title": "IsoSeq", "text": ""}, {"location": "available_software/detail/IsoSeq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which IsoSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using IsoSeq, load one of these modules using a module load command like:

              module load IsoSeq/4.0.0-linux-x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty IsoSeq/4.0.0-linux-x86_64 x x x x x x IsoSeq/3.8.2-linux-x86_64 x x x x x x"}, {"location": "available_software/detail/JAGS/", "title": "JAGS", "text": ""}, {"location": "available_software/detail/JAGS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which JAGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using JAGS, load one of these modules using a module load command like:

              module load JAGS/4.3.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty JAGS/4.3.2-foss-2022b x x x x x x JAGS/4.3.1-foss-2022a x x x x x x JAGS/4.3.0-foss-2021b x x x - x x JAGS/4.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/JSON-GLib/", "title": "JSON-GLib", "text": ""}, {"location": "available_software/detail/JSON-GLib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which JSON-GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using JSON-GLib, load one of these modules using a module load command like:

              module load JSON-GLib/1.6.2-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty JSON-GLib/1.6.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Jansson/", "title": "Jansson", "text": ""}, {"location": "available_software/detail/Jansson/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Jansson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Jansson, load one of these modules using a module load command like:

              module load Jansson/2.13.1-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Jansson/2.13.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/JasPer/", "title": "JasPer", "text": ""}, {"location": "available_software/detail/JasPer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which JasPer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using JasPer, load one of these modules using a module load command like:

              module load JasPer/4.0.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty JasPer/4.0.0-GCCcore-12.3.0 x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x JasPer/2.0.33-GCCcore-11.3.0 x x x x x x JasPer/2.0.33-GCCcore-11.2.0 x x x x x x JasPer/2.0.28-GCCcore-10.3.0 x x x x x x JasPer/2.0.24-GCCcore-10.2.0 x x x x x x JasPer/2.0.14-GCCcore-9.3.0 - x x - x x JasPer/2.0.14-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Java/", "title": "Java", "text": ""}, {"location": "available_software/detail/Java/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Java, load one of these modules using a module load command like:

              module load Java/17.0.6\n
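
              After loading a Java module, the java command points to that JDK. A quick check (note the single dash, which works on every version listed below):

              java -version\n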

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Java/17.0.6 x x x x x x Java/17(@Java/17.0.6) x x x x x x Java/13.0.2 - x x - x x Java/13(@Java/13.0.2) - x x - x x Java/11.0.20 x x x x x x Java/11.0.18 x - - x x - Java/11.0.16 x x x x x x Java/11.0.2 x x x - x x Java/11(@Java/11.0.20) x x x x x x Java/1.8.0_311 x - x x x x Java/1.8.0_241 - x - - - - Java/1.8.0_221 - x - - - - Java/1.8(@Java/1.8.0_311) x - x x x x Java/1.8(@Java/1.8.0_241) - x - - - -"}, {"location": "available_software/detail/Jellyfish/", "title": "Jellyfish", "text": ""}, {"location": "available_software/detail/Jellyfish/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Jellyfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Jellyfish, load one of these modules using a module load command like:

              module load Jellyfish/2.3.0-GCC-11.3.0\n
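
              After loading, the jellyfish binary should be on your PATH; a minimal sanity check (assuming the standard command-line interface of Jellyfish 2):

              jellyfish --version\n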

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Jellyfish/2.3.0-GCC-11.3.0 x x x x x x Jellyfish/2.3.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/JsonCpp/", "title": "JsonCpp", "text": ""}, {"location": "available_software/detail/JsonCpp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using JsonCpp, load one of these modules using a module load command like:

              module load JsonCpp/1.9.5-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x JsonCpp/1.9.5-GCCcore-12.2.0 x x x x x x JsonCpp/1.9.5-GCCcore-11.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-11.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-9.3.0 - x x - x x JsonCpp/1.9.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Judy/", "title": "Judy", "text": ""}, {"location": "available_software/detail/Judy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Judy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Judy, load one of these modules using a module load command like:

              module load Judy/1.0.5-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Judy/1.0.5-GCCcore-11.3.0 x x x x x x Judy/1.0.5-GCCcore-11.2.0 x x x x x x Judy/1.0.5-GCCcore-10.3.0 x x x - x x Judy/1.0.5-GCCcore-10.2.0 - x x x x x Judy/1.0.5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Julia/", "title": "Julia", "text": ""}, {"location": "available_software/detail/Julia/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Julia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Julia, load one of these modules using a module load command like:

              module load Julia/1.9.3-linux-x86_64\n
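
              After loading a Julia module, a quick check that the expected Julia is first on your PATH (a minimal example):

              julia --version\n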

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Julia/1.9.3-linux-x86_64 x x x x x x Julia/1.7.2-linux-x86_64 x x x x x x Julia/1.6.2-linux-x86_64 - x x - x x"}, {"location": "available_software/detail/JupyterHub/", "title": "JupyterHub", "text": ""}, {"location": "available_software/detail/JupyterHub/#available-modules", "title": "Available modules", "text": "

              The overview below shows which JupyterHub installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using JupyterHub, load one of these modules using a module load command like:

              module load JupyterHub/4.0.1-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty JupyterHub/4.0.1-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/JupyterLab/", "title": "JupyterLab", "text": ""}, {"location": "available_software/detail/JupyterLab/#available-modules", "title": "Available modules", "text": "

              The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using JupyterLab, load one of these modules using a module load command like:

              module load JupyterLab/4.0.5-GCCcore-12.3.0\n
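
              After loading, you can confirm which JupyterLab version the module provides before starting a server (a minimal check):

              jupyter lab --version\n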

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x JupyterLab/4.0.3-GCCcore-12.2.0 x x x x x x JupyterLab/3.5.0-GCCcore-11.3.0 x x x x x x JupyterLab/3.1.6-GCCcore-11.2.0 x x x - x x JupyterLab/3.0.16-GCCcore-10.3.0 x - x - x - JupyterLab/2.2.8-GCCcore-10.2.0 x x x x x x JupyterLab/1.2.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/JupyterNotebook/", "title": "JupyterNotebook", "text": ""}, {"location": "available_software/detail/JupyterNotebook/#available-modules", "title": "Available modules", "text": "

              The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using JupyterNotebook, load one of these modules using a module load command like:

              module load JupyterNotebook/7.0.3-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty JupyterNotebook/7.0.3-GCCcore-12.2.0 x x x x x x JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x JupyterNotebook/6.4.12-SAGE-10.2 x x x x x x JupyterNotebook/6.4.12-SAGE-10.1 x x x x x x JupyterNotebook/6.4.12-SAGE-9.8 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.2.0-IPython-7.26.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-10.3.0-IPython-7.25.0 x x x x x x JupyterNotebook/6.1.4-GCCcore-10.2.0-IPython-7.18.1 x x x x x x JupyterNotebook/6.0.3-intel-2020a-Python-3.8.2-IPython-7.15.0 x x x x x x JupyterNotebook/6.0.3-foss-2020a-Python-3.8.2-IPython-7.15.0 - x x - x x JupyterNotebook/6.0.2-intel-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x JupyterNotebook/6.0.2-foss-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x"}, {"location": "available_software/detail/KMC/", "title": "KMC", "text": ""}, {"location": "available_software/detail/KMC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which KMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using KMC, load one of these modules using a module load command like:

              module load KMC/3.2.1-GCC-11.2.0-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty KMC/3.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x KMC/3.2.1-GCC-11.2.0 x x x - x x KMC/3.1.2rc1-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/KaHIP/", "title": "KaHIP", "text": ""}, {"location": "available_software/detail/KaHIP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which KaHIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using KaHIP, load one of these modules using a module load command like:

              module load KaHIP/3.14-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty KaHIP/3.14-gompi-2022a - - - x - -"}, {"location": "available_software/detail/Kaleido/", "title": "Kaleido", "text": ""}, {"location": "available_software/detail/Kaleido/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Kaleido installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Kaleido, load one of these modules using a module load command like:

              module load Kaleido/0.1.0-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Kaleido/0.1.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Kalign/", "title": "Kalign", "text": ""}, {"location": "available_software/detail/Kalign/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Kalign installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Kalign, load one of these modules using a module load command like:

              module load Kalign/3.3.5-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Kalign/3.3.5-GCCcore-11.3.0 x x x x x x Kalign/3.3.2-GCCcore-11.2.0 x - x - x - Kalign/3.3.1-GCCcore-10.3.0 x x x - x x Kalign/3.3.1-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Kent_tools/", "title": "Kent_tools", "text": ""}, {"location": "available_software/detail/Kent_tools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Kent_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Kent_tools, load one of these modules using a module load command like:

              module load Kent_tools/20190326-linux.x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Kent_tools/20190326-linux.x86_64 - - x - x - Kent_tools/422-GCC-11.2.0 x x x x x x Kent_tools/411-GCC-10.2.0 - x x x x x Kent_tools/401-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Keras/", "title": "Keras", "text": ""}, {"location": "available_software/detail/Keras/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Keras installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Keras, load one of these modules using a module load command like:

              module load Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0\n
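
              Keras is provided as a Python package on top of the TensorFlow from the same toolchain, so a quick import test shows that both are usable (a minimal sketch, assuming you use the python made available through the module's dependencies):

              python -c 'import keras; print(keras.__version__)'\n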

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Keras/2.4.3-fosscuda-2020b - - - - x - Keras/2.4.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/KerasTuner/", "title": "KerasTuner", "text": ""}, {"location": "available_software/detail/KerasTuner/#available-modules", "title": "Available modules", "text": "

              The overview below shows which KerasTuner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using KerasTuner, load one of these modules using a module load command like:

              module load KerasTuner/1.3.5-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty KerasTuner/1.3.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/Kraken/", "title": "Kraken", "text": ""}, {"location": "available_software/detail/Kraken/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Kraken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Kraken, load one of these modules using a module load command like:

              module load Kraken/1.1.1-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Kraken/1.1.1-GCCcore-10.2.0 - x x x x x Kraken/1.1.1-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/Kraken2/", "title": "Kraken2", "text": ""}, {"location": "available_software/detail/Kraken2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Kraken2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Kraken2, load one of these modules using a module load command like:

              module load Kraken2/2.1.2-gompi-2021a\n
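
              After loading, the kraken2 wrapper script is available; a quick check (note that a Kraken2 database is not part of the module, so you still need to build or download one before classifying reads):

              kraken2 --version\n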

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Kraken2/2.1.2-gompi-2021a - x x x x x Kraken2/2.0.9-beta-gompi-2020a-Perl-5.30.2 - x x - x x"}, {"location": "available_software/detail/KrakenUniq/", "title": "KrakenUniq", "text": ""}, {"location": "available_software/detail/KrakenUniq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which KrakenUniq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using KrakenUniq, load one of these modules using a module load command like:

              module load KrakenUniq/1.0.3-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty KrakenUniq/1.0.3-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/KronaTools/", "title": "KronaTools", "text": ""}, {"location": "available_software/detail/KronaTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which KronaTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using KronaTools, load one of these modules using a module load command like:

              module load KronaTools/2.8.1-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x KronaTools/2.8.1-GCCcore-11.3.0 x x x x x x KronaTools/2.8-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/LAME/", "title": "LAME", "text": ""}, {"location": "available_software/detail/LAME/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LAME, load one of these modules using a module load command like:

              module load LAME/3.100-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LAME/3.100-GCCcore-12.3.0 x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x LAME/3.100-GCCcore-11.3.0 x x x x x x LAME/3.100-GCCcore-11.2.0 x x x x x x LAME/3.100-GCCcore-10.3.0 x x x x x x LAME/3.100-GCCcore-10.2.0 x x x x x x LAME/3.100-GCCcore-9.3.0 - x x - x x LAME/3.100-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/LAMMPS/", "title": "LAMMPS", "text": ""}, {"location": "available_software/detail/LAMMPS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LAMMPS, load one of these modules using a module load command like:

              module load LAMMPS/patch_20Nov2019-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LAMMPS/patch_20Nov2019-intel-2019b - x - - - - LAMMPS/23Jun2022-foss-2021b-kokkos-CUDA-11.4.1 x - - - x - LAMMPS/23Jun2022-foss-2021b-kokkos x x x - x x LAMMPS/23Jun2022-foss-2021a-kokkos - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos-OCTP - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos - - x - x x LAMMPS/7Aug2019-foss-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-intel-2020a-Python-3.8.2-kokkos - x x - x x LAMMPS/3Mar2020-intel-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-foss-2019b-Python-3.7.4-kokkos - x x - x x"}, {"location": "available_software/detail/LAST/", "title": "LAST", "text": ""}, {"location": "available_software/detail/LAST/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LAST, load one of these modules using a module load command like:

              module load LAST/1179-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LAST/1179-GCC-10.2.0 - x x x x x LAST/1045-intel-2019b - x x - x x"}, {"location": "available_software/detail/LASTZ/", "title": "LASTZ", "text": ""}, {"location": "available_software/detail/LASTZ/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LASTZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LASTZ, load one of these modules using a module load command like:

              module load LASTZ/1.04.22-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LASTZ/1.04.22-GCC-12.3.0 x x x x x x LASTZ/1.04.03-foss-2019b - x x - x x"}, {"location": "available_software/detail/LDC/", "title": "LDC", "text": ""}, {"location": "available_software/detail/LDC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LDC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LDC, load one of these modules using a module load command like:

              module load LDC/1.30.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LDC/1.30.0-GCCcore-11.3.0 x x x x x x LDC/1.25.1-GCCcore-10.2.0 - x x x x x LDC/1.24.0-x86_64 x x x x x x LDC/0.17.6-x86_64 - x x x x x"}, {"location": "available_software/detail/LERC/", "title": "LERC", "text": ""}, {"location": "available_software/detail/LERC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LERC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LERC, load one of these modules using a module load command like:

              module load LERC/4.0.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LERC/4.0.0-GCCcore-12.3.0 x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x LERC/4.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LIANA%2B/", "title": "LIANA+", "text": ""}, {"location": "available_software/detail/LIANA%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LIANA+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LIANA+, load one of these modules using a module load command like:

              module load LIANA+/1.0.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LIANA+/1.0.1-foss-2022a x x x x - x"}, {"location": "available_software/detail/LIBSVM/", "title": "LIBSVM", "text": ""}, {"location": "available_software/detail/LIBSVM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LIBSVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LIBSVM, load one of these modules using a module load command like:

              module load LIBSVM/3.30-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LIBSVM/3.30-GCCcore-11.3.0 x x x x x x LIBSVM/3.25-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/LLVM/", "title": "LLVM", "text": ""}, {"location": "available_software/detail/LLVM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LLVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LLVM, load one of these modules using a module load command like:

              module load LLVM/16.0.6-GCCcore-12.3.0\n
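
              These LLVM modules typically provide the LLVM libraries and supporting tools such as llvm-config (Clang is packaged as a separate module). A quick way to see which installation is active (a minimal check):

              llvm-config --version\n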

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LLVM/16.0.6-GCCcore-12.3.0 x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x LLVM/14.0.6-GCCcore-12.2.0-llvmlite x x x x x x LLVM/14.0.3-GCCcore-11.3.0 x x x x x x LLVM/12.0.1-GCCcore-11.2.0 x x x x x x LLVM/11.1.0-GCCcore-10.3.0 x x x x x x LLVM/11.0.0-GCCcore-10.2.0 x x x x x x LLVM/10.0.1-GCCcore-10.2.0 - x x x x x LLVM/9.0.1-GCCcore-9.3.0 - x x - x x LLVM/9.0.0-GCCcore-8.3.0 x x x - x x LLVM/8.0.1-GCCcore-8.3.0 x x x - x x LLVM/7.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/LMDB/", "title": "LMDB", "text": ""}, {"location": "available_software/detail/LMDB/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LMDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LMDB, load one of these modules using a module load command like:

              module load LMDB/0.9.31-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LMDB/0.9.31-GCCcore-12.3.0 x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x LMDB/0.9.29-GCCcore-11.3.0 x x x x x x LMDB/0.9.29-GCCcore-11.2.0 x x x x x x LMDB/0.9.28-GCCcore-10.3.0 x x x x x x LMDB/0.9.24-GCCcore-10.2.0 x x x x x x LMDB/0.9.24-GCCcore-9.3.0 - x x - x x LMDB/0.9.24-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LMfit/", "title": "LMfit", "text": ""}, {"location": "available_software/detail/LMfit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LMfit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LMfit, load one of these modules using a module load command like:

              module load LMfit/1.0.0-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LMfit/1.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LPJmL/", "title": "LPJmL", "text": ""}, {"location": "available_software/detail/LPJmL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LPJmL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LPJmL, load one of these modules using a module load command like:

              module load LPJmL/4.0.003-iimpi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LPJmL/4.0.003-iimpi-2020b - x x x x x"}, {"location": "available_software/detail/LPeg/", "title": "LPeg", "text": ""}, {"location": "available_software/detail/LPeg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LPeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LPeg, load one of these modules using a module load command like:

              module load LPeg/1.0.2-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LPeg/1.0.2-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/LSD2/", "title": "LSD2", "text": ""}, {"location": "available_software/detail/LSD2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LSD2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LSD2, load one of these modules using a module load command like:

              module load LSD2/2.4.1-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LSD2/2.4.1-GCCcore-12.2.0 x x x x x x LSD2/2.3-GCCcore-11.3.0 x x x x x x LSD2/2.3-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/LUMPY/", "title": "LUMPY", "text": ""}, {"location": "available_software/detail/LUMPY/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LUMPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LUMPY, load one of these modules using a module load command like:

              module load LUMPY/0.3.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LUMPY/0.3.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/LZO/", "title": "LZO", "text": ""}, {"location": "available_software/detail/LZO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LZO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LZO, load one of these modules using a module load command like:

              module load LZO/2.10-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LZO/2.10-GCCcore-12.3.0 x x x x x x LZO/2.10-GCCcore-11.3.0 x x x x x x LZO/2.10-GCCcore-11.2.0 x x x x x x LZO/2.10-GCCcore-10.3.0 x x x x x x LZO/2.10-GCCcore-10.2.0 - x x x x x LZO/2.10-GCCcore-9.3.0 x x x x x x LZO/2.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/L_RNA_scaffolder/", "title": "L_RNA_scaffolder", "text": ""}, {"location": "available_software/detail/L_RNA_scaffolder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which L_RNA_scaffolder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using L_RNA_scaffolder, load one of these modules using a module load command like:

              module load L_RNA_scaffolder/20190530-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty L_RNA_scaffolder/20190530-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Lace/", "title": "Lace", "text": ""}, {"location": "available_software/detail/Lace/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Lace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Lace, load one of these modules using a module load command like:

              module load Lace/1.14.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Lace/1.14.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/LevelDB/", "title": "LevelDB", "text": ""}, {"location": "available_software/detail/LevelDB/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LevelDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using LevelDB, load one of these modules using a module load command like:

              module load LevelDB/1.22-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LevelDB/1.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Levenshtein/", "title": "Levenshtein", "text": ""}, {"location": "available_software/detail/Levenshtein/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Levenshtein, load one of these modules using a module load command like:

              module load Levenshtein/0.24.0-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Levenshtein/0.24.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/LiBis/", "title": "LiBis", "text": ""}, {"location": "available_software/detail/LiBis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LiBis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LiBis, load one of these modules using a module load command like:

              module load LiBis/20200428-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LiBis/20200428-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LibLZF/", "title": "LibLZF", "text": ""}, {"location": "available_software/detail/LibLZF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LibLZF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LibLZF, load one of these modules using a module load command like:

              module load LibLZF/3.6-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LibLZF/3.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LibSoup/", "title": "LibSoup", "text": ""}, {"location": "available_software/detail/LibSoup/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LibSoup installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LibSoup, load one of these modules using a module load command like:

              module load LibSoup/3.0.7-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LibSoup/3.0.7-GCC-11.2.0 x x x x x x LibSoup/2.74.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/LibTIFF/", "title": "LibTIFF", "text": ""}, {"location": "available_software/detail/LibTIFF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LibTIFF, load one of these modules using a module load command like:

              module load LibTIFF/4.6.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.3.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.2.0 x x x x x x LibTIFF/4.2.0-GCCcore-10.3.0 x x x x x x LibTIFF/4.1.0-GCCcore-10.2.0 x x x x x x LibTIFF/4.1.0-GCCcore-9.3.0 - x x - x x LibTIFF/4.0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Libint/", "title": "Libint", "text": ""}, {"location": "available_software/detail/Libint/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Libint installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Libint, load one of these modules using a module load command like:

              module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-12.2.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-11.3.0-lmax-6-cp2k x x x x x x Libint/2.6.0-iimpi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-iimpi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-iccifort-2020.4.304-lmax-6-cp2k - x x - x - Libint/2.6.0-gompi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-gompi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-GCC-10.3.0-lmax-6-cp2k - x x x x x Libint/2.6.0-GCC-10.2.0-lmax-6-cp2k - x x x x x Libint/1.1.6-iomkl-2020a - x - - - - Libint/1.1.6-intel-2020a - x x - x x Libint/1.1.6-intel-2019b - x - - - - Libint/1.1.6-foss-2020a - x - - - -"}, {"location": "available_software/detail/Lighter/", "title": "Lighter", "text": ""}, {"location": "available_software/detail/Lighter/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Lighter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Lighter, load one of these modules using a module load command like:

              module load Lighter/1.1.2-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Lighter/1.1.2-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/LittleCMS/", "title": "LittleCMS", "text": ""}, {"location": "available_software/detail/LittleCMS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LittleCMS, load one of these modules using a module load command like:

              module load LittleCMS/2.15-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LittleCMS/2.15-GCCcore-12.3.0 x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x LittleCMS/2.13.1-GCCcore-11.3.0 x x x x x x LittleCMS/2.12-GCCcore-11.2.0 x x x x x x LittleCMS/2.12-GCCcore-10.3.0 x x x x x x LittleCMS/2.11-GCCcore-10.2.0 x x x x x x LittleCMS/2.9-GCCcore-9.3.0 - x x - x x LittleCMS/2.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LncLOOM/", "title": "LncLOOM", "text": ""}, {"location": "available_software/detail/LncLOOM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LncLOOM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LncLOOM, load one of these modules using a module load command like:

              module load LncLOOM/2.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LncLOOM/2.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/LoRDEC/", "title": "LoRDEC", "text": ""}, {"location": "available_software/detail/LoRDEC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LoRDEC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LoRDEC, load one of these modules using a module load command like:

              module load LoRDEC/0.9-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LoRDEC/0.9-gompi-2022a x x x x x x"}, {"location": "available_software/detail/Longshot/", "title": "Longshot", "text": ""}, {"location": "available_software/detail/Longshot/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Longshot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Longshot, load one of these modules using a module load command like:

              module load Longshot/0.4.5-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Longshot/0.4.5-GCCcore-11.3.0 x x x x x x Longshot/0.4.3-GCCcore-10.2.0 - - x - x - Longshot/0.4.1-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/LtrDetector/", "title": "LtrDetector", "text": ""}, {"location": "available_software/detail/LtrDetector/#available-modules", "title": "Available modules", "text": "

              The overview below shows which LtrDetector installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using LtrDetector, load one of these modules using a module load command like:

              module load LtrDetector/1.0-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty LtrDetector/1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Lua/", "title": "Lua", "text": ""}, {"location": "available_software/detail/Lua/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Lua installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Lua, load one of these modules using a module load command like:

              module load Lua/5.4.6-GCCcore-12.3.0\n
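
              Once one of the Lua modules is loaded, the lua interpreter is available on the command line; a quick check of the interpreter version:

              lua -e 'print(_VERSION)'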

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Lua/5.4.6-GCCcore-12.3.0 x x x x x x Lua/5.4.4-GCCcore-11.3.0 x x x x x x Lua/5.4.3-GCCcore-11.2.0 x x x x x x Lua/5.4.3-GCCcore-10.3.0 x x x x x x Lua/5.4.2-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-9.3.0 - x x - x x Lua/5.1.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/M1QN3/", "title": "M1QN3", "text": ""}, {"location": "available_software/detail/M1QN3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which M1QN3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using M1QN3, load one of these modules using a module load command like:

              module load M1QN3/3.3-GCC-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty M1QN3/3.3-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/M4/", "title": "M4", "text": ""}, {"location": "available_software/detail/M4/#available-modules", "title": "Available modules", "text": "

              The overview below shows which M4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using M4, load one of these modules using a module load command like:

              module load M4/1.4.19-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty M4/1.4.19-GCCcore-13.2.0 x x x x x x M4/1.4.19-GCCcore-12.3.0 x x x x x x M4/1.4.19-GCCcore-12.2.0 x x x x x x M4/1.4.19-GCCcore-11.3.0 x x x x x x M4/1.4.19-GCCcore-11.2.0 x x x x x x M4/1.4.19 x x x x x x M4/1.4.18-GCCcore-10.3.0 x x x x x x M4/1.4.18-GCCcore-10.2.0 x x x x x x M4/1.4.18-GCCcore-9.3.0 x x x x x x M4/1.4.18-GCCcore-8.3.0 x x x x x x M4/1.4.18-GCCcore-8.2.0 - x - - - - M4/1.4.18 x x x x x x M4/1.4.17 x x x x x x"}, {"location": "available_software/detail/MACS2/", "title": "MACS2", "text": ""}, {"location": "available_software/detail/MACS2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MACS2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MACS2, load one of these modules using a module load command like:

              module load MACS2/2.2.7.1-foss-2021b\n
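
              As an illustration only (the BAM file names and the human genome-size shortcut are placeholders), a typical MACS2 peak-calling run after loading the module looks like:

              macs2 callpeak -t treatment.bam -c control.bam -g hs -n sample --outdir macs2_output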

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MACS2/2.2.7.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/MACS3/", "title": "MACS3", "text": ""}, {"location": "available_software/detail/MACS3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MACS3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MACS3, load one of these modules using a module load command like:

              module load MACS3/3.0.1-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MACS3/3.0.1-gfbf-2023a x x x x x x MACS3/3.0.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/MAFFT/", "title": "MAFFT", "text": ""}, {"location": "available_software/detail/MAFFT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MAFFT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MAFFT, load one of these modules using a module load command like:

              module load MAFFT/7.520-GCC-12.3.0-with-extensions\n
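
              Once a MAFFT module is loaded, a FASTA file can be aligned directly on the command line; input.fasta is a placeholder:

              mafft --auto input.fasta > aligned.fasta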

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x MAFFT/7.505-GCC-11.3.0-with-extensions x x x x x x MAFFT/7.490-gompi-2021b-with-extensions x x x - x x MAFFT/7.475-gompi-2020b-with-extensions - x x x x x MAFFT/7.475-GCC-10.2.0-with-extensions - x x x x x MAFFT/7.453-iimpi-2020a-with-extensions - x x - x x MAFFT/7.453-iccifort-2019.5.281-with-extensions - x x - x x MAFFT/7.453-GCC-9.3.0-with-extensions - x x - x x MAFFT/7.453-GCC-8.3.0-with-extensions - x x - x x"}, {"location": "available_software/detail/MAGeCK/", "title": "MAGeCK", "text": ""}, {"location": "available_software/detail/MAGeCK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MAGeCK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MAGeCK, load one of these modules using a module load command like:

              module load MAGeCK/0.5.9.5-gfbf-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MAGeCK/0.5.9.5-gfbf-2022b x x x x x x MAGeCK/0.5.9.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/MARS/", "title": "MARS", "text": ""}, {"location": "available_software/detail/MARS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MARS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MARS, load one of these modules using a module load command like:

              module load MARS/20191101-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MARS/20191101-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATIO/", "title": "MATIO", "text": ""}, {"location": "available_software/detail/MATIO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MATIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MATIO, load one of these modules using a module load command like:

              module load MATIO/1.5.17-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MATIO/1.5.17-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATLAB/", "title": "MATLAB", "text": ""}, {"location": "available_software/detail/MATLAB/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MATLAB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MATLAB, load one of these modules using a module load command like:

              module load MATLAB/2022b-r5\n
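
              With a MATLAB module loaded, scripts are typically run non-interactively inside a job; a minimal sketch, where myscript refers to a placeholder myscript.m in the current directory:

              matlab -batch "myscript"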

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MATLAB/2022b-r5 x x x x x x MATLAB/2021b x x x - x x MATLAB/2019b - x x - x x"}, {"location": "available_software/detail/MBROLA/", "title": "MBROLA", "text": ""}, {"location": "available_software/detail/MBROLA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MBROLA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MBROLA, load one of these modules using a module load command like:

              module load MBROLA/3.3-GCCcore-9.3.0-voices-20200330\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MBROLA/3.3-GCCcore-9.3.0-voices-20200330 - x x - x x"}, {"location": "available_software/detail/MCL/", "title": "MCL", "text": ""}, {"location": "available_software/detail/MCL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MCL, load one of these modules using a module load command like:

              module load MCL/22.282-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MCL/22.282-GCCcore-12.3.0 x x x x x x MCL/14.137-GCCcore-10.2.0 - x x x x x MCL/14.137-GCCcore-9.3.0 - x x - x x MCL/14.137-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MDAnalysis/", "title": "MDAnalysis", "text": ""}, {"location": "available_software/detail/MDAnalysis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MDAnalysis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MDAnalysis, load one of these modules using a module load command like:

              module load MDAnalysis/2.4.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MDAnalysis/2.4.2-foss-2022b x x x x x x MDAnalysis/2.4.2-foss-2021a x x x x x x"}, {"location": "available_software/detail/MDTraj/", "title": "MDTraj", "text": ""}, {"location": "available_software/detail/MDTraj/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MDTraj installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MDTraj, load one of these modules using a module load command like:

              module load MDTraj/1.9.7-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MDTraj/1.9.7-intel-2022a x x x - x x MDTraj/1.9.7-intel-2021b x x x - x x MDTraj/1.9.7-foss-2022a x x x - x x MDTraj/1.9.7-foss-2021a x x x - x x MDTraj/1.9.5-intel-2020b - x x - x x MDTraj/1.9.5-fosscuda-2020b x - - - x - MDTraj/1.9.5-foss-2020b - x x x x x MDTraj/1.9.4-intel-2020a-Python-3.8.2 - x x - x x MDTraj/1.9.3-intel-2019b-Python-3.7.4 - x x - x x MDTraj/1.9.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MEGA/", "title": "MEGA", "text": ""}, {"location": "available_software/detail/MEGA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MEGA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MEGA, load one of these modules using a module load command like:

              module load MEGA/11.0.10\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MEGA/11.0.10 - x x - x -"}, {"location": "available_software/detail/MEGAHIT/", "title": "MEGAHIT", "text": ""}, {"location": "available_software/detail/MEGAHIT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MEGAHIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MEGAHIT, load one of these modules using a module load command like:

              module load MEGAHIT/1.2.9-GCCcore-12.3.0\n
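
              A basic paired-end assembly with MEGAHIT, using placeholder read files and 8 threads:

              megahit -1 reads_1.fastq.gz -2 reads_2.fastq.gz -t 8 -o megahit_out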

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MEGAHIT/1.2.9-GCCcore-12.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.2.0 x x x - x x MEGAHIT/1.2.9-GCCcore-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MEGAN/", "title": "MEGAN", "text": ""}, {"location": "available_software/detail/MEGAN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MEGAN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MEGAN, load one of these modules using a module load command like:

              module load MEGAN/6.25.3-Java-17\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MEGAN/6.25.3-Java-17 x x x x x x"}, {"location": "available_software/detail/MEM/", "title": "MEM", "text": ""}, {"location": "available_software/detail/MEM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MEM, load one of these modules using a module load command like:

              module load MEM/20191023-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MEM/20191023-foss-2020a-R-4.0.0 - - x - x - MEM/20191023-foss-2019b - x x - x -"}, {"location": "available_software/detail/MEME/", "title": "MEME", "text": ""}, {"location": "available_software/detail/MEME/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MEME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MEME, load one of these modules using a module load command like:

              module load MEME/5.5.4-gompi-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MEME/5.5.4-gompi-2022b x x x x x x MEME/5.4.1-gompi-2021b-Python-2.7.18 x x x - x x"}, {"location": "available_software/detail/MESS/", "title": "MESS", "text": ""}, {"location": "available_software/detail/MESS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MESS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MESS, load one of these modules using a module load command like:

              module load MESS/0.1.6-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MESS/0.1.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/METIS/", "title": "METIS", "text": ""}, {"location": "available_software/detail/METIS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which METIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using METIS, load one of these modules using a module load command like:

              module load METIS/5.1.0-GCCcore-12.3.0\n
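
              Besides the library, METIS installations typically ship command-line partitioners; as a sketch, partitioning a placeholder graph file (in METIS graph format) into 4 parts:

              gpmetis mygraph.graph 4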

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty METIS/5.1.0-GCCcore-12.3.0 x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x METIS/5.1.0-GCCcore-11.3.0 x x x x x x METIS/5.1.0-GCCcore-11.2.0 x x x x x x METIS/5.1.0-GCCcore-10.3.0 x x x x x x METIS/5.1.0-GCCcore-10.2.0 x x x x x x METIS/5.1.0-GCCcore-9.3.0 - x x - x x METIS/5.1.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MIGRATE-N/", "title": "MIGRATE-N", "text": ""}, {"location": "available_software/detail/MIGRATE-N/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MIGRATE-N installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MIGRATE-N, load one of these modules using a module load command like:

              module load MIGRATE-N/5.0.4-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MIGRATE-N/5.0.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/MMseqs2/", "title": "MMseqs2", "text": ""}, {"location": "available_software/detail/MMseqs2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MMseqs2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MMseqs2, load one of these modules using a module load command like:

              module load MMseqs2/14-7e284-gompi-2023a\n
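
              With MMseqs2 loaded, the easy-search workflow runs a sequence search end to end; the FASTA files and the tmp directory below are placeholders:

              mmseqs easy-search queries.fasta targets.fasta result.m8 tmp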

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MMseqs2/14-7e284-gompi-2023a x x x x x x MMseqs2/14-7e284-gompi-2022a x x x x x x MMseqs2/13-45111-gompi-2021b x x x - x x MMseqs2/13-45111-gompi-2021a x x x - x x MMseqs2/13-45111-gompi-2020b x x x x x x MMseqs2/13-45111-20211019-gompi-2020b - x x x x x MMseqs2/13-45111-20211006-gompi-2020b - x x x x - MMseqs2/12-113e3-gompi-2020b - x - - - - MMseqs2/11-e1a1c-iimpi-2019b - x - - - x MMseqs2/10-6d92c-iimpi-2019b - x x - x x MMseqs2/10-6d92c-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MOABS/", "title": "MOABS", "text": ""}, {"location": "available_software/detail/MOABS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MOABS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MOABS, load one of these modules using a module load command like:

              module load MOABS/1.3.9.6-gompi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MOABS/1.3.9.6-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MONAI/", "title": "MONAI", "text": ""}, {"location": "available_software/detail/MONAI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MONAI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MONAI, load one of these modules using a module load command like:

              module load MONAI/1.0.1-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MONAI/1.0.1-foss-2022a-CUDA-11.7.0 x - - - x - MONAI/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MOOSE/", "title": "MOOSE", "text": ""}, {"location": "available_software/detail/MOOSE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MOOSE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MOOSE, load one of these modules using a module load command like:

              module load MOOSE/2022-06-10-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MOOSE/2022-06-10-foss-2022a x x x - x x MOOSE/2021-05-18-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MPC/", "title": "MPC", "text": ""}, {"location": "available_software/detail/MPC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MPC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MPC, load one of these modules using a module load command like:

              module load MPC/1.3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MPC/1.3.1-GCCcore-12.3.0 x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x MPC/1.2.1-GCCcore-11.3.0 x x x x x x MPC/1.2.1-GCCcore-11.2.0 x x x x x x MPC/1.2.1-GCCcore-10.2.0 - x x x x x MPC/1.1.0-GCC-9.3.0 - x x - x x MPC/1.1.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/MPFR/", "title": "MPFR", "text": ""}, {"location": "available_software/detail/MPFR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MPFR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MPFR, load one of these modules using a module load command like:

              module load MPFR/4.2.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MPFR/4.2.0-GCCcore-12.3.0 x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x MPFR/4.1.0-GCCcore-11.3.0 x x x x x x MPFR/4.1.0-GCCcore-11.2.0 x x x x x x MPFR/4.1.0-GCCcore-10.3.0 x x x x x x MPFR/4.1.0-GCCcore-10.2.0 x x x x x x MPFR/4.0.2-GCCcore-9.3.0 - x x - x x MPFR/4.0.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MRtrix/", "title": "MRtrix", "text": ""}, {"location": "available_software/detail/MRtrix/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MRtrix installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MRtrix, load one of these modules using a module load command like:

              module load MRtrix/3.0.4-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MRtrix/3.0.4-foss-2022b x x x x x x MRtrix/3.0.3-foss-2021a - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-3.7.4 - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MSFragger/", "title": "MSFragger", "text": ""}, {"location": "available_software/detail/MSFragger/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MSFragger installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MSFragger, load one of these modules using a module load command like:

              module load MSFragger/4.0-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MSFragger/4.0-Java-11 x x x x x x"}, {"location": "available_software/detail/MUMPS/", "title": "MUMPS", "text": ""}, {"location": "available_software/detail/MUMPS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MUMPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MUMPS, load one of these modules using a module load command like:

              module load MUMPS/5.6.1-foss-2023a-metis\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MUMPS/5.6.1-foss-2023a-metis x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x MUMPS/5.5.1-foss-2022a-metis x x x x x x MUMPS/5.4.1-intel-2021b-metis x x x x x x MUMPS/5.4.1-foss-2021b-metis x x x - x x MUMPS/5.4.0-foss-2021a-metis - x x - x x MUMPS/5.3.5-foss-2020b-metis - x x x x x MUMPS/5.2.1-intel-2020a-metis - x x - x x MUMPS/5.2.1-intel-2019b-metis - x x - x x MUMPS/5.2.1-foss-2020a-metis - x x - x x MUMPS/5.2.1-foss-2019b-metis x x x - x x"}, {"location": "available_software/detail/MUMmer/", "title": "MUMmer", "text": ""}, {"location": "available_software/detail/MUMmer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MUMmer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MUMmer, load one of these modules using a module load command like:

              module load MUMmer/4.0.0rc1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MUMmer/4.0.0rc1-GCCcore-12.3.0 x x x x x x MUMmer/4.0.0beta2-GCCcore-11.2.0 x x x - x x MUMmer/4.0.0beta2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MUSCLE/", "title": "MUSCLE", "text": ""}, {"location": "available_software/detail/MUSCLE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MUSCLE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MUSCLE, load one of these modules using a module load command like:

              module load MUSCLE/5.1.0-GCCcore-12.3.0\n
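
              Note that the command-line interface changed between the 3.8 and 5.x releases listed below. With a 5.x module loaded, a basic alignment looks like this (file names are placeholders):

              muscle -align sequences.fasta -output alignment.fasta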

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MUSCLE/5.1.0-GCCcore-12.3.0 x x x x x x MUSCLE/5.1.0-GCCcore-11.3.0 x x x x x x MUSCLE/5.1-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.1551-GCC-10.2.0 - x x - x x MUSCLE/3.8.1551-GCC-8.3.0 - x x - x x MUSCLE/3.8.31-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MXNet/", "title": "MXNet", "text": ""}, {"location": "available_software/detail/MXNet/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MXNet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MXNet, load one of these modules using a module load command like:

              module load MXNet/1.9.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MXNet/1.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MaSuRCA/", "title": "MaSuRCA", "text": ""}, {"location": "available_software/detail/MaSuRCA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MaSuRCA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MaSuRCA, load one of these modules using a module load command like:

              module load MaSuRCA/4.1.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MaSuRCA/4.1.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Mako/", "title": "Mako", "text": ""}, {"location": "available_software/detail/Mako/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Mako installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Mako, load one of these modules using a module load command like:

              module load Mako/1.2.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Mako/1.2.4-GCCcore-12.3.0 x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x Mako/1.2.0-GCCcore-11.3.0 x x x x x x Mako/1.1.4-GCCcore-11.2.0 x x x x x x Mako/1.1.4-GCCcore-10.3.0 x x x x x x Mako/1.1.3-GCCcore-10.2.0 x x x x x x Mako/1.1.2-GCCcore-9.3.0 - x x - x x Mako/1.1.0-GCCcore-8.3.0 x x x - x x Mako/1.0.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/MariaDB-connector-c/", "title": "MariaDB-connector-c", "text": ""}, {"location": "available_software/detail/MariaDB-connector-c/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MariaDB-connector-c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MariaDB-connector-c, load one of these modules using a module load command like:

              module load MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MariaDB-connector-c/3.1.7-GCCcore-9.3.0 - x x - x x MariaDB-connector-c/2.3.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MariaDB/", "title": "MariaDB", "text": ""}, {"location": "available_software/detail/MariaDB/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MariaDB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MariaDB, load one of these modules using a module load command like:

              module load MariaDB/10.9.3-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MariaDB/10.9.3-GCC-11.3.0 x x x x x x MariaDB/10.6.4-GCC-11.2.0 x x x x x x MariaDB/10.6.4-GCC-10.3.0 x x x - x x MariaDB/10.5.8-GCC-10.2.0 - x x x x x MariaDB/10.4.13-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Mash/", "title": "Mash", "text": ""}, {"location": "available_software/detail/Mash/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Mash installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Mash, load one of these modules using a module load command like:

              module load Mash/2.3-intel-compilers-2021.4.0\n
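
              With Mash loaded, the distance between two genome assemblies can be estimated directly from FASTA files (names are placeholders); Mash sketches them on the fly:

              mash dist genome1.fasta genome2.fasta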

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Mash/2.3-intel-compilers-2021.4.0 x x x - x x Mash/2.3-GCC-12.3.0 x x x x x x Mash/2.3-GCC-11.2.0 x x x - x x Mash/2.2-GCC-9.3.0 - x x x - x"}, {"location": "available_software/detail/Maven/", "title": "Maven", "text": ""}, {"location": "available_software/detail/Maven/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Maven installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Maven, load one of these modules using a module load command like:

              module load Maven/3.6.3\n
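
              Once a Maven module is loaded, mvn is on the PATH; for example, to check the version and build a project from the directory containing its pom.xml:

              mvn -version
              mvn package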

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Maven/3.6.3 x x x x x x Maven/3.6.0 - - x - x -"}, {"location": "available_software/detail/MaxBin/", "title": "MaxBin", "text": ""}, {"location": "available_software/detail/MaxBin/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MaxBin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MaxBin, load one of these modules using a module load command like:

              module load MaxBin/2.2.7-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MaxBin/2.2.7-gompi-2021b x x x - x x MaxBin/2.2.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MedPy/", "title": "MedPy", "text": ""}, {"location": "available_software/detail/MedPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MedPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MedPy, load one of these modules using a module load command like:

              module load MedPy/0.4.0-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MedPy/0.4.0-fosscuda-2020b x - - - x - MedPy/0.4.0-foss-2020b - x x x x x MedPy/0.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Megalodon/", "title": "Megalodon", "text": ""}, {"location": "available_software/detail/Megalodon/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Megalodon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Megalodon, load one of these modules using a module load command like:

              module load Megalodon/2.3.5-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Megalodon/2.3.5-fosscuda-2020b x - - - x - Megalodon/2.3.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/Mercurial/", "title": "Mercurial", "text": ""}, {"location": "available_software/detail/Mercurial/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Mercurial installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Mercurial, load one of these modules using a module load command like:

              module load Mercurial/6.2-GCCcore-11.3.0\n
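
              After loading Mercurial, the hg client is available; for example, to create a new repository in the current directory:

              hg init myproject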

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Mercurial/6.2-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Mesa/", "title": "Mesa", "text": ""}, {"location": "available_software/detail/Mesa/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Mesa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Mesa, load one of these modules using a module load command like:

              module load Mesa/23.1.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Mesa/23.1.4-GCCcore-12.3.0 x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x Mesa/22.0.3-GCCcore-11.3.0 x x x x x x Mesa/21.1.7-GCCcore-11.2.0 x x x x x x Mesa/21.1.1-GCCcore-10.3.0 x x x x x x Mesa/20.2.1-GCCcore-10.2.0 x x x x x x Mesa/20.0.2-GCCcore-9.3.0 - x x - x x Mesa/19.2.1-GCCcore-8.3.0 - x x - x x Mesa/19.1.7-GCCcore-8.3.0 x x x - x x Mesa/19.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Meson/", "title": "Meson", "text": ""}, {"location": "available_software/detail/Meson/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Meson installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Meson, load one of these modules using a module load command like:

              module load Meson/1.2.3-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Meson/1.2.3-GCCcore-13.2.0 x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x Meson/0.62.1-GCCcore-11.3.0 x x x x x x Meson/0.59.1-GCCcore-8.3.0-Python-3.7.4 x - x - x x Meson/0.58.2-GCCcore-11.2.0 x x x x x x Meson/0.58.0-GCCcore-10.3.0 x x x x x x Meson/0.55.3-GCCcore-10.2.0 x x x x x x Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x Meson/0.53.2-GCCcore-9.3.0-Python-3.8.2 - x x - x x Meson/0.51.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x Meson/0.50.0-GCCcore-8.2.0-Python-3.7.2 - x - - - -"}, {"location": "available_software/detail/Mesquite/", "title": "Mesquite", "text": ""}, {"location": "available_software/detail/Mesquite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Mesquite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Mesquite, load one of these modules using a module load command like:

              module load Mesquite/2.3.0-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Mesquite/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MetaBAT/", "title": "MetaBAT", "text": ""}, {"location": "available_software/detail/MetaBAT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MetaBAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MetaBAT, load one of these modules using a module load command like:

              module load MetaBAT/2.15-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MetaBAT/2.15-gompi-2021b x x x - x x MetaBAT/2.15-gompi-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MetaEuk/", "title": "MetaEuk", "text": ""}, {"location": "available_software/detail/MetaEuk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MetaEuk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MetaEuk, load one of these modules using a module load command like:

              module load MetaEuk/6-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MetaEuk/6-GCC-11.2.0 x x x - x x MetaEuk/4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/MetaPhlAn/", "title": "MetaPhlAn", "text": ""}, {"location": "available_software/detail/MetaPhlAn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MetaPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MetaPhlAn, load one of these modules using a module load command like:

              module load MetaPhlAn/4.0.6-foss-2022a\n
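
              A sketch of profiling a single metagenomic sample with MetaPhlAn (the FASTQ file name is a placeholder, and the marker database has to be available or downloaded separately):

              metaphlan sample.fastq --input_type fastq --nproc 4 -o profile.txt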

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MetaPhlAn/4.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/Metagenome-Atlas/", "title": "Metagenome-Atlas", "text": ""}, {"location": "available_software/detail/Metagenome-Atlas/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Metagenome-Atlas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Metagenome-Atlas, load one of these modules using a module load command like:

              module load Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/MethylDackel/", "title": "MethylDackel", "text": ""}, {"location": "available_software/detail/MethylDackel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MethylDackel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MethylDackel, load one of these modules using a module load command like:

              module load MethylDackel/0.5.0-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MethylDackel/0.5.0-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/MiXCR/", "title": "MiXCR", "text": ""}, {"location": "available_software/detail/MiXCR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MiXCR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MiXCR, load one of these modules using a module load command like:

              module load MiXCR/4.6.0-Java-17\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MiXCR/4.6.0-Java-17 x x x x x x MiXCR/3.0.13-Java-11 - x x - x -"}, {"location": "available_software/detail/MicrobeAnnotator/", "title": "MicrobeAnnotator", "text": ""}, {"location": "available_software/detail/MicrobeAnnotator/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MicrobeAnnotator installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MicrobeAnnotator, load one of these modules using a module load command like:

              module load MicrobeAnnotator/2.0.5-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MicrobeAnnotator/2.0.5-foss-2021a - x x - x x"}, {"location": "available_software/detail/Mikado/", "title": "Mikado", "text": ""}, {"location": "available_software/detail/Mikado/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Mikado installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Mikado, load one of these modules using a module load command like:

              module load Mikado/2.3.4-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Mikado/2.3.4-foss-2022b x x x x x x"}, {"location": "available_software/detail/MinCED/", "title": "MinCED", "text": ""}, {"location": "available_software/detail/MinCED/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MinCED installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MinCED, load one of these modules using a module load command like:

              module load MinCED/0.4.2-GCCcore-8.3.0-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MinCED/0.4.2-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/MinPath/", "title": "MinPath", "text": ""}, {"location": "available_software/detail/MinPath/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MinPath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MinPath, load one of these modules using a module load command like:

              module load MinPath/1.6-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MinPath/1.6-GCCcore-11.2.0 x x x - x x MinPath/1.4-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Miniconda3/", "title": "Miniconda3", "text": ""}, {"location": "available_software/detail/Miniconda3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Miniconda3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Miniconda3, load one of these modules using a module load command like:

              module load Miniconda3/23.5.2-0\n
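
              After loading Miniconda3, conda can be used to create isolated environments in your own directories; the environment name and Python version below are placeholders:

              conda create -n myenv python=3.11
              conda env list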

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Miniconda3/23.5.2-0 x x x x x x Miniconda3/22.11.1-1 x x x x x x Miniconda3/4.9.2 - x x - x x Miniconda3/4.8.3 - x x - x x Miniconda3/4.7.10 - - - - - x"}, {"location": "available_software/detail/Minipolish/", "title": "Minipolish", "text": ""}, {"location": "available_software/detail/Minipolish/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Minipolish installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Minipolish, load one of these modules using a module load command like:

              module load Minipolish/0.1.3-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Minipolish/0.1.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/MitoHiFi/", "title": "MitoHiFi", "text": ""}, {"location": "available_software/detail/MitoHiFi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MitoHiFi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MitoHiFi, load one of these modules using a module load command like:

              module load MitoHiFi/3.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MitoHiFi/3.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/ModelTest-NG/", "title": "ModelTest-NG", "text": ""}, {"location": "available_software/detail/ModelTest-NG/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ModelTest-NG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ModelTest-NG, load one of these modules using a module load command like:

              module load ModelTest-NG/0.1.7-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ModelTest-NG/0.1.7-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Molden/", "title": "Molden", "text": ""}, {"location": "available_software/detail/Molden/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Molden installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Molden, load one of these modules using a module load command like:

              module load Molden/6.8-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Molden/6.8-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Molekel/", "title": "Molekel", "text": ""}, {"location": "available_software/detail/Molekel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Molekel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Molekel, load one of these modules using a module load command like:

              module load Molekel/5.4.0-Linux_x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Molekel/5.4.0-Linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Mono/", "title": "Mono", "text": ""}, {"location": "available_software/detail/Mono/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Mono installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Mono, load one of these modules using a module load command like:

              module load Mono/6.8.0.105-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Mono/6.8.0.105-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Monocle3/", "title": "Monocle3", "text": ""}, {"location": "available_software/detail/Monocle3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Monocle3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Monocle3, load one of these modules using a module load command like:

              module load Monocle3/1.3.1-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Monocle3/1.3.1-foss-2022a-R-4.2.1 x x x x x x Monocle3/0.2.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/MrBayes/", "title": "MrBayes", "text": ""}, {"location": "available_software/detail/MrBayes/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MrBayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MrBayes, load one of these modules using a module load command like:

              module load MrBayes/3.2.7-gompi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MrBayes/3.2.7-gompi-2020b - x x x x x MrBayes/3.2.6-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MuJoCo/", "title": "MuJoCo", "text": ""}, {"location": "available_software/detail/MuJoCo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MuJoCo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MuJoCo, load one of these modules using a module load command like:

              module load MuJoCo/2.3.7-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MuJoCo/2.3.7-GCCcore-12.3.0 x x x x x x MuJoCo/2.1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/MultiQC/", "title": "MultiQC", "text": ""}, {"location": "available_software/detail/MultiQC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MultiQC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MultiQC, load one of these modules using a module load command like:

              module load MultiQC/1.14-foss-2022a\n
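
              For example (a hedged sketch: it assumes the current directory contains output that MultiQC recognises, such as FastQC results), all detected reports can be aggregated into a single HTML report with:

              module load MultiQC/1.14-foss-2022a\nmultiqc .  # scans the current directory and writes multiqc_report.html\n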

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MultiQC/1.14-foss-2022a x x x x x x MultiQC/1.9-intel-2020a-Python-3.8.2 - x x - x x MultiQC/1.8-intel-2019b-Python-3.7.4 - x x - x x MultiQC/1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MultilevelEstimators/", "title": "MultilevelEstimators", "text": ""}, {"location": "available_software/detail/MultilevelEstimators/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MultilevelEstimators installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MultilevelEstimators, load one of these modules using a module load command like:

              module load MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2 x x x - x x"}, {"location": "available_software/detail/Multiwfn/", "title": "Multiwfn", "text": ""}, {"location": "available_software/detail/Multiwfn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Multiwfn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Multiwfn, load one of these modules using a module load command like:

              module load Multiwfn/3.6-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Multiwfn/3.6-intel-2019b - x x - x x"}, {"location": "available_software/detail/MyCC/", "title": "MyCC", "text": ""}, {"location": "available_software/detail/MyCC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which MyCC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using MyCC, load one of these modules using a module load command like:

              module load MyCC/2017-03-01-intel-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty MyCC/2017-03-01-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Myokit/", "title": "Myokit", "text": ""}, {"location": "available_software/detail/Myokit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Myokit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Myokit, load one of these modules using a module load command like:

              module load Myokit/1.32.0-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Myokit/1.32.0-fosscuda-2020b - - - - x - Myokit/1.32.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/NAMD/", "title": "NAMD", "text": ""}, {"location": "available_software/detail/NAMD/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NAMD installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NAMD, load one of these modules using a module load command like:

              module load NAMD/2.14-foss-2023a-mpi\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NAMD/2.14-foss-2023a-mpi x x x x x x NAMD/2.14-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/NASM/", "title": "NASM", "text": ""}, {"location": "available_software/detail/NASM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NASM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NASM, load one of these modules using a module load command like:

              module load NASM/2.16.01-GCCcore-13.2.0\n
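
              As an illustrative sketch (hello.asm is a placeholder source file), assembling a 64-bit ELF object file could look like:

              module load NASM/2.16.01-GCCcore-13.2.0\nnasm -f elf64 hello.asm -o hello.o  # hello.asm is a placeholder assembly source\n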

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NASM/2.16.01-GCCcore-13.2.0 x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x NASM/2.15.05-GCCcore-11.3.0 x x x x x x NASM/2.15.05-GCCcore-11.2.0 x x x x x x NASM/2.15.05-GCCcore-10.3.0 x x x x x x NASM/2.15.05-GCCcore-10.2.0 x x x x x x NASM/2.14.02-GCCcore-9.3.0 - x x - x x NASM/2.14.02-GCCcore-8.3.0 x x x - x x NASM/2.14.02-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/NCCL/", "title": "NCCL", "text": ""}, {"location": "available_software/detail/NCCL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NCCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NCCL, load one of these modules using a module load command like:

              module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - NCCL/2.10.3-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - NCCL/2.10.3-GCCcore-10.3.0-CUDA-11.3.1 x - - - x - NCCL/2.8.3-GCCcore-10.2.0-CUDA-11.1.1 x - - - x x NCCL/2.8.3-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/NCL/", "title": "NCL", "text": ""}, {"location": "available_software/detail/NCL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NCL, load one of these modules using a module load command like:

              module load NCL/6.6.2-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NCL/6.6.2-intel-2019b - - x - x x"}, {"location": "available_software/detail/NCO/", "title": "NCO", "text": ""}, {"location": "available_software/detail/NCO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NCO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NCO, load one of these modules using a module load command like:

              module load NCO/5.0.6-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NCO/5.0.6-intel-2019b - x x - x x NCO/5.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/NECI/", "title": "NECI", "text": ""}, {"location": "available_software/detail/NECI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NECI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NECI, load one of these modules using a module load command like:

              module load NECI/20230620-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NECI/20230620-foss-2022b x x x x x x NECI/20220711-foss-2022a - x x x x x"}, {"location": "available_software/detail/NEURON/", "title": "NEURON", "text": ""}, {"location": "available_software/detail/NEURON/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NEURON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NEURON, load one of these modules using a module load command like:

              module load NEURON/7.8.2-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NEURON/7.8.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/NGS/", "title": "NGS", "text": ""}, {"location": "available_software/detail/NGS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NGS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NGS, load one of these modules using a module load command like:

              module load NGS/2.11.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NGS/2.11.2-GCCcore-11.2.0 x x x x x x NGS/2.10.9-GCCcore-10.2.0 - x x x x x NGS/2.10.5-GCCcore-9.3.0 - x x - x x NGS/2.10.4-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/NGSpeciesID/", "title": "NGSpeciesID", "text": ""}, {"location": "available_software/detail/NGSpeciesID/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NGSpeciesID installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NGSpeciesID, load one of these modules using a module load command like:

              module load NGSpeciesID/0.1.2.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NGSpeciesID/0.1.2.1-foss-2021b x x x - x x NGSpeciesID/0.1.1.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NLMpy/", "title": "NLMpy", "text": ""}, {"location": "available_software/detail/NLMpy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NLMpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NLMpy, load one of these modules using a module load command like:

              module load NLMpy/0.1.5-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NLMpy/0.1.5-intel-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/NLTK/", "title": "NLTK", "text": ""}, {"location": "available_software/detail/NLTK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NLTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NLTK, load one of these modules using a module load command like:

              module load NLTK/3.8.1-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NLTK/3.8.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/NLopt/", "title": "NLopt", "text": ""}, {"location": "available_software/detail/NLopt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NLopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NLopt, load one of these modules using a module load command like:

              module load NLopt/2.7.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NLopt/2.7.1-GCCcore-12.3.0 x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x NLopt/2.7.1-GCCcore-11.3.0 x x x x x x NLopt/2.7.0-GCCcore-11.2.0 x x x x x x NLopt/2.7.0-GCCcore-10.3.0 x x x x x x NLopt/2.6.2-GCCcore-10.2.0 x x x x x x NLopt/2.6.1-GCCcore-9.3.0 - x x - x x NLopt/2.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/NOVOPlasty/", "title": "NOVOPlasty", "text": ""}, {"location": "available_software/detail/NOVOPlasty/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NOVOPlasty installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NOVOPlasty, load one of these modules using a module load command like:

              module load NOVOPlasty/3.7-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NOVOPlasty/3.7-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/NSPR/", "title": "NSPR", "text": ""}, {"location": "available_software/detail/NSPR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NSPR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NSPR, load one of these modules using a module load command like:

              module load NSPR/4.35-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NSPR/4.35-GCCcore-12.3.0 x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x NSPR/4.34-GCCcore-11.3.0 x x x x x x NSPR/4.32-GCCcore-11.2.0 x x x x x x NSPR/4.30-GCCcore-10.3.0 x x x x x x NSPR/4.29-GCCcore-10.2.0 x x x x x x NSPR/4.25-GCCcore-9.3.0 - x x - x x NSPR/4.21-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NSS/", "title": "NSS", "text": ""}, {"location": "available_software/detail/NSS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NSS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NSS, load one of these modules using a module load command like:

              module load NSS/3.89.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NSS/3.89.1-GCCcore-12.3.0 x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x NSS/3.79-GCCcore-11.3.0 x x x x x x NSS/3.69-GCCcore-11.2.0 x x x x x x NSS/3.65-GCCcore-10.3.0 x x x x x x NSS/3.57-GCCcore-10.2.0 x x x x x x NSS/3.51-GCCcore-9.3.0 - x x - x x NSS/3.45-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NVHPC/", "title": "NVHPC", "text": ""}, {"location": "available_software/detail/NVHPC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NVHPC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NVHPC, load one of these modules using a module load command like:

              module load NVHPC/21.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NVHPC/21.2 x - x - x - NVHPC/20.9 - - - - x -"}, {"location": "available_software/detail/NanoCaller/", "title": "NanoCaller", "text": ""}, {"location": "available_software/detail/NanoCaller/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NanoCaller installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NanoCaller, load one of these modules using a module load command like:

              module load NanoCaller/3.4.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NanoCaller/3.4.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/NanoComp/", "title": "NanoComp", "text": ""}, {"location": "available_software/detail/NanoComp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NanoComp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NanoComp, load one of these modules using a module load command like:

              module load NanoComp/1.13.1-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NanoComp/1.13.1-intel-2020b - x x - x x NanoComp/1.10.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoFilt/", "title": "NanoFilt", "text": ""}, {"location": "available_software/detail/NanoFilt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NanoFilt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NanoFilt, load one of these modules using a module load command like:

              module load NanoFilt/2.6.0-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NanoFilt/2.6.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoPlot/", "title": "NanoPlot", "text": ""}, {"location": "available_software/detail/NanoPlot/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NanoPlot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NanoPlot, load one of these modules using a module load command like:

              module load NanoPlot/1.33.0-intel-2020b\n
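
              A minimal sketch (the input file reads.fastq.gz and the output directory name are placeholders) for generating basic read-quality plots:

              module load NanoPlot/1.33.0-intel-2020b\nNanoPlot --fastq reads.fastq.gz --outdir nanoplot_out  # input file and output directory are placeholders\n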

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NanoPlot/1.33.0-intel-2020b - x x - x x NanoPlot/1.28.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoStat/", "title": "NanoStat", "text": ""}, {"location": "available_software/detail/NanoStat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NanoStat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NanoStat, load one of these modules using a module load command like:

              module load NanoStat/1.6.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NanoStat/1.6.0-foss-2022a x x x x x x NanoStat/1.6.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/NanopolishComp/", "title": "NanopolishComp", "text": ""}, {"location": "available_software/detail/NanopolishComp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NanopolishComp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NanopolishComp, load one of these modules using a module load command like:

              module load NanopolishComp/0.6.11-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NanopolishComp/0.6.11-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/NetPyNE/", "title": "NetPyNE", "text": ""}, {"location": "available_software/detail/NetPyNE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NetPyNE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NetPyNE, load one of these modules using a module load command like:

              module load NetPyNE/1.0.2.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NetPyNE/1.0.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/NewHybrids/", "title": "NewHybrids", "text": ""}, {"location": "available_software/detail/NewHybrids/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NewHybrids installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NewHybrids, load one of these modules using a module load command like:

              module load NewHybrids/1.1_Beta3-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NewHybrids/1.1_Beta3-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/NextGenMap/", "title": "NextGenMap", "text": ""}, {"location": "available_software/detail/NextGenMap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NextGenMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NextGenMap, load one of these modules using a module load command like:

              module load NextGenMap/0.5.5-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NextGenMap/0.5.5-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Nextflow/", "title": "Nextflow", "text": ""}, {"location": "available_software/detail/Nextflow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Nextflow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Nextflow, load one of these modules using a module load command like:

              module load Nextflow/23.10.0\n
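
              For instance, a quick way to verify the installation is to print the version and run Nextflow's built-in hello pipeline (a hedged sketch: the hello pipeline is fetched from GitHub, so this assumes outbound network access from wherever you run it):

              module load Nextflow/23.10.0\nnextflow -version\nnextflow run hello  # pulls and runs the nextflow-io/hello example pipeline\n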

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Nextflow/23.10.0 x x x x x x Nextflow/23.04.2 x x x x x x Nextflow/22.10.5 x x x x x x Nextflow/22.10.0 x x x - x x Nextflow/21.10.6 - x x - x x Nextflow/21.08.0 - - - - - x Nextflow/21.03.0 - x x - x x Nextflow/20.10.0 - x x - x x Nextflow/20.04.1 - - x - x x Nextflow/20.01.0 - - x - x x Nextflow/19.12.0 - - x - x x"}, {"location": "available_software/detail/NiBabel/", "title": "NiBabel", "text": ""}, {"location": "available_software/detail/NiBabel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which NiBabel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using NiBabel, load one of these modules using a module load command like:

              module load NiBabel/4.0.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty NiBabel/4.0.2-foss-2022a x x x x x x NiBabel/3.2.1-fosscuda-2020b x - - - x - NiBabel/3.2.1-foss-2021a x x x - x x NiBabel/3.2.1-foss-2020b - x x x x x NiBabel/3.1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Nim/", "title": "Nim", "text": ""}, {"location": "available_software/detail/Nim/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Nim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Nim, load one of these modules using a module load command like:

              module load Nim/1.6.6-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Nim/1.6.6-GCCcore-11.2.0 x x x - x x Nim/1.4.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Ninja/", "title": "Ninja", "text": ""}, {"location": "available_software/detail/Ninja/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Ninja installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Ninja, load one of these modules using a module load command like:

              module load Ninja/1.11.1-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Ninja/1.11.1-GCCcore-13.2.0 x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x Ninja/1.10.2-GCCcore-11.3.0 x x x x x x Ninja/1.10.2-GCCcore-11.2.0 x x x x x x Ninja/1.10.2-GCCcore-10.3.0 x x x x x x Ninja/1.10.1-GCCcore-10.2.0 x x x x x x Ninja/1.10.0-GCCcore-9.3.0 x x x x x x Ninja/1.9.0-GCCcore-8.3.0 x x x - x x Ninja/1.9.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Nipype/", "title": "Nipype", "text": ""}, {"location": "available_software/detail/Nipype/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Nipype installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Nipype, load one of these modules using a module load command like:

              module load Nipype/1.8.5-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Nipype/1.8.5-foss-2021a x x x - x x Nipype/1.4.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OBITools3/", "title": "OBITools3", "text": ""}, {"location": "available_software/detail/OBITools3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OBITools3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OBITools3, load one of these modules using a module load command like:

              module load OBITools3/3.0.1b26-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OBITools3/3.0.1b26-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ONNX-Runtime/", "title": "ONNX-Runtime", "text": ""}, {"location": "available_software/detail/ONNX-Runtime/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ONNX-Runtime installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ONNX-Runtime, load one of these modules using a module load command like:

              module load ONNX-Runtime/1.16.3-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ONNX-Runtime/1.16.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/ONNX/", "title": "ONNX", "text": ""}, {"location": "available_software/detail/ONNX/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ONNX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ONNX, load one of these modules using a module load command like:

              module load ONNX/1.15.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ONNX/1.15.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/OPERA-MS/", "title": "OPERA-MS", "text": ""}, {"location": "available_software/detail/OPERA-MS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OPERA-MS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OPERA-MS, load one of these modules using a module load command like:

              module load OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ORCA/", "title": "ORCA", "text": ""}, {"location": "available_software/detail/ORCA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ORCA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ORCA, load one of these modules using a module load command like:

              module load ORCA/5.0.4-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ORCA/5.0.4-gompi-2022a x x x x x x ORCA/5.0.3-gompi-2021b x x x x x x ORCA/5.0.2-gompi-2021b x x x x x x ORCA/4.2.1-gompi-2019b - x x - x x ORCA/4.2.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OSU-Micro-Benchmarks/", "title": "OSU-Micro-Benchmarks", "text": ""}, {"location": "available_software/detail/OSU-Micro-Benchmarks/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

              module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n
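
              As a hedged sketch, the point-to-point latency benchmark can be launched between two MPI ranks (this assumes the osu_latency executable ends up on your PATH after loading the module; such runs are normally submitted as batch jobs rather than executed on a login node):

              module load OSU-Micro-Benchmarks/7.2-gompi-2023b\nmpirun -np 2 osu_latency  # measures point-to-point latency between 2 ranks\n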

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x OSU-Micro-Benchmarks/7.1-1-iimpi-2023a x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a - x - - - - OSU-Micro-Benchmarks/5.8-iimpi-2021b x x x - x x OSU-Micro-Benchmarks/5.7.1-iompi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-iimpi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-gompi-2021b x x x - x x OSU-Micro-Benchmarks/5.7-iimpi-2020b - - x x x x OSU-Micro-Benchmarks/5.7-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020b - x x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-iimpi-2019b - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-gompi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Oases/", "title": "Oases", "text": ""}, {"location": "available_software/detail/Oases/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Oases installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Oases, load one of these modules using a module load command like:

              module load Oases/20180312-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Oases/20180312-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Omnipose/", "title": "Omnipose", "text": ""}, {"location": "available_software/detail/Omnipose/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Omnipose installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Omnipose, load one of these modules using a module load command like:

              module load Omnipose/0.4.4-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Omnipose/0.4.4-foss-2022a-CUDA-11.7.0 x - - - x - Omnipose/0.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/OpenAI-Gym/", "title": "OpenAI-Gym", "text": ""}, {"location": "available_software/detail/OpenAI-Gym/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenAI-Gym installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenAI-Gym, load one of these modules using a module load command like:

              module load OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenBLAS/", "title": "OpenBLAS", "text": ""}, {"location": "available_software/detail/OpenBLAS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenBLAS, load one of these modules using a module load command like:

              module load OpenBLAS/0.3.24-GCC-13.2.0\n
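
              As a sketch (myprog.c is a placeholder for a C program calling BLAS/LAPACK routines, and it is assumed the module exports the usual compiler and library search paths, as EasyBuild-provided modules typically do), linking against OpenBLAS could look like:

              module load OpenBLAS/0.3.24-GCC-13.2.0\ngcc myprog.c -o myprog -lopenblas -lm  # myprog.c is a placeholder source file\n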

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x OpenBLAS/0.3.20-GCC-11.3.0 x x x x x x OpenBLAS/0.3.18-GCC-11.2.0 x x x x x x OpenBLAS/0.3.15-GCC-10.3.0 x x x x x x OpenBLAS/0.3.12-GCC-10.2.0 x x x x x x OpenBLAS/0.3.9-GCC-9.3.0 - x x - x x OpenBLAS/0.3.7-GCC-8.3.0 x x x - x x"}, {"location": "available_software/detail/OpenBabel/", "title": "OpenBabel", "text": ""}, {"location": "available_software/detail/OpenBabel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenBabel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenBabel, load one of these modules using a module load command like:

              module load OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/OpenCV/", "title": "OpenCV", "text": ""}, {"location": "available_software/detail/OpenCV/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenCV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenCV, load one of these modules using a module load command like:

              module load OpenCV/4.6.0-foss-2022a-contrib\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenCV/4.6.0-foss-2022a-contrib x x x x x x OpenCV/4.6.0-foss-2022a-CUDA-11.7.0-contrib x - x - x - OpenCV/4.5.5-foss-2021b-contrib x x x - x x OpenCV/4.5.3-foss-2021a-contrib - x x - x x OpenCV/4.5.3-foss-2021a-CUDA-11.3.1-contrib x - - - x - OpenCV/4.5.1-fosscuda-2020b-contrib x - - - x - OpenCV/4.5.1-foss-2020b-contrib - x x - x x OpenCV/4.2.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenCoarrays/", "title": "OpenCoarrays", "text": ""}, {"location": "available_software/detail/OpenCoarrays/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenCoarrays installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenCoarrays, load one of these modules using a module load command like:

              module load OpenCoarrays/2.8.0-gompi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenCoarrays/2.8.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenEXR/", "title": "OpenEXR", "text": ""}, {"location": "available_software/detail/OpenEXR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenEXR, load one of these modules using a module load command like:

              module load OpenEXR/3.1.7-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x OpenEXR/3.1.5-GCCcore-11.3.0 x x x x x x OpenEXR/3.1.1-GCCcore-11.2.0 x x x - x x OpenEXR/3.0.1-GCCcore-10.3.0 x x x - x x OpenEXR/2.5.5-GCCcore-10.2.0 x x x x x x OpenEXR/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenFOAM-Extend/", "title": "OpenFOAM-Extend", "text": ""}, {"location": "available_software/detail/OpenFOAM-Extend/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenFOAM-Extend installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenFOAM-Extend, load one of these modules using a module load command like:

              module load OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16 - x x - x x OpenFOAM-Extend/4.1-20191120-intel-2019b-Python-2.7.16 - x x - x - OpenFOAM-Extend/4.0-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/OpenFOAM/", "title": "OpenFOAM", "text": ""}, {"location": "available_software/detail/OpenFOAM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenFOAM, load one of these modules using a module load command like:

              module load OpenFOAM/v2206-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenFOAM/v2206-foss-2022a x x x x x x OpenFOAM/v2112-foss-2021b x x x x x x OpenFOAM/v2106-foss-2021a x x x x x x OpenFOAM/v2012-foss-2020a - x x - x x OpenFOAM/v2006-foss-2020a - x x - x x OpenFOAM/v1912-foss-2019b - x x - x x OpenFOAM/v1906-foss-2019b - x x - x x OpenFOAM/10-foss-2023a x x x x x x OpenFOAM/10-foss-2022a x x x x x x OpenFOAM/9-intel-2021a - x x - x x OpenFOAM/9-foss-2021a x x x x x x OpenFOAM/8-intel-2020b - x - - - - OpenFOAM/8-foss-2020b x x x x x x OpenFOAM/8-foss-2020a - x x - x x OpenFOAM/7-foss-2019b-20200508 x x x - x x OpenFOAM/7-foss-2019b - x x - x x OpenFOAM/6-foss-2019b - x x - x x OpenFOAM/5.0-20180606-foss-2019b - x x - x x OpenFOAM/2.3.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/OpenFace/", "title": "OpenFace", "text": ""}, {"location": "available_software/detail/OpenFace/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenFace installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenFace, load one of these modules using a module load command like:

              module load OpenFace/2.2.0-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenFace/2.2.0-foss-2021a-CUDA-11.3.1 - - - - x - OpenFace/2.2.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/OpenFold/", "title": "OpenFold", "text": ""}, {"location": "available_software/detail/OpenFold/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenFold, load one of these modules using a module load command like:

              module load OpenFold/1.0.1-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenFold/1.0.1-foss-2022a-CUDA-11.7.0 - - x - - - OpenFold/1.0.1-foss-2021a-CUDA-11.3.1 x - - - x - OpenFold/1.0.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/OpenForceField/", "title": "OpenForceField", "text": ""}, {"location": "available_software/detail/OpenForceField/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenForceField installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenForceField, load one of these modules using a module load command like:

              module load OpenForceField/0.7.0-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenForceField/0.7.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenImageIO/", "title": "OpenImageIO", "text": ""}, {"location": "available_software/detail/OpenImageIO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenImageIO, load one of these modules using a module load command like:

              module load OpenImageIO/2.0.12-iimpi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenImageIO/2.0.12-iimpi-2019b - x x - x x OpenImageIO/2.0.12-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenJPEG/", "title": "OpenJPEG", "text": ""}, {"location": "available_software/detail/OpenJPEG/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenJPEG, load one of these modules using a module load command like:

              module load OpenJPEG/2.5.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x OpenJPEG/2.5.0-GCCcore-11.3.0 x x x x x x OpenJPEG/2.4.0-GCCcore-11.2.0 x x x x x x OpenJPEG/2.4.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/OpenMM-PLUMED/", "title": "OpenMM-PLUMED", "text": ""}, {"location": "available_software/detail/OpenMM-PLUMED/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenMM-PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenMM-PLUMED, load one of these modules using a module load command like:

              module load OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMM/", "title": "OpenMM", "text": ""}, {"location": "available_software/detail/OpenMM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenMM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenMM, load one of these modules using a module load command like:

              module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenMM/8.0.0-foss-2022a-CUDA-11.7.0 x - - - x - OpenMM/8.0.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2022a-CUDA-11.7.0 - - x - - - OpenMM/7.7.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2021a-CUDA-11.3.1 x - - - x - OpenMM/7.7.0-foss-2021a x x x - x x OpenMM/7.5.1-fosscuda-2020b x - - - x - OpenMM/7.5.1-foss-2021b-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021b-CUDA-11.4.1-DeepMind-patch x - - - x - OpenMM/7.5.1-foss-2021a-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021a-CUDA-11.3.1-DeepMind-patch x - - - x - OpenMM/7.5.0-intel-2020b - x x - x x OpenMM/7.5.0-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.5.0-fosscuda-2020b x - - - x - OpenMM/7.5.0-foss-2020b x x x x x x OpenMM/7.4.2-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.4.1-intel-2019b-Python-3.7.4 - x x - x x OpenMM/7.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenMMTools/", "title": "OpenMMTools", "text": ""}, {"location": "available_software/detail/OpenMMTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenMMTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenMMTools, load one of these modules using a module load command like:

              module load OpenMMTools/0.20.0-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenMMTools/0.20.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMPI/", "title": "OpenMPI", "text": ""}, {"location": "available_software/detail/OpenMPI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenMPI, load one of these modules using a module load command like:

              module load OpenMPI/4.1.6-GCC-13.2.0\n
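
              A minimal sketch of compiling and launching an MPI program (hello.c is a placeholder source file; on the clusters the mpirun step would normally go inside a batch job script):

              module load OpenMPI/4.1.6-GCC-13.2.0\nmpicc hello.c -o hello_mpi  # hello.c is a placeholder MPI source file\nmpirun -np 4 ./hello_mpi  # run with 4 ranks\n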

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenMPI/4.1.6-GCC-13.2.0 x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x OpenMPI/4.1.4-GCC-11.3.0 x x x x x x OpenMPI/4.1.1-intel-compilers-2021.2.0 x x x x x x OpenMPI/4.1.1-GCC-11.2.0 x x x x x x OpenMPI/4.1.1-GCC-10.3.0 x x x x x x OpenMPI/4.0.5-iccifort-2020.4.304 x x x x x x OpenMPI/4.0.5-gcccuda-2020b x x x x x x OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1 x - x - x - OpenMPI/4.0.5-GCC-10.2.0 x x x x x x OpenMPI/4.0.3-iccifort-2020.1.217 - x - - - - OpenMPI/4.0.3-GCC-9.3.0 - x x x x x OpenMPI/3.1.4-GCC-8.3.0-ucx - x - - - - OpenMPI/3.1.4-GCC-8.3.0 x x x x x x"}, {"location": "available_software/detail/OpenMolcas/", "title": "OpenMolcas", "text": ""}, {"location": "available_software/detail/OpenMolcas/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenMolcas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenMolcas, load one of these modules using a module load command like:

              module load OpenMolcas/21.06-iomkl-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenMolcas/21.06-iomkl-2021a x x x x x x OpenMolcas/21.06-intel-2021a - x x - x x"}, {"location": "available_software/detail/OpenPGM/", "title": "OpenPGM", "text": ""}, {"location": "available_software/detail/OpenPGM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenPGM, load one of these modules using a module load command like:

              module load OpenPGM/5.2.122-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-12.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-9.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenPIV/", "title": "OpenPIV", "text": ""}, {"location": "available_software/detail/OpenPIV/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenPIV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using OpenPIV, load one of these modules using a module load command like:

              module load OpenPIV/0.21.8-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenPIV/0.21.8-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenSSL/", "title": "OpenSSL", "text": ""}, {"location": "available_software/detail/OpenSSL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using OpenSSL, load one of these modules using a module load command like:

              module load OpenSSL/1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenSSL/1.1 x x x x x x"}, {"location": "available_software/detail/OpenSees/", "title": "OpenSees", "text": ""}, {"location": "available_software/detail/OpenSees/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenSees installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using OpenSees, load one of these modules using a module load command like:

              module load OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel - x x - x x OpenSees/3.2.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenSlide-Java/", "title": "OpenSlide-Java", "text": ""}, {"location": "available_software/detail/OpenSlide-Java/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenSlide-Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using OpenSlide-Java, load one of these modules using a module load command like:

              module load OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/OpenSlide/", "title": "OpenSlide", "text": ""}, {"location": "available_software/detail/OpenSlide/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OpenSlide installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using OpenSlide, load one of these modules using a module load command like:

              module load OpenSlide/3.4.1-GCCcore-12.3.0-largefiles\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OpenSlide/3.4.1-GCCcore-12.3.0-largefiles x x x x x x OpenSlide/3.4.1-GCCcore-11.3.0-largefiles x - x - x - OpenSlide/3.4.1-GCCcore-11.2.0 x x x - x x OpenSlide/3.4.1-GCCcore-10.3.0-largefiles x x x - x x"}, {"location": "available_software/detail/Optuna/", "title": "Optuna", "text": ""}, {"location": "available_software/detail/Optuna/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Optuna installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Optuna, load one of these modules using a module load command like:

              module load Optuna/3.1.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Optuna/3.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/OrthoFinder/", "title": "OrthoFinder", "text": ""}, {"location": "available_software/detail/OrthoFinder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which OrthoFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using OrthoFinder, load one of these modules using a module load command like:

              module load OrthoFinder/2.5.5-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty OrthoFinder/2.5.5-foss-2023a x x x x x x OrthoFinder/2.5.4-foss-2020b - x x x x x OrthoFinder/2.5.2-foss-2020b - x x x x x OrthoFinder/2.3.11-intel-2019b-Python-3.7.4 - x x - x x OrthoFinder/2.3.8-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Osi/", "title": "Osi", "text": ""}, {"location": "available_software/detail/Osi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Osi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Osi, load one of these modules using a module load command like:

              module load Osi/0.108.9-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Osi/0.108.9-GCC-12.3.0 x x x x x x Osi/0.108.8-GCC-12.2.0 x x x x x x Osi/0.108.7-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PASA/", "title": "PASA", "text": ""}, {"location": "available_software/detail/PASA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PASA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PASA, load one of these modules using a module load command like:

              module load PASA/2.5.3-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PASA/2.5.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/PBGZIP/", "title": "PBGZIP", "text": ""}, {"location": "available_software/detail/PBGZIP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PBGZIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PBGZIP, load one of these modules using a module load command like:

              module load PBGZIP/20160804-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PBGZIP/20160804-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PCRE/", "title": "PCRE", "text": ""}, {"location": "available_software/detail/PCRE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PCRE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PCRE, load one of these modules using a module load command like:

              module load PCRE/8.45-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PCRE/8.45-GCCcore-12.3.0 x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x PCRE/8.45-GCCcore-11.3.0 x x x x x x PCRE/8.45-GCCcore-11.2.0 x x x x x x PCRE/8.44-GCCcore-10.3.0 x x x x x x PCRE/8.44-GCCcore-10.2.0 x x x x x x PCRE/8.44-GCCcore-9.3.0 x x x x x x PCRE/8.43-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/PCRE2/", "title": "PCRE2", "text": ""}, {"location": "available_software/detail/PCRE2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PCRE2, load one of these modules using a module load command like:

              module load PCRE2/10.42-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PCRE2/10.42-GCCcore-12.3.0 x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x PCRE2/10.40-GCCcore-11.3.0 x x x x x x PCRE2/10.37-GCCcore-11.2.0 x x x x x x PCRE2/10.36-GCCcore-10.3.0 x x x x x x PCRE2/10.36 - x x - x - PCRE2/10.35-GCCcore-10.2.0 x x x x x x PCRE2/10.34-GCCcore-9.3.0 - x x - x x PCRE2/10.33-GCCcore-8.3.0 x x x - x x PCRE2/10.32 - - x - x -"}, {"location": "available_software/detail/PEAR/", "title": "PEAR", "text": ""}, {"location": "available_software/detail/PEAR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PEAR, load one of these modules using a module load command like:

              module load PEAR/0.9.11-GCCcore-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PEAR/0.9.11-GCCcore-9.3.0 - x x - x x PEAR/0.9.11-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/PETSc/", "title": "PETSc", "text": ""}, {"location": "available_software/detail/PETSc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PETSc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PETSc, load one of these modules using a module load command like:

              module load PETSc/3.18.4-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PETSc/3.18.4-intel-2021b x x x x x x PETSc/3.17.4-foss-2022a x x x x x x PETSc/3.15.1-foss-2021a - x x - x x PETSc/3.14.4-foss-2020b - x x x x x PETSc/3.12.4-intel-2019b-Python-3.7.4 - - x - x - PETSc/3.12.4-intel-2019b-Python-2.7.16 - x x - x x PETSc/3.12.4-foss-2020a-Python-3.8.2 - x x - x x PETSc/3.12.4-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/PHYLIP/", "title": "PHYLIP", "text": ""}, {"location": "available_software/detail/PHYLIP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PHYLIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PHYLIP, load one of these modules using a module load command like:

              module load PHYLIP/3.697-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PHYLIP/3.697-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/PICRUSt2/", "title": "PICRUSt2", "text": ""}, {"location": "available_software/detail/PICRUSt2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PICRUSt2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PICRUSt2, load one of these modules using a module load command like:

              module load PICRUSt2/2.5.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PICRUSt2/2.5.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/PLAMS/", "title": "PLAMS", "text": ""}, {"location": "available_software/detail/PLAMS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PLAMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PLAMS, load one of these modules using a module load command like:

              module load PLAMS/1.5.1-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PLAMS/1.5.1-intel-2022a x x x x x x"}, {"location": "available_software/detail/PLINK/", "title": "PLINK", "text": ""}, {"location": "available_software/detail/PLINK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PLINK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PLINK, load one of these modules using a module load command like:

              module load PLINK/2.00a3.1-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PLINK/2.00a3.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PLUMED/", "title": "PLUMED", "text": ""}, {"location": "available_software/detail/PLUMED/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PLUMED, load one of these modules using a module load command like:

              module load PLUMED/2.9.0-foss-2023a\n
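
              With this many toolchain variants in the list below, module spider is a convenient way to see every available version and what is required to load a specific one (a minimal sketch using standard Lmod commands):

              module spider PLUMED                     # list all available PLUMED versions
              module spider PLUMED/2.9.0-foss-2023a    # show how to load this specific version
              module load PLUMED/2.9.0-foss-2023a
              module list                              # confirm PLUMED and its dependencies are loaded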

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PLUMED/2.9.0-foss-2023a x x x x x x PLUMED/2.9.0-foss-2022b x x x x x x PLUMED/2.8.1-foss-2022a x x x x x x PLUMED/2.7.3-foss-2021b x x x - x x PLUMED/2.7.2-foss-2021a x x x x x x PLUMED/2.6.2-intelcuda-2020b - - - - x - PLUMED/2.6.2-intel-2020b - x x - x - PLUMED/2.6.2-foss-2020b - x x x x x PLUMED/2.6.0-iomkl-2020a-Python-3.8.2 - x - - - - PLUMED/2.6.0-intel-2020a-Python-3.8.2 - x x - x x PLUMED/2.6.0-foss-2020a-Python-3.8.2 - x x - x x PLUMED/2.5.3-intel-2019b-Python-3.7.4 - x x - x x PLUMED/2.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PLY/", "title": "PLY", "text": ""}, {"location": "available_software/detail/PLY/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PLY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PLY, load one of these modules using a module load command like:

              module load PLY/3.11-GCCcore-8.3.0-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PLY/3.11-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PMIx/", "title": "PMIx", "text": ""}, {"location": "available_software/detail/PMIx/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PMIx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PMIx, load one of these modules using a module load command like:

              module load PMIx/4.2.6-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PMIx/4.2.6-GCCcore-13.2.0 x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x PMIx/4.1.2-GCCcore-11.3.0 x x x x x x PMIx/4.1.0-GCCcore-11.2.0 x x x x x x PMIx/3.2.3-GCCcore-10.3.0 x x x x x x PMIx/3.1.5-GCCcore-10.2.0 x x x x x x PMIx/3.1.5-GCCcore-9.3.0 x x x x x x PMIx/3.1.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/POT/", "title": "POT", "text": ""}, {"location": "available_software/detail/POT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which POT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using POT, load one of these modules using a module load command like:

              module load POT/0.9.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty POT/0.9.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/POV-Ray/", "title": "POV-Ray", "text": ""}, {"location": "available_software/detail/POV-Ray/#available-modules", "title": "Available modules", "text": "

              The overview below shows which POV-Ray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using POV-Ray, load one of these modules using a module load command like:

              module load POV-Ray/3.7.0.8-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty POV-Ray/3.7.0.8-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/PPanGGOLiN/", "title": "PPanGGOLiN", "text": ""}, {"location": "available_software/detail/PPanGGOLiN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PPanGGOLiN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PPanGGOLiN, load one of these modules using a module load command like:

              module load PPanGGOLiN/1.1.136-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PPanGGOLiN/1.1.136-foss-2021b x x x - x x"}, {"location": "available_software/detail/PRANK/", "title": "PRANK", "text": ""}, {"location": "available_software/detail/PRANK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PRANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PRANK, load one of these modules using a module load command like:

              module load PRANK/170427-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PRANK/170427-GCC-10.2.0 - x x x x x PRANK/170427-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/PRINSEQ/", "title": "PRINSEQ", "text": ""}, {"location": "available_software/detail/PRINSEQ/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PRINSEQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PRINSEQ, load one of these modules using a module load command like:

              module load PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0 x x x - x x PRINSEQ/0.20.4-foss-2020b-Perl-5.32.0 - x x x x -"}, {"location": "available_software/detail/PRISMS-PF/", "title": "PRISMS-PF", "text": ""}, {"location": "available_software/detail/PRISMS-PF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PRISMS-PF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PRISMS-PF, load one of these modules using a module load command like:

              module load PRISMS-PF/2.2-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PRISMS-PF/2.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/PROJ/", "title": "PROJ", "text": ""}, {"location": "available_software/detail/PROJ/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PROJ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PROJ, load one of these modules using a module load command like:

              module load PROJ/9.2.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PROJ/9.2.0-GCCcore-12.3.0 x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x PROJ/9.0.0-GCCcore-11.3.0 x x x x x x PROJ/8.1.0-GCCcore-11.2.0 x x x x x x PROJ/8.0.1-GCCcore-10.3.0 x x x x x x PROJ/7.2.1-GCCcore-10.2.0 - x x x x x PROJ/7.0.0-GCCcore-9.3.0 - x x - x x PROJ/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pandoc/", "title": "Pandoc", "text": ""}, {"location": "available_software/detail/Pandoc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pandoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Pandoc, load one of these modules using a module load command like:

              module load Pandoc/2.13\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pandoc/2.13 - x x x x x"}, {"location": "available_software/detail/Pango/", "title": "Pango", "text": ""}, {"location": "available_software/detail/Pango/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pango installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Pango, load one of these modules using a module load command like:

              module load Pango/1.50.14-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pango/1.50.14-GCCcore-12.3.0 x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x Pango/1.50.7-GCCcore-11.3.0 x x x x x x Pango/1.48.8-GCCcore-11.2.0 x x x x x x Pango/1.48.5-GCCcore-10.3.0 x x x x x x Pango/1.47.0-GCCcore-10.2.0 x x x x x x Pango/1.44.7-GCCcore-9.3.0 - x x - x x Pango/1.44.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/ParMETIS/", "title": "ParMETIS", "text": ""}, {"location": "available_software/detail/ParMETIS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ParMETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ParMETIS, load one of these modules using a module load command like:

              module load ParMETIS/4.0.3-iimpi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ParMETIS/4.0.3-iimpi-2020a - x x - x x ParMETIS/4.0.3-iimpi-2019b - x x - x x ParMETIS/4.0.3-gompi-2022a x x x x x x ParMETIS/4.0.3-gompi-2021a - x x - x x ParMETIS/4.0.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParMGridGen/", "title": "ParMGridGen", "text": ""}, {"location": "available_software/detail/ParMGridGen/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ParMGridGen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ParMGridGen, load one of these modules using a module load command like:

              module load ParMGridGen/1.0-iimpi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ParMGridGen/1.0-iimpi-2019b - x x - x x ParMGridGen/1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParaView/", "title": "ParaView", "text": ""}, {"location": "available_software/detail/ParaView/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ParaView installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ParaView, load one of these modules using a module load command like:

              module load ParaView/5.11.2-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ParaView/5.11.2-foss-2023a x x x x x x ParaView/5.10.1-foss-2022a-mpi x x x x x x ParaView/5.9.1-intel-2021a-mpi - x x - x x ParaView/5.9.1-foss-2021b-mpi x x x x x x ParaView/5.9.1-foss-2021a-mpi x x x x x x ParaView/5.8.1-intel-2020b-mpi - x - - - - ParaView/5.8.1-foss-2020b-mpi x x x x x x ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi - x x - x x ParaView/5.6.2-foss-2019b-Python-3.7.4-mpi x x x - x x ParaView/5.4.1-foss-2019b-Python-2.7.16-mpi - x x - x x"}, {"location": "available_software/detail/ParmEd/", "title": "ParmEd", "text": ""}, {"location": "available_software/detail/ParmEd/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ParmEd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ParmEd, load one of these modules using a module load command like:

              module load ParmEd/3.2.0-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ParmEd/3.2.0-intel-2020a-Python-3.8.2 - x x - x x ParmEd/3.2.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Parsl/", "title": "Parsl", "text": ""}, {"location": "available_software/detail/Parsl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Parsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Parsl, load one of these modules using a module load command like:

              module load Parsl/2023.7.17-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Parsl/2023.7.17-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PartitionFinder/", "title": "PartitionFinder", "text": ""}, {"location": "available_software/detail/PartitionFinder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PartitionFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PartitionFinder, load one of these modules using a module load command like:

              module load PartitionFinder/2.1.1-intel-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PartitionFinder/2.1.1-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/Perl-bundle-CPAN/", "title": "Perl-bundle-CPAN", "text": ""}, {"location": "available_software/detail/Perl-bundle-CPAN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Perl-bundle-CPAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

              module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Perl/", "title": "Perl", "text": ""}, {"location": "available_software/detail/Perl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Perl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Perl, load one of these modules using a module load command like:

              module load Perl/5.38.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Perl/5.38.0-GCCcore-13.2.0 x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x Perl/5.34.1-GCCcore-11.3.0-minimal x x x x x x Perl/5.34.1-GCCcore-11.3.0 x x x x x x Perl/5.34.0-GCCcore-11.2.0-minimal x x x x x x Perl/5.34.0-GCCcore-11.2.0 x x x x x x Perl/5.32.1-GCCcore-10.3.0-minimal x x x x x x Perl/5.32.1-GCCcore-10.3.0 x x x x x x Perl/5.32.0-GCCcore-10.2.0-minimal x x x x x x Perl/5.32.0-GCCcore-10.2.0 x x x x x x Perl/5.30.2-GCCcore-9.3.0-minimal x x x x x x Perl/5.30.2-GCCcore-9.3.0 x x x x x x Perl/5.30.0-GCCcore-8.3.0-minimal x x x x x x Perl/5.30.0-GCCcore-8.3.0 x x x x x x Perl/5.28.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Phenoflow/", "title": "Phenoflow", "text": ""}, {"location": "available_software/detail/Phenoflow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Phenoflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Phenoflow, load one of these modules using a module load command like:

              module load Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/PhyloPhlAn/", "title": "PhyloPhlAn", "text": ""}, {"location": "available_software/detail/PhyloPhlAn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PhyloPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PhyloPhlAn, load one of these modules using a module load command like:

              module load PhyloPhlAn/3.0.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PhyloPhlAn/3.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Pillow-SIMD/", "title": "Pillow-SIMD", "text": ""}, {"location": "available_software/detail/Pillow-SIMD/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Pillow-SIMD, load one of these modules using a module load command like:

              module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x Pillow-SIMD/9.5.0-GCCcore-12.2.0 x x x x x x Pillow-SIMD/9.2.0-GCCcore-11.3.0 x x x x x x Pillow-SIMD/8.2.0-GCCcore-10.3.0 x x x - x x Pillow-SIMD/7.1.2-GCCcore-10.2.0 x x x x x x Pillow-SIMD/6.0.x.post0-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/Pillow/", "title": "Pillow", "text": ""}, {"location": "available_software/detail/Pillow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pillow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Pillow, load one of these modules using a module load command like:

              module load Pillow/10.2.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pillow/10.2.0-GCCcore-13.2.0 x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x Pillow/9.1.1-GCCcore-11.3.0 x x x x x x Pillow/8.3.2-GCCcore-11.2.0 x x x x x x Pillow/8.3.1-GCCcore-11.2.0 x x x - x x Pillow/8.2.0-GCCcore-10.3.0 x x x x x x Pillow/8.0.1-GCCcore-10.2.0 x x x x x x Pillow/7.0.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x Pillow/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pilon/", "title": "Pilon", "text": ""}, {"location": "available_software/detail/Pilon/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pilon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Pilon, load one of these modules using a module load command like:

              module load Pilon/1.23-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pilon/1.23-Java-11 x x x x x x Pilon/1.23-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Pint/", "title": "Pint", "text": ""}, {"location": "available_software/detail/Pint/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Pint, load one of these modules using a module load command like:

              module load Pint/0.22-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pint/0.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PnetCDF/", "title": "PnetCDF", "text": ""}, {"location": "available_software/detail/PnetCDF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PnetCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PnetCDF, load one of these modules using a module load command like:

              module load PnetCDF/1.12.3-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PnetCDF/1.12.3-gompi-2022a x - x - x - PnetCDF/1.12.3-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Porechop/", "title": "Porechop", "text": ""}, {"location": "available_software/detail/Porechop/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Porechop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Porechop, load one of these modules using a module load command like:

              module load Porechop/0.2.4-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Porechop/0.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PostgreSQL/", "title": "PostgreSQL", "text": ""}, {"location": "available_software/detail/PostgreSQL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PostgreSQL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PostgreSQL, load one of these modules using a module load command like:

              module load PostgreSQL/16.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x PostgreSQL/14.4-GCCcore-11.3.0 x x x x x x PostgreSQL/13.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/Primer3/", "title": "Primer3", "text": ""}, {"location": "available_software/detail/Primer3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Primer3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using Primer3, load one of these modules using a module load command like:

              module load Primer3/2.5.0-GCC-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Primer3/2.5.0-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/ProBiS/", "title": "ProBiS", "text": ""}, {"location": "available_software/detail/ProBiS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ProBiS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ProBiS, load one of these modules using a module load command like:

              module load ProBiS/20230403-gompi-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ProBiS/20230403-gompi-2022b x x x x x x"}, {"location": "available_software/detail/ProtHint/", "title": "ProtHint", "text": ""}, {"location": "available_software/detail/ProtHint/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ProtHint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ProtHint, load one of these modules using a module load command like:

              module load ProtHint/2.6.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ProtHint/2.6.0-GCC-11.3.0 x x x x x x ProtHint/2.6.0-GCC-11.2.0 x x x x x x ProtHint/2.6.0-GCC-10.2.0 x x x x x x ProtHint/2.4.0-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/PsiCLASS/", "title": "PsiCLASS", "text": ""}, {"location": "available_software/detail/PsiCLASS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PsiCLASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PsiCLASS, load one of these modules using a module load command like:

              module load PsiCLASS/1.0.3-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PsiCLASS/1.0.3-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/PuLP/", "title": "PuLP", "text": ""}, {"location": "available_software/detail/PuLP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PuLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PuLP, load one of these modules using a module load command like:

              module load PuLP/2.8.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PuLP/2.8.0-foss-2023a x x x x x x PuLP/2.7.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/PyBerny/", "title": "PyBerny", "text": ""}, {"location": "available_software/detail/PyBerny/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyBerny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyBerny, load one of these modules using a module load command like:

              module load PyBerny/0.6.3-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyBerny/0.6.3-foss-2022b x x x x x x PyBerny/0.6.3-foss-2022a - x x x x x PyBerny/0.6.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyCairo/", "title": "PyCairo", "text": ""}, {"location": "available_software/detail/PyCairo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyCairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyCairo, load one of these modules using a module load command like:

              module load PyCairo/1.21.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyCairo/1.21.0-GCCcore-11.3.0 x x x x x x PyCairo/1.20.1-GCCcore-11.2.0 x x x x x x PyCairo/1.20.1-GCCcore-10.3.0 x x x x x x PyCairo/1.20.0-GCCcore-10.2.0 - x x x x x PyCairo/1.18.2-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/PyCalib/", "title": "PyCalib", "text": ""}, {"location": "available_software/detail/PyCalib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyCalib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyCalib, load one of these modules using a module load command like:

              module load PyCalib/20230531-gfbf-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyCalib/20230531-gfbf-2022b x x x x x x PyCalib/0.1.0.dev0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyCheMPS2/", "title": "PyCheMPS2", "text": ""}, {"location": "available_software/detail/PyCheMPS2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyCheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyCheMPS2, load one of these modules using a module load command like:

              module load PyCheMPS2/1.8.12-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyCheMPS2/1.8.12-foss-2022b x x x x x x PyCheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/PyFoam/", "title": "PyFoam", "text": ""}, {"location": "available_software/detail/PyFoam/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyFoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyFoam, load one of these modules using a module load command like:

              module load PyFoam/2020.5-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyFoam/2020.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyGEOS/", "title": "PyGEOS", "text": ""}, {"location": "available_software/detail/PyGEOS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyGEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyGEOS, load one of these modules using a module load command like:

              module load PyGEOS/0.8-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyGEOS/0.8-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyGObject/", "title": "PyGObject", "text": ""}, {"location": "available_software/detail/PyGObject/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyGObject installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyGObject, load one of these modules using a module load command like:

              module load PyGObject/3.42.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyGObject/3.42.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyInstaller/", "title": "PyInstaller", "text": ""}, {"location": "available_software/detail/PyInstaller/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyInstaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyInstaller, load one of these modules using a module load command like:

              module load PyInstaller/6.3.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyInstaller/6.3.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/PyKeOps/", "title": "PyKeOps", "text": ""}, {"location": "available_software/detail/PyKeOps/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyKeOps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyKeOps, load one of these modules using a module load command like:

              module load PyKeOps/2.0-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyKeOps/2.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/PyMC/", "title": "PyMC", "text": ""}, {"location": "available_software/detail/PyMC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyMC, load one of these modules using a module load command like:

              module load PyMC/5.9.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyMC/5.9.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/PyMC3/", "title": "PyMC3", "text": ""}, {"location": "available_software/detail/PyMC3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyMC3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyMC3, load one of these modules using a module load command like:

              module load PyMC3/3.11.1-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyMC3/3.11.1-intel-2021b x x x - x x PyMC3/3.11.1-intel-2020b - - x - x x PyMC3/3.11.1-fosscuda-2020b - - - - x - PyMC3/3.8-intel-2019b-Python-3.7.4 - - x - x x PyMC3/3.8-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyMDE/", "title": "PyMDE", "text": ""}, {"location": "available_software/detail/PyMDE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyMDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyMDE, load one of these modules using a module load command like:

              module load PyMDE/0.1.18-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyMDE/0.1.18-foss-2022a-CUDA-11.7.0 x - x - x - PyMDE/0.1.18-foss-2022a x x x x x x"}, {"location": "available_software/detail/PyMOL/", "title": "PyMOL", "text": ""}, {"location": "available_software/detail/PyMOL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyMOL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyMOL, load one of these modules using a module load command like:

              module load PyMOL/2.5.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyMOL/2.5.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/PyOD/", "title": "PyOD", "text": ""}, {"location": "available_software/detail/PyOD/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyOD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyOD, load one of these modules using a module load command like:

              module load PyOD/0.8.7-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyOD/0.8.7-intel-2020b - x x - x x PyOD/0.8.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenCL/", "title": "PyOpenCL", "text": ""}, {"location": "available_software/detail/PyOpenCL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyOpenCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyOpenCL, load one of these modules using a module load command like:

              module load PyOpenCL/2023.1.4-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyOpenCL/2023.1.4-foss-2023a x x x x x x PyOpenCL/2023.1.4-foss-2022a-CUDA-11.7.0 x - - - x - PyOpenCL/2023.1.4-foss-2022a x x x x x x PyOpenCL/2021.2.13-foss-2021b-CUDA-11.4.1 x - - - x - PyOpenCL/2021.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenGL/", "title": "PyOpenGL", "text": ""}, {"location": "available_software/detail/PyOpenGL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyOpenGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyOpenGL, load one of these modules using a module load command like:

              module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.2.0 x x x - x x PyOpenGL/3.1.5-GCCcore-10.3.0 - x x - x x PyOpenGL/3.1.5-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/PyPy/", "title": "PyPy", "text": ""}, {"location": "available_software/detail/PyPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyPy, load one of these modules using a module load command like:

              module load PyPy/7.3.12-3.10\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyPy/7.3.12-3.10 x x x x x x"}, {"location": "available_software/detail/PyQt5/", "title": "PyQt5", "text": ""}, {"location": "available_software/detail/PyQt5/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyQt5, load one of these modules using a module load command like:

              module load PyQt5/5.15.7-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyQt5/5.15.7-GCCcore-12.2.0 x x x x x x PyQt5/5.15.5-GCCcore-11.3.0 x x x x x x PyQt5/5.15.4-GCCcore-11.2.0 x x x x x x PyQt5/5.15.4-GCCcore-10.3.0 - x x - x x PyQt5/5.15.1-GCCcore-10.2.0 x x x x x x PyQt5/5.15.1-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyQtGraph/", "title": "PyQtGraph", "text": ""}, {"location": "available_software/detail/PyQtGraph/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyQtGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyQtGraph, load one of these modules using a module load command like:

              module load PyQtGraph/0.13.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyQtGraph/0.13.3-foss-2022a x x x x x x PyQtGraph/0.12.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/PyRETIS/", "title": "PyRETIS", "text": ""}, {"location": "available_software/detail/PyRETIS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyRETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyRETIS, load one of these modules using a module load command like:

              module load PyRETIS/2.5.0-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyRETIS/2.5.0-intel-2020b - x x - x x PyRETIS/2.5.0-intel-2020a-Python-3.8.2 - - x - x x PyRETIS/2.5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyRe/", "title": "PyRe", "text": ""}, {"location": "available_software/detail/PyRe/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyRe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using PyRe, load one of these modules using a module load command like:

              module load PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4 - x - - - x PyRe/5.0.3-20190221-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PySCF/", "title": "PySCF", "text": ""}, {"location": "available_software/detail/PySCF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PySCF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PySCF, load one of these modules using a module load command like:

              module load PySCF/2.4.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PySCF/2.4.0-foss-2022b x x x x x x PySCF/2.1.1-foss-2022a - x x x x x PySCF/1.7.6-gomkl-2021a x x x - x x PySCF/1.7.6-foss-2021a x x x - x x"}, {"location": "available_software/detail/PyStan/", "title": "PyStan", "text": ""}, {"location": "available_software/detail/PyStan/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyStan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyStan, load one of these modules using a module load command like:

              module load PyStan/2.19.1.1-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyStan/2.19.1.1-intel-2020b - x x - x x"}, {"location": "available_software/detail/PyTables/", "title": "PyTables", "text": ""}, {"location": "available_software/detail/PyTables/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyTables installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyTables, load one of these modules using a module load command like:

              module load PyTables/3.8.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyTables/3.8.0-foss-2022a x x x x x x PyTables/3.6.1-intel-2020b - x x - x x PyTables/3.6.1-intel-2020a-Python-3.8.2 x x x x x x PyTables/3.6.1-fosscuda-2020b - - - - x - PyTables/3.6.1-foss-2021b x x x x x x PyTables/3.6.1-foss-2021a x x x x x x PyTables/3.6.1-foss-2020b - x x x x x PyTables/3.6.1-foss-2020a-Python-3.8.2 - x x - x x PyTables/3.6.1-foss-2019b-Python-3.7.4 - x x - x x PyTables/3.5.2-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/PyTensor/", "title": "PyTensor", "text": ""}, {"location": "available_software/detail/PyTensor/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyTensor installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyTensor, load one of these modules using a module load command like:

              module load PyTensor/2.17.1-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyTensor/2.17.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/PyTorch-Geometric/", "title": "PyTorch-Geometric", "text": ""}, {"location": "available_software/detail/PyTorch-Geometric/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyTorch-Geometric installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyTorch-Geometric, load one of these modules using a module load command like:

              module load PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1 - - - - x - PyTorch-Geometric/1.7.0-foss-2020b-numba-0.53.1 - x x - x x PyTorch-Geometric/1.6.3-fosscuda-2020b - - - - x - PyTorch-Geometric/1.4.2-foss-2019b-Python-3.7.4-PyTorch-1.4.0 - x x - x x PyTorch-Geometric/1.3.2-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PyTorch-Ignite/", "title": "PyTorch-Ignite", "text": ""}, {"location": "available_software/detail/PyTorch-Ignite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyTorch-Ignite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyTorch-Ignite, load one of these modules using a module load command like:

              module load PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/PyTorch-Lightning/", "title": "PyTorch-Lightning", "text": ""}, {"location": "available_software/detail/PyTorch-Lightning/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyTorch-Lightning installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyTorch-Lightning, load one of these modules using a module load command like:

              module load PyTorch-Lightning/2.1.3-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyTorch-Lightning/2.1.3-foss-2023a x x x x x x PyTorch-Lightning/2.1.2-foss-2022b x x x x x x PyTorch-Lightning/1.8.4-foss-2022a-CUDA-11.7.0 x - - - x - PyTorch-Lightning/1.8.4-foss-2022a x x x x x x PyTorch-Lightning/1.7.7-foss-2022a-CUDA-11.7.0 - - x - - - PyTorch-Lightning/1.5.9-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch-Lightning/1.5.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/PyTorch/", "title": "PyTorch", "text": ""}, {"location": "available_software/detail/PyTorch/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyTorch installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyTorch, load one of these modules using a module load command like:

              module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyTorch/2.1.2-foss-2023a-CUDA-12.1.1 x - x - x - PyTorch/2.1.2-foss-2023a x x x x x x PyTorch/1.13.1-foss-2022b x x x x x x PyTorch/1.13.1-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.1-foss-2022a-CUDA-11.7.0 - - x - x - PyTorch/1.12.1-foss-2022a x x x x - x PyTorch/1.12.1-foss-2021b - x x x x x PyTorch/1.12.0-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.0-foss-2022a x x x x x x PyTorch/1.11.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-fosscuda-2020b x - - - - - PyTorch/1.10.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-foss-2021a x x x x x x PyTorch/1.9.0-fosscuda-2020b x - - - - - PyTorch/1.8.1-fosscuda-2020b x - - - - - PyTorch/1.7.1-fosscuda-2020b x - - - x - PyTorch/1.7.1-foss-2020b - x x x x x PyTorch/1.6.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.4.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.3.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyVCF/", "title": "PyVCF", "text": ""}, {"location": "available_software/detail/PyVCF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyVCF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyVCF, load one of these modules using a module load command like:

              module load PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16 - - x - x - PyVCF/0.6.8-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/PyVCF3/", "title": "PyVCF3", "text": ""}, {"location": "available_software/detail/PyVCF3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyVCF3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyVCF3, load one of these modules using a module load command like:

              module load PyVCF3/1.0.3-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyVCF3/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyWBGT/", "title": "PyWBGT", "text": ""}, {"location": "available_software/detail/PyWBGT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyWBGT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyWBGT, load one of these modules using a module load command like:

              module load PyWBGT/1.0.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyWBGT/1.0.0-foss-2022a x x x x x x PyWBGT/1.0.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyWavelets/", "title": "PyWavelets", "text": ""}, {"location": "available_software/detail/PyWavelets/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyWavelets installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyWavelets, load one of these modules using a module load command like:

              module load PyWavelets/1.1.1-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyWavelets/1.1.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyYAML/", "title": "PyYAML", "text": ""}, {"location": "available_software/detail/PyYAML/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyYAML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyYAML, load one of these modules using a module load command like:

              module load PyYAML/6.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyYAML/6.0-GCCcore-12.3.0 x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x PyYAML/6.0-GCCcore-11.3.0 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0 x x x x x x PyYAML/5.4.1-GCCcore-10.3.0 x x x x x x PyYAML/5.3.1-GCCcore-10.2.0 x x x x x x PyYAML/5.3-GCCcore-9.3.0 x x x x x x PyYAML/5.1.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/PyZMQ/", "title": "PyZMQ", "text": ""}, {"location": "available_software/detail/PyZMQ/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PyZMQ, load one of these modules using a module load command like:

              module load PyZMQ/25.1.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x PyZMQ/24.0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PycURL/", "title": "PycURL", "text": ""}, {"location": "available_software/detail/PycURL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which PycURL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using PycURL, load one of these modules using a module load command like:

              module load PycURL/7.45.2-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty PycURL/7.45.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Pychopper/", "title": "Pychopper", "text": ""}, {"location": "available_software/detail/Pychopper/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pychopper installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Pychopper, load one of these modules using a module load command like:

              module load Pychopper/2.3.1-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pychopper/2.3.1-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Pyomo/", "title": "Pyomo", "text": ""}, {"location": "available_software/detail/Pyomo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pyomo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Pyomo, load one of these modules using a module load command like:

              module load Pyomo/6.4.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pyomo/6.4.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/Pysam/", "title": "Pysam", "text": ""}, {"location": "available_software/detail/Pysam/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Pysam installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Pysam, load one of these modules using a module load command like:

              module load Pysam/0.22.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Pysam/0.22.0-GCC-12.3.0 x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x Pysam/0.19.1-GCC-11.3.0 x x x x x x Pysam/0.18.0-GCC-11.2.0 x x x - x x Pysam/0.17.0-GCC-11.2.0-Python-2.7.18 x x x x x x Pysam/0.17.0-GCC-11.2.0 x x x - x x Pysam/0.16.0.1-iccifort-2020.4.304 - x x x x x Pysam/0.16.0.1-iccifort-2020.1.217 - x x - x x Pysam/0.16.0.1-GCC-10.3.0 x x x x x x Pysam/0.16.0.1-GCC-10.2.0-Python-2.7.18 - x x x x x Pysam/0.16.0.1-GCC-10.2.0 x x x x x x Pysam/0.16.0.1-GCC-9.3.0 - x x - x x Pysam/0.16.0.1-GCC-8.3.0 - x x - x x Pysam/0.15.3-iccifort-2019.5.281 - x x - x x Pysam/0.15.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Python-bundle-PyPI/", "title": "Python-bundle-PyPI", "text": ""}, {"location": "available_software/detail/Python-bundle-PyPI/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Python-bundle-PyPI, load one of these modules using a module load command like:

              module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Python/", "title": "Python", "text": ""}, {"location": "available_software/detail/Python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Python, load one of these modules using a module load command like:

              module load Python/3.11.5-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Python/3.11.5-GCCcore-13.2.0 x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x Python/3.10.4-GCCcore-11.3.0-bare x x x x x x Python/3.10.4-GCCcore-11.3.0 x x x x x x Python/3.9.6-GCCcore-11.2.0-bare x x x x x x Python/3.9.6-GCCcore-11.2.0 x x x x x x Python/3.9.5-GCCcore-10.3.0-bare x x x x x x Python/3.9.5-GCCcore-10.3.0 x x x x x x Python/3.8.6-GCCcore-10.2.0 x x x x x x Python/3.8.2-GCCcore-9.3.0 x x x x x x Python/3.7.4-GCCcore-8.3.0 x x x x x x Python/3.7.2-GCCcore-8.2.0 - x - - - - Python/2.7.18-GCCcore-12.3.0 x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.3.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0 x x x x x x Python/2.7.18-GCCcore-10.3.0-bare x x x x x x Python/2.7.18-GCCcore-10.2.0 x x x x x x Python/2.7.18-GCCcore-9.3.0 x x x x x x Python/2.7.16-GCCcore-8.3.0 x x x - x x Python/2.7.15-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/QCA/", "title": "QCA", "text": ""}, {"location": "available_software/detail/QCA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QCA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QCA, load one of these modules using a module load command like:

              module load QCA/2.3.5-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QCA/2.3.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QCxMS/", "title": "QCxMS", "text": ""}, {"location": "available_software/detail/QCxMS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QCxMS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QCxMS, load one of these modules using a module load command like:

              module load QCxMS/5.0.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QCxMS/5.0.3 x x x x x x"}, {"location": "available_software/detail/QD/", "title": "QD", "text": ""}, {"location": "available_software/detail/QD/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QD installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QD, load one of these modules using a module load command like:

              module load QD/2.3.17-NVHPC-21.2-20160110\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QD/2.3.17-NVHPC-21.2-20160110 x - x - x -"}, {"location": "available_software/detail/QGIS/", "title": "QGIS", "text": ""}, {"location": "available_software/detail/QGIS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QGIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QGIS, load one of these modules using a module load command like:

              module load QGIS/3.28.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QGIS/3.28.1-foss-2021b x x x x x x"}, {"location": "available_software/detail/QIIME2/", "title": "QIIME2", "text": ""}, {"location": "available_software/detail/QIIME2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QIIME2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QIIME2, load one of these modules using a module load command like:

              module load QIIME2/2023.5.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QIIME2/2023.5.1-foss-2022a x x x x x x QIIME2/2022.11 x x x x x x QIIME2/2021.8 - - - - - x QIIME2/2020.11 - x x - x x QIIME2/2020.8 - x x - x x QIIME2/2019.7 - - - - - x"}, {"location": "available_software/detail/QScintilla/", "title": "QScintilla", "text": ""}, {"location": "available_software/detail/QScintilla/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QScintilla installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QScintilla, load one of these modules using a module load command like:

              module load QScintilla/2.11.6-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QScintilla/2.11.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QUAST/", "title": "QUAST", "text": ""}, {"location": "available_software/detail/QUAST/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QUAST installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QUAST, load one of these modules using a module load command like:

              module load QUAST/5.2.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QUAST/5.2.0-foss-2022a x x x x x x QUAST/5.0.2-foss-2020b-Python-2.7.18 - x x x x x QUAST/5.0.2-foss-2020b - x x x x x QUAST/5.0.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qhull/", "title": "Qhull", "text": ""}, {"location": "available_software/detail/Qhull/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Qhull installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Qhull, load one of these modules using a module load command like:

              module load Qhull/2020.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Qhull/2020.2-GCCcore-12.3.0 x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x Qhull/2020.2-GCCcore-11.3.0 x x x x x x Qhull/2020.2-GCCcore-11.2.0 x x x x x x Qhull/2020.2-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/Qt5/", "title": "Qt5", "text": ""}, {"location": "available_software/detail/Qt5/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Qt5 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Qt5, load one of these modules using a module load command like:

              module load Qt5/5.15.10-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Qt5/5.15.10-GCCcore-12.3.0 x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x Qt5/5.15.5-GCCcore-11.3.0 x x x x x x Qt5/5.15.2-GCCcore-11.2.0 x x x x x x Qt5/5.15.2-GCCcore-10.3.0 x x x x x x Qt5/5.14.2-GCCcore-10.2.0 x x x x x x Qt5/5.14.1-GCCcore-9.3.0 - x x - x x Qt5/5.13.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Qt5Webkit/", "title": "Qt5Webkit", "text": ""}, {"location": "available_software/detail/Qt5Webkit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Qt5Webkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Qt5Webkit, load one of these modules using a module load command like:

              module load Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtKeychain/", "title": "QtKeychain", "text": ""}, {"location": "available_software/detail/QtKeychain/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QtKeychain installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QtKeychain, load one of these modules using a module load command like:

              module load QtKeychain/0.13.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QtKeychain/0.13.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtPy/", "title": "QtPy", "text": ""}, {"location": "available_software/detail/QtPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QtPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QtPy, load one of these modules using a module load command like:

              module load QtPy/2.3.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QtPy/2.3.0-GCCcore-11.3.0 x x x x x x QtPy/2.2.1-GCCcore-11.2.0 x x x - x x QtPy/1.9.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Qtconsole/", "title": "Qtconsole", "text": ""}, {"location": "available_software/detail/Qtconsole/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Qtconsole installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Qtconsole, load one of these modules using a module load command like:

              module load Qtconsole/5.4.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Qtconsole/5.4.0-GCCcore-11.3.0 x x x x x x Qtconsole/5.3.2-GCCcore-11.2.0 x x x - x x Qtconsole/5.0.2-foss-2020b - x - - - - Qtconsole/5.0.2-GCCcore-10.2.0 - - x x x x"}, {"location": "available_software/detail/QuPath/", "title": "QuPath", "text": ""}, {"location": "available_software/detail/QuPath/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QuPath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QuPath, load one of these modules using a module load command like:

              module load QuPath/0.5.0-GCCcore-12.3.0-Java-17\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QuPath/0.5.0-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/Qualimap/", "title": "Qualimap", "text": ""}, {"location": "available_software/detail/Qualimap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Qualimap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Qualimap, load one of these modules using a module load command like:

              module load Qualimap/2.2.1-foss-2020b-R-4.0.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Qualimap/2.2.1-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/QuantumESPRESSO/", "title": "QuantumESPRESSO", "text": ""}, {"location": "available_software/detail/QuantumESPRESSO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QuantumESPRESSO, load one of these modules using a module load command like:

              module load QuantumESPRESSO/7.0-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QuantumESPRESSO/7.0-intel-2021b x x x - x x QuantumESPRESSO/6.5-intel-2019b - x x - x x"}, {"location": "available_software/detail/QuickFF/", "title": "QuickFF", "text": ""}, {"location": "available_software/detail/QuickFF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which QuickFF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using QuickFF, load one of these modules using a module load command like:

              module load QuickFF/2.2.7-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty QuickFF/2.2.7-intel-2020a-Python-3.8.2 x x x x x x QuickFF/2.2.4-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qwt/", "title": "Qwt", "text": ""}, {"location": "available_software/detail/Qwt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Qwt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Qwt, load one of these modules using a module load command like:

              module load Qwt/6.2.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Qwt/6.2.0-GCCcore-11.2.0 x x x x x x Qwt/6.2.0-GCCcore-10.3.0 - x x - x x Qwt/6.1.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/R-INLA/", "title": "R-INLA", "text": ""}, {"location": "available_software/detail/R-INLA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which R-INLA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using R-INLA, load one of these modules using a module load command like:

              module load R-INLA/24.01.18-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty R-INLA/24.01.18-foss-2023a x x x x x x R-INLA/21.05.02-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/R-bundle-Bioconductor/", "title": "R-bundle-Bioconductor", "text": ""}, {"location": "available_software/detail/R-bundle-Bioconductor/#available-modules", "title": "Available modules", "text": "

              The overview below shows which R-bundle-Bioconductor installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

              module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x R-bundle-Bioconductor/3.15-foss-2022a-R-4.2.1 x x x x x x R-bundle-Bioconductor/3.15-foss-2021b-R-4.2.0 x x x x x x R-bundle-Bioconductor/3.14-foss-2021b-R-4.1.2 x x x x x x R-bundle-Bioconductor/3.13-foss-2021a-R-4.1.0 - x x - x x R-bundle-Bioconductor/3.12-foss-2020b-R-4.0.3 x x x x x x R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0 - x x - x x R-bundle-Bioconductor/3.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/R-bundle-CRAN/", "title": "R-bundle-CRAN", "text": ""}, {"location": "available_software/detail/R-bundle-CRAN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which R-bundle-CRAN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using R-bundle-CRAN, load one of these modules using a module load command like:

              module load R-bundle-CRAN/2023.12-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty R-bundle-CRAN/2023.12-foss-2023a x x x x x x"}, {"location": "available_software/detail/R/", "title": "R", "text": ""}, {"location": "available_software/detail/R/#available-modules", "title": "Available modules", "text": "

              The overview below shows which R installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using R, load one of these modules using a module load command like:

              module load R/4.3.2-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty R/4.3.2-gfbf-2023a x x x x x x R/4.2.2-foss-2022b x x x x x x R/4.2.1-foss-2022a x x x x x x R/4.2.0-foss-2021b x x x x x x R/4.1.2-foss-2021b x x x x x x R/4.1.0-foss-2021a x x x x x x R/4.0.5-fosscuda-2020b - - - - x - R/4.0.5-foss-2020b - x x x x x R/4.0.4-fosscuda-2020b - - - - x - R/4.0.4-foss-2020b - x x x x x R/4.0.3-fosscuda-2020b - - - - x - R/4.0.3-foss-2020b x x x x x x R/4.0.0-foss-2020a - x x - x x R/3.6.3-foss-2020a - - x - x x R/3.6.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/R2jags/", "title": "R2jags", "text": ""}, {"location": "available_software/detail/R2jags/#available-modules", "title": "Available modules", "text": "

              The overview below shows which R2jags installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using R2jags, load one of these modules using a module load command like:

              module load R2jags/0.7-1-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty R2jags/0.7-1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/RASPA2/", "title": "RASPA2", "text": ""}, {"location": "available_software/detail/RASPA2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RASPA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RASPA2, load one of these modules using a module load command like:

              module load RASPA2/2.0.41-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RASPA2/2.0.41-foss-2020b - x x x x x"}, {"location": "available_software/detail/RAxML-NG/", "title": "RAxML-NG", "text": ""}, {"location": "available_software/detail/RAxML-NG/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RAxML-NG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RAxML-NG, load one of these modules using a module load command like:

              module load RAxML-NG/1.2.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RAxML-NG/1.2.0-GCC-12.3.0 x x x x x x RAxML-NG/1.0.3-GCC-10.2.0 - x x - x - RAxML-NG/0.9.0-gompi-2019b - x x - x x RAxML-NG/0.9.0-GCC-8.3.0 - - x - x -"}, {"location": "available_software/detail/RAxML/", "title": "RAxML", "text": ""}, {"location": "available_software/detail/RAxML/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RAxML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RAxML, load one of these modules using a module load command like:

              module load RAxML/8.2.12-iimpi-2021b-hybrid-avx2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RAxML/8.2.12-iimpi-2021b-hybrid-avx2 x x x - x x RAxML/8.2.12-iimpi-2019b-hybrid-avx2 - x x - x x"}, {"location": "available_software/detail/RDFlib/", "title": "RDFlib", "text": ""}, {"location": "available_software/detail/RDFlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RDFlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RDFlib, load one of these modules using a module load command like:

              module load RDFlib/6.2.0-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RDFlib/6.2.0-GCCcore-10.3.0 x x x - x x RDFlib/5.0.0-GCCcore-10.2.0 - x x - x x RDFlib/4.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RDKit/", "title": "RDKit", "text": ""}, {"location": "available_software/detail/RDKit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RDKit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RDKit, load one of these modules using a module load command like:

              module load RDKit/2022.09.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RDKit/2022.09.4-foss-2022a x x x x x x RDKit/2022.03.5-foss-2021b x x x - x x RDKit/2020.09.3-foss-2019b-Python-3.7.4 - x x - x x RDKit/2020.03.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/RDP-Classifier/", "title": "RDP-Classifier", "text": ""}, {"location": "available_software/detail/RDP-Classifier/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RDP-Classifier installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RDP-Classifier, load one of these modules using a module load command like:

              module load RDP-Classifier/2.13-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RDP-Classifier/2.13-Java-11 x x x - x x RDP-Classifier/2.12-Java-1.8 - - - - - x"}, {"location": "available_software/detail/RE2/", "title": "RE2", "text": ""}, {"location": "available_software/detail/RE2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RE2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RE2, load one of these modules using a module load command like:

              module load RE2/2023-08-01-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RE2/2023-08-01-GCCcore-12.3.0 x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x RE2/2022-06-01-GCCcore-11.3.0 x x x x x x RE2/2022-02-01-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/RLCard/", "title": "RLCard", "text": ""}, {"location": "available_software/detail/RLCard/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RLCard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RLCard, load one of these modules using a module load command like:

              module load RLCard/1.0.9-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RLCard/1.0.9-foss-2022a x x x - x x"}, {"location": "available_software/detail/RMBlast/", "title": "RMBlast", "text": ""}, {"location": "available_software/detail/RMBlast/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RMBlast installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RMBlast, load one of these modules using a module load command like:

              module load RMBlast/2.11.0-gompi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RMBlast/2.11.0-gompi-2020b x x x x x x"}, {"location": "available_software/detail/RNA-Bloom/", "title": "RNA-Bloom", "text": ""}, {"location": "available_software/detail/RNA-Bloom/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RNA-Bloom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RNA-Bloom, load one of these modules using a module load command like:

              module load RNA-Bloom/2.0.1-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RNA-Bloom/2.0.1-GCC-12.3.0 x x x x x x RNA-Bloom/1.2.3-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ROOT/", "title": "ROOT", "text": ""}, {"location": "available_software/detail/ROOT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ROOT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ROOT, load one of these modules using a module load command like:

              module load ROOT/6.26.06-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ROOT/6.26.06-foss-2022a x x x x x x ROOT/6.24.06-foss-2021b x x x x x x ROOT/6.20.04-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/RSEM/", "title": "RSEM", "text": ""}, {"location": "available_software/detail/RSEM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RSEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RSEM, load one of these modules using a module load command like:

              module load RSEM/1.3.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RSEM/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/RSeQC/", "title": "RSeQC", "text": ""}, {"location": "available_software/detail/RSeQC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RSeQC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RSeQC, load one of these modules using a module load command like:

              module load RSeQC/4.0.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RSeQC/4.0.0-foss-2021b x x x - x x RSeQC/4.0.0-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/RStudio-Server/", "title": "RStudio-Server", "text": ""}, {"location": "available_software/detail/RStudio-Server/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RStudio-Server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RStudio-Server, load one of these modules using a module load command like:

              module load RStudio-Server/2022.02.0-443-rhel-x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RStudio-Server/2022.02.0-443-rhel-x86_64 x x x x x - RStudio-Server/1.3.959-foss-2020a-Java-11-R-4.0.0 - - - - - x"}, {"location": "available_software/detail/RTG-Tools/", "title": "RTG-Tools", "text": ""}, {"location": "available_software/detail/RTG-Tools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RTG-Tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RTG-Tools, load one of these modules using a module load command like:

              module load RTG-Tools/3.12.1-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RTG-Tools/3.12.1-Java-11 x x x x x x"}, {"location": "available_software/detail/Racon/", "title": "Racon", "text": ""}, {"location": "available_software/detail/Racon/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Racon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Racon, load one of these modules using a module load command like:

              module load Racon/1.5.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Racon/1.5.0-GCCcore-12.3.0 x x x x x x Racon/1.5.0-GCCcore-11.3.0 x x x x x x Racon/1.5.0-GCCcore-11.2.0 x x x - x x Racon/1.4.21-GCCcore-10.3.0 x x x - x x Racon/1.4.21-GCCcore-10.2.0 - x x x x x Racon/1.4.13-GCCcore-9.3.0 - x x - x x Racon/1.4.13-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RagTag/", "title": "RagTag", "text": ""}, {"location": "available_software/detail/RagTag/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RagTag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RagTag, load one of these modules using a module load command like:

              module load RagTag/2.0.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RagTag/2.0.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/Ragout/", "title": "Ragout", "text": ""}, {"location": "available_software/detail/Ragout/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Ragout installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Ragout, load one of these modules using a module load command like:

              module load Ragout/2.3-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Ragout/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/RapidJSON/", "title": "RapidJSON", "text": ""}, {"location": "available_software/detail/RapidJSON/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RapidJSON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RapidJSON, load one of these modules using a module load command like:

              module load RapidJSON/1.1.0-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.3.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-9.3.0 x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Raven/", "title": "Raven", "text": ""}, {"location": "available_software/detail/Raven/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Raven installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Raven, load one of these modules using a module load command like:

              module load Raven/1.8.1-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Raven/1.8.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/Ray-project/", "title": "Ray-project", "text": ""}, {"location": "available_software/detail/Ray-project/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Ray-project installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Ray-project, load one of these modules using a module load command like:

              module load Ray-project/1.13.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Ray-project/1.13.0-foss-2021b x x x - x x Ray-project/1.13.0-foss-2021a x x x - x x Ray-project/0.8.4-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Ray/", "title": "Ray", "text": ""}, {"location": "available_software/detail/Ray/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Ray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Ray, load one of these modules using a module load command like:

              module load Ray/0.8.4-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Ray/0.8.4-foss-2019b-Python-3.7.4 - x - - - -"}, {"location": "available_software/detail/ReFrame/", "title": "ReFrame", "text": ""}, {"location": "available_software/detail/ReFrame/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ReFrame installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ReFrame, load one of these modules using a module load command like:

              module load ReFrame/4.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ReFrame/4.2.0 x x x x x x ReFrame/3.11.2 - x x x x x ReFrame/3.11.1 - x x - x x ReFrame/3.9.1 - x x - x x ReFrame/3.5.2 - x x - x x"}, {"location": "available_software/detail/Redis/", "title": "Redis", "text": ""}, {"location": "available_software/detail/Redis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Redis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Redis, load one of these modules using a module load command like:

              module load Redis/7.0.8-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Redis/7.0.8-GCC-11.3.0 x x x x x x Redis/6.2.6-GCC-11.2.0 x x x - x x Redis/6.2.6-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/RegTools/", "title": "RegTools", "text": ""}, {"location": "available_software/detail/RegTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RegTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RegTools, load one of these modules using a module load command like:

              module load RegTools/1.0.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RegTools/1.0.0-foss-2022b x x x x x x RegTools/0.5.2-foss-2021b x x x x x x RegTools/0.5.2-foss-2020b - x x x x x RegTools/0.4.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/RepeatMasker/", "title": "RepeatMasker", "text": ""}, {"location": "available_software/detail/RepeatMasker/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RepeatMasker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using RepeatMasker, load one of these modules using a module load command like:

              module load RepeatMasker/4.1.2-p1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RepeatMasker/4.1.2-p1-foss-2020b x x x x x x"}, {"location": "available_software/detail/ResistanceGA/", "title": "ResistanceGA", "text": ""}, {"location": "available_software/detail/ResistanceGA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ResistanceGA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ResistanceGA, load one of these modules using a module load command like:

              module load ResistanceGA/4.2-5-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ResistanceGA/4.2-5-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/RevBayes/", "title": "RevBayes", "text": ""}, {"location": "available_software/detail/RevBayes/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RevBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using RevBayes, load one of these modules using a module load command like:

              module load RevBayes/1.2.1-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RevBayes/1.2.1-gompi-2022a x x x x x x RevBayes/1.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Rgurobi/", "title": "Rgurobi", "text": ""}, {"location": "available_software/detail/Rgurobi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Rgurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Rgurobi, load one of these modules using a module load command like:

              module load Rgurobi/9.5.0-foss-2021b-R-4.1.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Rgurobi/9.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/RheoTool/", "title": "RheoTool", "text": ""}, {"location": "available_software/detail/RheoTool/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RheoTool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using RheoTool, load one of these modules using a module load command like:

              module load RheoTool/5.0-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RheoTool/5.0-foss-2019b x x x - x x"}, {"location": "available_software/detail/Rmath/", "title": "Rmath", "text": ""}, {"location": "available_software/detail/Rmath/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Rmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Rmath, load one of these modules using a module load command like:

              module load Rmath/4.3.2-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Rmath/4.3.2-foss-2023a x x x x x x Rmath/4.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/RnBeads/", "title": "RnBeads", "text": ""}, {"location": "available_software/detail/RnBeads/#available-modules", "title": "Available modules", "text": "

              The overview below shows which RnBeads installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using RnBeads, load one of these modules using a module load command like:

              module load RnBeads/2.6.0-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty RnBeads/2.6.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/Roary/", "title": "Roary", "text": ""}, {"location": "available_software/detail/Roary/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Roary installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Roary, load one of these modules using a module load command like:

              module load Roary/3.13.0-foss-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Roary/3.13.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/Ruby/", "title": "Ruby", "text": ""}, {"location": "available_software/detail/Ruby/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Ruby installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Ruby, load one of these modules using a module load command like:

              module load Ruby/3.0.1-GCCcore-11.2.0\n
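              With the module loaded, the ruby interpreter is on your PATH; a one-liner like the following prints the interpreter version:

              ruby -e 'puts RUBY_VERSION'\n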

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Ruby/3.0.1-GCCcore-11.2.0 x x x x x x Ruby/3.0.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Rust/", "title": "Rust", "text": ""}, {"location": "available_software/detail/Rust/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Rust installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Rust, load one of these modules using a module load command like:

              module load Rust/1.75.0-GCCcore-12.3.0\n
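              With a Rust module loaded, both rustc and cargo become available; as a small illustration (the project name hello is just an example), cargo can scaffold, build and run a hello-world project:

              cargo new hello
              cd hello
              cargo run\n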

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Rust/1.75.0-GCCcore-12.3.0 x x x x x x Rust/1.75.0-GCCcore-12.2.0 x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x Rust/1.65.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-10.3.0 x x x - x x Rust/1.56.0-GCCcore-11.2.0 x x x - x x Rust/1.54.0-GCCcore-11.2.0 x x x x x x Rust/1.52.1-GCCcore-10.3.0 x x x x x x Rust/1.52.1-GCCcore-10.2.0 - - x - x - Rust/1.42.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SAMtools/", "title": "SAMtools", "text": ""}, {"location": "available_software/detail/SAMtools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SAMtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SAMtools, load one of these modules using a module load command like:

              module load SAMtools/1.18-GCC-12.3.0\n
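              A typical first use of SAMtools is converting an alignment from SAM to a sorted, indexed BAM file; a minimal sketch (input.sam and the output names are hypothetical):

              samtools view -b input.sam -o aligned.bam
              samtools sort aligned.bam -o aligned.sorted.bam
              samtools index aligned.sorted.bam\n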

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SAMtools/1.18-GCC-12.3.0 x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x SAMtools/1.16.1-GCC-11.3.0 x x x x x x SAMtools/1.15-GCC-11.2.0 x x x - x x SAMtools/1.14-GCC-11.2.0 x x x x x x SAMtools/1.13-GCC-11.3.0 x x x x x x SAMtools/1.13-GCC-10.3.0 x x x - x x SAMtools/1.11-GCC-10.2.0 x x x x x x SAMtools/1.10-iccifort-2019.5.281 - x x - x x SAMtools/1.10-GCC-9.3.0 - x x - x x SAMtools/1.10-GCC-8.3.0 - x x - x x SAMtools/0.1.20-intel-2019b - x x - x x SAMtools/0.1.20-GCC-12.3.0 x x x x x x SAMtools/0.1.20-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SBCL/", "title": "SBCL", "text": ""}, {"location": "available_software/detail/SBCL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SBCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SBCL, load one of these modules using a module load command like:

              module load SBCL/2.2.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SBCL/2.2.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/SCENIC/", "title": "SCENIC", "text": ""}, {"location": "available_software/detail/SCENIC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SCENIC, load one of these modules using a module load command like:

              module load SCENIC/1.2.4-foss-2021a-R-4.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SCENIC/1.2.4-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/SCGid/", "title": "SCGid", "text": ""}, {"location": "available_software/detail/SCGid/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SCGid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SCGid, load one of these modules using a module load command like:

              module load SCGid/0.9b0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SCGid/0.9b0-foss-2021b x x x - x x"}, {"location": "available_software/detail/SCOTCH/", "title": "SCOTCH", "text": ""}, {"location": "available_software/detail/SCOTCH/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SCOTCH, load one of these modules using a module load command like:

              module load SCOTCH/7.0.3-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SCOTCH/7.0.3-gompi-2023a x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x SCOTCH/7.0.1-gompi-2022a x x x x x x SCOTCH/6.1.2-iimpi-2021b x x x x x x SCOTCH/6.1.2-gompi-2021b x x x x x x SCOTCH/6.1.0-iimpi-2021a - x x - x x SCOTCH/6.1.0-iimpi-2020b - x - - - - SCOTCH/6.1.0-gompi-2021a x x x x x x SCOTCH/6.1.0-gompi-2020b x x x x x x SCOTCH/6.0.9-iimpi-2020a - x x - x x SCOTCH/6.0.9-iimpi-2019b - x x - x x SCOTCH/6.0.9-gompi-2020a - x x - x x SCOTCH/6.0.9-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SCons/", "title": "SCons", "text": ""}, {"location": "available_software/detail/SCons/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SCons installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SCons, load one of these modules using a module load command like:

              module load SCons/4.5.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SCons/4.5.2-GCCcore-12.3.0 x x x x x x SCons/4.4.0-GCCcore-11.3.0 - - x - x - SCons/4.2.0-GCCcore-11.2.0 x x x - x x SCons/4.1.0.post1-GCCcore-10.3.0 - x x - x x SCons/4.1.0.post1-GCCcore-10.2.0 - x x - x x SCons/3.1.2-GCCcore-9.3.0 - x x - x x SCons/3.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SCopeLoomR/", "title": "SCopeLoomR", "text": ""}, {"location": "available_software/detail/SCopeLoomR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SCopeLoomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SCopeLoomR, load one of these modules using a module load command like:

              module load SCopeLoomR/0.13.0-foss-2021b-R-4.1.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SCopeLoomR/0.13.0-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/SDL2/", "title": "SDL2", "text": ""}, {"location": "available_software/detail/SDL2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SDL2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SDL2, load one of these modules using a module load command like:

              module load SDL2/2.28.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SDL2/2.28.2-GCCcore-12.3.0 x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x SDL2/2.0.20-GCCcore-11.2.0 x x x x x x SDL2/2.0.14-GCCcore-10.3.0 - x x - x x SDL2/2.0.14-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SDSL/", "title": "SDSL", "text": ""}, {"location": "available_software/detail/SDSL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SDSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SDSL, load one of these modules using a module load command like:

              module load SDSL/2.1.1-20191211-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SDSL/2.1.1-20191211-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SEACells/", "title": "SEACells", "text": ""}, {"location": "available_software/detail/SEACells/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SEACells installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SEACells, load one of these modules using a module load command like:

              module load SEACells/20230731-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SEACells/20230731-foss-2021a x x x x x x"}, {"location": "available_software/detail/SECAPR/", "title": "SECAPR", "text": ""}, {"location": "available_software/detail/SECAPR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SECAPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SECAPR, load one of these modules using a module load command like:

              module load SECAPR/1.1.15-foss-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SECAPR/1.1.15-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/SELFIES/", "title": "SELFIES", "text": ""}, {"location": "available_software/detail/SELFIES/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SELFIES installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SELFIES, load one of these modules using a module load command like:

              module load SELFIES/2.1.1-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SELFIES/2.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/SEPP/", "title": "SEPP", "text": ""}, {"location": "available_software/detail/SEPP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SEPP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SEPP, load one of these modules using a module load command like:

              module load SEPP/4.5.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SEPP/4.5.1-foss-2022a x x x x x x SEPP/4.5.1-foss-2021b x x x - x x SEPP/4.4.0-foss-2020b - x x x x x SEPP/4.3.10-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SHAP/", "title": "SHAP", "text": ""}, {"location": "available_software/detail/SHAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SHAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SHAP, load one of these modules using a module load command like:

              module load SHAP/0.42.1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SHAP/0.42.1-foss-2019b-Python-3.7.4 x x x - x x SHAP/0.41.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SISSO%2B%2B/", "title": "SISSO++", "text": ""}, {"location": "available_software/detail/SISSO%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SISSO++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SISSO++, load one of these modules using a module load command like:

              module load SISSO++/1.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SISSO++/1.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/SISSO/", "title": "SISSO", "text": ""}, {"location": "available_software/detail/SISSO/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SISSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SISSO, load one of these modules using a module load command like:

              module load SISSO/3.1-20220324-iimpi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SISSO/3.1-20220324-iimpi-2021b x x x - x x SISSO/3.0.2-iimpi-2021b x x x - x x"}, {"location": "available_software/detail/SKESA/", "title": "SKESA", "text": ""}, {"location": "available_software/detail/SKESA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SKESA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SKESA, load one of these modules using a module load command like:

              module load SKESA/2.4.0-gompi-2021b_saute.1.3.0_1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SKESA/2.4.0-gompi-2021b_saute.1.3.0_1 x x x - x x"}, {"location": "available_software/detail/SLATEC/", "title": "SLATEC", "text": ""}, {"location": "available_software/detail/SLATEC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SLATEC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SLATEC, load one of these modules using a module load command like:

              module load SLATEC/4.1-GCC-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SLATEC/4.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SLEPc/", "title": "SLEPc", "text": ""}, {"location": "available_software/detail/SLEPc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SLEPc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SLEPc, load one of these modules using a module load command like:

              module load SLEPc/3.18.2-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SLEPc/3.18.2-intel-2021b x x x x x x SLEPc/3.17.2-foss-2022a x x x x x x SLEPc/3.15.1-foss-2021a - x x - x x SLEPc/3.12.2-intel-2019b-Python-3.7.4 - - x - x - SLEPc/3.12.2-intel-2019b-Python-2.7.16 - x x - x x SLEPc/3.12.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SLiM/", "title": "SLiM", "text": ""}, {"location": "available_software/detail/SLiM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SLiM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SLiM, load one of these modules using a module load command like:

              module load SLiM/3.4-GCC-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SLiM/3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/SMAP/", "title": "SMAP", "text": ""}, {"location": "available_software/detail/SMAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SMAP, load one of these modules using a module load command like:

              module load SMAP/4.6.5-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SMAP/4.6.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/SMC%2B%2B/", "title": "SMC++", "text": ""}, {"location": "available_software/detail/SMC%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SMC++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SMC++, load one of these modules using a module load command like:

              module load SMC++/1.15.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SMC++/1.15.4-foss-2022a x x x - x x"}, {"location": "available_software/detail/SMV/", "title": "SMV", "text": ""}, {"location": "available_software/detail/SMV/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SMV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SMV, load one of these modules using a module load command like:

              module load SMV/6.7.17-iccifort-2020.4.304\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SMV/6.7.17-iccifort-2020.4.304 - x x - x x"}, {"location": "available_software/detail/SNAP-ESA-python/", "title": "SNAP-ESA-python", "text": ""}, {"location": "available_software/detail/SNAP-ESA-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SNAP-ESA-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SNAP-ESA-python, load one of these modules using a module load command like:

              module load SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18 x x x x x - SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-1.8-Python-2.7.18 x x x x - x"}, {"location": "available_software/detail/SNAP-ESA/", "title": "SNAP-ESA", "text": ""}, {"location": "available_software/detail/SNAP-ESA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SNAP-ESA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SNAP-ESA, load one of these modules using a module load command like:

              module load SNAP-ESA/9.0.0-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SNAP-ESA/9.0.0-Java-11 x x x x x x SNAP-ESA/9.0.0-Java-1.8 x x x x - x"}, {"location": "available_software/detail/SNAP/", "title": "SNAP", "text": ""}, {"location": "available_software/detail/SNAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SNAP, load one of these modules using a module load command like:

              module load SNAP/2.0.1-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SNAP/2.0.1-GCC-12.2.0 x x x x x x SNAP/2.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/SOAPdenovo-Trans/", "title": "SOAPdenovo-Trans", "text": ""}, {"location": "available_software/detail/SOAPdenovo-Trans/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SOAPdenovo-Trans installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SOAPdenovo-Trans, load one of these modules using a module load command like:

              module load SOAPdenovo-Trans/1.0.5-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SOAPdenovo-Trans/1.0.5-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/SPAdes/", "title": "SPAdes", "text": ""}, {"location": "available_software/detail/SPAdes/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SPAdes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SPAdes, load one of these modules using a module load command like:

              module load SPAdes/3.15.5-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SPAdes/3.15.5-GCC-11.3.0 x x x x x x SPAdes/3.15.4-GCC-12.3.0 x x x x x x SPAdes/3.15.4-GCC-12.2.0 x x x x x x SPAdes/3.15.3-GCC-11.2.0 x x x - x x SPAdes/3.15.2-GCC-10.2.0-Python-2.7.18 - x x x x x SPAdes/3.15.2-GCC-10.2.0 - x x x x x SPAdes/3.14.1-GCC-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SPM/", "title": "SPM", "text": ""}, {"location": "available_software/detail/SPM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SPM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SPM, load one of these modules using a module load command like:

              module load SPM/12.5_r7771-MATLAB-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SPM/12.5_r7771-MATLAB-2021b x x x - x x"}, {"location": "available_software/detail/SPOTPY/", "title": "SPOTPY", "text": ""}, {"location": "available_software/detail/SPOTPY/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SPOTPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SPOTPY, load one of these modules using a module load command like:

              module load SPOTPY/1.5.14-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SPOTPY/1.5.14-intel-2021b x x x - x x"}, {"location": "available_software/detail/SQLite/", "title": "SQLite", "text": ""}, {"location": "available_software/detail/SQLite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SQLite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SQLite, load one of these modules using a module load command like:

              module load SQLite/3.43.1-GCCcore-13.2.0\n
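              After loading the module, the sqlite3 command-line shell is available; for example, a query against an in-memory database confirms the library version without touching any files:

              sqlite3 :memory: 'SELECT sqlite_version();'\n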

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SQLite/3.43.1-GCCcore-13.2.0 x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x SQLite/3.38.3-GCCcore-11.3.0 x x x x x x SQLite/3.36-GCCcore-11.2.0 x x x x x x SQLite/3.35.4-GCCcore-10.3.0 x x x x x x SQLite/3.33.0-GCCcore-10.2.0 x x x x x x SQLite/3.31.1-GCCcore-9.3.0 x x x x x x SQLite/3.29.0-GCCcore-8.3.0 x x x x x x SQLite/3.27.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/SRA-Toolkit/", "title": "SRA-Toolkit", "text": ""}, {"location": "available_software/detail/SRA-Toolkit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SRA-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SRA-Toolkit, load one of these modules using a module load command like:

              module load SRA-Toolkit/3.0.3-gompi-2022a\n
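              With the module loaded, the usual SRA tools (prefetch, fasterq-dump, ...) are on your PATH. A minimal sketch of downloading a run and converting it to FASTQ, where SRRXXXXXXX stands for an accession of your choice and fastq/ is an example output directory:

              prefetch SRRXXXXXXX
              fasterq-dump SRRXXXXXXX --outdir fastq/\n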

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SRA-Toolkit/3.0.3-gompi-2022a x x x x x x SRA-Toolkit/3.0.0-gompi-2021b x x x x x x SRA-Toolkit/3.0.0-centos_linux64 x x x - x x SRA-Toolkit/2.10.9-gompi-2020b - x x - x x SRA-Toolkit/2.10.8-gompi-2020a - x x - x x SRA-Toolkit/2.10.4-gompi-2019b - x x - x x SRA-Toolkit/2.9.6-1-centos_linux64 - x x - x x"}, {"location": "available_software/detail/SRPRISM/", "title": "SRPRISM", "text": ""}, {"location": "available_software/detail/SRPRISM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SRPRISM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SRPRISM, load one of these modules using a module load command like:

              module load SRPRISM/3.1.2-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SRPRISM/3.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SRST2/", "title": "SRST2", "text": ""}, {"location": "available_software/detail/SRST2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SRST2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SRST2, load one of these modules using a module load command like:

              module load SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SSPACE_Basic/", "title": "SSPACE_Basic", "text": ""}, {"location": "available_software/detail/SSPACE_Basic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SSPACE_Basic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SSPACE_Basic, load one of these modules using a module load command like:

              module load SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18 - x x - x -"}, {"location": "available_software/detail/SSW/", "title": "SSW", "text": ""}, {"location": "available_software/detail/SSW/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SSW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SSW, load one of these modules using a module load command like:

              module load SSW/1.1-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SSW/1.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/STACEY/", "title": "STACEY", "text": ""}, {"location": "available_software/detail/STACEY/#available-modules", "title": "Available modules", "text": "

              The overview below shows which STACEY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using STACEY, load one of these modules using a module load command like:

              module load STACEY/1.2.5-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty STACEY/1.2.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/STAR/", "title": "STAR", "text": ""}, {"location": "available_software/detail/STAR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which STAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using STAR, load one of these modules using a module load command like:

              module load STAR/2.7.11a-GCC-12.3.0\n
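              Building a genome index is usually the first STAR step; a minimal sketch, assuming genome.fa is your reference FASTA and star_index/ an output directory of your choice:

              STAR --runMode genomeGenerate --genomeDir star_index --genomeFastaFiles genome.fa --runThreadN 4\n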

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty STAR/2.7.11a-GCC-12.3.0 x x x x x x STAR/2.7.10b-GCC-11.3.0 x x x x x x STAR/2.7.9a-GCC-11.2.0 x x x x x x STAR/2.7.6a-GCC-10.2.0 - x x x x x STAR/2.7.4a-GCC-9.3.0 - x x - x - STAR/2.7.3a-GCC-8.3.0 - x x - x - STAR/2.7.2b-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/STREAM/", "title": "STREAM", "text": ""}, {"location": "available_software/detail/STREAM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which STREAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using STREAM, load one of these modules using a module load command like:

              module load STREAM/5.10-GCC-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty STREAM/5.10-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/STRique/", "title": "STRique", "text": ""}, {"location": "available_software/detail/STRique/#available-modules", "title": "Available modules", "text": "

              The overview below shows which STRique installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using STRique, load one of these modules using a module load command like:

              module load STRique/0.4.2-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty STRique/0.4.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/SUNDIALS/", "title": "SUNDIALS", "text": ""}, {"location": "available_software/detail/SUNDIALS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SUNDIALS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SUNDIALS, load one of these modules using a module load command like:

              module load SUNDIALS/6.6.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SUNDIALS/6.6.0-foss-2023a x x x x x x SUNDIALS/6.2.0-intel-2021b x x x - x x SUNDIALS/5.7.0-intel-2020b - x x x x x SUNDIALS/5.7.0-fosscuda-2020b - - - - x - SUNDIALS/5.7.0-foss-2020b - x x x x x SUNDIALS/5.1.0-intel-2019b - x x - x x SUNDIALS/5.1.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/SUPPA/", "title": "SUPPA", "text": ""}, {"location": "available_software/detail/SUPPA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SUPPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SUPPA, load one of these modules using a module load command like:

              module load SUPPA/2.3-20231005-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SUPPA/2.3-20231005-foss-2022b x x x x x x"}, {"location": "available_software/detail/SVIM/", "title": "SVIM", "text": ""}, {"location": "available_software/detail/SVIM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SVIM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SVIM, load one of these modules using a module load command like:

              module load SVIM/2.0.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SVIM/2.0.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SWIG/", "title": "SWIG", "text": ""}, {"location": "available_software/detail/SWIG/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SWIG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SWIG, load one of these modules using a module load command like:

              module load SWIG/4.1.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SWIG/4.1.1-GCCcore-12.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.2.0 x x x x x x SWIG/4.0.2-GCCcore-10.3.0 x x x x x x SWIG/4.0.2-GCCcore-10.2.0 x x x x x x SWIG/4.0.1-GCCcore-9.3.0 x x x x x x SWIG/4.0.1-GCCcore-8.3.0 - x x - x x SWIG/3.0.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Sabre/", "title": "Sabre", "text": ""}, {"location": "available_software/detail/Sabre/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Sabre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Sabre, load one of these modules using a module load command like:

              module load Sabre/2013-09-28-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Sabre/2013-09-28-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/Sailfish/", "title": "Sailfish", "text": ""}, {"location": "available_software/detail/Sailfish/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Sailfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Sailfish, load one of these modules using a module load command like:

              module load Sailfish/0.10.1-gompi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Sailfish/0.10.1-gompi-2019b - x - - - x"}, {"location": "available_software/detail/Salmon/", "title": "Salmon", "text": ""}, {"location": "available_software/detail/Salmon/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Salmon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Salmon, load one of these modules using a module load command like:

              module load Salmon/1.9.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Salmon/1.9.0-GCC-11.3.0 x x x x x x Salmon/1.4.0-gompi-2020b - x x x x x Salmon/1.1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Sambamba/", "title": "Sambamba", "text": ""}, {"location": "available_software/detail/Sambamba/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Sambamba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Sambamba, load one of these modules using a module load command like:

              module load Sambamba/1.0.1-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Sambamba/1.0.1-GCC-11.3.0 x x x x x x Sambamba/0.8.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Satsuma2/", "title": "Satsuma2", "text": ""}, {"location": "available_software/detail/Satsuma2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Satsuma2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Satsuma2, load one of these modules using a module load command like:

              module load Satsuma2/20220304-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Satsuma2/20220304-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/ScaFaCoS/", "title": "ScaFaCoS", "text": ""}, {"location": "available_software/detail/ScaFaCoS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ScaFaCoS, load one of these modules using a module load command like:

              module load ScaFaCoS/1.0.1-intel-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ScaFaCoS/1.0.1-intel-2020a - x x - x x ScaFaCoS/1.0.1-foss-2021b x x x - x x ScaFaCoS/1.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/ScaLAPACK/", "title": "ScaLAPACK", "text": ""}, {"location": "available_software/detail/ScaLAPACK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ScaLAPACK, load one of these modules using a module load command like:

              module load ScaLAPACK/2.2.0-gompi-2023b-fb\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022a-fb x x x x x x ScaLAPACK/2.1.0-iimpi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompic-2020b x - - - x - ScaLAPACK/2.1.0-gompi-2021b-fb x x x x x x ScaLAPACK/2.1.0-gompi-2021a-fb x x x x x x ScaLAPACK/2.1.0-gompi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompi-2020b x x x x x x ScaLAPACK/2.1.0-gompi-2020a - x x - x x ScaLAPACK/2.0.2-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SciPy-bundle/", "title": "SciPy-bundle", "text": ""}, {"location": "available_software/detail/SciPy-bundle/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SciPy-bundle, load one of these modules using a module load command like:

              module load SciPy-bundle/2023.11-gfbf-2023b\n
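              After loading the module, a quick import check from the shell confirms that the bundled packages are on the Python path (numpy is used here purely as an example):

              python -c 'import numpy; print(numpy.__version__)'\n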

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SciPy-bundle/2023.11-gfbf-2023b x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x SciPy-bundle/2022.05-intel-2022a x x x x x x SciPy-bundle/2022.05-foss-2022a x x x x x x SciPy-bundle/2021.10-intel-2021b x x x x x x SciPy-bundle/2021.10-foss-2021b-Python-2.7.18 x x x x x x SciPy-bundle/2021.10-foss-2021b x x x x x x SciPy-bundle/2021.05-intel-2021a - x x - x x SciPy-bundle/2021.05-gomkl-2021a x x x x x x SciPy-bundle/2021.05-foss-2021a x x x x x x SciPy-bundle/2020.11-intelcuda-2020b - - - - x - SciPy-bundle/2020.11-intel-2020b - x x - x x SciPy-bundle/2020.11-fosscuda-2020b x - - - x - SciPy-bundle/2020.11-foss-2020b-Python-2.7.18 - x x x x x SciPy-bundle/2020.11-foss-2020b x x x x x x SciPy-bundle/2020.03-iomkl-2020a-Python-3.8.2 - x - - - - SciPy-bundle/2020.03-intel-2020a-Python-3.8.2 x x x x x x SciPy-bundle/2020.03-intel-2020a-Python-2.7.18 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-3.8.2 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-2.7.18 - - x - x x SciPy-bundle/2019.10-intel-2019b-Python-3.7.4 - x x - x x SciPy-bundle/2019.10-intel-2019b-Python-2.7.16 - x x - x x SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 x x x - x x SciPy-bundle/2019.10-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Seaborn/", "title": "Seaborn", "text": ""}, {"location": "available_software/detail/Seaborn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Seaborn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Seaborn, load one of these modules using a module load command like:

              module load Seaborn/0.13.2-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Seaborn/0.13.2-gfbf-2023a x x x x x x Seaborn/0.12.2-foss-2022b x x x x x x Seaborn/0.12.1-foss-2022a x x x x x x Seaborn/0.11.2-foss-2021b x x x x x x Seaborn/0.11.2-foss-2021a x x x x x x Seaborn/0.11.1-intel-2020b - x x - x x Seaborn/0.11.1-fosscuda-2020b x - - - x - Seaborn/0.11.1-foss-2020b - x x x x x Seaborn/0.10.1-intel-2020b - x x - x x Seaborn/0.10.1-intel-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.1-foss-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.0-intel-2019b-Python-3.7.4 - x x - x x Seaborn/0.10.0-foss-2019b-Python-3.7.4 - x x - x x Seaborn/0.9.1-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SemiBin/", "title": "SemiBin", "text": ""}, {"location": "available_software/detail/SemiBin/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SemiBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SemiBin, load one of these modules using a module load command like:

              module load SemiBin/2.0.2-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SemiBin/2.0.2-foss-2022a-CUDA-11.7.0 x - x - x - SemiBin/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Sentence-Transformers/", "title": "Sentence-Transformers", "text": ""}, {"location": "available_software/detail/Sentence-Transformers/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Sentence-Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Sentence-Transformers, load one of these modules using a module load command like:

              module load Sentence-Transformers/2.2.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Sentence-Transformers/2.2.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/SentencePiece/", "title": "SentencePiece", "text": ""}, {"location": "available_software/detail/SentencePiece/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SentencePiece installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SentencePiece, load one of these modules using a module load command like:

              module load SentencePiece/0.1.99-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SentencePiece/0.1.99-GCC-12.2.0 x x x x x x SentencePiece/0.1.97-GCC-11.3.0 x x x x x x SentencePiece/0.1.96-GCC-10.3.0 x x x - x x SentencePiece/0.1.85-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/SeqAn/", "title": "SeqAn", "text": ""}, {"location": "available_software/detail/SeqAn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SeqAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SeqAn, load one of these modules using a module load command like:

              module load SeqAn/2.4.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SeqAn/2.4.0-GCCcore-11.2.0 x x x - x x SeqAn/2.4.0-GCCcore-10.2.0 - x x x x x SeqAn/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SeqKit/", "title": "SeqKit", "text": ""}, {"location": "available_software/detail/SeqKit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SeqKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SeqKit, load one of these modules using a module load command like:

              module load SeqKit/2.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SeqKit/2.1.0 - x x - x x"}, {"location": "available_software/detail/SeqLib/", "title": "SeqLib", "text": ""}, {"location": "available_software/detail/SeqLib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SeqLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SeqLib, load one of these modules using a module load command like:

              module load SeqLib/1.2.0-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SeqLib/1.2.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/Serf/", "title": "Serf", "text": ""}, {"location": "available_software/detail/Serf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Serf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Serf, load one of these modules using a module load command like:

              module load Serf/1.3.9-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Serf/1.3.9-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Seurat/", "title": "Seurat", "text": ""}, {"location": "available_software/detail/Seurat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Seurat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Seurat, load one of these modules using a module load command like:

              module load Seurat/4.3.0-foss-2022a-R-4.2.1\n
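              Seurat is an R library, so the module is meant to be used together with the matching R installation (which should be pulled in as a dependency when you load this module). Assuming that is the case, a minimal check from the shell:

              Rscript -e 'library(Seurat); sessionInfo()'\n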

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Seurat/4.3.0-foss-2022a-R-4.2.1 x x x x x x Seurat/4.3.0-foss-2021b-R-4.1.2 x x x - x x Seurat/4.2.0-foss-2022a-R-4.2.1 x x x - x x Seurat/4.0.1-foss-2020b-R-4.0.3 - x x x x x Seurat/3.1.5-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/SeuratData/", "title": "SeuratData", "text": ""}, {"location": "available_software/detail/SeuratData/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SeuratData installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SeuratData, load one of these modules using a module load command like:

              module load SeuratData/20210514-foss-2020b-R-4.0.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SeuratData/20210514-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/SeuratDisk/", "title": "SeuratDisk", "text": ""}, {"location": "available_software/detail/SeuratDisk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SeuratDisk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SeuratDisk, load one of these modules using a module load command like:

              module load SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/SeuratWrappers/", "title": "SeuratWrappers", "text": ""}, {"location": "available_software/detail/SeuratWrappers/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SeuratWrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SeuratWrappers, load one of these modules using a module load command like:

              module load SeuratWrappers/20210528-foss-2020b-R-4.0.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SeuratWrappers/20210528-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/Shapely/", "title": "Shapely", "text": ""}, {"location": "available_software/detail/Shapely/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Shapely installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Shapely, load one of these modules using a module load command like:

              module load Shapely/2.0.1-gfbf-2023a\n
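
              A quick import test can confirm the Python package is available after loading the module (a minimal sketch, not part of the generated overview; it assumes the module's Python is first on your PATH):

              python -c "import shapely; print(shapely.__version__)"\n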

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Shapely/2.0.1-gfbf-2023a x x x x x x Shapely/2.0.1-foss-2022b x x x x x x Shapely/1.8a1-iccifort-2020.4.304 - x x x x x Shapely/1.8a1-GCC-10.3.0 x - - - x - Shapely/1.8a1-GCC-10.2.0 - x x x x x Shapely/1.8.2-foss-2022a x x x x x x Shapely/1.8.2-foss-2021b x x x x x x Shapely/1.8.1.post1-GCC-11.2.0 x x x - x x Shapely/1.7.1-GCC-9.3.0-Python-3.8.2 - x x - x x Shapely/1.7.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Shasta/", "title": "Shasta", "text": ""}, {"location": "available_software/detail/Shasta/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Shasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Shasta, load one of these modules using a module load command like:

              module load Shasta/0.8.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Shasta/0.8.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Short-Pair/", "title": "Short-Pair", "text": ""}, {"location": "available_software/detail/Short-Pair/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Short-Pair installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Short-Pair, load one of these modules using a module load command like:

              module load Short-Pair/20170125-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Short-Pair/20170125-foss-2021b x x x - x x"}, {"location": "available_software/detail/SiNVICT/", "title": "SiNVICT", "text": ""}, {"location": "available_software/detail/SiNVICT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SiNVICT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SiNVICT, load one of these modules using a module load command like:

              module load SiNVICT/1.0-20180817-GCC-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SiNVICT/1.0-20180817-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/Sibelia/", "title": "Sibelia", "text": ""}, {"location": "available_software/detail/Sibelia/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Sibelia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Sibelia, load one of these modules using a module load command like:

              module load Sibelia/3.0.7-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Sibelia/3.0.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimNIBS/", "title": "SimNIBS", "text": ""}, {"location": "available_software/detail/SimNIBS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SimNIBS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SimNIBS, load one of these modules using a module load command like:

              module load SimNIBS/3.2.4-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SimNIBS/3.2.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimPEG/", "title": "SimPEG", "text": ""}, {"location": "available_software/detail/SimPEG/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SimPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SimPEG, load one of these modules using a module load command like:

              module load SimPEG/0.18.1-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SimPEG/0.18.1-intel-2021b x x x - x x SimPEG/0.18.1-foss-2021b x x x - x x SimPEG/0.14.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SimpleElastix/", "title": "SimpleElastix", "text": ""}, {"location": "available_software/detail/SimpleElastix/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SimpleElastix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SimpleElastix, load one of these modules using a module load command like:

              module load SimpleElastix/1.1.0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SimpleElastix/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SimpleITK/", "title": "SimpleITK", "text": ""}, {"location": "available_software/detail/SimpleITK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SimpleITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SimpleITK, load one of these modules using a module load command like:

              module load SimpleITK/2.1.1.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SimpleITK/2.1.1.2-foss-2022a x x x x x x SimpleITK/2.1.0-fosscuda-2020b x - - - x - SimpleITK/2.1.0-foss-2020b - x x x x x SimpleITK/1.2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SlamDunk/", "title": "SlamDunk", "text": ""}, {"location": "available_software/detail/SlamDunk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SlamDunk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SlamDunk, load one of these modules using a module load command like:

              module load SlamDunk/0.4.3-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SlamDunk/0.4.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/Sniffles/", "title": "Sniffles", "text": ""}, {"location": "available_software/detail/Sniffles/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Sniffles installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Sniffles, load one of these modules using a module load command like:

              module load Sniffles/2.0.7-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Sniffles/2.0.7-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/SoX/", "title": "SoX", "text": ""}, {"location": "available_software/detail/SoX/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SoX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SoX, load one of these modules using a module load command like:

              module load SoX/14.4.2-GCCcore-11.3.0\n
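
              After loading the module you can confirm which sox binary is picked up (a minimal sketch, not part of the generated overview):

              sox --version\n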

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SoX/14.4.2-GCCcore-11.3.0 x x x x x x SoX/14.4.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Spark/", "title": "Spark", "text": ""}, {"location": "available_software/detail/Spark/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Spark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Spark, load one of these modules using a module load command like:

              module load Spark/3.5.0-foss-2023a\n
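
              To check which Spark version ends up on your PATH after loading the module, the standard launcher script can report it (a minimal sketch; it assumes the module exposes spark-submit):

              spark-submit --version\n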

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Spark/3.5.0-foss-2023a x x x x x x Spark/3.2.1-foss-2021b x x x - x x Spark/3.1.1-fosscuda-2020b - - - - x - Spark/2.4.5-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/SpatialDE/", "title": "SpatialDE", "text": ""}, {"location": "available_software/detail/SpatialDE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SpatialDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SpatialDE, load one of these modules using a module load command like:

              module load SpatialDE/1.1.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SpatialDE/1.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Spyder/", "title": "Spyder", "text": ""}, {"location": "available_software/detail/Spyder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Spyder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Spyder, load one of these modules using a module load command like:

              module load Spyder/4.1.5-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Spyder/4.1.5-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SqueezeMeta/", "title": "SqueezeMeta", "text": ""}, {"location": "available_software/detail/SqueezeMeta/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SqueezeMeta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SqueezeMeta, load one of these modules using a module load command like:

              module load SqueezeMeta/1.5.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SqueezeMeta/1.5.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Squidpy/", "title": "Squidpy", "text": ""}, {"location": "available_software/detail/Squidpy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Squidpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Squidpy, load one of these modules using a module load command like:

              module load Squidpy/1.2.2-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Squidpy/1.2.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Stacks/", "title": "Stacks", "text": ""}, {"location": "available_software/detail/Stacks/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Stacks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Stacks, load one of these modules using a module load command like:

              module load Stacks/2.53-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Stacks/2.53-iccifort-2019.5.281 - x x - x - Stacks/2.5-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/Stata/", "title": "Stata", "text": ""}, {"location": "available_software/detail/Stata/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Stata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Stata, load one of these modules using a module load command like:

              module load Stata/15\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Stata/15 - x x x x x"}, {"location": "available_software/detail/Statistics-R/", "title": "Statistics-R", "text": ""}, {"location": "available_software/detail/Statistics-R/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Statistics-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Statistics-R, load one of these modules using a module load command like:

              module load Statistics-R/0.34-foss-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Statistics-R/0.34-foss-2020a - x x - x x"}, {"location": "available_software/detail/StringTie/", "title": "StringTie", "text": ""}, {"location": "available_software/detail/StringTie/#available-modules", "title": "Available modules", "text": "

              The overview below shows which StringTie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using StringTie, load one of these modules using a module load command like:

              module load StringTie/2.2.1-GCC-11.2.0-Python-2.7.18\n
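
              As a quick sanity check after loading the module (a minimal sketch using StringTie's standard version flag, not part of the generated overview):

              stringtie --version\n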

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty StringTie/2.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x StringTie/2.2.1-GCC-11.2.0 x x x x x x StringTie/2.1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/Structure/", "title": "Structure", "text": ""}, {"location": "available_software/detail/Structure/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Structure installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Structure, load one of these modules using a module load command like:

              module load Structure/2.3.4-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Structure/2.3.4-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/Structure_threader/", "title": "Structure_threader", "text": ""}, {"location": "available_software/detail/Structure_threader/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Structure_threader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Structure_threader, load one of these modules using a module load command like:

              module load Structure_threader/1.3.10-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Structure_threader/1.3.10-foss-2022b x x x x x x"}, {"location": "available_software/detail/SuAVE-biomat/", "title": "SuAVE-biomat", "text": ""}, {"location": "available_software/detail/SuAVE-biomat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SuAVE-biomat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SuAVE-biomat, load one of these modules using a module load command like:

              module load SuAVE-biomat/2.0.0-20230815-intel-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SuAVE-biomat/2.0.0-20230815-intel-2023a x x x x x x"}, {"location": "available_software/detail/Subread/", "title": "Subread", "text": ""}, {"location": "available_software/detail/Subread/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Subread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Subread, load one of these modules using a module load command like:

              module load Subread/2.0.3-GCC-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Subread/2.0.3-GCC-9.3.0 - x x - x - Subread/2.0.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Subversion/", "title": "Subversion", "text": ""}, {"location": "available_software/detail/Subversion/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Subversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Subversion, load one of these modules using a module load command like:

              module load Subversion/1.14.1-GCCcore-11.2.0\n
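
              After loading the module you can confirm the Subversion client version (a minimal sketch, not part of the generated overview):

              svn --version --quiet\n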

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Subversion/1.14.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/SuiteSparse/", "title": "SuiteSparse", "text": ""}, {"location": "available_software/detail/SuiteSparse/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SuiteSparse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SuiteSparse, load one of these modules using a module load command like:

              module load SuiteSparse/7.1.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SuiteSparse/7.1.0-foss-2023a x x x x x x SuiteSparse/5.13.0-foss-2022b-METIS-5.1.0 x x x x x x SuiteSparse/5.13.0-foss-2022a-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-intel-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021a-METIS-5.1.0 x x x x x x SuiteSparse/5.8.1-foss-2020b-METIS-5.1.0 x x x x x x SuiteSparse/5.7.1-intel-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.7.1-foss-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-intel-2019b-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-foss-2019b-METIS-5.1.0 x x x - x x"}, {"location": "available_software/detail/SuperLU/", "title": "SuperLU", "text": ""}, {"location": "available_software/detail/SuperLU/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SuperLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SuperLU, load one of these modules using a module load command like:

              module load SuperLU/5.2.2-intel-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SuperLU/5.2.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/SuperLU_DIST/", "title": "SuperLU_DIST", "text": ""}, {"location": "available_software/detail/SuperLU_DIST/#available-modules", "title": "Available modules", "text": "

              The overview below shows which SuperLU_DIST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using SuperLU_DIST, load one of these modules using a module load command like:

              module load SuperLU_DIST/8.1.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty SuperLU_DIST/8.1.0-foss-2022a x - - x - - SuperLU_DIST/5.4.0-intel-2020a-trisolve-merge - x x - x x"}, {"location": "available_software/detail/Szip/", "title": "Szip", "text": ""}, {"location": "available_software/detail/Szip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Szip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Szip, load one of these modules using a module load command like:

              module load Szip/2.1.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Szip/2.1.1-GCCcore-12.3.0 x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x Szip/2.1.1-GCCcore-11.3.0 x x x x x x Szip/2.1.1-GCCcore-11.2.0 x x x x x x Szip/2.1.1-GCCcore-10.3.0 x x x x x x Szip/2.1.1-GCCcore-10.2.0 x x x x x x Szip/2.1.1-GCCcore-9.3.0 x x x x x x Szip/2.1.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/TALON/", "title": "TALON", "text": ""}, {"location": "available_software/detail/TALON/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TALON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TALON, load one of these modules using a module load command like:

              module load TALON/5.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TALON/5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/TAMkin/", "title": "TAMkin", "text": ""}, {"location": "available_software/detail/TAMkin/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TAMkin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TAMkin, load one of these modules using a module load command like:

              module load TAMkin/1.2.6-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TAMkin/1.2.6-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/TCLAP/", "title": "TCLAP", "text": ""}, {"location": "available_software/detail/TCLAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TCLAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TCLAP, load one of these modules using a module load command like:

              module load TCLAP/1.2.4-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TCLAP/1.2.4-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/TELEMAC-MASCARET/", "title": "TELEMAC-MASCARET", "text": ""}, {"location": "available_software/detail/TELEMAC-MASCARET/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TELEMAC-MASCARET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TELEMAC-MASCARET, load one of these modules using a module load command like:

              module load TELEMAC-MASCARET/8p3r1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TELEMAC-MASCARET/8p3r1-foss-2021b x x x - x x"}, {"location": "available_software/detail/TEtranscripts/", "title": "TEtranscripts", "text": ""}, {"location": "available_software/detail/TEtranscripts/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TEtranscripts installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TEtranscripts, load one of these modules using a module load command like:

              module load TEtranscripts/2.2.0-foss-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TEtranscripts/2.2.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/TOBIAS/", "title": "TOBIAS", "text": ""}, {"location": "available_software/detail/TOBIAS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TOBIAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TOBIAS, load one of these modules using a module load command like:

              module load TOBIAS/0.12.12-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TOBIAS/0.12.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/TOPAS/", "title": "TOPAS", "text": ""}, {"location": "available_software/detail/TOPAS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TOPAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TOPAS, load one of these modules using a module load command like:

              module load TOPAS/3.9-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TOPAS/3.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/TRF/", "title": "TRF", "text": ""}, {"location": "available_software/detail/TRF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TRF, load one of these modules using a module load command like:

              module load TRF/4.09.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TRF/4.09.1-GCCcore-11.3.0 x x x x x x TRF/4.09.1-GCCcore-11.2.0 x x x - x x TRF/4.09.1-GCCcore-10.2.0 x x x x x x TRF/4.09-linux64 - - - - - x"}, {"location": "available_software/detail/TRUST4/", "title": "TRUST4", "text": ""}, {"location": "available_software/detail/TRUST4/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TRUST4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TRUST4, load one of these modules using a module load command like:

              module load TRUST4/1.0.6-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TRUST4/1.0.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Tcl/", "title": "Tcl", "text": ""}, {"location": "available_software/detail/Tcl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Tcl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Tcl, load one of these modules using a module load command like:

              module load Tcl/8.6.13-GCCcore-13.2.0\n
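
              To see which Tcl interpreter the module provides, a short one-liner works (a minimal sketch, not part of the generated overview; it assumes the module puts tclsh on your PATH):

              echo 'puts [info patchlevel]' | tclsh\n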

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Tcl/8.6.13-GCCcore-13.2.0 x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x Tcl/8.6.12-GCCcore-11.3.0 x x x x x x Tcl/8.6.11-GCCcore-11.2.0 x x x x x x Tcl/8.6.11-GCCcore-10.3.0 x x x x x x Tcl/8.6.10-GCCcore-10.2.0 x x x x x x Tcl/8.6.10-GCCcore-9.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/TensorFlow/", "title": "TensorFlow", "text": ""}, {"location": "available_software/detail/TensorFlow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TensorFlow, load one of these modules using a module load command like:

              module load TensorFlow/2.13.0-foss-2023a\n
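
              A quick way to confirm that your job will use the TensorFlow build you just loaded (a minimal sketch, not part of the generated overview; it assumes the module's Python is first on your PATH):

              python -c "import tensorflow as tf; print(tf.__version__)"\n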

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TensorFlow/2.13.0-foss-2023a x x x x x x TensorFlow/2.13.0-foss-2022b x x x x x x TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0 x - x - x - TensorFlow/2.11.0-foss-2022a x x x x x x TensorFlow/2.8.4-foss-2021b - - - x - - TensorFlow/2.7.1-foss-2021b-CUDA-11.4.1 x - - - x - TensorFlow/2.7.1-foss-2021b x x x x x x TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1 x - - - x - TensorFlow/2.6.0-foss-2021a x x x x x x TensorFlow/2.5.3-foss-2021a x x x - x x TensorFlow/2.5.0-fosscuda-2020b x - - - x - TensorFlow/2.5.0-foss-2020b - x x x x x TensorFlow/2.4.1-fosscuda-2020b x - - - x - TensorFlow/2.4.1-foss-2020b x x x x x x TensorFlow/2.3.1-foss-2020a-Python-3.8.2 - x x - x x TensorFlow/2.2.3-foss-2020b - x x x x x TensorFlow/2.2.2-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.2.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.1.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/1.15.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Theano/", "title": "Theano", "text": ""}, {"location": "available_software/detail/Theano/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Theano installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Theano, load one of these modules using a module load command like:

              module load Theano/1.1.2-intel-2021b-PyMC\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Theano/1.1.2-intel-2021b-PyMC x x x - x x Theano/1.1.2-intel-2020b-PyMC - - x - x x Theano/1.1.2-fosscuda-2020b-PyMC x - - - x - Theano/1.1.2-foss-2020b-PyMC - x x x x x Theano/1.0.4-intel-2019b-Python-3.7.4 - - x - x x Theano/1.0.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Tk/", "title": "Tk", "text": ""}, {"location": "available_software/detail/Tk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Tk, load one of these modules using a module load command like:

              module load Tk/8.6.13-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Tk/8.6.13-GCCcore-12.3.0 x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x Tk/8.6.12-GCCcore-11.3.0 x x x x x x Tk/8.6.11-GCCcore-11.2.0 x x x x x x Tk/8.6.11-GCCcore-10.3.0 x x x x x x Tk/8.6.10-GCCcore-10.2.0 x x x x x x Tk/8.6.10-GCCcore-9.3.0 x x x x x x Tk/8.6.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Tkinter/", "title": "Tkinter", "text": ""}, {"location": "available_software/detail/Tkinter/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Tkinter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Tkinter, load one of these modules using a module load command like:

              module load Tkinter/3.11.3-GCCcore-12.3.0\n
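
              After loading one of the Python 3 modules you can verify that the Tk bindings import correctly (a minimal sketch, not part of the generated overview; for the Python 2 versions the package is imported as Tkinter instead):

              python -c "import tkinter; print(tkinter.TkVersion)"\n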

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x Tkinter/3.10.4-GCCcore-11.3.0 x x x x x x Tkinter/3.9.6-GCCcore-11.2.0 x x x x x x Tkinter/3.9.5-GCCcore-10.3.0 x x x x x x Tkinter/3.8.6-GCCcore-10.2.0 x x x x x x Tkinter/3.8.2-GCCcore-9.3.0 x x x x x x Tkinter/3.7.4-GCCcore-8.3.0 - x x - x x Tkinter/2.7.18-GCCcore-10.2.0 - x x x x x Tkinter/2.7.18-GCCcore-9.3.0 - x x - x x Tkinter/2.7.16-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Togl/", "title": "Togl", "text": ""}, {"location": "available_software/detail/Togl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Togl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Togl, load one of these modules using a module load command like:

              module load Togl/2.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Togl/2.0-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Tombo/", "title": "Tombo", "text": ""}, {"location": "available_software/detail/Tombo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Tombo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Tombo, load one of these modules using a module load command like:

              module load Tombo/1.5.1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Tombo/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/TopHat/", "title": "TopHat", "text": ""}, {"location": "available_software/detail/TopHat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TopHat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TopHat, load one of these modules using a module load command like:

              module load TopHat/2.1.2-iimpi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TopHat/2.1.2-iimpi-2020a - x x - x x TopHat/2.1.2-gompi-2020a - x x - x x TopHat/2.1.2-GCC-11.3.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-11.2.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/TransDecoder/", "title": "TransDecoder", "text": ""}, {"location": "available_software/detail/TransDecoder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TransDecoder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TransDecoder, load one of these modules using a module load command like:

              module load TransDecoder/5.5.0-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TransDecoder/5.5.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/TranscriptClean/", "title": "TranscriptClean", "text": ""}, {"location": "available_software/detail/TranscriptClean/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TranscriptClean installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TranscriptClean, load one of these modules using a module load command like:

              module load TranscriptClean/2.0.2-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TranscriptClean/2.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/Transformers/", "title": "Transformers", "text": ""}, {"location": "available_software/detail/Transformers/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Transformers, load one of these modules using a module load command like:

              module load Transformers/4.30.2-foss-2022b\n
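
              To verify the package after loading the module (a minimal sketch, not part of the generated overview; it assumes the module's Python is on your PATH):

              python -c "import transformers; print(transformers.__version__)"\n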

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Transformers/4.30.2-foss-2022b x x x x x x Transformers/4.24.0-foss-2022a x x x x x x Transformers/4.21.1-foss-2021b x x x - x x Transformers/4.20.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/TreeMix/", "title": "TreeMix", "text": ""}, {"location": "available_software/detail/TreeMix/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TreeMix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TreeMix, load one of these modules using a module load command like:

              module load TreeMix/1.13-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TreeMix/1.13-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Trilinos/", "title": "Trilinos", "text": ""}, {"location": "available_software/detail/Trilinos/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Trilinos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Trilinos, load one of these modules using a module load command like:

              module load Trilinos/12.12.1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Trilinos/12.12.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Trim_Galore/", "title": "Trim_Galore", "text": ""}, {"location": "available_software/detail/Trim_Galore/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Trim_Galore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Trim_Galore, load one of these modules using a module load command like:

              module load Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18 - x x x x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-3.7.4 - x x - x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Trimmomatic/", "title": "Trimmomatic", "text": ""}, {"location": "available_software/detail/Trimmomatic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Trimmomatic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Trimmomatic, load one of these modules using a module load command like:

              module load Trimmomatic/0.39-Java-11\n
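
              Trimmomatic is a Java tool, so it is usually invoked via java -jar; the sketch below assumes the usual EasyBuild $EBROOTTRIMMOMATIC root variable and a jar name matching the module version, both of which may differ on your system:

              java -jar $EBROOTTRIMMOMATIC/trimmomatic-0.39.jar -version\n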

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Trimmomatic/0.39-Java-11 x x x x x x Trimmomatic/0.38-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Trinity/", "title": "Trinity", "text": ""}, {"location": "available_software/detail/Trinity/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Trinity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Trinity, load one of these modules using a module load command like:

              module load Trinity/2.15.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Trinity/2.15.1-foss-2022a x x x x x x Trinity/2.10.0-foss-2019b-Python-3.7.4 - x x - x x Trinity/2.9.1-foss-2019b-Python-2.7.16 - x x - x x Trinity/2.8.5-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/Triton/", "title": "Triton", "text": ""}, {"location": "available_software/detail/Triton/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Triton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Triton, load one of these modules using a module load command like:

              module load Triton/1.1.1-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Triton/1.1.1-foss-2022a-CUDA-11.7.0 - - x - - -"}, {"location": "available_software/detail/Trycycler/", "title": "Trycycler", "text": ""}, {"location": "available_software/detail/Trycycler/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Trycycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Trycycler, load one of these modules using a module load command like:

              module load Trycycler/0.3.3-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Trycycler/0.3.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/TurboVNC/", "title": "TurboVNC", "text": ""}, {"location": "available_software/detail/TurboVNC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which TurboVNC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using TurboVNC, load one of these modules using a module load command like:

              module load TurboVNC/2.2.6-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty TurboVNC/2.2.6-GCCcore-11.2.0 x x x x x x TurboVNC/2.2.3-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/UCC/", "title": "UCC", "text": ""}, {"location": "available_software/detail/UCC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UCC, load one of these modules using a module load command like:

              module load UCC/1.2.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UCC/1.2.0-GCCcore-13.2.0 x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x UCC/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/UCLUST/", "title": "UCLUST", "text": ""}, {"location": "available_software/detail/UCLUST/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UCLUST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UCLUST, load one of these modules using a module load command like:

              module load UCLUST/1.2.22q-i86linux64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UCLUST/1.2.22q-i86linux64 - x x - x x"}, {"location": "available_software/detail/UCX-CUDA/", "title": "UCX-CUDA", "text": ""}, {"location": "available_software/detail/UCX-CUDA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UCX-CUDA, load one of these modules using a module load command like:

              module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - UCX-CUDA/1.12.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - UCX-CUDA/1.11.2-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - UCX-CUDA/1.10.0-GCCcore-10.3.0-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/UCX/", "title": "UCX", "text": ""}, {"location": "available_software/detail/UCX/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UCX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UCX, load one of these modules using a module load command like:

              module load UCX/1.15.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UCX/1.15.0-GCCcore-13.2.0 x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x UCX/1.12.1-GCCcore-11.3.0 x x x x x x UCX/1.11.2-GCCcore-11.2.0 x x x x x x UCX/1.10.0-GCCcore-10.3.0 x x x x x x UCX/1.9.0-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - UCX/1.9.0-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x UCX/1.9.0-GCCcore-10.2.0 x x x x x x UCX/1.8.0-GCCcore-9.3.0 x x x x x x UCX/1.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UDUNITS/", "title": "UDUNITS", "text": ""}, {"location": "available_software/detail/UDUNITS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UDUNITS, load one of these modules using a module load command like:

              module load UDUNITS/2.2.28-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-10.3.0 x x x x x x UDUNITS/2.2.26-foss-2020a - x x - x x UDUNITS/2.2.26-GCCcore-10.2.0 x x x x x x UDUNITS/2.2.26-GCCcore-9.3.0 - x x - x x UDUNITS/2.2.26-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UFL/", "title": "UFL", "text": ""}, {"location": "available_software/detail/UFL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UFL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UFL, load one of these modules using a module load command like:

              module load UFL/2019.1.0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UFL/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UMI-tools/", "title": "UMI-tools", "text": ""}, {"location": "available_software/detail/UMI-tools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UMI-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UMI-tools, load one of these modules using a module load command like:

              module load UMI-tools/1.0.1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UMI-tools/1.0.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UQTk/", "title": "UQTk", "text": ""}, {"location": "available_software/detail/UQTk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UQTk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UQTk, load one of these modules using a module load command like:

              module load UQTk/3.1.0-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UQTk/3.1.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/USEARCH/", "title": "USEARCH", "text": ""}, {"location": "available_software/detail/USEARCH/#available-modules", "title": "Available modules", "text": "

              The overview below shows which USEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using USEARCH, load one of these modules using a module load command like:

              module load USEARCH/11.0.667-i86linux32\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty USEARCH/11.0.667-i86linux32 x x x x x x"}, {"location": "available_software/detail/UnZip/", "title": "UnZip", "text": ""}, {"location": "available_software/detail/UnZip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UnZip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UnZip, load one of these modules using a module load command like:

              module load UnZip/6.0-GCCcore-13.2.0\n
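
              After loading the module, a quick version check (a minimal sketch, not part of the generated overview):

              unzip -v\n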

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UnZip/6.0-GCCcore-13.2.0 x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x UnZip/6.0-GCCcore-11.3.0 x x x x x x UnZip/6.0-GCCcore-11.2.0 x x x x x x UnZip/6.0-GCCcore-10.3.0 x x x x x x UnZip/6.0-GCCcore-10.2.0 x x x x x x UnZip/6.0-GCCcore-9.3.0 x x x x x x"}, {"location": "available_software/detail/UniFrac/", "title": "UniFrac", "text": ""}, {"location": "available_software/detail/UniFrac/#available-modules", "title": "Available modules", "text": "

              The overview below shows which UniFrac installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using UniFrac, load one of these modules using a module load command like:

              module load UniFrac/1.3.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty UniFrac/1.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Unicycler/", "title": "Unicycler", "text": ""}, {"location": "available_software/detail/Unicycler/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Unicycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Unicycler, load one of these modules using a module load command like:

              module load Unicycler/0.4.8-gompi-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Unicycler/0.4.8-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Unidecode/", "title": "Unidecode", "text": ""}, {"location": "available_software/detail/Unidecode/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Unidecode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using Unidecode, load one of these modules using a module load command like:

              module load Unidecode/1.3.6-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Unidecode/1.3.6-GCCcore-11.3.0 x x x x x x Unidecode/1.1.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VASP/", "title": "VASP", "text": ""}, {"location": "available_software/detail/VASP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VASP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VASP, load one of these modules using a module load command like:

              module load VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-gomkl-2023a x x x x x x VASP/6.4.2-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.2-gomkl-2021a - x x x x x VASP/6.4.2-foss-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-foss-2023a x x x x x x VASP/6.4.2-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.4.1-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.1-gomkl-2021a - x x x x x VASP/6.4.1-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.3.1-gomkl-2021a-VASPsol-20210413-vtst-184-Wannier90-3.1.0 x x x x x x VASP/6.3.1-gomkl-2021a - x x x x x VASP/6.3.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.3.0-gomkl-2021a-VASPsol-20210413 - x x x x x VASP/6.2.1-gomkl-2021a - x x x x x VASP/6.2.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.2.0-intel-2020a - x x - x x VASP/6.2.0-gomkl-2020a - x x x x x VASP/6.2.0-foss-2020a - x x - x x VASP/6.1.2-intel-2020a - x x - x x VASP/6.1.2-gomkl-2020a - x x x x x VASP/6.1.2-foss-2020a - x x - x x VASP/5.4.4-iomkl-2020b-vtst-176-mt-20180516 x x x x x x VASP/5.4.4-intel-2019b-mt-20180516-ncl - x x - x x VASP/5.4.4-intel-2019b-mt-20180516 - x x - x x"}, {"location": "available_software/detail/VBZ-Compression/", "title": "VBZ-Compression", "text": ""}, {"location": "available_software/detail/VBZ-Compression/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VBZ-Compression installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VBZ-Compression, load one of these modules using a module load command like:

              module load VBZ-Compression/1.0.3-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VBZ-Compression/1.0.3-gompi-2022a x x x x x x VBZ-Compression/1.0.1-gompi-2020b - - x x x x"}, {"location": "available_software/detail/VCFtools/", "title": "VCFtools", "text": ""}, {"location": "available_software/detail/VCFtools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VCFtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VCFtools, load one of these modules using a module load command like:

              module load VCFtools/0.1.16-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VCFtools/0.1.16-iccifort-2019.5.281 - x x - x x VCFtools/0.1.16-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/VEP/", "title": "VEP", "text": ""}, {"location": "available_software/detail/VEP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VEP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VEP, load one of these modules using a module load command like:

              module load VEP/107-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VEP/107-GCC-11.3.0 x x x - x x VEP/105-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/VESTA/", "title": "VESTA", "text": ""}, {"location": "available_software/detail/VESTA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VESTA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VESTA, load one of these modules using a module load command like:

              module load VESTA/3.5.8-gtk3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VESTA/3.5.8-gtk3 x x x - x x"}, {"location": "available_software/detail/VMD/", "title": "VMD", "text": ""}, {"location": "available_software/detail/VMD/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VMD installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VMD, load one of these modules using a module load command like:

              module load VMD/1.9.4a51-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VMD/1.9.4a51-foss-2020b - x x x x x"}, {"location": "available_software/detail/VMTK/", "title": "VMTK", "text": ""}, {"location": "available_software/detail/VMTK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VMTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VMTK, load one of these modules using a module load command like:

              module load VMTK/1.4.0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VMTK/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VSCode/", "title": "VSCode", "text": ""}, {"location": "available_software/detail/VSCode/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VSCode installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VSCode, load one of these modules using a module load command like:

              module load VSCode/1.85.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VSCode/1.85.0 x x x x x x"}, {"location": "available_software/detail/VSEARCH/", "title": "VSEARCH", "text": ""}, {"location": "available_software/detail/VSEARCH/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VSEARCH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VSEARCH, load one of these modules using a module load command like:

              module load VSEARCH/2.22.1-GCC-11.3.0\n
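              As a quick sanity check (assuming the installation provides the usual vsearch executable, which is not confirmed here), you could print the version after loading the module:

              module load VSEARCH/2.22.1-GCC-11.3.0\n# print the version of the vsearch binary (binary name assumed)\nvsearch --version\n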

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VSEARCH/2.22.1-GCC-11.3.0 x x x x x x VSEARCH/2.18.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/VTK/", "title": "VTK", "text": ""}, {"location": "available_software/detail/VTK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VTK, load one of these modules using a module load command like:

              module load VTK/9.2.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VTK/9.2.2-foss-2022a x x x x x x VTK/9.2.0.rc2-foss-2022a x x x - x x VTK/9.1.0-foss-2021b x x x - x x VTK/9.0.1-fosscuda-2020b x - - - x - VTK/9.0.1-foss-2021a - x x - x x VTK/9.0.1-foss-2020b - x x x x x VTK/8.2.0-foss-2020a-Python-3.8.2 - x x - x x VTK/8.2.0-foss-2019b-Python-3.7.4 - x x - x x VTK/8.2.0-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/VTune/", "title": "VTune", "text": ""}, {"location": "available_software/detail/VTune/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VTune installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VTune, load one of these modules using a module load command like:

              module load VTune/2019_update2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VTune/2019_update2 - - - - - x"}, {"location": "available_software/detail/Vala/", "title": "Vala", "text": ""}, {"location": "available_software/detail/Vala/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Vala installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Vala, load one of these modules using a module load command like:

              module load Vala/0.52.4-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Vala/0.52.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Valgrind/", "title": "Valgrind", "text": ""}, {"location": "available_software/detail/Valgrind/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Valgrind installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Valgrind, load one of these modules using a module load command like:

              module load Valgrind/3.20.0-gompi-2022a\n
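              For example, to run your own program under Valgrind's default Memcheck tool after loading the module (./my_program is a placeholder for your executable, not something shipped with the module), a minimal sketch:

              module load Valgrind/3.20.0-gompi-2022a\n# run a placeholder executable under Memcheck with full leak reporting\nvalgrind --leak-check=full ./my_program\n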

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Valgrind/3.20.0-gompi-2022a x x x - x x Valgrind/3.19.0-gompi-2022a x x x - x x Valgrind/3.18.1-iimpi-2021b x x x - x x Valgrind/3.18.1-gompi-2021b x x x - x x Valgrind/3.17.0-gompi-2021a x x x - x x"}, {"location": "available_software/detail/VarScan/", "title": "VarScan", "text": ""}, {"location": "available_software/detail/VarScan/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VarScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VarScan, load one of these modules using a module load command like:

              module load VarScan/2.4.4-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VarScan/2.4.4-Java-11 x x x - x x"}, {"location": "available_software/detail/Velvet/", "title": "Velvet", "text": ""}, {"location": "available_software/detail/Velvet/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Velvet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Velvet, load one of these modules using a module load command like:

              module load Velvet/1.2.10-foss-2023a-mt-kmer_191\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Velvet/1.2.10-foss-2023a-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-11.2.0-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-8.3.0-mt-kmer_191 - x x - x x"}, {"location": "available_software/detail/VirSorter2/", "title": "VirSorter2", "text": ""}, {"location": "available_software/detail/VirSorter2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VirSorter2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VirSorter2, load one of these modules using a module load command like:

              module load VirSorter2/2.2.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VirSorter2/2.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/VisPy/", "title": "VisPy", "text": ""}, {"location": "available_software/detail/VisPy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which VisPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using VisPy, load one of these modules using a module load command like:

              module load VisPy/0.12.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty VisPy/0.12.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Voro%2B%2B/", "title": "Voro++", "text": ""}, {"location": "available_software/detail/Voro%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Voro++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Voro++, load one of these modules using a module load command like:

              module load Voro++/0.4.6-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Voro++/0.4.6-intel-2019b - x x - x x Voro++/0.4.6-foss-2019b - x x - x x Voro++/0.4.6-GCCcore-11.2.0 x x x - x x Voro++/0.4.6-GCCcore-10.3.0 - x x - x x Voro++/0.4.6-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/WFA2/", "title": "WFA2", "text": ""}, {"location": "available_software/detail/WFA2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WFA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WFA2, load one of these modules using a module load command like:

              module load WFA2/2.3.3-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WFA2/2.3.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/WHAM/", "title": "WHAM", "text": ""}, {"location": "available_software/detail/WHAM/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WHAM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WHAM, load one of these modules using a module load command like:

              module load WHAM/2.0.10.2-intel-2020a-kj_mol\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WHAM/2.0.10.2-intel-2020a-kj_mol - x x - x x WHAM/2.0.10.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/WIEN2k/", "title": "WIEN2k", "text": ""}, {"location": "available_software/detail/WIEN2k/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WIEN2k installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WIEN2k, load one of these modules using a module load command like:

              module load WIEN2k/21.1-intel-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WIEN2k/21.1-intel-2021a - x x - x x WIEN2k/19.2-intel-2020b - x x x x x"}, {"location": "available_software/detail/WPS/", "title": "WPS", "text": ""}, {"location": "available_software/detail/WPS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WPS, load one of these modules using a module load command like:

              module load WPS/4.1-intel-2019b-dmpar\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WPS/4.1-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/WRF/", "title": "WRF", "text": ""}, {"location": "available_software/detail/WRF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WRF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WRF, load one of these modules using a module load command like:

              module load WRF/4.1.3-intel-2019b-dmpar\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WRF/4.1.3-intel-2019b-dmpar - x x - x x WRF/3.9.1.1-intel-2020b-dmpar - x x x x x WRF/3.8.0-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/Wannier90/", "title": "Wannier90", "text": ""}, {"location": "available_software/detail/Wannier90/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Wannier90 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Wannier90, load one of these modules using a module load command like:

              module load Wannier90/3.1.0-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Wannier90/3.1.0-intel-2022a - - x - x x Wannier90/3.1.0-intel-2020b - x x x x x Wannier90/3.1.0-intel-2020a - x x - x x Wannier90/3.1.0-gomkl-2023a x x x x x x Wannier90/3.1.0-gomkl-2021a x x x x x x Wannier90/3.1.0-foss-2023a x x x x x x Wannier90/3.1.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Wayland/", "title": "Wayland", "text": ""}, {"location": "available_software/detail/Wayland/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Wayland installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Wayland, load one of these modules using a module load command like:

              module load Wayland/1.22.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Wayland/1.22.0-GCCcore-12.3.0 x x x x x x Wayland/1.21.0-GCCcore-11.2.0 x x x x x x Wayland/1.20.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Waylandpp/", "title": "Waylandpp", "text": ""}, {"location": "available_software/detail/Waylandpp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Waylandpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Waylandpp, load one of these modules using a module load command like:

              module load Waylandpp/1.0.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Waylandpp/1.0.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/WebKitGTK%2B/", "title": "WebKitGTK+", "text": ""}, {"location": "available_software/detail/WebKitGTK%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WebKitGTK+ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WebKitGTK+, load one of these modules using a module load command like:

              module load WebKitGTK+/2.37.1-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WebKitGTK+/2.37.1-GCC-11.2.0 x x x x x x WebKitGTK+/2.27.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/WhatsHap/", "title": "WhatsHap", "text": ""}, {"location": "available_software/detail/WhatsHap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WhatsHap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WhatsHap, load one of these modules using a module load command like:

              module load WhatsHap/1.7-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WhatsHap/1.7-foss-2022a x x x x x x WhatsHap/1.4-foss-2021b x x x - x x WhatsHap/1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/Winnowmap/", "title": "Winnowmap", "text": ""}, {"location": "available_software/detail/Winnowmap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Winnowmap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Winnowmap, load one of these modules using a module load command like:

              module load Winnowmap/1.0-GCC-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Winnowmap/1.0-GCC-8.3.0 - x - - - x"}, {"location": "available_software/detail/WisecondorX/", "title": "WisecondorX", "text": ""}, {"location": "available_software/detail/WisecondorX/#available-modules", "title": "Available modules", "text": "

              The overview below shows which WisecondorX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using WisecondorX, load one of these modules using a module load command like:

              module load WisecondorX/1.1.6-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty WisecondorX/1.1.6-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/X11/", "title": "X11", "text": ""}, {"location": "available_software/detail/X11/#available-modules", "title": "Available modules", "text": "

              The overview below shows which X11 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using X11, load one of these modules using a module load command like:

              module load X11/20230603-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty X11/20230603-GCCcore-12.3.0 x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x X11/20220504-GCCcore-11.3.0 x x x x x x X11/20210802-GCCcore-11.2.0 x x x x x x X11/20210518-GCCcore-10.3.0 x x x x x x X11/20201008-GCCcore-10.2.0 x x x x x x X11/20200222-GCCcore-9.3.0 x x x x x x X11/20190717-GCCcore-8.3.0 x x x - x x X11/20190311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/XCFun/", "title": "XCFun", "text": ""}, {"location": "available_software/detail/XCFun/#available-modules", "title": "Available modules", "text": "

              The overview below shows which XCFun installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using XCFun, load one of these modules using a module load command like:

              module load XCFun/2.1.1-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty XCFun/2.1.1-GCCcore-12.2.0 x x x x x x XCFun/2.1.1-GCCcore-11.3.0 - x x x x x XCFun/2.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/XCrySDen/", "title": "XCrySDen", "text": ""}, {"location": "available_software/detail/XCrySDen/#available-modules", "title": "Available modules", "text": "

              The overview below shows which XCrySDen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using XCrySDen, load one of these modules using a module load command like:

              module load XCrySDen/1.6.2-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty XCrySDen/1.6.2-intel-2022a x x x - x x XCrySDen/1.6.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/XGBoost/", "title": "XGBoost", "text": ""}, {"location": "available_software/detail/XGBoost/#available-modules", "title": "Available modules", "text": "

              The overview below shows which XGBoost installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using XGBoost, load one of these modules using a module load command like:

              module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\n
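              As a quick import check (this assumes the module provides the xgboost Python package together with a matching Python, and that the CUDA variant is only used on a GPU cluster), a minimal sketch:

              module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\n# assumed: the module puts the xgboost Python package on the Python path\npython -c 'import xgboost; print(xgboost.__version__)'\n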

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty XGBoost/1.7.2-foss-2022a-CUDA-11.7.0 x - - - - - XGBoost/1.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/XML-Compile/", "title": "XML-Compile", "text": ""}, {"location": "available_software/detail/XML-Compile/#available-modules", "title": "Available modules", "text": "

              The overview below shows which XML-Compile installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using XML-Compile, load one of these modules using a module load command like:

              module load XML-Compile/1.63-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty XML-Compile/1.63-GCCcore-12.2.0 x x x x x x XML-Compile/1.63-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/XML-LibXML/", "title": "XML-LibXML", "text": ""}, {"location": "available_software/detail/XML-LibXML/#available-modules", "title": "Available modules", "text": "

              The overview below shows which XML-LibXML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using XML-LibXML, load one of these modules using a module load command like:

              module load XML-LibXML/2.0208-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.3.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.2.0 x x x x x x XML-LibXML/2.0206-GCCcore-10.2.0 - x x x x x XML-LibXML/2.0205-GCCcore-9.3.0 - x x - x x XML-LibXML/2.0201-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/XZ/", "title": "XZ", "text": ""}, {"location": "available_software/detail/XZ/#available-modules", "title": "Available modules", "text": "

              The overview below shows which XZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using XZ, load one of these modules using a module load command like:

              module load XZ/5.4.4-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty XZ/5.4.4-GCCcore-13.2.0 x x x x x x XZ/5.4.2-GCCcore-12.3.0 x x x x x x XZ/5.2.7-GCCcore-12.2.0 x x x x x x XZ/5.2.5-GCCcore-11.3.0 x x x x x x XZ/5.2.5-GCCcore-11.2.0 x x x x x x XZ/5.2.5-GCCcore-10.3.0 x x x x x x XZ/5.2.5-GCCcore-10.2.0 x x x x x x XZ/5.2.5-GCCcore-9.3.0 x x x x x x XZ/5.2.4-GCCcore-8.3.0 x x x x x x XZ/5.2.4-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Xerces-C%2B%2B/", "title": "Xerces-C++", "text": ""}, {"location": "available_software/detail/Xerces-C%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Xerces-C++, load one of these modules using a module load command like:

              module load Xerces-C++/3.2.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/XlsxWriter/", "title": "XlsxWriter", "text": ""}, {"location": "available_software/detail/XlsxWriter/#available-modules", "title": "Available modules", "text": "

              The overview below shows which XlsxWriter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using XlsxWriter, load one of these modules using a module load command like:

              module load XlsxWriter/3.1.9-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty XlsxWriter/3.1.9-GCCcore-13.2.0 x x x x x x XlsxWriter/3.1.3-GCCcore-12.3.0 x x x x x x XlsxWriter/3.1.2-GCCcore-12.2.0 x x x x x x XlsxWriter/3.0.8-GCCcore-11.3.0 x x x x x x XlsxWriter/3.0.2-GCCcore-11.2.0 x x x x x x XlsxWriter/1.4.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Xvfb/", "title": "Xvfb", "text": ""}, {"location": "available_software/detail/Xvfb/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Xvfb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Xvfb, load one of these modules using a module load command like:

              module load Xvfb/21.1.8-GCCcore-12.3.0\n
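              For example, to start a virtual framebuffer display for graphical tools on a node without a physical display (the display number :99 and the screen geometry are arbitrary example values), a minimal sketch:

              module load Xvfb/21.1.8-GCCcore-12.3.0\n# start a virtual X server on display :99 and point X clients at it\nXvfb :99 -screen 0 1280x1024x24 &\nexport DISPLAY=:99\n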

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x Xvfb/21.1.3-GCCcore-11.3.0 x x x x x x Xvfb/1.20.13-GCCcore-11.2.0 x x x x x x Xvfb/1.20.11-GCCcore-10.3.0 x x x x x x Xvfb/1.20.9-GCCcore-10.2.0 x x x x x x Xvfb/1.20.9-GCCcore-9.3.0 - x x - x x Xvfb/1.20.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/YACS/", "title": "YACS", "text": ""}, {"location": "available_software/detail/YACS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which YACS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using YACS, load one of these modules using a module load command like:

              module load YACS/0.1.8-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty YACS/0.1.8-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/YANK/", "title": "YANK", "text": ""}, {"location": "available_software/detail/YANK/#available-modules", "title": "Available modules", "text": "

              The overview below shows which YANK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using YANK, load one of these modules using a module load command like:

              module load YANK/0.25.2-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty YANK/0.25.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/YAXT/", "title": "YAXT", "text": ""}, {"location": "available_software/detail/YAXT/#available-modules", "title": "Available modules", "text": "

              The overview below shows which YAXT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using YAXT, load one of these modules using a module load command like:

              module load YAXT/0.9.1-gompi-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty YAXT/0.9.1-gompi-2021a x x x - x x YAXT/0.6.2-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/Yambo/", "title": "Yambo", "text": ""}, {"location": "available_software/detail/Yambo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Yambo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Yambo, load one of these modules using a module load command like:

              module load Yambo/5.1.2-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Yambo/5.1.2-intel-2021b x x x x x x"}, {"location": "available_software/detail/Yasm/", "title": "Yasm", "text": ""}, {"location": "available_software/detail/Yasm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Yasm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Yasm, load one of these modules using a module load command like:

              module load Yasm/1.3.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Yasm/1.3.0-GCCcore-12.3.0 x x x x x x Yasm/1.3.0-GCCcore-12.2.0 x x x x x x Yasm/1.3.0-GCCcore-11.3.0 x x x x x x Yasm/1.3.0-GCCcore-11.2.0 x x x x x x Yasm/1.3.0-GCCcore-10.3.0 x x x x x x Yasm/1.3.0-GCCcore-10.2.0 x x x x x x Yasm/1.3.0-GCCcore-9.3.0 - x x - x x Yasm/1.3.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Z3/", "title": "Z3", "text": ""}, {"location": "available_software/detail/Z3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Z3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Z3, load one of these modules using a module load command like:

              module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x Z3/4.10.2-GCCcore-11.3.0 x x x x x x Z3/4.8.12-GCCcore-11.2.0 x x x x x x Z3/4.8.11-GCCcore-10.3.0 x x x x x x Z3/4.8.10-GCCcore-10.2.0 - x x x x x Z3/4.8.9-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/Zeo%2B%2B/", "title": "Zeo++", "text": ""}, {"location": "available_software/detail/Zeo%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Zeo++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Zeo++, load one of these modules using a module load command like:

              module load Zeo++/0.3-intel-compilers-2023.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Zeo++/0.3-intel-compilers-2023.1.0 x x x x x x"}, {"location": "available_software/detail/ZeroMQ/", "title": "ZeroMQ", "text": ""}, {"location": "available_software/detail/ZeroMQ/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ZeroMQ, load one of these modules using a module load command like:

              module load ZeroMQ/4.3.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-12.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-10.3.0 x x x x x x ZeroMQ/4.3.3-GCCcore-10.2.0 x x x x x x ZeroMQ/4.3.2-GCCcore-9.3.0 x x x x x x ZeroMQ/4.3.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zip/", "title": "Zip", "text": ""}, {"location": "available_software/detail/Zip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Zip, load one of these modules using a module load command like:

              module load Zip/3.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Zip/3.0-GCCcore-12.3.0 x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x Zip/3.0-GCCcore-11.3.0 x x x x x x Zip/3.0-GCCcore-11.2.0 x x x x x x Zip/3.0-GCCcore-10.3.0 x x x x x x Zip/3.0-GCCcore-10.2.0 x x x x x x Zip/3.0-GCCcore-9.3.0 - x x - x x Zip/3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zopfli/", "title": "Zopfli", "text": ""}, {"location": "available_software/detail/Zopfli/#available-modules", "title": "Available modules", "text": "

              The overview below shows which Zopfli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using Zopfli, load one of these modules using a module load command like:

              module load Zopfli/1.0.3-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty Zopfli/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/adjustText/", "title": "adjustText", "text": ""}, {"location": "available_software/detail/adjustText/#available-modules", "title": "Available modules", "text": "

              The overview below shows which adjustText installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using adjustText, load one of these modules using a module load command like:

              module load adjustText/0.7.3-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty adjustText/0.7.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/aiohttp/", "title": "aiohttp", "text": ""}, {"location": "available_software/detail/aiohttp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which aiohttp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using aiohttp, load one of these modules using a module load command like:

              module load aiohttp/3.8.5-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty aiohttp/3.8.5-GCCcore-12.3.0 x x x x - x aiohttp/3.8.5-GCCcore-12.2.0 x x x x x x aiohttp/3.8.3-GCCcore-11.3.0 x x x x x x aiohttp/3.8.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/alevin-fry/", "title": "alevin-fry", "text": ""}, {"location": "available_software/detail/alevin-fry/#available-modules", "title": "Available modules", "text": "

              The overview below shows which alevin-fry installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using alevin-fry, load one of these modules using a module load command like:

              module load alevin-fry/0.4.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty alevin-fry/0.4.3-GCCcore-11.2.0 - x - - - -"}, {"location": "available_software/detail/alleleCount/", "title": "alleleCount", "text": ""}, {"location": "available_software/detail/alleleCount/#available-modules", "title": "Available modules", "text": "

              The overview below shows which alleleCount installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using alleleCount, load one of these modules using a module load command like:

              module load alleleCount/4.3.0-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty alleleCount/4.3.0-GCC-12.2.0 x x x x x x alleleCount/4.2.1-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/alleleIntegrator/", "title": "alleleIntegrator", "text": ""}, {"location": "available_software/detail/alleleIntegrator/#available-modules", "title": "Available modules", "text": "

              The overview below shows which alleleIntegrator installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using alleleIntegrator, load one of these modules using a module load command like:

              module load alleleIntegrator/0.8.8-foss-2022b-R-4.2.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty alleleIntegrator/0.8.8-foss-2022b-R-4.2.2 x x x x x x alleleIntegrator/0.8.8-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/alsa-lib/", "title": "alsa-lib", "text": ""}, {"location": "available_software/detail/alsa-lib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which alsa-lib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using alsa-lib, load one of these modules using a module load command like:

              module load alsa-lib/1.2.8-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty alsa-lib/1.2.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/anadama2/", "title": "anadama2", "text": ""}, {"location": "available_software/detail/anadama2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which anadama2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using anadama2, load one of these modules using a module load command like:

              module load anadama2/0.10.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty anadama2/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/angsd/", "title": "angsd", "text": ""}, {"location": "available_software/detail/angsd/#available-modules", "title": "Available modules", "text": "

              The overview below shows which angsd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using angsd, load one of these modules using a module load command like:

              module load angsd/0.940-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty angsd/0.940-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/anndata/", "title": "anndata", "text": ""}, {"location": "available_software/detail/anndata/#available-modules", "title": "Available modules", "text": "

              The overview below shows which anndata installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using anndata, load one of these modules using a module load command like:

              module load anndata/0.10.5.post1-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty anndata/0.10.5.post1-foss-2023a x x x x x x anndata/0.9.2-foss-2021a x x x x x x anndata/0.8.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ant/", "title": "ant", "text": ""}, {"location": "available_software/detail/ant/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ant installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ant, load one of these modules using a module load command like:

              module load ant/1.10.12-Java-17\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ant/1.10.12-Java-17 x x x x x x ant/1.10.12-Java-11 x x x x x x ant/1.10.11-Java-11 x x x - x x ant/1.10.9-Java-11 x x x x x x ant/1.10.8-Java-11 - x x - x x ant/1.10.7-Java-11 - x x - x x ant/1.10.6-Java-1.8 - x x - x x"}, {"location": "available_software/detail/antiSMASH/", "title": "antiSMASH", "text": ""}, {"location": "available_software/detail/antiSMASH/#available-modules", "title": "Available modules", "text": "

              The overview below shows which antiSMASH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using antiSMASH, load one of these modules using a module load command like:

              module load antiSMASH/6.0.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty antiSMASH/6.0.1-foss-2020b - x x x x x antiSMASH/5.2.0-foss-2020b - x x x x x antiSMASH/5.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/anvio/", "title": "anvio", "text": ""}, {"location": "available_software/detail/anvio/#available-modules", "title": "Available modules", "text": "

              The overview below shows which anvio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using anvio, load one of these modules using a module load command like:

              module load anvio/8-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty anvio/8-foss-2022b x x x x x x anvio/6.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/any2fasta/", "title": "any2fasta", "text": ""}, {"location": "available_software/detail/any2fasta/#available-modules", "title": "Available modules", "text": "

              The overview below shows which any2fasta installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using any2fasta, load one of these modules using a module load command like:

              module load any2fasta/0.4.2-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty any2fasta/0.4.2-GCCcore-10.2.0 - x x - x x any2fasta/0.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/apex/", "title": "apex", "text": ""}, {"location": "available_software/detail/apex/#available-modules", "title": "Available modules", "text": "

              The overview below shows which apex installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using apex, load one of these modules using a module load command like:

              module load apex/20210420-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty apex/20210420-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/archspec/", "title": "archspec", "text": ""}, {"location": "available_software/detail/archspec/#available-modules", "title": "Available modules", "text": "

              The overview below shows which archspec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using archspec, load one of these modules using a module load command like:

              module load archspec/0.1.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty archspec/0.1.3-GCCcore-11.2.0 x x x - x x archspec/0.1.2-GCCcore-10.3.0 - x x - x x archspec/0.1.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x archspec/0.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/argtable/", "title": "argtable", "text": ""}, {"location": "available_software/detail/argtable/#available-modules", "title": "Available modules", "text": "

              The overview below shows which argtable installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using argtable, load one of these modules using a module load command like:

              module load argtable/2.13-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty argtable/2.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/aria2/", "title": "aria2", "text": ""}, {"location": "available_software/detail/aria2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which aria2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using aria2, load one of these modules using a module load command like:

              module load aria2/1.35.0-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty aria2/1.35.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/arpack-ng/", "title": "arpack-ng", "text": ""}, {"location": "available_software/detail/arpack-ng/#available-modules", "title": "Available modules", "text": "

              The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using arpack-ng, load one of these modules using a module load command like:

              module load arpack-ng/3.9.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty arpack-ng/3.9.0-foss-2023a x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x arpack-ng/3.8.0-foss-2022a x x x x x x arpack-ng/3.8.0-foss-2021b x x x x x x arpack-ng/3.8.0-foss-2021a x x x x x x arpack-ng/3.7.0-intel-2020a - x x - x x arpack-ng/3.7.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/arrow-R/", "title": "arrow-R", "text": ""}, {"location": "available_software/detail/arrow-R/#available-modules", "title": "Available modules", "text": "

              The overview below shows which arrow-R installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using arrow-R, load one of these modules using a module load command like:

              module load arrow-R/14.0.0.2-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty arrow-R/14.0.0.2-foss-2023a-R-4.3.2 x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x arrow-R/8.0.0-foss-2022a-R-4.2.1 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.2.0 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.1.2 x x x x x x arrow-R/6.0.0.2-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/arrow/", "title": "arrow", "text": ""}, {"location": "available_software/detail/arrow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which arrow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using arrow, load one of these modules using a module load command like:

              module load arrow/0.17.1-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty arrow/0.17.1-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-atk/", "title": "at-spi2-atk", "text": ""}, {"location": "available_software/detail/at-spi2-atk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using at-spi2-atk, load one of these modules using a module load command like:

              module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-10.3.0 x x x - x x at-spi2-atk/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-atk/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-core/", "title": "at-spi2-core", "text": ""}, {"location": "available_software/detail/at-spi2-core/#available-modules", "title": "Available modules", "text": "

              The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using at-spi2-core, load one of these modules using a module load command like:

              module load at-spi2-core/2.49.90-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty at-spi2-core/2.49.90-GCCcore-12.3.0 x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x at-spi2-core/2.44.1-GCCcore-11.3.0 x x x x x x at-spi2-core/2.40.3-GCCcore-11.2.0 x x x x x x at-spi2-core/2.40.2-GCCcore-10.3.0 x x x - x x at-spi2-core/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-core/2.34.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/atools/", "title": "atools", "text": ""}, {"location": "available_software/detail/atools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which atools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using atools, load one of these modules using a module load command like:

              module load atools/1.5.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty atools/1.5.1-GCCcore-11.2.0 x x x - x x atools/1.4.6-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/attr/", "title": "attr", "text": ""}, {"location": "available_software/detail/attr/#available-modules", "title": "Available modules", "text": "

              The overview below shows which attr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using attr, load one of these modules using a module load command like:

              module load attr/2.5.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty attr/2.5.1-GCCcore-11.3.0 x x x x x x attr/2.5.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/attrdict/", "title": "attrdict", "text": ""}, {"location": "available_software/detail/attrdict/#available-modules", "title": "Available modules", "text": "

              The overview below shows which attrdict installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using attrdict, load one of these modules using a module load command like:

              module load attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/attrdict3/", "title": "attrdict3", "text": ""}, {"location": "available_software/detail/attrdict3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which attrdict3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using attrdict3, load one of these modules using a module load command like:

              module load attrdict3/2.0.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty attrdict3/2.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/augur/", "title": "augur", "text": ""}, {"location": "available_software/detail/augur/#available-modules", "title": "Available modules", "text": "

              The overview below shows which augur installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using augur, load one of these modules using a module load command like:

              module load augur/7.0.2-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty augur/7.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/autopep8/", "title": "autopep8", "text": ""}, {"location": "available_software/detail/autopep8/#available-modules", "title": "Available modules", "text": "

              The overview below shows which autopep8 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using autopep8, load one of these modules using a module load command like:

              module load autopep8/2.0.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty autopep8/2.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/awscli/", "title": "awscli", "text": ""}, {"location": "available_software/detail/awscli/#available-modules", "title": "Available modules", "text": "

              The overview below shows which awscli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using awscli, load one of these modules using a module load command like:

              module load awscli/2.11.21-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty awscli/2.11.21-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/babl/", "title": "babl", "text": ""}, {"location": "available_software/detail/babl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which babl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using babl, load one of these modules using a module load command like:

              module load babl/0.1.86-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty babl/0.1.86-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/bam-readcount/", "title": "bam-readcount", "text": ""}, {"location": "available_software/detail/bam-readcount/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bam-readcount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bam-readcount, load one of these modules using a module load command like:

              module load bam-readcount/0.8.0-GCC-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bam-readcount/0.8.0-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/bamFilters/", "title": "bamFilters", "text": ""}, {"location": "available_software/detail/bamFilters/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bamFilters installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bamFilters, load one of these modules using a module load command like:

              module load bamFilters/2022-06-30-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bamFilters/2022-06-30-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/barrnap/", "title": "barrnap", "text": ""}, {"location": "available_software/detail/barrnap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which barrnap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using barrnap, load one of these modules using a module load command like:

              module load barrnap/0.9-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty barrnap/0.9-gompi-2021b x x x - x x barrnap/0.9-gompi-2020b - x x x x x"}, {"location": "available_software/detail/basemap/", "title": "basemap", "text": ""}, {"location": "available_software/detail/basemap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which basemap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using basemap, load one of these modules using a module load command like:

              module load basemap/1.3.9-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty basemap/1.3.9-foss-2023a x x x x x x basemap/1.2.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/bcbio-gff/", "title": "bcbio-gff", "text": ""}, {"location": "available_software/detail/bcbio-gff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bcbio-gff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bcbio-gff, load one of these modules using a module load command like:

              module load bcbio-gff/0.7.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bcbio-gff/0.7.0-foss-2022b x x x x x x bcbio-gff/0.7.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/bcgTree/", "title": "bcgTree", "text": ""}, {"location": "available_software/detail/bcgTree/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bcgTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bcgTree, load one of these modules using a module load command like:

              module load bcgTree/1.2.0-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bcgTree/1.2.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/bcl-convert/", "title": "bcl-convert", "text": ""}, {"location": "available_software/detail/bcl-convert/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bcl-convert installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bcl-convert, load one of these modules using a module load command like:

              module load bcl-convert/4.0.3-2el7.x86_64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bcl-convert/4.0.3-2el7.x86_64 x x x - x x"}, {"location": "available_software/detail/bcl2fastq2/", "title": "bcl2fastq2", "text": ""}, {"location": "available_software/detail/bcl2fastq2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bcl2fastq2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bcl2fastq2, load one of these modules using a module load command like:

              module load bcl2fastq2/2.20.0-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bcl2fastq2/2.20.0-GCC-11.2.0 x x x - x x bcl2fastq2/2.20.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/beagle-lib/", "title": "beagle-lib", "text": ""}, {"location": "available_software/detail/beagle-lib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which beagle-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using beagle-lib, load one of these modules using a module load command like:

              module load beagle-lib/4.0.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty beagle-lib/4.0.0-GCC-11.3.0 x x x x x x beagle-lib/3.1.2-gcccuda-2019b x - - - x - beagle-lib/3.1.2-GCC-11.3.0 x x x - x x beagle-lib/3.1.2-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/binutils/", "title": "binutils", "text": ""}, {"location": "available_software/detail/binutils/#available-modules", "title": "Available modules", "text": "

              The overview below shows which binutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using binutils, load one of these modules using a module load command like:

              module load binutils/2.40-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty binutils/2.40-GCCcore-13.2.0 x x x x x x binutils/2.40-GCCcore-12.3.0 x x x x x x binutils/2.40 x x x x x x binutils/2.39-GCCcore-12.2.0 x x x x x x binutils/2.39 x x x x x x binutils/2.38-GCCcore-11.3.0 x x x x x x binutils/2.38 x x x x x x binutils/2.37-GCCcore-11.2.0 x x x x x x binutils/2.37 x x x x x x binutils/2.36.1-GCCcore-10.3.0 x x x x x x binutils/2.36.1 x x x x x x binutils/2.35-GCCcore-10.2.0 x x x x x x binutils/2.35 x x x x x x binutils/2.34-GCCcore-9.3.0 x x x x x x binutils/2.34 x x x x x x binutils/2.32-GCCcore-8.3.0 x x x x x x binutils/2.32 x x x x x x binutils/2.31.1-GCCcore-8.2.0 - x - - - - binutils/2.31.1 - x - - - x binutils/2.30 - - - - - x binutils/2.28 x x x x x x"}, {"location": "available_software/detail/biobakery-workflows/", "title": "biobakery-workflows", "text": ""}, {"location": "available_software/detail/biobakery-workflows/#available-modules", "title": "Available modules", "text": "

              The overview below shows which biobakery-workflows installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using biobakery-workflows, load one of these modules using a module load command like:

              module load biobakery-workflows/3.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty biobakery-workflows/3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/biobambam2/", "title": "biobambam2", "text": ""}, {"location": "available_software/detail/biobambam2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which biobambam2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using biobambam2, load one of these modules using a module load command like:

              module load biobambam2/2.0.185-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty biobambam2/2.0.185-GCC-12.3.0 x x x x x x biobambam2/2.0.87-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/biogeme/", "title": "biogeme", "text": ""}, {"location": "available_software/detail/biogeme/#available-modules", "title": "Available modules", "text": "

              The overview below shows which biogeme installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using biogeme, load one of these modules using a module load command like:

              module load biogeme/3.2.10-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty biogeme/3.2.10-foss-2022a x x x - x x biogeme/3.2.6-foss-2022a x x x - x x"}, {"location": "available_software/detail/biom-format/", "title": "biom-format", "text": ""}, {"location": "available_software/detail/biom-format/#available-modules", "title": "Available modules", "text": "

              The overview below shows which biom-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using biom-format, load one of these modules using a module load command like:

              module load biom-format/2.1.15-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty biom-format/2.1.15-foss-2022b x x x x x x biom-format/2.1.14-foss-2022a x x x x x x biom-format/2.1.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/bmtagger/", "title": "bmtagger", "text": ""}, {"location": "available_software/detail/bmtagger/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bmtagger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bmtagger, load one of these modules using a module load command like:

              module load bmtagger/3.101-gompi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bmtagger/3.101-gompi-2020b - x x x x x"}, {"location": "available_software/detail/bokeh/", "title": "bokeh", "text": ""}, {"location": "available_software/detail/bokeh/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bokeh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bokeh, load one of these modules using a module load command like:

              module load bokeh/3.2.2-foss-2023a\n
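
              A quick sanity check after loading is to import the package with the Python interpreter that the module brings along. A minimal sketch, assuming the module makes the bokeh package importable:

              module load bokeh/3.2.2-foss-2023a
              python -c 'import bokeh; print(bokeh.__version__)'   # prints the bokeh version if the module loaded correctly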

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bokeh/3.2.2-foss-2023a x x x x x x bokeh/2.4.3-foss-2022a x x x x x x bokeh/2.4.2-foss-2021b x x x x x x bokeh/2.4.1-foss-2021a x x x - x x bokeh/2.2.3-intel-2020b - x x - x x bokeh/2.2.3-fosscuda-2020b x - - - x - bokeh/2.2.3-foss-2020b - x x x x x bokeh/2.0.2-intel-2020a-Python-3.8.2 - x x - x x bokeh/2.0.2-foss-2020a-Python-3.8.2 - x x - x x bokeh/1.4.0-intel-2019b-Python-3.7.4 - x x - x x bokeh/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/boto3/", "title": "boto3", "text": ""}, {"location": "available_software/detail/boto3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which boto3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using boto3, load one of these modules using a module load command like:

              module load boto3/1.34.10-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty boto3/1.34.10-GCCcore-12.2.0 x x x x x x boto3/1.26.163-GCCcore-12.2.0 x x x x x x boto3/1.20.13-GCCcore-11.2.0 x x x - x x boto3/1.20.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/bpp/", "title": "bpp", "text": ""}, {"location": "available_software/detail/bpp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bpp, load one of these modules using a module load command like:

              module load bpp/4.4.0-GCC-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bpp/4.4.0-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/btllib/", "title": "btllib", "text": ""}, {"location": "available_software/detail/btllib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which btllib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using btllib, load one of these modules using a module load command like:

              module load btllib/1.7.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty btllib/1.7.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/build/", "title": "build", "text": ""}, {"location": "available_software/detail/build/#available-modules", "title": "Available modules", "text": "

              The overview below shows which build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using build, load one of these modules using a module load command like:

              module load build/0.10.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty build/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/buildenv/", "title": "buildenv", "text": ""}, {"location": "available_software/detail/buildenv/#available-modules", "title": "Available modules", "text": "

              The overview below shows which buildenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using buildenv, load one of these modules using a module load command like:

              module load buildenv/default-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty buildenv/default-intel-2019b - x x - x x buildenv/default-foss-2019b - x x - x x"}, {"location": "available_software/detail/buildingspy/", "title": "buildingspy", "text": ""}, {"location": "available_software/detail/buildingspy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which buildingspy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using buildingspy, load one of these modules using a module load command like:

              module load buildingspy/4.0.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty buildingspy/4.0.0-foss-2022a x x x - x x"}, {"location": "available_software/detail/bwa-meth/", "title": "bwa-meth", "text": ""}, {"location": "available_software/detail/bwa-meth/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bwa-meth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bwa-meth, load one of these modules using a module load command like:

              module load bwa-meth/0.2.6-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bwa-meth/0.2.6-GCC-11.3.0 x x x x x x bwa-meth/0.2.2-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/bwidget/", "title": "bwidget", "text": ""}, {"location": "available_software/detail/bwidget/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bwidget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bwidget, load one of these modules using a module load command like:

              module load bwidget/1.9.15-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bwidget/1.9.15-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/bx-python/", "title": "bx-python", "text": ""}, {"location": "available_software/detail/bx-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bx-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bx-python, load one of these modules using a module load command like:

              module load bx-python/0.10.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bx-python/0.10.0-foss-2023a x x x x x x bx-python/0.9.0-foss-2022a x x x x x x bx-python/0.8.13-foss-2021b x x x - x x bx-python/0.8.9-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/bzip2/", "title": "bzip2", "text": ""}, {"location": "available_software/detail/bzip2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which bzip2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using bzip2, load one of these modules using a module load command like:

              module load bzip2/1.0.8-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty bzip2/1.0.8-GCCcore-13.2.0 x x x x x x bzip2/1.0.8-GCCcore-12.3.0 x x x x x x bzip2/1.0.8-GCCcore-12.2.0 x x x x x x bzip2/1.0.8-GCCcore-11.3.0 x x x x x x bzip2/1.0.8-GCCcore-11.2.0 x x x x x x bzip2/1.0.8-GCCcore-10.3.0 x x x x x x bzip2/1.0.8-GCCcore-10.2.0 x x x x x x bzip2/1.0.8-GCCcore-9.3.0 x x x x x x bzip2/1.0.8-GCCcore-8.3.0 x x x x x x bzip2/1.0.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/c-ares/", "title": "c-ares", "text": ""}, {"location": "available_software/detail/c-ares/#available-modules", "title": "Available modules", "text": "

              The overview below shows which c-ares installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using c-ares, load one of these modules using a module load command like:

              module load c-ares/1.18.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty c-ares/1.18.1-GCCcore-11.2.0 x x x x x x c-ares/1.17.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/cURL/", "title": "cURL", "text": ""}, {"location": "available_software/detail/cURL/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cURL, load one of these modules using a module load command like:

              module load cURL/8.3.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cURL/8.3.0-GCCcore-13.2.0 x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x cURL/7.83.0-GCCcore-11.3.0 x x x x x x cURL/7.78.0-GCCcore-11.2.0 x x x x x x cURL/7.76.0-GCCcore-10.3.0 x x x x x x cURL/7.72.0-GCCcore-10.2.0 x x x x x x cURL/7.69.1-GCCcore-9.3.0 x x x x x x cURL/7.66.0-GCCcore-8.3.0 x x x x x x cURL/7.63.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/cairo/", "title": "cairo", "text": ""}, {"location": "available_software/detail/cairo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cairo, load one of these modules using a module load command like:

              module load cairo/1.17.8-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cairo/1.17.8-GCCcore-12.3.0 x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x cairo/1.17.4-GCCcore-11.3.0 x x x x x x cairo/1.16.0-GCCcore-11.2.0 x x x x x x cairo/1.16.0-GCCcore-10.3.0 x x x x x x cairo/1.16.0-GCCcore-10.2.0 x x x x x x cairo/1.16.0-GCCcore-9.3.0 x x x x x x cairo/1.16.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/canu/", "title": "canu", "text": ""}, {"location": "available_software/detail/canu/#available-modules", "title": "Available modules", "text": "

              The overview below shows which canu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using canu, load one of these modules using a module load command like:

              module load canu/2.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty canu/2.2-GCCcore-11.2.0 x x x - x x canu/2.2-GCCcore-10.3.0 - x x - x x canu/2.1.1-GCCcore-10.2.0 - x x - x x canu/1.9-GCCcore-8.3.0-Java-11 - - x - x -"}, {"location": "available_software/detail/carputils/", "title": "carputils", "text": ""}, {"location": "available_software/detail/carputils/#available-modules", "title": "Available modules", "text": "

              The overview below shows which carputils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using carputils, load one of these modules using a module load command like:

              module load carputils/20210513-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty carputils/20210513-foss-2020b - x x x x x carputils/20200915-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ccache/", "title": "ccache", "text": ""}, {"location": "available_software/detail/ccache/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ccache installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ccache, load one of these modules using a module load command like:

              module load ccache/4.6.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ccache/4.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/cctbx-base/", "title": "cctbx-base", "text": ""}, {"location": "available_software/detail/cctbx-base/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cctbx-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cctbx-base, load one of these modules using a module load command like:

              module load cctbx-base/2023.5-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cctbx-base/2023.5-foss-2022a - - x - x - cctbx-base/2020.8-fosscuda-2020b x - - - x - cctbx-base/2020.8-foss-2020b x x x x x x"}, {"location": "available_software/detail/cdbfasta/", "title": "cdbfasta", "text": ""}, {"location": "available_software/detail/cdbfasta/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cdbfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cdbfasta, load one of these modules using a module load command like:

              module load cdbfasta/0.99-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cdbfasta/0.99-iccifort-2019.5.281 - x x - x - cdbfasta/0.99-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/cdo-bindings/", "title": "cdo-bindings", "text": ""}, {"location": "available_software/detail/cdo-bindings/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cdo-bindings installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cdo-bindings, load one of these modules using a module load command like:

              module load cdo-bindings/1.5.7-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cdo-bindings/1.5.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/cdsapi/", "title": "cdsapi", "text": ""}, {"location": "available_software/detail/cdsapi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cdsapi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cdsapi, load one of these modules using a module load command like:

              module load cdsapi/0.5.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cdsapi/0.5.1-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/cell2location/", "title": "cell2location", "text": ""}, {"location": "available_software/detail/cell2location/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cell2location installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cell2location, load one of these modules using a module load command like:

              module load cell2location/0.05-alpha-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cell2location/0.05-alpha-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/cffi/", "title": "cffi", "text": ""}, {"location": "available_software/detail/cffi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cffi, load one of these modules using a module load command like:

              module load cffi/1.15.1-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cffi/1.15.1-GCCcore-13.2.0 x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x cffi/1.15.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/chemprop/", "title": "chemprop", "text": ""}, {"location": "available_software/detail/chemprop/#available-modules", "title": "Available modules", "text": "

              The overview below shows which chemprop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using chemprop, load one of these modules using a module load command like:

              module load chemprop/1.5.2-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty chemprop/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - chemprop/1.5.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/chewBBACA/", "title": "chewBBACA", "text": ""}, {"location": "available_software/detail/chewBBACA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which chewBBACA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using chewBBACA, load one of these modules using a module load command like:

              module load chewBBACA/2.5.5-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty chewBBACA/2.5.5-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/cicero/", "title": "cicero", "text": ""}, {"location": "available_software/detail/cicero/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cicero installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cicero, load one of these modules using a module load command like:

              module load cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3 x x x x x x cicero/1.3.4.11-foss-2020b-R-4.0.3-Monocle3 - x x x x x"}, {"location": "available_software/detail/cimfomfa/", "title": "cimfomfa", "text": ""}, {"location": "available_software/detail/cimfomfa/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cimfomfa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cimfomfa, load one of these modules using a module load command like:

              module load cimfomfa/22.273-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cimfomfa/22.273-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/code-cli/", "title": "code-cli", "text": ""}, {"location": "available_software/detail/code-cli/#available-modules", "title": "Available modules", "text": "

              The overview below shows which code-cli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using code-cli, load one of these modules using a module load command like:

              module load code-cli/1.85.1-x64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty code-cli/1.85.1-x64 x x x x x x"}, {"location": "available_software/detail/code-server/", "title": "code-server", "text": ""}, {"location": "available_software/detail/code-server/#available-modules", "title": "Available modules", "text": "

              The overview below shows which code-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using code-server, load one of these modules using a module load command like:

              module load code-server/4.9.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty code-server/4.9.1 x x x x x x"}, {"location": "available_software/detail/colossalai/", "title": "colossalai", "text": ""}, {"location": "available_software/detail/colossalai/#available-modules", "title": "Available modules", "text": "

              The overview below shows which colossalai installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using colossalai, load one of these modules using a module load command like:

              module load colossalai/0.1.8-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty colossalai/0.1.8-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/conan/", "title": "conan", "text": ""}, {"location": "available_software/detail/conan/#available-modules", "title": "Available modules", "text": "

              The overview below shows which conan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using conan, load one of these modules using a module load command like:

              module load conan/1.60.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty conan/1.60.2-GCCcore-12.3.0 x x x x x x conan/1.58.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/configurable-http-proxy/", "title": "configurable-http-proxy", "text": ""}, {"location": "available_software/detail/configurable-http-proxy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which configurable-http-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using configurable-http-proxy, load one of these modules using a module load command like:

              module load configurable-http-proxy/4.5.5-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty configurable-http-proxy/4.5.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/cooler/", "title": "cooler", "text": ""}, {"location": "available_software/detail/cooler/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cooler, load one of these modules using a module load command like:

              module load cooler/0.9.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cooler/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/coverage/", "title": "coverage", "text": ""}, {"location": "available_software/detail/coverage/#available-modules", "title": "Available modules", "text": "

              The overview below shows which coverage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using coverage, load one of these modules using a module load command like:

              module load coverage/7.2.7-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty coverage/7.2.7-GCCcore-11.3.0 x x x x x x coverage/5.5-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/cppy/", "title": "cppy", "text": ""}, {"location": "available_software/detail/cppy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cppy, load one of these modules using a module load command like:

              module load cppy/1.2.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cppy/1.2.1-GCCcore-12.3.0 x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x cppy/1.2.1-GCCcore-11.3.0 x x x x x x cppy/1.1.0-GCCcore-11.2.0 x x x x x x cppy/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/cpu_features/", "title": "cpu_features", "text": ""}, {"location": "available_software/detail/cpu_features/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cpu_features installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cpu_features, load one of these modules using a module load command like:

              module load cpu_features/0.6.0-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cpu_features/0.6.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/cryoDRGN/", "title": "cryoDRGN", "text": ""}, {"location": "available_software/detail/cryoDRGN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cryoDRGN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cryoDRGN, load one of these modules using a module load command like:

              module load cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1 x - - - x - cryoDRGN/0.3.5-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/cryptography/", "title": "cryptography", "text": ""}, {"location": "available_software/detail/cryptography/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cryptography installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cryptography, load one of these modules using a module load command like:

              module load cryptography/41.0.5-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cryptography/41.0.5-GCCcore-13.2.0 x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/cuDNN/", "title": "cuDNN", "text": ""}, {"location": "available_software/detail/cuDNN/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cuDNN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cuDNN, load one of these modules using a module load command like:

              module load cuDNN/8.9.2.26-CUDA-12.1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cuDNN/8.9.2.26-CUDA-12.1.1 x - x - x - cuDNN/8.4.1.50-CUDA-11.7.0 x - x - x - cuDNN/8.2.2.26-CUDA-11.4.1 x - - - x - cuDNN/8.2.1.32-CUDA-11.3.1 x x x - x x cuDNN/8.0.4.30-CUDA-11.1.1 x - - - x x"}, {"location": "available_software/detail/cuTENSOR/", "title": "cuTENSOR", "text": ""}, {"location": "available_software/detail/cuTENSOR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cuTENSOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cuTENSOR, load one of these modules using a module load command like:

              module load cuTENSOR/1.2.2.5-CUDA-11.1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cuTENSOR/1.2.2.5-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/cutadapt/", "title": "cutadapt", "text": ""}, {"location": "available_software/detail/cutadapt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cutadapt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cutadapt, load one of these modules using a module load command like:

              module load cutadapt/4.2-GCCcore-11.3.0\n
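
              Once loaded, the cutadapt executable is on your PATH. A minimal sketch to confirm this (the --version flag simply reports the installed version):

              module load cutadapt/4.2-GCCcore-11.3.0
              which cutadapt       # shows the module-provided executable
              cutadapt --version   # should report 4.2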

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cutadapt/4.2-GCCcore-11.3.0 x x x x x x cutadapt/3.5-GCCcore-11.2.0 x x x - x x cutadapt/3.4-GCCcore-10.2.0 - x x x x x cutadapt/2.10-GCCcore-9.3.0-Python-3.8.2 - x x - x x cutadapt/2.7-GCCcore-8.3.0-Python-3.7.4 - x x - x x cutadapt/1.18-GCCcore-8.3.0-Python-2.7.16 - x x - x x cutadapt/1.18-GCCcore-8.3.0 - x x - x x cutadapt/1.18-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/cuteSV/", "title": "cuteSV", "text": ""}, {"location": "available_software/detail/cuteSV/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cuteSV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cuteSV, load one of these modules using a module load command like:

              module load cuteSV/2.0.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cuteSV/2.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/cython-blis/", "title": "cython-blis", "text": ""}, {"location": "available_software/detail/cython-blis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which cython-blis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using cython-blis, load one of these modules using a module load command like:

              module load cython-blis/0.9.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty cython-blis/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dask/", "title": "dask", "text": ""}, {"location": "available_software/detail/dask/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using dask, load one of these modules using a module load command like:

              module load dask/2023.12.1-foss-2023a\n
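
              If you combine dask with other Python modules, keep the toolchain suffix identical so the builds are compatible. A minimal sketch that loads two foss-2023a builds listed on this page (bokeh is just an example of a second, compatible module):

              module load dask/2023.12.1-foss-2023a
              module load bokeh/3.2.2-foss-2023a   # same foss-2023a toolchain
              python -c 'import dask, bokeh; print(dask.__version__, bokeh.__version__)'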

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dask/2023.12.1-foss-2023a x x x x x x dask/2022.10.0-foss-2022a x x x x x x dask/2022.1.0-foss-2021b x x x x x x dask/2021.9.1-foss-2021a x x x - x x dask/2021.2.0-intel-2020b - x x - x x dask/2021.2.0-fosscuda-2020b x - - - x - dask/2021.2.0-foss-2020b - x x x x x dask/2.18.1-intel-2020a-Python-3.8.2 - x x - x x dask/2.18.1-foss-2020a-Python-3.8.2 - x x - x x dask/2.8.0-intel-2019b-Python-3.7.4 - x x - x x dask/2.8.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dbus-glib/", "title": "dbus-glib", "text": ""}, {"location": "available_software/detail/dbus-glib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dbus-glib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using dbus-glib, load one of these modules using a module load command like:

              module load dbus-glib/0.112-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dbus-glib/0.112-GCCcore-11.2.0 x x x x x x dbus-glib/0.112-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/dclone/", "title": "dclone", "text": ""}, {"location": "available_software/detail/dclone/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using dclone, load one of these modules using a module load command like:

              module load dclone/2.3-0-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dclone/2.3-0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/deal.II/", "title": "deal.II", "text": ""}, {"location": "available_software/detail/deal.II/#available-modules", "title": "Available modules", "text": "

              The overview below shows which deal.II installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using deal.II, load one of these modules using a module load command like:

              module load deal.II/9.3.3-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty deal.II/9.3.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/decona/", "title": "decona", "text": ""}, {"location": "available_software/detail/decona/#available-modules", "title": "Available modules", "text": "

              The overview below shows which decona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using decona, load one of these modules using a module load command like:

              module load decona/0.1.2-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty decona/0.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepTools/", "title": "deepTools", "text": ""}, {"location": "available_software/detail/deepTools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which deepTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using deepTools, load one of these modules using a module load command like:

              module load deepTools/3.5.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty deepTools/3.5.1-foss-2021b x x x - x x deepTools/3.3.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepdiff/", "title": "deepdiff", "text": ""}, {"location": "available_software/detail/deepdiff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which deepdiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using deepdiff, load one of these modules using a module load command like:

              module load deepdiff/6.7.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty deepdiff/6.7.1-GCCcore-12.3.0 x x x x x x deepdiff/6.7.1-GCCcore-12.2.0 x x x x x x deepdiff/5.8.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/detectron2/", "title": "detectron2", "text": ""}, {"location": "available_software/detail/detectron2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which detectron2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using detectron2, load one of these modules using a module load command like:

              module load detectron2/0.6-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty detectron2/0.6-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/devbio-napari/", "title": "devbio-napari", "text": ""}, {"location": "available_software/detail/devbio-napari/#available-modules", "title": "Available modules", "text": "

              The overview below shows which devbio-napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using devbio-napari, load one of these modules using a module load command like:

              module load devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0 x - - - x - devbio-napari/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dicom2nifti/", "title": "dicom2nifti", "text": ""}, {"location": "available_software/detail/dicom2nifti/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dicom2nifti installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dicom2nifti, load one of these modules using a module load command like:

              module load dicom2nifti/2.3.0-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dicom2nifti/2.3.0-fosscuda-2020b x - - - x - dicom2nifti/2.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/dijitso/", "title": "dijitso", "text": ""}, {"location": "available_software/detail/dijitso/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dijitso installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dijitso, load one of these modules using a module load command like:

              module load dijitso/2019.1.0-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dijitso/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dill/", "title": "dill", "text": ""}, {"location": "available_software/detail/dill/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dill, load one of these modules using a module load command like:

              module load dill/0.3.7-GCCcore-12.3.0\n
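
              dill is a Python extension; as a minimal sketch, assuming the module above also brings in a matching Python (as EasyBuild-generated modules typically do), you can verify the installation with:

              python -c 'import dill; print(dill.__version__)'\n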

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dill/0.3.7-GCCcore-12.3.0 x x x x x x dill/0.3.7-GCCcore-12.2.0 x x x x x x dill/0.3.6-GCCcore-11.3.0 x x x x x x dill/0.3.4-GCCcore-11.2.0 x x x x x x dill/0.3.4-GCCcore-10.3.0 x x x - x x dill/0.3.3-GCCcore-10.2.0 - x x x x x dill/0.3.3-GCCcore-9.3.0 - x x - - x"}, {"location": "available_software/detail/dlib/", "title": "dlib", "text": ""}, {"location": "available_software/detail/dlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dlib, load one of these modules using a module load command like:

              module load dlib/19.22-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dlib/19.22-foss-2021a-CUDA-11.3.1 - - - - x - dlib/19.22-foss-2021a - x x - x x"}, {"location": "available_software/detail/dm-haiku/", "title": "dm-haiku", "text": ""}, {"location": "available_software/detail/dm-haiku/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dm-haiku installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dm-haiku, load one of these modules using a module load command like:

              module load dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/dm-tree/", "title": "dm-tree", "text": ""}, {"location": "available_software/detail/dm-tree/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dm-tree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dm-tree, load one of these modules using a module load command like:

              module load dm-tree/0.1.8-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dm-tree/0.1.8-GCCcore-11.3.0 x x x x x x dm-tree/0.1.6-GCCcore-10.3.0 x x x x x x dm-tree/0.1.5-GCCcore-10.2.0 x x x x x x dm-tree/0.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/dorado/", "title": "dorado", "text": ""}, {"location": "available_software/detail/dorado/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dorado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dorado, load one of these modules using a module load command like:

              module load dorado/0.5.1-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dorado/0.5.1-foss-2022a-CUDA-11.7.0 x - x - x - dorado/0.3.1-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.3.0-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.1.1-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/double-conversion/", "title": "double-conversion", "text": ""}, {"location": "available_software/detail/double-conversion/#available-modules", "title": "Available modules", "text": "

              The overview below shows which double-conversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using double-conversion, load one of these modules using a module load command like:

              module load double-conversion/3.3.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x double-conversion/3.2.0-GCCcore-11.3.0 x x x x x x double-conversion/3.1.5-GCCcore-11.2.0 x x x x x x double-conversion/3.1.5-GCCcore-10.3.0 x x x x x x double-conversion/3.1.5-GCCcore-10.2.0 x x x x x x double-conversion/3.1.5-GCCcore-9.3.0 - x x - x x double-conversion/3.1.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/drmaa-python/", "title": "drmaa-python", "text": ""}, {"location": "available_software/detail/drmaa-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which drmaa-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using drmaa-python, load one of these modules using a module load command like:

              module load drmaa-python/0.7.9-GCCcore-12.2.0-slurm\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty drmaa-python/0.7.9-GCCcore-12.2.0-slurm x x x x x x"}, {"location": "available_software/detail/dtcwt/", "title": "dtcwt", "text": ""}, {"location": "available_software/detail/dtcwt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dtcwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dtcwt, load one of these modules using a module load command like:

              module load dtcwt/0.12.0-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dtcwt/0.12.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/duplex-tools/", "title": "duplex-tools", "text": ""}, {"location": "available_software/detail/duplex-tools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which duplex-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using duplex-tools, load one of these modules using a module load command like:

              module load duplex-tools/0.3.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty duplex-tools/0.3.3-foss-2022a x x x x x x duplex-tools/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dynesty/", "title": "dynesty", "text": ""}, {"location": "available_software/detail/dynesty/#available-modules", "title": "Available modules", "text": "

              The overview below shows which dynesty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using dynesty, load one of these modules using a module load command like:

              module load dynesty/2.1.3-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty dynesty/2.1.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/eSpeak-NG/", "title": "eSpeak-NG", "text": ""}, {"location": "available_software/detail/eSpeak-NG/#available-modules", "title": "Available modules", "text": "

              The overview below shows which eSpeak-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using eSpeak-NG, load one of these modules using a module load command like:

              module load eSpeak-NG/1.50-gompi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty eSpeak-NG/1.50-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ebGSEA/", "title": "ebGSEA", "text": ""}, {"location": "available_software/detail/ebGSEA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ebGSEA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ebGSEA, load one of these modules using a module load command like:

              module load ebGSEA/0.1.0-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ebGSEA/0.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ecCodes/", "title": "ecCodes", "text": ""}, {"location": "available_software/detail/ecCodes/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ecCodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ecCodes, load one of these modules using a module load command like:

              module load ecCodes/2.24.2-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ecCodes/2.24.2-gompi-2021b x x x x x x ecCodes/2.22.1-gompi-2021a x x x - x x ecCodes/2.15.0-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/edlib/", "title": "edlib", "text": ""}, {"location": "available_software/detail/edlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which edlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using edlib, load one of these modules using a module load command like:

              module load edlib/1.3.9-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty edlib/1.3.9-GCC-11.3.0 x x x x x x edlib/1.3.9-GCC-11.2.0 x x x - x x edlib/1.3.9-GCC-10.3.0 x x x - x x edlib/1.3.9-GCC-10.2.0 - x x x x x edlib/1.3.8.post2-iccifort-2020.1.217-Python-3.8.2 - x x - x - edlib/1.3.8.post1-iccifort-2019.5.281-Python-3.7.4 - x x - x - edlib/1.3.8.post1-GCC-9.3.0-Python-3.8.2 - x x - x x edlib/1.3.8.post1-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/eggnog-mapper/", "title": "eggnog-mapper", "text": ""}, {"location": "available_software/detail/eggnog-mapper/#available-modules", "title": "Available modules", "text": "

              The overview below shows which eggnog-mapper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using eggnog-mapper, load one of these modules using a module load command like:

              module load eggnog-mapper/2.1.10-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty eggnog-mapper/2.1.10-foss-2020b x x x x x x eggnog-mapper/2.1.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/einops/", "title": "einops", "text": ""}, {"location": "available_software/detail/einops/#available-modules", "title": "Available modules", "text": "

              The overview below shows which einops installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using einops, load one of these modules using a module load command like:

              module load einops/0.4.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty einops/0.4.1-GCCcore-11.3.0 x x x x x x einops/0.4.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/elfutils/", "title": "elfutils", "text": ""}, {"location": "available_software/detail/elfutils/#available-modules", "title": "Available modules", "text": "

              The overview below shows which elfutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using elfutils, load one of these modules using a module load command like:

              module load elfutils/0.187-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty elfutils/0.187-GCCcore-11.3.0 x x x x x x elfutils/0.185-GCCcore-11.2.0 x x x x x x elfutils/0.185-GCCcore-10.3.0 x x x x x x elfutils/0.185-GCCcore-8.3.0 x - - - x - elfutils/0.183-GCCcore-10.2.0 - - x x x -"}, {"location": "available_software/detail/elprep/", "title": "elprep", "text": ""}, {"location": "available_software/detail/elprep/#available-modules", "title": "Available modules", "text": "

              The overview below shows which elprep installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using elprep, load one of these modules using a module load command like:

              module load elprep/5.1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty elprep/5.1.1 - x x - x -"}, {"location": "available_software/detail/enchant-2/", "title": "enchant-2", "text": ""}, {"location": "available_software/detail/enchant-2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which enchant-2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using enchant-2, load one of these modules using a module load command like:

              module load enchant-2/2.3.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty enchant-2/2.3.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/epiScanpy/", "title": "epiScanpy", "text": ""}, {"location": "available_software/detail/epiScanpy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which epiScanpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using epiScanpy, load one of these modules using a module load command like:

              module load epiScanpy/0.4.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty epiScanpy/0.4.0-foss-2022a x x x x x x epiScanpy/0.3.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/exiv2/", "title": "exiv2", "text": ""}, {"location": "available_software/detail/exiv2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which exiv2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using exiv2, load one of these modules using a module load command like:

              module load exiv2/0.27.5-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty exiv2/0.27.5-GCCcore-11.2.0 x x x x x x exiv2/0.27.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/expat/", "title": "expat", "text": ""}, {"location": "available_software/detail/expat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which expat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using expat, load one of these modules using a module load command like:

              module load expat/2.5.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty expat/2.5.0-GCCcore-13.2.0 x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x expat/2.4.8-GCCcore-11.3.0 x x x x x x expat/2.4.1-GCCcore-11.2.0 x x x x x x expat/2.2.9-GCCcore-10.3.0 x x x x x x expat/2.2.9-GCCcore-10.2.0 x x x x x x expat/2.2.9-GCCcore-9.3.0 x x x x x x expat/2.2.7-GCCcore-8.3.0 x x x x x x expat/2.2.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/expecttest/", "title": "expecttest", "text": ""}, {"location": "available_software/detail/expecttest/#available-modules", "title": "Available modules", "text": "

              The overview below shows which expecttest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using expecttest, load one of these modules using a module load command like:

              module load expecttest/0.1.5-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty expecttest/0.1.5-GCCcore-12.3.0 x x x x x x expecttest/0.1.3-GCCcore-12.2.0 x x x x x x expecttest/0.1.3-GCCcore-11.3.0 x x x x x x expecttest/0.1.3-GCCcore-11.2.0 x x x x x x expecttest/0.1.3-GCCcore-10.3.0 x x x x x x expecttest/0.1.3-GCCcore-10.2.0 x - - - - -"}, {"location": "available_software/detail/fasta-reader/", "title": "fasta-reader", "text": ""}, {"location": "available_software/detail/fasta-reader/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fasta-reader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fasta-reader, load one of these modules using a module load command like:

              module load fasta-reader/3.0.2-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fasta-reader/3.0.2-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/fastahack/", "title": "fastahack", "text": ""}, {"location": "available_software/detail/fastahack/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fastahack installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fastahack, load one of these modules using a module load command like:

              module load fastahack/1.0.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fastahack/1.0.0-GCCcore-11.3.0 x x x x x x fastahack/1.0.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/fastai/", "title": "fastai", "text": ""}, {"location": "available_software/detail/fastai/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fastai installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fastai, load one of these modules using a module load command like:

              module load fastai/2.7.10-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fastai/2.7.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/fastp/", "title": "fastp", "text": ""}, {"location": "available_software/detail/fastp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fastp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fastp, load one of these modules using a module load command like:

              module load fastp/0.23.2-GCC-11.2.0\n
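
              fastp is a command-line tool for trimming and quality-filtering sequencing reads; a minimal paired-end sketch, with placeholder file names and assuming the module above is loaded:

              fastp -i sample_R1.fastq.gz -I sample_R2.fastq.gz -o trimmed_R1.fastq.gz -O trimmed_R2.fastq.gz\n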

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fastp/0.23.2-GCC-11.2.0 x x x - x x fastp/0.20.1-iccifort-2020.1.217 - x x - x - fastp/0.20.0-iccifort-2019.5.281 - x - - - - fastp/0.20.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/fermi-lite/", "title": "fermi-lite", "text": ""}, {"location": "available_software/detail/fermi-lite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fermi-lite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fermi-lite, load one of these modules using a module load command like:

              module load fermi-lite/20190320-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fermi-lite/20190320-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/festival/", "title": "festival", "text": ""}, {"location": "available_software/detail/festival/#available-modules", "title": "Available modules", "text": "

              The overview below shows which festival installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using festival, load one of these modules using a module load command like:

              module load festival/2.5.0-GCCcore-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty festival/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/fetchMG/", "title": "fetchMG", "text": ""}, {"location": "available_software/detail/fetchMG/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fetchMG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fetchMG, load one of these modules using a module load command like:

              module load fetchMG/1.0-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fetchMG/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ffnvcodec/", "title": "ffnvcodec", "text": ""}, {"location": "available_software/detail/ffnvcodec/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ffnvcodec, load one of these modules using a module load command like:

              module load ffnvcodec/12.0.16.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ffnvcodec/12.0.16.0 x x x x x x ffnvcodec/11.1.5.2 x x x x x x"}, {"location": "available_software/detail/file/", "title": "file", "text": ""}, {"location": "available_software/detail/file/#available-modules", "title": "Available modules", "text": "

              The overview below shows which file installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using file, load one of these modules using a module load command like:

              module load file/5.43-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty file/5.43-GCCcore-11.3.0 x x x x x x file/5.41-GCCcore-11.2.0 x x x x x x file/5.39-GCCcore-10.2.0 - x x x x x file/5.38-GCCcore-9.3.0 - x x - x x file/5.38-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/filevercmp/", "title": "filevercmp", "text": ""}, {"location": "available_software/detail/filevercmp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which filevercmp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using filevercmp, load one of these modules using a module load command like:

              module load filevercmp/20191210-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty filevercmp/20191210-GCCcore-11.3.0 x x x x x x filevercmp/20191210-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/finder/", "title": "finder", "text": ""}, {"location": "available_software/detail/finder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which finder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using finder, load one of these modules using a module load command like:

              module load finder/1.1.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty finder/1.1.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/flair-NLP/", "title": "flair-NLP", "text": ""}, {"location": "available_software/detail/flair-NLP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which flair-NLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using flair-NLP, load one of these modules using a module load command like:

              module load flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1 x - - - x - flair-NLP/0.11.3-foss-2021a x x x - x x"}, {"location": "available_software/detail/flatbuffers-python/", "title": "flatbuffers-python", "text": ""}, {"location": "available_software/detail/flatbuffers-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using flatbuffers-python, load one of these modules using a module load command like:

              module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers-python/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.3.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-10.3.0 x x x x x x flatbuffers-python/1.12-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/flatbuffers/", "title": "flatbuffers", "text": ""}, {"location": "available_software/detail/flatbuffers/#available-modules", "title": "Available modules", "text": "

              The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using flatbuffers, load one of these modules using a module load command like:

              module load flatbuffers/23.5.26-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers/2.0.7-GCCcore-11.3.0 x x x x x x flatbuffers/2.0.0-GCCcore-11.2.0 x x x x x x flatbuffers/2.0.0-GCCcore-10.3.0 x x x x x x flatbuffers/1.12.0-GCCcore-10.2.0 x x x x x x flatbuffers/1.12.0-GCCcore-9.3.0 - x x - x x flatbuffers/1.12.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flex/", "title": "flex", "text": ""}, {"location": "available_software/detail/flex/#available-modules", "title": "Available modules", "text": "

              The overview below shows which flex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using flex, load one of these modules using a module load command like:

              module load flex/2.6.4-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty flex/2.6.4-GCCcore-13.2.0 x x x x x x flex/2.6.4-GCCcore-12.3.0 x x x x x x flex/2.6.4-GCCcore-12.2.0 x x x x x x flex/2.6.4-GCCcore-11.3.0 x x x x x x flex/2.6.4-GCCcore-11.2.0 x x x x x x flex/2.6.4-GCCcore-10.3.0 x x x x x x flex/2.6.4-GCCcore-10.2.0 x x x x x x flex/2.6.4-GCCcore-9.3.0 x x x x x x flex/2.6.4-GCCcore-8.3.0 x x x x x x flex/2.6.4-GCCcore-8.2.0 - x - - - - flex/2.6.4 x x x x x x flex/2.6.3 x x x x x x flex/2.5.39-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flit/", "title": "flit", "text": ""}, {"location": "available_software/detail/flit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which flit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using flit, load one of these modules using a module load command like:

              module load flit/3.9.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty flit/3.9.0-GCCcore-13.2.0 x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/flowFDA/", "title": "flowFDA", "text": ""}, {"location": "available_software/detail/flowFDA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which flowFDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using flowFDA, load one of these modules using a module load command like:

              module load flowFDA/0.99-20220602-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty flowFDA/0.99-20220602-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/fmt/", "title": "fmt", "text": ""}, {"location": "available_software/detail/fmt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fmt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fmt, load one of these modules using a module load command like:

              module load fmt/10.1.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fmt/10.1.0-GCCcore-12.3.0 x x x x x x fmt/8.1.1-GCCcore-11.2.0 x x x - x x fmt/7.1.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/fontconfig/", "title": "fontconfig", "text": ""}, {"location": "available_software/detail/fontconfig/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fontconfig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fontconfig, load one of these modules using a module load command like:

              module load fontconfig/2.14.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x fontconfig/2.14.0-GCCcore-11.3.0 x x x x x x fontconfig/2.13.94-GCCcore-11.2.0 x x x x x x fontconfig/2.13.93-GCCcore-10.3.0 x x x x x x fontconfig/2.13.92-GCCcore-10.2.0 x x x x x x fontconfig/2.13.92-GCCcore-9.3.0 x x x x x x fontconfig/2.13.1-GCCcore-8.3.0 x x x - x x fontconfig/2.13.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/foss/", "title": "foss", "text": ""}, {"location": "available_software/detail/foss/#available-modules", "title": "Available modules", "text": "

              The overview below shows which foss installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using foss, load one of these modules using a module load command like:

              module load foss/2023b\n
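
              foss is the free and open source EasyBuild common toolchain (GCC, OpenMPI and open-source math libraries); as a quick sketch, once the module above is loaded the compiler and MPI wrapper should be on your PATH:

              gcc --version && mpicc --version\n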

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty foss/2023b x x x x x x foss/2023a x x x x x x foss/2022b x x x x x x foss/2022a x x x x x x foss/2021b x x x x x x foss/2021a x x x x x x foss/2020b x x x x x x foss/2020a - x x - x x foss/2019b x x x - x x"}, {"location": "available_software/detail/fosscuda/", "title": "fosscuda", "text": ""}, {"location": "available_software/detail/fosscuda/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fosscuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fosscuda, load one of these modules using a module load command like:

              module load fosscuda/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fosscuda/2020b x - - - x -"}, {"location": "available_software/detail/freebayes/", "title": "freebayes", "text": ""}, {"location": "available_software/detail/freebayes/#available-modules", "title": "Available modules", "text": "

              The overview below shows which freebayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using freebayes, load one of these modules using a module load command like:

              module load freebayes/1.3.5-GCC-10.2.0\n
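
              freebayes is a command-line variant caller; a minimal sketch, with placeholder file names and assuming the module above is loaded:

              freebayes -f reference.fa alignments.bam > variants.vcf\n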

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty freebayes/1.3.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/freeglut/", "title": "freeglut", "text": ""}, {"location": "available_software/detail/freeglut/#available-modules", "title": "Available modules", "text": "

              The overview below shows which freeglut installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using freeglut, load one of these modules using a module load command like:

              module load freeglut/3.2.2-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty freeglut/3.2.2-GCCcore-11.3.0 x x x x x x freeglut/3.2.1-GCCcore-11.2.0 x x x x x x freeglut/3.2.1-GCCcore-10.3.0 - x x - x x freeglut/3.2.1-GCCcore-10.2.0 - x x x x x freeglut/3.2.1-GCCcore-9.3.0 - x x - x x freeglut/3.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/freetype-py/", "title": "freetype-py", "text": ""}, {"location": "available_software/detail/freetype-py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which freetype-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using freetype-py, load one of these modules using a module load command like:

              module load freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/freetype/", "title": "freetype", "text": ""}, {"location": "available_software/detail/freetype/#available-modules", "title": "Available modules", "text": "

              The overview below shows which freetype installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using freetype, load one of these modules using a module load command like:

              module load freetype/2.13.2-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty freetype/2.13.2-GCCcore-13.2.0 x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x freetype/2.12.1-GCCcore-11.3.0 x x x x x x freetype/2.11.0-GCCcore-11.2.0 x x x x x x freetype/2.10.4-GCCcore-10.3.0 x x x x x x freetype/2.10.3-GCCcore-10.2.0 x x x x x x freetype/2.10.1-GCCcore-9.3.0 x x x x x x freetype/2.10.1-GCCcore-8.3.0 x x x - x x freetype/2.9.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/fsom/", "title": "fsom", "text": ""}, {"location": "available_software/detail/fsom/#available-modules", "title": "Available modules", "text": "

              The overview below shows which fsom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using fsom, load one of these modules using a module load command like:

              module load fsom/20151117-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty fsom/20151117-GCCcore-11.3.0 x x x x x x fsom/20141119-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/funannotate/", "title": "funannotate", "text": ""}, {"location": "available_software/detail/funannotate/#available-modules", "title": "Available modules", "text": "

              The overview below shows which funannotate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using funannotate, load one of these modules using a module load command like:

              module load funannotate/1.8.13-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty funannotate/1.8.13-foss-2021b x x x x x x"}, {"location": "available_software/detail/g2clib/", "title": "g2clib", "text": ""}, {"location": "available_software/detail/g2clib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which g2clib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using g2clib, load one of these modules using a module load command like:

              module load g2clib/1.6.0-GCCcore-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty g2clib/1.6.0-GCCcore-9.3.0 - x x - x x g2clib/1.6.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2lib/", "title": "g2lib", "text": ""}, {"location": "available_software/detail/g2lib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which g2lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using g2lib, load one of these modules using a module load command like:

              module load g2lib/3.1.0-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty g2lib/3.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2log/", "title": "g2log", "text": ""}, {"location": "available_software/detail/g2log/#available-modules", "title": "Available modules", "text": "

              The overview below shows which g2log installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using g2log, load one of these modules using a module load command like:

              module load g2log/1.0-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty g2log/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/garnett/", "title": "garnett", "text": ""}, {"location": "available_software/detail/garnett/#available-modules", "title": "Available modules", "text": "

              The overview below shows which garnett installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using garnett, load one of these modules using a module load command like:

              module load garnett/0.1.20-foss-2020b-R-4.0.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty garnett/0.1.20-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/gawk/", "title": "gawk", "text": ""}, {"location": "available_software/detail/gawk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gawk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gawk, load one of these modules using a module load command like:

              module load gawk/5.1.0-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gawk/5.1.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/gbasis/", "title": "gbasis", "text": ""}, {"location": "available_software/detail/gbasis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gbasis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gbasis, load one of these modules using a module load command like:

              module load gbasis/20210904-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gbasis/20210904-intel-2022a x x x x x x"}, {"location": "available_software/detail/gc/", "title": "gc", "text": ""}, {"location": "available_software/detail/gc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gc, load one of these modules using a module load command like:

              module load gc/8.2.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gc/8.2.0-GCCcore-11.2.0 x x x x x x gc/8.0.4-GCCcore-10.3.0 - x x - x x gc/7.6.12-GCCcore-9.3.0 - x x - x x gc/7.6.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gcccuda/", "title": "gcccuda", "text": ""}, {"location": "available_software/detail/gcccuda/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gcccuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gcccuda, load one of these modules using a module load command like:

              module load gcccuda/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gcccuda/2020b x x x x x x gcccuda/2019b x - - - x -"}, {"location": "available_software/detail/gcloud/", "title": "gcloud", "text": ""}, {"location": "available_software/detail/gcloud/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gcloud installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gcloud, load one of these modules using a module load command like:

              module load gcloud/382.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gcloud/382.0.0 - x x - x x"}, {"location": "available_software/detail/gcsfs/", "title": "gcsfs", "text": ""}, {"location": "available_software/detail/gcsfs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gcsfs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gcsfs, load one of these modules using a module load command like:

              module load gcsfs/2023.12.2.post1-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gcsfs/2023.12.2.post1-foss-2023a x x x x x x"}, {"location": "available_software/detail/gdbm/", "title": "gdbm", "text": ""}, {"location": "available_software/detail/gdbm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gdbm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gdbm, load one of these modules using a module load command like:

              module load gdbm/1.18.1-foss-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gdbm/1.18.1-foss-2020a - x x - x x"}, {"location": "available_software/detail/gdc-client/", "title": "gdc-client", "text": ""}, {"location": "available_software/detail/gdc-client/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gdc-client installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gdc-client, load one of these modules using a module load command like:

              module load gdc-client/1.6.0-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gdc-client/1.6.0-GCCcore-10.2.0 x x x x - x"}, {"location": "available_software/detail/gengetopt/", "title": "gengetopt", "text": ""}, {"location": "available_software/detail/gengetopt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gengetopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gengetopt, load one of these modules using a module load command like:

              module load gengetopt/2.23-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gengetopt/2.23-GCCcore-10.2.0 - x x x x x gengetopt/2.23-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/genomepy/", "title": "genomepy", "text": ""}, {"location": "available_software/detail/genomepy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which genomepy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using genomepy, load one of these modules using a module load command like:

              module load genomepy/0.15.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty genomepy/0.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/genozip/", "title": "genozip", "text": ""}, {"location": "available_software/detail/genozip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which genozip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using genozip, load one of these modules using a module load command like:

              module load genozip/13.0.5-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty genozip/13.0.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/gensim/", "title": "gensim", "text": ""}, {"location": "available_software/detail/gensim/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gensim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gensim, load one of these modules using a module load command like:

              module load gensim/4.2.0-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gensim/4.2.0-foss-2021a x x x - x x gensim/3.8.3-intel-2020b - x x - x x gensim/3.8.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/geopandas/", "title": "geopandas", "text": ""}, {"location": "available_software/detail/geopandas/#available-modules", "title": "Available modules", "text": "

              The overview below shows which geopandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using geopandas, load one of these modules using a module load command like:

              module load geopandas/0.12.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty geopandas/0.12.2-foss-2022b x x x x x x geopandas/0.8.1-intel-2019b-Python-3.7.4 - - x - x x geopandas/0.8.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/gettext/", "title": "gettext", "text": ""}, {"location": "available_software/detail/gettext/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gettext installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gettext, load one of these modules using a module load command like:

              module load gettext/0.22-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gettext/0.22-GCCcore-13.2.0 x x x x x x gettext/0.22 x x x x x x gettext/0.21.1-GCCcore-12.3.0 x x x x x x gettext/0.21.1-GCCcore-12.2.0 x x x x x x gettext/0.21.1 x x x x x x gettext/0.21-GCCcore-11.3.0 x x x x x x gettext/0.21-GCCcore-11.2.0 x x x x x x gettext/0.21-GCCcore-10.3.0 x x x x x x gettext/0.21-GCCcore-10.2.0 x x x x x x gettext/0.21 x x x x x x gettext/0.20.1-GCCcore-9.3.0 x x x x x x gettext/0.20.1-GCCcore-8.3.0 x x x - x x gettext/0.20.1 x x x x x x gettext/0.19.8.1-GCCcore-8.2.0 - x - - - - gettext/0.19.8.1 x x x x x x"}, {"location": "available_software/detail/gexiv2/", "title": "gexiv2", "text": ""}, {"location": "available_software/detail/gexiv2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gexiv2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gexiv2, load one of these modules using a module load command like:

              module load gexiv2/0.12.2-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gexiv2/0.12.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/gfbf/", "title": "gfbf", "text": ""}, {"location": "available_software/detail/gfbf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gfbf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gfbf, load one of these modules using a module load command like:

              module load gfbf/2023b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gfbf/2023b x x x x x x gfbf/2023a x x x x x x gfbf/2022b x x x x x x"}, {"location": "available_software/detail/gffread/", "title": "gffread", "text": ""}, {"location": "available_software/detail/gffread/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gffread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gffread, load one of these modules using a module load command like:

              module load gffread/0.12.7-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gffread/0.12.7-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/gffutils/", "title": "gffutils", "text": ""}, {"location": "available_software/detail/gffutils/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gffutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gffutils, load one of these modules using a module load command like:

              module load gffutils/0.12-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gffutils/0.12-foss-2022b x x x x x x"}, {"location": "available_software/detail/gflags/", "title": "gflags", "text": ""}, {"location": "available_software/detail/gflags/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gflags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gflags, load one of these modules using a module load command like:

              module load gflags/2.2.2-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gflags/2.2.2-GCCcore-12.2.0 x x x x x x gflags/2.2.2-GCCcore-11.3.0 x x x x x x gflags/2.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/giflib/", "title": "giflib", "text": ""}, {"location": "available_software/detail/giflib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which giflib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using giflib, load one of these modules using a module load command like:

              module load giflib/5.2.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty giflib/5.2.1-GCCcore-12.3.0 x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x giflib/5.2.1-GCCcore-11.3.0 x x x x x x giflib/5.2.1-GCCcore-11.2.0 x x x x x x giflib/5.2.1-GCCcore-10.3.0 x x x x x x giflib/5.2.1-GCCcore-10.2.0 x x x x x x giflib/5.2.1-GCCcore-9.3.0 - x x - x x giflib/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/git-lfs/", "title": "git-lfs", "text": ""}, {"location": "available_software/detail/git-lfs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which git-lfs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using git-lfs, load one of these modules using a module load command like:

              module load git-lfs/3.2.0\n
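
              git-lfs extends a regular git client, so it is typically loaded next to a git module. A hedged sketch of a first use; the git version is one of those listed on this site, and the repository URL is only a placeholder:

              # load a git client together with git-lfs
              module load git/2.42.0-GCCcore-13.2.0
              module load git-lfs/3.2.0

              # register the LFS filters for your account (needed once) and clone a repository that uses LFS
              git lfs install
              git clone https://example.org/some/repository-using-lfs.git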

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty git-lfs/3.2.0 x x x - x x"}, {"location": "available_software/detail/git/", "title": "git", "text": ""}, {"location": "available_software/detail/git/#available-modules", "title": "Available modules", "text": "

              The overview below shows which git installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using git, load one of these modules using a module load command like:

              module load git/2.42.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty git/2.42.0-GCCcore-13.2.0 x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x git/2.36.0-GCCcore-11.3.0-nodocs x x x x x x git/2.33.1-GCCcore-11.2.0-nodocs x x x x x x git/2.32.0-GCCcore-10.3.0-nodocs x x x x x x git/2.28.0-GCCcore-10.2.0-nodocs x x x x x x git/2.23.0-GCCcore-9.3.0-nodocs x x x x x x git/2.23.0-GCCcore-8.3.0-nodocs - x x - x x git/2.23.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glew/", "title": "glew", "text": ""}, {"location": "available_software/detail/glew/#available-modules", "title": "Available modules", "text": "

              The overview below shows which glew installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using glew, load one of these modules using a module load command like:

              module load glew/2.2.0-GCCcore-12.3.0-osmesa\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty glew/2.2.0-GCCcore-12.3.0-osmesa x x x x x x glew/2.2.0-GCCcore-12.2.0-egl x x x x x x glew/2.2.0-GCCcore-11.2.0-osmesa x x x x x x glew/2.2.0-GCCcore-11.2.0-egl x x x x x x glew/2.1.0-GCCcore-10.2.0 x x x x x x glew/2.1.0-GCCcore-9.3.0 - x x - x x glew/2.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glib-networking/", "title": "glib-networking", "text": ""}, {"location": "available_software/detail/glib-networking/#available-modules", "title": "Available modules", "text": "

              The overview below shows which glib-networking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using glib-networking, load one of these modules using a module load command like:

              module load glib-networking/2.72.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty glib-networking/2.72.1-GCCcore-11.2.0 x x x x x x glib-networking/2.68.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/glibc/", "title": "glibc", "text": ""}, {"location": "available_software/detail/glibc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which glibc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using glibc, load one of these modules using a module load command like:

              module load glibc/2.30-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty glibc/2.30-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/glog/", "title": "glog", "text": ""}, {"location": "available_software/detail/glog/#available-modules", "title": "Available modules", "text": "

              The overview below shows which glog installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using glog, load one of these modules using a module load command like:

              module load glog/0.6.0-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty glog/0.6.0-GCCcore-12.2.0 x x x x x x glog/0.6.0-GCCcore-11.3.0 x x x x x x glog/0.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmpy2/", "title": "gmpy2", "text": ""}, {"location": "available_software/detail/gmpy2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gmpy2, load one of these modules using a module load command like:

              module load gmpy2/2.1.5-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gmpy2/2.1.5-GCC-12.3.0 x x x x x x gmpy2/2.1.5-GCC-12.2.0 x x x x x x gmpy2/2.1.2-intel-compilers-2022.1.0 x x x x x x gmpy2/2.1.2-intel-compilers-2021.4.0 x x x x x x gmpy2/2.1.2-GCC-11.3.0 x x x x x x gmpy2/2.1.2-GCC-11.2.0 x x x - x x gmpy2/2.1.0b5-GCC-10.2.0 - x x x x x gmpy2/2.1.0b5-GCC-9.3.0 - x x - x x gmpy2/2.1.0b4-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmsh/", "title": "gmsh", "text": ""}, {"location": "available_software/detail/gmsh/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gmsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gmsh, load one of these modules using a module load command like:

              module load gmsh/4.5.6-intel-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gmsh/4.5.6-intel-2019b-Python-2.7.16 - x x - x x gmsh/4.5.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/gnuplot/", "title": "gnuplot", "text": ""}, {"location": "available_software/detail/gnuplot/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gnuplot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gnuplot, load one of these modules using a module load command like:

              module load gnuplot/5.4.8-GCCcore-12.3.0\n
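
              gnuplot can also be driven non-interactively, which is convenient on a cluster where no graphical display is available. A small sketch that only relies on gnuplot's built-in dumb (text) terminal:

              # render a quick plot as ASCII art in the terminal
              gnuplot -e "set terminal dumb; plot sin(x)"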

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x gnuplot/5.4.4-GCCcore-11.3.0 x x x x x x gnuplot/5.4.2-GCCcore-11.2.0 x x x x x x gnuplot/5.4.2-GCCcore-10.3.0 x x x x x x gnuplot/5.4.1-GCCcore-10.2.0 x x x x x x gnuplot/5.2.8-GCCcore-9.3.0 - x x - x x gnuplot/5.2.8-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/goalign/", "title": "goalign", "text": ""}, {"location": "available_software/detail/goalign/#available-modules", "title": "Available modules", "text": "

              The overview below shows which goalign installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using goalign, load one of these modules using a module load command like:

              module load goalign/0.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty goalign/0.3.2 - - x - x -"}, {"location": "available_software/detail/gobff/", "title": "gobff", "text": ""}, {"location": "available_software/detail/gobff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gobff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gobff, load one of these modules using a module load command like:

              module load gobff/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gobff/2020b - x - - - -"}, {"location": "available_software/detail/gomkl/", "title": "gomkl", "text": ""}, {"location": "available_software/detail/gomkl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gomkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gomkl, load one of these modules using a module load command like:

              module load gomkl/2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gomkl/2023a x x x x x x gomkl/2021a x x x x x x gomkl/2020a - x x x x x"}, {"location": "available_software/detail/gompi/", "title": "gompi", "text": ""}, {"location": "available_software/detail/gompi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gompi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gompi, load one of these modules using a module load command like:

              module load gompi/2023b\n
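
              gompi is a toolchain module rather than a single application: loading it brings in a matching GCC compiler and Open MPI. A quick way to see exactly what it pulled in (a generic sketch):

              module load gompi/2023b

              # list the components that were loaded along with the toolchain
              module list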

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gompi/2023b x x x x x x gompi/2023a x x x x x x gompi/2022b x x x x x x gompi/2022a x x x x x x gompi/2021b x x x x x x gompi/2021a x x x x x x gompi/2020b x x x x x x gompi/2020a - x x x x x gompi/2019b x x x x x x"}, {"location": "available_software/detail/gompic/", "title": "gompic", "text": ""}, {"location": "available_software/detail/gompic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gompic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gompic, load one of these modules using a module load command like:

              module load gompic/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gompic/2020b x x - - x x"}, {"location": "available_software/detail/googletest/", "title": "googletest", "text": ""}, {"location": "available_software/detail/googletest/#available-modules", "title": "Available modules", "text": "

              The overview below shows which googletest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using googletest, load one of these modules using a module load command like:

              module load googletest/1.13.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty googletest/1.13.0-GCCcore-12.3.0 x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x googletest/1.11.0-GCCcore-11.3.0 x x x x x x googletest/1.11.0-GCCcore-11.2.0 x x x - x x googletest/1.10.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gotree/", "title": "gotree", "text": ""}, {"location": "available_software/detail/gotree/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gotree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gotree, load one of these modules using a module load command like:

              module load gotree/0.4.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gotree/0.4.0 - - x - x -"}, {"location": "available_software/detail/gperf/", "title": "gperf", "text": ""}, {"location": "available_software/detail/gperf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gperf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gperf, load one of these modules using a module load command like:

              module load gperf/3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gperf/3.1-GCCcore-12.3.0 x x x x x x gperf/3.1-GCCcore-12.2.0 x x x x x x gperf/3.1-GCCcore-11.3.0 x x x x x x gperf/3.1-GCCcore-11.2.0 x x x x x x gperf/3.1-GCCcore-10.3.0 x x x x x x gperf/3.1-GCCcore-10.2.0 x x x x x x gperf/3.1-GCCcore-9.3.0 x x x x x x gperf/3.1-GCCcore-8.3.0 x x x - x x gperf/3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/gperftools/", "title": "gperftools", "text": ""}, {"location": "available_software/detail/gperftools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gperftools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gperftools, load one of these modules using a module load command like:

              module load gperftools/2.14-GCCcore-12.2.0\n
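
              One common use of gperftools is preloading its tcmalloc allocator into an existing program. The sketch below assumes the module sets the usual EasyBuild root variable $EBROOTGPERFTOOLS and that libtcmalloc.so is installed under its lib directory; my_application is a placeholder:

              module load gperftools/2.14-GCCcore-12.2.0

              # run an unmodified binary with tcmalloc instead of the default malloc
              LD_PRELOAD=$EBROOTGPERFTOOLS/lib/libtcmalloc.so ./my_application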

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gperftools/2.14-GCCcore-12.2.0 x x x x x x gperftools/2.10-GCCcore-11.3.0 x x x x x x gperftools/2.9.1-GCCcore-10.3.0 x x x - x x gperftools/2.7.90-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gpustat/", "title": "gpustat", "text": ""}, {"location": "available_software/detail/gpustat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gpustat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gpustat, load one of these modules using a module load command like:

              module load gpustat/0.6.0-gcccuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gpustat/0.6.0-gcccuda-2020b - - - - x -"}, {"location": "available_software/detail/graphite2/", "title": "graphite2", "text": ""}, {"location": "available_software/detail/graphite2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which graphite2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using graphite2, load one of these modules using a module load command like:

              module load graphite2/1.3.14-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty graphite2/1.3.14-GCCcore-12.3.0 x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x graphite2/1.3.14-GCCcore-11.3.0 x x x x x x graphite2/1.3.14-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/graphviz-python/", "title": "graphviz-python", "text": ""}, {"location": "available_software/detail/graphviz-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which graphviz-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using graphviz-python, load one of these modules using a module load command like:

              module load graphviz-python/0.20.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty graphviz-python/0.20.1-GCCcore-12.3.0 x x x x x x graphviz-python/0.20.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/grid/", "title": "grid", "text": ""}, {"location": "available_software/detail/grid/#available-modules", "title": "Available modules", "text": "

              The overview below shows which grid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using grid, load one of these modules using a module load command like:

              module load grid/20220610-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty grid/20220610-intel-2022a x x x x x x"}, {"location": "available_software/detail/groff/", "title": "groff", "text": ""}, {"location": "available_software/detail/groff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which groff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using groff, load one of these modules using a module load command like:

              module load groff/1.22.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty groff/1.22.4-GCCcore-12.3.0 x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x groff/1.22.4-GCCcore-11.3.0 x x x x x x groff/1.22.4-GCCcore-11.2.0 x x x x x x groff/1.22.4-GCCcore-10.3.0 x x x x x x groff/1.22.4-GCCcore-10.2.0 x x x x x x groff/1.22.4-GCCcore-9.3.0 x x x x x x groff/1.22.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/gzip/", "title": "gzip", "text": ""}, {"location": "available_software/detail/gzip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which gzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using gzip, load one of these modules using a module load command like:

              module load gzip/1.13-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty gzip/1.13-GCCcore-13.2.0 x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x gzip/1.12-GCCcore-11.3.0 x x x x x x gzip/1.10-GCCcore-11.2.0 x x x x x x gzip/1.10-GCCcore-10.3.0 x x x x x x gzip/1.10-GCCcore-10.2.0 x x x x x x gzip/1.10-GCCcore-9.3.0 - x x x x x gzip/1.10-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/h5netcdf/", "title": "h5netcdf", "text": ""}, {"location": "available_software/detail/h5netcdf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which h5netcdf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using h5netcdf, load one of these modules using a module load command like:

              module load h5netcdf/1.2.0-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty h5netcdf/1.2.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/h5py/", "title": "h5py", "text": ""}, {"location": "available_software/detail/h5py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which h5py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using h5py, load one of these modules using a module load command like:

              module load h5py/3.9.0-foss-2023a\n
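
              h5py is a Python package: loading the module makes it importable in the Python interpreter that belongs to the same toolchain. A quick sanity check (a sketch, not tied to any particular dataset):

              module load h5py/3.9.0-foss-2023a

              # confirm the package can be imported and print its version
              python -c "import h5py; print(h5py.__version__)"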

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty h5py/3.9.0-foss-2023a x x x x x x h5py/3.8.0-foss-2022b x x x x x x h5py/3.7.0-intel-2022a x x x x x x h5py/3.7.0-foss-2022a x x x x x x h5py/3.6.0-intel-2021b x x x - x x h5py/3.6.0-foss-2021b x x x x x x h5py/3.2.1-gomkl-2021a x x x - x x h5py/3.2.1-foss-2021a x x x x x x h5py/3.1.0-intel-2020b - x x - x x h5py/3.1.0-fosscuda-2020b x - - - x - h5py/3.1.0-foss-2020b x x x x x x h5py/2.10.0-intel-2020a-Python-3.8.2 x x x x x x h5py/2.10.0-intel-2020a-Python-2.7.18 - x x - x x h5py/2.10.0-intel-2019b-Python-3.7.4 - x x - x x h5py/2.10.0-foss-2020a-Python-3.8.2 - x x - x x h5py/2.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/harmony/", "title": "harmony", "text": ""}, {"location": "available_software/detail/harmony/#available-modules", "title": "Available modules", "text": "

              The overview below shows which harmony installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using harmony, load one of these modules using a module load command like:

              module load harmony/1.0.0-20200224-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty harmony/1.0.0-20200224-foss-2020a-R-4.0.0 - x x - x x harmony/0.1.0-20210528-foss-2020b-R-4.0.3 - x x - x x"}, {"location": "available_software/detail/hatchling/", "title": "hatchling", "text": ""}, {"location": "available_software/detail/hatchling/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hatchling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hatchling, load one of these modules using a module load command like:

              module load hatchling/1.18.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hatchling/1.18.0-GCCcore-13.2.0 x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/help2man/", "title": "help2man", "text": ""}, {"location": "available_software/detail/help2man/#available-modules", "title": "Available modules", "text": "

              The overview below shows which help2man installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using help2man, load one of these modules using a module load command like:

              module load help2man/1.49.3-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty help2man/1.49.3-GCCcore-13.2.0 x x x x x x help2man/1.49.3-GCCcore-12.3.0 x x x x x x help2man/1.49.2-GCCcore-12.2.0 x x x x x x help2man/1.49.2-GCCcore-11.3.0 x x x x x x help2man/1.48.3-GCCcore-11.2.0 x x x x x x help2man/1.48.3-GCCcore-10.3.0 x x x x x x help2man/1.47.16-GCCcore-10.2.0 x x x x x x help2man/1.47.12-GCCcore-9.3.0 x x x x x x help2man/1.47.8-GCCcore-8.3.0 x x x x x x help2man/1.47.7-GCCcore-8.2.0 - x - - - - help2man/1.47.4 - x - - - -"}, {"location": "available_software/detail/hierfstat/", "title": "hierfstat", "text": ""}, {"location": "available_software/detail/hierfstat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hierfstat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hierfstat, load one of these modules using a module load command like:

              module load hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/hifiasm/", "title": "hifiasm", "text": ""}, {"location": "available_software/detail/hifiasm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hifiasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hifiasm, load one of these modules using a module load command like:

              module load hifiasm/0.19.7-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hifiasm/0.19.7-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/hiredis/", "title": "hiredis", "text": ""}, {"location": "available_software/detail/hiredis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hiredis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hiredis, load one of these modules using a module load command like:

              module load hiredis/1.0.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hiredis/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/histolab/", "title": "histolab", "text": ""}, {"location": "available_software/detail/histolab/#available-modules", "title": "Available modules", "text": "

              The overview below shows which histolab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using histolab, load one of these modules using a module load command like:

              module load histolab/0.4.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty histolab/0.4.1-foss-2021b x x x - x x histolab/0.4.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/hmmlearn/", "title": "hmmlearn", "text": ""}, {"location": "available_software/detail/hmmlearn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hmmlearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hmmlearn, load one of these modules using a module load command like:

              module load hmmlearn/0.3.0-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hmmlearn/0.3.0-gfbf-2023a x x x x x x hmmlearn/0.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/horton/", "title": "horton", "text": ""}, {"location": "available_software/detail/horton/#available-modules", "title": "Available modules", "text": "

              The overview below shows which horton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using horton, load one of these modules using a module load command like:

              module load horton/2.1.1-intel-2020a-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty horton/2.1.1-intel-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/how_are_we_stranded_here/", "title": "how_are_we_stranded_here", "text": ""}, {"location": "available_software/detail/how_are_we_stranded_here/#available-modules", "title": "Available modules", "text": "

              The overview below shows which how_are_we_stranded_here installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using how_are_we_stranded_here, load one of these modules using a module load command like:

              module load how_are_we_stranded_here/1.0.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty how_are_we_stranded_here/1.0.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/humann/", "title": "humann", "text": ""}, {"location": "available_software/detail/humann/#available-modules", "title": "Available modules", "text": "

              The overview below shows which humann installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using humann, load one of these modules using a module load command like:

              module load humann/3.6-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty humann/3.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/hunspell/", "title": "hunspell", "text": ""}, {"location": "available_software/detail/hunspell/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hunspell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hunspell, load one of these modules using a module load command like:

              module load hunspell/1.7.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hunspell/1.7.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/hwloc/", "title": "hwloc", "text": ""}, {"location": "available_software/detail/hwloc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hwloc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hwloc, load one of these modules using a module load command like:

              module load hwloc/2.9.2-GCCcore-13.2.0\n
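
              Besides the library itself, the hwloc module ships command-line tools that are useful for inspecting the hardware of the node you are logged in on. A small sketch:

              module load hwloc/2.9.2-GCCcore-13.2.0

              # print the node topology (packages, cores, caches, NUMA nodes) as plain text
              hwloc-ls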

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hwloc/2.9.2-GCCcore-13.2.0 x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x hwloc/2.7.1-GCCcore-11.3.0 x x x x x x hwloc/2.5.0-GCCcore-11.2.0 x x x x x x hwloc/2.4.1-GCCcore-10.3.0 x x x x x x hwloc/2.2.0-GCCcore-10.2.0 x x x x x x hwloc/2.2.0-GCCcore-9.3.0 x x x x x x hwloc/1.11.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/hyperopt/", "title": "hyperopt", "text": ""}, {"location": "available_software/detail/hyperopt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hyperopt, load one of these modules using a module load command like:

              module load hyperopt/0.2.5-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hyperopt/0.2.5-fosscuda-2020b - - - - x - hyperopt/0.2.4-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/hypothesis/", "title": "hypothesis", "text": ""}, {"location": "available_software/detail/hypothesis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which hypothesis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using hypothesis, load one of these modules using a module load command like:

              module load hypothesis/6.90.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x hypothesis/6.46.7-GCCcore-11.3.0 x x x x x x hypothesis/6.14.6-GCCcore-11.2.0 x x x x x x hypothesis/6.13.1-GCCcore-10.3.0 x x x x x x hypothesis/5.41.5-GCCcore-10.2.0 x x x x x x hypothesis/5.41.2-GCCcore-10.2.0 x x x x x x hypothesis/4.57.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x hypothesis/4.44.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/iccifort/", "title": "iccifort", "text": ""}, {"location": "available_software/detail/iccifort/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iccifort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using iccifort, load one of these modules using a module load command like:

              module load iccifort/2020.4.304\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iccifort/2020.4.304 x x x x x x iccifort/2020.1.217 x x x x x x iccifort/2019.5.281 - x x - x x"}, {"location": "available_software/detail/iccifortcuda/", "title": "iccifortcuda", "text": ""}, {"location": "available_software/detail/iccifortcuda/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iccifortcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using iccifortcuda, load one of these modules using a module load command like:

              module load iccifortcuda/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iccifortcuda/2020b - - - - x - iccifortcuda/2020a - - - - x - iccifortcuda/2019b - - - - x -"}, {"location": "available_software/detail/ichorCNA/", "title": "ichorCNA", "text": ""}, {"location": "available_software/detail/ichorCNA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ichorCNA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using ichorCNA, load one of these modules using a module load command like:

              module load ichorCNA/0.3.2-20191219-foss-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ichorCNA/0.3.2-20191219-foss-2020a - x x - x x"}, {"location": "available_software/detail/idemux/", "title": "idemux", "text": ""}, {"location": "available_software/detail/idemux/#available-modules", "title": "Available modules", "text": "

              The overview below shows which idemux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using idemux, load one of these modules using a module load command like:

              module load idemux/0.1.6-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty idemux/0.1.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/igraph/", "title": "igraph", "text": ""}, {"location": "available_software/detail/igraph/#available-modules", "title": "Available modules", "text": "

              The overview below shows which igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using igraph, load one of these modules using a module load command like:

              module load igraph/0.10.10-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty igraph/0.10.10-foss-2023a x x x x x x igraph/0.10.3-foss-2022a x x x x x x igraph/0.9.5-foss-2021b x x x x x x igraph/0.9.4-foss-2021a x x x x x x igraph/0.9.1-fosscuda-2020b - - - - x - igraph/0.9.1-foss-2020b - x x x x x igraph/0.8.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/igvShiny/", "title": "igvShiny", "text": ""}, {"location": "available_software/detail/igvShiny/#available-modules", "title": "Available modules", "text": "

              The overview below shows which igvShiny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using igvShiny, load one of these modules using a module load command like:

              module load igvShiny/20240112-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty igvShiny/20240112-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/iibff/", "title": "iibff", "text": ""}, {"location": "available_software/detail/iibff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iibff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using iibff, load one of these modules using a module load command like:

              module load iibff/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iibff/2020b - x - - - -"}, {"location": "available_software/detail/iimpi/", "title": "iimpi", "text": ""}, {"location": "available_software/detail/iimpi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iimpi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using iimpi, load one of these modules using a module load command like:

              module load iimpi/2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iimpi/2023a x x x x x x iimpi/2022b x x x x x x iimpi/2022a x x x x x x iimpi/2021b x x x x x x iimpi/2021a - x x - x x iimpi/2020b x x x x x x iimpi/2020a x x x x x x iimpi/2019b - x x - x x"}, {"location": "available_software/detail/iimpic/", "title": "iimpic", "text": ""}, {"location": "available_software/detail/iimpic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iimpic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using iimpic, load one of these modules using a module load command like:

              module load iimpic/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iimpic/2020b - - - - x - iimpic/2020a - - - - x - iimpic/2019b - - - - x -"}, {"location": "available_software/detail/imagecodecs/", "title": "imagecodecs", "text": ""}, {"location": "available_software/detail/imagecodecs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which imagecodecs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using imagecodecs, load one of these modules using a module load command like:

              module load imagecodecs/2022.9.26-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty imagecodecs/2022.9.26-foss-2022a x x x x x x"}, {"location": "available_software/detail/imageio/", "title": "imageio", "text": ""}, {"location": "available_software/detail/imageio/#available-modules", "title": "Available modules", "text": "

              The overview below shows which imageio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using imageio, load one of these modules using a module load command like:

              module load imageio/2.22.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty imageio/2.22.2-foss-2022a x x x x x x imageio/2.13.5-foss-2021b x x x x x x imageio/2.10.5-foss-2021a x x x - x x imageio/2.9.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/imbalanced-learn/", "title": "imbalanced-learn", "text": ""}, {"location": "available_software/detail/imbalanced-learn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which imbalanced-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using imbalanced-learn, load one of these modules using a module load command like:

              module load imbalanced-learn/0.10.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty imbalanced-learn/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/imgaug/", "title": "imgaug", "text": ""}, {"location": "available_software/detail/imgaug/#available-modules", "title": "Available modules", "text": "

              The overview below shows which imgaug installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using imgaug, load one of these modules using a module load command like:

              module load imgaug/0.4.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty imgaug/0.4.0-foss-2021b x x x - x x imgaug/0.4.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/imkl-FFTW/", "title": "imkl-FFTW", "text": ""}, {"location": "available_software/detail/imkl-FFTW/#available-modules", "title": "Available modules", "text": "

              The overview below shows which imkl-FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using imkl-FFTW, load one of these modules using a module load command like:

              module load imkl-FFTW/2023.1.0-iimpi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty imkl-FFTW/2023.1.0-iimpi-2023a x x x x x x imkl-FFTW/2022.2.1-iimpi-2022b x x x x x x imkl-FFTW/2022.1.0-iimpi-2022a x x x x x x imkl-FFTW/2021.4.0-iimpi-2021b x x x x x x"}, {"location": "available_software/detail/imkl/", "title": "imkl", "text": ""}, {"location": "available_software/detail/imkl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which imkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using imkl, load one of these modules using a module load command like:

              module load imkl/2023.1.0-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty imkl/2023.1.0-gompi-2023a - - x - x x imkl/2023.1.0 x x x x x x imkl/2022.2.1 x x x x x x imkl/2022.1.0 x x x x x x imkl/2021.4.0 x x x x x x imkl/2021.2.0-iompi-2021a x x x x x x imkl/2021.2.0-iimpi-2021a - x x - x x imkl/2021.2.0-gompi-2021a x - x - x x imkl/2020.4.304-iompi-2020b x - x x x x imkl/2020.4.304-iimpic-2020b - - - - x - imkl/2020.4.304-iimpi-2020b - - x x x x imkl/2020.4.304-NVHPC-21.2 - - x - x - imkl/2020.1.217-iimpic-2020a - - - - x - imkl/2020.1.217-iimpi-2020a x - x - x x imkl/2020.1.217-gompi-2020a - - x - x x imkl/2020.0.166-iompi-2020a - x - - - - imkl/2020.0.166-iimpi-2020b x x - x - - imkl/2020.0.166-iimpi-2020a - x - - - - imkl/2020.0.166-gompi-2023a x x - x - - imkl/2020.0.166-gompi-2020a - x - - - - imkl/2019.5.281-iimpic-2019b - - - - x - imkl/2019.5.281-iimpi-2019b - x x - x x imkl/2018.4.274-iompi-2020b - x - x - - imkl/2018.4.274-iompi-2020a - x - - - - imkl/2018.4.274-iimpi-2020b - x - x - - imkl/2018.4.274-iimpi-2020a x x - x - - imkl/2018.4.274-iimpi-2019b - x - - - - imkl/2018.4.274-gompi-2021a - x - x - - imkl/2018.4.274-gompi-2020a - x - x - - imkl/2018.4.274-NVHPC-21.2 x - - - - -"}, {"location": "available_software/detail/impi/", "title": "impi", "text": ""}, {"location": "available_software/detail/impi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which impi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using impi, load one of these modules using a module load command like:

              module load impi/2021.9.0-intel-compilers-2023.1.0\n
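
              impi provides the Intel MPI library together with its runtime commands. A minimal check that the MPI runtime found in your environment is indeed the version you loaded (a sketch):

              module load impi/2021.9.0-intel-compilers-2023.1.0

              # the reported Intel MPI version should match the loaded module
              mpirun --version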

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty impi/2021.9.0-intel-compilers-2023.1.0 x x x x x x impi/2021.7.1-intel-compilers-2022.2.1 x x x x x x impi/2021.6.0-intel-compilers-2022.1.0 x x x x x x impi/2021.4.0-intel-compilers-2021.4.0 x x x x x x impi/2021.2.0-intel-compilers-2021.2.0 - x x - x x impi/2019.9.304-iccifortcuda-2020b - - - - x - impi/2019.9.304-iccifort-2020.4.304 x x x x x x impi/2019.9.304-iccifort-2020.1.217 x x x x x x impi/2019.9.304-iccifort-2019.5.281 - x x - x x impi/2019.7.217-iccifortcuda-2020a - - - - x - impi/2019.7.217-iccifort-2020.1.217 - x x - x x impi/2019.7.217-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/imutils/", "title": "imutils", "text": ""}, {"location": "available_software/detail/imutils/#available-modules", "title": "Available modules", "text": "

              The overview below shows which imutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using imutils, load one of these modules using a module load command like:

              module load imutils/0.5.4-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty imutils/0.5.4-fosscuda-2020b x - - - x - imutils/0.5.4-foss-2022a-CUDA-11.7.0 x - x - x -"}, {"location": "available_software/detail/inferCNV/", "title": "inferCNV", "text": ""}, {"location": "available_software/detail/inferCNV/#available-modules", "title": "Available modules", "text": "

              The overview below shows which inferCNV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using inferCNV, load one of these modules using a module load command like:

              module load inferCNV/1.12.0-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty inferCNV/1.12.0-foss-2022a-R-4.2.1 x x x x x x inferCNV/1.12.0-foss-2021b-R-4.2.0 x x x - x x inferCNV/1.3.3-foss-2020b x x x x x x"}, {"location": "available_software/detail/infercnvpy/", "title": "infercnvpy", "text": ""}, {"location": "available_software/detail/infercnvpy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which infercnvpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using infercnvpy, load one of these modules using a module load command like:

              module load infercnvpy/0.4.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty infercnvpy/0.4.2-foss-2022a x x x x x x infercnvpy/0.4.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/inflection/", "title": "inflection", "text": ""}, {"location": "available_software/detail/inflection/#available-modules", "title": "Available modules", "text": "

              The overview below shows which inflection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using inflection, load one of these modules using a module load command like:

              module load inflection/1.3.5-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty inflection/1.3.5-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/intel-compilers/", "title": "intel-compilers", "text": ""}, {"location": "available_software/detail/intel-compilers/#available-modules", "title": "Available modules", "text": "

              The overview below shows which intel-compilers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using intel-compilers, load one of these modules using a module load command like:

              module load intel-compilers/2023.1.0\n
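
              The intel-compilers module provides the Intel oneAPI C/C++ and Fortran compilers. A quick check that they are on your PATH after loading (a sketch; the exact set of compiler drivers can differ between oneAPI releases):

              module load intel-compilers/2023.1.0

              # oneAPI LLVM-based C/C++ and Fortran drivers
              icx --version
              ifx --version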

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty intel-compilers/2023.1.0 x x x x x x intel-compilers/2022.2.1 x x x x x x intel-compilers/2022.1.0 x x x x x x intel-compilers/2021.4.0 x x x x x x intel-compilers/2021.2.0 x x x x x x"}, {"location": "available_software/detail/intel/", "title": "intel", "text": ""}, {"location": "available_software/detail/intel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which intel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using intel, load one of these modules using a module load command like:

              module load intel/2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty intel/2023a x x x x x x intel/2022b x x x x x x intel/2022a x x x x x x intel/2021b x x x x x x intel/2021a - x x - x x intel/2020b - x x x x x intel/2020a x x x x x x intel/2019b - x x - x x"}, {"location": "available_software/detail/intelcuda/", "title": "intelcuda", "text": ""}, {"location": "available_software/detail/intelcuda/#available-modules", "title": "Available modules", "text": "

              The overview below shows which intelcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using intelcuda, load one of these modules using a module load command like:

              module load intelcuda/2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty intelcuda/2020b - - - - x - intelcuda/2020a - - - - x - intelcuda/2019b - - - - x -"}, {"location": "available_software/detail/intervaltree-python/", "title": "intervaltree-python", "text": ""}, {"location": "available_software/detail/intervaltree-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which intervaltree-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using intervaltree-python, load one of these modules using a module load command like:

              module load intervaltree-python/3.1.0-GCCcore-11.3.0\n
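
              Illustration only: a quick check that the intervaltree Python package provided by this module imports and works; the interval bounds are arbitrary example values.

              module load intervaltree-python/3.1.0-GCCcore-11.3.0
              python -c "from intervaltree import IntervalTree; t = IntervalTree(); t[1:5] = 'a'; print(t[3])"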

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty intervaltree-python/3.1.0-GCCcore-11.3.0 x x x x x x intervaltree-python/3.1.0-GCCcore-11.2.0 x x x - x x intervaltree-python/3.1.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/intervaltree/", "title": "intervaltree", "text": ""}, {"location": "available_software/detail/intervaltree/#available-modules", "title": "Available modules", "text": "

              The overview below shows which intervaltree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using intervaltree, load one of these modules using a module load command like:

              module load intervaltree/0.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty intervaltree/0.1-GCCcore-11.3.0 x x x x x x intervaltree/0.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/intltool/", "title": "intltool", "text": ""}, {"location": "available_software/detail/intltool/#available-modules", "title": "Available modules", "text": "

              The overview below shows which intltool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using intltool, load one of these modules using a module load command like:

              module load intltool/0.51.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty intltool/0.51.0-GCCcore-12.3.0 x x x x x x intltool/0.51.0-GCCcore-12.2.0 x x x x x x intltool/0.51.0-GCCcore-11.3.0 x x x x x x intltool/0.51.0-GCCcore-11.2.0 x x x x x x intltool/0.51.0-GCCcore-10.3.0 x x x x x x intltool/0.51.0-GCCcore-10.2.0 x x x x x x intltool/0.51.0-GCCcore-9.3.0 x x x x x x intltool/0.51.0-GCCcore-8.3.0 x x x - x x intltool/0.51.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/iodata/", "title": "iodata", "text": ""}, {"location": "available_software/detail/iodata/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iodata installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using iodata, load one of these modules using a module load command like:

              module load iodata/1.0.0a2-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iodata/1.0.0a2-intel-2022a x x x x x x"}, {"location": "available_software/detail/iomkl/", "title": "iomkl", "text": ""}, {"location": "available_software/detail/iomkl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iomkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using iomkl, load one of these modules using a module load command like:

              module load iomkl/2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iomkl/2021a x x x x x x iomkl/2020b x x x x x x iomkl/2020a - x - - - -"}, {"location": "available_software/detail/iompi/", "title": "iompi", "text": ""}, {"location": "available_software/detail/iompi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which iompi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using iompi, load one of these modules using a module load command like:

              module load iompi/2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty iompi/2021a x x x x x x iompi/2020b x x x x x x iompi/2020a - x - - - -"}, {"location": "available_software/detail/isoCirc/", "title": "isoCirc", "text": ""}, {"location": "available_software/detail/isoCirc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which isoCirc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using isoCirc, load one of these modules using a module load command like:

              module load isoCirc/1.0.4-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty isoCirc/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/jax/", "title": "jax", "text": ""}, {"location": "available_software/detail/jax/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jax installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jax, load one of these modules using a module load command like:

              module load jax/0.3.25-foss-2022a-CUDA-11.7.0\n
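
              Illustration only: a quick sanity check, inside a GPU job on one of the CUDA-capable clusters marked in the overview below (e.g. accelgor or joltik), that the CUDA build of jax actually sees the GPUs; a sketch, not a definitive recipe.

              module load jax/0.3.25-foss-2022a-CUDA-11.7.0
              python -c "import jax; print(jax.devices())"    # should list GPU devices when run on a GPU node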

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jax/0.3.25-foss-2022a-CUDA-11.7.0 x - - - x - jax/0.3.25-foss-2022a x x x x x x jax/0.3.23-foss-2021b-CUDA-11.4.1 x - - - x - jax/0.3.9-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.3.9-foss-2021a x x x x x x jax/0.2.24-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.2.24-foss-2021a - x x - x x jax/0.2.19-fosscuda-2020b x - - - x - jax/0.2.19-foss-2020b x x x x x x"}, {"location": "available_software/detail/jbigkit/", "title": "jbigkit", "text": ""}, {"location": "available_software/detail/jbigkit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jbigkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jbigkit, load one of these modules using a module load command like:

              module load jbigkit/2.1-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jbigkit/2.1-GCCcore-13.2.0 x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x jbigkit/2.1-GCCcore-11.3.0 x x x x x x jbigkit/2.1-GCCcore-11.2.0 x x x x x x jbigkit/2.1-GCCcore-10.3.0 x x x x x x jbigkit/2.1-GCCcore-10.2.0 x - x x x x jbigkit/2.1-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/jemalloc/", "title": "jemalloc", "text": ""}, {"location": "available_software/detail/jemalloc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jemalloc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jemalloc, load one of these modules using a module load command like:

              module load jemalloc/5.3.0-GCCcore-11.3.0\n
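
              Illustration only: jemalloc is usually picked up by preloading it into an existing binary rather than by explicit linking. A minimal sketch, assuming an executable ./my_application of your own and the $EBROOTJEMALLOC environment variable that EasyBuild-generated modules set.

              module load jemalloc/5.3.0-GCCcore-11.3.0
              LD_PRELOAD="$EBROOTJEMALLOC/lib/libjemalloc.so" ./my_application    # run your program with jemalloc as the allocator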

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jemalloc/5.3.0-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.2.0 x x x x x x jemalloc/5.2.1-GCCcore-10.3.0 x x x - x x jemalloc/5.2.1-GCCcore-10.2.0 - x x x x x jemalloc/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/jobcli/", "title": "jobcli", "text": ""}, {"location": "available_software/detail/jobcli/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jobcli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jobcli, load one of these modules using a module load command like:

              module load jobcli/0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jobcli/0.0 - x - - - -"}, {"location": "available_software/detail/joypy/", "title": "joypy", "text": ""}, {"location": "available_software/detail/joypy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which joypy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using joypy, load one of these modules using a module load command like:

              module load joypy/0.2.4-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty joypy/0.2.4-intel-2020b - x x - x x joypy/0.2.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/json-c/", "title": "json-c", "text": ""}, {"location": "available_software/detail/json-c/#available-modules", "title": "Available modules", "text": "

              The overview below shows which json-c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using json-c, load one of these modules using a module load command like:

              module load json-c/0.16-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty json-c/0.16-GCCcore-12.3.0 x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x json-c/0.15-GCCcore-10.3.0 - x x - x x json-c/0.15-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/jupyter-contrib-nbextensions/", "title": "jupyter-contrib-nbextensions", "text": ""}, {"location": "available_software/detail/jupyter-contrib-nbextensions/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jupyter-contrib-nbextensions installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jupyter-contrib-nbextensions, load one of these modules using a module load command like:

              module load jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server-proxy/", "title": "jupyter-server-proxy", "text": ""}, {"location": "available_software/detail/jupyter-server-proxy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jupyter-server-proxy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jupyter-server-proxy, load one of these modules using a module load command like:

              module load jupyter-server-proxy/3.2.2-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jupyter-server-proxy/3.2.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server/", "title": "jupyter-server", "text": ""}, {"location": "available_software/detail/jupyter-server/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jupyter-server, load one of these modules using a module load command like:

              module load jupyter-server/2.7.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x jupyter-server/2.7.0-GCCcore-12.2.0 x x x x x x jupyter-server/1.21.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jxrlib/", "title": "jxrlib", "text": ""}, {"location": "available_software/detail/jxrlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which jxrlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using jxrlib, load one of these modules using a module load command like:

              module load jxrlib/1.1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty jxrlib/1.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/kallisto/", "title": "kallisto", "text": ""}, {"location": "available_software/detail/kallisto/#available-modules", "title": "Available modules", "text": "

              The overview below shows which kallisto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using kallisto, load one of these modules using a module load command like:

              module load kallisto/0.48.0-gompi-2022a\n
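
              Illustration only: a minimal index-and-quantify sketch; transcripts.fa.gz and the two FASTQ files are placeholders for your own data, and the thread count should match the cores requested in your job.

              module load kallisto/0.48.0-gompi-2022a
              kallisto index -i transcripts.idx transcripts.fa.gz
              kallisto quant -i transcripts.idx -o kallisto_out -t 4 reads_1.fastq.gz reads_2.fastq.gz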

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty kallisto/0.48.0-gompi-2022a x x x x x x kallisto/0.46.1-intel-2020a - x - - - - kallisto/0.46.1-iimpi-2020b - x x x x x kallisto/0.46.1-iimpi-2020a - x x - x x kallisto/0.46.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/kb-python/", "title": "kb-python", "text": ""}, {"location": "available_software/detail/kb-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which kb-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using kb-python, load one of these modules using a module load command like:

              module load kb-python/0.27.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty kb-python/0.27.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/kim-api/", "title": "kim-api", "text": ""}, {"location": "available_software/detail/kim-api/#available-modules", "title": "Available modules", "text": "

              The overview below shows which kim-api installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using kim-api, load one of these modules using a module load command like:

              module load kim-api/2.3.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty kim-api/2.3.0-GCCcore-11.2.0 x x x - x x kim-api/2.2.1-GCCcore-10.3.0 - x x - x x kim-api/2.1.3-intel-2020a - x x - x x kim-api/2.1.3-intel-2019b - x x - x x kim-api/2.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/kineto/", "title": "kineto", "text": ""}, {"location": "available_software/detail/kineto/#available-modules", "title": "Available modules", "text": "

              The overview below shows which kineto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using kineto, load one of these modules using a module load command like:

              module load kineto/0.4.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty kineto/0.4.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/kma/", "title": "kma", "text": ""}, {"location": "available_software/detail/kma/#available-modules", "title": "Available modules", "text": "

              The overview below shows which kma installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using kma, load one of these modules using a module load command like:

              module load kma/1.2.22-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty kma/1.2.22-intel-2019b - x x - x x"}, {"location": "available_software/detail/kneaddata/", "title": "kneaddata", "text": ""}, {"location": "available_software/detail/kneaddata/#available-modules", "title": "Available modules", "text": "

              The overview below shows which kneaddata installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using kneaddata, load one of these modules using a module load command like:

              module load kneaddata/0.12.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty kneaddata/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/krbalancing/", "title": "krbalancing", "text": ""}, {"location": "available_software/detail/krbalancing/#available-modules", "title": "Available modules", "text": "

              The overview below shows which krbalancing installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using krbalancing, load one of these modules using a module load command like:

              module load krbalancing/0.5.0b0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty krbalancing/0.5.0b0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/lancet/", "title": "lancet", "text": ""}, {"location": "available_software/detail/lancet/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lancet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using lancet, load one of these modules using a module load command like:

              module load lancet/1.1.0-iccifort-2019.5.281\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lancet/1.1.0-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/lavaan/", "title": "lavaan", "text": ""}, {"location": "available_software/detail/lavaan/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lavaan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using lavaan, load one of these modules using a module load command like:

              module load lavaan/0.6-9-foss-2021a-R-4.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lavaan/0.6-9-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/leafcutter/", "title": "leafcutter", "text": ""}, {"location": "available_software/detail/leafcutter/#available-modules", "title": "Available modules", "text": "

              The overview below shows which leafcutter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using leafcutter, load one of these modules using a module load command like:

              module load leafcutter/0.2.9-foss-2022b-R-4.2.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty leafcutter/0.2.9-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/legacy-job-wrappers/", "title": "legacy-job-wrappers", "text": ""}, {"location": "available_software/detail/legacy-job-wrappers/#available-modules", "title": "Available modules", "text": "

              The overview below shows which legacy-job-wrappers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using legacy-job-wrappers, load one of these modules using a module load command like:

              module load legacy-job-wrappers/0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty legacy-job-wrappers/0.0 - x x - x -"}, {"location": "available_software/detail/leidenalg/", "title": "leidenalg", "text": ""}, {"location": "available_software/detail/leidenalg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which leidenalg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using leidenalg, load one of these modules using a module load command like:

              module load leidenalg/0.10.2-foss-2023a\n
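
              Illustration only: a one-liner running Leiden community detection on a small built-in graph, assuming the module also brings in python-igraph as a dependency (as these installations normally do).

              module load leidenalg/0.10.2-foss-2023a
              python -c "import igraph as ig, leidenalg as la; g = ig.Graph.Famous('Zachary'); print(la.find_partition(g, la.ModularityVertexPartition))"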

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty leidenalg/0.10.2-foss-2023a x x x x x x leidenalg/0.9.1-foss-2022a x x x x x x leidenalg/0.8.8-foss-2021b x x x x x x leidenalg/0.8.7-foss-2021a x x x x x x leidenalg/0.8.3-fosscuda-2020b - - - - x - leidenalg/0.8.3-foss-2020b - x x x x x leidenalg/0.8.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/lftp/", "title": "lftp", "text": ""}, {"location": "available_software/detail/lftp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lftp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using lftp, load one of these modules using a module load command like:

              module load lftp/4.9.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lftp/4.9.2-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/libBigWig/", "title": "libBigWig", "text": ""}, {"location": "available_software/detail/libBigWig/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libBigWig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libBigWig, load one of these modules using a module load command like:

              module load libBigWig/0.4.4-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libBigWig/0.4.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libFLAME/", "title": "libFLAME", "text": ""}, {"location": "available_software/detail/libFLAME/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libFLAME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libFLAME, load one of these modules using a module load command like:

              module load libFLAME/5.2.0-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libFLAME/5.2.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/libGLU/", "title": "libGLU", "text": ""}, {"location": "available_software/detail/libGLU/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libGLU installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libGLU, load one of these modules using a module load command like:

              module load libGLU/9.0.3-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libGLU/9.0.3-GCCcore-12.3.0 x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x libGLU/9.0.2-GCCcore-11.3.0 x x x x x x libGLU/9.0.2-GCCcore-11.2.0 x x x x x x libGLU/9.0.1-GCCcore-10.3.0 x x x x x x libGLU/9.0.1-GCCcore-10.2.0 x x x x x x libGLU/9.0.1-GCCcore-9.3.0 - x x - x x libGLU/9.0.1-GCCcore-8.3.0 x x x - x x libGLU/9.0.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libRmath/", "title": "libRmath", "text": ""}, {"location": "available_software/detail/libRmath/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libRmath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libRmath, load one of these modules using a module load command like:

              module load libRmath/4.1.0-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libRmath/4.1.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libaec/", "title": "libaec", "text": ""}, {"location": "available_software/detail/libaec/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libaec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libaec, load one of these modules using a module load command like:

              module load libaec/1.0.6-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libaec/1.0.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libaio/", "title": "libaio", "text": ""}, {"location": "available_software/detail/libaio/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libaio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libaio, load one of these modules using a module load command like:

              module load libaio/0.3.113-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libaio/0.3.113-GCCcore-12.3.0 x x x x x x libaio/0.3.112-GCCcore-11.3.0 x x x x x x libaio/0.3.112-GCCcore-11.2.0 x x x x x x libaio/0.3.112-GCCcore-10.3.0 x x x - x x libaio/0.3.112-GCCcore-10.2.0 - x x x x x libaio/0.3.111-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libarchive/", "title": "libarchive", "text": ""}, {"location": "available_software/detail/libarchive/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libarchive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libarchive, load one of these modules using a module load command like:

              module load libarchive/3.7.2-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libarchive/3.7.2-GCCcore-13.2.0 x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x libarchive/3.6.1-GCCcore-11.3.0 x x x x x x libarchive/3.5.1-GCCcore-11.2.0 x x x x x x libarchive/3.5.1-GCCcore-10.3.0 x x x x x x libarchive/3.5.1-GCCcore-8.3.0 x - - - x - libarchive/3.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libavif/", "title": "libavif", "text": ""}, {"location": "available_software/detail/libavif/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libavif installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libavif, load one of these modules using a module load command like:

              module load libavif/0.11.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libavif/0.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libcdms/", "title": "libcdms", "text": ""}, {"location": "available_software/detail/libcdms/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libcdms installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libcdms, load one of these modules using a module load command like:

              module load libcdms/3.1.2-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libcdms/3.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/libcerf/", "title": "libcerf", "text": ""}, {"location": "available_software/detail/libcerf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libcerf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libcerf, load one of these modules using a module load command like:

              module load libcerf/2.3-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libcerf/2.3-GCCcore-12.3.0 x x x x x x libcerf/2.1-GCCcore-11.3.0 x x x x x x libcerf/1.17-GCCcore-11.2.0 x x x x x x libcerf/1.17-GCCcore-10.3.0 x x x x x x libcerf/1.14-GCCcore-10.2.0 x x x x x x libcerf/1.13-GCCcore-9.3.0 - x x - x x libcerf/1.13-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libcint/", "title": "libcint", "text": ""}, {"location": "available_software/detail/libcint/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libcint installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libcint, load one of these modules using a module load command like:

              module load libcint/5.5.0-gfbf-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libcint/5.5.0-gfbf-2022b x x x x x x libcint/5.1.6-foss-2022a - x x x x x libcint/4.4.0-gomkl-2021a x x x - x x libcint/4.4.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/libdap/", "title": "libdap", "text": ""}, {"location": "available_software/detail/libdap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libdap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libdap, load one of these modules using a module load command like:

              module load libdap/3.20.7-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libdap/3.20.7-GCCcore-10.3.0 - x x - x x libdap/3.20.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libde265/", "title": "libde265", "text": ""}, {"location": "available_software/detail/libde265/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libde265 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libde265, load one of these modules using a module load command like:

              module load libde265/1.0.11-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libde265/1.0.11-GCC-11.3.0 x x x x x x libde265/1.0.8-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libdeflate/", "title": "libdeflate", "text": ""}, {"location": "available_software/detail/libdeflate/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libdeflate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libdeflate, load one of these modules using a module load command like:

              module load libdeflate/1.19-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libdeflate/1.19-GCCcore-13.2.0 x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x libdeflate/1.10-GCCcore-11.3.0 x x x x x x libdeflate/1.8-GCCcore-11.2.0 x x x x x x libdeflate/1.7-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libdrm/", "title": "libdrm", "text": ""}, {"location": "available_software/detail/libdrm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libdrm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libdrm, load one of these modules using a module load command like:

              module load libdrm/2.4.115-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libdrm/2.4.115-GCCcore-12.3.0 x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x libdrm/2.4.110-GCCcore-11.3.0 x x x x x x libdrm/2.4.107-GCCcore-11.2.0 x x x x x x libdrm/2.4.106-GCCcore-10.3.0 x x x x x x libdrm/2.4.102-GCCcore-10.2.0 x x x x x x libdrm/2.4.100-GCCcore-9.3.0 - x x - x x libdrm/2.4.99-GCCcore-8.3.0 x x x - x x libdrm/2.4.97-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libdrs/", "title": "libdrs", "text": ""}, {"location": "available_software/detail/libdrs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libdrs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libdrs, load one of these modules using a module load command like:

              module load libdrs/3.1.2-foss-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libdrs/3.1.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/libepoxy/", "title": "libepoxy", "text": ""}, {"location": "available_software/detail/libepoxy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libepoxy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libepoxy, load one of these modules using a module load command like:

              module load libepoxy/1.5.10-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x libepoxy/1.5.10-GCCcore-11.3.0 x x x x x x libepoxy/1.5.8-GCCcore-11.2.0 x x x x x x libepoxy/1.5.8-GCCcore-10.3.0 x x x - x x libepoxy/1.5.4-GCCcore-10.2.0 x x x x x x libepoxy/1.5.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libev/", "title": "libev", "text": ""}, {"location": "available_software/detail/libev/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libev installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libev, load one of these modules using a module load command like:

              module load libev/4.33-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libev/4.33-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libevent/", "title": "libevent", "text": ""}, {"location": "available_software/detail/libevent/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libevent installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libevent, load one of these modules using a module load command like:

              module load libevent/2.1.12-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libevent/2.1.12-GCCcore-13.2.0 x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x libevent/2.1.12-GCCcore-11.3.0 x x x x x x libevent/2.1.12-GCCcore-11.2.0 x x x x x x libevent/2.1.12-GCCcore-10.3.0 x x x x x x libevent/2.1.12-GCCcore-10.2.0 x x x x x x libevent/2.1.12 - x x - x x libevent/2.1.11-GCCcore-9.3.0 x x x x x x libevent/2.1.11-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libfabric/", "title": "libfabric", "text": ""}, {"location": "available_software/detail/libfabric/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libfabric installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libfabric, load one of these modules using a module load command like:

              module load libfabric/1.19.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libfabric/1.19.0-GCCcore-13.2.0 x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x libfabric/1.15.1-GCCcore-11.3.0 x x x x x x libfabric/1.13.2-GCCcore-11.2.0 x x x x x x libfabric/1.12.1-GCCcore-10.3.0 x x x x x x libfabric/1.11.0-GCCcore-10.2.0 x x x x x x libfabric/1.11.0-GCCcore-9.3.0 - x x x x x"}, {"location": "available_software/detail/libffi/", "title": "libffi", "text": ""}, {"location": "available_software/detail/libffi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libffi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libffi, load one of these modules using a module load command like:

              module load libffi/3.4.4-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libffi/3.4.4-GCCcore-13.2.0 x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x libffi/3.4.2-GCCcore-11.3.0 x x x x x x libffi/3.4.2-GCCcore-11.2.0 x x x x x x libffi/3.3-GCCcore-10.3.0 x x x x x x libffi/3.3-GCCcore-10.2.0 x x x x x x libffi/3.3-GCCcore-9.3.0 x x x x x x libffi/3.2.1-GCCcore-8.3.0 x x x x x x libffi/3.2.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgcrypt/", "title": "libgcrypt", "text": ""}, {"location": "available_software/detail/libgcrypt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libgcrypt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libgcrypt, load one of these modules using a module load command like:

              module load libgcrypt/1.9.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libgcrypt/1.9.3-GCCcore-11.2.0 x x x x x x libgcrypt/1.9.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgd/", "title": "libgd", "text": ""}, {"location": "available_software/detail/libgd/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libgd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libgd, load one of these modules using a module load command like:

              module load libgd/2.3.3-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libgd/2.3.3-GCCcore-12.3.0 x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x libgd/2.3.3-GCCcore-11.3.0 x x x x x x libgd/2.3.3-GCCcore-11.2.0 x x x x x x libgd/2.3.1-GCCcore-10.3.0 x x x x x x libgd/2.3.0-GCCcore-10.2.0 x x x x x x libgd/2.3.0-GCCcore-9.3.0 - x x - x x libgd/2.2.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libgeotiff/", "title": "libgeotiff", "text": ""}, {"location": "available_software/detail/libgeotiff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libgeotiff, load one of these modules using a module load command like:

              module load libgeotiff/1.7.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x libgeotiff/1.7.1-GCCcore-11.3.0 x x x x x x libgeotiff/1.7.0-GCCcore-11.2.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.3.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.2.0 - x x x x x libgeotiff/1.5.1-GCCcore-9.3.0 - x x - x x libgeotiff/1.5.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libgit2/", "title": "libgit2", "text": ""}, {"location": "available_software/detail/libgit2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libgit2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libgit2, load one of these modules using a module load command like:

              module load libgit2/1.7.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libgit2/1.7.1-GCCcore-12.3.0 x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x libgit2/1.4.3-GCCcore-11.3.0 x x x x x x libgit2/1.1.1-GCCcore-11.2.0 x x x x x x libgit2/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/libglvnd/", "title": "libglvnd", "text": ""}, {"location": "available_software/detail/libglvnd/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libglvnd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libglvnd, load one of these modules using a module load command like:

              module load libglvnd/1.6.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x libglvnd/1.4.0-GCCcore-11.3.0 x x x x x x libglvnd/1.3.3-GCCcore-11.2.0 x x x x x x libglvnd/1.3.3-GCCcore-10.3.0 x x x x x x libglvnd/1.3.2-GCCcore-10.2.0 x x x x x x libglvnd/1.2.0-GCCcore-9.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgpg-error/", "title": "libgpg-error", "text": ""}, {"location": "available_software/detail/libgpg-error/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libgpg-error installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libgpg-error, load one of these modules using a module load command like:

              module load libgpg-error/1.42-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libgpg-error/1.42-GCCcore-11.2.0 x x x x x x libgpg-error/1.42-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgpuarray/", "title": "libgpuarray", "text": ""}, {"location": "available_software/detail/libgpuarray/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libgpuarray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libgpuarray, load one of these modules using a module load command like:

              module load libgpuarray/0.7.6-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libgpuarray/0.7.6-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/libharu/", "title": "libharu", "text": ""}, {"location": "available_software/detail/libharu/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libharu installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libharu, load one of these modules using a module load command like:

              module load libharu/2.3.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libharu/2.3.0-foss-2021b x x x - x x libharu/2.3.0-GCCcore-10.3.0 - x x - x x libharu/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libheif/", "title": "libheif", "text": ""}, {"location": "available_software/detail/libheif/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libheif installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libheif, load one of these modules using a module load command like:

              module load libheif/1.16.2-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libheif/1.16.2-GCC-11.3.0 x x x x x x libheif/1.12.0-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libiconv/", "title": "libiconv", "text": ""}, {"location": "available_software/detail/libiconv/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libiconv installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libiconv, load one of these modules using a module load command like:

              module load libiconv/1.17-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libiconv/1.17-GCCcore-13.2.0 x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x libiconv/1.17-GCCcore-11.3.0 x x x x x x libiconv/1.16-GCCcore-11.2.0 x x x x x x libiconv/1.16-GCCcore-10.3.0 x x x x x x libiconv/1.16-GCCcore-10.2.0 x x x x x x libiconv/1.16-GCCcore-9.3.0 x x x x x x libiconv/1.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libidn/", "title": "libidn", "text": ""}, {"location": "available_software/detail/libidn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libidn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libidn, load one of these modules using a module load command like:

              module load libidn/1.38-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libidn/1.38-GCCcore-11.2.0 x x x x x x libidn/1.36-GCCcore-10.3.0 - x x - x x libidn/1.35-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/libidn2/", "title": "libidn2", "text": ""}, {"location": "available_software/detail/libidn2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libidn2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libidn2, load one of these modules using a module load command like:

              module load libidn2/2.3.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libidn2/2.3.2-GCCcore-11.2.0 x x x x x x libidn2/2.3.0-GCCcore-10.3.0 - x x x x x libidn2/2.3.0-GCCcore-10.2.0 x x x x x x libidn2/2.3.0-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/libjpeg-turbo/", "title": "libjpeg-turbo", "text": ""}, {"location": "available_software/detail/libjpeg-turbo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libjpeg-turbo, load one of these modules using a module load command like:

              module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x libjpeg-turbo/2.1.3-GCCcore-11.3.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-11.2.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-10.3.0 x x x x x x libjpeg-turbo/2.0.5-GCCcore-10.2.0 x x x x x x libjpeg-turbo/2.0.4-GCCcore-9.3.0 - x x - x x libjpeg-turbo/2.0.3-GCCcore-8.3.0 x x x - x x libjpeg-turbo/2.0.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libjxl/", "title": "libjxl", "text": ""}, {"location": "available_software/detail/libjxl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libjxl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libjxl, load one of these modules using a module load command like:

              module load libjxl/0.8.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libjxl/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libleidenalg/", "title": "libleidenalg", "text": ""}, {"location": "available_software/detail/libleidenalg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libleidenalg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libleidenalg, load one of these modules using a module load command like:

              module load libleidenalg/0.11.1-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libleidenalg/0.11.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/libmad/", "title": "libmad", "text": ""}, {"location": "available_software/detail/libmad/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libmad installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libmad, load one of these modules using a module load command like:

              module load libmad/0.15.1b-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libmad/0.15.1b-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmatheval/", "title": "libmatheval", "text": ""}, {"location": "available_software/detail/libmatheval/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libmatheval installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libmatheval, load one of these modules using a module load command like:

              module load libmatheval/1.1.11-GCCcore-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libmatheval/1.1.11-GCCcore-9.3.0 - x x - x x libmatheval/1.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libmaus2/", "title": "libmaus2", "text": ""}, {"location": "available_software/detail/libmaus2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libmaus2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using libmaus2, load one of these modules using a module load command like:

              module load libmaus2/2.0.813-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libmaus2/2.0.813-GCC-12.3.0 x x x x x x libmaus2/2.0.499-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmypaint/", "title": "libmypaint", "text": ""}, {"location": "available_software/detail/libmypaint/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libmypaint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libmypaint, load one of these modules using a module load command like:

              module load libmypaint/1.6.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libmypaint/1.6.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/libobjcryst/", "title": "libobjcryst", "text": ""}, {"location": "available_software/detail/libobjcryst/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libobjcryst, load one of these modules using a module load command like:

              module load libobjcryst/2021.1.2-intel-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libobjcryst/2021.1.2-intel-2020a - - - - - x libobjcryst/2021.1.2-foss-2021b x x x - x x libobjcryst/2017.2.3-intel-2020a - x x - x x"}, {"location": "available_software/detail/libogg/", "title": "libogg", "text": ""}, {"location": "available_software/detail/libogg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libogg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libogg, load one of these modules using a module load command like:

              module load libogg/1.3.5-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libogg/1.3.5-GCCcore-12.3.0 x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x libogg/1.3.5-GCCcore-11.3.0 x x x x x x libogg/1.3.5-GCCcore-11.2.0 x x x x x x libogg/1.3.4-GCCcore-10.3.0 x x x x x x libogg/1.3.4-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libopus/", "title": "libopus", "text": ""}, {"location": "available_software/detail/libopus/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libopus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libopus, load one of these modules using a module load command like:

              module load libopus/1.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libopus/1.4-GCCcore-12.3.0 x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x libopus/1.3.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libpciaccess/", "title": "libpciaccess", "text": ""}, {"location": "available_software/detail/libpciaccess/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libpciaccess, load one of these modules using a module load command like:

              module load libpciaccess/0.17-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libpciaccess/0.17-GCCcore-13.2.0 x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x libpciaccess/0.16-GCCcore-11.3.0 x x x x x x libpciaccess/0.16-GCCcore-11.2.0 x x x x x x libpciaccess/0.16-GCCcore-10.3.0 x x x x x x libpciaccess/0.16-GCCcore-10.2.0 x x x x x x libpciaccess/0.16-GCCcore-9.3.0 x x x x x x libpciaccess/0.14-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libpng/", "title": "libpng", "text": ""}, {"location": "available_software/detail/libpng/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libpng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libpng, load one of these modules using a module load command like:

              module load libpng/1.6.40-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libpng/1.6.40-GCCcore-13.2.0 x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x libpng/1.6.37-GCCcore-11.3.0 x x x x x x libpng/1.6.37-GCCcore-11.2.0 x x x x x x libpng/1.6.37-GCCcore-10.3.0 x x x x x x libpng/1.6.37-GCCcore-10.2.0 x x x x x x libpng/1.6.37-GCCcore-9.3.0 x x x x x x libpng/1.6.37-GCCcore-8.3.0 x x x - x x libpng/1.6.36-GCCcore-8.2.0 - x - - - - libpng/1.2.58 - x x x x x"}, {"location": "available_software/detail/libpsl/", "title": "libpsl", "text": ""}, {"location": "available_software/detail/libpsl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libpsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libpsl, load one of these modules using a module load command like:

              module load libpsl/0.21.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libpsl/0.21.1-GCCcore-11.2.0 x x x x x x libpsl/0.21.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libreadline/", "title": "libreadline", "text": ""}, {"location": "available_software/detail/libreadline/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libreadline installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libreadline, load one of these modules using a module load command like:

              module load libreadline/8.2-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libreadline/8.2-GCCcore-13.2.0 x x x x x x libreadline/8.2-GCCcore-12.3.0 x x x x x x libreadline/8.2-GCCcore-12.2.0 x x x x x x libreadline/8.1.2-GCCcore-11.3.0 x x x x x x libreadline/8.1-GCCcore-11.2.0 x x x x x x libreadline/8.1-GCCcore-10.3.0 x x x x x x libreadline/8.0-GCCcore-10.2.0 x x x x x x libreadline/8.0-GCCcore-9.3.0 x x x x x x libreadline/8.0-GCCcore-8.3.0 x x x x x x libreadline/8.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/librosa/", "title": "librosa", "text": ""}, {"location": "available_software/detail/librosa/#available-modules", "title": "Available modules", "text": "

              The overview below shows which librosa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using librosa, load one of these modules using a module load command like:

              module load librosa/0.7.2-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty librosa/0.7.2-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/librsvg/", "title": "librsvg", "text": ""}, {"location": "available_software/detail/librsvg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which librsvg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using librsvg, load one of these modules using a module load command like:

              module load librsvg/2.51.2-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty librsvg/2.51.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/librttopo/", "title": "librttopo", "text": ""}, {"location": "available_software/detail/librttopo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which librttopo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using librttopo, load one of these modules using a module load command like:

              module load librttopo/1.1.0-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty librttopo/1.1.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libsigc%2B%2B/", "title": "libsigc++", "text": ""}, {"location": "available_software/detail/libsigc%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libsigc++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libsigc++, load one of these modules using a module load command like:

              module load libsigc++/2.10.8-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libsigc++/2.10.8-GCCcore-10.3.0 - x x - x x libsigc++/2.10.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsndfile/", "title": "libsndfile", "text": ""}, {"location": "available_software/detail/libsndfile/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libsndfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libsndfile, load one of these modules using a module load command like:

              module load libsndfile/1.2.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x libsndfile/1.1.0-GCCcore-11.3.0 x x x x x x libsndfile/1.0.31-GCCcore-11.2.0 x x x x x x libsndfile/1.0.31-GCCcore-10.3.0 x x x x x x libsndfile/1.0.28-GCCcore-10.2.0 x x x x x x libsndfile/1.0.28-GCCcore-9.3.0 - x x - x x libsndfile/1.0.28-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsodium/", "title": "libsodium", "text": ""}, {"location": "available_software/detail/libsodium/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libsodium installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libsodium, load one of these modules using a module load command like:

              module load libsodium/1.0.18-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libsodium/1.0.18-GCCcore-12.3.0 x x x x x x libsodium/1.0.18-GCCcore-12.2.0 x x x x x x libsodium/1.0.18-GCCcore-11.3.0 x x x x x x libsodium/1.0.18-GCCcore-11.2.0 x x x x x x libsodium/1.0.18-GCCcore-10.3.0 x x x x x x libsodium/1.0.18-GCCcore-10.2.0 x x x x x x libsodium/1.0.18-GCCcore-9.3.0 x x x x x x libsodium/1.0.18-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libspatialindex/", "title": "libspatialindex", "text": ""}, {"location": "available_software/detail/libspatialindex/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libspatialindex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libspatialindex, load one of these modules using a module load command like:

              module load libspatialindex/1.9.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libspatialindex/1.9.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libspatialite/", "title": "libspatialite", "text": ""}, {"location": "available_software/detail/libspatialite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libspatialite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libspatialite, load one of these modules using a module load command like:

              module load libspatialite/5.0.1-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libspatialite/5.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libtasn1/", "title": "libtasn1", "text": ""}, {"location": "available_software/detail/libtasn1/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libtasn1 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libtasn1, load one of these modules using a module load command like:

              module load libtasn1/4.18.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libtasn1/4.18.0-GCCcore-11.2.0 x x x x x x libtasn1/4.17.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libtirpc/", "title": "libtirpc", "text": ""}, {"location": "available_software/detail/libtirpc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libtirpc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libtirpc, load one of these modules using a module load command like:

              module load libtirpc/1.3.3-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x libtirpc/1.3.2-GCCcore-11.3.0 x x x x x x libtirpc/1.3.2-GCCcore-11.2.0 x x x x x x libtirpc/1.3.2-GCCcore-10.3.0 x x x x x x libtirpc/1.3.1-GCCcore-10.2.0 - x x x x x libtirpc/1.2.6-GCCcore-9.3.0 - - x - x x libtirpc/1.2.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libtool/", "title": "libtool", "text": ""}, {"location": "available_software/detail/libtool/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libtool, load one of these modules using a module load command like:

              module load libtool/2.4.7-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libtool/2.4.7-GCCcore-13.2.0 x x x x x x libtool/2.4.7-GCCcore-12.3.0 x x x x x x libtool/2.4.7-GCCcore-12.2.0 x x x x x x libtool/2.4.7-GCCcore-11.3.0 x x x x x x libtool/2.4.7 x x x x x x libtool/2.4.6-GCCcore-11.2.0 x x x x x x libtool/2.4.6-GCCcore-10.3.0 x x x x x x libtool/2.4.6-GCCcore-10.2.0 x x x x x x libtool/2.4.6-GCCcore-9.3.0 x x x x x x libtool/2.4.6-GCCcore-8.3.0 x x x x x x libtool/2.4.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libunistring/", "title": "libunistring", "text": ""}, {"location": "available_software/detail/libunistring/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libunistring installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libunistring, load one of these modules using a module load command like:

              module load libunistring/1.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libunistring/1.0-GCCcore-11.2.0 x x x x x x libunistring/0.9.10-GCCcore-10.3.0 x x x - x x libunistring/0.9.10-GCCcore-9.3.0 - x x - x x libunistring/0.9.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libunwind/", "title": "libunwind", "text": ""}, {"location": "available_software/detail/libunwind/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libunwind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libunwind, load one of these modules using a module load command like:

              module load libunwind/1.6.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libunwind/1.6.2-GCCcore-12.3.0 x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x libunwind/1.6.2-GCCcore-11.3.0 x x x x x x libunwind/1.5.0-GCCcore-11.2.0 x x x x x x libunwind/1.4.0-GCCcore-10.3.0 x x x x x x libunwind/1.4.0-GCCcore-10.2.0 x x x x x x libunwind/1.3.1-GCCcore-9.3.0 - x x - x x libunwind/1.3.1-GCCcore-8.3.0 x x x - x x libunwind/1.3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libvdwxc/", "title": "libvdwxc", "text": ""}, {"location": "available_software/detail/libvdwxc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libvdwxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libvdwxc, load one of these modules using a module load command like:

              module load libvdwxc/0.4.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libvdwxc/0.4.0-foss-2021b x x x - x x libvdwxc/0.4.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/libvorbis/", "title": "libvorbis", "text": ""}, {"location": "available_software/detail/libvorbis/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libvorbis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libvorbis, load one of these modules using a module load command like:

              module load libvorbis/1.3.7-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x libvorbis/1.3.7-GCCcore-11.3.0 x x x x x x libvorbis/1.3.7-GCCcore-11.2.0 x x x x x x libvorbis/1.3.7-GCCcore-10.3.0 x x x x x x libvorbis/1.3.7-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libvori/", "title": "libvori", "text": ""}, {"location": "available_software/detail/libvori/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libvori installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libvori, load one of these modules using a module load command like:

              module load libvori/220621-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libvori/220621-GCCcore-12.3.0 x x x x x x libvori/220621-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/libwebp/", "title": "libwebp", "text": ""}, {"location": "available_software/detail/libwebp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libwebp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libwebp, load one of these modules using a module load command like:

              module load libwebp/1.3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libwebp/1.3.1-GCCcore-12.3.0 x x x x x x libwebp/1.3.1-GCCcore-12.2.0 x x x x x x libwebp/1.2.4-GCCcore-11.3.0 x x x x x x libwebp/1.2.0-GCCcore-11.2.0 x x x x x x libwebp/1.2.0-GCCcore-10.3.0 x x x - x x libwebp/1.1.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libwpe/", "title": "libwpe", "text": ""}, {"location": "available_software/detail/libwpe/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libwpe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libwpe, load one of these modules using a module load command like:

              module load libwpe/1.13.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libwpe/1.13.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libxc/", "title": "libxc", "text": ""}, {"location": "available_software/detail/libxc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libxc, load one of these modules using a module load command like:

              module load libxc/6.2.2-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libxc/6.2.2-GCC-12.3.0 x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x libxc/5.2.3-intel-compilers-2022.1.0 x x x x x x libxc/5.2.3-GCC-11.3.0 x x x x x x libxc/5.1.6-intel-compilers-2021.4.0 x x x x x x libxc/5.1.6-GCC-11.2.0 x x x - x x libxc/5.1.5-intel-compilers-2021.2.0 - x x - x x libxc/5.1.5-GCC-10.3.0 x x x x x x libxc/5.1.2-GCC-10.2.0 - x x x x x libxc/4.3.4-iccifort-2020.4.304 - x x x x x libxc/4.3.4-iccifort-2020.1.217 - x x - x x libxc/4.3.4-iccifort-2019.5.281 - x x - x x libxc/4.3.4-GCC-10.2.0 - x x x x x libxc/4.3.4-GCC-9.3.0 - x x - x x libxc/4.3.4-GCC-8.3.0 - x x - x x libxc/3.0.1-iomkl-2020a - x - - - - libxc/3.0.1-intel-2020a - x x - x x libxc/3.0.1-intel-2019b - x - - - - libxc/3.0.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/libxml%2B%2B/", "title": "libxml++", "text": ""}, {"location": "available_software/detail/libxml%2B%2B/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libxml++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libxml++, load one of these modules using a module load command like:

              module load libxml++/2.42.1-GCC-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libxml++/2.42.1-GCC-10.3.0 - x x - x x libxml++/2.40.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxml2/", "title": "libxml2", "text": ""}, {"location": "available_software/detail/libxml2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libxml2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libxml2, load one of these modules using a module load command like:

              module load libxml2/2.11.5-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libxml2/2.11.5-GCCcore-13.2.0 x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x libxml2/2.9.13-GCCcore-11.3.0 x x x x x x libxml2/2.9.10-GCCcore-11.2.0 x x x x x x libxml2/2.9.10-GCCcore-10.3.0 x x x x x x libxml2/2.9.10-GCCcore-10.2.0 x x x x x x libxml2/2.9.10-GCCcore-9.3.0 x x x x x x libxml2/2.9.9-GCCcore-8.3.0 x x x x x x libxml2/2.9.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libxslt/", "title": "libxslt", "text": ""}, {"location": "available_software/detail/libxslt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libxslt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libxslt, load one of these modules using a module load command like:

              module load libxslt/1.1.38-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libxslt/1.1.38-GCCcore-13.2.0 x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x libxslt/1.1.34-GCCcore-11.3.0 x x x x x x libxslt/1.1.34-GCCcore-11.2.0 x x x x x x libxslt/1.1.34-GCCcore-10.3.0 x x x x x x libxslt/1.1.34-GCCcore-10.2.0 x x x x x x libxslt/1.1.34-GCCcore-9.3.0 - x x - x x libxslt/1.1.34-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxsmm/", "title": "libxsmm", "text": ""}, {"location": "available_software/detail/libxsmm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libxsmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libxsmm, load one of these modules using a module load command like:

              module load libxsmm/1.17-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libxsmm/1.17-GCC-12.3.0 x x x x x x libxsmm/1.17-GCC-12.2.0 x x x x x x libxsmm/1.17-GCC-11.3.0 x x x x x x libxsmm/1.16.2-GCC-10.3.0 - x x x x x libxsmm/1.16.1-iccifort-2020.4.304 - x x - x - libxsmm/1.16.1-iccifort-2020.1.217 - x x - x x libxsmm/1.16.1-iccifort-2019.5.281 - x - - - - libxsmm/1.16.1-GCC-10.2.0 - x x x x x libxsmm/1.16.1-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/libyaml/", "title": "libyaml", "text": ""}, {"location": "available_software/detail/libyaml/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libyaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libyaml, load one of these modules using a module load command like:

              module load libyaml/0.2.5-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libyaml/0.2.5-GCCcore-12.3.0 x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x libyaml/0.2.5-GCCcore-11.3.0 x x x x x x libyaml/0.2.5-GCCcore-11.2.0 x x x x x x libyaml/0.2.5-GCCcore-10.3.0 x x x x x x libyaml/0.2.5-GCCcore-10.2.0 x x x x x x libyaml/0.2.2-GCCcore-9.3.0 x x x x x x libyaml/0.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libzip/", "title": "libzip", "text": ""}, {"location": "available_software/detail/libzip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which libzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using libzip, load one of these modules using a module load command like:

              module load libzip/1.7.3-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty libzip/1.7.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/lifelines/", "title": "lifelines", "text": ""}, {"location": "available_software/detail/lifelines/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lifelines installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using lifelines, load one of these modules using a module load command like:

              module load lifelines/0.27.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lifelines/0.27.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/likwid/", "title": "likwid", "text": ""}, {"location": "available_software/detail/likwid/#available-modules", "title": "Available modules", "text": "

              The overview below shows which likwid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using likwid, load one of these modules using a module load command like:

              module load likwid/5.0.1-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty likwid/5.0.1-GCCcore-8.3.0 - - x - x -"}, {"location": "available_software/detail/lmoments3/", "title": "lmoments3", "text": ""}, {"location": "available_software/detail/lmoments3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lmoments3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using lmoments3, load one of these modules using a module load command like:

              module load lmoments3/1.0.6-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lmoments3/1.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/longread_umi/", "title": "longread_umi", "text": ""}, {"location": "available_software/detail/longread_umi/#available-modules", "title": "Available modules", "text": "

              The overview below shows which longread_umi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using longread_umi, load one of these modules using a module load command like:

              module load longread_umi/0.3.2-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty longread_umi/0.3.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/loomR/", "title": "loomR", "text": ""}, {"location": "available_software/detail/loomR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which loomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using loomR, load one of these modules using a module load command like:

              module load loomR/0.2.0-20180425-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty loomR/0.2.0-20180425-foss-2023a-R-4.3.2 x x x x x x loomR/0.2.0-20180425-foss-2022b-R-4.2.2 x x x x x x loomR/0.2.0-20180425-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/loompy/", "title": "loompy", "text": ""}, {"location": "available_software/detail/loompy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which loompy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using loompy, load one of these modules using a module load command like:

              module load loompy/3.0.7-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty loompy/3.0.7-intel-2021b x x x - x x loompy/3.0.7-foss-2022a x x x x x x loompy/3.0.7-foss-2021b x x x - x x loompy/3.0.7-foss-2021a x x x x x x loompy/3.0.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/louvain/", "title": "louvain", "text": ""}, {"location": "available_software/detail/louvain/#available-modules", "title": "Available modules", "text": "

              The overview below shows which louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using louvain, load one of these modules using a module load command like:

              module load louvain/0.8.0-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty louvain/0.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/lpsolve/", "title": "lpsolve", "text": ""}, {"location": "available_software/detail/lpsolve/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lpsolve installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using lpsolve, load one of these modules using a module load command like:

              module load lpsolve/5.5.2.11-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lpsolve/5.5.2.11-GCC-11.2.0 x x x x x x lpsolve/5.5.2.11-GCC-10.2.0 x x x x x x lpsolve/5.5.2.5-iccifort-2019.5.281 - x x - x x lpsolve/5.5.2.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/lxml/", "title": "lxml", "text": ""}, {"location": "available_software/detail/lxml/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lxml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using lxml, load one of these modules using a module load command like:

              module load lxml/4.9.3-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lxml/4.9.3-GCCcore-13.2.0 x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x lxml/4.9.2-GCCcore-12.2.0 x x x x x x lxml/4.9.1-GCCcore-11.3.0 x x x x x x lxml/4.6.3-GCCcore-11.2.0 x x x x x x lxml/4.6.3-GCCcore-10.3.0 x x x x x x lxml/4.6.2-GCCcore-10.2.0 x x x x x x lxml/4.5.2-GCCcore-9.3.0 - x x - x x lxml/4.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/lz4/", "title": "lz4", "text": ""}, {"location": "available_software/detail/lz4/#available-modules", "title": "Available modules", "text": "

              The overview below shows which lz4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using lz4, load one of these modules using a module load command like:

              module load lz4/1.9.4-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty lz4/1.9.4-GCCcore-13.2.0 x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x lz4/1.9.3-GCCcore-11.3.0 x x x x x x lz4/1.9.3-GCCcore-11.2.0 x x x x x x lz4/1.9.3-GCCcore-10.3.0 x x x x x x lz4/1.9.2-GCCcore-10.2.0 x x x x x x lz4/1.9.2-GCCcore-9.3.0 - x x x x x lz4/1.9.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/maeparser/", "title": "maeparser", "text": ""}, {"location": "available_software/detail/maeparser/#available-modules", "title": "Available modules", "text": "

              The overview below shows which maeparser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using maeparser, load one of these modules using a module load command like:

              module load maeparser/1.3.0-iimpi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty maeparser/1.3.0-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/magma/", "title": "magma", "text": ""}, {"location": "available_software/detail/magma/#available-modules", "title": "Available modules", "text": "

              The overview below shows which magma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using magma, load one of these modules using a module load command like:

              module load magma/2.7.2-foss-2023a-CUDA-12.1.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty magma/2.7.2-foss-2023a-CUDA-12.1.1 x - x - x - magma/2.6.2-foss-2022a-CUDA-11.7.0 x - x - x - magma/2.6.1-foss-2021a-CUDA-11.3.1 x - - - x - magma/2.5.4-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/mahotas/", "title": "mahotas", "text": ""}, {"location": "available_software/detail/mahotas/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mahotas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using mahotas, load one of these modules using a module load command like:

              module load mahotas/1.4.13-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mahotas/1.4.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/make/", "title": "make", "text": ""}, {"location": "available_software/detail/make/#available-modules", "title": "Available modules", "text": "

              The overview below shows which make installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using make, load one of these modules using a module load command like:

              module load make/4.4.1-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty make/4.4.1-GCCcore-13.2.0 x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x make/4.3-GCCcore-12.2.0 - x x - x - make/4.3-GCCcore-11.3.0 x x x - x - make/4.3-GCCcore-11.2.0 x x - x - - make/4.3-GCCcore-10.3.0 x x x - x x make/4.3-GCCcore-10.2.0 x x - - - - make/4.3-GCCcore-9.3.0 - x x - x x make/4.2.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/makedepend/", "title": "makedepend", "text": ""}, {"location": "available_software/detail/makedepend/#available-modules", "title": "Available modules", "text": "

              The overview below shows which makedepend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using makedepend, load one of these modules using a module load command like:

              module load makedepend/1.0.6-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty makedepend/1.0.6-GCCcore-10.3.0 - x x - x x makedepend/1.0.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/makeinfo/", "title": "makeinfo", "text": ""}, {"location": "available_software/detail/makeinfo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which makeinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using makeinfo, load one of these modules using a module load command like:

              module load makeinfo/7.0.3-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty makeinfo/7.0.3-GCCcore-12.3.0 x x x x x x makeinfo/6.7-GCCcore-10.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.3.0 - x x - x x makeinfo/6.7-GCCcore-10.2.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.2.0 - x x x x x makeinfo/6.7-GCCcore-9.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-9.3.0 - x x - x x makeinfo/6.7-GCCcore-8.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/manta/", "title": "manta", "text": ""}, {"location": "available_software/detail/manta/#available-modules", "title": "Available modules", "text": "

              The overview below shows which manta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using manta, load one of these modules using a module load command like:

              module load manta/1.6.0-gompi-2020a-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty manta/1.6.0-gompi-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/mapDamage/", "title": "mapDamage", "text": ""}, {"location": "available_software/detail/mapDamage/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mapDamage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using mapDamage, load one of these modules using a module load command like:

              module load mapDamage/2.2.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mapDamage/2.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/matplotlib/", "title": "matplotlib", "text": ""}, {"location": "available_software/detail/matplotlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which matplotlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using matplotlib, load one of these modules using a module load command like:

              module load matplotlib/3.7.2-gfbf-2023a\n
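
              After loading a matplotlib module, a quick sanity check is to confirm that the Python interpreter provided by the module environment sees the expected matplotlib version (a minimal sketch; it assumes the loaded toolchain puts a matching python on your PATH, which is how these bundled installations are normally set up):

              module load matplotlib/3.7.2-gfbf-2023a\npython -c 'import matplotlib; print(matplotlib.__version__)'\n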

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty matplotlib/3.7.2-gfbf-2023a x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x matplotlib/3.5.2-intel-2022a x x x x x x matplotlib/3.5.2-foss-2022a x x x x x x matplotlib/3.5.2-foss-2021b x - x - x - matplotlib/3.4.3-intel-2021b x x x - x x matplotlib/3.4.3-foss-2021b x x x x x x matplotlib/3.4.2-gomkl-2021a x x x x x x matplotlib/3.4.2-foss-2021a x x x x x x matplotlib/3.3.3-intel-2020b - x x - x x matplotlib/3.3.3-fosscuda-2020b x - - - x - matplotlib/3.3.3-foss-2020b x x x x x x matplotlib/3.2.1-intel-2020a-Python-3.8.2 x x x x x x matplotlib/3.2.1-foss-2020a-Python-3.8.2 - x x - x x matplotlib/3.1.1-intel-2019b-Python-3.7.4 - x x - x x matplotlib/3.1.1-foss-2019b-Python-3.7.4 - x x - x x matplotlib/2.2.5-intel-2020a-Python-2.7.18 - x x - x x matplotlib/2.2.5-foss-2020b-Python-2.7.18 - x x x x x matplotlib/2.2.4-intel-2019b-Python-2.7.16 - x x - x x matplotlib/2.2.4-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/maturin/", "title": "maturin", "text": ""}, {"location": "available_software/detail/maturin/#available-modules", "title": "Available modules", "text": "

              The overview below shows which maturin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using maturin, load one of these modules using a module load command like:

              module load maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x maturin/1.4.0-GCCcore-12.2.0-Rust-1.75.0 x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x maturin/1.1.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/mauveAligner/", "title": "mauveAligner", "text": ""}, {"location": "available_software/detail/mauveAligner/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mauveAligner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using mauveAligner, load one of these modules using a module load command like:

              module load mauveAligner/4736-gompi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mauveAligner/4736-gompi-2020a - x x - x x"}, {"location": "available_software/detail/maze/", "title": "maze", "text": ""}, {"location": "available_software/detail/maze/#available-modules", "title": "Available modules", "text": "

              The overview below shows which maze installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using maze, load one of these modules using a module load command like:

              module load maze/20170124-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty maze/20170124-foss-2020b - x x x x x"}, {"location": "available_software/detail/mcu/", "title": "mcu", "text": ""}, {"location": "available_software/detail/mcu/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mcu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using mcu, load one of these modules using a module load command like:

              module load mcu/2021-04-06-gomkl-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mcu/2021-04-06-gomkl-2021a x x x - x x"}, {"location": "available_software/detail/medImgProc/", "title": "medImgProc", "text": ""}, {"location": "available_software/detail/medImgProc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which medImgProc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using medImgProc, load one of these modules using a module load command like:

              module load medImgProc/2.5.7-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty medImgProc/2.5.7-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/medaka/", "title": "medaka", "text": ""}, {"location": "available_software/detail/medaka/#available-modules", "title": "Available modules", "text": "

              The overview below shows which medaka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using medaka, load one of these modules using a module load command like:

              module load medaka/1.11.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty medaka/1.11.3-foss-2022a x x x x x x medaka/1.9.1-foss-2022a x x x x x x medaka/1.8.1-foss-2022a x x x x x x medaka/1.6.0-foss-2021b x x x - x x medaka/1.4.3-foss-2020b - x x x x x medaka/1.4.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.2.6-foss-2019b-Python-3.7.4 - x - - - - medaka/1.1.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.1.1-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/meshalyzer/", "title": "meshalyzer", "text": ""}, {"location": "available_software/detail/meshalyzer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which meshalyzer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using meshalyzer, load one of these modules using a module load command like:

              module load meshalyzer/20200308-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty meshalyzer/20200308-foss-2020a-Python-3.8.2 - x x - x x meshalyzer/2.2-foss-2020b - x x x x x meshalyzer/2.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/meshtool/", "title": "meshtool", "text": ""}, {"location": "available_software/detail/meshtool/#available-modules", "title": "Available modules", "text": "

              The overview below shows which meshtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using meshtool, load one of these modules using a module load command like:

              module load meshtool/16-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty meshtool/16-GCC-10.2.0 - x x x x x meshtool/16-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/meson-python/", "title": "meson-python", "text": ""}, {"location": "available_software/detail/meson-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which meson-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using meson-python, load one of these modules using a module load command like:

              module load meson-python/0.15.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty meson-python/0.15.0-GCCcore-13.2.0 x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/metaWRAP/", "title": "metaWRAP", "text": ""}, {"location": "available_software/detail/metaWRAP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which metaWRAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using metaWRAP, load one of these modules using a module load command like:

              module load metaWRAP/1.3-foss-2020b-Python-2.7.18\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty metaWRAP/1.3-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/metaerg/", "title": "metaerg", "text": ""}, {"location": "available_software/detail/metaerg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which metaerg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using metaerg, load one of these modules using a module load command like:

              module load metaerg/1.2.3-intel-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty metaerg/1.2.3-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/methylpy/", "title": "methylpy", "text": ""}, {"location": "available_software/detail/methylpy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which methylpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using methylpy, load one of these modules using a module load command like:

              module load methylpy/1.2.9-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty methylpy/1.2.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/mgen/", "title": "mgen", "text": ""}, {"location": "available_software/detail/mgen/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using mgen, load one of these modules using a module load command like:

              module load mgen/1.2.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mgen/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/mgltools/", "title": "mgltools", "text": ""}, {"location": "available_software/detail/mgltools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mgltools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using mgltools, load one of these modules using a module load command like:

              module load mgltools/1.5.7\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mgltools/1.5.7 x x x - x x"}, {"location": "available_software/detail/mhcnuggets/", "title": "mhcnuggets", "text": ""}, {"location": "available_software/detail/mhcnuggets/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mhcnuggets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using mhcnuggets, load one of these modules using a module load command like:

              module load mhcnuggets/2.3-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mhcnuggets/2.3-fosscuda-2020b - - - - x - mhcnuggets/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/microctools/", "title": "microctools", "text": ""}, {"location": "available_software/detail/microctools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which microctools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using microctools, load one of these modules using a module load command like:

              module load microctools/0.1.0-20201209-foss-2020b-R-4.0.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty microctools/0.1.0-20201209-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/minibar/", "title": "minibar", "text": ""}, {"location": "available_software/detail/minibar/#available-modules", "title": "Available modules", "text": "

              The overview below shows which minibar installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using minibar, load one of these modules using a module load command like:

              module load minibar/20200326-iccifort-2020.1.217-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty minibar/20200326-iccifort-2020.1.217-Python-3.8.2 - x x - x - minibar/20200326-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/minimap2/", "title": "minimap2", "text": ""}, {"location": "available_software/detail/minimap2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which minimap2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using minimap2, load one of these modules using a module load command like:

              module load minimap2/2.26-GCCcore-12.3.0\n
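
              As a minimal usage sketch once this module is loaded (reference.fa and reads.fq are placeholder file names, not files provided by the module), aligning long reads and writing SAM output could look like:

              # placeholder inputs; -a asks minimap2 to emit SAM alignments
              minimap2 -a reference.fa reads.fq > alignments.sam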

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty minimap2/2.26-GCCcore-12.3.0 x x x x x x minimap2/2.26-GCCcore-12.2.0 x x x x x x minimap2/2.24-GCCcore-11.3.0 x x x x x x minimap2/2.24-GCCcore-11.2.0 x x x - x x minimap2/2.22-GCCcore-11.2.0 x x x - x x minimap2/2.20-GCCcore-10.3.0 x x x - x x minimap2/2.20-GCCcore-10.2.0 - x x - x x minimap2/2.18-GCCcore-10.2.0 - x x x x x minimap2/2.17-GCCcore-9.3.0 - x x - x x minimap2/2.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/minizip/", "title": "minizip", "text": ""}, {"location": "available_software/detail/minizip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which minizip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using minizip, load one of these modules using a module load command like:

              module load minizip/1.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty minizip/1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/misha/", "title": "misha", "text": ""}, {"location": "available_software/detail/misha/#available-modules", "title": "Available modules", "text": "

              The overview below shows which misha installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using misha, load one of these modules using a module load command like:

              module load misha/4.0.10-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty misha/4.0.10-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/mkl-service/", "title": "mkl-service", "text": ""}, {"location": "available_software/detail/mkl-service/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mkl-service installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mkl-service, load one of these modules using a module load command like:

              module load mkl-service/2.3.0-intel-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mkl-service/2.3.0-intel-2021b x x x - x x mkl-service/2.3.0-intel-2020b - - x - x x mkl-service/2.3.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/mm-common/", "title": "mm-common", "text": ""}, {"location": "available_software/detail/mm-common/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mm-common installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mm-common, load one of these modules using a module load command like:

              module load mm-common/1.0.4-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mm-common/1.0.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/molmod/", "title": "molmod", "text": ""}, {"location": "available_software/detail/molmod/#available-modules", "title": "Available modules", "text": "

              The overview below shows which molmod installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using molmod, load one of these modules using a module load command like:

              module load molmod/1.4.5-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty molmod/1.4.5-intel-2020a-Python-3.8.2 x x x x x x molmod/1.4.5-intel-2019b-Python-3.7.4 - x x - x x molmod/1.4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/mongolite/", "title": "mongolite", "text": ""}, {"location": "available_software/detail/mongolite/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mongolite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mongolite, load one of these modules using a module load command like:

              module load mongolite/2.3.0-foss-2020b-R-4.0.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mongolite/2.3.0-foss-2020b-R-4.0.4 - x x x x x mongolite/2.3.0-foss-2020b-R-4.0.3 - x x x x x mongolite/2.3.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/monitor/", "title": "monitor", "text": ""}, {"location": "available_software/detail/monitor/#available-modules", "title": "Available modules", "text": "

              The overview below shows which monitor installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using monitor, load one of these modules using a module load command like:

              module load monitor/1.1.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty monitor/1.1.2 - x x - x -"}, {"location": "available_software/detail/mosdepth/", "title": "mosdepth", "text": ""}, {"location": "available_software/detail/mosdepth/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mosdepth installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mosdepth, load one of these modules using a module load command like:

              module load mosdepth/0.3.3-GCC-11.2.0\n
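
              As an illustrative sketch (sample.bam stands for your own coordinate-sorted and indexed BAM file, and sample_out is an arbitrary output prefix), a basic coverage run could look like:

              # 4 decompression threads; writes sample_out.mosdepth.* summary files
              mosdepth --threads 4 sample_out sample.bam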

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mosdepth/0.3.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/motionSegmentation/", "title": "motionSegmentation", "text": ""}, {"location": "available_software/detail/motionSegmentation/#available-modules", "title": "Available modules", "text": "

              The overview below shows which motionSegmentation installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using motionSegmentation, load one of these modules using a module load command like:

              module load motionSegmentation/2.7.9-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty motionSegmentation/2.7.9-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/mpath/", "title": "mpath", "text": ""}, {"location": "available_software/detail/mpath/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mpath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mpath, load one of these modules using a module load command like:

              module load mpath/1.1.3-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mpath/1.1.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/mpi4py/", "title": "mpi4py", "text": ""}, {"location": "available_software/detail/mpi4py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mpi4py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mpi4py, load one of these modules using a module load command like:

              module load mpi4py/3.1.4-gompi-2023a\n
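
              As a minimal smoke test after loading this module (intended to be run inside a job; the process count of 4 is arbitrary), assuming the mpirun launcher from the gompi toolchain is on your path:

              # each MPI rank reports its rank and the communicator size
              mpirun -np 4 python -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print(c.Get_rank(), 'of', c.Get_size())"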

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mpi4py/3.1.4-gompi-2023a x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x"}, {"location": "available_software/detail/mrcfile/", "title": "mrcfile", "text": ""}, {"location": "available_software/detail/mrcfile/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mrcfile installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mrcfile, load one of these modules using a module load command like:

              module load mrcfile/1.3.0-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mrcfile/1.3.0-fosscuda-2020b x - - - x - mrcfile/1.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/muParser/", "title": "muParser", "text": ""}, {"location": "available_software/detail/muParser/#available-modules", "title": "Available modules", "text": "

              The overview below shows which muParser installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using muParser, load one of these modules using a module load command like:

              module load muParser/2.3.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty muParser/2.3.4-GCCcore-12.3.0 x x x x x x muParser/2.3.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/mujoco-py/", "title": "mujoco-py", "text": ""}, {"location": "available_software/detail/mujoco-py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mujoco-py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mujoco-py, load one of these modules using a module load command like:

              module load mujoco-py/2.3.7-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mujoco-py/2.3.7-foss-2023a x x x x x x mujoco-py/2.1.2.14-foss-2021b x x x x x x"}, {"location": "available_software/detail/multichoose/", "title": "multichoose", "text": ""}, {"location": "available_software/detail/multichoose/#available-modules", "title": "Available modules", "text": "

              The overview below shows which multichoose installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using multichoose, load one of these modules using a module load command like:

              module load multichoose/1.0.3-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty multichoose/1.0.3-GCCcore-11.3.0 x x x x x x multichoose/1.0.3-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/mygene/", "title": "mygene", "text": ""}, {"location": "available_software/detail/mygene/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mygene installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mygene, load one of these modules using a module load command like:

              module load mygene/3.2.2-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mygene/3.2.2-foss-2022b x x x x x x mygene/3.2.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/mysqlclient/", "title": "mysqlclient", "text": ""}, {"location": "available_software/detail/mysqlclient/#available-modules", "title": "Available modules", "text": "

              The overview below shows which mysqlclient installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using mysqlclient, load one of these modules using a module load command like:

              module load mysqlclient/2.1.1-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty mysqlclient/2.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/n2v/", "title": "n2v", "text": ""}, {"location": "available_software/detail/n2v/#available-modules", "title": "Available modules", "text": "

              The overview below shows which n2v installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using n2v, load one of these modules using a module load command like:

              module load n2v/0.3.2-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty n2v/0.3.2-foss-2022a-CUDA-11.7.0 x - - - x - n2v/0.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/nanocompore/", "title": "nanocompore", "text": ""}, {"location": "available_software/detail/nanocompore/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nanocompore installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nanocompore, load one of these modules using a module load command like:

              module load nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/nanofilt/", "title": "nanofilt", "text": ""}, {"location": "available_software/detail/nanofilt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nanofilt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nanofilt, load one of these modules using a module load command like:

              module load nanofilt/2.6.0-intel-2020a-Python-3.8.2\n
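
              As a minimal sketch (reads.fastq is a placeholder for your own data; the quality and length cut-offs are arbitrary example values), filtering reads via standard input could look like:

              # keep reads with mean quality >= 10 and length >= 500
              NanoFilt -q 10 -l 500 < reads.fastq > filtered.fastq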

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nanofilt/2.6.0-intel-2020a-Python-3.8.2 - x x - x x nanofilt/2.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanoget/", "title": "nanoget", "text": ""}, {"location": "available_software/detail/nanoget/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nanoget installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nanoget, load one of these modules using a module load command like:

              module load nanoget/1.18.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nanoget/1.18.1-foss-2022a x x x x x x nanoget/1.18.1-foss-2021a x x x x x x nanoget/1.15.0-intel-2020b - x x - x x nanoget/1.12.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanomath/", "title": "nanomath", "text": ""}, {"location": "available_software/detail/nanomath/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nanomath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nanomath, load one of these modules using a module load command like:

              module load nanomath/1.3.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nanomath/1.3.0-foss-2022a x x x x x x nanomath/1.2.1-foss-2021a x x x x x x nanomath/1.2.0-intel-2020b - x x - x x nanomath/0.23.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanopolish/", "title": "nanopolish", "text": ""}, {"location": "available_software/detail/nanopolish/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nanopolish installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nanopolish, load one of these modules using a module load command like:

              module load nanopolish/0.14.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nanopolish/0.14.0-foss-2022a x x x x x x nanopolish/0.13.3-foss-2020b - x x x x x nanopolish/0.13.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/napari/", "title": "napari", "text": ""}, {"location": "available_software/detail/napari/#available-modules", "title": "Available modules", "text": "

              The overview below shows which napari installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using napari, load one of these modules using a module load command like:

              module load napari/0.4.18-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty napari/0.4.18-foss-2022a x x x x x x napari/0.4.15-foss-2021b x x x - x x"}, {"location": "available_software/detail/ncbi-vdb/", "title": "ncbi-vdb", "text": ""}, {"location": "available_software/detail/ncbi-vdb/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ncbi-vdb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ncbi-vdb, load one of these modules using a module load command like:

              module load ncbi-vdb/3.0.2-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ncbi-vdb/3.0.2-gompi-2022a x x x x x x ncbi-vdb/3.0.0-gompi-2021b x x x x x x ncbi-vdb/2.11.2-gompi-2021b x x x x x x ncbi-vdb/2.10.9-gompi-2020b - x x x x x ncbi-vdb/2.10.7-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ncdf4/", "title": "ncdf4", "text": ""}, {"location": "available_software/detail/ncdf4/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ncdf4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ncdf4, load one of these modules using a module load command like:

              module load ncdf4/1.17-foss-2021a-R-4.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ncdf4/1.17-foss-2021a-R-4.1.0 - x x - x x ncdf4/1.17-foss-2020b-R-4.0.3 x x x x x x ncdf4/1.17-foss-2020a-R-4.0.0 - x x - x x ncdf4/1.17-foss-2019b - x x - x x"}, {"location": "available_software/detail/ncolor/", "title": "ncolor", "text": ""}, {"location": "available_software/detail/ncolor/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ncolor installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ncolor, load one of these modules using a module load command like:

              module load ncolor/1.2.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ncolor/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/ncurses/", "title": "ncurses", "text": ""}, {"location": "available_software/detail/ncurses/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ncurses installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ncurses, load one of these modules using a module load command like:

              module load ncurses/6.4-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ncurses/6.4-GCCcore-13.2.0 x x x x x x ncurses/6.4-GCCcore-12.3.0 x x x x x x ncurses/6.4 x x x x x x ncurses/6.3-GCCcore-12.2.0 x x x x x x ncurses/6.3-GCCcore-11.3.0 x x x x x x ncurses/6.3 x x x x x x ncurses/6.2-GCCcore-11.2.0 x x x x x x ncurses/6.2-GCCcore-10.3.0 x x x x x x ncurses/6.2-GCCcore-10.2.0 x x x x x x ncurses/6.2-GCCcore-9.3.0 x x x x x x ncurses/6.2 x x x x x x ncurses/6.1-GCCcore-8.3.0 x x x x x x ncurses/6.1-GCCcore-8.2.0 - x - - - - ncurses/6.1 x x x x x x ncurses/6.0 x x x x x x"}, {"location": "available_software/detail/ncview/", "title": "ncview", "text": ""}, {"location": "available_software/detail/ncview/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ncview installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ncview, load one of these modules using a module load command like:

              module load ncview/2.1.7-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ncview/2.1.7-intel-2019b - x x - x x"}, {"location": "available_software/detail/netCDF-C%2B%2B4/", "title": "netCDF-C++4", "text": ""}, {"location": "available_software/detail/netCDF-C%2B%2B4/#available-modules", "title": "Available modules", "text": "

              The overview below shows which netCDF-C++4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using netCDF-C++4, load one of these modules using a module load command like:

              module load netCDF-C++4/4.3.1-iimpi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty netCDF-C++4/4.3.1-iimpi-2020b - x x x x x netCDF-C++4/4.3.1-iimpi-2019b - x x - x x netCDF-C++4/4.3.1-gompi-2021b x x x - x x netCDF-C++4/4.3.1-gompi-2021a - x x - x x netCDF-C++4/4.3.1-gompi-2020a - x x - x x"}, {"location": "available_software/detail/netCDF-Fortran/", "title": "netCDF-Fortran", "text": ""}, {"location": "available_software/detail/netCDF-Fortran/#available-modules", "title": "Available modules", "text": "

              The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using netCDF-Fortran, load one of these modules using a module load command like:

              module load netCDF-Fortran/4.6.0-iimpi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty netCDF-Fortran/4.6.0-iimpi-2022a - - x - x x netCDF-Fortran/4.6.0-gompi-2022a x - x - x - netCDF-Fortran/4.5.3-iimpi-2021b x x x x x x netCDF-Fortran/4.5.3-iimpi-2020b - x x x x x netCDF-Fortran/4.5.3-gompi-2021b x x x x x x netCDF-Fortran/4.5.3-gompi-2021a - x x - x x netCDF-Fortran/4.5.2-iimpi-2020a - x x - x x netCDF-Fortran/4.5.2-iimpi-2019b - x x - x x netCDF-Fortran/4.5.2-gompi-2020a - x x - x x netCDF-Fortran/4.5.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/netCDF/", "title": "netCDF", "text": ""}, {"location": "available_software/detail/netCDF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which netCDF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using netCDF, load one of these modules using a module load command like:

              module load netCDF/4.9.2-gompi-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty netCDF/4.9.2-gompi-2023a x x x x x x netCDF/4.9.0-iimpi-2022a - - x - x x netCDF/4.9.0-gompi-2022b x x x x x x netCDF/4.9.0-gompi-2022a x x x x x x netCDF/4.8.1-iimpi-2021b x x x x x x netCDF/4.8.1-gompi-2021b x x x x x x netCDF/4.8.0-iimpi-2021a - x x - x x netCDF/4.8.0-gompi-2021a x x x x x x netCDF/4.7.4-iimpi-2020b - x x x x x netCDF/4.7.4-iimpi-2020a - x x - x x netCDF/4.7.4-gompic-2020b - - - - x - netCDF/4.7.4-gompi-2020b x x x x x x netCDF/4.7.4-gompi-2020a - x x - x x netCDF/4.7.1-iimpi-2019b - x x - x x netCDF/4.7.1-gompi-2019b x x x - x x"}, {"location": "available_software/detail/netcdf4-python/", "title": "netcdf4-python", "text": ""}, {"location": "available_software/detail/netcdf4-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which netcdf4-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using netcdf4-python, load one of these modules using a module load command like:

              module load netcdf4-python/1.6.4-foss-2023a\n
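
              A quick sanity check that the Python bindings are importable after loading this module (scratch.nc is just a throw-away file name):

              # print the package version, then create a tiny NetCDF file
              python -c "import netCDF4; print(netCDF4.__version__)"
              python -c "import netCDF4; ds = netCDF4.Dataset('scratch.nc', 'w'); ds.createDimension('x', 3); ds.close()"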

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty netcdf4-python/1.6.4-foss-2023a x x x x x x netcdf4-python/1.6.1-foss-2022a x x x x x x netcdf4-python/1.5.7-intel-2021b x x x - x x netcdf4-python/1.5.7-foss-2021b x x x x x x netcdf4-python/1.5.7-foss-2021a x x x x x x netcdf4-python/1.5.5.1-intel-2020b - x x - x x netcdf4-python/1.5.5.1-fosscuda-2020b - - - - x - netcdf4-python/1.5.3-intel-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-intel-2019b-Python-3.7.4 - x x - x x netcdf4-python/1.5.3-foss-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nettle/", "title": "nettle", "text": ""}, {"location": "available_software/detail/nettle/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nettle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nettle, load one of these modules using a module load command like:

              module load nettle/3.9.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nettle/3.9.1-GCCcore-12.3.0 x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x nettle/3.8-GCCcore-11.3.0 x x x x x x nettle/3.7.3-GCCcore-11.2.0 x x x x x x nettle/3.7.2-GCCcore-10.3.0 x x x x x x nettle/3.6-GCCcore-10.2.0 x x x x x x nettle/3.6-GCCcore-9.3.0 - x x - x x nettle/3.5.1-GCCcore-8.3.0 x x x - x x nettle/3.4.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/networkx/", "title": "networkx", "text": ""}, {"location": "available_software/detail/networkx/#available-modules", "title": "Available modules", "text": "

              The overview below shows which networkx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using networkx, load one of these modules using a module load command like:

              module load networkx/3.1-gfbf-2023a\n
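
              As a small sanity check after loading this module, building a toy graph and querying a shortest path could look like:

              # path graph 0-1-2-3-4; expected output: [0, 1, 2, 3, 4]
              python -c "import networkx as nx; G = nx.path_graph(5); print(nx.shortest_path(G, 0, 4))"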

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty networkx/3.1-gfbf-2023a x x x x x x networkx/3.0-gfbf-2022b x x x x x x networkx/3.0-foss-2022b x x x x x x networkx/2.8.4-intel-2022a x x x x x x networkx/2.8.4-foss-2022a x x x x x x networkx/2.6.3-foss-2021b x x x x x x networkx/2.5.1-foss-2021a x x x x x x networkx/2.5-fosscuda-2020b x - - - x - networkx/2.5-foss-2020b - x x x x x networkx/2.4-intel-2020a-Python-3.8.2 - x x - x x networkx/2.4-intel-2019b-Python-3.7.4 - x x - x x networkx/2.4-foss-2020a-Python-3.8.2 - x x - x x networkx/2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nghttp2/", "title": "nghttp2", "text": ""}, {"location": "available_software/detail/nghttp2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nghttp2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nghttp2, load one of these modules using a module load command like:

              module load nghttp2/1.48.0-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nghttp2/1.48.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nghttp3/", "title": "nghttp3", "text": ""}, {"location": "available_software/detail/nghttp3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nghttp3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nghttp3, load one of these modules using a module load command like:

              module load nghttp3/0.6.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nghttp3/0.6.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/nglview/", "title": "nglview", "text": ""}, {"location": "available_software/detail/nglview/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nglview installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nglview, load one of these modules using a module load command like:

              module load nglview/2.7.7-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nglview/2.7.7-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/ngtcp2/", "title": "ngtcp2", "text": ""}, {"location": "available_software/detail/ngtcp2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ngtcp2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ngtcp2, load one of these modules using a module load command like:

              module load ngtcp2/0.7.0-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ngtcp2/0.7.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nichenetr/", "title": "nichenetr", "text": ""}, {"location": "available_software/detail/nichenetr/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nichenetr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nichenetr, load one of these modules using a module load command like:

              module load nichenetr/2.0.4-foss-2022b-R-4.2.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nichenetr/2.0.4-foss-2022b-R-4.2.2 x x x x x x nichenetr/1.1.1-20230223-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/nlohmann_json/", "title": "nlohmann_json", "text": ""}, {"location": "available_software/detail/nlohmann_json/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nlohmann_json, load one of these modules using a module load command like:

              module load nlohmann_json/3.11.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x nlohmann_json/3.10.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/nnU-Net/", "title": "nnU-Net", "text": ""}, {"location": "available_software/detail/nnU-Net/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nnU-Net installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nnU-Net, load one of these modules using a module load command like:

              module load nnU-Net/1.7.0-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nnU-Net/1.7.0-fosscuda-2020b x - - - x - nnU-Net/1.7.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/nodejs/", "title": "nodejs", "text": ""}, {"location": "available_software/detail/nodejs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nodejs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nodejs, load one of these modules using a module load command like:

              module load nodejs/18.17.1-GCCcore-12.3.0\n
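
              A quick check that the runtime works after loading this module:

              node --version
              # evaluate a one-line script
              node -e "console.log('node says:', 6 * 7)"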

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nodejs/18.17.1-GCCcore-12.3.0 x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x nodejs/16.15.1-GCCcore-11.3.0 x x x x x x nodejs/14.17.6-GCCcore-11.2.0 x x x x x x nodejs/14.17.0-GCCcore-10.3.0 x x x x x x nodejs/12.19.0-GCCcore-10.2.0 x x x x x x nodejs/12.16.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/noise/", "title": "noise", "text": ""}, {"location": "available_software/detail/noise/#available-modules", "title": "Available modules", "text": "

              The overview below shows which noise installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using noise, load one of these modules using a module load command like:

              module load noise/1.2.2-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty noise/1.2.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/nsync/", "title": "nsync", "text": ""}, {"location": "available_software/detail/nsync/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nsync installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nsync, load one of these modules using a module load command like:

              module load nsync/1.26.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nsync/1.26.0-GCCcore-12.3.0 x x x x x x nsync/1.26.0-GCCcore-12.2.0 x x x x x x nsync/1.25.0-GCCcore-11.3.0 x x x x x x nsync/1.24.0-GCCcore-11.2.0 x x x x x x nsync/1.24.0-GCCcore-10.3.0 x x x x x x nsync/1.24.0-GCCcore-10.2.0 x x x x x x nsync/1.24.0-GCCcore-9.3.0 - x x - x x nsync/1.24.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ntCard/", "title": "ntCard", "text": ""}, {"location": "available_software/detail/ntCard/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ntCard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ntCard, load one of these modules using a module load command like:

              module load ntCard/1.2.2-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ntCard/1.2.2-GCC-12.3.0 x x x x x x ntCard/1.2.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/num2words/", "title": "num2words", "text": ""}, {"location": "available_software/detail/num2words/#available-modules", "title": "Available modules", "text": "

              The overview below shows which num2words installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using num2words, load one of these modules using a module load command like:

              module load num2words/0.5.10-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty num2words/0.5.10-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/numactl/", "title": "numactl", "text": ""}, {"location": "available_software/detail/numactl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which numactl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using numactl, load one of these modules using a module load command like:

              module load numactl/2.0.16-GCCcore-13.2.0\n
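
              As an illustrative sketch (./my_application is a placeholder for your own binary), inspecting the NUMA layout of a node and binding a run to a single NUMA node could look like:

              # show available NUMA nodes, their CPUs and memory
              numactl --hardware
              # run the (placeholder) application with CPUs and memory pinned to node 0
              numactl --cpunodebind=0 --membind=0 ./my_application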

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty numactl/2.0.16-GCCcore-13.2.0 x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x numactl/2.0.14-GCCcore-11.3.0 x x x x x x numactl/2.0.14-GCCcore-11.2.0 x x x x x x numactl/2.0.14-GCCcore-10.3.0 x x x x x x numactl/2.0.13-GCCcore-10.2.0 x x x x x x numactl/2.0.13-GCCcore-9.3.0 x x x x x x numactl/2.0.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/numba/", "title": "numba", "text": ""}, {"location": "available_software/detail/numba/#available-modules", "title": "Available modules", "text": "

              The overview below shows which numba installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using numba, load one of these modules using a module load command like:

              module load numba/0.58.1-foss-2023a\n
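
              As a minimal sketch of exercising the JIT compiler after loading this module (the tiny lambda is only a sanity check, not a realistic workload):

              python -c "import numba; print(numba.__version__)"
              # jit-compile a small function and call it; expected output: 5050
              python -c "import numba; f = numba.njit(lambda n: n * (n + 1) // 2); print(f(100))"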

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty numba/0.58.1-foss-2023a x x x x x x numba/0.58.1-foss-2022b x x x x x x numba/0.56.4-foss-2022a-CUDA-11.7.0 x - x - x - numba/0.56.4-foss-2022a x x x x x x numba/0.54.1-intel-2021b x x x - x x numba/0.54.1-foss-2021b-CUDA-11.4.1 x - - - x - numba/0.54.1-foss-2021b x x x x x x numba/0.53.1-fosscuda-2020b - - - - x - numba/0.53.1-foss-2021a x x x x x x numba/0.53.1-foss-2020b - x x x x x numba/0.52.0-intel-2020b - x x - x x numba/0.52.0-fosscuda-2020b - - - - x - numba/0.52.0-foss-2020b - x x x x x numba/0.50.0-intel-2020a-Python-3.8.2 - x x - x x numba/0.50.0-foss-2020a-Python-3.8.2 - x x - x x numba/0.47.0-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/numexpr/", "title": "numexpr", "text": ""}, {"location": "available_software/detail/numexpr/#available-modules", "title": "Available modules", "text": "

              The overview below shows which numexpr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using numexpr, load one of these modules using a module load command like:

              module load numexpr/2.7.1-intel-2020a-Python-3.8.2\n
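
              As a small example after loading this module (NumPy is assumed to be available alongside it, since numexpr depends on it):

              # evaluate a reduction over a million-element array with numexpr's compiled expression engine
              python -c "import numpy as np, numexpr as ne; a = np.arange(1e6); print(ne.evaluate('sum(a ** 2)'))"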

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty numexpr/2.7.1-intel-2020a-Python-3.8.2 x x x x x x numexpr/2.7.1-intel-2019b-Python-2.7.16 - x - - - x numexpr/2.7.1-foss-2020a-Python-3.8.2 - x x - x x numexpr/2.7.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nvtop/", "title": "nvtop", "text": ""}, {"location": "available_software/detail/nvtop/#available-modules", "title": "Available modules", "text": "

              The overview below shows which nvtop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using nvtop, load one of these modules using a module load command like:

              module load nvtop/1.2.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty nvtop/1.2.1-GCCcore-10.3.0 x - - - - -"}, {"location": "available_software/detail/olaFlow/", "title": "olaFlow", "text": ""}, {"location": "available_software/detail/olaFlow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which olaFlow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using olaFlow, load one of these modules using a module load command like:

              module load olaFlow/20210820-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty olaFlow/20210820-foss-2021b x x x - x x"}, {"location": "available_software/detail/olego/", "title": "olego", "text": ""}, {"location": "available_software/detail/olego/#available-modules", "title": "Available modules", "text": "

              The overview below shows which olego installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using olego, load one of these modules using a module load command like:

              module load olego/1.1.9-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty olego/1.1.9-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/onedrive/", "title": "onedrive", "text": ""}, {"location": "available_software/detail/onedrive/#available-modules", "title": "Available modules", "text": "

              The overview below shows which onedrive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using onedrive, load one of these modules using a module load command like:

              module load onedrive/2.4.21-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty onedrive/2.4.21-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/ont-fast5-api/", "title": "ont-fast5-api", "text": ""}, {"location": "available_software/detail/ont-fast5-api/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ont-fast5-api installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using ont-fast5-api, load one of these modules using a module load command like:

              module load ont-fast5-api/4.1.1-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ont-fast5-api/4.1.1-foss-2022b x x x x x x ont-fast5-api/4.1.1-foss-2022a x x x x x x ont-fast5-api/4.0.2-foss-2021b x x x - x x ont-fast5-api/4.0.0-foss-2021a x x x - x x ont-fast5-api/3.3.0-fosscuda-2020b - - - - x - ont-fast5-api/3.3.0-foss-2020b - x x x x x ont-fast5-api/3.3.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/openCARP/", "title": "openCARP", "text": ""}, {"location": "available_software/detail/openCARP/#available-modules", "title": "Available modules", "text": "

              The overview below shows which openCARP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using openCARP, load one of these modules using a module load command like:

              module load openCARP/6.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty openCARP/6.0-foss-2020b - x x x x x openCARP/3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/openkim-models/", "title": "openkim-models", "text": ""}, {"location": "available_software/detail/openkim-models/#available-modules", "title": "Available modules", "text": "

              The overview below shows which openkim-models installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using openkim-models, load one of these modules using a module load command like:

              module load openkim-models/20190725-intel-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty openkim-models/20190725-intel-2019b - x x - x x openkim-models/20190725-foss-2019b - x x - x x"}, {"location": "available_software/detail/openpyxl/", "title": "openpyxl", "text": ""}, {"location": "available_software/detail/openpyxl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which openpyxl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using openpyxl, load one of these modules using a module load command like:

              module load openpyxl/3.1.2-GCCcore-13.2.0\n
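
              A minimal sketch that writes a small spreadsheet after loading this module (example.xlsx is just a scratch file name):

              # create a workbook, set one cell and save it
              python -c "import openpyxl; wb = openpyxl.Workbook(); wb.active['A1'] = 'hello'; wb.save('example.xlsx')"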

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty openpyxl/3.1.2-GCCcore-13.2.0 x x x x x x openpyxl/3.1.2-GCCcore-12.3.0 x x x x x x openpyxl/3.1.2-GCCcore-12.2.0 x x x x x x openpyxl/3.0.10-GCCcore-11.3.0 x x x x x x openpyxl/3.0.9-GCCcore-11.2.0 x x x x x x openpyxl/3.0.7-GCCcore-10.3.0 x x x x x x openpyxl/2.6.4-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/openslide-python/", "title": "openslide-python", "text": ""}, {"location": "available_software/detail/openslide-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which openslide-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using openslide-python, load one of these modules using a module load command like:

              module load openslide-python/1.2.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty openslide-python/1.2.0-GCCcore-11.3.0 x - x - x - openslide-python/1.1.2-GCCcore-11.2.0 x x x - x x openslide-python/1.1.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/orca/", "title": "orca", "text": ""}, {"location": "available_software/detail/orca/#available-modules", "title": "Available modules", "text": "

              The overview below shows which orca installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using orca, load one of these modules using a module load command like:

              module load orca/1.3.1-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty orca/1.3.1-GCCcore-10.2.0 - x - - - - orca/1.3.0-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/p11-kit/", "title": "p11-kit", "text": ""}, {"location": "available_software/detail/p11-kit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which p11-kit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using p11-kit, load one of these modules using a module load command like:

              module load p11-kit/0.24.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty p11-kit/0.24.1-GCCcore-11.2.0 x x x x x x p11-kit/0.24.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/p4est/", "title": "p4est", "text": ""}, {"location": "available_software/detail/p4est/#available-modules", "title": "Available modules", "text": "

              The overview below shows which p4est installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using p4est, load one of these modules using a module load command like:

              module load p4est/2.8-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty p4est/2.8-foss-2021a - x x - x x"}, {"location": "available_software/detail/p7zip/", "title": "p7zip", "text": ""}, {"location": "available_software/detail/p7zip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which p7zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using p7zip, load one of these modules using a module load command like:

              module load p7zip/17.03-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty p7zip/17.03-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/pIRS/", "title": "pIRS", "text": ""}, {"location": "available_software/detail/pIRS/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pIRS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using pIRS, load one of these modules using a module load command like:

              module load pIRS/2.0.2-gompi-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pIRS/2.0.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/packmol/", "title": "packmol", "text": ""}, {"location": "available_software/detail/packmol/#available-modules", "title": "Available modules", "text": "

              The overview below shows which packmol installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using packmol, load one of these modules using a module load command like:

              module load packmol/v20.2.2-iccifort-2020.1.217\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty packmol/v20.2.2-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/pagmo/", "title": "pagmo", "text": ""}, {"location": "available_software/detail/pagmo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pagmo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using pagmo, load one of these modules using a module load command like:

              module load pagmo/2.17.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pagmo/2.17.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/pairtools/", "title": "pairtools", "text": ""}, {"location": "available_software/detail/pairtools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pairtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using pairtools, load one of these modules using a module load command like:

              module load pairtools/0.3.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pairtools/0.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/panaroo/", "title": "panaroo", "text": ""}, {"location": "available_software/detail/panaroo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which panaroo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using panaroo, load one of these modules using a module load command like:

              module load panaroo/1.2.8-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty panaroo/1.2.8-foss-2020b - x x x x x"}, {"location": "available_software/detail/pandas/", "title": "pandas", "text": ""}, {"location": "available_software/detail/pandas/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pandas, load one of these modules using a module load command like:

              module load pandas/1.1.2-foss-2020a-Python-3.8.2\n
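
              As a minimal, purely illustrative check that the loaded module works, you can run pandas from the command line:

              python -c "import pandas as pd; print(pd.DataFrame({'a': [1, 2, 3]}).describe())"\n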

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pandas/1.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel-fastq-dump/", "title": "parallel-fastq-dump", "text": ""}, {"location": "available_software/detail/parallel-fastq-dump/#available-modules", "title": "Available modules", "text": "

              The overview below shows which parallel-fastq-dump installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using parallel-fastq-dump, load one of these modules using a module load command like:

              module load parallel-fastq-dump/0.6.7-gompi-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty parallel-fastq-dump/0.6.7-gompi-2022a x x x x x x parallel-fastq-dump/0.6.7-gompi-2020b - x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-SRA-Toolkit-3.0.0-Python-3.8.2 x x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel/", "title": "parallel", "text": ""}, {"location": "available_software/detail/parallel/#available-modules", "title": "Available modules", "text": "

              The overview below shows which parallel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using parallel, load one of these modules using a module load command like:

              module load parallel/20230722-GCCcore-12.2.0\n
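
              As an illustration (the *.log pattern is just a placeholder), GNU parallel runs a command over many inputs concurrently:

              parallel gzip ::: *.log\n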

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty parallel/20230722-GCCcore-12.2.0 x x x x x x parallel/20220722-GCCcore-11.3.0 x x x x x x parallel/20210722-GCCcore-11.2.0 - x x x x x parallel/20210622-GCCcore-10.3.0 - x x x x x parallel/20210322-GCCcore-10.2.0 - x x x x x parallel/20200522-GCCcore-9.3.0 - x x - x x parallel/20190922-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/parasail/", "title": "parasail", "text": ""}, {"location": "available_software/detail/parasail/#available-modules", "title": "Available modules", "text": "

              The overview below shows which parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using parasail, load one of these modules using a module load command like:

              module load parasail/2.6-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty parasail/2.6-GCC-11.3.0 x x x x x x parasail/2.5-GCC-11.2.0 x x x - x x parasail/2.4.3-GCC-10.3.0 x x x - x x parasail/2.4.3-GCC-10.2.0 - - x - x - parasail/2.4.2-iccifort-2020.1.217 - x x - x x parasail/2.4.1-intel-2019b - x x - x x parasail/2.4.1-foss-2019b - x - - - - parasail/2.4.1-GCC-8.3.0 - - x - x x"}, {"location": "available_software/detail/patchelf/", "title": "patchelf", "text": ""}, {"location": "available_software/detail/patchelf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which patchelf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using patchelf, load one of these modules using a module load command like:

              module load patchelf/0.18.0-GCCcore-13.2.0\n
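
              For example (the binary name is a placeholder), patchelf can inspect the RPATH of an existing executable:

              patchelf --print-rpath ./my_program\n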

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty patchelf/0.18.0-GCCcore-13.2.0 x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x patchelf/0.17.2-GCCcore-12.2.0 x x x x x x patchelf/0.15.0-GCCcore-11.3.0 x x x x x x patchelf/0.13-GCCcore-11.2.0 x x x x x x patchelf/0.12-GCCcore-10.3.0 - x x - x x patchelf/0.12-GCCcore-9.3.0 - x x - x x patchelf/0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pauvre/", "title": "pauvre", "text": ""}, {"location": "available_software/detail/pauvre/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pauvre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pauvre, load one of these modules using a module load command like:

              module load pauvre/0.1924-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pauvre/0.1924-intel-2020b - x x - x x pauvre/0.1923-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pblat/", "title": "pblat", "text": ""}, {"location": "available_software/detail/pblat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pblat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pblat, load one of these modules using a module load command like:

              module load pblat/2.5.1-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pblat/2.5.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/pdsh/", "title": "pdsh", "text": ""}, {"location": "available_software/detail/pdsh/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pdsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pdsh, load one of these modules using a module load command like:

              module load pdsh/2.34-GCCcore-12.3.0\n
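
              As an illustration (the node names are placeholders), pdsh runs a command on several hosts in parallel:

              pdsh -w node01,node02 uptime\n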

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pdsh/2.34-GCCcore-12.3.0 x x x x x x pdsh/2.34-GCCcore-12.2.0 x x x x x x pdsh/2.34-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/peakdetect/", "title": "peakdetect", "text": ""}, {"location": "available_software/detail/peakdetect/#available-modules", "title": "Available modules", "text": "

              The overview below shows which peakdetect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using peakdetect, load one of these modules using a module load command like:

              module load peakdetect/1.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty peakdetect/1.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/petsc4py/", "title": "petsc4py", "text": ""}, {"location": "available_software/detail/petsc4py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which petsc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using petsc4py, load one of these modules using a module load command like:

              module load petsc4py/3.17.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty petsc4py/3.17.4-foss-2022a x x x x x x petsc4py/3.15.0-foss-2021a - x x - x x petsc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pftoolsV3/", "title": "pftoolsV3", "text": ""}, {"location": "available_software/detail/pftoolsV3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pftoolsV3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pftoolsV3, load one of these modules using a module load command like:

              module load pftoolsV3/3.2.11-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pftoolsV3/3.2.11-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phonemizer/", "title": "phonemizer", "text": ""}, {"location": "available_software/detail/phonemizer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which phonemizer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using phonemizer, load one of these modules using a module load command like:

              module load phonemizer/2.2.1-gompi-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty phonemizer/2.2.1-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/phonopy/", "title": "phonopy", "text": ""}, {"location": "available_software/detail/phonopy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which phonopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using phonopy, load one of these modules using a module load command like:

              module load phonopy/2.7.1-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty phonopy/2.7.1-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/phototonic/", "title": "phototonic", "text": ""}, {"location": "available_software/detail/phototonic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which phototonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using phototonic, load one of these modules using a module load command like:

              module load phototonic/2.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty phototonic/2.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phyluce/", "title": "phyluce", "text": ""}, {"location": "available_software/detail/phyluce/#available-modules", "title": "Available modules", "text": "

              The overview below shows which phyluce installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using phyluce, load one of these modules using a module load command like:

              module load phyluce/1.7.3-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty phyluce/1.7.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/picard/", "title": "picard", "text": ""}, {"location": "available_software/detail/picard/#available-modules", "title": "Available modules", "text": "

              The overview below shows which picard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using picard, load one of these modules using a module load command like:

              module load picard/2.25.1-Java-11\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty picard/2.25.1-Java-11 x x x x x x picard/2.25.0-Java-11 - x x x x x picard/2.21.6-Java-11 - x x - x x picard/2.21.1-Java-11 - - x - x x picard/2.18.27-Java-1.8 - - - - - x"}, {"location": "available_software/detail/pigz/", "title": "pigz", "text": ""}, {"location": "available_software/detail/pigz/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pigz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pigz, load one of these modules using a module load command like:

              module load pigz/2.8-GCCcore-12.3.0\n
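
              For example (the file name and thread count are placeholders), pigz compresses a file using multiple cores:

              pigz -p 8 large_input.fastq\n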

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pigz/2.8-GCCcore-12.3.0 x x x x x x pigz/2.7-GCCcore-11.3.0 x x x x x x pigz/2.6-GCCcore-11.2.0 x x x - x x pigz/2.6-GCCcore-10.2.0 - x x x x x pigz/2.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pixman/", "title": "pixman", "text": ""}, {"location": "available_software/detail/pixman/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pixman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pixman, load one of these modules using a module load command like:

              module load pixman/0.42.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pixman/0.42.2-GCCcore-12.3.0 x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x pixman/0.40.0-GCCcore-11.3.0 x x x x x x pixman/0.40.0-GCCcore-11.2.0 x x x x x x pixman/0.40.0-GCCcore-10.3.0 x x x x x x pixman/0.40.0-GCCcore-10.2.0 x x x x x x pixman/0.38.4-GCCcore-9.3.0 x x x x x x pixman/0.38.4-GCCcore-8.3.0 x x x - x x pixman/0.38.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/pkg-config/", "title": "pkg-config", "text": ""}, {"location": "available_software/detail/pkg-config/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pkg-config installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pkg-config, load one of these modules using a module load command like:

              module load pkg-config/0.29.2-GCCcore-12.2.0\n
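
              As an illustration (zlib is just an example of a library that ships a .pc file; any library visible to pkg-config works), pkg-config prints the compile and link flags for an installed library:

              pkg-config --cflags --libs zlib\n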

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pkg-config/0.29.2-GCCcore-12.2.0 x x x x x x pkg-config/0.29.2-GCCcore-11.3.0 x x x x x x pkg-config/0.29.2-GCCcore-11.2.0 x x x x x x pkg-config/0.29.2-GCCcore-10.3.0 x x x x x x pkg-config/0.29.2-GCCcore-10.2.0 x x x x x x pkg-config/0.29.2-GCCcore-9.3.0 x x x x x x pkg-config/0.29.2-GCCcore-8.3.0 x x x - x x pkg-config/0.29.2-GCCcore-8.2.0 - x - - - - pkg-config/0.29.2 x x x - x x"}, {"location": "available_software/detail/pkgconf/", "title": "pkgconf", "text": ""}, {"location": "available_software/detail/pkgconf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pkgconf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pkgconf, load one of these modules using a module load command like:

              module load pkgconf/2.0.3-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x pkgconf/1.8.0-GCCcore-11.3.0 x x x x x x pkgconf/1.8.0-GCCcore-11.2.0 x x x x x x pkgconf/1.8.0 x x x x x x"}, {"location": "available_software/detail/pkgconfig/", "title": "pkgconfig", "text": ""}, {"location": "available_software/detail/pkgconfig/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pkgconfig, load one of these modules using a module load command like:

              module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.2.0-python x x x x x x pkgconfig/1.5.4-GCCcore-10.3.0-python x x x x x x pkgconfig/1.5.1-GCCcore-10.2.0-python x x x x x x pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x pkgconfig/1.5.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/plot1cell/", "title": "plot1cell", "text": ""}, {"location": "available_software/detail/plot1cell/#available-modules", "title": "Available modules", "text": "

              The overview below shows which plot1cell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using plot1cell, load one of these modules using a module load command like:

              module load plot1cell/0.0.1-foss-2022b-R-4.2.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty plot1cell/0.0.1-foss-2022b-R-4.2.2 x x x x x x plot1cell/0.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/plotly-orca/", "title": "plotly-orca", "text": ""}, {"location": "available_software/detail/plotly-orca/#available-modules", "title": "Available modules", "text": "

              The overview below shows which plotly-orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using plotly-orca, load one of these modules using a module load command like:

              module load plotly-orca/1.3.1-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty plotly-orca/1.3.1-GCCcore-10.2.0 - x x x x x plotly-orca/1.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/plotly.py/", "title": "plotly.py", "text": ""}, {"location": "available_software/detail/plotly.py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which plotly.py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using plotly.py, load one of these modules using a module load command like:

              module load plotly.py/5.16.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty plotly.py/5.16.0-GCCcore-12.3.0 x x x x x x plotly.py/5.13.1-GCCcore-12.2.0 x x x x x x plotly.py/5.12.0-GCCcore-11.3.0 x x x x x x plotly.py/5.10.0-GCCcore-11.3.0 x x x - x x plotly.py/5.4.0-GCCcore-11.2.0 x x x - x x plotly.py/5.1.0-GCCcore-10.3.0 x x x - x x plotly.py/4.14.3-GCCcore-10.2.0 - x x x x x plotly.py/4.8.1-GCCcore-9.3.0 - x x - x x plotly.py/4.4.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/pocl/", "title": "pocl", "text": ""}, {"location": "available_software/detail/pocl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pocl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pocl, load one of these modules using a module load command like:

              module load pocl/4.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pocl/4.0-GCC-12.3.0 x x x x x x pocl/3.0-GCC-11.3.0 x x x - x x pocl/1.8-GCC-11.3.0-CUDA-11.7.0 x - - - x - pocl/1.8-GCC-11.3.0 x x x x x x pocl/1.8-GCC-11.2.0 x x x - x x pocl/1.6-gcccuda-2020b - - - - x - pocl/1.6-GCC-10.2.0 - x x x x x pocl/1.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/pod5-file-format/", "title": "pod5-file-format", "text": ""}, {"location": "available_software/detail/pod5-file-format/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pod5-file-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pod5-file-format, load one of these modules using a module load command like:

              module load pod5-file-format/0.1.8-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pod5-file-format/0.1.8-foss-2022a x x x x x x"}, {"location": "available_software/detail/poetry/", "title": "poetry", "text": ""}, {"location": "available_software/detail/poetry/#available-modules", "title": "Available modules", "text": "

              The overview below shows which poetry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using poetry, load one of these modules using a module load command like:

              module load poetry/1.7.1-GCCcore-12.3.0\n
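
              For example (project and dependency names are placeholders), poetry can scaffold a new Python project and add a dependency to it:

              poetry new myproject\ncd myproject\npoetry add requests\n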

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty poetry/1.7.1-GCCcore-12.3.0 x x x x x x poetry/1.6.1-GCCcore-13.2.0 x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/polars/", "title": "polars", "text": ""}, {"location": "available_software/detail/polars/#available-modules", "title": "Available modules", "text": "

              The overview below shows which polars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using polars, load one of these modules using a module load command like:

              module load polars/0.15.6-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty polars/0.15.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/poppler/", "title": "poppler", "text": ""}, {"location": "available_software/detail/poppler/#available-modules", "title": "Available modules", "text": "

              The overview below shows which poppler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using poppler, load one of these modules using a module load command like:

              module load poppler/23.09.0-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty poppler/23.09.0-GCC-12.3.0 x x x x x x poppler/22.01.0-GCC-11.2.0 x x x - x x poppler/21.06.1-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/popscle/", "title": "popscle", "text": ""}, {"location": "available_software/detail/popscle/#available-modules", "title": "Available modules", "text": "

              The overview below shows which popscle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using popscle, load one of these modules using a module load command like:

              module load popscle/0.1-beta-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty popscle/0.1-beta-foss-2019b - x x - x x popscle/0.1-beta-20210505-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/porefoam/", "title": "porefoam", "text": ""}, {"location": "available_software/detail/porefoam/#available-modules", "title": "Available modules", "text": "

              The overview below shows which porefoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using porefoam, load one of these modules using a module load command like:

              module load porefoam/2021-09-21-foss-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty porefoam/2021-09-21-foss-2020a - x x - x x"}, {"location": "available_software/detail/powerlaw/", "title": "powerlaw", "text": ""}, {"location": "available_software/detail/powerlaw/#available-modules", "title": "Available modules", "text": "

              The overview below shows which powerlaw installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using powerlaw, load one of these modules using a module load command like:

              module load powerlaw/1.5-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty powerlaw/1.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/pplacer/", "title": "pplacer", "text": ""}, {"location": "available_software/detail/pplacer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pplacer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pplacer, load one of these modules using a module load command like:

              module load pplacer/1.1.alpha19\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pplacer/1.1.alpha19 x x x x x x"}, {"location": "available_software/detail/preseq/", "title": "preseq", "text": ""}, {"location": "available_software/detail/preseq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which preseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using preseq, load one of these modules using a module load command like:

              module load preseq/3.2.0-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty preseq/3.2.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/presto/", "title": "presto", "text": ""}, {"location": "available_software/detail/presto/#available-modules", "title": "Available modules", "text": "

              The overview below shows which presto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using presto, load one of these modules using a module load command like:

              module load presto/1.0.0-20230501-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty presto/1.0.0-20230501-foss-2023a-R-4.3.2 x x x x x x presto/1.0.0-20230113-foss-2022a-R-4.2.1 x x x x x x presto/1.0.0-20200718-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/pretty-yaml/", "title": "pretty-yaml", "text": ""}, {"location": "available_software/detail/pretty-yaml/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pretty-yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pretty-yaml, load one of these modules using a module load command like:

              module load pretty-yaml/21.10.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pretty-yaml/21.10.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/prodigal/", "title": "prodigal", "text": ""}, {"location": "available_software/detail/prodigal/#available-modules", "title": "Available modules", "text": "

              The overview below shows which prodigal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using prodigal, load one of these modules using a module load command like:

              module load prodigal/2.6.3-GCCcore-12.3.0\n
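
              As an illustration (file names are placeholders), Prodigal predicts genes in a prokaryotic genome assembly:

              prodigal -i genome.fna -o genes.gbk -a proteins.faa\n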

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty prodigal/2.6.3-GCCcore-12.3.0 x x x x x x prodigal/2.6.3-GCCcore-12.2.0 x x x x x x prodigal/2.6.3-GCCcore-11.3.0 x x x x x x prodigal/2.6.3-GCCcore-11.2.0 x x x x x x prodigal/2.6.3-GCCcore-10.2.0 x x x x x x prodigal/2.6.3-GCCcore-9.3.0 - x x - x x prodigal/2.6.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/prokka/", "title": "prokka", "text": ""}, {"location": "available_software/detail/prokka/#available-modules", "title": "Available modules", "text": "

              The overview below shows which prokka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using prokka, load one of these modules using a module load command like:

              module load prokka/1.14.5-gompi-2020b\n
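
              For example (input file, output directory and prefix are placeholders), Prokka annotates an assembled bacterial genome:

              prokka --outdir annotation --prefix sample1 contigs.fa\n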

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty prokka/1.14.5-gompi-2020b - x x x x x prokka/1.14.5-gompi-2019b - x x - x x"}, {"location": "available_software/detail/protobuf-python/", "title": "protobuf-python", "text": ""}, {"location": "available_software/detail/protobuf-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using protobuf-python, load one of these modules using a module load command like:

              module load protobuf-python/4.24.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x protobuf-python/4.23.0-GCCcore-12.2.0 x x x x x x protobuf-python/3.19.4-GCCcore-11.3.0 x x x x x x protobuf-python/3.17.3-GCCcore-11.2.0 x x x x x x protobuf-python/3.17.3-GCCcore-10.3.0 x x x x x x protobuf-python/3.14.0-GCCcore-10.2.0 x x x x x x protobuf-python/3.13.0-foss-2020a-Python-3.8.2 - x x - x x protobuf-python/3.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/protobuf/", "title": "protobuf", "text": ""}, {"location": "available_software/detail/protobuf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which protobuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using protobuf, load one of these modules using a module load command like:

              module load protobuf/24.0-GCCcore-12.3.0\n
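
              As an illustration (message.proto is a placeholder), the protoc compiler shipped with protobuf generates C++ code from a message definition:

              protoc --cpp_out=. message.proto\n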

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty protobuf/24.0-GCCcore-12.3.0 x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x protobuf/3.19.4-GCCcore-11.3.0 x x x x x x protobuf/3.17.3-GCCcore-11.2.0 x x x x x x protobuf/3.17.3-GCCcore-10.3.0 x x x x x x protobuf/3.14.0-GCCcore-10.2.0 x x x x x x protobuf/3.13.0-GCCcore-9.3.0 - x x - x x protobuf/3.10.0-GCCcore-8.3.0 - x x - x x protobuf/2.5.0-GCCcore-10.2.0 - x x - x x protobuf/2.5.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/psutil/", "title": "psutil", "text": ""}, {"location": "available_software/detail/psutil/#available-modules", "title": "Available modules", "text": "

              The overview below shows which psutil installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using psutil, load one of these modules using a module load command like:

              module load psutil/5.9.5-GCCcore-12.2.0\n
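
              For a quick, purely illustrative check of the loaded module, psutil can report basic system usage from Python:

              python -c "import psutil; print(psutil.cpu_count(), psutil.virtual_memory().percent)"\n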

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty psutil/5.9.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/psycopg2/", "title": "psycopg2", "text": ""}, {"location": "available_software/detail/psycopg2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which psycopg2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using psycopg2, load one of these modules using a module load command like:

              module load psycopg2/2.9.6-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty psycopg2/2.9.6-GCCcore-11.3.0 x x x x x x psycopg2/2.9.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pugixml/", "title": "pugixml", "text": ""}, {"location": "available_software/detail/pugixml/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pugixml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pugixml, load one of these modules using a module load command like:

              module load pugixml/1.12.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pugixml/1.12.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pullseq/", "title": "pullseq", "text": ""}, {"location": "available_software/detail/pullseq/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pullseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pullseq, load one of these modules using a module load command like:

              module load pullseq/1.0.2-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pullseq/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/purge_dups/", "title": "purge_dups", "text": ""}, {"location": "available_software/detail/purge_dups/#available-modules", "title": "Available modules", "text": "

              The overview below shows which purge_dups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using purge_dups, load one of these modules using a module load command like:

              module load purge_dups/1.2.5-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty purge_dups/1.2.5-foss-2021b x x x - x x"}, {"location": "available_software/detail/pv/", "title": "pv", "text": ""}, {"location": "available_software/detail/pv/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pv, load one of these modules using a module load command like:

              module load pv/1.7.24-GCCcore-12.3.0\n
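
              As an illustration (the archive name is a placeholder), pv shows progress while data flows through a pipe:

              pv input.tar.gz | tar xz\n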

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pv/1.7.24-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/py-cpuinfo/", "title": "py-cpuinfo", "text": ""}, {"location": "available_software/detail/py-cpuinfo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which py-cpuinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using py-cpuinfo, load one of these modules using a module load command like:

              module load py-cpuinfo/9.0.0-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty py-cpuinfo/9.0.0-GCCcore-12.2.0 x x x x x x py-cpuinfo/9.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/py3Dmol/", "title": "py3Dmol", "text": ""}, {"location": "available_software/detail/py3Dmol/#available-modules", "title": "Available modules", "text": "

              The overview below shows which py3Dmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using py3Dmol, load one of these modules using a module load command like:

              module load py3Dmol/2.0.1.post1-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty py3Dmol/2.0.1.post1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pyBigWig/", "title": "pyBigWig", "text": ""}, {"location": "available_software/detail/pyBigWig/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyBigWig, load one of these modules using a module load command like:

              module load pyBigWig/0.3.18-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyBigWig/0.3.18-foss-2022a x x x x x x pyBigWig/0.3.18-foss-2021b x x x - x x pyBigWig/0.3.18-GCCcore-10.2.0 - x x x x x pyBigWig/0.3.17-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/pyEGA3/", "title": "pyEGA3", "text": ""}, {"location": "available_software/detail/pyEGA3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyEGA3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyEGA3, load one of these modules using a module load command like:

              module load pyEGA3/5.0.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyEGA3/5.0.2-GCCcore-12.3.0 x x x x x x pyEGA3/4.0.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/pyGenomeTracks/", "title": "pyGenomeTracks", "text": ""}, {"location": "available_software/detail/pyGenomeTracks/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyGenomeTracks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyGenomeTracks, load one of these modules using a module load command like:

              module load pyGenomeTracks/3.8-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyGenomeTracks/3.8-foss-2022a x x x x x x pyGenomeTracks/3.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/pySCENIC/", "title": "pySCENIC", "text": ""}, {"location": "available_software/detail/pySCENIC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pySCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pySCENIC, load one of these modules using a module load command like:

              module load pySCENIC/0.10.3-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pySCENIC/0.10.3-intel-2020a-Python-3.8.2 - x x - x x pySCENIC/0.10.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyWannier90/", "title": "pyWannier90", "text": ""}, {"location": "available_software/detail/pyWannier90/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyWannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyWannier90, load one of these modules using a module load command like:

              module load pyWannier90/2021-12-07-gomkl-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyWannier90/2021-12-07-gomkl-2021a x x x - x x pyWannier90/2021-12-07-foss-2021a x x x - x x"}, {"location": "available_software/detail/pybedtools/", "title": "pybedtools", "text": ""}, {"location": "available_software/detail/pybedtools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pybedtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pybedtools, load one of these modules using a module load command like:

              module load pybedtools/0.9.0-GCC-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pybedtools/0.9.0-GCC-12.2.0 x x x x x x pybedtools/0.9.0-GCC-11.3.0 x x x x x x pybedtools/0.8.2-GCC-11.2.0-Python-2.7.18 x x x x x x pybedtools/0.8.2-GCC-11.2.0 x x x - x x pybedtools/0.8.2-GCC-10.2.0-Python-2.7.18 - x x x x x pybedtools/0.8.2-GCC-10.2.0 - x x x x x pybedtools/0.8.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/pybind11/", "title": "pybind11", "text": ""}, {"location": "available_software/detail/pybind11/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pybind11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pybind11, load one of these modules using a module load command like:

              module load pybind11/2.11.1-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pybind11/2.11.1-GCCcore-13.2.0 x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x pybind11/2.9.2-GCCcore-11.3.0 x x x x x x pybind11/2.7.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x pybind11/2.7.1-GCCcore-11.2.0 x x x x x x pybind11/2.6.2-GCCcore-10.3.0 x x x x x x pybind11/2.6.0-GCCcore-10.2.0 x x x x x x pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2 x x x x x x pybind11/2.4.3-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycocotools/", "title": "pycocotools", "text": ""}, {"location": "available_software/detail/pycocotools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pycocotools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pycocotools, load one of these modules using a module load command like:

              module load pycocotools/2.0.4-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pycocotools/2.0.4-foss-2021a x x x - x x pycocotools/2.0.1-foss-2019b-Python-3.7.4 - x x - x x pycocotools/2.0.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycodestyle/", "title": "pycodestyle", "text": ""}, {"location": "available_software/detail/pycodestyle/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pycodestyle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pycodestyle, load one of these modules using a module load command like:

              module load pycodestyle/2.11.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pycodestyle/2.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/pydantic/", "title": "pydantic", "text": ""}, {"location": "available_software/detail/pydantic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pydantic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pydantic, load one of these modules using a module load command like:

              module load pydantic/2.5.3-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pydantic/2.5.3-GCCcore-12.3.0 x x x x x x pydantic/2.5.3-GCCcore-12.2.0 x x x x x x pydantic/1.10.13-GCCcore-12.3.0 x x x x x x pydantic/1.10.4-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/pydicom/", "title": "pydicom", "text": ""}, {"location": "available_software/detail/pydicom/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pydicom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pydicom, load one of these modules using a module load command like:

              module load pydicom/2.3.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pydicom/2.3.0-GCCcore-11.3.0 x x x x x x pydicom/2.2.2-GCCcore-10.3.0 x x x - x x pydicom/2.1.2-GCCcore-10.2.0 x x x x x x pydicom/1.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pydot/", "title": "pydot", "text": ""}, {"location": "available_software/detail/pydot/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pydot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pydot, load one of these modules using a module load command like:

              module load pydot/1.4.2-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pydot/1.4.2-GCCcore-11.3.0 x x x x x x pydot/1.4.2-GCCcore-11.2.0 x x x x x x pydot/1.4.2-GCCcore-10.3.0 x x x x x x pydot/1.4.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/pyfaidx/", "title": "pyfaidx", "text": ""}, {"location": "available_software/detail/pyfaidx/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyfaidx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyfaidx, load one of these modules using a module load command like:

              module load pyfaidx/0.7.2.1-GCCcore-12.2.0\n
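
              For example (genome.fa is a placeholder for an existing FASTA file), pyfaidx gives indexed, random access to its sequences:

              python -c "from pyfaidx import Fasta; fa = Fasta('genome.fa'); print(list(fa.keys())[:5])"\n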

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x pyfaidx/0.7.1-GCCcore-11.3.0 x x x x x x pyfaidx/0.7.0-GCCcore-11.2.0 x x x - x x pyfaidx/0.6.3.1-GCCcore-10.3.0 x x x - x x pyfaidx/0.5.9.5-GCCcore-10.2.0 - x x x x x pyfaidx/0.5.9.5-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyfasta/", "title": "pyfasta", "text": ""}, {"location": "available_software/detail/pyfasta/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyfasta, load one of these modules using a module load command like:

              module load pyfasta/0.5.2-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyfasta/0.5.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygmo/", "title": "pygmo", "text": ""}, {"location": "available_software/detail/pygmo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pygmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pygmo, load one of these modules using a module load command like:

              module load pygmo/2.16.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pygmo/2.16.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygraphviz/", "title": "pygraphviz", "text": ""}, {"location": "available_software/detail/pygraphviz/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pygraphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pygraphviz, load one of these modules using a module load command like:

              module load pygraphviz/1.11-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pygraphviz/1.11-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pyiron/", "title": "pyiron", "text": ""}, {"location": "available_software/detail/pyiron/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyiron installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyiron, load one of these modules using a module load command like:

              module load pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2 x x x x x x pyiron/0.2.6-hpcugent-2022c-intel-2020a-Python-3.8.2 - - - - - x pyiron/0.2.6-hpcugent-2022b-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2022-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2021-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2020-intel-2020a-Python-3.8.2 - x x - x -"}, {"location": "available_software/detail/pymatgen/", "title": "pymatgen", "text": ""}, {"location": "available_software/detail/pymatgen/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pymatgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pymatgen, load one of these modules using a module load command like:

              module load pymatgen/2022.9.21-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pymatgen/2022.9.21-foss-2022a x x x - x x pymatgen/2022.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/pymbar/", "title": "pymbar", "text": ""}, {"location": "available_software/detail/pymbar/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pymbar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pymbar, load one of these modules using a module load command like:

              module load pymbar/3.0.3-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pymbar/3.0.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pymca/", "title": "pymca", "text": ""}, {"location": "available_software/detail/pymca/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pymca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pymca, load one of these modules using a module load command like:

              module load pymca/5.6.3-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pymca/5.6.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/pyobjcryst/", "title": "pyobjcryst", "text": ""}, {"location": "available_software/detail/pyobjcryst/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

              To start using pyobjcryst, load one of these modules using a module load command like:

              module load pyobjcryst/2.2.1-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyobjcryst/2.2.1-intel-2020a-Python-3.8.2 - - - - - x pyobjcryst/2.2.1-foss-2021b x x x - x x pyobjcryst/2.1.0.post2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyodbc/", "title": "pyodbc", "text": ""}, {"location": "available_software/detail/pyodbc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyodbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pyodbc, load one of these modules using a module load command like:

              module load pyodbc/4.0.39-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyodbc/4.0.39-foss-2022b x x x x x x"}, {"location": "available_software/detail/pyparsing/", "title": "pyparsing", "text": ""}, {"location": "available_software/detail/pyparsing/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyparsing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pyparsing, load one of these modules using a module load command like:

              module load pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/pyproj/", "title": "pyproj", "text": ""}, {"location": "available_software/detail/pyproj/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyproj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pyproj, load one of these modules using a module load command like:

              module load pyproj/3.6.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyproj/3.6.0-GCCcore-12.3.0 x x x x x x pyproj/3.5.0-GCCcore-12.2.0 x x x x x x pyproj/3.4.0-GCCcore-11.3.0 x x x x x x pyproj/3.3.1-GCCcore-11.2.0 x x x - x x pyproj/3.0.1-GCCcore-10.2.0 - x x x x x pyproj/2.6.1.post1-GCCcore-9.3.0-Python-3.8.2 - x x - x x pyproj/2.4.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyro-api/", "title": "pyro-api", "text": ""}, {"location": "available_software/detail/pyro-api/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyro-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pyro-api, load one of these modules using a module load command like:

              module load pyro-api/0.1.2-fosscuda-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyro-api/0.1.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pyro-ppl/", "title": "pyro-ppl", "text": ""}, {"location": "available_software/detail/pyro-ppl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyro-ppl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pyro-ppl, load one of these modules using a module load command like:

              module load pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0\n
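
              The CUDA build above is only installed on some clusters (see the table below). A minimal sketch, assuming the cluster modules used on this infrastructure are used to select the target cluster before submitting a job, with the software loaded inside the job script itself:

              module swap cluster/accelgor   # target a GPU cluster that has the CUDA build according to the table below\n# inside the job script submitted to that cluster:\nmodule load pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0\n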

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0 x - x - x - pyro-ppl/1.8.4-foss-2022a x x x x x x pyro-ppl/1.5.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pysamstats/", "title": "pysamstats", "text": ""}, {"location": "available_software/detail/pysamstats/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pysamstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pysamstats, load one of these modules using a module load command like:

              module load pysamstats/1.1.2-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pysamstats/1.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pysndfx/", "title": "pysndfx", "text": ""}, {"location": "available_software/detail/pysndfx/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pysndfx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pysndfx, load one of these modules using a module load command like:

              module load pysndfx/0.3.6-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pysndfx/0.3.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyspoa/", "title": "pyspoa", "text": ""}, {"location": "available_software/detail/pyspoa/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pyspoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pyspoa, load one of these modules using a module load command like:

              module load pyspoa/0.0.9-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pyspoa/0.0.9-GCC-11.3.0 x x x x x x pyspoa/0.0.8-GCC-11.2.0 x x x - x x pyspoa/0.0.8-GCC-10.3.0 x x x - x x pyspoa/0.0.8-GCC-10.2.0 - x x x x x pyspoa/0.0.4-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pytest-flakefinder/", "title": "pytest-flakefinder", "text": ""}, {"location": "available_software/detail/pytest-flakefinder/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pytest-flakefinder, load one of these modules using a module load command like:

              module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pytest-rerunfailures/", "title": "pytest-rerunfailures", "text": ""}, {"location": "available_software/detail/pytest-rerunfailures/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pytest-rerunfailures, load one of these modules using a module load command like:

              module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x pytest-rerunfailures/12.0-GCCcore-12.2.0 x x x x x x pytest-rerunfailures/11.1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-shard/", "title": "pytest-shard", "text": ""}, {"location": "available_software/detail/pytest-shard/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pytest-shard, load one of these modules using a module load command like:

              module load pytest-shard/0.1.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x pytest-shard/0.1.2-GCCcore-12.2.0 x x x x x x pytest-shard/0.1.2-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-xdist/", "title": "pytest-xdist", "text": ""}, {"location": "available_software/detail/pytest-xdist/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pytest-xdist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pytest-xdist, load one of these modules using a module load command like:

              module load pytest-xdist/3.3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pytest-xdist/3.3.1-GCCcore-12.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.2.0 x - x - x - pytest-xdist/2.3.0-GCCcore-10.3.0 x x x x x x pytest-xdist/2.3.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/pytest/", "title": "pytest", "text": ""}, {"location": "available_software/detail/pytest/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pytest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pytest, load one of these modules using a module load command like:

              module load pytest/7.4.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pytest/7.4.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pythermalcomfort/", "title": "pythermalcomfort", "text": ""}, {"location": "available_software/detail/pythermalcomfort/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pythermalcomfort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pythermalcomfort, load one of these modules using a module load command like:

              module load pythermalcomfort/2.8.10-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pythermalcomfort/2.8.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-Levenshtein/", "title": "python-Levenshtein", "text": ""}, {"location": "available_software/detail/python-Levenshtein/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-Levenshtein, load one of these modules using a module load command like:

              module load python-Levenshtein/0.12.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-Levenshtein/0.12.1-foss-2020b - x x x x x python-Levenshtein/0.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-igraph/", "title": "python-igraph", "text": ""}, {"location": "available_software/detail/python-igraph/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-igraph, load one of these modules using a module load command like:

              module load python-igraph/0.11.4-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-igraph/0.11.4-foss-2023a x x x x x x python-igraph/0.10.3-foss-2022a x x x x x x python-igraph/0.9.8-foss-2021b x x x x x x python-igraph/0.9.6-foss-2021a x x x x x x python-igraph/0.9.0-fosscuda-2020b - - - - x - python-igraph/0.9.0-foss-2020b - x x x x x python-igraph/0.8.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/python-irodsclient/", "title": "python-irodsclient", "text": ""}, {"location": "available_software/detail/python-irodsclient/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-irodsclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-irodsclient, load one of these modules using a module load command like:

              module load python-irodsclient/1.1.4-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-irodsclient/1.1.4-GCCcore-11.2.0 x x x - x x python-irodsclient/1.1.4-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-isal/", "title": "python-isal", "text": ""}, {"location": "available_software/detail/python-isal/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-isal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-isal, load one of these modules using a module load command like:

              module load python-isal/1.1.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-isal/1.1.0-GCCcore-11.3.0 x x x x x x python-isal/0.11.1-GCCcore-11.2.0 x x x - x x python-isal/0.11.1-GCCcore-10.2.0 - x x x x x python-isal/0.11.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-louvain/", "title": "python-louvain", "text": ""}, {"location": "available_software/detail/python-louvain/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-louvain, load one of these modules using a module load command like:

              module load python-louvain/0.16-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-louvain/0.16-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-parasail/", "title": "python-parasail", "text": ""}, {"location": "available_software/detail/python-parasail/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-parasail, load one of these modules using a module load command like:

              module load python-parasail/1.3.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-parasail/1.3.3-foss-2022a x x x x x x python-parasail/1.2.4-fosscuda-2020b - - - - x - python-parasail/1.2.4-foss-2021b x x x - x x python-parasail/1.2.4-foss-2021a x x x - x x python-parasail/1.2.2-intel-2020a-Python-3.8.2 - x x - x x python-parasail/1.2-intel-2019b-Python-3.7.4 - x x - x x python-parasail/1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-telegram-bot/", "title": "python-telegram-bot", "text": ""}, {"location": "available_software/detail/python-telegram-bot/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-telegram-bot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-telegram-bot, load one of these modules using a module load command like:

              module load python-telegram-bot/20.0a0-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-telegram-bot/20.0a0-GCCcore-10.2.0 x x x - x x"}, {"location": "available_software/detail/python-weka-wrapper3/", "title": "python-weka-wrapper3", "text": ""}, {"location": "available_software/detail/python-weka-wrapper3/#available-modules", "title": "Available modules", "text": "

              The overview below shows which python-weka-wrapper3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using python-weka-wrapper3, load one of these modules using a module load command like:

              module load python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pythran/", "title": "pythran", "text": ""}, {"location": "available_software/detail/pythran/#available-modules", "title": "Available modules", "text": "

              The overview below shows which pythran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using pythran, load one of these modules using a module load command like:

              module load pythran/0.9.4.post1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty pythran/0.9.4.post1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qcat/", "title": "qcat", "text": ""}, {"location": "available_software/detail/qcat/#available-modules", "title": "Available modules", "text": "

              The overview below shows which qcat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using qcat, load one of these modules using a module load command like:

              module load qcat/1.1.0-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty qcat/1.1.0-intel-2020a-Python-3.8.2 - x x - x x qcat/1.1.0-intel-2019b-Python-3.7.4 - x x - x x qcat/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qnorm/", "title": "qnorm", "text": ""}, {"location": "available_software/detail/qnorm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which qnorm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using qnorm, load one of these modules using a module load command like:

              module load qnorm/0.8.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty qnorm/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/rMATS-turbo/", "title": "rMATS-turbo", "text": ""}, {"location": "available_software/detail/rMATS-turbo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rMATS-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rMATS-turbo, load one of these modules using a module load command like:

              module load rMATS-turbo/4.1.1-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rMATS-turbo/4.1.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/radian/", "title": "radian", "text": ""}, {"location": "available_software/detail/radian/#available-modules", "title": "Available modules", "text": "

              The overview below shows which radian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using radian, load one of these modules using a module load command like:

              module load radian/0.6.9-foss-2022b\n
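
              After loading this module, the radian command it provides starts the alternative R console; a minimal sketch:

              module load radian/0.6.9-foss-2022b\nradian   # start the radian R console\n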

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty radian/0.6.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/rasterio/", "title": "rasterio", "text": ""}, {"location": "available_software/detail/rasterio/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rasterio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rasterio, load one of these modules using a module load command like:

              module load rasterio/1.3.8-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rasterio/1.3.8-foss-2022b x x x x x x rasterio/1.2.10-foss-2021b x x x - x x rasterio/1.1.7-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rasterstats/", "title": "rasterstats", "text": ""}, {"location": "available_software/detail/rasterstats/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rasterstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rasterstats, load one of these modules using a module load command like:

              module load rasterstats/0.15.0-foss-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rasterstats/0.15.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rclone/", "title": "rclone", "text": ""}, {"location": "available_software/detail/rclone/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rclone, load one of these modules using a module load command like:

              module load rclone/1.65.2\n
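
              rclone carries no toolchain suffix, so it should combine with modules from any toolchain; a minimal sketch to verify the command is on the PATH after loading:

              module load rclone/1.65.2\nrclone version   # print the rclone version as a quick check\n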

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rclone/1.65.2 x x x x x x"}, {"location": "available_software/detail/re2c/", "title": "re2c", "text": ""}, {"location": "available_software/detail/re2c/#available-modules", "title": "Available modules", "text": "

              The overview below shows which re2c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using re2c, load one of these modules using a module load command like:

              module load re2c/3.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty re2c/3.1-GCCcore-12.3.0 x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x re2c/2.2-GCCcore-11.3.0 x x x x x x re2c/2.2-GCCcore-11.2.0 x x x x x x re2c/2.1.1-GCCcore-10.3.0 x x x x x x re2c/2.0.3-GCCcore-10.2.0 x x x x x x re2c/1.3-GCCcore-9.3.0 - x x - x x re2c/1.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/redis-py/", "title": "redis-py", "text": ""}, {"location": "available_software/detail/redis-py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which redis-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using redis-py, load one of these modules using a module load command like:

              module load redis-py/4.5.1-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty redis-py/4.5.1-foss-2022a x x x x x x redis-py/4.3.3-foss-2021b x x x - x x redis-py/4.3.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/regionmask/", "title": "regionmask", "text": ""}, {"location": "available_software/detail/regionmask/#available-modules", "title": "Available modules", "text": "

              The overview below shows which regionmask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using regionmask, load one of these modules using a module load command like:

              module load regionmask/0.10.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty regionmask/0.10.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/request/", "title": "request", "text": ""}, {"location": "available_software/detail/request/#available-modules", "title": "Available modules", "text": "

              The overview below shows which request installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using request, load one of these modules using a module load command like:

              module load request/2.88.1-fosscuda-2020b-nodejs-12.19.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty request/2.88.1-fosscuda-2020b-nodejs-12.19.0 - - - - x -"}, {"location": "available_software/detail/rethinking/", "title": "rethinking", "text": ""}, {"location": "available_software/detail/rethinking/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rethinking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rethinking, load one of these modules using a module load command like:

              module load rethinking/2.40-20230914-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rethinking/2.40-20230914-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/rgdal/", "title": "rgdal", "text": ""}, {"location": "available_software/detail/rgdal/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rgdal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rgdal, load one of these modules using a module load command like:

              module load rgdal/1.5-23-foss-2021a-R-4.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rgdal/1.5-23-foss-2021a-R-4.1.0 - x x - x x rgdal/1.5-23-foss-2020b-R-4.0.4 - x x x x x rgdal/1.5-16-foss-2020a-R-4.0.0 - x x - x x rgdal/1.4-8-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rgeos/", "title": "rgeos", "text": ""}, {"location": "available_software/detail/rgeos/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rgeos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rgeos, load one of these modules using a module load command like:

              module load rgeos/0.5-5-foss-2021a-R-4.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rgeos/0.5-5-foss-2021a-R-4.1.0 - x x - x x rgeos/0.5-5-foss-2020a-R-4.0.0 - x x - x x rgeos/0.5-2-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rickflow/", "title": "rickflow", "text": ""}, {"location": "available_software/detail/rickflow/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rickflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rickflow, load one of these modules using a module load command like:

              module load rickflow/0.7.0-intel-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rickflow/0.7.0-intel-2019b-Python-3.7.4 - x x - x x rickflow/0.7.0-20200529-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rioxarray/", "title": "rioxarray", "text": ""}, {"location": "available_software/detail/rioxarray/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rioxarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rioxarray, load one of these modules using a module load command like:

              module load rioxarray/0.11.1-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rioxarray/0.11.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/rjags/", "title": "rjags", "text": ""}, {"location": "available_software/detail/rjags/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rjags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rjags, load one of these modules using a module load command like:

              module load rjags/4-13-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rjags/4-13-foss-2022a-R-4.2.1 x x x x x x rjags/4-13-foss-2021b-R-4.2.0 x x x - x x rjags/4-10-foss-2020b-R-4.0.3 x x x x x x"}, {"location": "available_software/detail/rmarkdown/", "title": "rmarkdown", "text": ""}, {"location": "available_software/detail/rmarkdown/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rmarkdown installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rmarkdown, load one of these modules using a module load command like:

              module load rmarkdown/2.20-foss-2021a-R-4.1.0\n
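
              Loading this module should also pull in the R 4.1.0 module it was built against, so Rscript becomes available for non-interactive rendering; a minimal sketch (report.Rmd is a hypothetical input file):

              module load rmarkdown/2.20-foss-2021a-R-4.1.0\nRscript -e 'rmarkdown::render(\"report.Rmd\")'   # render a hypothetical R Markdown file\n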

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rmarkdown/2.20-foss-2021a-R-4.1.0 - x x x x x"}, {"location": "available_software/detail/rpy2/", "title": "rpy2", "text": ""}, {"location": "available_software/detail/rpy2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rpy2, load one of these modules using a module load command like:

              module load rpy2/3.5.10-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rpy2/3.5.10-foss-2022a x x x x x x rpy2/3.4.5-foss-2021b x x x x x x rpy2/3.4.5-foss-2021a x x x x x x rpy2/3.2.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rstanarm/", "title": "rstanarm", "text": ""}, {"location": "available_software/detail/rstanarm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rstanarm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rstanarm, load one of these modules using a module load command like:

              module load rstanarm/2.19.3-foss-2019b-R-3.6.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rstanarm/2.19.3-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rstudio/", "title": "rstudio", "text": ""}, {"location": "available_software/detail/rstudio/#available-modules", "title": "Available modules", "text": "

              The overview below shows which rstudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using rstudio, load one of these modules using a module load command like:

              module load rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0 - x - - - -"}, {"location": "available_software/detail/ruamel.yaml/", "title": "ruamel.yaml", "text": ""}, {"location": "available_software/detail/ruamel.yaml/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ruamel.yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ruamel.yaml, load one of these modules using a module load command like:

              module load ruamel.yaml/0.17.32-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ruamel.yaml/0.17.32-GCCcore-12.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ruffus/", "title": "ruffus", "text": ""}, {"location": "available_software/detail/ruffus/#available-modules", "title": "Available modules", "text": "

              The overview below shows which ruffus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using ruffus, load one of these modules using a module load command like:

              module load ruffus/2.8.4-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty ruffus/2.8.4-foss-2021b x x x x x x"}, {"location": "available_software/detail/s3fs/", "title": "s3fs", "text": ""}, {"location": "available_software/detail/s3fs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which s3fs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using s3fs, load one of these modules using a module load command like:

              module load s3fs/2023.12.2-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty s3fs/2023.12.2-foss-2023a x x x x x x"}, {"location": "available_software/detail/samblaster/", "title": "samblaster", "text": ""}, {"location": "available_software/detail/samblaster/#available-modules", "title": "Available modules", "text": "

              The overview below shows which samblaster installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using samblaster, load one of these modules using a module load command like:

              module load samblaster/0.1.26-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty samblaster/0.1.26-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/samclip/", "title": "samclip", "text": ""}, {"location": "available_software/detail/samclip/#available-modules", "title": "Available modules", "text": "

              The overview below shows which samclip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using samclip, load one of these modules using a module load command like:

              module load samclip/0.4.0-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty samclip/0.4.0-GCCcore-11.2.0 x x x - x x samclip/0.4.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/sansa/", "title": "sansa", "text": ""}, {"location": "available_software/detail/sansa/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sansa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using sansa, load one of these modules using a module load command like:

              module load sansa/0.0.7-gompi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sansa/0.0.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/sbt/", "title": "sbt", "text": ""}, {"location": "available_software/detail/sbt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sbt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using sbt, load one of these modules using a module load command like:

              module load sbt/1.3.13-Java-1.8\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sbt/1.3.13-Java-1.8 - - x - x -"}, {"location": "available_software/detail/scArches/", "title": "scArches", "text": ""}, {"location": "available_software/detail/scArches/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scArches installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scArches, load one of these modules using a module load command like:

              module load scArches/0.5.6-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scArches/0.5.6-foss-2021a-CUDA-11.3.1 x - - - x - scArches/0.5.6-foss-2021a x x x x x x"}, {"location": "available_software/detail/scCODA/", "title": "scCODA", "text": ""}, {"location": "available_software/detail/scCODA/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scCODA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scCODA, load one of these modules using a module load command like:

              module load scCODA/0.1.9-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scCODA/0.1.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/scGeneFit/", "title": "scGeneFit", "text": ""}, {"location": "available_software/detail/scGeneFit/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scGeneFit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scGeneFit, load one of these modules using a module load command like:

              module load scGeneFit/1.0.2-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scGeneFit/1.0.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/scHiCExplorer/", "title": "scHiCExplorer", "text": ""}, {"location": "available_software/detail/scHiCExplorer/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scHiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scHiCExplorer, load one of these modules using a module load command like:

              module load scHiCExplorer/7-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scHiCExplorer/7-foss-2022a x x x x x x"}, {"location": "available_software/detail/scPred/", "title": "scPred", "text": ""}, {"location": "available_software/detail/scPred/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scPred installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scPred, load one of these modules using a module load command like:

              module load scPred/1.9.2-foss-2021b-R-4.1.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scPred/1.9.2-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/scVelo/", "title": "scVelo", "text": ""}, {"location": "available_software/detail/scVelo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scVelo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scVelo, load one of these modules using a module load command like:

              module load scVelo/0.2.5-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scVelo/0.2.5-foss-2022a x x x x x x scVelo/0.2.3-foss-2021a - x x - x x scVelo/0.1.24-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scanpy/", "title": "scanpy", "text": ""}, {"location": "available_software/detail/scanpy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scanpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scanpy, load one of these modules using a module load command like:

              module load scanpy/1.9.8-foss-2023a\n
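
              Modules that share a toolchain can be combined in one session; for instance, the foss-2023a builds of scanpy and python-igraph listed in this documentation can be loaded together (a sketch, not a required combination):

              module load scanpy/1.9.8-foss-2023a\nmodule load python-igraph/0.11.4-foss-2023a   # same foss-2023a toolchain as the scanpy module above\n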

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scanpy/1.9.8-foss-2023a x x x x x x scanpy/1.9.1-foss-2022a x x x x x x scanpy/1.9.1-foss-2021b x x x x x x scanpy/1.8.2-foss-2021b x x x x x x scanpy/1.8.1-foss-2021a x x x x x x scanpy/1.8.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/sceasy/", "title": "sceasy", "text": ""}, {"location": "available_software/detail/sceasy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sceasy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using sceasy, load one of these modules using a module load command like:

              module load sceasy/0.0.7-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sceasy/0.0.7-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/scib-metrics/", "title": "scib-metrics", "text": ""}, {"location": "available_software/detail/scib-metrics/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scib-metrics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scib-metrics, load one of these modules using a module load command like:

              module load scib-metrics/0.3.3-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scib-metrics/0.3.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scib/", "title": "scib", "text": ""}, {"location": "available_software/detail/scib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scib, load one of these modules using a module load command like:

              module load scib/1.1.3-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scib/1.1.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-bio/", "title": "scikit-bio", "text": ""}, {"location": "available_software/detail/scikit-bio/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scikit-bio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scikit-bio, load one of these modules using a module load command like:

              module load scikit-bio/0.5.7-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scikit-bio/0.5.7-foss-2022a x x x x x x scikit-bio/0.5.7-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-build/", "title": "scikit-build", "text": ""}, {"location": "available_software/detail/scikit-build/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scikit-build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scikit-build, load one of these modules using a module load command like:

              module load scikit-build/0.17.6-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x scikit-build/0.17.2-GCCcore-12.2.0 x x x x x x scikit-build/0.15.0-GCCcore-11.3.0 x x x x x x scikit-build/0.11.1-fosscuda-2020b x - - - x - scikit-build/0.11.1-foss-2020b - x x x x x scikit-build/0.11.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/scikit-extremes/", "title": "scikit-extremes", "text": ""}, {"location": "available_software/detail/scikit-extremes/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scikit-extremes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scikit-extremes, load one of these modules using a module load command like:

              module load scikit-extremes/2022.4.10-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scikit-extremes/2022.4.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/scikit-image/", "title": "scikit-image", "text": ""}, {"location": "available_software/detail/scikit-image/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scikit-image installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scikit-image, load one of these modules using a module load command like:

              module load scikit-image/0.19.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scikit-image/0.19.3-foss-2022a x x x x x x scikit-image/0.19.1-foss-2021b x x x x x x scikit-image/0.18.3-foss-2021a x x x - x x scikit-image/0.18.1-fosscuda-2020b x - - - x - scikit-image/0.18.1-foss-2020b - x x x x x scikit-image/0.17.1-foss-2020a-Python-3.8.2 - x x - x x scikit-image/0.16.2-intel-2019b-Python-3.7.4 - x x - x x scikit-image/0.16.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scikit-learn/", "title": "scikit-learn", "text": ""}, {"location": "available_software/detail/scikit-learn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scikit-learn, load one of these modules using a module load command like:

              module load scikit-learn/1.4.0-gfbf-2023b\n
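
              In a batch job, the module load line belongs inside the job script; a minimal sketch, assuming PBS-style directives are used for batch jobs here (adapt to the local scheduler if needed) and that train.py is a hypothetical scikit-learn script:

              #!/bin/bash\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=1:00:00\nmodule load scikit-learn/1.4.0-gfbf-2023b\ncd $PBS_O_WORKDIR\npython train.py   # run the hypothetical training script\n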

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scikit-learn/1.4.0-gfbf-2023b x x x x x x scikit-learn/1.3.2-gfbf-2023b x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x scikit-learn/1.2.1-gfbf-2022b x x x x x x scikit-learn/1.1.2-intel-2022a x x x x x x scikit-learn/1.1.2-foss-2022a x x x x x x scikit-learn/1.0.1-intel-2021b x x x - x x scikit-learn/1.0.1-foss-2021b x x x x x x scikit-learn/0.24.2-foss-2021a x x x x x x scikit-learn/0.23.2-intel-2020b - x x - x x scikit-learn/0.23.2-fosscuda-2020b x - - - x - scikit-learn/0.23.2-foss-2020b - x x x x x scikit-learn/0.23.1-intel-2020a-Python-3.8.2 x x x x x x scikit-learn/0.23.1-foss-2020a-Python-3.8.2 - x x - x x scikit-learn/0.21.3-intel-2019b-Python-3.7.4 - x x - x x scikit-learn/0.21.3-foss-2019b-Python-3.7.4 x x x - x x scikit-learn/0.20.4-intel-2019b-Python-2.7.16 - x x - x x scikit-learn/0.20.4-foss-2021b-Python-2.7.18 x x x x x x scikit-learn/0.20.4-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/scikit-misc/", "title": "scikit-misc", "text": ""}, {"location": "available_software/detail/scikit-misc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scikit-misc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scikit-misc, load one of these modules using a module load command like:

              module load scikit-misc/0.1.4-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scikit-misc/0.1.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-optimize/", "title": "scikit-optimize", "text": ""}, {"location": "available_software/detail/scikit-optimize/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scikit-optimize installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scikit-optimize, load one of these modules using a module load command like:

              module load scikit-optimize/0.9.0-foss-2021a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scikit-optimize/0.9.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/scipy/", "title": "scipy", "text": ""}, {"location": "available_software/detail/scipy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scipy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scipy, load one of these modules using a module load command like:

              module load scipy/1.4.1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scipy/1.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scrublet/", "title": "scrublet", "text": ""}, {"location": "available_software/detail/scrublet/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scrublet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scrublet, load one of these modules using a module load command like:

              module load scrublet/0.2.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scrublet/0.2.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/scvi-tools/", "title": "scvi-tools", "text": ""}, {"location": "available_software/detail/scvi-tools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which scvi-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using scvi-tools, load one of these modules using a module load command like:

              module load scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1 x - - - x - scvi-tools/0.16.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/segemehl/", "title": "segemehl", "text": ""}, {"location": "available_software/detail/segemehl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which segemehl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using segemehl, load one of these modules using a module load command like:

              module load segemehl/0.3.4-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty segemehl/0.3.4-GCC-11.2.0 x x x x x x segemehl/0.3.4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/segmentation-models/", "title": "segmentation-models", "text": ""}, {"location": "available_software/detail/segmentation-models/#available-modules", "title": "Available modules", "text": "

              The overview below shows which segmentation-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

              To start using segmentation-models, load one of these modules using a module load command like:

              module load segmentation-models/1.0.1-foss-2019b-Python-3.7.4\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty segmentation-models/1.0.1-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/semla/", "title": "semla", "text": ""}, {"location": "available_software/detail/semla/#available-modules", "title": "Available modules", "text": "

              The overview below shows which semla installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using semla, load one of these modules using a module load command like:

              module load semla/1.1.6-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty semla/1.1.6-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/seqtk/", "title": "seqtk", "text": ""}, {"location": "available_software/detail/seqtk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which seqtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using seqtk, load one of these modules using a module load command like:

              module load seqtk/1.4-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty seqtk/1.4-GCC-12.3.0 x x x x x x seqtk/1.3-GCC-11.2.0 x x x - x x seqtk/1.3-GCC-10.2.0 - x x x x x seqtk/1.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/setuptools-rust/", "title": "setuptools-rust", "text": ""}, {"location": "available_software/detail/setuptools-rust/#available-modules", "title": "Available modules", "text": "

              The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using setuptools-rust, load one of these modules using a module load command like:

              module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/setuptools/", "title": "setuptools", "text": ""}, {"location": "available_software/detail/setuptools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which setuptools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using setuptools, load one of these modules using a module load command like:

              module load setuptools/64.0.3-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty setuptools/64.0.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/sf/", "title": "sf", "text": ""}, {"location": "available_software/detail/sf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using sf, load one of these modules using a module load command like:

              module load sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/shovill/", "title": "shovill", "text": ""}, {"location": "available_software/detail/shovill/#available-modules", "title": "Available modules", "text": "

              The overview below shows which shovill installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using shovill, load one of these modules using a module load command like:

              module load shovill/1.1.0-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty shovill/1.1.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/silhouetteRank/", "title": "silhouetteRank", "text": ""}, {"location": "available_software/detail/silhouetteRank/#available-modules", "title": "Available modules", "text": "

              The overview below shows which silhouetteRank installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using silhouetteRank, load one of these modules using a module load command like:

              module load silhouetteRank/1.0.5.13-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty silhouetteRank/1.0.5.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/silx/", "title": "silx", "text": ""}, {"location": "available_software/detail/silx/#available-modules", "title": "Available modules", "text": "

              The overview below shows which silx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using silx, load one of these modules using a module load command like:

              module load silx/0.14.0-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty silx/0.14.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/slepc4py/", "title": "slepc4py", "text": ""}, {"location": "available_software/detail/slepc4py/#available-modules", "title": "Available modules", "text": "

              The overview below shows which slepc4py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using slepc4py, load one of these modules using a module load command like:

              module load slepc4py/3.17.2-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty slepc4py/3.17.2-foss-2022a x x x x x x slepc4py/3.15.1-foss-2021a - x x - x x slepc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/slow5tools/", "title": "slow5tools", "text": ""}, {"location": "available_software/detail/slow5tools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which slow5tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using slow5tools, load one of these modules using a module load command like:

              module load slow5tools/0.4.0-gompi-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty slow5tools/0.4.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/slurm-drmaa/", "title": "slurm-drmaa", "text": ""}, {"location": "available_software/detail/slurm-drmaa/#available-modules", "title": "Available modules", "text": "

              The overview below shows which slurm-drmaa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using slurm-drmaa, load one of these modules using a module load command like:

              module load slurm-drmaa/1.1.3-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty slurm-drmaa/1.1.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/smfishHmrf/", "title": "smfishHmrf", "text": ""}, {"location": "available_software/detail/smfishHmrf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which smfishHmrf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using smfishHmrf, load one of these modules using a module load command like:

              module load smfishHmrf/1.3.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty smfishHmrf/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/smithwaterman/", "title": "smithwaterman", "text": ""}, {"location": "available_software/detail/smithwaterman/#available-modules", "title": "Available modules", "text": "

              The overview below shows which smithwaterman installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using smithwaterman, load one of these modules using a module load command like:

              module load smithwaterman/20160702-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty smithwaterman/20160702-GCCcore-11.3.0 x x x x x x smithwaterman/20160702-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/smooth-topk/", "title": "smooth-topk", "text": ""}, {"location": "available_software/detail/smooth-topk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which smooth-topk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using smooth-topk, load one of these modules using a module load command like:

              module load smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1 x - - - x - smooth-topk/1.0-20210817-foss-2021a - x x - x x"}, {"location": "available_software/detail/snakemake/", "title": "snakemake", "text": ""}, {"location": "available_software/detail/snakemake/#available-modules", "title": "Available modules", "text": "

              The overview below shows which snakemake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using snakemake, load one of these modules using a module load command like:

              module load snakemake/8.4.2-foss-2023a\n
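
              snakemake workflows are typically run from a batch job rather than on a login node. A minimal, illustrative job script sketch, assuming the PBS-style job scripts described in the chapter on running batch jobs (the resource requests, walltime and Snakefile location are placeholders to adapt):

              #!/bin/bash
              #PBS -N snakemake_workflow
              #PBS -l nodes=1:ppn=4
              #PBS -l walltime=04:00:00
              module load snakemake/8.4.2-foss-2023a
              cd $PBS_O_WORKDIR        # directory the job was submitted from, containing the Snakefile
              snakemake --cores 4      # use as many cores as requested above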

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty snakemake/8.4.2-foss-2023a x x x x x x snakemake/7.32.3-foss-2022b x x x x x x snakemake/7.22.0-foss-2022a x x x x x x snakemake/7.18.2-foss-2021b x x x - x x snakemake/6.10.0-foss-2021b x x x - x x snakemake/6.1.0-foss-2020b - x x x x x snakemake/5.26.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/snappy/", "title": "snappy", "text": ""}, {"location": "available_software/detail/snappy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which snappy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using snappy, load one of these modules using a module load command like:

              module load snappy/1.1.10-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty snappy/1.1.10-GCCcore-12.3.0 x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x snappy/1.1.9-GCCcore-11.3.0 x x x x x x snappy/1.1.9-GCCcore-11.2.0 x x x x x x snappy/1.1.8-GCCcore-10.3.0 x x x x x x snappy/1.1.8-GCCcore-10.2.0 x x x x x x snappy/1.1.8-GCCcore-9.3.0 - x x - x x snappy/1.1.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/snippy/", "title": "snippy", "text": ""}, {"location": "available_software/detail/snippy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which snippy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using snippy, load one of these modules using a module load command like:

              module load snippy/4.6.0-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty snippy/4.6.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/snp-sites/", "title": "snp-sites", "text": ""}, {"location": "available_software/detail/snp-sites/#available-modules", "title": "Available modules", "text": "

              The overview below shows which snp-sites installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using snp-sites, load one of these modules using a module load command like:

              module load snp-sites/2.5.1-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty snp-sites/2.5.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/snpEff/", "title": "snpEff", "text": ""}, {"location": "available_software/detail/snpEff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which snpEff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using snpEff, load one of these modules using a module load command like:

              module load snpEff/5.0e-GCCcore-10.2.0-Java-13\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty snpEff/5.0e-GCCcore-10.2.0-Java-13 - x x - x x"}, {"location": "available_software/detail/solo/", "title": "solo", "text": ""}, {"location": "available_software/detail/solo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which solo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using solo, load one of these modules using a module load command like:

              module load solo/1.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty solo/1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/sonic/", "title": "sonic", "text": ""}, {"location": "available_software/detail/sonic/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sonic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using sonic, load one of these modules using a module load command like:

              module load sonic/20180202-gompi-2020a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sonic/20180202-gompi-2020a - x x - x x"}, {"location": "available_software/detail/spaCy/", "title": "spaCy", "text": ""}, {"location": "available_software/detail/spaCy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which spaCy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using spaCy, load one of these modules using a module load command like:

              module load spaCy/3.4.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty spaCy/3.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/spaln/", "title": "spaln", "text": ""}, {"location": "available_software/detail/spaln/#available-modules", "title": "Available modules", "text": "

              The overview below shows which spaln installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using spaln, load one of these modules using a module load command like:

              module load spaln/2.4.13f-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty spaln/2.4.13f-GCC-11.3.0 x x x x x x spaln/2.4.12-GCC-11.2.0 x x x x x x spaln/2.4.12-GCC-10.2.0 x x x x x x spaln/2.4.03-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/sparse-neighbors-search/", "title": "sparse-neighbors-search", "text": ""}, {"location": "available_software/detail/sparse-neighbors-search/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sparse-neighbors-search installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using sparse-neighbors-search, load one of these modules using a module load command like:

              module load sparse-neighbors-search/0.7-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sparse-neighbors-search/0.7-foss-2022a x x x x x x"}, {"location": "available_software/detail/sparsehash/", "title": "sparsehash", "text": ""}, {"location": "available_software/detail/sparsehash/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sparsehash installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using sparsehash, load one of these modules using a module load command like:

              module load sparsehash/2.0.4-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sparsehash/2.0.4-GCCcore-12.3.0 x x x x x x sparsehash/2.0.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/spatialreg/", "title": "spatialreg", "text": ""}, {"location": "available_software/detail/spatialreg/#available-modules", "title": "Available modules", "text": "

              The overview below shows which spatialreg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using spatialreg, load one of these modules using a module load command like:

              module load spatialreg/1.1-8-foss-2021a-R-4.1.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty spatialreg/1.1-8-foss-2021a-R-4.1.0 - x x - x x spatialreg/1.1-5-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/speech_tools/", "title": "speech_tools", "text": ""}, {"location": "available_software/detail/speech_tools/#available-modules", "title": "Available modules", "text": "

              The overview below shows which speech_tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using speech_tools, load one of these modules using a module load command like:

              module load speech_tools/2.5.0-GCCcore-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty speech_tools/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/spglib-python/", "title": "spglib-python", "text": ""}, {"location": "available_software/detail/spglib-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which spglib-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using spglib-python, load one of these modules using a module load command like:

              module load spglib-python/2.0.0-intel-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty spglib-python/2.0.0-intel-2022a x x x x x x spglib-python/2.0.0-foss-2022a x x x x x x spglib-python/1.16.3-intel-2021b x x x - x x spglib-python/1.16.3-foss-2021b x x x - x x spglib-python/1.16.1-gomkl-2021a x x x x x x spglib-python/1.16.0-intel-2020a-Python-3.8.2 x x x x x x spglib-python/1.16.0-fosscuda-2020b - - - - x - spglib-python/1.16.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/spoa/", "title": "spoa", "text": ""}, {"location": "available_software/detail/spoa/#available-modules", "title": "Available modules", "text": "

              The overview below shows which spoa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using spoa, load one of these modules using a module load command like:

              module load spoa/4.0.7-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty spoa/4.0.7-GCC-11.3.0 x x x x x x spoa/4.0.7-GCC-11.2.0 x x x - x x spoa/4.0.7-GCC-10.3.0 x x x - x x spoa/4.0.7-GCC-10.2.0 - x x x x x spoa/4.0.0-GCC-8.3.0 - x x - x x spoa/3.4.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/stardist/", "title": "stardist", "text": ""}, {"location": "available_software/detail/stardist/#available-modules", "title": "Available modules", "text": "

              The overview below shows which stardist installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using stardist, load one of these modules using a module load command like:

              module load stardist/0.8.3-foss-2021b-CUDA-11.4.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty stardist/0.8.3-foss-2021b-CUDA-11.4.1 x - - - x - stardist/0.8.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/stars/", "title": "stars", "text": ""}, {"location": "available_software/detail/stars/#available-modules", "title": "Available modules", "text": "

              The overview below shows which stars installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using stars, load one of these modules using a module load command like:

              module load stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/statsmodels/", "title": "statsmodels", "text": ""}, {"location": "available_software/detail/statsmodels/#available-modules", "title": "Available modules", "text": "

              The overview below shows which statsmodels installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using statsmodels, load one of these modules using a module load command like:

              module load statsmodels/0.14.1-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty statsmodels/0.14.1-gfbf-2023a x x x x x x statsmodels/0.14.0-gfbf-2022b x x x x x x statsmodels/0.13.1-intel-2021b x x x - x x statsmodels/0.13.1-foss-2022a x x x x x x statsmodels/0.13.1-foss-2021b x x x x x x statsmodels/0.12.2-foss-2021a x x x x x x statsmodels/0.12.1-intel-2020b - x x - x x statsmodels/0.12.1-fosscuda-2020b - - - - x - statsmodels/0.12.1-foss-2020b - x x x x x statsmodels/0.11.1-intel-2020a-Python-3.8.2 - x x - x x statsmodels/0.11.0-intel-2019b-Python-3.7.4 - x x - x x statsmodels/0.11.0-foss-2019b-Python-3.7.4 - x x - x x statsmodels/0.9.0-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/suave/", "title": "suave", "text": ""}, {"location": "available_software/detail/suave/#available-modules", "title": "Available modules", "text": "

              The overview below shows which suave installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using suave, load one of these modules using a module load command like:

              module load suave/20160529-foss-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty suave/20160529-foss-2020b - x x x x x"}, {"location": "available_software/detail/supernova/", "title": "supernova", "text": ""}, {"location": "available_software/detail/supernova/#available-modules", "title": "Available modules", "text": "

              The overview below shows which supernova installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using supernova, load one of these modules using a module load command like:

              module load supernova/2.0.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty supernova/2.0.1 - - - - - x"}, {"location": "available_software/detail/swissknife/", "title": "swissknife", "text": ""}, {"location": "available_software/detail/swissknife/#available-modules", "title": "Available modules", "text": "

              The overview below shows which swissknife installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using swissknife, load one of these modules using a module load command like:

              module load swissknife/1.80-GCCcore-8.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty swissknife/1.80-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/sympy/", "title": "sympy", "text": ""}, {"location": "available_software/detail/sympy/#available-modules", "title": "Available modules", "text": "

              The overview below shows which sympy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using sympy, load one of these modules using a module load command like:

              module load sympy/1.12-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty sympy/1.12-gfbf-2023a x x x x x x sympy/1.12-gfbf-2022b x x x x x x sympy/1.11.1-intel-2022a x x x x x x sympy/1.11.1-foss-2022a x x x - x x sympy/1.10.1-intel-2022a x x x x x x sympy/1.10.1-foss-2022a x x x - x x sympy/1.9-intel-2021b x x x x x x sympy/1.9-foss-2021b x x x - x x sympy/1.7.1-foss-2020b - x x x x x sympy/1.6.2-foss-2020a-Python-3.8.2 - x x - x x sympy/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/synapseclient/", "title": "synapseclient", "text": ""}, {"location": "available_software/detail/synapseclient/#available-modules", "title": "Available modules", "text": "

              The overview below shows which synapseclient installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using synapseclient, load one of these modules using a module load command like:

              module load synapseclient/3.0.0-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty synapseclient/3.0.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/synthcity/", "title": "synthcity", "text": ""}, {"location": "available_software/detail/synthcity/#available-modules", "title": "Available modules", "text": "

              The overview below shows which synthcity installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using synthcity, load one of these modules using a module load command like:

              module load synthcity/0.2.4-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty synthcity/0.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/tMAE/", "title": "tMAE", "text": ""}, {"location": "available_software/detail/tMAE/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tMAE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tMAE, load one of these modules using a module load command like:

              module load tMAE/1.0.0-foss-2020b-R-4.0.3\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tMAE/1.0.0-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/tabixpp/", "title": "tabixpp", "text": ""}, {"location": "available_software/detail/tabixpp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tabixpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tabixpp, load one of these modules using a module load command like:

              module load tabixpp/1.1.2-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tabixpp/1.1.2-GCC-11.3.0 x x x x x x tabixpp/1.1.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/task-spooler/", "title": "task-spooler", "text": ""}, {"location": "available_software/detail/task-spooler/#available-modules", "title": "Available modules", "text": "

              The overview below shows which task-spooler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using task-spooler, load one of these modules using a module load command like:

              module load task-spooler/1.0.2-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty task-spooler/1.0.2-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/taxator-tk/", "title": "taxator-tk", "text": ""}, {"location": "available_software/detail/taxator-tk/#available-modules", "title": "Available modules", "text": "

              The overview below shows which taxator-tk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using taxator-tk, load one of these modules using a module load command like:

              module load taxator-tk/1.3.3-gompi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty taxator-tk/1.3.3-gompi-2020b - x - - - - taxator-tk/1.3.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/tbb/", "title": "tbb", "text": ""}, {"location": "available_software/detail/tbb/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tbb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tbb, load one of these modules using a module load command like:

              module load tbb/2021.5.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tbb/2021.5.0-GCCcore-11.3.0 x x x x x x tbb/2020.3-GCCcore-11.2.0 x x x x x x tbb/2020.3-GCCcore-10.3.0 - x x - x x tbb/2020.3-GCCcore-10.2.0 - x x x x x tbb/2020.1-GCCcore-9.3.0 - x x - x x tbb/2019_U9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tbl2asn/", "title": "tbl2asn", "text": ""}, {"location": "available_software/detail/tbl2asn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tbl2asn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tbl2asn, load one of these modules using a module load command like:

              module load tbl2asn/20220427-linux64\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tbl2asn/20220427-linux64 - x x x x x tbl2asn/25.8-linux64 - - - - - x"}, {"location": "available_software/detail/tcsh/", "title": "tcsh", "text": ""}, {"location": "available_software/detail/tcsh/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tcsh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tcsh, load one of these modules using a module load command like:

              module load tcsh/6.24.10-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tcsh/6.24.10-GCCcore-12.3.0 x x x x x x tcsh/6.22.04-GCCcore-10.3.0 x - - - x - tcsh/6.22.03-GCCcore-10.2.0 - x x x x x tcsh/6.22.02-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tensorboard/", "title": "tensorboard", "text": ""}, {"location": "available_software/detail/tensorboard/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tensorboard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tensorboard, load one of these modules using a module load command like:

              module load tensorboard/2.10.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tensorboard/2.10.0-foss-2022a x x x x x x tensorboard/2.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/tensorboardX/", "title": "tensorboardX", "text": ""}, {"location": "available_software/detail/tensorboardX/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tensorboardX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tensorboardX, load one of these modules using a module load command like:

              module load tensorboardX/2.6.2.2-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tensorboardX/2.6.2.2-foss-2023a x x x x x x tensorboardX/2.6.2.2-foss-2022b x x x x x x tensorboardX/2.5.1-foss-2022a x x x x x x tensorboardX/2.2-fosscuda-2020b-PyTorch-1.7.1 - - - - x - tensorboardX/2.2-foss-2020b-PyTorch-1.7.1 - x x x x x tensorboardX/2.1-fosscuda-2020b-PyTorch-1.7.1 - - - - x -"}, {"location": "available_software/detail/tensorflow-probability/", "title": "tensorflow-probability", "text": ""}, {"location": "available_software/detail/tensorflow-probability/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tensorflow-probability installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tensorflow-probability, load one of these modules using a module load command like:

              module load tensorflow-probability/0.19.0-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tensorflow-probability/0.19.0-foss-2022a x x x x x x tensorflow-probability/0.14.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/texinfo/", "title": "texinfo", "text": ""}, {"location": "available_software/detail/texinfo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which texinfo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using texinfo, load one of these modules using a module load command like:

              module load texinfo/6.7-GCCcore-9.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty texinfo/6.7-GCCcore-9.3.0 - x x - x x texinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/texlive/", "title": "texlive", "text": ""}, {"location": "available_software/detail/texlive/#available-modules", "title": "Available modules", "text": "

              The overview below shows which texlive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using texlive, load one of these modules using a module load command like:

              module load texlive/20230313-GCC-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty texlive/20230313-GCC-12.3.0 x x x x x x texlive/20210324-GCC-11.2.0 - x x - x x"}, {"location": "available_software/detail/tidymodels/", "title": "tidymodels", "text": ""}, {"location": "available_software/detail/tidymodels/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tidymodels installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tidymodels, load one of these modules using a module load command like:

              module load tidymodels/1.1.0-foss-2022b\n
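
              For R packages such as tidymodels, a small sketch of a quick check, assuming the module pulls in its R dependency (as is typical for these modules):

              module load tidymodels/1.1.0-foss-2022b
              Rscript -e 'library(tidymodels); packageVersion("tidymodels")'   # should report 1.1.0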

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tidymodels/1.1.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/time/", "title": "time", "text": ""}, {"location": "available_software/detail/time/#available-modules", "title": "Available modules", "text": "

              The overview below shows which time installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using time, load one of these modules using a module load command like:

              module load time/1.9-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty time/1.9-GCCcore-10.2.0 - x x x x x time/1.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/timm/", "title": "timm", "text": ""}, {"location": "available_software/detail/timm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which timm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using timm, load one of these modules using a module load command like:

              module load timm/0.9.2-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty timm/0.9.2-foss-2022a-CUDA-11.7.0 x - - - x - timm/0.6.13-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/tmux/", "title": "tmux", "text": ""}, {"location": "available_software/detail/tmux/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tmux installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tmux, load one of these modules using a module load command like:

              module load tmux/3.2a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tmux/3.2a - x x - x x"}, {"location": "available_software/detail/tokenizers/", "title": "tokenizers", "text": ""}, {"location": "available_software/detail/tokenizers/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tokenizers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tokenizers, load one of these modules using a module load command like:

              module load tokenizers/0.13.3-GCCcore-12.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tokenizers/0.13.3-GCCcore-12.2.0 x x x x x x tokenizers/0.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/torchaudio/", "title": "torchaudio", "text": ""}, {"location": "available_software/detail/torchaudio/#available-modules", "title": "Available modules", "text": "

              The overview below shows which torchaudio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using torchaudio, load one of these modules using a module load command like:

              module load torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0 x - x - x - torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchtext/", "title": "torchtext", "text": ""}, {"location": "available_software/detail/torchtext/#available-modules", "title": "Available modules", "text": "

              The overview below shows which torchtext installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using torchtext, load one of these modules using a module load command like:

              module load torchtext/0.14.1-foss-2022a-PyTorch-1.12.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty torchtext/0.14.1-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchvf/", "title": "torchvf", "text": ""}, {"location": "available_software/detail/torchvf/#available-modules", "title": "Available modules", "text": "

              The overview below shows which torchvf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using torchvf, load one of these modules using a module load command like:

              module load torchvf/0.1.3-foss-2022a-CUDA-11.7.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty torchvf/0.1.3-foss-2022a-CUDA-11.7.0 x - - - x - torchvf/0.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/torchvision/", "title": "torchvision", "text": ""}, {"location": "available_software/detail/torchvision/#available-modules", "title": "Available modules", "text": "

              The overview below shows which torchvision installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using torchvision, load one of these modules using a module load command like:

              module load torchvision/0.14.1-foss-2022b\n
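
              The -CUDA- variants in the table below are only available on some clusters; after loading one of them on a GPU node, it can be useful to confirm that a GPU is actually visible. A brief sketch, assuming the module's PyTorch (and Python) dependencies are loaded along with it:

              module load torchvision/0.13.1-foss-2022a-CUDA-11.7.0
              python -c "import torch; print(torch.cuda.is_available())"   # prints True on a GPU node with a GPU allocated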

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty torchvision/0.14.1-foss-2022b x x x x x x torchvision/0.13.1-foss-2022a-CUDA-11.7.0 x - x - x - torchvision/0.13.1-foss-2022a x x x x x x torchvision/0.11.3-foss-2021a - x x - x x torchvision/0.11.1-foss-2021a-CUDA-11.3.1 x - - - x - torchvision/0.11.1-foss-2021a - x x - x x torchvision/0.8.2-fosscuda-2020b-PyTorch-1.7.1 x - - - x - torchvision/0.8.2-foss-2020b-PyTorch-1.7.1 - x x x x x torchvision/0.7.0-foss-2019b-Python-3.7.4-PyTorch-1.6.0 - - x - x x"}, {"location": "available_software/detail/tornado/", "title": "tornado", "text": ""}, {"location": "available_software/detail/tornado/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tornado installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tornado, load one of these modules using a module load command like:

              module load tornado/6.3.2-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tornado/6.3.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/tqdm/", "title": "tqdm", "text": ""}, {"location": "available_software/detail/tqdm/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tqdm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tqdm, load one of these modules using a module load command like:

              module load tqdm/4.66.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tqdm/4.66.1-GCCcore-12.3.0 x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x tqdm/4.64.0-GCCcore-11.3.0 x x x x x x tqdm/4.62.3-GCCcore-11.2.0 x x x x x x tqdm/4.61.2-GCCcore-10.3.0 x x x x x x tqdm/4.60.0-GCCcore-10.2.0 - x x - x x tqdm/4.56.2-GCCcore-10.2.0 x x x x x x tqdm/4.47.0-GCCcore-9.3.0 x x x x x x tqdm/4.41.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/treatSens/", "title": "treatSens", "text": ""}, {"location": "available_software/detail/treatSens/#available-modules", "title": "Available modules", "text": "

              The overview below shows which treatSens installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using treatSens, load one of these modules using a module load command like:

              module load treatSens/3.0-20201002-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty treatSens/3.0-20201002-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/trimAl/", "title": "trimAl", "text": ""}, {"location": "available_software/detail/trimAl/#available-modules", "title": "Available modules", "text": "

              The overview below shows which trimAl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using trimAl, load one of these modules using a module load command like:

              module load trimAl/1.4.1-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty trimAl/1.4.1-GCCcore-12.3.0 x x x x x x trimAl/1.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/tsne/", "title": "tsne", "text": ""}, {"location": "available_software/detail/tsne/#available-modules", "title": "Available modules", "text": "

              The overview below shows which tsne installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using tsne, load one of these modules using a module load command like:

              module load tsne/0.1.8-intel-2019b-Python-2.7.16\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty tsne/0.1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/typing-extensions/", "title": "typing-extensions", "text": ""}, {"location": "available_software/detail/typing-extensions/#available-modules", "title": "Available modules", "text": "

              The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using typing-extensions, load one of these modules using a module load command like:

              module load typing-extensions/4.9.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.9.0-GCCcore-12.2.0 x x x x x x typing-extensions/4.8.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.3.0-GCCcore-11.3.0 x x x x x x typing-extensions/3.10.0.2-GCCcore-11.2.0 x x x x x x typing-extensions/3.10.0.0-GCCcore-10.3.0 x x x x x x typing-extensions/3.7.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/umap-learn/", "title": "umap-learn", "text": ""}, {"location": "available_software/detail/umap-learn/#available-modules", "title": "Available modules", "text": "

              The overview below shows which umap-learn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using umap-learn, load one of these modules using a module load command like:

              module load umap-learn/0.5.5-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty umap-learn/0.5.5-foss-2023a x x x x x x umap-learn/0.5.3-foss-2022a x x x x x x umap-learn/0.5.3-foss-2021a x x x x x x umap-learn/0.4.6-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/umi4cPackage/", "title": "umi4cPackage", "text": ""}, {"location": "available_software/detail/umi4cPackage/#available-modules", "title": "Available modules", "text": "

              The overview below shows which umi4cPackage installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using umi4cPackage, load one of these modules using a module load command like:

              module load umi4cPackage/20200116-foss-2020a-R-4.0.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty umi4cPackage/20200116-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/uncertainties/", "title": "uncertainties", "text": ""}, {"location": "available_software/detail/uncertainties/#available-modules", "title": "Available modules", "text": "

              The overview below shows which uncertainties installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using uncertainties, load one of these modules using a module load command like:

              module load uncertainties/3.1.7-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty uncertainties/3.1.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/uncertainty-calibration/", "title": "uncertainty-calibration", "text": ""}, {"location": "available_software/detail/uncertainty-calibration/#available-modules", "title": "Available modules", "text": "

              The overview below shows which uncertainty-calibration installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using uncertainty-calibration, load one of these modules using a module load command like:

              module load uncertainty-calibration/0.0.9-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty uncertainty-calibration/0.0.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/unimap/", "title": "unimap", "text": ""}, {"location": "available_software/detail/unimap/#available-modules", "title": "Available modules", "text": "

              The overview below shows which unimap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using unimap, load one of these modules using a module load command like:

              module load unimap/0.1-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty unimap/0.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/unixODBC/", "title": "unixODBC", "text": ""}, {"location": "available_software/detail/unixODBC/#available-modules", "title": "Available modules", "text": "

              The overview below shows which unixODBC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using unixODBC, load one of these modules using a module load command like:

              module load unixODBC/2.3.11-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty unixODBC/2.3.11-foss-2022b x x x x x x"}, {"location": "available_software/detail/utf8proc/", "title": "utf8proc", "text": ""}, {"location": "available_software/detail/utf8proc/#available-modules", "title": "Available modules", "text": "

              The overview below shows which utf8proc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using utf8proc, load one of these modules using a module load command like:

              module load utf8proc/2.8.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x utf8proc/2.7.0-GCCcore-11.3.0 x x x x x x utf8proc/2.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/util-linux/", "title": "util-linux", "text": ""}, {"location": "available_software/detail/util-linux/#available-modules", "title": "Available modules", "text": "

              The overview below shows which util-linux installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using util-linux, load one of these modules using a module load command like:

              module load util-linux/2.39-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty util-linux/2.39-GCCcore-12.3.0 x x x x x x util-linux/2.38.1-GCCcore-12.2.0 x x x x x x util-linux/2.38-GCCcore-11.3.0 x x x x x x util-linux/2.37-GCCcore-11.2.0 x x x x x x util-linux/2.36-GCCcore-10.3.0 x x x x x x util-linux/2.36-GCCcore-10.2.0 x x x x x x util-linux/2.35-GCCcore-9.3.0 x x x x x x util-linux/2.34-GCCcore-8.3.0 x x x - x x util-linux/2.33-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/vConTACT2/", "title": "vConTACT2", "text": ""}, {"location": "available_software/detail/vConTACT2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vConTACT2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vConTACT2, load one of these modules using a module load command like:

              module load vConTACT2/0.11.3-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vConTACT2/0.11.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/vaeda/", "title": "vaeda", "text": ""}, {"location": "available_software/detail/vaeda/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vaeda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vaeda, load one of these modules using a module load command like:

              module load vaeda/0.0.30-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vaeda/0.0.30-foss-2022a x x x x x x"}, {"location": "available_software/detail/vbz_compression/", "title": "vbz_compression", "text": ""}, {"location": "available_software/detail/vbz_compression/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vbz_compression installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vbz_compression, load one of these modules using a module load command like:

              module load vbz_compression/1.0.1-gompi-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vbz_compression/1.0.1-gompi-2020b - x - - - -"}, {"location": "available_software/detail/vcflib/", "title": "vcflib", "text": ""}, {"location": "available_software/detail/vcflib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vcflib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vcflib, load one of these modules using a module load command like:

              module load vcflib/1.0.9-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vcflib/1.0.9-foss-2022a-R-4.2.1 x x x x x x vcflib/1.0.2-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/velocyto/", "title": "velocyto", "text": ""}, {"location": "available_software/detail/velocyto/#available-modules", "title": "Available modules", "text": "

              The overview below shows which velocyto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using velocyto, load one of these modules using a module load command like:

              module load velocyto/0.17.17-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty velocyto/0.17.17-intel-2020a-Python-3.8.2 - x x - x x velocyto/0.17.17-foss-2022a x x x x x x"}, {"location": "available_software/detail/virtualenv/", "title": "virtualenv", "text": ""}, {"location": "available_software/detail/virtualenv/#available-modules", "title": "Available modules", "text": "

              The overview below shows which virtualenv installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using virtualenv, load one of these modules using a module load command like:

              module load virtualenv/20.24.6-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/vispr/", "title": "vispr", "text": ""}, {"location": "available_software/detail/vispr/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vispr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vispr, load one of these modules using a module load command like:

              module load vispr/0.4.14-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vispr/0.4.14-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessce-python/", "title": "vitessce-python", "text": ""}, {"location": "available_software/detail/vitessce-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vitessce-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vitessce-python, load one of these modules using a module load command like:

              module load vitessce-python/20230222-foss-2022a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vitessce-python/20230222-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessceR/", "title": "vitessceR", "text": ""}, {"location": "available_software/detail/vitessceR/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vitessceR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vitessceR, load one of these modules using a module load command like:

              module load vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/vsc-mympirun/", "title": "vsc-mympirun", "text": ""}, {"location": "available_software/detail/vsc-mympirun/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vsc-mympirun installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vsc-mympirun, load one of these modules using a module load command like:

              module load vsc-mympirun/5.3.1\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vsc-mympirun/5.3.1 x x x x x x vsc-mympirun/5.3.0 x x x x x x vsc-mympirun/5.2.11 x x x x x x vsc-mympirun/5.2.10 x x x - x x vsc-mympirun/5.2.9 x x x - x x vsc-mympirun/5.2.7 x x x - x x vsc-mympirun/5.2.6 x x x - x x vsc-mympirun/5.2.5 - x - - - - vsc-mympirun/5.2.4 - x - - - - vsc-mympirun/5.2.3 - x - - - - vsc-mympirun/5.2.2 - x - - - - vsc-mympirun/5.2.0 - x - - - - vsc-mympirun/5.1.0 - x - - - -"}, {"location": "available_software/detail/vt/", "title": "vt", "text": ""}, {"location": "available_software/detail/vt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which vt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using vt, load one of these modules using a module load command like:

              module load vt/0.57721-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty vt/0.57721-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/wandb/", "title": "wandb", "text": ""}, {"location": "available_software/detail/wandb/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wandb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wandb, load one of these modules using a module load command like:

              module load wandb/0.13.6-GCC-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wandb/0.13.6-GCC-11.3.0 x x x - x x wandb/0.13.4-GCCcore-11.3.0 - - x - x -"}, {"location": "available_software/detail/waves2Foam/", "title": "waves2Foam", "text": ""}, {"location": "available_software/detail/waves2Foam/#available-modules", "title": "Available modules", "text": "

              The overview below shows which waves2Foam installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using waves2Foam, load one of these modules using a module load command like:

              module load waves2Foam/20200703-foss-2019b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty waves2Foam/20200703-foss-2019b - x x - x x"}, {"location": "available_software/detail/wget/", "title": "wget", "text": ""}, {"location": "available_software/detail/wget/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wget installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wget, load one of these modules using a module load command like:

              module load wget/1.21.1-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wget/1.21.1-GCCcore-10.3.0 - x x x x x wget/1.20.3-GCCcore-10.2.0 x x x x x x wget/1.20.3-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/wgsim/", "title": "wgsim", "text": ""}, {"location": "available_software/detail/wgsim/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wgsim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wgsim, load one of these modules using a module load command like:

              module load wgsim/20111017-GCC-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wgsim/20111017-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/worker/", "title": "worker", "text": ""}, {"location": "available_software/detail/worker/#available-modules", "title": "Available modules", "text": "

              The overview below shows which worker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using worker, load one of these modules using a module load command like:

              module load worker/1.6.13-iimpi-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty worker/1.6.13-iimpi-2022b x x x x x x worker/1.6.13-iimpi-2021b x x x - x x worker/1.6.12-foss-2021b x x x - x x worker/1.6.11-intel-2019b - x x - x x"}, {"location": "available_software/detail/wpebackend-fdo/", "title": "wpebackend-fdo", "text": ""}, {"location": "available_software/detail/wpebackend-fdo/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wpebackend-fdo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wpebackend-fdo, load one of these modules using a module load command like:

              module load wpebackend-fdo/1.13.1-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wpebackend-fdo/1.13.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/wrapt/", "title": "wrapt", "text": ""}, {"location": "available_software/detail/wrapt/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wrapt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wrapt, load one of these modules using a module load command like:

              module load wrapt/1.15.0-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wrapt/1.15.0-gfbf-2023a x x x x x x wrapt/1.15.0-foss-2022b x x x x x x wrapt/1.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/wrf-python/", "title": "wrf-python", "text": ""}, {"location": "available_software/detail/wrf-python/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wrf-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wrf-python, load one of these modules using a module load command like:

              module load wrf-python/1.3.4.1-foss-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wrf-python/1.3.4.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/wtdbg2/", "title": "wtdbg2", "text": ""}, {"location": "available_software/detail/wtdbg2/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wtdbg2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wtdbg2, load one of these modules using a module load command like:

              module load wtdbg2/2.5-GCCcore-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wtdbg2/2.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/wxPython/", "title": "wxPython", "text": ""}, {"location": "available_software/detail/wxPython/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wxPython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wxPython, load one of these modules using a module load command like:

              module load wxPython/4.2.0-foss-2021b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wxPython/4.2.0-foss-2021b x x x x x x wxPython/4.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/wxWidgets/", "title": "wxWidgets", "text": ""}, {"location": "available_software/detail/wxWidgets/#available-modules", "title": "Available modules", "text": "

              The overview below shows which wxWidgets installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using wxWidgets, load one of these modules using a module load command like:

              module load wxWidgets/3.2.0-GCC-11.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty wxWidgets/3.2.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/x264/", "title": "x264", "text": ""}, {"location": "available_software/detail/x264/#available-modules", "title": "Available modules", "text": "

              The overview below shows which x264 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using x264, load one of these modules using a module load command like:

              module load x264/20230226-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty x264/20230226-GCCcore-12.3.0 x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x x264/20220620-GCCcore-11.3.0 x x x x x x x264/20210613-GCCcore-11.2.0 x x x x x x x264/20210414-GCCcore-10.3.0 x x x x x x x264/20201026-GCCcore-10.2.0 x x x x x x x264/20191217-GCCcore-9.3.0 - x x - x x x264/20190925-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/x265/", "title": "x265", "text": ""}, {"location": "available_software/detail/x265/#available-modules", "title": "Available modules", "text": "

              The overview below shows which x265 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using x265, load one of these modules using a module load command like:

              module load x265/3.5-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty x265/3.5-GCCcore-12.3.0 x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x x265/3.5-GCCcore-11.3.0 x x x x x x x265/3.5-GCCcore-11.2.0 x x x x x x x265/3.5-GCCcore-10.3.0 x x x x x x x265/3.3-GCCcore-10.2.0 x x x x x x x265/3.3-GCCcore-9.3.0 - x x - x x x265/3.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/xESMF/", "title": "xESMF", "text": ""}, {"location": "available_software/detail/xESMF/#available-modules", "title": "Available modules", "text": "

              The overview below shows which xESMF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using xESMF, load one of these modules using a module load command like:

              module load xESMF/0.3.0-intel-2020b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty xESMF/0.3.0-intel-2020b - x x - x x xESMF/0.3.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/xarray/", "title": "xarray", "text": ""}, {"location": "available_software/detail/xarray/#available-modules", "title": "Available modules", "text": "

              The overview below shows which xarray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using xarray, load one of these modules using a module load command like:

              module load xarray/2023.9.0-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty xarray/2023.9.0-gfbf-2023a x x x x x x xarray/2023.4.2-gfbf-2022b x x x x x x xarray/2022.6.0-foss-2022a x x x x x x xarray/0.20.1-intel-2021b x x x - x x xarray/0.20.1-foss-2021b x x x x x x xarray/0.19.0-foss-2021a x x x x x x xarray/0.16.2-intel-2020b - x x - x x xarray/0.16.2-fosscuda-2020b - - - - x - xarray/0.16.1-foss-2020a-Python-3.8.2 - x x - x x xarray/0.15.1-intel-2019b-Python-3.7.4 - x x - x x xarray/0.15.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/xorg-macros/", "title": "xorg-macros", "text": ""}, {"location": "available_software/detail/xorg-macros/#available-modules", "title": "Available modules", "text": "

              The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using xorg-macros, load one of these modules using a module load command like:

              module load xorg-macros/1.20.0-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-10.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-10.2.0 x x x x x x xorg-macros/1.19.2-GCCcore-9.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/xprop/", "title": "xprop", "text": ""}, {"location": "available_software/detail/xprop/#available-modules", "title": "Available modules", "text": "

              The overview below shows which xprop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using xprop, load one of these modules using a module load command like:

              module load xprop/1.2.5-GCCcore-10.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty xprop/1.2.5-GCCcore-10.2.0 - x x x x x xprop/1.2.4-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/xproto/", "title": "xproto", "text": ""}, {"location": "available_software/detail/xproto/#available-modules", "title": "Available modules", "text": "

              The overview below shows which xproto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using xproto, load one of these modules using a module load command like:

              module load xproto/7.0.31-GCCcore-10.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty xproto/7.0.31-GCCcore-10.3.0 - x x - x x xproto/7.0.31-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/xtb/", "title": "xtb", "text": ""}, {"location": "available_software/detail/xtb/#available-modules", "title": "Available modules", "text": "

              The overview below shows which xtb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using xtb, load one of these modules using a module load command like:

              module load xtb/6.6.1-gfbf-2023a\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty xtb/6.6.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/xxd/", "title": "xxd", "text": ""}, {"location": "available_software/detail/xxd/#available-modules", "title": "Available modules", "text": "

              The overview below shows which xxd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using xxd, load one of these modules using a module load command like:

              module load xxd/9.0.2112-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty xxd/9.0.2112-GCCcore-12.3.0 x x x x x x xxd/9.0.1696-GCCcore-12.2.0 x x x x x x xxd/8.2.4220-GCCcore-11.3.0 x x x x x x xxd/8.2.4220-GCCcore-11.2.0 x x x - x x xxd/8.2.4220-GCCcore-10.3.0 - - - x - - xxd/8.2.4220-GCCcore-10.2.0 - - - x - -"}, {"location": "available_software/detail/yaff/", "title": "yaff", "text": ""}, {"location": "available_software/detail/yaff/#available-modules", "title": "Available modules", "text": "

              The overview below shows which yaff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using yaff, load one of these modules using a module load command like:

              module load yaff/1.6.0-intel-2020a-Python-3.8.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty yaff/1.6.0-intel-2020a-Python-3.8.2 x x x x x x yaff/1.6.0-intel-2019b-Python-3.7.4 - x x - x x yaff/1.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/yaml-cpp/", "title": "yaml-cpp", "text": ""}, {"location": "available_software/detail/yaml-cpp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which yaml-cpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using yaml-cpp, load one of these modules using a module load command like:

              module load yaml-cpp/0.7.0-GCCcore-12.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty yaml-cpp/0.7.0-GCCcore-12.3.0 x x x x x x yaml-cpp/0.7.0-GCCcore-11.2.0 x x x - x x yaml-cpp/0.6.3-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/zUMIs/", "title": "zUMIs", "text": ""}, {"location": "available_software/detail/zUMIs/#available-modules", "title": "Available modules", "text": "

              The overview below shows which zUMIs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using zUMIs, load one of these modules using a module load command like:

              module load zUMIs/2.9.7-foss-2023a-R-4.3.2\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty zUMIs/2.9.7-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/zarr/", "title": "zarr", "text": ""}, {"location": "available_software/detail/zarr/#available-modules", "title": "Available modules", "text": "

              The overview below shows which zarr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using zarr, load one of these modules using a module load command like:

              module load zarr/2.16.0-foss-2022b\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty zarr/2.16.0-foss-2022b x x x x x x zarr/2.13.3-foss-2022a x x x x x x zarr/2.13.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/zfp/", "title": "zfp", "text": ""}, {"location": "available_software/detail/zfp/#available-modules", "title": "Available modules", "text": "

              The overview below shows which zfp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using zfp, load one of these modules using a module load command like:

              module load zfp/1.0.0-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty zfp/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib-ng/", "title": "zlib-ng", "text": ""}, {"location": "available_software/detail/zlib-ng/#available-modules", "title": "Available modules", "text": "

              The overview below shows which zlib-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using zlib-ng, load one of these modules using a module load command like:

              module load zlib-ng/2.0.7-GCCcore-11.3.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty zlib-ng/2.0.7-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib/", "title": "zlib", "text": ""}, {"location": "available_software/detail/zlib/#available-modules", "title": "Available modules", "text": "

              The overview below shows which zlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using zlib, load one of these modules using a module load command like:

              module load zlib/1.2.13-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty zlib/1.2.13-GCCcore-13.2.0 x x x x x x zlib/1.2.13-GCCcore-12.3.0 x x x x x x zlib/1.2.13 x x x x x x zlib/1.2.12-GCCcore-12.2.0 x x x x x x zlib/1.2.12-GCCcore-11.3.0 x x x x x x zlib/1.2.12 x x x x x x zlib/1.2.11-GCCcore-11.2.0 x x x x x x zlib/1.2.11-GCCcore-10.3.0 x x x x x x zlib/1.2.11-GCCcore-10.2.0 x x x x x x zlib/1.2.11-GCCcore-9.3.0 x x x x x x zlib/1.2.11-GCCcore-8.3.0 x x x x x x zlib/1.2.11-GCCcore-8.2.0 - x - - - - zlib/1.2.11 x x x x x x"}, {"location": "available_software/detail/zstd/", "title": "zstd", "text": ""}, {"location": "available_software/detail/zstd/#available-modules", "title": "Available modules", "text": "

              The overview below shows which zstd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

              To start using zstd, load one of these modules using a module load command like:

              module load zstd/1.5.5-GCCcore-13.2.0\n

              (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

              accelgor doduo donphan gallade joltik skitty zstd/1.5.5-GCCcore-13.2.0 x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x zstd/1.5.2-GCCcore-11.3.0 x x x x x x zstd/1.5.0-GCCcore-11.2.0 x x x x x x zstd/1.4.9-GCCcore-10.3.0 x x x x x x zstd/1.4.5-GCCcore-10.2.0 x x x x x x zstd/1.4.4-GCCcore-9.3.0 - x x x x x zstd/1.4.4-GCCcore-8.3.0 x - - - x -"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
              module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

              Or, if you want to check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the HPC:

              module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
              "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": "

              Everyone can get access to and use the HPC-UGent supercomputing infrastructure and services. The conditions that apply depend on your affiliation.

              "}, {"location": "sites/hpc_policies/#access-for-staff-and-academics", "title": "Access for staff and academics", "text": ""}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-flemish-university-associations", "title": "Researchers and staff affiliated with Flemish university associations", "text": "
              • Includes externally funded researchers registered in the personnel database (FWO, SBO, VIB, IMEC, etc.).

              • Includes researchers from all VSC partners.

              • Usage is free of charge.

              • Use your account credentials at your affiliated university to request a VSC-id and connect.

              • See Getting a HPC Account.

              "}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-other-flemish-or-federal-research-institutes", "title": "Researchers and staff affiliated with other Flemish or federal research institutes", "text": "
              • Includes researchers from institutes such as INBO, ILVO, and RBINS.

              • HPC-UGent promotes using the Tier-1 services of the VSC.

              • HPC-UGent can act as a liaison.

              "}, {"location": "sites/hpc_policies/#students", "title": "Students", "text": "
              • Students (Bachelor or Master) enrolled at one of the institutions mentioned above can also use HPC-UGent.

              • The same conditions apply: free of charge for all Flemish university associations.

              • Use your university account credentials to request a VSC-id and connect.

              "}, {"location": "sites/hpc_policies/#access-for-industry", "title": "Access for industry", "text": "

              Researchers and developers from industry can use the VSC services and infrastructure tailored to industry.

              "}, {"location": "sites/hpc_policies/#our-offer", "title": "Our offer", "text": "
              • VSC has a dedicated service geared towards industry.

              • HPC-UGent can act as a liaison to the VSC services.

              "}, {"location": "sites/hpc_policies/#research-partnership", "title": "Research partnership:", "text": "
              • Interested in collaborating in supercomputing with a UGent research group?

              • We can help you look for a collaborative partner. Contact hpc@ugent.be.

              "}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
              $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

              Or, if you want to check whether some specific software, compiler, or application (e.g., LAMMPS) is installed on the HPC:

              $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

              Since you may not know the exact capitalisation of the module name, we performed a case-insensitive search using the \"-i\" option.

              "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
              module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

              Or, if you want to check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the HPC:

              module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
              "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

              (more info soon)

              "}]} \ No newline at end of file +{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the HPC-UGent documentation", "text": "

              Use the menu on the left to navigate, or use the search box on the top right.

              You are viewing documentation intended for people using Windows.

              Use the OS dropdown in the top bar to switch to a different operating system.

              Quick links

              • Getting Started | Getting Access
              • Recording of HPC-UGent intro
              • Linux Tutorial
              • Hardware overview
              • Migration of cluster and login nodes to RHEL9 (starting Sept'24)
              • FAQ | Troubleshooting | Best practices | Known issues

              If you find any problems in this documentation, please report them by mail to hpc@ugent.be or open a pull request.

              If you still have any questions, you can contact the HPC-UGent team.

              "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": "

              New users should consult the Introduction to HPC to get started, which is a great resource for learning the basics, troubleshooting, and looking up specifics.

              If you want to use software that's not yet installed on the HPC, send us a software installation request.

              Overview of HPC-UGent Tier-2 infrastructure

              "}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

              An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.
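
              For example, you could submit the same (hypothetical) job script several times with an increasing core count and compare the reported execution times; job.sh and the resource values below are placeholders:

              qsub -l nodes=1:ppn=4 job.sh\nqsub -l nodes=1:ppn=8 job.sh\nqsub -l nodes=1:ppn=16 job.sh\n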

              See also: Running batch jobs.

              "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

              When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

              Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

              Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.
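
              For example, to look up the details and available versions of these bundles (a sketch; the exact output depends on the cluster):

              module spider SciPy-bundle\nmodule spider R-bundle-Bioconductor\n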

              If the package or library you want is not available, send us a software installation request.

              "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

              Modules each come with a suffix that describes the toolchain used to install them.

              Examples:

              • AlphaFold/2.2.2-foss-2021a

              • tqdm/4.61.2-GCCcore-10.3.0

              • Python/3.9.5-GCCcore-10.3.0

              • matplotlib/3.4.2-foss-2021a

              Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

              The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.
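
              As a minimal illustration, the example modules listed above all belong to the 2021a toolchain generation (foss-2021a is built on top of GCCcore-10.3.0), so they can be loaded together in one session:

              module load Python/3.9.5-GCCcore-10.3.0\nmodule load tqdm/4.61.2-GCCcore-10.3.0\nmodule load matplotlib/3.4.2-foss-2021a\n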

              You can use module avail [search_text] to see which versions on which toolchains are available to use.

              If you need something that's not available yet, you can request it through a software installation request.

              It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

              "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

              When incompatible modules are loaded, you might encounter an error like this:

              Lmod has detected the following error: A different version of the 'GCC' module\nis already loaded (see output of 'ml').\n

              You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.
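
              A minimal recovery sketch (the foss version shown is illustrative; pick one that matches the GCC version reported by ml):

              ml                # check which toolchain modules are currently loaded\nml spider foss    # list the available foss versions\nmodule purge      # start from a clean environment\nmodule load foss/2021a\n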

              Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

              An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

              See also: How do I choose the job modules?

              "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

              The 72 hour walltime limit will not be extended. However, you can work around this barrier:

              • Check that all available resources are being used. See also:
                • How many cores/nodes should I request?.
                • My job is slow.
                • My job isn't using any GPUs.
              • Use a faster cluster.
              • Divide the job into more parallel processes.
              • Divide the job into shorter processes, which you can submit as separate jobs (see the sketch after this list).
              • Use the built-in checkpointing of your software.
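
              As a sketch of chaining shorter jobs, assuming standard PBS/Torque job dependencies are supported on the cluster you use (part1.sh and part2.sh are hypothetical job scripts):

              FIRST_ID=$(qsub part1.sh)\nqsub -W depend=afterok:${FIRST_ID} part2.sh\n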
              "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

              Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

              When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

              Try requesting a bit more memory than your proportional share, and see if that solves the issue.
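
              A sketch of job script directives that request more memory than the default proportional share (the values are placeholders; see the section linked below for the exact options):

              #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l mem=20gb\n#PBS -l walltime=12:00:00\n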

              See also: Specifying memory requirements.

              "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

              When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

              It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are working on a compute node of that cluster instead of on a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

              See also: Running interactive jobs.

              "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

              Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2
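
              Putting this together, a minimal sketch could look as follows (the cluster name, resource counts and job script name are placeholders):

              module swap cluster/joltik    # switch to a GPU cluster before submitting\nqsub -l nodes=1:ppn=12:gpus=1 gpu_job.sh\n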

              Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fosscuda toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

              See also: HPC-UGent GPU clusters.

              "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

              There are a few possible causes why a job can perform worse than expected.

              Is your job using all the available cores you've requested? You can test this by increasing and decreasing the core count: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, you may have missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

              Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

              Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: the job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
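
              A sketch of such a staging pattern inside a job script ($VSC_DATA and $VSC_SCRATCH are the standard environment variables; the input/output names and the actual computation are placeholders):

              cp -r $VSC_DATA/my_input $VSC_SCRATCH/\ncd $VSC_SCRATCH\n./run_computation my_input results    # placeholder for the real work\ncp -r results $VSC_DATA/\n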

              "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

              Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

              To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
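
              A minimal MPI job script sketch (my_mpi_program is a placeholder for your own executable; the resource values are illustrative):

              #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=2:00:00\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\nmympirun ./my_mpi_program\n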

              See also: Multi core jobs/Parallel Computing and Mympirun.

              "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

              For example, we have a simple script (./hello.sh):

              #!/bin/bash \necho \"hello world\"\n

              And we run it like mympirun ./hello.sh --output output.txt.

              To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

              mympirun --output output.txt ./hello.sh\n
              "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

              See the explanation about how jobs get prioritized in When will my job start.

              "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

              When trying to create files, errors like this can occur:

              No space left on device\n

              The error \"No space left on device\" can mean two different things:

              • all available storage quota on the file system in question has been used;
              • the inode limit has been reached on that file system.

              An inode can be seen as a \"file slot\", meaning that when the limit is reached, no additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

              Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
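
              For example, a directory containing many small files could be packed into a single archive, and removed once you have verified the archive (the directory name is a placeholder):

              tar -czf many_small_files.tar.gz many_small_files/\nrm -r many_small_files/\n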

              If the problem persists, feel free to contact support.

              "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

              NO. You are not allowed to share your VSC account with anyone else; it is strictly personal.

              See https://helpdesk.ugent.be/account/en/regels.php.

              If you want to share data, there are alternatives (like a shared directory in VO space; see Virtual organisations).

              "}, {"location": "FAQ/#can-i-share-my-data-with-other-hpc-users", "title": "Can I share my data with other HPC users?", "text": "

              Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt.

              $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc40000 mygroup      40 Apr 12 15:00 dataset.txt\n

              For more information about chmod or setfacl, see Linux tutorial.
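
              To check or revoke such an ACL again later (using the same hypothetical user and file):

              getfacl dataset.txt\nsetfacl -x u:otheruser dataset.txt\n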

              "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

              Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

              "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

              Please fill out the details about the software and why you need it in this form: https://www.ugent.be/hpc/en/support/software-installation-request. When submitting the form, a mail will be sent to hpc@ugent.be containing all the provided information. The HPC team will look into your request as soon as possible and contact you when the installation is done or if further information is required.

              If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
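
              A sketch of such a manual installation in a virtual environment (the Python module shown is only an example; the package name is a placeholder):

              module load Python/3.9.5-GCCcore-10.3.0\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\npip install some_package\n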

              "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

              On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

              MacOS & Linux (on Windows, only the second part is shown):

              @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

              Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt: there you will find the correct host key to compare against, as well as instructions on how to hide the warning.

              "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

              A Virtual Organisation consists of a number of members and moderators. A moderator can:

              • Manage the VO members (but can't access/remove their data on the system).

              • See how much storage each member has used, and set limits per member.

              • Request additional storage for the VO.

              One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

              See also: Virtual Organisations.

              "}, {"location": "FAQ/#my-ugent-shared-drives-dont-show-up", "title": "My UGent shared drives don't show up", "text": "

              After mounting the UGent shared drives with kinit your_email@ugent.be, you might not see an entry with your username when listing ls /UGent. This is normal: try ls /UGent/your_username or cd /UGent/your_username, and you should be able to access the drives. Be sure to use your UGent username and not your VSC username here.

              See also: Your UGent home drive and shares.

              "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

              Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

              du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

              The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into egrep to keep only the lines that matter most.

              The egrep command only lets through entries that match the regular expression [0-9]{3}M|[0-9]G, i.e., entries of 100 MB or more (including anything in the gigabyte range).

              "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

              By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

              You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

              "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

              When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

              sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

              A lot of tasks can be performed without sudo, including installing software in your own account.

              Installing software

              • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
              • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
              "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

              Who can I contact?

              • General questions regarding HPC-UGent and VSC: hpc@ugent.be

              • HPC-UGent Tier-2: hpc@ugent.be

              • VSC Tier-1 compute: compute@vscentrum.be

              • VSC Tier-1 cloud: cloud@vscentrum.be

              "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

              Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

              "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

              The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

              "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

              Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

              module load hod\n
              "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

              The hod modules are constructed such that they can be used on the HPC-UGent infrastructure login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

              As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

              For example, this will work as expected:

              $ module swap cluster/donphan\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

              Note that modules named hanythingondemand/* are also available. However, these should not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

              "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

              The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

              $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

              Because these environment variables are defined, you do not have to specify the (strictly required) --hod-module and --workdir options yourself when using hod batch or hod create.

              If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
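              For example, a minimal sketch of overriding the parent working directory via the environment variables (the path is purely illustrative):

              # use a different parent working directory for HOD (illustrative path)\nexport HOD_BATCH_WORKDIR=$VSC_SCRATCH/hod_experiments\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/hod_experiments\n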

              Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

              "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

              After HOD clusters terminate, their local working directory and cluster information is typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

              These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

              You should occasionally clean this up using hod clean:

              $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/doduo(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        123456         <job-not-found>     <none>\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/123456 for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/donphan\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.donphan.gent.vsc <job-not-found>     <none>\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.donphan.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
              Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

              "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

              If you have any questions, or are experiencing problems using HOD, you have a couple of options:

              • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

              • Contact the HPC-UGent team via hpc@ugent.be

              • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

              "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

              Note

              To run a MATLAB program on the HPC-UGent infrastructure you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

              Compiling MATLAB programs is only possible on the interactive debug cluster, not on the HPC-UGent login nodes, where resource limits w.r.t. memory and max. number of processes are too strict.

              "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

              The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

              Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

              Only a limited number of MATLAB sessions can be active at the same time because there are only a limited number of MATLAB research licenses available on the UGent MATLAB license server. If every job needed a license, the licenses would quickly run out.

              "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

              Compiling MATLAB code can only be done from the login nodes, because only login nodes can access the MATLAB license server, workernodes on clusters cannot.

              To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

              $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

              After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

              To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

              First, we copy the magicsquare.m example that comes with MATLAB to example.m:

              cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

              To compile a MATLAB program, use mcc -mv:

              mcc -mv example.m\nOpening log file:  /user/home/gent/vsc400/vsc40000/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/home/gent/vsc400/vsc40000/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/home/gent/vsc400/vsc40000/readme.txt\".\nGenerating file \"run_example.sh\".\n
              "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

              To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

              It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

              For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.
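              As a copy-pasteable block of that same example (assuming the examplelib and datafiles directories exist alongside example.m):

              mcc -mv example.m -I examplelib -a datafiles\n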

              "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

              If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

              export _JAVA_OPTIONS=\"-Xmx64M\"\n

              The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

              Another possible issue is that the heap size is too small. This could result in errors like:

              Error: Out of memory\n

              A possible solution to this is by setting the maximum heap size to be bigger:

              export _JAVA_OPTIONS=\"-Xmx512M\"\n
              "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

              MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

              The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

              You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

              parpool.m
              % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

              See also the parpool documentation.

              "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

              Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

              MATLAB_LOG_DIR=<OUTPUT_DIR>\n

              where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

              # create unique temporary directory in $TMPDIR (or /tmp/$USER if\n# $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\nexport MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

              You should remove the directory at the end of your job script:

              rm -rf $MATLAB_LOG_DIR\n
              "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

              When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

              The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

              export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

              So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

              "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

              All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

              jobscript.sh
              #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
              "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

              VNC is still available at the UGent site, but we encourage our users to replace VNC with the X2Go client. Please see Graphical applications with X2Go for more information.

              Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

              Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

              "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

              First, log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

              $ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'gligar07.gastly.os:6 (vsc40000)' desktop is gligar07.gastly.os:6\n\nCreating default startup script /user/home/gent/vsc400/vsc40000.vnc/xstartup\nCreating default config /user/home/gent/vsc400/vsc40000.vnc/config\nStarting applications specified in /user/home/gent/vsc400/vsc40000.vnc/xstartup\nLog file is /user/home/gent/vsc400/vsc40000.vnc/gligar07.gastly.os:6.log\n

              When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account you can!

              Note down the details in bold: the hostname (in the example: gligar07.gastly.os) and the (partial) port number (in the example: 6).

              It's important to remember that VNC sessions are persistent: they survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (similar to the terminal tools screen or tmux). It also means you don't have to start vncserver each time you want to connect.

              "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

              You can get a list of running VNC servers on a node with

              $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

              This only displays the running VNC servers on the login node you run the command on.

              To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

              $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/gligar07.gastly.os:6.pid\n.vnc/gligar08.gastly.os:8.pid\n

              This shows that there is a VNC server running on gligar07.gastly.os on port 5906 and another one running on gligar08.gastly.os on port 5908 (see also Determining the source/destination port).

              "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

              The VNC server runs on a specific login node (in the example above, on gligar07.gastly.os).

              In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

              Login nodes are rebooted from time to time. You can check that the VNC server is still running in the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

              To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

              The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

              "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

              The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

              The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend to use the same value as the destination port.

              So, in our running example, both the source and destination ports are 5906.

              "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

              In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.ugent.be (see Setting up the SSH tunnel(s)).

              If the login node you end up on is a different one than the one where your VNC server is running (i.e., gligar08.gastly.os rather than gligar07.gastly.os in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

              In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

              To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

              Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to gligar07.gastly.os, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

              In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

              We will proceed with 12345 as the intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).
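              For example, a small sketch that picks a random port in that range (purely illustrative; $RANDOM is at most 32767, so the result always falls between 10000 and 30000):

              # pick a random intermediate port between 10000 and 30000\necho $((10000 + RANDOM % 20001))\n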

              "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcugentbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.ugent.be", "text": "

              First, we will set up the SSH tunnel from our workstation to .

              Use the settings specified in the sections above:

              • source port: the port on which the VNC server is running (see Determining the source/destination port);

              • destination host: localhost;

              • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

              For detailed information on how to configure PuTTY to set up the SSH tunnel by entering these settings in the Source port and Destination fields, see SSH tunnel.

              With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

              Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).
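              If you are using the OpenSSH command-line client rather than PuTTY, the first tunnel in our running example could be set up with something like the following (a sketch only; replace vsc40000 with your own VSC account name and 12345 with your own intermediate port):

              # forward local port 5906 to intermediate port 12345 on the login node\nssh -L 5906:localhost:12345 vsc40000@login.hpc.ugent.be\n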

              "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

              Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

              You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

              netstat -an | grep -i listen | grep tcp | grep 12345\n

              If you see no matching lines, then the port you picked is still available, and you can continue.

              If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

              $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
              "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

              In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.ugent.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (gligar07.gastly.os in our running example, see Starting a VNC server).

              To do this, run the following command:

              $ ssh -L 12345:localhost:5906 gligar07.gastly.os\n$ hostname\ngligar07.gastly.os\n

              With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (gligar07.gastly.os).

              Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

              Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (gligar07.gastly.os) in the command shown above!

              As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

              "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

              You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. To get the latest stable version, open the top-most folder whose name contains a version number without beta in it. Then download a file that looks like TurboVNC64-2.1.2.exe (the version number can be different, but the 64 should be in the filename) and execute it.

              Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

              When prompted for a password, use the password you used to set up the VNC server.

              When prompted for default or empty panel, choose default.

              If you have an empty panel, you can reset your settings with the following commands:

              xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
              "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

              The VNC server can be killed by running

              vncserver -kill :6\n

              where 6 is the display number (the partial port number) we noted down earlier. If you forgot it, you can get it with vncserver -list (see List running VNC servers).

              "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

              You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).

              "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

              All users of AUGent can request an account on the HPC, which is part of the Flemish Supercomputing Centre (VSC).

              See HPC policies for more information on who is entitled to an account.

              The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

              There are two methods for connecting to HPC-UGent infrastructure:

              • Using a terminal to connect via SSH.
              • Using the web portal

              The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

              If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of using the HPC-UGent web portal by reading Using the HPC-UGent web portal.

              The HPC-UGent infrastructure clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the HPC. Access to the HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

              "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
              • an SSH public/private key pair can be seen as a lock and a key

              • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

              • the SSH private key is like a physical key: you don't hand it out to other people.

              • anyone who has the key (and the optional password) can unlock the door and log in to the account.

              • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

              Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). A typical Windows environment does not come with pre-installed software to connect and run command-line executables on a HPC. Some tools need to be installed on your Windows machine first, before we can start the actual work.

              "}, {"location": "account/#get-putty-a-free-telnetssh-client", "title": "Get PuTTY: A free telnet/SSH client", "text": "

              We recommend using the PuTTY tools package, which is freely available.

              You do not need to install PuTTY: you can simply download the PuTTY and PuTTYgen executables and run them. This can be useful in situations where you do not have the required permissions to install software on the computer you are using. Alternatively, an installation package is also available.

              You can download PuTTY from the official address: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html. You probably want the 64-bit version. If you can install software on your computer, you can use the \"Package files\"; if not, you can download and use putty.exe and puttygen.exe from the \"Alternative binary files\" section.

              The PuTTY package consists of several components, but we'll only use two:

              1. PuTTY: the Telnet and SSH client itself (to login, see Open a terminal)

              2. PuTTYgen: an RSA and DSA key generation utility (to generate a key pair, see Generate a public/private key pair)

              "}, {"location": "account/#generating-a-publicprivate-key-pair", "title": "Generating a public/private key pair", "text": "

              Before requesting a VSC account, you need to generate a pair of ssh keys. You need 2 keys, a public and a private key. You can visualise the public key as a lock to which only you have the key (your private key). You can send a copy of your lock to anyone without any problems, because only you can open it, as long as you keep your private key secure. To generate a public/private key pair, you can use the PuTTYgen key generator.

              Start PuTTYgen.exe and follow these steps:

              1. In Parameters (at the bottom of the window), choose \"RSA\" and set the number of bits in the key to 4096.

              2. Click on Generate. To generate the key, you must move the mouse cursor over the PuTTYgen window (this generates some random data that PuTTYgen uses to generate the key pair). Once the key pair is generated, your public key is shown in the field Public key for pasting into OpenSSH authorized_keys file.

              3. Next, it is advised to fill in the Key comment field to make the key more easily identifiable afterwards.

              4. Next, you should specify a passphrase in the Key passphrase field and retype it in the Confirm passphrase field. Remember, the passphrase protects the private key against unauthorised use, so it is best to choose one that is not too easy to guess but that you can still remember. Using a passphrase is not required, but we recommend you to use a good passphrase unless you are certain that your computer's hard disk is encrypted with a decent password. (If you are not sure your disk is encrypted, it probably isn't.)

              5. Save both the public and private keys in a folder on your personal computer (We recommend to create and put them in the folder \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\") with the buttons Save public key and Save private key. We recommend using the name \"id_rsa.pub\" for the public key, and \"id_rsa.ppk\" for the private key.

              6. Finally, save an \"OpenSSH\" version of your private key (in particular for later \"X2Go\" usage, see x2go) by entering the \"Conversions\" menu and selecting \"Export OpenSSH key\" (do not select the \"force new file format\" variant). Save the file in the same location as in the previous step with filename \"id_rsa\". (If there is no \"Conversions\" menu, you must update your \"puttygen\" version. If you want to do this conversion afterwards, you can start by loading an existing \"id_rsa.ppk\" and only perform this export.)

              If you use another program to generate a key pair, please remember that they need to be in the OpenSSH format to access the HPC clusters.
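              If you prefer to generate the key pair from a command line with OpenSSH instead of PuTTYgen, a minimal sketch is:

              # generate a 4096-bit RSA key pair in the OpenSSH format\n# (you will be prompted for a passphrase; the keys are written to ~/.ssh/)\nssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa\n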

              "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

              It is possible to set up an SSH agent in Windows. This is an optional configuration that helps you keep all your SSH keys (if you have several) stored in the same key ring, so you don't have to type the SSH key password each time. The SSH agent is also necessary to enable SSH hops with key forwarding from Windows.

              Pageant is the SSH authentication agent used on Windows. It is available from the PuTTY installation package https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html or as a standalone binary package.

              After the installation, just start the Pageant application in Windows; this will start the agent in the background. The agent icon will be visible from the Windows panel.

              At this point the agent does not contain any private key. You should include the private key(s) generated in the previous section Generating a public/private key pair.

              1. Click on Add key

              2. Select the private key file generated in Generating a public/private key pair (\"id_rsa.ppk\" by default).

              3. Enter the same SSH key password used to generate the key. After this step the new key will be included in Pageant to manage the SSH connections.

              4. You can see the SSH key(s) available in the key ring just clicking on View Keys.

              5. You can change PuTTY setup to use the SSH agent. Open PuTTY and check Connection > SSH > Auth > Allow agent forwarding.

              Now you can connect to the login nodes as usual. The SSH agent will know which SSH key should be used, and you do not have to type the SSH password each time; this is handled automatically by the Pageant agent.

              It is also possible to use WinSCP with Pageant, see https://winscp.net/eng/docs/ui_pageant for more details.

              "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

              Visit https://account.vscentrum.be/

              You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

              Select \"UGent\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

              Click Confirm

              You will now be taken to the authentication page of your institute.

              You will now have to log in with CAS using your UGent account.

              You either have a login name of maximum 8 characters, or a (non-UGent) email address if you are an external user. In case of problems with your UGent password, please visit: https://password.ugent.be/. After logging in, you may be requested to share your information. Click \"Yes, continue\".

              After you log in using your UGent login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

              This file should have been stored in the directory \"C:\\Users\\%USERNAME%\\AppData\\Local\\PuTTY\\.ssh\"

              After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

              "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

              Within one day, you should receive a Welcome e-mail with your VSC account details.

              Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc40000\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

              Now, you can start using the HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

              "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

              In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

              1. Create a new public/private SSH key pair from Putty. Repeat the process described in section\u00a0Generate a public/private key pair.

              2. Go to https://account.vscentrum.be/django/account/edit

              3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

              4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

              5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

              "}, {"location": "account/#computation-workflow-on-the-hpc", "title": "Computation Workflow on the HPC", "text": "

              A typical Computation workflow will be:

              1. Connect to the HPC

              2. Transfer your files to the HPC

              3. Compile your code and test it

              4. Create a job script

              5. Submit your job

              6. Wait while

                1. your job gets into the queue

                2. your job gets executed

                3. your job finishes

              7. Move your results

              We'll take you through the different tasks one by one in the following chapters.

              "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

              AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

              See https://www.vscentrum.be/alphafold for more information and there you can also find a getting started video recording if you prefer that.

              "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

              This chapter focuses specifically on the use of AlphaFold on the HPC-UGent infrastructure. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

              • AlphaFold website: https://alphafold.com/
              • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
              • AlphaFold FAQ: https://alphafold.com/faq
              • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
              • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
              • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
                • recording available on YouTube
                • slides available here (PDF)
                • see also https://www.vscentrum.be/alphafold
              "}, {"location": "alphafold/#using-alphafold-on-hpc-ugent-infrastructure", "title": "Using AlphaFold on HPC-UGent infrastructure", "text": "

              Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

              $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

              To use AlphaFold, you should load a particular module, for example:

              module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

              We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

              Warning

              When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

              Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

              $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

              The directory names indicate when the data was downloaded, which leaves room for providing updated datasets later.

              At the time of writing, the latest version is 20230310.

              Info

              The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

              The AlphaFold installations we provide have been modified a bit to facilitate the usage on HPC-UGent infrastructure.

              "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

              The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

              export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

              Use newest version

              Do not forget to replace 20230310 with a more up to date version if available.

              "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

              AlphaFold provides a script called run_alphafold.py.

              A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

              The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

              Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

              For more information about the script and options see this section in the official README.

              READ README

              It is strongly advised to read the official README provided by DeepMind before continuing.

              "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

              The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

              Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

              Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
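              For example, a minimal sketch of setting these variables in a job script (the value 8 is purely illustrative; match it to the number of cores you requested):

              # let hhblits and jackhmmer use 8 cores each (illustrative value)\nexport ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n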

              Info

              Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

              "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

              The timings below were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

              Using --db_preset=full_dbs, the following runtime data was collected:

              • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
              • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
              • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
              • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

              This highlights a couple of important attention points:

              • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
              • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
              • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

              With --db_preset=casp14, it is clearly more demanding:

              • On doduo, with 24 cores (1 node): still running after 48h...
              • On joltik, 1 V100 GPU + 8 cores: 4h 48min

              This highlights the difference between CPU and GPU performance even more.

              "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

              The following example comes from the official Examples section in the Alphafold README. The run command is slightly different (see above: Running AlphaFold).

              Do not forget to set up the environment (see above: Setting up the environment).

              "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

              Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

              >sequence_name\n<SEQUENCE>\n

              Then run the following command in the same directory:

              alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

              See AlphaFold output, for information about the outputs.

              Info

              For more scenarios see the example section in the official README.

              "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

              The following two example job scripts can be used as a starting point for running AlphaFold.

              The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

              To run the job scripts you need to create a file named T1050.fasta with the following content:

              >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
              source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

              "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

              Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

              Swap to the joltik GPU before submitting it:

              module swap cluster/joltik\n
              AlphaFold-gpu-joltik.sh
              #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
              "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

              Jobscript that runs AlphaFold on CPU using 24 cores on one node.

              AlphaFold-cpu-doduo.sh
              #!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n

              In case of problems or questions, don't hesitate to contact us at hpc@ugent.be.

              "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

              Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

              One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

              For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

              This documentation only covers aspects of using Apptainer on the HPC-UGent infrastructure.

              "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

              Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to avoid that the use of Apptainer impacts other users on the system.

              The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

              In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

              If these limitations are a problem for you, please let us know via hpc@ugent.be.

              "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

              All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

              "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

              Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the HPC-UGent infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

              Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example of making an Apptainer/Singularity container image:

              # avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
              "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

              For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

              We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

              "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

              Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

              cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

              Create a job script like:

              #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

              Create an example my_script.sh:

              #!/bin/bash\n\n# prime factors\nfactor 1234567\n
              "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

              We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.
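
              For example, a minimal sketch of converting that Docker Hub image yourself, following the same approach as the NVIDIA example above (the latest tag is only an illustration, pick the tag you need):

              export APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\nmkdir -p $APPTAINER_TMPDIR\n# convert the Docker Hub image to an Apptainer/Singularity image file\napptainer build --fakeroot /tmp/$USER/tensorflow.sif docker://tensorflow/tensorflow:latest\n# move the image to $VSC_SCRATCH so it can be used in jobs\nmv /tmp/$USER/tensorflow.sif $VSC_SCRATCH/tensorflow.sif\n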

              Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

              cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
              #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

              You can download linear_regression.py from the official Tensorflow repository.

              "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

              It is also possible to execute MPI jobs within a container, but the following requirements apply:

              • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

              • Use modules within the container (install the environment-modules or lmod package in your container)

              • Load the required module(s) before apptainer execution.

              • Set the C_INCLUDE_PATH variable in your container if it is required at compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

              Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

              cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

              For example, to compile an MPI example:

              module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

              Example MPI job script:

              #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
              "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
              1. Before starting, you should always check:

                • Are there any errors in the script?

                • Are the required modules loaded?

                • Is the correct executable used?

              2. Check your compute requirements upfront, and request the correct resources in your batch job script.

                • Number of requested cores

                • Amount of requested memory

                • Requested network type

              3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

              4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

              5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

              6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted from with cd $PBS_O_WORKDIR is usually the first thing to do. You will have your default environment, so don't forget to load the software with module load (see the sketch after this list).

              7. Submit your job and wait (be patient) ...

              8. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

              9. The runtime is limited by the maximum walltime of the queues.

              10. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

              11. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

              12. And above all, do not hesitate to contact the HPC staff at hpc@ugent.be. We're here to help you.
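
              To illustrate points 5 and 6, a minimal job script sketch is shown below (my_program, input.dat and output.dat are placeholder names, and foss is just an example module; adapt these to your own application):

              #!/bin/sh\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:00:00\n\n# go to the directory from which the job was submitted\ncd $PBS_O_WORKDIR\n# load the software you need (placeholder module name)\nmodule load foss\n\n# use the fast local scratch of the node for I/O-intensive work\ncp input.dat $VSC_SCRATCH_NODE/\ncd $VSC_SCRATCH_NODE\n$PBS_O_WORKDIR/my_program input.dat > output.dat\n\n# copy the results back before the job ends\ncp output.dat $PBS_O_WORKDIR/\n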

              "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

              All nodes in the HPC cluster are running the \"RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty)\" Operating system, which is a specific version of Red Hat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the HPC first must be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). It also means that you first have to install all the required external software packages on the HPC.

              Most commonly used compilers are already pre-installed on the HPC and can be used straight away. Many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

              "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-hpc", "title": "Check the pre-installed software on the HPC", "text": "

              In order to check all the available modules and their version numbers that are pre-installed on the HPC, enter:

              module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

              Or, when you want to check whether some specific software, compiler or application (e.g., MATLAB) is installed on the HPC:

              module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

              When your required application is not available on the HPC, please contact any HPC staff member. Be aware of potential \"License Costs\". \"Open Source\" software is often preferred.

              "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

              To port a software-program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., Red Hat Enterprise Linux on our HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

              In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

              In some cases software, usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

              Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

              Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

              Porting your code to the RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty) platform is the responsibility of the end-user.

              "}, {"location": "compiling_your_software/#compiling-and-building-on-the-hpc", "title": "Compiling and building on the HPC", "text": "

              Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.
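
              As a small sketch of the difference between compiling and building, the commands below first compile two hypothetical source files (main.c and utils.c are placeholder names) into object files, and then link them together into a single executable:

              # compile each source file into an object file\ngcc -O2 -c main.c -o main.o\ngcc -O2 -c utils.c -o utils.o\n# link (build) the object files into one executable\ngcc main.o utils.o -o my_program\n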

              All the HPC nodes run the same version of the Operating System, i.e. RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

              A typical process looks like:

              1. Copy your software to the login-node of the HPC;

              2. Start an interactive session on a compute node;

              3. Compile it;

              4. Test it locally;

              5. Generate your job scripts;

              6. Test it on the HPC;

              7. Run it (in parallel).

              We assume you've copied your software to the HPC. The next step is to request your private compute node.

              $ qsub -I\nqsub: waiting for job 123456 to start\n
              "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

              Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

              cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

              We now list the directory and explore the contents of the \"hello.c\" program:

              $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

              hello.c
              /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include <stdio.h>\n#include <unistd.h>  /* needed for sleep() */\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\nreturn 0;\n}\n

              The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

              We first need to compile this C-file into an executable with the gcc-compiler.

              First, check the command line options for \"gcc\" (GNU C-Compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

              $ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc40000 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc40000  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc40000  130 Sep 16 11:39 hello.pbs*\n

              A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

              Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

              $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

              It seems to work, now run it on the HPC.

              qsub hello.pbs\n

              "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
              cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

              List the directory and explore the contents of the \"mpihello.c\" program:

              $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

              mpihello.c
              /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char **argv)\n{\nint node, i;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\nreturn 0;\n}\n

              The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

              Check the command line options for \"mpicc\" (the GNU C-Compiler wrapper with MPI extensions), then compile and list the contents of the directory again:

              mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

              A new file \"hello\" has been created. Note that this program has \"execute\" rights.

              Let's test this program on the \"login\" node first:

              $ ./mpihello\nHello World from Node 0.\n

              It seems to work, now run it on the HPC.

              qsub mpihello.pbs\n
              "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

              We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

              cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

              We will compile this C/MPI -file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

              module purge\nmodule load intel\n

              Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

              mpiicc -o mpihello mpihello.c\nls -l\n

              Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

              $ ./mpihello\nHello World from Node 0.\n

              It seems to work, now run it on the HPC.

              qsub mpihello.pbs\n

              Note: The AUGent only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

              Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Below is an overview for the C, C++ and Fortran compilers.

              Language: Sequential Program (GNU / Intel) | Parallel Program with MPI (GNU / Intel)\nC: gcc / icc | mpicc / mpiicc\nC++: g++ / icpc | mpicxx / mpiicpc\nFortran: gfortran / ifort | mpif90 / mpiifort"}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

              Before you can really start using the HPC clusters, there are several things you need to do or know:

              1. You need to log on to the cluster using an SSH client to one of the login nodes, or by using the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

              2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

              3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

              4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

              "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

              Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

              VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

              All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

              • Use a VPN connection to connect to the UGent network (recommended). See https://helpdesk.ugent.be/vpn/en/ for more information.

              • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your UGent account.

                • While this web connection is active new SSH sessions can be started.

                • Active SSH sessions will remain active even when this web page is closed.

              • Contact your HPC support team (via hpc@ugent.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

              Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

              ssh_exchange_identification: read: Connection reset by peer\n
              "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

              The remaining content in this chapter is primarily focused on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

              If you have any issues connecting to the HPC after you've followed these steps, see Issues connecting to login node to troubleshoot.

              "}, {"location": "connecting/#open-a-terminal", "title": "Open a Terminal", "text": "

              You've generated a public/private key pair with PuTTYgen and have an approved account on the VSC clusters. The next step is to set up the connection to (one of) the HPC.

              In the screenshots, we show the setup for user \"vsc20167\"

              to the HPC cluster via the login node \"login.hpc.ugent.be\".

              1. Start the PuTTY executable putty.exe in your directory C:\\Program Files (x86)\\PuTTY and the configuration screen will pop up. As you will often use the PuTTY tool, we recommend adding a shortcut on your desktop.

              2. Within the category <Session>, in the field <Host Name>, enter the name of the login node of the cluster (i.e., \"login.hpc.ugent.be\") you want to connect to.

              3. In the category Connection > Data, in the field Auto-login username, put in <vsc40000>, which is your VSC username that you have received by e-mail after your request was approved.

              4. In the category Connection > SSH > Auth, in the field Private key file for authentication click on Browse and select the private key (i.e., \"id_rsa.ppk\") that you generated and saved above.

              5. In the category Connection > SSH > X11, click the Enable X11 Forwarding checkbox.

              6. Now go back to <Session>, and fill in \"hpcugent\" in the Saved Sessions field and press Save to store the session information.

              7. Now pressing Open will open a terminal window and ask for your passphrase.

              8. If this is your first time connecting, you will be asked to verify the authenticity of the login node. Please see section\u00a0Warning message when first connecting to new host on how to do this.

              9. After entering your correct passphrase, you will be connected to the login-node of the HPC.

              10. To check you can now \"Print the Working Directory\" (pwd) and check the name of the computer, where you have logged in (hostname):

                $ pwd\n/user/home/gent/vsc400/vsc40000\n$ hostname -f\ngligar07.gastly.os\n
              11. For future PuTTY sessions, just select your saved session (i.e. \"hpcugent\") from the list, Load it and press Open.

              Congratulations, you're on the HPC infrastructure now! To find out where you have landed you can print the current working directory:

              $ pwd\n/user/home/gent/vsc400/vsc40000\n

              Your new private home directory is \"/user/home/gent/vsc400/vsc40000\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the HPC.

              $ cd /apps/gent/tutorials\n$ ls\nIntro-HPC/\n

              This directory currently contains all training material for the Introduction to the HPC. More relevant training material to work with the HPC can always be added later in this directory.

              You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands:

              As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

              $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

              This directory contains:

              1. This HPC Tutorial (in either a Mac, Linux or Windows version).

              2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

              cd examples\n

              Tip

              Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

              Tip

              For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

              The first action is to copy the contents of the HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

              cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

              Go to your home directory, check your own private examples directory, ...\u00a0and start working.

              cd\nls -l\n

              Upon connecting you will see a login message containing your last login time stamp and a basic overview of the current cluster utilisation.

              Last login: Thu Mar 18 13:15:09 2021 from gligarha02.gastly.os\n\n STEVIN HPC-UGent infrastructure status on Mon, 19 Feb 2024 10:00:01\n      cluster         - full - free -  part - total - running - queued\n                        nodes  nodes   free   nodes   jobs      jobs\n -------------------------------------------------------------------------\n           skitty          39      0     26      68      1839     5588\n           joltik           6      0      1      10        29       18\n            doduo          22      0     75     128      1397    11933\n         accelgor           4      3      2       9        18        1\n          donphan           0      0     16      16        16       13\n          gallade           2      0      5      16        19      136\n\n\nFor a full view of the current loads and queues see:\nhttps://hpc.ugent.be/clusterstate/\nUpdates on current system status and planned maintenance can be found on https://www.ugent.be/hpc/en/infrastructure/status\n

              You can exit the connection at anytime by entering:

              $ exit\nlogout\nConnection to login.hpc.ugent.be closed.\n

              tip: Setting your Language right

              You may encounter a warning message similar to the following one when connecting:

              perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
              or any other error message complaining about the locale.

              This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

              LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

              A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.

              Note

              If you try to set a non-supported locale, then it will be automatically set to the default. Currently the default is en_US.UTF-8 or en_US, depending on whether your original (non-supported) locale was UTF-8 or not.
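
              If you want to avoid the locale warning altogether, a minimal sketch is to export a supported UTF-8 locale in the shell startup file of your local machine (e.g., ~/.bashrc on a Linux client):

              # force a supported locale for new shells\nexport LANG=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\n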

              "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

              Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back.

              "}, {"location": "connecting/#winscp", "title": "WinSCP", "text": "

              To transfer files to and from the cluster, we recommend the use of WinSCP, a graphical file management tool which can transfer files using secure protocols such as SFTP and SCP. WinSCP is freely available from http://www.winscp.net.

              To transfer your files using WinSCP,

              1. Open the program

              2. The Login menu is shown automatically (if it is closed, click New Session to open it again). Fill in the necessary fields under Session

                1. Click New Site.

                2. Enter \"login.hpc.ugent.be\" in the Host name field.

                3. Enter your \"vsc-account\" in the User name field.

                4. Select SCP as the file protocol.

                5. Note that the password field remains empty.

                1. Click Advanced....

                2. Click SSH > Authentication.

                3. Select your private key in the field Private key file.

              3. Press the Save button, to save the session under Session > Sites for future access.

              4. Finally, when clicking on Login, you will be asked for your key passphrase.

              The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

              Make sure the fingerprint in the alert matches one of the following:

              - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

              If it does, press Yes; if it doesn't, please contact hpc@ugent.be.

              Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

              Now, try out whether you can transfer an arbitrary file from your local machine to the HPC and back.

              "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

              See the section on rsync in chapter 5 of the Linux intro manual.

              "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

              It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

              For instance, if you want to switch to the login node named gligar07.gastly.os, you can use the following command while you are connected to the gligar08.gastly.os login node on the HPC:

              ssh gligar07.gastly.os\n
              This is also possible the other way around.

              If you want to find out which login host you are connected to, you can use the hostname command.

              $ hostname\ngligar07.gastly.os\n$ ssh gligar08.gastly.os\n\n$ hostname\ngligar08.gastly.os\n

              Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects; a minimal tmux example is shown after the list below. You can find more information on how to use these tools here (or on other online sources):

              • screen
              • tmux
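
              As a minimal tmux sketch (the session name work is just an arbitrary example):

              # start a new named session on the login node\ntmux new -s work\n# ... work as usual, then detach with Ctrl-b d (the session keeps running)\n# later, possibly after a disconnect, re-attach to it (on the same login node)\ntmux attach -t work\n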
              "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

              It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high-availability setup, users should add their cron scripts on the same login node to avoid any cron job script duplication.

              In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC user account (see section Connecting).

              Check whether any cron script is already set on the current login node with:

              crontab -l\n

              At this point you can add or edit (with the vi editor) any cron script by running the command:

              crontab -e\n
              "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
               15 5 * * * ~/runscript.sh >& ~/job.out\n

              where runscript.sh has these lines in this example:

              runscript.sh
              #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

              In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.

              Please note that you should log in to the same login node to edit your previously created crontab tasks. If that is not the case, you can always jump from one login node to another with:

              ssh gligar07    # or gligar08\n
              "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

              You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

              EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

              "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

              For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

              • applying custom patches to the software that only you or your group are using

              • evaluating new software versions prior to requesting a central software installation

              • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

              "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

              Before you use EasyBuild, you need to configure it:

              "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

              This is where EasyBuild can find software sources:

              EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
              • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

              • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

              "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

              This is the directory in which EasyBuild will build software. To have good performance, this needs to be on a fast filesystem.

              export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

              On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
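
              For example, to use that in-memory location instead:

              export EASYBUILD_BUILDPATH=/dev/shm/$USER\n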

              "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

              This is where EasyBuild will install the software (and accompanying modules) to.

              For example, to let it use $VSC_DATA/easybuild, use:

              export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

              Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

              Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

              To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.

              "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

              Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

              module load EasyBuild\n
              "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

              EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

              $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

              For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

              eb example-1.2.1-foss-2024a.eb --robot\n
              "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

              To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

              To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

              eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

              To try to install example v1.2.5 with a different compiler toolchain:

              eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
              "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

              To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

              "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

              To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

              module use $EASYBUILD_INSTALLPATH/modules/all\n

              It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or you want to load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux.
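
              A minimal sketch of such a .bashrc snippet, collecting the configuration shown above (adapt the paths to your own situation):

              # EasyBuild configuration\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n# make the modules installed by EasyBuild available for loading\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n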

              "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

              As HPC system administrators, we often observe that the HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

              Users often tend to run their jobs without specifying specific PBS Job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can increase the run time of your application, but it can also block HPC resources for other users.

              Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

              There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

              Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

              A different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

              This chapter shows you how to measure:

              1. Walltime
              2. Memory usage
              3. CPU usage
              4. Disk (storage) needs
              5. Network bottlenecks

              First, we allocate a compute node and move to our relevant directory:

              qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
              "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

              One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

              The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

              Test the time command:

              $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

              It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

              It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

              The walltime can be specified in a job script as:

              #PBS -l walltime=3:00:00:00\n

              or on the command line

              qsub -l walltime=3:00:00:00\n

              It is recommended to always specify the walltime for a job.

              "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

              In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

              "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

              The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the \"-m\" option to see the results expressed in megabytes and the \"-t\" option to get totals.

              $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

              Important is to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

              It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

              On the UGent clusters, there is no swap space available for jobs, you can only use physical memory, even though \"free\" will show swap.

              "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

              To monitor the memory consumption of a running application, you can use the \"top\" or the \"htop\" command.

              top

              provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

              htop

              is similar to top, but shows the CPU-utilisation for all the CPUs in the machine and allows to scroll the list vertically and horizontally to see all processes and their full command lines.
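
              For example, to restrict top to your own processes (a small sketch; within top you can then sort interactively):

              # show only your own processes\ntop -u $USER\n# inside top, press M to sort by memory usage, or q to quit\n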

              "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

              Once you gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

              The maximum amount of physical memory used by the job per node can be specified in a job script as:

              #PBS -l mem=4gb\n

              or on the command line

              qsub -l mem=4gb\n
              "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

              Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are decently specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

              "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

              The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

              The /proc/cpuinfo file stores info about your CPU architecture, like the number of CPUs, threads, cores, information about CPU caches, CPU family, model and much more. So, if you want to detect how many cores are available on a specific machine:

              $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

              Or if you want to see it in a more readable format, execute:

              $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
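
              Alternatively, the nproc command (part of GNU coreutils) directly prints the number of processing units available to the current process:

              $ nproc\n8\n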

              Note

              Unless you want information of the login nodes, you'll have to issue these commands on one of the workernodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

              In order to specify the number of nodes and the number of processors per node in your job script, use:

              #PBS -l nodes=N:ppn=M\n

              or with equivalent parameters on the command line

              qsub -l nodes=N:ppn=M\n

              This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

              You can also use this statement in your job script:

              #PBS -l nodes=N:ppn=all\n

              to request all cores of a node, or

              #PBS -l nodes=N:ppn=half\n

              to request half of them.

              Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

              "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

              This could also be monitored with the htop command:

              htop\n
              Example output:
                1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

              The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"eat_cpu\" program in 4 different terminals, and inspect the CPU utilisation per processor with monitor and htop.

              If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by top found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

              "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

              It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest and make sure that there are no CPUs in your node that sit idle without reason.

              But how can you maximise?

              1. Configure your software. (e.g., to exactly use the available amount of processors in a node)
              2. Develop your parallel program in a smart way.
              3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
              4. Correct your request for CPUs in your job script.
              "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

              On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

              The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

              The load averages differ from CPU percentage in two significant ways:

              1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
              2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
              "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

              What is the \"optimal load\" rule of thumb?

              The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load shall be between 0.7 and 1.0 per processor.

              In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

              Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time, might be more than one per processor.

              The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

              1. When you are running computational intensive applications, one application per processor will generate the optimal load.
              2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

              The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking which configuration yields the highest throughput. However, there is at the moment no way on the HPC to dynamically specify the maximum number of applications that may run per core. The HPC scheduler will not launch more than one process per core.

              How the cores are spread out over CPUs does not matter as far as the load is concerned: two quad-cores perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. For these purposes, it is all just eight cores.

              "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

              The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

              The uptime command will show us the average load:

              $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

              Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

              $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
              You can also monitor the load with the htop command.
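
              The eat_cpu program is essentially an endless busy loop. If you just want to generate some load for such a test without compiling anything, a plain shell loop works as well; this sketch keeps four cores busy and is purely illustrative:

              $ for i in 1 2 3 4; do (while true; do :; done) & done   # start 4 busy loops in the background\n$ uptime                                                 # the one-minute load average will rise gradually\n$ kill $(jobs -p)                                        # stop the load generators again\n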

              "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

              It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest, and make sure that no CPUs in your node sit idle without reason.

              But how can you maximise?

              1. Profile your software to improve its performance.
              2. Configure your software (e.g., to use exactly the number of processors available in a node); a minimal sketch of how to do this from a job script is shown below.
              3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
              4. Demand a specific type of compute node (e.g., Harpertown, Westmere), each of which has a specific number of cores.
              5. Correct your request for CPUs in your job script.

              And then check again.
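
              As an illustration of item 2, a job script can derive the number of cores that the scheduler assigned to it and pass that number on to the application; the program name and its --threads option below are purely hypothetical:

              NCORES=$(wc -l < $PBS_NODEFILE)           # $PBS_NODEFILE contains one line per assigned core\n./my_program --threads $NCORES\n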

              "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

              Some programs generate intermediate or output files, the size of which may also be a useful metric.

              Remember that your available disk space on the HPC online storage is limited, and that you have environment variables available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

              It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to section How much disk space do I get? on Quotas to check your quota, and for tools to find out which files consumed your quota.
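
              A quick way to check how much space your files are taking up is the du (disk usage) command; the directories below are just examples, and the reported sizes will obviously differ:

              $ du -sh $VSC_DATA $VSC_SCRATCH             # total size of each directory\n$ du -h --max-depth=1 $VSC_DATA | sort -h   # subdirectories of $VSC_DATA, largest last\n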

              Several actions can be taken, to avoid storage problems:

              1. Be aware of all the files that are generated by your program. Also check out the hidden files.
              2. Check your quota consumption regularly.
              3. Clean up your files regularly.
              4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, move your files in one go to the $VSC_DATA directories (see the sketch after this list).
              5. Make sure your programs clean up their temporary files after execution.
              6. Move your output results to your own computer regularly.
              7. Anyone can request more disk space from the HPC staff, but you will have to duly justify your request.
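
              A job script fragment illustrating item 4 could look like this; the file and program names are purely illustrative:

              cd $VSC_SCRATCH_NODE                                        # local /tmp storage on the compute node\ncp $VSC_DATA/input/big_input.dat .                          # stage the input in once\n$PBS_O_WORKDIR/my_program big_input.dat big_output.dat      # read and write locally\ncp big_output.dat $VSC_DATA/results/                        # stage the result out once, at the end\n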
              "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

              Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that they lose a lot of time on inter-process communication.

              Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high bandwidth, low latency network that enables large parallel jobs to run as efficiently as possible.

              The parameter to add in your job script would be:

              #PBS -l ib\n

              If, for some reason, you are fine with the gigabit Ethernet network, you can specify:

              #PBS -l gbe\n
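
              In context, such a network request is simply one extra resource line in the job script header; the other values below are only an illustration:

              #!/bin/bash\n#PBS -N mpi_simulation        ## job name (illustrative)\n#PBS -l nodes=4:ppn=all       ## 4 full nodes\n#PBS -l walltime=12:00:00     ## max. 12h of wall time\n#PBS -l ib                    ## request nodes on the InfiniBand network\n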
              "}, {"location": "getting_started/", "title": "Getting Started", "text": "

              Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the HPC-UGent infrastructure and submitting your very first job. We'll also walk you through the process step by step using a practical example.

              In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

              Before proceeding, read the introduction to HPC to gain an understanding of the HPC-UGent infrastructure and related terminology.

              "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

              To get access to the HPC-UGent infrastructure, visit Getting an HPC Account.

              If you have not used Linux before, now would be a good time to follow our Linux Tutorial.

              "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
              1. Connect to the login nodes
              2. Transfer your files to the HPC-UGent infrastructure
              3. Optional: compile your code and test it
              4. Create a job script and submit your job
              5. Wait for job to be executed
              6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

              We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

              "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

              There are two options to connect:

              • Using a terminal to connect via SSH (for power users) (see First Time connection to the HPC-UGent infrastructure)
              • Using the web portal

              Considering your operating system is Windows, it is recommended to use the web portal.

              The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

              See shell access when using the web portal, or connection to the HPC-UGent infrastructure when using a terminal.

              Make sure you can get shell access to the HPC-UGent infrastructure before proceeding with the next steps.

              Info

              If you run into problems, see the connection issues section on the troubleshooting page.

              "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

              Now that you can log in, it is time to transfer files from your local computer to your home directory on the HPC-UGent infrastructure.

              Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

              The HPC-UGent web portal provides a file browser that allows uploading files. For more information see the file browser section.

              Upload both files (run.sh and tensorflow_mnist.py) to your home directory and go back to your shell.

              Info

              As an alternative, you can use WinSCP (see our section)

              When running ls in your session on the HPC-UGent infrastructure, you should see the two files listed in your home directory (~):

              $ ls ~\nrun.sh tensorflow_mnist.py\n

              If you do not see these files, make sure you uploaded them to your home directory.

              "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

              Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

              A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

              Our job script looks like this:

              run.sh

              #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
              As you can see this job script will run the Python script named tensorflow_mnist.py.

              The jobs you submit are by default executed on cluster/doduo; you can swap to another cluster by issuing the following command.

              module swap cluster/donphan\n

              Tip

              When submitting jobs that only require a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

              To get a list of all clusters and their hardware, see https://www.ugent.be/hpc/en/infrastructure.

              This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

              $ qsub run.sh\n123456\n

              This command returns a job identifier (123456) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.
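
              For example, you can query or cancel this specific job by passing the job ID to the qstat or qdel commands:

              $ qstat 123456     # show the status of job 123456 only\n$ qdel 123456      # cancel job 123456 if you no longer need it\n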

              Make sure you understand what the module command does

              Note that the module commands only modify environment variables. For instance, running module swap cluster/donphan will update your shell environment so that qsub submits a job to the donphan cluster, but your active shell session is still running on the login node.

              It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are executed: they will still run on the login node you are on.

              When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like donphan).

              For detailed information about module commands, read the running batch jobs chapter.

              "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

              Your job is put into a queue before being executed, so it may take a while before it actually starts (see when will my job start? for the scheduling policy).

              You can get an overview of the active jobs using the qstat command:

              $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:00  Q donphan\n

              Eventually, after entering qstat again you should see that your job has started running:

              $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:01  R donphan\n

              If you don't see your job in the output of the qstat command anymore, your job has likely completed.

              Read this section on how to interpret the output.

              "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

              When your job finishes, it generates two output files:

              • One for normal output messages (stdout output channel).
              • One for warning and error messages (stderr output channel).

              By default, these are located in the directory where you issued qsub.

              Info

              For more information about the stdout and stderr output channels, see this section.

              In our example, when running ls in the current directory, you should see two new files:

              • run.sh.o123456, containing normal output messages produced by job 123456;
              • run.sh.e123456, containing errors and warnings produced by job 123456.

              Info

              run.sh.e123456 should be empty (no errors or warnings).

              Use your own job ID

              Replace 123456 with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.

              When examining the contents of run.sh.o123456 you will see something like this:

              Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

              Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

              Warning

              When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

              For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

              "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
              • Running interactive jobs
              • Running jobs with input/output data
              • Multi core jobs/Parallel Computing
              • Interactive and debug cluster

              For more examples see Program examples and Job script examples

              "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

              To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

              module swap cluster/joltik\n

              To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

              module swap cluster/accelgor\n

              Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

              "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

              To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

              Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@ugent.be.

              "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

              See https://www.ugent.be/hpc/en/infrastructure.

              "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

              There are two main ways to request GPUs as part of a job:

              • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z notation is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want full control, or in multi-node cases like MPI jobs. If you just use -l gpus without specifying a number, you get 1 GPU by default. Both of these notations are illustrated in the sketch after this list.

              • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.
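
              Written as job script directives, the two -l notations from the first bullet look like this; the core and GPU counts are just examples:

              ## node-property style: 1 node with 8 cores and 2 GPUs on that node\n#PBS -l nodes=1:ppn=8:gpus=2\n\n## separate-resource style: 2 GPUs, with the default number of cores per GPU\n#PBS -l gpus=2\n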

              Some background:

              • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

              • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

              "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

              Some important attention points:

              • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

              • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

              • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e., it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

              • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

              "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

              Use module avail to check for centrally installed software.

              The subsections below only cover a couple of installed software packages, more are available.

              "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

              Please consult module avail GROMACS for a list of installed versions.

              "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

              Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

              Please consult module avail Horovod for a list of installed versions.

              Horovod supports TensorFlow, Keras, PyTorch and MxNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; it is not clear whether it handles placement and other aspects correctly.)

              At least for simple TensorFlow benchmarks, Horovod appears to be a bit faster than TensorFlow's usual automatic multi-GPU support without Horovod, but this comes at the cost of the code modifications needed to use Horovod.

              "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

              Please consult module avail PyTorch for a list of installed versions.

              "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

              Please consult module avail TensorFlow for a list of installed versions.

              Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

              "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
              #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
              "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

              Please consult module avail AlphaFold for a list of installed versions.

              For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

              "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

              In case of questions or problems, please contact the HPC-UGent team via hpc@ugent.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

              "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

              The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

              This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

              Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor), jobs on this cluster should normally start more or less immediately. The tradeoff is that the submitted jobs must not be performance-critical. This means that typical workloads for this cluster should be limited to:

              • Interactive jobs (see chapter\u00a0Running interactive jobs)

              • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

              • Jobs requiring few resources

              • Debugging programs

              • Testing and debugging job scripts

              "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

              To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

              module swap cluster/donphan\n

              Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

              "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

              Some limits are in place for this cluster:

              • each user may have at most 5 jobs in the queue (both running and waiting to run);

              • at most 3 jobs per user can be running at the same time;

              • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

              In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

              Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

              "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

              Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

              All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

              "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

              \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer at the frontline of contemporary processing capacity -- particularly in terms of speed of calculation and available memory.

              While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

              A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

              The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

              Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

              Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

              "}, {"location": "introduction/#what-is-the-hpc-ugent-infrastructure", "title": "What is the HPC-UGent infrastructure?", "text": "

              The HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

              The HPC-UGent infrastructure relies on parallel-processing technology to offer UGent researchers an extremely fast solution for all their data processing needs.

              The HPC currently consists of:

              a set of different compute clusters. For an up to date list of all clusters and their hardware, see https://vscdocumentation.readthedocs.io/en/latest/gent/tier2_hardware.html.

              Job management and job scheduling are performed by Slurm with a Torque frontend. We advise users to adhere to the Torque commands mentioned in this document.

              "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

              The HPC infrastructure is not a magic computer that automatically:

              1. runs your PC-applications much faster for bigger problems;

              2. develops your applications;

              3. solves your bugs;

              4. does your thinking;

              5. ...

              6. allows you to play games even faster.

              The HPC does not replace your desktop computer.

              "}, {"location": "introduction/#is-the-hpc-a-solution-for-my-computational-needs", "title": "Is the HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

              Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

              It is also possible to run programs on the HPC that require user interaction (pushing buttons, entering input data, etc.). Although technically possible, the use of the HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the HPC staff can reveal whether the HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

              "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

              In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

              Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

              "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

              Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

              Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

              The two parallel programming paradigms most used in HPC are:

              • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

              • MPI for distributed memory systems (multiprocessing): on multiple nodes

              Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

              "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

              Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

              It is perfectly possible to also run purely sequential programs on the HPC.

              Running your sequential programs on the most modern and fastest computers in the HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.
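
              A simple way to run such a parameter sweep is to submit one job per parameter value from the login node; the job script name and the PARAM variable below are hypothetical:

              for p in 0.1 0.2 0.5 1.0; do\n    qsub -v PARAM=$p my_sequential_job.sh   # each job reads $PARAM and runs independently\ndone\n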

              "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

              You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

              For the most common programming languages, a compiler is available on RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). Supported and common programming languages on the HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

              Supported and commonly used compilers are GCC and Intel.

              Additional software can be installed \"on demand\". Please contact the HPC staff to see whether the HPC can handle your specific requirements.

              "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

              All nodes in the HPC cluster run under RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty), which is a specific version of Red Hat Enterprise Linux. This means that all programs (executables) should be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

              Users can connect from any computer in the UGent network to the HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the HPC.

              A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

              "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

              A typical workflow looks like:

              1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

              2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

              3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

              4. Create a job script and submit your job (see Running batch jobs)

              5. Get some coffee and be patient:

                1. Your job gets into the queue

                2. Your job gets executed

                3. Your job finishes

              6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

              "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

              When you think that the HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the HPC cluster.

              Do not hesitate to contact the HPC staff for any help.

              1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

              "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

              This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

              • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

              • -m/-M: the -m option will send emails to the email address registered with your VSC account. Only if you want emails sent to some other address should you use the -M option.

              • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

              • To use a situational parameter, remove one '#' at the beginning of the line.

              simple_jobscript.sh
              #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
              "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

              Here's an example of a single-core job script:

              single_core.sh
              #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
              1. Using #PBS header lines, we specify the resource requirements for the job; see Appendix B for a list of these options.

              2. A module for Python 3.6 is loaded, see also section Modules.

              3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

              4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

              5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a unique file in $VSC_DATA. For a list of possible storage locations, see subsection Pre-defined user directories.

              "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

              Here's an example of a multi-core job script that uses mympirun:

              multi_core.sh
              #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

              An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

              "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

              If you are not sure your job will finish before it runs out of walltime, and you want to copy data back before that happens, you have to stop the main command before the walltime runs out and then copy the data back.

              This can be done with the timeout command. This command sets a limit on the time a program may run; when this limit is exceeded, it kills the program. Here's an example job script using timeout:

              timeout.sh
              #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n# be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

              The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

              example_program.sh
              #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
              "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

              A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plaintext. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs make it a useful tool for data analysis, machine learning and educational purposes.

              "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

              Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

              After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

              When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

              and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

              This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

              "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

              A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

              To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters>>_login Shell Access.

              We can see all available versions of the SciPy module by using module avail SciPy-bundle:

              $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

              Not all modules will work for every notebook; we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

              Module names include the toolchain that was used to install the module (for example, gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that the module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

              $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

              The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

              It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

              $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
              This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

              If we use a different SciPy-bundle module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (for more info on these errors, see here).

              $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

              Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.

              "}, {"location": "known_issues/", "title": "Known issues", "text": "

              This page provides details on a couple of known problems, and the workarounds that are available for them.

              If you have any questions related to these issues, please contact the HPC-UGent team.

              • Operation not permitted error for MPI applications
              "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

              When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

              Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

              This error means that an internal problem has occurred in OpenMPI.

              "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

              This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

              It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

              "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

              We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

              "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

              A workaround has been implemented in mympirun (version 5.4.0).

              Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

              module load vsc-mympirun\n

              and launch your MPI application using the mympirun command.

              For more information, see the mympirun documentation.

              "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

              If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

              export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
              "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

              We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

              "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

              There are two important motivations to engage in parallel programming.

              1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

              2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

              On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that in principle you can split up your computations into groups and run each group on its own core.

              There are multiple different ways to achieve parallel programming. The table below gives a (non-exhaustive) overview of problem independent approaches to parallel programming. In addition there are many problem specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

              | Tool | Available language bindings | Limitations |
              | --- | --- | --- |
              | Raw threads (pthreads, boost::threading, ...) | Threading libraries are available for all common programming languages | Threads are limited to shared memory systems. They are more often used on single node systems rather than for HPC. Thread management is hard. |
              | OpenMP | Fortran/C/C++ | Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelised by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelise the workload on each node and MPI (see below) for communication between nodes. |
              | Lightweight threads with clever scheduling, Intel TBB, Intel Cilk Plus | C/C++ | Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler enabling the programmer to focus on parallelisation itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes. |
              | MPI | Fortran/C/C++, Python | Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication. |
              | Global Arrays library | C/C++, Python | Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier. |

              Tip

              You can request more nodes/cores by adding following line to your run script.

              #PBS -l nodes=2:ppn=10\n
              This queues a job that claims 2 nodes and 10 cores per node (so 20 cores in total).

              Warning

              Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

              "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

              Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

              A multithreaded program can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

              Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

              Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

              Go to the example directory:

              cd ~/examples/Multi-core-jobs-Parallel-Computing\n

              Note

              If the example directory is not yet present, copy it to your home directory:

              cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

              Study the example first:

              T_hello.c
              /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 1;\n}\n

              Now compile it (linking against the pthread library with -lpthread), then run and test it on the login node:

              $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n
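              To submit this example as a batch job you need a small job script. As a sketch (the T_hello.pbs file provided in the examples directory may differ in its details), such a script could look like:

              #!/bin/bash\n#PBS -N T_hello\n#PBS -l nodes=1:ppn=5\n#PBS -l walltime=00:05:00\ncd $PBS_O_WORKDIR\nmodule load GCC\n./T_hello\n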

              Now, run it on the cluster and check the output:

              $ qsub T_hello.pbs\n123456\n$ more T_hello.pbs.o123456\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

              Tip

              If you plan to engage in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

              "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

              OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

              An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

              Here is the general code structure of an OpenMP program:

              #include <omp.h>\n\nint main()  {\nint var1, var2, var3;\n// Serial code\n\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n}\n// All threads join master thread and disband\n\n// Resume serial code\nreturn 0;\n}\n

              "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

              By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

              "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

              Parallelising for loops is really simple (see code below). By default, the loop iteration counter in OpenMP loop constructs (in this case the i variable in the for loop) is treated as a private variable.

              omp1.c
              /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

              Now compile it (enabling OpenMP support with the -fopenmp flag), then run and test it on the login node:

              $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n
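              To run this as a batch job, a small job script is needed. As a sketch (the omp1.pbs provided with the examples may differ), such a script could look like the following; it assumes the scheduler sets $PBS_NUM_PPN, otherwise simply hard-code the number of threads:

              #!/bin/bash\n#PBS -N omp1\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=00:05:00\ncd $PBS_O_WORKDIR\nmodule load GCC\n# the number of OpenMP threads is controlled via OMP_NUM_THREADS\nexport OMP_NUM_THREADS=$PBS_NUM_PPN\n./omp1\n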

              Now run it on the cluster and check the result again.

              $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
              "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

              Using OpenMP you can specify something called a \"critical\" section of code. This is code that is executed by all threads, but only by one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, without having to worry about other threads writing to that global variable at the same time (a collision).

              omp2.c
              /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

              Now compile it (enabling OpenMP support with the -fopenmp flag), then run and test it on the login node:

              $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

              Now run it on the cluster and check the result again.

              $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
              "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

              Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). Indeed we used this paradigm in the code example above, where we used the \"critical code\" directive to accomplish this. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to more easily implement this.

              omp3.c
              /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

              Now compile it (enabling OpenMP support with the -fopenmp flag), then run and test it on the login node:

              $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

              Now run it on the cluster and check the result again.

              $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
              "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

              There are a host of other directives you can issue using OpenMP.

              Some other directives and clauses of interest are:

              1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

              2. nowait: threads will not wait until everybody is finished

              3. schedule(type, chunk): allows you to specify how loop iterations are divided among the threads in a for loop. The three main scheduling types you can specify are static, dynamic and guided

              4. if: allows you to parallelise only if a certain condition is met

              5. ...\u00a0and a host of others

              Tip

              If you plan to engage in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation. 2005.

              "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

              The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

              In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

              The process numbers 0, 1 and 2 represent the process rank and have greater or less significance depending on the processing paradigm. At the minimum, Process 0 handles the input/output and determines what other processes are running.

              The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple computing units, which can be cores within a single CPU, CPUs within a single machine, or even multiple machines (as long as they are networked together).

              One context where MPI shines in particular is its ability to easily take advantage not just of multiple cores on a single machine, but of clusters of several machines. Even if you don't have a dedicated cluster, you could still use MPI to run your program in parallel across any collection of computers, as long as they are networked together.

              Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

              Study the MPI program and the PBS file:

              mpi_hello.c
              /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
              mpi_hello.pbs
              #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

              and compile it:

              $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

              mpiicc is a wrapper around the Intel C compiler icc that is used to compile MPI programs (see the chapter on compilation for details).

              Run the parallel program:

              $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc40000 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc40000 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc40000    0 Sep 16 14:22 mpi_hello.e123456\n-rw------- 1 vsc40000  697 Sep 16 14:22 mpi_hello.o123456\n-rw-r--r-- 1 vsc40000  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o123456\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

              The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different, executables to be started in the same MPI job. Each process has its own rank, the total number of processes in the world, and the ability to communicate between them either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

              MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it scales to the runtime configuration without needing to be recompiled for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.

              Tip

              mpirun does not always do the optimal core pinning and requires a few extra arguments to be the most efficient possible on a given system. At Ghent we have a wrapper around mpirun called mympirun. See the Mympirun chapter for more information.

              You will generally just start an MPI program on the cluster by using mympirun instead of mpirun -n <nr of cores> <other settings> <other optimisations>.

              Tip

              If you plan to engage in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

              "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

              A frequently occurring characteristic of scientific computations is their focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

              Users then often want to submit a large number of jobs based on the same job script but with (i) slightly different parameter settings or with (ii) different input files.

              These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The user wants to run their job once for each instance of the parameter values.

              One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs: such huge numbers of small jobs create a lot of overhead and can slow down the whole cluster. It is better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

              The \"Worker framework\" has been developed to address this issue.

              It can handle many small jobs determined by:

              parameter variations

              i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

              job arrays

              i.e., each individual job gets a unique numeric identifier.

              Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

              However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

              "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

              First go to the right directory:

              cd ~/examples/Multi-job-submission/par_sweep\n

              Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

              $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

              For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

              par_sweep/weather
              #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

              A job script that would run this as a job for the first parameters (p01) would then look like:

              par_sweep/weather_p01.pbs
              #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

              When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

              To submit the job, the user would use:

               $ qsub weather_p01.pbs\n
              However, the user wants to run this program for many parameter instances, e.g., for 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv) that can be generated using a spreadsheet program such as Microsoft Excel, exported from an RDBMS, or simply written by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

              $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

              It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.
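              Such a file does not have to be typed in by hand either; as an illustration, a small (hypothetical) shell loop could generate the 100 instances shown above:

              #!/bin/bash\n# write the header line with the variable names\necho \"temperature, pressure, volume\" > data.csv\n# generate 100 parameter instances (293..392)\nfor t in $(seq 293 392); do\n  echo \"$t, 1.0e5, $((400 - t))\" >> data.csv\ndone\n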

              In order to make our PBS generic, the PBS file can be modified as follows:

              par_sweep/weather.pbs
              #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

              Note that:

              1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, whose names are specified on the first line of the \"data.csv\" file;

              2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

              3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

              The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., 4 hours to be on the safe side.

              The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

              $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 100\n123456\n

              Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

              Warning

              When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

              module swap env/slurm/donphan\n

              instead of

              module swap cluster/donphan\n
              We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

              "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

              First go to the right directory:

              cd ~/examples/Multi-job-submission/job_array\n

              As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

              The following bash script would submit these jobs all one by one:

              #!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output_$i -i input_$i myprog.pbs\ndone\n

              This, as mentioned before, would put a heavy burden on the job scheduler.

              Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

              Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

              The details are

              1. a job is submitted for each number in the range;

              2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid for easy killing etc.; and

              3. each job has PBS_ARRAYID set to its number which allows the script/program to specialise for that job

              The job could have been submitted using:

              qsub -t 1-100 my_prog.pbs\n

              The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

              To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

              A typical job script for use with job arrays would look like this:

              job_array/job_array.pbs
              #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

              In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

              Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

              $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file \\#99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

              For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory, in files output_1.dat, output_2.dat, ..., output_100.dat.

              job_array/test_set
              #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

              Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

              job_array/test_set.pbs
              #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

              Note that

              1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

              2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

              The job is now submitted as follows:

              $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n123456\n

              The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

              Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

              $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n123456  test_set.pbs  vsc40000          0 Q\n

              And you can now check the generated output files:

              $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
              "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

              Often, an embarrassingly parallel computation can be abstracted to three simple steps:

              1. a preparation phase in which the data is split up into smaller, more manageable chunks;

              2. on these chunks, the same algorithm is applied independently (these are the work items); and

              3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

              The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

              cd ~/examples/Multi-job-submission/map_reduce\n

              The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

              First study the scripts:

              map_reduce/pre.sh
              #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
              map_reduce/post.sh
              #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

              Then one can submit a MapReduce style job as follows:

              $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n123456\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

              Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

              "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

              The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute node; it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

              The \"Worker Framework\" will be effective when

              1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

              2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

              "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

              Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log123456, assuming the job's ID is 123456. To keep an eye on the progress, one can use:

              tail -f run.pbs.log123456\n

              Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

              watch -n 60 wsummarize run.pbs.log123456\n

              This will summarise the log file every 60 seconds.

              "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

              Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

              #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 ./weather -t $temperature  -p $pressure  -v $volume\n

              Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.

              Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

              "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

              Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID 123456.

              wresume -jobid 123456\n

              This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

              wresume -l walltime=1:30:00 -jobid 123456\n

              Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming will only execute work items that did not terminate at all, whether successfully or with a failure. It is also possible to retry work items that failed (preferably after the cause of the failure has been fixed).

              wresume -jobid 123456 -retry\n

              By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

              "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

              This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

              $ wsub -help\n### usage: wsub  -batch &lt;batch-file&gt;          \n#                [-data &lt;data-files&gt;]         \n#                [-prolog &lt;prolog-file&gt;]      \n#                [-epilog &lt;epilog-file&gt;]      \n#                [-log &lt;log-file&gt;]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t &lt;array-req&gt;]             \n#                [&lt;pbs-qsub-options&gt;]\n#\n#   -batch &lt;batch-file&gt;   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data &lt;data-files&gt;    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog &lt;prolog-file&gt; : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog &lt;epilog-file&gt; : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t &lt;array-req&gt;        : qsub's PBS array request options, e.g., 1-10\n#   &lt;pbs-qsub-options&gt;    : options passed on to the queue submission\n#                           command\n
              "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

              When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

              To check for the available versions of worker, use the following command:

              $ module avail worker\n
              1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

              "}, {"location": "mympirun/", "title": "Mympirun", "text": "

              mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

              In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

              "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

              Before using mympirun, we first need to load its module:

              module load vsc-mympirun\n

              As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

              The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

              For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.
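              In practice you will usually call mympirun from inside a job script. A minimal sketch (the module names and requested resources are just examples):

              #!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=01:00:00\ncd $PBS_O_WORKDIR\n# load the MPI toolchain your program was built with, plus mympirun\nmodule load intel vsc-mympirun\nmympirun ./mpi_hello\n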

              "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

              There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

              By default, mympirun starts one process per core on every node you were assigned. So if you were assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

              "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

              This is the most commonly used option for controlling the number of processes.

              The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

              $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
              "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

              There's also --universe, which sets the exact amount of processes started by mympirun; --double, which uses double the amount of processes it normally would; and --multi that does the same as --double, but takes a multiplier (instead of the implied factor 2 with --double).
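              As an illustration, these options are used as follows (a sketch, reusing the mpi_hello example from above):

              # start exactly 8 processes in total\nmympirun --universe 8 ./mpi_hello\n# start twice as many processes as mympirun would by default\nmympirun --double ./mpi_hello\n# start 3 times as many processes as mympirun would by default\nmympirun --multi 3 ./mpi_hello\n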

              See vsc-mympirun README for a detailed explanation of these options.

              "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

              You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

              $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
              "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

              In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC HPC infrastructure.

              "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

              There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

              • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

                • see also http://openfoam.com/history/
              • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

                • see also https://openfoam.org/download/history/
              • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

              Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

              "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

              The best practices outlined here focus specifically on the use of OpenFOAM on the VSC HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

              • OpenFOAM websites:

                • https://openfoam.com

                • https://openfoam.org

                • http://wikki.gridcore.se/foam-extend

              • OpenFOAM user guides:

                • https://www.openfoam.com/documentation/user-guide

                • https://cfd.direct/openfoam/user-guide/

              • OpenFOAM C++ source code guide: https://cpp.openfoam.org

              • tutorials: https://wiki.openfoam.com/Tutorials

              • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

              Other useful OpenFOAM documentation:

              • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

              • http://www.dicat.unige.it/guerrero/openfoam.html

              "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

              To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

              "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

              First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

              $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

              To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

              To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

              module load OpenFOAM/11-foss-2023a\n
              "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

              OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

              source $FOAM_BASH\n
              "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

              If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

              source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

              Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
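              Putting these steps together, preparing the environment in a shell session or job script boils down to the following (the module version is just an example; pick one from module avail OpenFOAM):

              module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\n# only needed if you use the tutorial helper functions\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n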

              "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

              If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

              unset FOAM_SIGFPE\n

              Note that this only prevents OpenFOAM from propagating floating point exceptions, which then results in terminating the simulation. However, it does not prevent that illegal operations (like a division by zero) are being executed; if NaN values appear in your results, floating point errors are occurring.

              As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

              "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

              The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

              • generate the mesh;

              • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

              After running the simulation, some post-processing steps are typically performed:

              • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

              • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

              Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job running the actual simulation, either on the HPC infrastructure or elsewhere, or as a part of the job that runs the OpenFOAM simulation itself.

              Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

              One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

              For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.

              "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

              For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

              "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

              When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

              You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different than '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar can not be run in parallel.

              "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

              It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

              See Basic usage for how to get started with mympirun.

              To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

              export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

              Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
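              As a sketch, a job script for a parallel OpenFOAM run could look like the following; the module version, the requested resources and the solver name (simpleFoam is only a placeholder here) depend on your own case:

              #!/bin/bash\n#PBS -l nodes=2:ppn=16\n#PBS -l walltime=12:00:00\ncd $PBS_O_WORKDIR\nmodule load OpenFOAM/11-foss-2023a vsc-mympirun\nsource $FOAM_BASH\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# run the solver on all cores assigned to the job\nmympirun simpleFoam -parallel\n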

              "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

              To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

              Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

              number of processor directories = 4 is not equal to the number of processors = 16\n

              In this case, the case was decomposed in 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

              • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

              • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

See Controlling number of processes to control the number of processes mympirun will start.

This is useful if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has a significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
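
As a minimal sketch (with hypothetical values), the relevant decomposeParDict entries for a 4-subdomain scotch decomposition could be inspected as follows; the output shown is what such a dictionary might contain, not the output of an actual run:

$ grep -E 'numberOfSubdomains|method' system/decomposeParDict\nnumberOfSubdomains 4;\nmethod             scotch;\n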

              To visualise the processor domains, use the following command:

              mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

              and then load the VTK files generated in the VTK folder into ParaView.

              "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

              OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict); a sketch of a corresponding controlDict snippet is shown after this list.

• instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc.\u00a0keywords;

              • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

• consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

• if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to prevent OpenFOAM from re-reading each of the system/*Dict files at every time step;

              • if the results per individual time step are large, consider setting writeCompression to true;
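
The sketch below shows what the corresponding entries in system/controlDict could look like; the values are hypothetical and should be adjusted to what makes sense for your simulation:

$ grep -E 'writeControl|writeInterval|purgeWrite|writeCompression|runTimeModifiable' system/controlDict\nwriteControl      timeStep;\nwriteInterval     100;\npurgeWrite        5;\nwriteCompression  true;\nrunTimeModifiable false;\n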

For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

              For large parallel OpenFOAM simulations on the UGent Tier-2 clusters, consider using the alternative shared scratch filesystem $VSC_SCRATCH_ARCANINE (see Pre-defined user directories).

These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple of dozen processor cores.

              "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

              See https://cfd.direct/openfoam/user-guide/compiling-applications/.

              "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

              Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

              OpenFOAM_damBreak.sh
#!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not available on victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
              "}, {"location": "program_examples/", "title": "Program examples", "text": "

If you have not done so already, copy our examples to your home directory by running the following command:

               cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

              Go to our examples:

              cd ~/examples/Program-examples\n

Here we have put together a number of examples for your convenience. We made an effort to include comments in the source files, so the source code files are (should be) self-explanatory.

              1. 01_Python

              2. 02_C_C++

              3. 03_Matlab

              4. 04_MPI_C

              5. 05a_OMP_C

              6. 05b_OMP_FORTRAN

              7. 06_NWChem

              8. 07_Wien2k

              9. 08_Gaussian

              10. 09_Fortran

              11. 10_PQS

The two OMP directories above (05a and 05b) contain the following examples:

              C Files Fortran Files Description omp_hello.c omp_hello.f Hello world omp_workshare1.c omp_workshare1.f Loop work-sharing omp_workshare2.c omp_workshare2.f Sections work-sharing omp_reduction.c omp_reduction.f Combined parallel loop reduction omp_orphan.c omp_orphan.f Orphaned parallel loop reduction omp_mm.c omp_mm.f Matrix multiply omp_getEnvInfo.c omp_getEnvInfo.f Get and print environment information omp_bug* omp_bug* Programs with bugs and their solution

              Compile by any of the following commands:

              Language Commands C: icc -openmp omp_hello.c -o hello pgcc -mp omp_hello.c -o hello gcc -fopenmp omp_hello.c -o hello Fortran: ifort -openmp omp_hello.f -o hello pgf90 -mp omp_hello.f -o hello gfortran -fopenmp omp_hello.f -o hello

Feel free to explore the examples.

              "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

Remember to substitute the usernames, login nodes, file names, ... with your own.

Login Login ssh vsc40000@login.hpc.ugent.be Where am I? hostname Copy to HPC scp foo.txt vsc40000@login.hpc.ugent.be: Copy from HPC scp vsc40000@login.hpc.ugent.be:foo.txt Setup ftp session sftp vsc40000@login.hpc.ugent.be Modules List all available modules module avail List loaded modules module list Load module module load example Unload module module unload example Unload all modules module purge Help on use of module module help Command Description qsub script.pbs Submit job with job script script.pbs qstat 12345 Status of job with ID 12345 qstat -n 12345 Show compute node of job with ID 12345 qdel 12345 Delete job with ID 12345 qstat Status of all your jobs qstat -na Detailed status of your jobs + a list of nodes they are running on qsub -I Submit interactive job Disk quota Check your disk quota see https://account.vscentrum.be Disk usage in current directory (.) du -h Worker Framework Load worker module module load worker/1.6.12-foss-2021b Don't forget to specify a version. To list available versions, use module avail worker/ Submit parameter sweep wsub -batch weather.pbs -data data.csv Submit job array wsub -t 1-100 -batch test_set.pbs Submit job array with prolog and epilog wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

              Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

              "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

              This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

              It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

              "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

              For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

              $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

              "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

              To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

              This includes (per user):

              • max. of 2 CPU cores in use
              • max. 8 GB of memory in use

              For more intensive tasks you can use the interactive and debug clusters through the web portal.

              "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

The migration to RHEL 9 as operating system should not impact your workflow: everything will basically keep working as it did before (incl. job submission, etc.).

              However, there will be impact on the availability of software that is made available via modules.

              Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

              This includes all software installations on top of a compiler toolchain that is older than:

              • GCC(core)/12.3.0
              • foss/2023a
              • intel/2023a
              • gompi/2023a
              • iimpi/2023a
              • gfbf/2023a

              (or another toolchain with a year-based version older than 2023a)

              The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

              foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

              If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.
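
For example, to check (hypothetically, for OpenFOAM) whether an installation built with a more recent toolchain is already available:

# look for versions with a 2023a (or newer) toolchain suffix\nmodule avail OpenFOAM/\n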

It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will soon provide more RHEL 9 nodes on other clusters to test on.

              "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

              We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

              cluster migration start migration completed on skitty Monday 30 September 2024 joltik October 2024 accelgor November 2024 gallade December 2024 donphan February 2025 doduo (default cluster) February 2025 login nodes switch February 2025

Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to RHEL 9 login nodes will be done at the same time.

              We will keep this page up to date when more specific dates have been planned.

              Warning

This planning is subject to change; some clusters may get migrated later than originally planned.

              Please check back regularly.

              "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

              If you have any questions related to the migration to the RHEL 9 operating system, please contact the HPC-UGent team.

              "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

              In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

When you connect to the HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decides when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly, and this is only allowed on nodes where you have a job running. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the HPC the entire time.

              The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

              "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

Software installation and maintenance on an HPC cluster such as the VSC clusters pose a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the HPC that can easily activate or deactivate the software packages that you require for your program execution.

              "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

              The program environment on the HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

              All the software packages that are installed on the HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

              "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

              In order to administer the active software and their environment variables, the module system has been developed, which:

              1. Activates or deactivates software packages and their dependencies.

              2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

              3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

              5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

              This is all managed with the module command, which is explained in the next sections.

              There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

              "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

              A large number of software packages are installed on the HPC clusters. A list of all currently available software can be obtained by typing:

              module available\n

It's also possible to execute module av or module avail; these are shorter to type and will do the same thing.

              This will give some output such as:

              module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

You can also check whether a specific piece of software, a compiler, or an application (e.g., MATLAB) is installed on the HPC:

              module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

              This gives a full list of software packages that can be loaded.

              The casing of module names is important: lowercase and uppercase letters matter in module names.

              "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

The number of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, an MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

              E.g., foss/2024a is the first version of the foss toolchain in 2024.

              The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.
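
As a sketch, modules that share the same toolchain suffix can safely be combined, while mixing different toolchain versions should be avoided (the module names below reuse the examples from this documentation):

# OK: both modules were built with the intel-2016b toolchain\nmodule load Python/2.7.12-intel-2016b examplelib/1.2-intel-2016b\n# avoid: combining e.g. an intel-2016b module with an intel-2018a or foss-2016b module\n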

              "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

              To \"activate\" a software package, you load the corresponding module file using the module load command:

              module load example\n

              This will load the most recent version of example.

For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographically last version after the /).

However, you should specify a particular version to avoid surprises when newer versions are installed:

              module load secondexample/2.7-intel-2016b\n

              The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

              Modules need not be loaded one by one; the two module load commands can be combined as follows:

              module load example/1.2.3 secondexample/2.7-intel-2016b\n

              This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

              "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

              Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

              $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

              You can also just use the ml command without arguments to list loaded modules.

              It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

              "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

              To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

$ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

              To unload the secondexample module, you can also use ml -secondexample.

              Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

              "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

              In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

              module purge\n
              This is always safe: the cluster module (the module that specifies which cluster jobs will get submitted to) will not be unloaded (because it's a so-called \"sticky\" module).
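
For example, the output of module purge will typically look something like this (the exact message may vary depending on the Lmod version and the cluster module that is loaded):

$ module purge\nThe following modules were not unloaded:\n  (Use \"module --force purge\" to unload them):\n\n  1) cluster/doduo\n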

              "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

              Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

              Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

              module load example\n

              rather than

              module load example/1.2.3\n

              Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

              Consider the following example modules:

              $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

              Let's now generate a version conflict with the example module, and see what happens.

              $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

              Note: A module swap command combines the appropriate module unload and module load commands.

              "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

              With the module spider command, you can search for modules:

              $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

              It's also possible to get detailed information about a specific module:

              $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \n\n    You will need to load all module(s) on any one of the lines below before the \"example/1.2.3\" module is available to load.\n\n        cluster/accelgor\n        cluster/doduo \n        cluster/donphan\n        cluster/gallade\n        cluster/joltik \n        cluster/skitty\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
              "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

              To get a list of all possible commands, type:

              module help\n

              Or to get more information about one specific module package:

              $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
              "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

              If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

              In each module command shown below, you can replace module with ml.

              First, load all modules you want to include in the collections:

              module load example/1.2.3 secondexample/2.7-intel-2016b\n

              Now store it in a collection using module save. In this example, the collection is named my-collection.

              module save my-collection\n

              Later, for example in a jobscript or a new session, you can load all these modules with module restore:

              module restore my-collection\n

              You can get a list of all your saved collections with the module savelist command:

              $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

              To get a list of all modules a collection will load, you can use the module describe command:

              $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

              To remove a collection, remove the corresponding file in $HOME/.lmod.d:

              rm $HOME/.lmod.d/my-collection\n
              "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

              To see how a module would change the environment, you can use the module show command:

$ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

              It's also possible to use the ml show command instead: they are equivalent.

              Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

              You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

              If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

              "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

              To check the general system state, check https://www.ugent.be/hpc/en/infrastructure/status. This has information about scheduled downtime, status of the system, ...

              "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

              You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

              You can also get this information in text form (per cluster separately) with the pbsmon command:

              $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

              "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

As an example, we will run a Perl script, which you will find in the examples subdirectory on the HPC. When you received an account on the HPC, a subdirectory with examples was automatically generated for you.

              Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

              cd\ncp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

              First go to the directory with the first examples by entering the command:

              cd ~/examples/Running-batch-jobs\n

              Each time you want to execute a program on the HPC you'll need 2 things:

The executable: the program to execute, provided by the end-user, together with its peripheral input files, databases and/or command options.

A batch job script, which will define the computer resource requirements of the program and the required additional software packages, and which will start the actual executable. The HPC needs to know:

              1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

              List and check the contents with:

              $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc40000 609 Sep 11 10:25 fibo.pl\n

              In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

              1. The Perl script calculates the first 30 Fibonacci numbers.

              2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

              We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

              On the command line, you would run this using:

              $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

Remark: Recall that you have now executed the Perl script locally on one of the login-nodes of the HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login-nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute-node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

              fibo.pbs
              #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

              So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

              This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

              $ qsub fibo.pbs\n123456\n

The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"123456\"); this is a unique identifier for the job and can be used to monitor and manage your job.

              Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

To facilitate this, you can use a pre-defined module collection which you can restore using module restore; see the section on Save and load collections of modules for more information.
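
A minimal sketch of a job script that uses such a collection (assuming you have saved a collection named my-collection, as explained in the section on module collections):

#!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=0:10:0\n# load the modules this job needs via a saved collection\nmodule restore my-collection\ncd $PBS_O_WORKDIR\n./fibo.pl\n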

              Your job is now waiting in the queue for a free workernode to start on.

              Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

              After your job was started, and ended, check the contents of the directory:

              $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc40000 vsc40000   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc40000 vsc40000    0 Feb 28 13:33 fibo.pbs.e123456\n-rw------- 1 vsc40000 vsc40000 1010 Feb 28 13:33 fibo.pbs.o123456\n-rwxrwxr-x 1 vsc40000 vsc40000  302 Feb 28 13:32 fibo.pl\n

              Explore the contents of the 2 new files:

              $ more fibo.pbs.o123456\n$ more fibo.pbs.e123456\n

These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('123456' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script).

              "}, {"location": "running_batch_jobs/#when-will-my-job-start", "title": "When will my job start?", "text": "

In practice it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires, and new jobs may be submitted by other users that are assigned a higher priority than your job(s).

              The HPC-UGent infrastructure clusters use a fair-share scheduling policy (see HPC Policies). There is no guarantee on when a job will start, since it depends on a number of factors. One of these factors is the priority of the job, which is determined by:

              • Historical use: the aim is to balance usage over users, so infrequent (in terms of total compute time used) users get a higher priority

              • Requested resources (amount of cores, walltime, memory, ...). The more resources you request, the more likely it is the job(s) will have to wait for a while until those resources become available.

              • Time waiting in queue: queued jobs get a higher priority over time.

              • User limits: this avoids having a single user use the entire cluster. This means that each user can only use a part of the cluster.

              • Whether or not you are a member of a Virtual Organisation (VO).

                Each VO gets assigned a fair share target, which has a big impact on the job priority. This is done to let the job scheduler balance usage across different research groups.

                If you are not a member of a specific VO, you are sharing a fair share target with all other users who are not in a specific VO (which implies being in the (hidden) default VO). This can have a (strong) negative impact on the priority of your jobs compared to the jobs of users who are in a specific VO.

                See Virtual Organisations for more information on how to join a VO, or request the creation of a new VO if there is none yet for your research group.

              Some other factors are how busy the cluster is, how many workernodes are active, the resources (e.g., number of cores, memory) provided by each workernode, ...

              It might be beneficial to request less resources (e.g., not requesting all cores in a workernode), since the scheduler often finds a \"gap\" to fit the job into more easily.

Sometimes it happens that a couple of nodes are free while your job still does not start. Empty nodes are not necessarily available for your job(s). Imagine that an N-node job (with a higher priority than your waiting job(s)) is scheduled to run. It is quite unlikely that N nodes become empty at the same moment to accommodate this job, so while fewer than N nodes are empty, they are effectively being held for it even though they appear to be free. The moment the Nth node becomes empty, the waiting N-node job will consume these N free nodes.

              "}, {"location": "running_batch_jobs/#specifying-the-cluster-on-which-to-run", "title": "Specifying the cluster on which to run", "text": "

To use other clusters, you can swap the cluster module. This is a special module that changes which modules are available to you, and which cluster your jobs will be queued in.

              By default you are working on doduo. To switch to, e.g., donphan you need to redefine the environment so you get access to all modules installed on the donphan cluster, and to be able to submit jobs to the donphan scheduler so your jobs will start on donphan instead of the default doduo cluster.

              module swap cluster/donphan\n

Note: the donphan modules may not work directly on the login nodes, because the login nodes do not have the same architecture as the donphan cluster; they have the same architecture as the doduo cluster, which is why software works on the login nodes by default. See the section on Running software that is incompatible with host for why this is and how to fix it.

              To list the available cluster modules, you can use the module avail cluster/ command:

              $ module avail cluster/\n--------------------------------------- /etc/modulefiles/vsc ----------------------------------------\n   cluster/accelgor (S)    cluster/doduo   (S,L)    cluster/gallade (S)    cluster/skitty  (S)\n   cluster/default         cluster/donphan (S)      cluster/joltik  (S)\n\n  Where:\n   S:  Module is Sticky, requires --force to unload or purge\n   L:  Module is loaded\n   D:  Default Module\n\nIf you need software that is not listed, \nrequest it via https://www.ugent.be/hpc/en/support/software-installation-request\n

              As indicated in the output above, each cluster module is a so-called sticky module, i.e., it will not be unloaded when module purge (see the section on purging modules) is used.

The output of the various commands interacting with jobs (qsub, qstat, ...) depends on which cluster module is loaded.

              "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

It is possible to submit jobs from within a job to a cluster different from the one the job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

To submit jobs to the donphan cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/donphan instead of using module swap cluster/donphan. The latter command also activates the software modules that are installed specifically for donphan, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the donphan cluster. The same approach can be used to submit jobs to another cluster, of course.

              Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the doduo cluster, loading the cluster/doduo module corresponds to loading 3 different env/ modules:

              env/ module for doduo Purpose env/slurm/doduo Changes $SLURM_CLUSTERS which specifies the cluster where jobs are sent to. env/software/doduo Changes $MODULEPATH, which controls what software modules are available for loading. env/vsc/doduo Changes the set of $VSC_ environment variables that are specific to the doduo cluster

We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand what they are doing exactly, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

We also recommend using a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
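
A minimal sketch of this workflow, submitting to donphan from a doduo environment and then resetting (job_for_donphan.pbs is a hypothetical job script):

# only redirect job submission to donphan, keep the current software stack\nmodule swap env/slurm/donphan\nqsub job_for_donphan.pbs\n# afterwards, reset the environment to a sane state\nmodule swap cluster/doduo\n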

              "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

              Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

              qstat 12345\n

              To show on which compute nodes your job is running, at least, when it is running:

              qstat -n 12345\n

To remove a job from the queue so that it will not run, or to stop a job that is already running:

              qdel 12345\n

              When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

              $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n123456 ....     mpi  vsc40000     0    Q short\n

              Here:

              Job ID the job's unique identifier

              Name the name of the job

              User the user that owns the job

              Time Use the elapsed walltime for the job

              Queue the queue the job is in

              The state S can be any of the following:

State Meaning Q The job is queued and is waiting to start. R The job is currently running. E The job is currently exiting after having run. C The job is completed after having run. H The job has a user or system hold on it and will not be eligible to run until the hold is removed.

              User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.

              "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

There is currently (since May 2019) no way to get an overall view of the state of the cluster queues for the HPC-UGent infrastructure, due to changes to the cluster resource management software (and also because a general overview is mostly meaningless since it doesn't give any indication of the resources requested by the queued jobs).

              "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

              Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

              It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

              "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

              The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

              qsub -l walltime=2:30:00 ...\n

              For the simplest cases, only the amount of maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but use a default value (one hour on most clusters).

              The maximum walltime for HPC-UGent clusters is 72 hours.

              If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
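
A minimal sketch of this pattern using the standard timeout command (the job script below assumes a requested walltime of 2:30:00, a hypothetical ./main_command and a results directory to copy back; the main command is stopped after at most 2 hours and 20 minutes, leaving some margin):

#!/bin/bash\n#PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# stop the main command well before the walltime is reached\ntimeout -s SIGTERM 140m ./main_command\n# copy results back while there is still time left\ncp -a results $VSC_DATA/\n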

              qsub -l mem=4gb ...\n

              The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

The default memory reserved for a job on any given HPC-UGent cluster is the \"usable memory per node\" divided by the \"number of cores in a node\", multiplied by the requested processor core(s) (ppn). Jobs that do not define their memory requirements, either as a command line option or as a memory directive in the job script, will be given this default memory. Please note that using the default memory is recommended. For \"usable memory per node\" and \"number of cores in a node\" please consult https://www.ugent.be/hpc/en/infrastructure.
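
For example (hypothetical numbers, matching the node type shown in the pbsmon output earlier): on a node with roughly 750 GB of usable memory and 36 cores, a job requesting -l nodes=1:ppn=4 gets about 750/36 × 4 ≈ 83 GB of memory by default:

# default memory ≈ usable memory per node / cores per node * requested ppn\necho 'scale=1; 750/36*4' | bc\n# 83.2\n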

              qsub -l nodes=5:ppn=2 ...\n

              The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

              qsub -l nodes=1:westmere\n

              The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

              These options can either be specified on the command line, e.g.

qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

              or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

              #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

              Note that the resources requested on the command line will override those specified in the PBS file.

              "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

              At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

              When you navigate to that directory and list its contents, you should see them:

              $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc40000  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc40000   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc40000   52 Sep 11 11:03 fibo.pbs.e123456\n-rw------- 1 vsc40000 1307 Sep 11 11:03 fibo.pbs.o123456\n

              In our case, our job has created both an output file ('fibo.pbs.o123456') and an error file ('fibo.pbs.e123456'), containing the info written to stdout and stderr respectively.

              Inspect the generated output and error files:

              $ cat fibo.pbs.o123456\n...\n$ cat fibo.pbs.e123456\n...\n
              "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

              You can instruct the HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

              #PBS -m b \n#PBS -m e \n#PBS -m a\n

              or

              #PBS -m abe\n

              These options can also be specified on the command line. Try it and see what happens:

              qsub -m abe fibo.pbs\n

              The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

              qsub -m b -M john.smith@example.com fibo.pbs\n

              will send an e-mail to john.smith@example.com when the job begins.
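
              The same settings can of course also be placed in the job script itself. A minimal sketch (the e-mail address is just a placeholder):

              #!/bin/bash\n#PBS -l walltime=1:00:00\n#PBS -m abe\n#PBS -M john.smith@example.com\ncd $PBS_O_WORKDIR\n./fibo.pl\n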

              "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

              If you submit two jobs that are expected to run one after another (for example because the first generates a file the second needs), there might be a problem: both jobs might be run at the same time.

              So the following example might go wrong:

              $ qsub job1.sh\n$ qsub job2.sh\n

              You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

              $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

              afterok means \"After OK\", or in other words, after the first job successfully completed.

              It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
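
              As a sketch, a small three-job pipeline where analyse.sh should only run when job1.sh succeeds, while cleanup.sh should run in either case (the script names are hypothetical):

              $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID analyse.sh\n$ qsub -W depend=afterany:$FIRST_ID cleanup.sh\n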

              1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

              "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

              Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

              Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script, the required PBS directives can be specified on the command line.

              Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the HPC-UGent infrastructure. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

              The syntax for qsub for submitting an interactive PBS job is:

              $ qsub -I <... pbs directives ...>\n
              "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

              Tip

              Find the code in \"~/examples/Running_interactive_jobs\"

              First of all, in order to know on which computer you're working, enter:

              $ hostname -f\ngligar07.gastly.os\n

              This means that you're now working on the login node gligar07.gastly.os of the cluster.

              The most basic way to start an interactive job is the following:

              $ qsub -I\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n

              There are two things of note here.

              1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

              2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

              In order to know on which compute node you're working, enter again:

              $ hostname -f\nnode3501.doduo.gent.vsc\n

              Note that we are now working on the compute node called \"node3501.doduo.gent.vsc\". This is the compute node that was assigned to us by the scheduler after issuing the \"qsub -I\" command.

              Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit (>1) and will print all the primes between 1 and your upper limit:

              $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

              You can exit the interactive session with:

              $ exit\n

              Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

              You can work for 3 hours by:

              qsub -I -l walltime=03:00:00\n

              If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.
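
              Other resource requirements can be combined with the -I flag in the same way as for batch jobs; for example, the following requests a single node with 4 cores for 2 hours:

              qsub -I -l nodes=1:ppn=4 -l walltime=02:00:00\n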

              "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

              To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

              The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

              "}, {"location": "running_interactive_jobs/#install-xming", "title": "Install Xming", "text": "

              The first task is to install the Xming software.

              1. Download the Xming installer from the following address: http://www.straightrunning.com/XmingNotes/. Either download Xming from the Public Domain Releases (free) or from the Website Releases (after a donation) on the website.

              2. Run the Xming setup program on your Windows desktop.

              3. Keep the proposed default folders for the Xming installation.

              4. When selecting the components that need to be installed, make sure to select \"XLaunch wizard\" and \"Normal PuTTY Link SSH client\".

              5. We suggest creating a Desktop icon for Xming and XLaunch.

              6. And Install.

              And now we can run Xming:

              1. Select XLaunch from the Start Menu or by double-clicking the Desktop icon.

              2. Select Multiple Windows. This will open each application in a separate window.

              3. Select Start no client to make XLaunch wait for other programs (such as PuTTY).

              4. Select Clipboard to share the clipboard.

              5. Finally Save configuration into a file. You can keep the default filename and save it in your Xming installation directory.

              6. Now Xming is running in the background ... and you can launch a graphical application in your PuTTY terminal.

              7. Open a PuTTY terminal and connect to the HPC.

              8. In order to test the X-server, run \"xclock\". \"xclock\" is the standard GUI clock for the X Window System.

              xclock\n

              You should see the XWindow clock application appearing on your Windows machine. The \"xclock\" application runs on the login-node of the HPC, but is displayed on your Windows machine.

              You can close your clock and connect further to a compute node with again your X-forwarding enabled:

              $ qsub -I -X\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n$ hostname -f\nnode3501.doduo.gent.vsc\n$ xclock\n

              and you should see your clock again.

              "}, {"location": "running_interactive_jobs/#ssh-tunnel", "title": "SSH Tunnel", "text": "

              In order to work in client/server mode, it is often required to establish an SSH tunnel between your Windows desktop machine and the compute node your job is running on. PuTTY must have been installed on your computer, and you should be able to connect via SSH to the HPC cluster's login node.

              Because of one or more firewalls between your desktop and the HPC clusters, it is generally impossible to communicate directly with a process on the cluster from your desktop except when the network managers have given you explicit permission (which for security reasons is not often done). One way to work around this limitation is SSH tunnelling.

              There are several cases where this is useful:

              1. Running graphical applications on the cluster: The graphical program cannot directly communicate with the X Window server on your local system. In this case, the tunnelling is easy to set up as PuTTY will do it for you if you select the right options on the X11 settings page as explained on the page about text-mode access using PuTTY.

              2. Running a server application on the cluster that a client on the desktop connects to. One example of this scenario is ParaView in remote visualisation mode, with the interactive client on the desktop and the data processing and image rendering on the cluster. This scenario is explained on this page.

              3. Running clients on the cluster and a server on your desktop. In this case, the source port is a port on the cluster and the destination port is on the desktop.

              Procedure: A tunnel from a local client to a specific compute node on the cluster (an equivalent command-line sketch is shown after the steps below)

              1. Log in on the login node via PuTTY.

              2. Start the server job, note the compute node's name the job is running on (e.g., node3501.doduo.gent.vsc), as well as the port the server is listening on (e.g., \"54321\").

              3. Set up the tunnel:

                1. Close your current PuTTY session.

                2. In the \"Category\" pane, expand Connection>SSh, and select as show below:

                3. In the Source port field, enter the local port to use (e.g., 5555).

                4. In the Destination field, enter the compute node's name and port, separated by a colon (e.g., node3501.doduo.gent.vsc:54321 as in the example above; these are the details you noted in the second step).

                5. Click the Add button.

                6. Click the Open button.

                7. The tunnel is now ready to use.
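
                On a Linux or macOS machine, an equivalent tunnel can be created directly with the ssh command instead of PuTTY. A hedged sketch, reusing the example values above and assuming that vsc40000 is your VSC account and that login.hpc.ugent.be is the login node address you normally connect to (adjust both to your own situation):

                ssh -L 5555:node3501.doduo.gent.vsc:54321 vsc40000@login.hpc.ugent.be\n

                As long as this connection stays open, a client on your desktop can connect to localhost:5555 and will transparently reach the server listening on port 54321 on the compute node.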

                  "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

                  We have developed a little interactive program that shows the communication in two directions. It will send information to your local screen, but it also asks you to click a button.

                  Now run the message program:

                  cd ~/examples/Running_interactive_jobs\n./message.py\n

                  You should see the following message appearing.

                  Click any button and see what happens.

                  -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
                  "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

                  You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where your standard output and error messages go, and where you can collect your results.

                  "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

                  First go to the directory:

                  cd ~/examples/Running_jobs_with_input_output_data\n

                  Note

                  If the example directory is not yet present, copy it to your home directory:

                  cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  List and check the contents with:

                  $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc40000   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc40000   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file3.py\n

                  Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

                  file1.py
                  #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

                  The code of the Python script is self-explanatory:

                  1. In step 1, we write something to the file Hello.txt in the current directory.

                  2. In step 2, we write some text to stdout.

                  3. In step 3, we write to stderr.

                  Check the contents of the first job script:

                  file1a.pbs
                  #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

                  You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

                  Submit it:

                  qsub file1a.pbs\n

                  After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

                  $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc40000   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc40000  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc40000  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc40000   91 Sep 13 13:13 file1a.pbs.e123456\n-rw------- 1 vsc40000  105 Sep 13 13:13 file1a.pbs.o123456\n-rw-rw-r-- 1 vsc40000  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc40000  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file3.py*\n

                  Some observations:

                  1. The file Hello.txt was created in the current directory.

                  2. The file file1a.pbs.o123456 contains all the text that was written to the standard output stream (\"stdout\").

                  3. The file file1a.pbs.e123456 contains all the text that was written to the standard error stream (\"stderr\").

                  Inspect their contents ...\u00a0and remove the files

                  $ cat Hello.txt\n$ cat file1a.pbs.o123456\n$ cat file1a.pbs.e123456\n$ rm Hello.txt file1a.pbs.o123456 file1a.pbs.e123456\n

                  Tip

                  Type cat H and press the Tab key, and it will expand into cat Hello.txt.

                  "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

                  Check the contents of the job script and execute it.

                  file1b.pbs
                  #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

                  Inspect the contents again ...\u00a0and remove the generated files:

                  $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e123456\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o123456\n$ rm Hello.txt my_serial_job.*\n

                  Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrote the JOBNAME variable, and resulted in a different name for the stdout and stderr files. This name is also shown in the second column of the output of the \"qstat\" command. If no name is provided, the job name defaults to the name of the job script.
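
                  The job name can also be set on the command line instead of in the job script, for example (with a hypothetical name):

                  qsub -N my_other_job file1a.pbs\n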

                  "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

                  You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

                  file1c.pbs
                  #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
                  "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

                  The HPC cluster offers its users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

                  "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

                  Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

                  The following locations are available:

                  Long-term storage (slow filesystem, intended for smaller files):

                  $VSC_HOME: For your configuration files and other small files; see the section on your home directory. The default directory is user/Gent/xxx/vsc40000. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites.

                  $VSC_DATA: A bigger \"workspace\", for datasets, results, logfiles, etc.; see the section on your data directory. The default directory is data/Gent/xxx/vsc40000. The same file system is accessible from all sites.

                  Fast temporary storage:

                  $VSC_SCRATCH_NODE: For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content.

                  $VSC_SCRATCH: For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Gent/xxx/vsc40000. This directory is cluster- or site-specific: on different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content.

                  $VSC_SCRATCH_SITE: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space.

                  $VSC_SCRATCH_GLOBAL: Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space.

                  $VSC_SCRATCH_CLUSTER: The scratch filesystem closest to the cluster.

                  $VSC_SCRATCH_ARCANINE: A separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads.

                  Since these directories are not necessarily mounted on the same locations over all sites, you should always (try to) use the environment variables that have been created.
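
                  For example, the following commands rely only on these environment variables and therefore work unchanged on every cluster (the subdirectory and file names are just placeholders):

                  echo $VSC_HOME $VSC_DATA $VSC_SCRATCH\nmkdir -p $VSC_SCRATCH/my_project\ncp $VSC_DATA/input.dat $VSC_SCRATCH/my_project/\n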

                  We elaborate more on the specific function of these locations in the following sections.

                  Note: $VSC_SCRATCH_KYUKON and $VSC_SCRATCH are the same directories (\"kyukon\" is the name of the storage cluster where the default shared scratch filesystem is hosted).

                  For documentation about VO directories, see the section on VO directories.

                  "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

                  Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

                  The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

                  The operating system also creates a few files and folders here to manage your account. Examples are:

                  .ssh/: This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing!

                  .bash_profile: When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt.

                  .bashrc: This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts.

                  .bash_history: This file contains the commands you typed at your shell prompt, in case you need them again."}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

                  In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

                  The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

                  If you are running out of quota on your $VSC_DATA filesystem you can join an existing VO, or request a new VO. See the section about virtual organisations on how to do this.

                  "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

                  To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

                  You should remove any data from these systems once you have finished processing it. There are no guarantees about the time your data will be stored on this system, and we plan to clean these automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies remain in place forever, and may change them if this seems necessary for the healthy operation of the cluster.
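
                  A common pattern is therefore to stage input data into the scratch space at the start of a job, run the computation there, copy the results to $VSC_DATA and clean up afterwards. A minimal job script sketch of this pattern (the program and file names are placeholders):

                  #!/bin/bash\n#PBS -l walltime=01:00:00\n# create a job-specific working directory on the scratch filesystem\nWORKDIR=$VSC_SCRATCH/$PBS_JOBID\nmkdir -p $WORKDIR\ncp $VSC_DATA/input.dat $WORKDIR/\ncd $WORKDIR\n# run the actual computation\n./my_program input.dat > results.txt\n# copy the results back and clean up the scratch directory\ncp results.txt $VSC_DATA/\nrm -rf $WORKDIR\n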

                  Each type of scratch has its own use:

                  Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

                  Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has its own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with fast connection to all the cluster nodes and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

                  Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

                  Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

                  "}, {"location": "running_jobs_with_input_output_data/#your-ugent-home-drive-and-shares", "title": "Your UGent home drive and shares", "text": "

                  In order to access data on your UGent share(s), you need to stage-in the data and stage-out afterwards. On the login nodes, it is possible to access your UGent home drive and shares. To allow this you need a ticket. This requires that you first authenticate yourself with your UGent username and password by running:

                  $ kinit yourugentusername@UGENT.BE\nPassword for yourugentusername@UGENT.BE:\n

                  Now you should be able to access your files by running:

                  $ ls /UGent/yourugentusername\nhome shares www\n

                  Please note the shares will only be mounted when you access this folder. You should specify your complete username - tab completion will not work.

                  If you want to use the UGent shares for longer than 24 hours, you can request a ticket valid for up to a week by running

                  kinit yourugentusername@UGENT.BE -r 7\n

                  You can verify your authentication ticket and expiry dates yourself by running klist

                  $ klist\n...\nValid starting     Expires            Service principal\n14/07/20 15:19:13  15/07/20 01:19:13  krbtgt/UGENT.BE@UGENT.BE\n    renew until 21/07/20 15:19:13\n

                  Your ticket is valid for 10 hours, but you can renew it before it expires.

                  To renew your tickets, simply run

                  kinit -R\n

                  If you want your ticket to be renewed automatically up to the maximum expiry date, you can run

                  krenew -b -K 60\n

                  Each hour the process will check if your ticket should be renewed.

                  We strongly advise disabling access to your shares once it is no longer needed:

                  kdestroy\n

                  If you get an error \"Unknown credential cache type while getting default ccache\" (or similar) and you use conda, then please deactivate conda before you use the commands in this chapter.

                  conda deactivate\n
                  "}, {"location": "running_jobs_with_input_output_data/#ugent-shares-with-globus", "title": "UGent shares with globus", "text": "

                  In order to access your UGent home and shares inside the globus endpoint, you first have to generate authentication credentials on the endpoint. To do that, you have to ssh to the globus endpoint from a login node. You will be prompted for your UGent username and password to authenticate:

                  $ ssh globus\nUGent username:ugentusername\nPassword for ugentusername@UGENT.BE:\nShares are available in globus endpoint at /UGent/ugentusername/\nOverview of valid tickets:\nTicket cache: KEYRING:persistent:xxxxxxx:xxxxxxx\nDefault principal: ugentusername@UGENT.BE\n\nValid starting     Expires            Service principal\n29/07/20 15:56:43  30/07/20 01:56:43  krbtgt/UGENT.BE@UGENT.BE\n    renew until 05/08/20 15:56:40\nTickets will be automatically renewed for 1 week\nConnection to globus01 closed.\n

                  Your shares will then be available at /UGent/ugentusername/ under the globus VSC tier2 endpoint. Tickets will be renewed automatically for 1 week, after which you'll need to run this again. We advise disabling access to your shares within globus once access is no longer needed:

                  $ ssh globus01 destroy\nSuccesfully destroyed session\n
                  "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

                  Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files as well as the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

                  To see a list of your current quota, visit the VSC accountpage: https://account.vscentrum.be. VO moderators can see a list of VO quota usage per member of their VO via https://account.vscentrum.be/django/vo/.

                  The rules are:

                  1. You will only receive a warning when you have reached the soft limit of either quota.

                  2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

                  3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

                  We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. And they help to guarantee a fair use of all available resources for all users. Quota also help to ensure that each folder is used for its intended purpose.

                  "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

                  Tip

                  Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

                  In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

                  1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

                  2. repeat this action 30,000 times;

                  3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

                  Check the Python and the PBS file, and submit the job: Remember that this is already a more serious (disk-I/O and computational intensive) job, which takes approximately 3 minutes on the HPC.

                  $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
                  "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

                  Tip

                  Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

                  In this exercise, you will

                  1. Generate the file \"primes_1.txt\" again as in the previous exercise;

                  2. open the file;

                  3. read it line by line;

                  4. calculate the average of primes in the line;

                  5. count the number of primes found per line;

                  6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

                  Check the Python and the PBS file, and submit the job:

                  $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
                  "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

                  The available disk space on the HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website. (https://vscdocumentation.readthedocs.io/en/latest/hardware.html) As explained in the section on predefined quota, this implies that there are also limits to:

                  • the amount of disk space; and

                  • the number of files

                  that can be made available to each individual HPC user.

                  The quota of disk space and number of files for each HPC user is:

                  HOME: max. 3 GB of disk space, max. 20000 files

                  DATA: max. 25 GB of disk space, max. 100000 files

                  SCRATCH: max. 25 GB of disk space, max. 100000 files

                  Tip

                  The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.

                  Tip

                  If you obtained your VSC account via UGent, you can get (significantly) more storage quota in the DATA and SCRATCH volumes by joining a Virtual Organisation (VO), see the section on virtual organisations for more information. In case of questions, contact hpc@ugent.be.

                  "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

                  You can consult your current storage quota usage on the HPC-UGent infrastructure shared filesystems via the VSC accountpage, see the \"Usage\" section at https://account.vscentrum.be .

                  VO moderators can inspect storage quota for all VO members via https://account.vscentrum.be/django/vo/.

                  To check your storage usage on the local scratch filesystems on VSC sites other than UGent, you can use the \"show_quota\" command (when logged into the login nodes of that VSC site).

                  Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

                  $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632 .\n

                  This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

                  If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

                  $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

                  If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

                  $ du -s\n5632 .\n$ du -s -h\n

                  If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

                  $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

                  Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

                  $ du -h --max-depth 1 $VSC_HOME\n22M /user/home/gent/vsc400/vsc40000/dataset01\n36M /user/home/gent/vsc400/vsc40000/dataset02\n22M /user/home/gent/vsc400/vsc40000/dataset03\n3.5M /user/home/gent/vsc400/vsc40000/primes.txt\n24M /user/home/gent/vsc400/vsc40000/.cache\n
                  "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

                  Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.
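
                  To see which groups your VSC account currently belongs to, you can use standard Linux commands on a login node (vsc40000 is the example account used throughout this documentation):

                  groups            # groups of the current user\nid -Gn vsc40000   # groups of a specific user\n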

                  Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

                  To change the group of a directory and its underlying directories and files, you can use:

                  chgrp -R groupname directory\n
                  "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
                  1. Get the group name you want to belong to.

                  2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

                  3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

                  "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
                  1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

                  2. Fill out the group name. This cannot contain spaces.

                  3. Put a description of your group in the \"Info\" field.

                  4. You will now be a member and moderator of your newly created group.

                  "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

                  Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

                  "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

                  You can get details about the current state of groups on the HPC infrastructure with the following command (where example is the name of the group we want to inspect):

                  $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

                  We can see that the VSC id number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

                  "}, {"location": "running_jobs_with_input_output_data/#virtual-organisations", "title": "Virtual Organisations", "text": "

                  A Virtual Organisation (VO) is a special type of group. You can only be a member of one single VO at a time (or not be in a VO at all). Being in a VO allows for larger storage quota to be obtained (but these requests should be well-motivated).

                  "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-vo", "title": "Joining an existing VO", "text": "
                  1. Get the VO id of the research group you belong to (this id is formed by the letters gvo, followed by 5 digits).

                  2. Go to https://account.vscentrum.be/django/vo/join and fill in the section named \"Join VO\". You will be asked to fill in the VO id and a message for the moderator of the VO, where you identify yourself. This should look something like in the image below.

                  3. After clicking the submit button, a message will be sent to the moderator of the VO, who will either approve or deny the request.

                  "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-vo", "title": "Creating a new VO", "text": "
                  1. Go to https://account.vscentrum.be/django/vo/new and scroll down to the section \"Request new VO\". This should look something like in the image below.

                  2. Fill in why you want to request a VO.

                  3. Fill out both the internal and public VO name. These cannot contain spaces, and should be 8-10 characters long. For example, genome25 is a valid VO name.

                  4. Fill out the rest of the form and press submit. This will send a message to the HPC administrators, who will then either approve or deny the request.

                  5. If the request is approved, you will now be a member and moderator of your newly created VO.

                  "}, {"location": "running_jobs_with_input_output_data/#requesting-more-storage-space", "title": "Requesting more storage space", "text": "

                  If you're a moderator of a VO, you can request additional quota for the VO and its members.

                  1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Request additional quota\". See the image below to see how this looks.

                  2. Fill out how much additional storage you want. In the screenshot below, we're asking for 500 GiB extra space for VSC_DATA, and for 1 TiB extra space on VSC_SCRATCH_KYUKON.

                  3. Add a comment explaining why you need additional storage space and submit the form.

                  4. An HPC administrator will review your request and approve or deny it.

                  "}, {"location": "running_jobs_with_input_output_data/#setting-per-member-vo-quota", "title": "Setting per-member VO quota", "text": "

                  VO moderators can tweak how much of the VO quota each member can use. By default, this is set to 50% for each user, but the moderator can change this: it is possible to give a particular user more than half of the VO quota (for example 80%), or significantly less (for example 10%).

                  Note that the total percentage can be above 100%: the percentages the moderator allocates per user are the maximum percentages of storage users can use.

                  1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Manage per-member quota share\". See the image below to see how this looks.

                  2. Fill out how much percent of the space you want each user to be able to use. Note that the total can be above 100%. In the screenshot below, there are four users. Alice and Bob can use up to 50% of the space, Carl can use up to 75% of the space, and Dave can only use 10% of the space. So in total, 185% of the space has been assigned, but of course only 100% can actually be used.

                  "}, {"location": "running_jobs_with_input_output_data/#vo-directories", "title": "VO directories", "text": "

                  When you're a member of a VO, there will be some additional directories on each of the shared filesystems available:

                  VO scratch ($VSC_SCRATCH_VO): A directory on the shared scratch filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_SCRATCH directory (see the section on your scratch space).

                  VO data ($VSC_DATA_VO): A directory on the shared data filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_DATA directory (see the section on your data directory).

                  If you put _USER after each of these variable names, you can see your personal folder in these filesystems. For example: $VSC_DATA_VO_USER is your personal folder in your VO data filesystem (this is equivalent to $VSC_DATA_VO/$USER), and analogous for $VSC_SCRATCH_VO_USER.
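
                  For example, a short sketch that prints these variables and creates a personal project directory inside the VO scratch space (the directory name is just an example):

                  echo $VSC_DATA_VO $VSC_DATA_VO_USER\necho $VSC_SCRATCH_VO $VSC_SCRATCH_VO_USER\nmkdir -p $VSC_SCRATCH_VO_USER/my_project\n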

                  "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

                  A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

                  "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

                  This section will explain how to create, activate, use and deactivate Python virtual environments.

                  "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

                  A Python virtual environment can be created with the following command:

                  python -m venv myenv      # Create a new virtual environment named 'myenv'\n

                  This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

                  Warning

                  When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

                  "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

                  To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

                  source myenv/bin/activate                    # Activate the virtual environment\n
                  "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

                  After activating the virtual environment, you can install additional Python packages with pip install:

                  pip install example_package1\npip install example_package2\n

                  These packages will be scoped to the virtual environment and will not affect the system-wide Python installation, and are only available when the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

                  It is now possible to run Python scripts that use the installed packages in the virtual environment.
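
                  If you later want to recreate the same environment (for example on another cluster), it can help to record the installed packages in a requirements file; this is standard pip functionality, sketched here:

                  pip freeze > requirements.txt     # record installed packages and their versions\npip install -r requirements.txt   # reinstall them in a fresh virtual environment\n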

                  Tip

                  When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

                  Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

                  To check if a package is available as a module, use:

                  module av package_name\n

                  Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

                  module show module_name\n

                  to check which extensions are included in a module (if any).

                  "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

                  Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

                  example.py
                  import example_package1\nimport example_package2\n...\n
                  python example.py\n
                  "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

                  When you are done using the virtual environment, you can deactivate it. To do that, run:

                  deactivate\n
                  "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

                  You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

                  pytorch_poutyne.py
                  import torch\nimport poutyne\n\n...\n

                  We load a PyTorch package as a module and install Poutyne in a virtual environment:

                  module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

                  While the virtual environment is activated, we can run the script without any issues:

                  python pytorch_poutyne.py\n

                  Deactivate the virtual environment when you are done:

                  deactivate\n
                  "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

                  To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

                  module swap cluster/donphan\nqsub -I\n

                  After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

                  Naming a virtual environment

                  When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

                  python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
                  "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

                  This section will combine the concepts discussed in the previous sections to:

                  1. Create a virtual environment on a specific cluster.
                  2. Combine packages installed in the virtual environment with modules.
                  3. Submit a job script that uses the virtual environment.

                  The example script that we will run is the following:

                  pytorch_poutyne.py
                  import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

                  First, we create a virtual environment on the donphan cluster:

                  module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

                  Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

                  jobscript.pbs
                  #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

                  Next, we submit the job script:

                  qsub jobscript.pbs\n

Two files will be created in the directory where the job was submitted: python_job_example.o123456 and python_job_example.e123456, where 123456 is the id of your job. The .o file contains the output of the job.

                  "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

                  Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

                  For example, if we create a virtual environment on the skitty cluster,

module swap cluster/skitty\nqsub -I\npython -m venv myenv\n

                  return to the login node by pressing CTRL+D and try to use the virtual environment:

                  $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

                  we are presented with the illegal instruction error. More info on this here

                  "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

                  When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

                  python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

                  Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.
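
A minimal sketch of starting such an interactive job with a clean environment (donphan is just an example cluster):

module swap cluster/donphan  # select the target cluster\nmodule list                  # verify that no other modules are loaded\nqsub -I                      # start the interactive job without modules loaded\n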

                  "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

                  There are two main reasons why this error could occur.

                  1. You have not loaded the Python module that was used to create the virtual environment.
                  2. You loaded or unloaded modules while the virtual environment was activated.
                  "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

                  If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

                  The following commands illustrate this issue:

                  $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

                  Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

                  module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
                  "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

                  You must not load or unload modules while in a virtual environment. Loading and unloading modules modifies the $PATH variable in the current shell. When activating a virtual environment, it will store the $PATH variable of the shell at that moment. If you modify the $PATH variable while in a virtual environment by loading or unloading modules, and deactivate the virtual environment, the $PATH variable will be reset to the one stored in the virtual environment. Trying to use those modules will lead to errors:

                  $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

                  The solution is to only modify modules when not in a virtual environment.

                  "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

                  Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

                  One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

                  For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

                  This documentation only covers aspects of using Singularity on the infrastructure.

                  "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to prevent the use of Singularity from impacting other users on the system.

                  The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.
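
For example, a minimal sketch of staging an image on the scratch filesystem before running it (the image name is a placeholder):

cp <image_name>.sif $VSC_SCRATCH/\nsingularity exec $VSC_SCRATCH/<image_name>.sif cat /etc/os-release\n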

If these limitations are a problem for you, please let us know.

                  "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

                  All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

                  "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

Creating new Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can create new Singularity images or convert Docker images.

When you create Singularity images or convert Docker images, the following restrictions apply:

• Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, such as the /tmp or /local directories, as sketched below. Once the image is created, you should move it to your desired destination.
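
A minimal sketch of such a build (the definition file and image names are placeholders):

singularity build --fakeroot /tmp/<image_name>.sif <image_name>.def   # build in a globally writable location\nmv /tmp/<image_name>.sif $VSC_SCRATCH/                                # move the image to its final destination\n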
                  "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

                  For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

                  We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

                  "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

                  Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

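A minimal sketch of such a copy (the image name is a placeholder):

cp /apps/gent/tutorials/Singularity/<image_name>.sif $VSC_SCRATCH/\n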

                  Create a job script like:
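
A minimal sketch of such a job script, assuming the image was copied to $VSC_SCRATCH (the image name is a placeholder):

#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\nsingularity exec $VSC_SCRATCH/<image_name>.sif ./myscript.sh\n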

                  Create an example myscript.sh:

#!/bin/bash\n\n# prime factors\nfactor 1234567\n

                  "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

                  Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

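A minimal sketch of such a copy, together with a possible do-it-yourself conversion of the Docker image (image names are placeholders):

cp /apps/gent/tutorials/<image_name>.sif $VSC_SCRATCH/\n# or, convert the Docker image yourself:\nsingularity build --fakeroot /tmp/tensorflow.sif docker://tensorflow/tensorflow\nmv /tmp/tensorflow.sif $VSC_SCRATCH/\n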

                  You can download linear_regression.py from the official Tensorflow repository.

                  "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

                  It is also possible to execute MPI jobs within a container, but the following requirements apply:

                  • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

                  • Use modules within the container (install the environment-modules or lmod package in your container)

                  • Load the required module(s) before singularity execution.

                  • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

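A minimal sketch of such a copy (the image name is a placeholder):

cp /apps/gent/tutorials/Singularity/<image_name>.sif $VSC_SCRATCH/\n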

For example, to compile an MPI example:

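A minimal sketch, assuming a hypothetical source file mpi_hello.c and a container that provides an MPI compiler (the image name is a placeholder):

singularity exec $VSC_SCRATCH/<image_name>.sif mpicc mpi_hello.c -o mpi_hello\n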

                  Example MPI job script:
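
A minimal sketch of such a job script, using the mympirun tool recommended elsewhere in this documentation and launching MPI outside the container (resources, image name and executable are placeholders):

#!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=01:00:00\n\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR\nmympirun singularity exec $VSC_SCRATCH/<image_name>.sif ./mpi_hello\n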

                  "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

                  The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

                  As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

                  In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

                  In order to prepare things, make a teaching request by contacting the HPC-UGent team with the following information (explained further below):

                  • Title and nickname
                  • Start and end date for your course or training
                  • VSC-ids of all teachers/trainers
                  • Participants based on UGent Course Code and/or list of VSC-ids
                  • Optional information
                    • Additional storage requirements
                      • Shared folder
                      • Groups folder for collaboration
                      • Quota
                    • Reservation for resource requirements beyond the interactive cluster
                    • Ticket number for specific software needed for your course/training
                    • Details for a custom Interactive Application in the webportal

                  In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

                  Please make these requests well in advance, several weeks before the start of your course/workshop.

                  "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

                  The title of the course or training can be used in e.g. reporting.

                  The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

                  When choosing the nickname, try to make it unique, but this is not enforced nor checked.

                  "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

                  The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

                  The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

                  • Course group and subgroups will be deactivated
                  • Residual data in the course directories will be archived or deleted
                  • Custom Interactive Applications will be disabled
                  "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also members of this group).

This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members, e.g. they have read/write access to specific folders, can manage subgroups, etc.

Provide us with a list of all the VSC-ids of the teachers or trainers to identify the moderators.

                  "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

                  "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

                  Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

                  The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

                  Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

A course group will be automatically created for your course, with all VSC accounts of registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

                  "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

(Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as members. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

                  "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

                  For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

                  This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

                  Every course directory will always contain the folders:

                  • input
                    • ideally suited to distribute input data such as common datasets
                    • moderators have read/write access
                    • group members (students) only have read access
                  • members
• this directory contains a personal folder for every student in your course: members/vsc<01234>
                    • only this specific VSC-id will have read/write access to this folder
                    • moderators have read access to this folder
                  "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

                  Optionally, we can also create these folders:

                  • shared
                    • this is a folder for sharing files between any and all group members
                    • all group members and moderators have read/write access
• beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
                  • groups
                    • a number of groups/group_<01> folders are created under the groups folder
                    • these folders are suitable if you want to let your students collaborate closely in smaller groups
                    • each of these group_<01> folders are owned by a dedicated group
                    • teachers are automatically made moderators of these dedicated groups
• moderators can populate these groups with the VSC-ids of group members in the VSC accountpage, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
                    • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

                  If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

                  • shared: yes
                  • subgroups: <number of (sub)groups>
                  "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

There are 4 quota settings that you can choose in your teaching request in case the defaults are not sufficient:

• the overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
• the member quota (default: 5 GB volume and 10k files) applies per student/participant

Course data usage does not count towards any other quota (like VO quota). It is solely governed by these settings.

                  "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for the next course.

                  "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

                  We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

                  Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

                  Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

                  Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

                  "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

                  In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

                  We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

                  Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

                  "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

                  HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

                  A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

                  If you would like this for your course, provide more details in your teaching request, including:

                  • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

                  • which cluster you want to use

                  • how many nodes/cores/GPUs are needed

                  • which software modules you are loading

                  • custom code you are launching (e.g. autostart a GUI)

                  • required environment variables that you are setting

                  • ...

                  We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

                  A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

                  "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore, so since 2021 the HPC-UGent infrastructure no longer uses Torque in the backend, in favour of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept however, to avoid researchers having to learn other commands to submit and manage jobs.

                  "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

                  Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

                  "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

Jobcli is a Python library that was developed by the HPC-UGent team to make it possible for the HPC-UGent infrastructure to use a Torque frontend and a Slurm backend. In addition to that, it adds some extra options to the Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

                  "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

Adding --help to a Torque command when using it on the HPC-UGent infrastructure will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

                  For example:

                  $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

                  "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

                  Adding --dryrun to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. Using --dryrun will not actually execute the Slurm backend command.

                  See also the examples below.

                  "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

                  Similarly to --dryrun, adding --debug to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. However in contrast to --dryrun, using --debug will actually run the Slurm backend command.

                  See also the examples below.

                  "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

The following examples illustrate how the --dryrun and --debug options work, using an example job script.

                  example.sh:

#!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
                  "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

                  Running the following command:

                  $ qsub --dryrun example.sh -N example\n

                  will generate this output:

Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc40000/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#!/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque options into Slurm options. For example, the job name is the one we specified with the -N option in the command.

With this dryrun, you can see that the only changes were made to the header; the job script itself is not changed at all. If the job script uses any PBS-related constructs, like $PBS_JOBID, they are retained. Slurm is configured on the HPC-UGent infrastructure such that common PBS_* environment variables are defined in the job environment, next to their Slurm equivalents.

                  "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

                  Similarly to the --dryrun example, we start by running the following command:

                  $ qsub --debug example.sh -N example\n

                  which generates this output:

                  DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
                  The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

                  "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

                  Below is a list of the most common and useful directives.
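
As an illustration, a job script header combining several of the directives from the table below (all values are illustrative):

#!/bin/bash\n#PBS -N galaxies1234\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#PBS -m be\n#PBS -M me@mymail.be\n\ncd $PBS_O_WORKDIR\n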

| Option | System type | Description |
| --- | --- | --- |
| -k | All | Send \"stdout\" and/or \"stderr\" to your home directory when the job runs: #PBS -k o or #PBS -k e or #PBS -koe |
| -l | All | Precedes a resource request, e.g., processors, wallclock |
| -M | All | Send an e-mail message to an alternative e-mail address: #PBS -M me@mymail.be |
| -m | All | Send an e-mail when a job begins execution and/or ends or aborts: #PBS -m b or #PBS -m be or #PBS -m ba |
| mem | Shared Memory | Specifies the amount of memory you need for a job: #PBS -l mem=90gb |
| mpiprocs | Clusters | Number of processes per node on a cluster. This should equal the number of processors on a node in most cases: #PBS -l mpiprocs=4 |
| -N | All | Give your job a unique name: #PBS -N galaxies1234 |
| -ncpus | Shared Memory | The number of processors to use for a shared memory job: #PBS -l ncpus=4 |
| -r | All | Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen: #PBS -r n or #PBS -r y |
| select | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive: #PBS -l select=2 |
| -V | All | Make sure that the environment in which the job runs is the same as the environment in which it was submitted: #PBS -V |
| Walltime | All | The maximum time a job can run before being stopped. If not used, a default of a few minutes is applied. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS: #PBS -l walltime=12:00:00 |
"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

                  TORQUE-related environment variables in batch job scripts.

                  # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

                  IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

                  When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.
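
For example, a minimal job script fragment that uses a few of the variables described in the table below:

cd $PBS_O_WORKDIR                                              # go to the directory from which the job was submitted\necho \"Job $PBS_JOBID ($PBS_JOBNAME) runs in queue $PBS_QUEUE\"\necho \"Nodes assigned to this job:\"\ncat $PBS_NODEFILE\n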

| Variable | Description |
| --- | --- |
| PBS_ENVIRONMENT | set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job |
| PBS_JOBID | the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat |
| PBS_JOBNAME | the job name supplied by the user |
| PBS_NODEFILE | the name of the file that contains the list of the nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count the nodes, etc. |
| PBS_QUEUE | the name of the queue from which the job is executed |
| PBS_O_HOME | value of the HOME variable in the environment in which qsub was executed |
| PBS_O_LANG | value of the LANG variable in the environment in which qsub was executed |
| PBS_O_LOGNAME | value of the LOGNAME variable in the environment in which qsub was executed |
| PBS_O_PATH | value of the PATH variable in the environment in which qsub was executed |
| PBS_O_MAIL | value of the MAIL variable in the environment in which qsub was executed |
| PBS_O_SHELL | value of the SHELL variable in the environment in which qsub was executed |
| PBS_O_TZ | value of the TZ variable in the environment in which qsub was executed |
| PBS_O_HOST | the name of the host upon which the qsub command is running |
| PBS_O_QUEUE | the name of the original queue to which the job was submitted |
| PBS_O_WORKDIR | the absolute path of the current working directory of the qsub command. This is the most useful variable: use it in every job script. The first thing you do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts the job in your $HOME directory |
| PBS_VERSION | version number of TORQUE, e.g., TORQUE-2.5.1 |
| PBS_MOMPORT | active port for the mom daemon |
| PBS_TASKNUM | number of tasks requested |
| PBS_JOBCOOKIE | job cookie |
| PBS_SERVER | server running TORQUE |
"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

                  Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this in the subsections below.

                  "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

                  When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

Even if your software is able to use multiple cores, maybe there is no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
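
A minimal sketch of such a scaling test for an OpenMP-threaded program (my_program is a placeholder; MPI programs would need a varying number of processes via mpirun/mympirun instead):

for n in 1 2 4 8 16; do\n    export OMP_NUM_THREADS=$n                        # only affects OpenMP-threaded programs\n    /usr/bin/time -f \"$n cores: %e seconds\" ./my_program\ndone\n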

                  Other reasons why using more cores may not lead to a (significant) speedup include:

                  • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the amount of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program in too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

• Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program can not be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour (a worked version of this example is given below this list). So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload.

                  • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, 1 thread/process will need to wait until the other one is finished using that resource. When each thread uses the same resource, it will definitely run slower than if it doesn't need to wait for other threads to finish.

• Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that in Python threads are implemented in a way that multiple threads can not run at the same time, due to the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can do, even though they are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

                  • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

                  • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
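
As a worked version of the Amdahl's Law example above: with a parallelizable fraction p = 19/20 = 0.95, the theoretical speedup on n cores is S(n) = 1 / ((1 - p) + p/n). Even with an unlimited number of cores, S(n) approaches 1 / (1 - p) = 20, so the execution time never drops below 20 hours / 20 = 1 hour.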

                  More info on running multi-core workloads on the HPC-UGent infrastructure can be found here.

                  "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

                  When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

                  Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist some libraries that do this for you.

                  Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

                  An example of how you can make beneficial use of multiple nodes can be found here.

                  You can also use MPI in Python, some useful packages that are also available on the HPC are:

                  • mpi4py
                  • Boost.MPI

                  We advise to maximize core utilization before considering using multiple nodes. Our infrastructure has clusters with a lot of cores per node so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software we strongly advise to use our mympirun tool.

                  "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

                  If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

                  If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

                  "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

If your job output contains an error message similar to this:

                  =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

This occurs when your job did not complete within the requested walltime. See the section on Specifying Walltime for more information about how to request the walltime.

                  "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

Sometimes a job hangs at some point or stops writing to the disk. These errors are usually related to quota usage. You may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) to be able to write to the disk, and then resubmit the jobs.

Another option is to request extra quota for your VO from the VO moderator(s). See the section on Pre-defined user directories and Pre-defined quotas for more information about quotas and how to use the storage endpoints in an efficient way.

                  "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

                  If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

                  If you have errors that look like:

                  vsc40000@login.hpc.ugent.be: Permission denied\n

                  or you are experiencing problems with connecting, here is a list of things to do that should help:

                  1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

                  2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

                  3. Please double/triple check your VSC login ID. It should look something like vsc40000: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

4. Did you previously connect to the HPC from another machine, but are you now using a different machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

                  5. Make sure you are using the private key (not the public key) when trying to connect: If you followed the manual, the private key filename should end in .ppk (not in .pub).

                  6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

                  7. Please do not use someone else's private keys. You must never share your private key, they're called private for a good reason.

                  If you are using PuTTY and get this error message:

                  server unexpectedly closed network connection\n

                  it is possible that the PuTTY version you are using is too old and doesn't support some required (security-related) features.

                  Make sure you are using the latest PuTTY version if you are encountering problems connecting (see Get PuTTY). If that doesn't help, please contact hpc@ugent.be.

                  If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@ugent.be and include the following information:

                  Please create a log file of your SSH session by following the steps in this article and include it in the email.

                  "}, {"location": "troubleshooting/#change-putty-private-key-for-a-saved-configuration", "title": "Change PuTTY private key for a saved configuration", "text": "
                  1. Open PuTTY

                  2. Single click on the saved configuration

                  3. Then click Load button

                  4. Expand SSH category (on the left panel) clicking on the \"+\" next to SSH

                  5. Click on Auth under the SSH category

                  6. On the right panel, click Browse button

                  7. Then search your private key on your computer (with the extension \".ppk\")

                  8. Go back to the top of category, and click Session

                  9. On the right panel, click on Save button

                  "}, {"location": "troubleshooting/#check-whether-your-private-key-in-putty-matches-the-public-key-on-the-accountpage", "title": "Check whether your private key in PuTTY matches the public key on the accountpage", "text": "

Follow the instructions in Change PuTTY private key for a saved configuration until item 5, then:

                  1. Single click on the textbox containing the path to your private key, then select all text (push Ctrl + a ), then copy the location of the private key (push Ctrl + c)

                  2. Open PuTTYgen

                  3. Enter menu item \"File\" and select \"Load Private key\"

                  4. On the \"Load private key\" popup, click in the textbox next to \"File name:\", then paste the location of your private key (push Ctrl + v), then click Open

                  5. Make sure that your Public key from the \"Public key for pasting into OpenSSH authorized_keys file\" textbox is in your \"Public keys\" section on the accountpage https://account.vscentrum.be. (Scroll down to the bottom of \"View Account\" tab, you will find there the \"Public keys\" section)

                  "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

                  If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

                  You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

                  - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

                  Do not click \"Yes\" until you verified the fingerprint. Do not press \"No\" in any case.

                  If the fingerprint matches, click \"Yes\".

                  If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@ugent.be.

Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after it should be identical.

                  If you use X2Go client, you might get one of the following fingerprints:

                  • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
                  • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
                  • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c

                  If you get a message \"Host key for server changed\", do not click \"No\" until you verified the fingerprint.

                  If the fingerprint matches, click \"No\", and in the next pop-up screen (\"if you accept the new host key...\"), press \"Yes\".

                  If it doesn't, or you are in doubt, take a screenshot, press \"Yes\" and contact hpc@ugent.be.

                  "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

                  If you get errors like:

                  $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

                  or

                  sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

                  It's probably because you transferred the files from a Windows computer. See the section about dos2unix in Linux tutorial to fix this error.
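
A quick fix, assuming the dos2unix tool described in the Linux tutorial is available:

dos2unix fibo.pbs\n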

                  "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "

The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

                  Make sure the fingerprint in the alert matches one of the following:

                  - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

If it does, type yes (or press Yes, depending on your client); if it doesn't, please contact support: hpc@ugent.be.

                  Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

If you use X2Go, you might get a different fingerprint; in that case, make sure the fingerprint displayed is one of the following:

                  • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
                  • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
                  • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c
                  "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

                  To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

                  Note

                  Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

                  "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

If your program fails with a memory-related issue, there is a good chance it hit the memory limit, and you should request more memory for your job.

                  Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.
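
For example, a minimal sketch of a job script that simply reports this limit could look like this:

#!/bin/bash\n# print the virtual memory limit (in kilobytes) available to this job\nulimit -v\n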

                  "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

See Generic resource requirements to set memory and other requirements; see Specifying memory requirements to fine-tune the amount of memory you request.
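
As an illustration only (the value of 8 GB is hypothetical, and the exact directive to use is covered in the sections referenced above), a PBS-style memory request in a job script might look like:

#PBS -l mem=8gb\n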

                  "}, {"location": "troubleshooting/#module-conflicts", "title": "Module conflicts", "text": "

                  Modules that are loaded together must use the same toolchain version or common dependencies. In the following example, we try to load a module that uses the intel-2018a toolchain together with one that uses the intel-2017a toolchain:

                  $ module load Python/2.7.14-intel-2018a\n$ module load  HMMER/3.1b2-intel-2017a\nLmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). \nYou should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. \nUse 'ml avail HMMER' to get an overview of the available versions.\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be \nWhile processing the following module(s):\n\n    Module fullname          Module Filename\n    ---------------          ---------------\n    HMMER/3.1b2-intel-2017a  /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua\n

                  This resulted in an error because we tried to load two modules with different versions of the intel toolchain.

                  To fix this, check if there are other versions of the modules you want to load that have the same version of common dependencies. You can list all versions of a module with module avail: for HMMER, this command is module avail HMMER.
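
For example, the conflict above could be resolved by picking an HMMER build that uses the same intel-2018a toolchain as the Python module (the HMMER version shown here is hypothetical; use whatever module avail HMMER lists on the system):

$ module load Python/2.7.14-intel-2018a\n$ module load HMMER/3.1b2-intel-2018a   # hypothetical: any HMMER build with the -intel-2018a suffix\n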

                  As a rule of thumb, toolchains in the same row are compatible with each other:

GCCcore-13.2.0 GCC-13.2.0 gfbf-2023b/gompi-2023b foss-2023b GCCcore-13.2.0 intel-compilers-2023.2.1 iimkl-2023b/iimpi-2023b intel-2023b
GCCcore-12.3.0 GCC-12.3.0 gfbf-2023a/gompi-2023a foss-2023a GCCcore-12.3.0 intel-compilers-2023.1.0 iimkl-2023a/iimpi-2023a intel-2023a
GCCcore-12.2.0 GCC-12.2.0 gfbf-2022b/gompi-2022b foss-2022b GCCcore-12.2.0 intel-compilers-2022.2.1 iimkl-2022b/iimpi-2022b intel-2022b
GCCcore-11.3.0 GCC-11.3.0 gfbf-2022a/gompi-2022a foss-2022a GCCcore-11.3.0 intel-compilers-2022.1.0 iimkl-2022a/iimpi-2022a intel-2022a
GCCcore-11.2.0 GCC-11.2.0 gfbf-2021b/gompi-2021b foss-2021b GCCcore-11.2.0 intel-compilers-2021.4.0 iimkl-2021b/iimpi-2021b intel-2021b
GCCcore-10.3.0 GCC-10.3.0 gfbf-2021a/gompi-2021a foss-2021a GCCcore-10.3.0 intel-compilers-2021.2.0 iimkl-2021a/iimpi-2021a intel-2021a
GCCcore-10.2.0 GCC-10.2.0 gfbf-2020b/gompi-2020b foss-2020b GCCcore-10.2.0 iccifort-2020.4.304 iimkl-2020b/iimpi-2020b intel-2020b

                  Example

We could load the following modules together:

                  ml XGBoost/1.7.2-foss-2022a\nml scikit-learn/1.1.2-foss-2022a\nml cURL/7.83.0-GCCcore-11.3.0\nml JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0\n

                  Another common error is:

                  $ module load cluster/donphan\nLmod has detected the following error: A different version of the 'cluster' module is already loaded (see output of 'ml').\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be\n

                  This is because there can only be one cluster module active at a time. The correct command is module swap cluster/donphan. See also Specifying the cluster on which to run.

                  "}, {"location": "troubleshooting/#illegal-instruction-error", "title": "Illegal instruction error", "text": ""}, {"location": "troubleshooting/#running-software-that-is-incompatible-with-host", "title": "Running software that is incompatible with host", "text": "

                  When running software provided through modules (see Modules), you may run into errors like:

                  $ module swap cluster/donphan\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n\n$ module load Python/3.10.8-GCCcore-12.2.0\n$ python\nPlease verify that both the operating system and the processor support\nIntel(R) MOVBE, F16C, FMA, BMI, LZCNT and AVX2 instructions.\n

                  or errors like:

                  $ python\nIllegal instruction\n

                  When we swap to a different cluster, the available modules change so they work for that cluster. That means that if the cluster and the login nodes have a different CPU architecture, software loaded using modules might not work.

If you want to test software on the login nodes, make sure the cluster/doduo module is loaded (with module swap cluster/doduo, see Specifying the cluster on which to run), since the login nodes and the doduo workernodes have the same CPU architecture.

If modules are already loaded, and then we swap to a different cluster, all our modules will get reloaded. This means that all current modules will be unloaded and then loaded again, so they'll work on the newly loaded cluster. Here's an example of what that looks like:

                  $ module load Python/3.10.8-GCCcore-12.2.0\n$ module swap cluster/donphan\n\nDue to MODULEPATH changes, the following have been reloaded:\n  1) GCCcore/12.2.0                   8) binutils/2.39-GCCcore-12.2.0\n  2) GMP/6.2.1-GCCcore-12.2.0         9) bzip2/1.0.8-GCCcore-12.2.0\n  3) OpenSSL/1.1                     10) libffi/3.4.4-GCCcore-12.2.0\n  4) Python/3.10.8-GCCcore-12.2.0    11) libreadline/8.2-GCCcore-12.2.0\n  5) SQLite/3.39.4-GCCcore-12.2.0    12) ncurses/6.3-GCCcore-12.2.0\n  6) Tcl/8.6.12-GCCcore-12.2.0       13) zlib/1.2.12-GCCcore-12.2.0\n  7) XZ/5.2.7-GCCcore-12.2.0\n\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n

                  This might result in the same problems as mentioned above. When swapping to a different cluster, you can run module purge to unload all modules to avoid problems (see Purging all modules).
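
For example, a clean way to switch clusters could look like this (the cluster and module names are just illustrations):

$ module purge\n$ module swap cluster/donphan\n$ module load Python/3.10.8-GCCcore-12.2.0\n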

                  "}, {"location": "troubleshooting/#multi-job-submissions-on-a-non-default-cluster", "title": "Multi-job submissions on a non-default cluster", "text": "

                  When using a tool that is made available via modules to submit jobs, for example Worker, you may run into the following error when targeting a non-default cluster:

                  $  wsub\n/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction     (core dumped) ${PERL} ${DIR}/../lib/wsub.pl \"$@\"\n

                  When executing the module swap cluster command, you are not only changing your session environment to submit to that specific cluster, but also to use the part of the central software stack that is specific to that cluster. In the case of the Worker example above, the latter implies that you are running the wsub command on top of a Perl installation that is optimized specifically for the CPUs of the workernodes of that cluster, which may not be compatible with the CPUs of the login nodes, triggering the Illegal instruction error.

                  The cluster modules are split up into several env/* \"submodules\" to help deal with this problem. For example, by using module swap env/slurm/donphan instead of module swap cluster/donphan (starting from the default environment, the doduo cluster), you can update your environment to submit jobs to donphan, while still using the software installations that are specific to the doduo cluster (which are compatible with the login nodes since the doduo cluster workernodes have the same CPUs). The same goes for the other clusters as well of course.

                  Tip

                  To submit a Worker job to a specific cluster, like the donphan interactive cluster for instance, use:

                  $ module swap env/slurm/donphan \n
                  instead of
                  $ module swap cluster/donphan \n

We recommend running a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state, since only having a different env/slurm module loaded can also lead to some surprises if you're not paying close attention.

                  "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

                  All the HPC clusters run some variant of the \"Red Hat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

                  vsc40000@ln01[203] $\n

                  When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

Command Description
ls Shows you a list of files in the current directory
cd Change current working directory
rm Remove file or directory
echo Prints its parameters to the screen
nano Text editor

                  Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

                  $ echo This is a test\nThis is a test\n

                  Important here is the \"$\" sign in front of the first line. This should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

More commands will be used in the rest of this text, and will be explained then if necessary. If not, you can usually get more information about a command, say the command \"ls\", by trying one of the following:

                  $ ls --help \n$ man ls\n$ info ls\n

                  (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

                  "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

                  In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

Another very common scripting language is shell scripting.

Typically, the examples that follow have one command per line, although it is possible to put multiple commands on one line. A very simple example of a script may be:

                  echo \"Hello! This is my hostname:\" \nhostname\n

                  You can type both lines at your shell prompt, and the result will be the following:

                  $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\ngligar07.gastly.os\n

Suppose we want to call this script \"foo\". Open a new file named \"foo\" and edit it with your favourite editor:

                  nano foo\n

                  or use the following commands:

                  echo \"echo 'Hello! This is my hostname:'\" > foo\necho hostname >> foo\n

The easiest way to run a script is to start the interpreter and pass the script to it as a parameter. In the case of our script, the interpreter may either be \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

                  $ bash foo\nHello! This is my hostname:\ngligar07.gastly.os\n

                  Congratulations, you just created and started your first shell script!

A more advanced way of executing your shell scripts is by making them executable on their own, so without invoking the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to tell it in some way. The easiest way is by using the so-called \"shebang\" notation, explicitly created for this purpose: you put the following line on top of your shell script \"#!/path/to/your/interpreter\".

                  You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

                  $ which bash\n/bin/bash\n

                  We edit our script and change it with this information:

                  #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

                  Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

                  Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

                  chmod +x foo\n

                  Now you can start your script by simply executing it:

                  $ ./foo\nHello! This is my hostname:\ngligar07.gastly.os\n

                  The same technique can be used for all other scripting languages, like Perl and Python.

                  Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

                  "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg A process running in the background will be processed in the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Modify properties for users"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

                  To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

                  Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

                  Through this web portal, you can:

                  • browse through the files & directories in your VSC account, and inspect, manage or change them;

                  • consult active jobs (across all HPC-UGent Tier-2 clusters);

                  • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

                  • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

                  • open a terminal session directly in your web browser;

                  More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

                  "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

                  All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

                  "}, {"location": "web_portal/#login", "title": "Login", "text": "

                  When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

                  "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

                  The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

                  Please click \"Authorize\" here.

This request will only be made once; you should not see it again afterwards.

                  "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

                  Once logged in, you should see this start page:

This page includes a menu bar at the top. The buttons on the left provide access to the different features supported by the web portal; on the top right you will find a Help menu, your VSC account name, and a Log Out button. Below that is the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

                  If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

                  "}, {"location": "web_portal/#features", "title": "Features", "text": "

                  We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

                  "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

                  Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

                  The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

                  Here you can:

                  • Click a directory in the tree view on the left to open it;

                  • Use the buttons on the top to:

                    • go to a specific subdirectory by typing in the path (via Go To...);

                    • open the current directory in a terminal (shell) session (via Open in Terminal);

                    • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

                    • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

                    • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

                    • show the owner and permissions in the file listing (via Show Owner/Mode);

                  • Double-click a directory in the file listing to open that directory;

                  • Select one or more files and/or directories in the file listing, and:

                    • use the View button to see the contents (use the button at the top right to close the resulting popup window);

                    • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

                    • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

                    • use the Download button to download the selected files and directories from your VSC account to your local workstation;

                    • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

                    • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

                    • use the Delete button to (permanently!) remove the selected files and directories;

                  For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

                  "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

                  Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

                  For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

                  "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

                  To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

                  A new browser tab will be opened that shows all your current queued and/or running jobs:

                  You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

                  Jobs that are still queued or running can be deleted using the red button on the right.

                  Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

                  "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

                  To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

                  This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

                  You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

                  Don't forget to actually submit your job to the system via the green Submit button!

                  "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

                  In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

                  "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

                  Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

                  Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

                  To exit the shell session, type exit followed by Enter and then close the browser tab.

Note that you cannot access a shell session after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).
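
For example, a minimal screen workflow inside such a shell session might look like this (the session name \"mywork\" is arbitrary):

$ screen -S mywork    # start a named screen session\n# ... run your commands, then detach with Ctrl-a d ...\n$ screen -r mywork    # later, in a new shell session, resume where you left off\n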

                  "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

                  Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

                  To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

                  "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

                  See dedicated page on Jupyter notebooks

                  "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

                  In case of problems with the web portal, it could help to restart the web server running in your VSC account.

                  You can do this via the Restart Web Server button under the Help menu item:

                  Of course, this only affects your own web portal session (not those of others).

                  "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
                  • ABAQUS for CAE course
                  "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

                  1. A graphical remote desktop that works well over low bandwidth connections.

                  2. Copy/paste support from client to server and vice-versa.

                  3. File sharing from client to server.

                  4. Support for sound.

                  5. Printer sharing from client to server.

                  6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

                  "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

                  X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. This section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

                  "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

After installing the X2Go client, just start it. When you launch the client for the first time, it will start the new session dialogue automatically.

                  There are two ways to connect to the login node:

• Option A: A direct connection to \"login.hpc.ugent.be\". This is the simpler option; the system will decide which login node to use based on a load-balancing algorithm.

• Option B: You can use the node \"login.hpc.ugent.be\" as an SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

                  "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

This is the easier way to set up X2Go: a direct connection to the login node.

1. Include a session name. This will help you to identify the session if you have more than one; you can choose any name (in our example \"HPC login node\").

                  2. Set the login hostname (In our case: \"login.hpc.ugent.be\")

3. Set the Login name. In the example this is \"vsc40000\", but you must change it to your own VSC account.

                  4. Set the SSH port (22 by default).

5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

                    1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

You should look for your private SSH key generated by puttygen and exported in \"OpenSSH\" format in Generating a public/private key pair (by default \"id_rsa\", and not the \".ppk\" version). Choose that file and click Open.

                  6. Check \"Try autologin\" option.

7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, like the Terminal or Internet browser (you can change this option later directly from the X2Go session tab if you want).

                    1. [optional]: Set a single application like Terminal instead of XFCE desktop. This option is much better than PuTTY because the X2Go client includes copy-pasting support.

                  8. [optional]: Change the session icon.

                  9. Click the OK button after these changes.

                  "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

                  This option is useful if you want to resume a previous session or if you want to set explicitly the login node to use. In this case you should include a few more options. Use the same Option A setup but with these changes:

                  1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

                  2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"gligar07.gastly.os\")

                  3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

                    1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

                    2. Set Host to \"login.hpc.ugent.be\" within \"Proxy Server\" section as well.

3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key in the \"RSA/DSA key\" field within \"Proxy Server\", as you did for the server configuration (the \"RSA/DSA key\" field must be set in both sections).

                    4. Click the OK button after these changes.

                  "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. It is possible to terminate a session if you log out from the current open session or if you click on the \"shutdown\" button from X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

                  X2Go will keep the session open for you (but only if the login node is not rebooted).

                  "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

                  hostname\n

This will give you the full hostname of the login node (like \"gligar07.gastly.os\", but the hostname in your situation may be slightly different). You should set the same name to resume the session the next time. Just add this full hostname in the \"login hostname\" field of your X2Go session (see Option B: use the login node as SSH proxy).

                  "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

                  If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), It is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select the session and terminate it. Then finish the session, choose again XFCE session (or whatever you use), then you should have your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

                  "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

                  The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

                  To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

                  Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

                  After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

                  "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

                  TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

                  Loads MNIST datasets and trains a neural network to recognize hand-written digits.

                  Runtime: ~1 min. on 8 cores (Intel Skylake)

                  See https://www.tensorflow.org/tutorials/quickstart/beginner

                  "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

                  Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

These skills are important when working on the HPC-UGent infrastructure, which operates on Red Hat Enterprise Linux. For more information see introduction to HPC.

                  The guide aims to make you familiar with the Linux command line environment quickly.

                  The tutorial goes through the following steps:

                  1. Getting Started
                  2. Navigating
                  3. Manipulating files and directories
                  4. Uploading files
                  5. Beyond the basics

                  Do not forget Common pitfalls, as this can save you some troubleshooting.

                  "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
                  • More on the HPC infrastructure.
                  • Cron Scripts: run scripts automatically at periodically fixed times, dates, or intervals.
                  "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

                  Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

To redirect output to (and input from) files, you can use the redirection operators: >, >>, &>, and <.

                  First, it's important to make a distinction between two different output channels:

                  1. stdout: standard output channel, for regular output

                  2. stderr: standard error channel, for errors and warnings

                  "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

                  > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

                  $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

                  >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

                  $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

< feeds the contents of a file to a command's standard input, as if you had typed it into the terminal. command < somefile.txt is largely equivalent to cat somefile.txt | command.

One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure, you might save a list of all the files you're interested in and then read in that file list when you are done:

$ find . -name '*.txt' > files\n$ xargs grep banana < files\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

                  To redirect the stderr output (warnings, messages), you can use 2>, just like >

                  $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

                  To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

                  $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

                  Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

                  $ ls | wc -l\n    42\n

                  A common pattern is to pipe the output of a command to less so you can examine or search the output:

                  $ find . | less\n

                  Or to look through your command history:

                  $ history | less\n

                  You can put multiple pipes in the same line. For example, which cp commands have we run?

                  $ history | grep cp | less\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

                  The shell will expand certain things, including:

                  1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

                  2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

                  3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

4. square brackets can be used to list a number of options for a particular character; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.
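
A small demo combining these expansions (the file names are hypothetical):

$ ls t*txt             # all files starting with 't' and ending in 'txt'\n$ echo \"I am $USER\"    # environment variable expansion\n$ ls *.[oe][0-9]       # matches e.g. anything.o5, but not anything.o52\n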

                  "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

                  ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

                  $ ps -fu $USER\n

                  To see all the processes:

                  $ ps -elf\n

                  To see all the processes in a forest view, use:

                  $ ps auxf\n

                  The last two will spit out a lot of data, so get in the habit of piping it to less.

                  pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

pgrep will find all the processes whose name matches a pattern and print their process IDs (PIDs). This is useful for passing those PIDs on to other commands, as we will see in the next section.
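
For example, to list the PIDs of all of your own processes whose name matches \"python\" (the process name here is just an illustration):

$ pgrep -u $USER python\n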

                  "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill will send a message (SIGTERM by default) to the process to ask it to stop.

                  $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

                  Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignored your signal, you can send it a different message (SIGKILL) which the OS will use to unceremoniously terminate the process:

                  $ kill -9 1234\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

                  top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

                  To see only your processes, type u and your username after starting top, (you can also do this with top -u $USER ). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

                  There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

                  To exit top, use q (for 'quit').

                  For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

                  ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

                  $ ulimit -a\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

                  To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

                  $ wc example.txt\n      90     468     3189   example.txt\n

                  The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

                  To only count the number of lines, use wc -l:

                  $ wc -l example.txt\n      90    example.txt\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

                  grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

                  $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

                  grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

cut is used to pull fields out of files or piped streams. It's a useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV (comma-separated values, so -d ',': delimited by ,) file, you can use the following:

                  $ cut -f 1 -d ',' mydata.csv\n
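
Combining the two, a sketch that extracts the second field of every matching line might look like this (the file name and field choice are hypothetical):

$ grep banana fruit_bowl1.txt | cut -f 2 -d ','\n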

                  "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

                  sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

                  $ sed 's/oldtext/newtext/g' myfile.txt\n

By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!

                  "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

                  awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

                  First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

                  $ awk '{print $4}' mydata.dat\n

                  You can use -F ':' to change the delimiter (F for field separator).
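
For example, to print the first field (the username) of each line in the colon-delimited /etc/passwd file:

$ awk -F ':' '{print $1}' /etc/passwd\n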

                  The next example is used to sum numbers from a field:

                  $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do the same. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

                  However, there are some rules you need to abide by.

                  Here is a very detailed guide should you need more information.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can simply copy-paste it; you need not worry about it further. It is, however, very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

                  #!/bin/sh\n
                  #!/bin/bash\n
                  #!/usr/bin/env bash\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

                  Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

if [ -d directory ] && [ -f file ]\nthen\n    mv file directory\nfi\n

Or you only want to do something if a file exists:

if [ -f filename ]\nthen\n    echo \"it exists\"\nfi\n
                  Or only if a certain variable is bigger than one:
                  if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or after a semicolon). It is best to just copy this example and modify it.

                  In the initial example, we used -d to test if a directory existed. There are several more checks.

Another useful example is to test if a variable contains a value (so it's not empty):

                  if [ -z $PBS_ARRAYID ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

                   The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty or not set.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

                  Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

                  Let's look at a simple example:

                  for i in 1 2 3\ndo\necho $i\ndone\n
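
                   A sketch of a more typical use case: running the same command on a set of files (here gzip on all .txt files in the current directory; the pattern is just an example):

                   for f in *.txt\ndo\ngzip \"$f\"\ndone\n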

                  "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

                   Subcommands are used all the time in shell scripts. They store the output of a command in a variable, which can later be used in, for example, a conditional or a loop.

                  CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

                  In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
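
                   As a sketch of using a subcommand result later on, for example in a conditional (the threshold of 100 is arbitrary):

                   NUMFILES=$(ls | wc -l)\nif [ $NUMFILES -gt 100 ]\nthen\necho \"More than 100 files in $(pwd)\"\nfi\n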

                  "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

                  Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

                  Firstly a useful thing to know for debugging and testing is that you can run any command like this:

                   command > output.log 2>&1   # one single output file, both output and errors\n

                   If you add > output.log 2>&1 at the end of any command, it will combine stdout and stderr, writing both into a single file named output.log. Note that the order matters: the 2>&1 must come after > output.log.

                  If you want regular and error output separated you can use:

                  command > output.log 2> output.err  # errors in a separate file\n

                  this will write regular output to output.log and error output to output.err.

                  You can then look for the errors with less or search for specific text with grep.

                  In scripts, you can use:

                  set -e\n

                   This tells the shell to stop executing subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failed command would most likely cause the rest of the script to fail as well.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

                   Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds its exit code. A value of zero denotes successful completion; any other value signifies that something went wrong. An example use case:

                  command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

                  If you have certain commands executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

                  Examples include:

                  • modifying your $PS1 (to tweak your shell prompt)

                   • printing information about the current environment or job (echoing environment variables, etc.; a sketch is shown after this list)

                  • selecting a specific cluster to run on with module swap cluster/...
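
                   A minimal sketch of what such additions to $HOME/.bashrc could look like (the prompt format and the echoed variable are just examples; remember the recommendations below about testing changes first):

                   # show the current directory in the prompt\nexport PS1='\\w $ '\n# print some information about the environment\necho \"Running on $HOSTNAME\"\n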

                  Some recommendations:

                  • Avoid using module load statements in your $HOME/.bashrc file

                   • Don't directly edit your .bashrc file: if there's an error in it, you might not be able to log in again. To prevent that, use another file to test your changes, and only copy them over once you have tested them.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

                  When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
                  "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

                  The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

                   This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

                  #PBS -l nodes=1:ppn=1 # single-core\n

                  For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

                  #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

                  We intend to submit it on the long queue:

                  #PBS -q long\n

                  We request a total running time of 48 hours (2 days).

                  #PBS -l walltime=48:00:00\n

                  We specify a desired name of our job:

                  #PBS -N FreeSurfer_per_subject-time-longitudinal\n
                  This specifies mail options:
                  #PBS -m abe\n

                  1. a means mail is sent when the job is aborted.

                  2. b means mail is sent when the job begins.

                  3. e means mail is sent when the job ends.

                  Joins error output with regular output:

                  #PBS -j oe\n

                   All of these options can also be specified on the command line and will override any pragmas present in the script.
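
                   For example, a sketch of overriding the walltime for a single submission without editing the script (the 72-hour value is arbitrary):

                   $ qsub -l walltime=72:00:00 script.sh\n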

                  "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
                  1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

                  2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

                  3. How many files and directories are in /tmp?

                  4. What's the name of the 5th file/directory in alphabetical order in /tmp?

                  5. List all files that start with t in /tmp.

                  6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

                  7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

                  "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

                  This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

                  "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

                  If you receive an error message which contains something like the following:

                  No such file or directory\n

                  It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

                  Try and figure out the correct location using ls, cd and using the different $VSC_* variables.

                  "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

                  Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

                  $ cat some file\nNo such file or directory 'some'\n

                   Spaces are permitted; however, they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

                  $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

                  This is especially error-prone if you are piping results of find:

                  $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

                  This can be worked around using the -print0 flag:

                  $ find . -type f -print0 | xargs -0 cat\n...\n

                  But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

                  "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

                  If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

                  $ rm -r ~/$PROJETC/*\n
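
                   One defensive sketch (assuming the variable you meant to use is named $PROJECT) uses Bash's ${VAR:?} expansion, which makes the shell abort with an error instead of silently substituting an empty string:

                   $ rm -r ~/${PROJECT:?}/*   # fails with an error if PROJECT is unset or empty, instead of removing ~/*\n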

                  "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

                  A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

                  $ #rm -r ~/$POROJETC/*\n
                  Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

                  "}, {"location": "linux-tutorial/common_pitfalls/#copying-files-with-winscp", "title": "Copying files with WinSCP", "text": "

                   After copying files from a Windows machine, a file might look funny when you look at it on the cluster.

                  $ cat script.sh\n#!/bin/bash^M\n#PBS -l nodes^M\n...\n

                  Or you can get errors like:

                  $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

                   See the section on dos2unix to fix these errors.

                  "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
                  $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

                  Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

                  $ chmod +x script_name.sh\n

                  "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

                  If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

                  If you need help about a certain command, you should consult its so-called \"man page\":

                  $ man command\n

                   This will open the manual of this command. The manual contains a detailed explanation of all the options the command has. To exit the manual, press 'q'.

                  Don't be afraid to contact hpc@ugent.be. They are here to help and will do so for even the smallest of problems!

                  "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
                  1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

                  2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

                  3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

                  4. basic shell usage

                  5. Bash for beginners

                  6. MOOC

                   Please don't hesitate to contact hpc@ugent.be in case of questions or problems.

                  "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

                  To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

                  You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

                  Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

                  "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

                  To get help:

                  1. use the documentation available on the system, through the help, info and man commands (use q to exit).
                    help cd \ninfo ls \nman cp \n
                  2. use Google

                  3. contact hpc@ugent.be in case of problems or questions (even for basic things!)

                  "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

                   Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@ugent.be.

                  "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

                  The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

                  You use the shell by executing commands, and hitting <enter>. For example:

                  $ echo hello \nhello \n

                  You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

                  To go through previous commands, use <up> and <down>, rather than retyping them.

                  "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

                  A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

                  $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

                  "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

                  If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

                  "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

                  At the prompt we also have access to shell variables, which have both a name and a value.

                  They can be thought of as placeholders for things we need to remember.

                  For example, to print the path to your home directory, we can use the shell variable named HOME:

                  $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

                  This prints the value of this variable.

                  "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

                  There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

                  For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

                  $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

                  You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

                  $ env | sort | grep VSC\n

                   But we can also define our own. This is done with the export command (note: variable names are always written in all caps as a convention):

                  $ export MYVARIABLE=\"value\"\n

                  It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

                  If we then do

                  $ echo $MYVARIABLE\n

                   this will output value. Note that the quotes are not included; they were only used when defining the variable to escape potential spaces in the value.

                  "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

                  You can change what your prompt looks like by redefining the special-purpose variable $PS1.

                  For example: to include the current location in your prompt:

                  $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

                   Note that ~ is a short representation of your home directory.

                   To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

                  $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

                  "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

                  One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

                   This may lead to surprising results, for example (note the typo in the variable name used with cd):

                   $ export WORKDIR=/tmp/test\n$ cd $WORKIDR   # mistyped variable name, which expands to an empty string\n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

                  To understand what's going on here, see the section on cd below.

                  The moral here is: be very careful to not use empty variables unintentionally.

                  Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

                  The -e option will result in the script getting stopped if any command fails.

                  The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)
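
                   A minimal sketch of how this looks at the top of a job script (the $WORKDIR variable is just an example):

                   #!/bin/bash\nset -e -u\necho \"Working in $WORKDIR\"   # the script stops with an error here if WORKDIR is not defined\n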

                  More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

                  "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

                  If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

                  "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

                  Basic information about the system you are logged into can be obtained in a variety of ways.

                  We limit ourselves to determining the hostname:

                  $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

                  And querying some basic information about the Linux kernel:

                  $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

                  "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
                  • Print the full path to your home directory
                  • Determine the name of the environment variable to your personal scratch directory
                   • What's the name of the system you're logged into? Is it the same for everyone?
                  • Figure out how to print the value of a variable without including a newline
                  • How do you get help on using the man command?

                   The next chapter teaches you how to navigate.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

                  Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the HPC for a list of available locations.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#vo-storage", "title": "VO storage", "text": "

                  If you are a member of a (non-default) virtual organisation (VO), see section Virtual Organisations, you have access to additional directories (with more quota) on the data and scratch filesystems, which you can share with other members in the VO.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

                  Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

                   To figure out where your quota is being spent, the du (disk usage) command can come in useful:

                  $ du -sh test\n59M test\n

                  Do not (frequently) run du on directories where large amounts of data are stored, since that will:

                  1. take a long time

                  2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

                  Software is provided through so-called environment modules.

                  The most commonly used commands are:

                  1. module avail: show all available modules

                  2. module avail <software name>: show available modules for a specific software name

                  3. module list: show list of loaded modules

                  4. module load <module name>: load a particular module

                  More information is available in section Modules.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

                   To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

                  Detailed information is available in section submitting your job.
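
                   As a very small sketch (the resource values and the file name myjob.sh are placeholders; the linked section explains the details):

                   $ cat myjob.sh\n#!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\necho \"Hello from $HOSTNAME\"\n$ qsub myjob.sh\n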

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

                  Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

                  Hint: python -c \"print(sum(range(1, 101)))\"

                  • How many modules are available for Python version 3.6.4?
                  • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
                  • Which cluster modules are available?

                  • What's the full path to your personal home/data/scratch directories?

                  • Determine how large your personal directories are.
                  • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

                  Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands short to type.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

                  To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

                  $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

                  To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
                  $ cp source target\n

                  This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

                  $ cp -r sourceDirectory target\n

                  A last more complicated example:

                  $ cp -a sourceDirectory target\n

                   Here we used the same cp command, but with the -a (archive) option, which tells cp to copy the directory recursively while preserving timestamps, permissions and other attributes.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
                  $ mkdir directory\n

                  which will create a directory with the given name inside the current directory.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
                  $ mv source target\n

                   mv will move the source path to the destination path. This works for both directories and files.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

                  Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

                  $ rm filename\n
                   rm will remove a file (rm -rf directory will remove a given directory along with every file inside it). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

                  You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

                  $ rmdir directory\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

                  Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

                  1. User - a particular user (account)

                  2. Group - a particular group of users (may be user-specific group with only one member)

                  3. Other - other users in the system

                  The permission types are:

                  1. Read - For files, this gives permission to read the contents of a file

                   2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add files to or remove files from the directory.

                   3. Execute - For files, this gives permission to execute the file as though it were a script or program. For directories, it allows users to open the directory and look at the contents.

                  Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

                  $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

                   Here we see that articleTable.csv is a file (the line begins with -) that has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as for all other users (r-- and r--).

                   The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx). So that user can look into the directory and add or remove files. Users in the group mygroup can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users have no permissions on the directory at all (---), so they cannot even look inside it.

                  Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

                  $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

                   The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

                  You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

                  You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
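
                   A sketch of that approach (the name pattern and the permission change are arbitrary examples), combining find with xargs:

                   $ find Project_GoldenDragon -type f -name \"*.sh\" -print0 | xargs -0 chmod g+w\n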

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

                  However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

                  $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

                   This will give the user otheruser permission to write to Project_GoldenDragon.

                  Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

                  Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

                  See https://linux.die.net/man/1/setfacl for more information.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

                  Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

                  $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

                   Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

                  $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

                  Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

                  $ unzip myfile.zip\n

                  If we would like to make our own zip archive, we use zip:

                  $ zip myfiles.zip myfile1 myfile2 myfile3\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

                  Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

                  You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

                  $ tar -xf tarfile.tar\n

                  Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

                  $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

                  Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

                  # cp, ln: &lt;source(s)&gt; &lt;target&gt;\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: &lt;target&gt; &lt;source(s)&gt;\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

                   If you use tar with the source files first (i.e., directly after -cf), the first source file will be interpreted as the archive name and overwritten. You can control the order of arguments of tar if it helps you remember:

                  $ tar -c source1 source2 source3 -f tarfile.tar\n
                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
                  1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

                  2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

                  3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

                  4. Remove the another/test directory with a single command.

                  5. Rename test to test2. Move test2/hostname.txt to your home directory.

                  6. Change the permission of test2 so only you can access it.

                  7. Create an empty job script named job.sh, and make it executable.

                  8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

                   The next chapter is about uploading files, which is especially important when using the HPC infrastructure.

                  "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

                  This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories. A very important skill.

                  "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

                   To print the current directory, use pwd or $PWD:

                  $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

                  "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

                  A very basic and commonly used command is ls, which can be used to list files and directories.

                  In its basic usage, it just prints the names of files and directories in the current directory. For example:

                  $ ls\nafile.txt some_directory \n

                  When provided an argument, it can be used to list the contents of a directory:

                  $ ls some_directory \none.txt two.txt\n

                  A couple of commonly used options include:

                  • detailed listing using ls -l:

                    $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • To print the size information in human-readable form, use the -h flag:

                    $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • also listing hidden files using the -a flag:

                    $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • ordering files by the most recent change using -rt:

                    $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

                  If you try to use ls on a file that doesn't exist, you will get a clear error message:

                  $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
                  "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

                  To change to a different directory, you can use the cd command:

                  $ cd some_directory\n

                  To change back to the previous directory you were in, there's a shortcut: cd -

                   Using cd without an argument returns you to your home directory:

                  $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

                  "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

                  The file command can be used to inspect what type of file you're dealing with:

                  $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
                  "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

                   An absolute file path starts with / (or a variable whose value starts with /); this / is also called the root of the filesystem.

                  Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

                  A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

                  Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

                  There are two special relative paths worth mentioning:

                  • . is a shorthand for the current directory
                  • .. is a shorthand for the parent of the current directory

                  You can also use .. when constructing relative paths, for example:

                  $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
                  "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

                  Each file and directory has particular permissions set on it, which can be queried using ls -l.

                  For example:

                  $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

                   The -rw-rw-r-- specifies both the type of file (- for files, d for directories (see first character)), and the permissions for user/group/others:

                  1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
                   2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read/write permissions (no execute)
                  3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
                  4. the 3rd part r-- indicates that other users only have read permissions

                  The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

                  1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
                  2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

                  See also the chmod command later in this manual.

                  "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

                   find will crawl a series of directories and list files matching given criteria.

                  For example, to look for the file named one.txt:

                  $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

                   To look for files using incomplete names, you can use a wildcard *; note that you need to put the pattern in double quotes (or escape the *) to prevent Bash from expanding it (e.g., into afile.txt) before find gets to see it:

                  $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

                  A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
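
                   For example, a sketch that runs wc -l on each .txt file that is found (both the name pattern and the wc command are arbitrary choices here):

                   $ find . -name \"*.txt\" -exec wc -l {} \\;\n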

                  "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
                  • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
                  • When was your home directory created or last changed?
                  • Determine the name of the last changed file in /tmp.
                  • See how home directories are organised. Can you access the home directory of other users?

                  The next chapter will teach you how to interact with files and directories.

                  "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

                   To transfer files from and to the HPC, see the section about transferring files of the HPC manual.

                  "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

                  After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

                  For example, you may see an error when submitting a job script that was edited on Windows:

                  sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

                  To fix this problem, you should run the dos2unix command on the file:

                  $ dos2unix filename\n
                  "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

                   As we end up in the home directory when connecting, it would be convenient if we could easily access our data and scratch storage. To facilitate this we will create symlinks to them in our home directory. The following creates symbolic links (they're like \"shortcuts\" on your desktop and they look like directories in WinSCP) pointing to the respective storage locations:

                  $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
                  "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

                   Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ stands for the Control key, so ^O means Ctrl-O. The main commands are:

                  1. Open (\"Read\"): ^R

                  2. Save (\"Write Out\"): ^O

                  3. Exit: ^X

                  More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

                  "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

                  rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

                   You will need to run rsync from a computer where it is installed. Installing rsync is easiest on Linux: it comes pre-installed with a lot of distributions.

                  For example, to copy a folder with lots of CSV files:

                  $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

                   will copy the folder testfolder and its contents to $VSC_DATA, assuming the data symlink is present in your home directory (see the symlinks section).

                  The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

                  To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.
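
                   For example (a sketch reusing the folder and login address from the example above):

                   $ rsync -rzvP testfolder vsc40000@login.hpc.ugent.be:data/\n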

                  To copy files to your local computer, you can also use rsync:

                  $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
                   This will copy the folder bioset and its contents from $VSC_DATA to a local folder named local_folder.

                  See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

                  "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
                  1. Download the file /etc/hostname to your local computer.

                  2. Upload a file to a subdirectory of your personal $VSC_DATA space.

                  3. Create a file named hello.txt and edit it using nano.

                   Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

                  "}, {"location": "2023/donphan-gallade/", "title": "New Tier-2 clusters: donphan and gallade", "text": "

                  In April 2023, two new clusters were added to the HPC-UGent Tier-2 infrastructure: donphan and gallade.

                  This page provides some important information regarding these clusters, and how they differ from the clusters they are replacing (slaking and kirlia, respectively).

                  If you have any questions on using donphan or gallade, you can contact the HPC-UGent team.

                  For software installation requests, please use the request form.

                  "}, {"location": "2023/donphan-gallade/#donphan-debuginteractive-cluster", "title": "donphan: debug/interactive cluster", "text": "

                  donphan is the new debug/interactive cluster.

                  It replaces slaking, which will be retired on Monday 22 May 2023.

                  It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the HPC-UGent web portal, etc.

                  This cluster consists of 12 workernodes, each with:

                  • 2x 18-core Intel Xeon Gold 6240 (Cascade Lake @ 2.6 GHz) processor;
                  • one shared NVIDIA Ampere A2 GPU (16GB GPU memory)
                  • ~738 GiB of RAM memory;
                  • 1.6TB NVME local disk;
                  • HDR-100 InfiniBand interconnect;
                  • RHEL8 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/donphan\n

                  You can also start (interactive) sessions on donphan using the HPC-UGent web portal.

                  "}, {"location": "2023/donphan-gallade/#differences-compared-to-slaking", "title": "Differences compared to slaking", "text": ""}, {"location": "2023/donphan-gallade/#cpus", "title": "CPUs", "text": "

                  The most important difference between donphan and slaking workernodes is in the CPUs: while slaking workernodes featured Intel Haswell CPUs, which support SSE*, AVX, and AVX2 vector instructions, donphan features Intel Cascade Lake CPUs, which also support AVX-512 instructions, on top of SSE*, AVX, and AVX2.

                  Although software that was built on a slaking workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) should still run on a donphan workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions.

                  "}, {"location": "2023/donphan-gallade/#cluster-size", "title": "Cluster size", "text": "

                  The donphan cluster is significantly bigger than slaking, both in terms of number of workernodes and number of cores per workernode, and hence the potential performance impact of oversubscribed cores (see below) is less likely to occur in practice.

                  "}, {"location": "2023/donphan-gallade/#user-limits-and-oversubscription-on-donphan", "title": "User limits and oversubscription on donphan", "text": "

                  By imposing strict user limits and using oversubscription on this cluster, we ensure that anyone can get a job running without having to wait in the queue, albeit with limited resources.

                   The user limits for donphan include:

                   • max. 5 jobs in queue;
                   • max. 3 jobs running;
                   • max. of 8 cores in total for running jobs;
                   • max. 27GB of memory in total for running jobs.

                   The job scheduler is configured to allow oversubscription of the available cores, which means that jobs will continue to start even if all cores are already occupied by running jobs. While this prevents waiting time in the queue, it does imply that performance will degrade when all cores are occupied and additional jobs continue to start running.

                  "}, {"location": "2023/donphan-gallade/#shared-gpu-on-donphan-workernodes", "title": "Shared GPU on donphan workernodes", "text": "

                  Each donphan workernode includes a single NVIDIA A2 GPU that can be used for light compute workloads, and to accelerate certain graphical tasks.

                  This GPU is shared across all jobs running on the workernode, and does not need to be requested explicitly (it is always available, similar to the local disk of the workernode).

                  Warning

                  Due to the shared nature of this GPU, you should assume that any data that is loaded in the GPU memory could potentially be accessed by other users, even after your processes have completed.

                  There are no strong security guarantees regarding data protection when using this shared GPU!

                  "}, {"location": "2023/donphan-gallade/#gallade-large-memory-cluster", "title": "gallade: large-memory cluster", "text": "

                  gallade is the new large-memory cluster.

                  It replaces kirlia, which will be retired on Monday 22 May 2023.

                  This cluster consists of 12 workernodes, each with:

                  • 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) processor;
                  • ~940 GiB of RAM memory;
                  • 1.5TB NVME local disk;
                  • HDR-100 InfiniBand interconnect;
                  • RHEL8 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/gallade\n

                  You can also start (interactive) sessions on gallade using the HPC-UGent web portal.

                  "}, {"location": "2023/donphan-gallade/#differences-compared-to-kirlia", "title": "Differences compared to kirlia", "text": ""}, {"location": "2023/donphan-gallade/#cpus_1", "title": "CPUs", "text": "

                  The most important difference between gallade and kirlia workernodes is in the CPUs: while kirlia workernodes featured Intel Cascade Lake CPUs, which support vector AVX-512 instructions (next to SSE*, AVX, and AVX2), gallade features AMD Milan-X CPUs, which implement the Zen3 microarchitecture and hence do not support AVX-512 instructions (but do support SSE*, AVX, and AVX2).

                  As a result, software that was built on a kirlia workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) may not work anymore on a gallade workernode, and will produce Illegal instruction errors.

                  Therefore, you may need to recompile software in order to use it on gallade. Even if software built on kirlia does still run on gallade, it is strongly recommended to recompile it anyway, since there may be significant performance benefits.
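
                  A quick way to check whether the node you are on supports AVX-512 is to inspect the CPU flags (a minimal sketch using standard Linux tools; on gallade this should print nothing, while on AVX-512-capable nodes it lists the avx512* flags):

                  grep -o 'avx512[a-z]*' /proc/cpuinfo | sort -u\n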

                  "}, {"location": "2023/donphan-gallade/#memory-per-core", "title": "Memory per core", "text": "

                  Although gallade workernodes have significantly more RAM memory (~940 GiB) than kirlia workernodes had (~738 GiB), the average amount of memory per core is significantly lower on gallade than it was on kirlia, because a gallade workernode has 128 cores (so ~7.3 GiB per core on average), while a kirlia workernode had only 36 cores (so ~20.5 GiB per core on average).

                  It is important to take this aspect into account when submitting jobs to gallade, especially when requesting all cores via ppn=all. You may need to explicitly request more memory (see also here).
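
                  For example, a job that needs 16 cores but more memory than the ~7.3 GiB per core available on average could request the memory explicitly, roughly as sketched below (the numbers, walltime and script name myjob.sh are placeholders, not a prescription):

                  module swap cluster/gallade\nqsub -l nodes=1:ppn=16 -l mem=320gb -l walltime=12:00:00 myjob.sh\n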

                  "}, {"location": "2023/shinx/", "title": "New Tier-2 cluster: shinx", "text": "

                  In October 2023, a new pilot cluster was added to the HPC-UGent Tier-2 infrastructure: shinx.

                  This page provides some important information regarding this cluster, and how it differs from the clusters it is replacing (swalot and victini).

                  If you have any questions on using shinx, you can contact the HPC-UGent team.

                  For software installation requests, please use the request form.

                  "}, {"location": "2023/shinx/#shinx-generic-cpu-cluster", "title": "shinx: generic CPU cluster", "text": "

                  shinx is a new CPU-only cluster.

                  It replaces swalot, which was retired on Wednesday 01 November 2023, and victini, which was retired on Monday 05 February 2024.

                  It is primarily for regular CPU compute use.

                  This cluster consists of 48 workernodes, each with:

                  • 2x 96-core AMD EPYC 9654 (Genoa @ 2.4 GHz) processors;
                  • ~360 GiB of RAM memory;
                  • 400 GB local disk;
                  • NDR-200 InfiniBand interconnect;
                  • RHEL9 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/shinx\n

                  You can also start (interactive) sessions on shinx using the HPC-UGent web portal.

                  "}, {"location": "2023/shinx/#differences-compared-to-swalot-and-victini", "title": "Differences compared to swalot and victini.", "text": ""}, {"location": "2023/shinx/#cpus", "title": "CPUs", "text": "

                  The most important difference between shinx and swalot/victini workernodes is in the CPUs: while swalot and victini workernodes featured Intel CPUs, shinx workernodes have AMD Genoa CPUs.

                  Although software that was built on a swalot or victini workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing on swalot).
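
                  If you rebuild your own code, a simple approach is to compile it inside a job on a shinx workernode with architecture-specific optimization flags, so the compiler can emit AVX-512 instructions (a minimal sketch; the GCC version and the source file myprog.c are placeholders):

                  # run inside a job on a shinx workernode, so -march=native targets the Genoa CPUs\nmodule load GCC/12.3.0\ngcc -O2 -march=native -o myprog myprog.c\n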

                  "}, {"location": "2023/shinx/#cluster-size", "title": "Cluster size", "text": "

                  The shinx cluster is significantly bigger than swalot and victini in total number of cores and in number of cores per workernode, but not in number of workernodes. In particular, requesting all cores via ppn=all might be something to reconsider.

                  The amount of available memory per core is 1.9 GiB, which is lower than on the swalot nodes (6.2 GiB per core) and the victini nodes (2.5 GiB per core).

                  "}, {"location": "2023/shinx/#comparison-with-doduo", "title": "Comparison with doduo", "text": "

                  As doduo is currently the largest CPU cluster of the UGent Tier-2 infrastructure, and it is also based on AMD EPYC CPUs, we would like to point out that, roughly speaking, one shinx node is equivalent to two doduo nodes.

                  Although software that was built on a doduo workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing from doduo).

                  "}, {"location": "2023/shinx/#other-remarks", "title": "Other remarks", "text": "
                  • Possible issues with thread pinning: we have seen, especially on the Tier-1 dodrio cluster, that in certain cases thread pinning is applied where it is not expected. A typical symptom is that all started processes are pinned to a single core. Always report this issue when it occurs. You can try to mitigate it yourself by setting export OMP_PROC_BIND=false (see the sketch below), but please still report it so we can keep track of the problem. It is not recommended to set this workaround unconditionally; only use it for the specific tools that are affected.
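
                  To see where your OpenMP threads actually end up, you can ask a recent OpenMP runtime to print its affinity decisions (a minimal sketch; myprog is a placeholder for your OpenMP application):

                  export OMP_DISPLAY_AFFINITY=true\n./myprog\n# only if all threads turn out to be pinned to a single core:\nexport OMP_PROC_BIND=false\n./myprog\n
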
                  "}, {"location": "2023/shinx/#shinx-pilot-phase-23102023-15072024", "title": "Shinx pilot phase (23/10/2023-15/07/2024)", "text": "

                  As usual with any pilot phase, you need to be a member of the gpilot group, and to start using this cluster you should run:

                  module swap cluster/.shinx\n
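
                  To check whether you are already in the gpilot group, you can list your group memberships from a shell on the login node (a quick check using standard Linux tools):

                  groups | grep gpilot\n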

                  Because the delivery time of the InfiniBand network is very long, we only expect to have all the material by the end of February 2024. However, all workernodes will already be delivered in the week of 20 October 2023.

                  As such, we will have an extended pilot phase in 3 stages:

                  "}, {"location": "2023/shinx/#stage-0-23102023-17112023", "title": "Stage 0: 23/10/2023-17/11/2023", "text": "
                  • Minimal cluster to test software and nodes

                    • Only 2 or 3 nodes available
                    • FDR or EDR InfiniBand network
                    • EL8 OS
                  • Retirement of swalot cluster (as of 01 November 2023)

                  • Racking of stage 1 nodes
                  "}, {"location": "2023/shinx/#stage-1-01122023-01032024", "title": "Stage 1: 01/12/2023-01/03/2024", "text": "
                  • 2/3 cluster size

                    • 32 nodes (with max job size of 16 nodes)
                    • EDR InfiniBand
                    • EL8 OS
                  • Retirement of victini (as of 05 February 2024)

                  • Racking of the last 16 nodes
                  • Installation of the NDR/NDR-200 InfiniBand network
                  "}, {"location": "2023/shinx/#stage-2-19042024-15072024", "title": "Stage 2 (19/04/2024-15/07/2024)", "text": "
                  • Full size cluster

                    • 48 nodes (no job size limit)
                    • NDR-200 InfiniBand (single-switch InfiniBand topology)
                    • EL9 OS
                  • We expect to plan a full Tier-2 downtime in May 2024 to clean up, refactor and renew the core networks (Ethernet and InfiniBand) and some core services. It makes no sense to put shinx in production before that period, and the testing of the EL9 operating system will also take some time.

                  "}, {"location": "2023/shinx/#stage-3-15072024-", "title": "Stage 3 (15/07/2024 - )", "text": "
                  • Cluster in production using EL9 (starting with 9.4). Any user can now submit jobs.
                  "}, {"location": "2023/shinx/#using-doduo-software", "title": "Using doduo software", "text": "

                  For benchmarking and/or compatibility testing, you can try to use the doduo software stack by adding the following line to your job script, before the actual software is loaded:

                  module swap env/software/doduo\n

                  We mainly expect problems with this in stage 2 of the pilot phase (and in the later production phase), due to the change in OS.
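
                  For example, a job script that runs software from the doduo stack on shinx could look roughly like the sketch below (the resource requests and the BCFtools module are arbitrary placeholders; pick whatever module you actually need):

                  #!/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=1:00:00\nmodule swap env/software/doduo\nmodule load BCFtools/1.15.1-GCC-11.3.0\nbcftools --version\n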

                  "}, {"location": "available_software/", "title": "Available software (via modules)", "text": "

                  This table gives an overview of all the available software on the different clusters.

                  "}, {"location": "available_software/detail/ABAQUS/", "title": "ABAQUS", "text": ""}, {"location": "available_software/detail/ABAQUS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ABAQUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABAQUS, load one of these modules using a module load command like:

                  module load ABAQUS/2023\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABAQUS/2023 x x x x x x ABAQUS/2022-hotfix-2214 - x x - x x ABAQUS/2022 - x x - x x ABAQUS/2021-hotfix-2132 - x x - x x"}, {"location": "available_software/detail/ABINIT/", "title": "ABINIT", "text": ""}, {"location": "available_software/detail/ABINIT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ABINIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABINIT, load one of these modules using a module load command like:

                  module load ABINIT/9.10.3-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABINIT/9.10.3-intel-2022a - - x - x x ABINIT/9.4.1-intel-2020b - x x x x x ABINIT/9.2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/ABRA2/", "title": "ABRA2", "text": ""}, {"location": "available_software/detail/ABRA2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ABRA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABRA2, load one of these modules using a module load command like:

                  module load ABRA2/2.23-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABRA2/2.23-GCC-10.2.0 - x x x x x ABRA2/2.23-GCC-9.3.0 - x x - x x ABRA2/2.22-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/ABRicate/", "title": "ABRicate", "text": ""}, {"location": "available_software/detail/ABRicate/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ABRicate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABRicate, load one of these modules using a module load command like:

                  module load ABRicate/0.9.9-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABRicate/0.9.9-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ABySS/", "title": "ABySS", "text": ""}, {"location": "available_software/detail/ABySS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ABySS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABySS, load one of these modules using a module load command like:

                  module load ABySS/2.3.7-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABySS/2.3.7-foss-2023a x x x x x x ABySS/2.1.5-foss-2019b - x x - x x"}, {"location": "available_software/detail/ACTC/", "title": "ACTC", "text": ""}, {"location": "available_software/detail/ACTC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ACTC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ACTC, load one of these modules using a module load command like:

                  module load ACTC/1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ACTC/1.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ADMIXTURE/", "title": "ADMIXTURE", "text": ""}, {"location": "available_software/detail/ADMIXTURE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ADMIXTURE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ADMIXTURE, load one of these modules using a module load command like:

                  module load ADMIXTURE/1.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ADMIXTURE/1.3.0 - x x - x x"}, {"location": "available_software/detail/AICSImageIO/", "title": "AICSImageIO", "text": ""}, {"location": "available_software/detail/AICSImageIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AICSImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AICSImageIO, load one of these modules using a module load command like:

                  module load AICSImageIO/4.14.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AICSImageIO/4.14.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/AMAPVox/", "title": "AMAPVox", "text": ""}, {"location": "available_software/detail/AMAPVox/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AMAPVox installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMAPVox, load one of these modules using a module load command like:

                  module load AMAPVox/1.9.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMAPVox/1.9.4-Java-11 x x x - x x"}, {"location": "available_software/detail/AMICA/", "title": "AMICA", "text": ""}, {"location": "available_software/detail/AMICA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AMICA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMICA, load one of these modules using a module load command like:

                  module load AMICA/2024.1.19-intel-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMICA/2024.1.19-intel-2023a x x x x x x"}, {"location": "available_software/detail/AMOS/", "title": "AMOS", "text": ""}, {"location": "available_software/detail/AMOS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AMOS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMOS, load one of these modules using a module load command like:

                  module load AMOS/3.1.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMOS/3.1.0-foss-2023a x x x x x x AMOS/3.1.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/AMPtk/", "title": "AMPtk", "text": ""}, {"location": "available_software/detail/AMPtk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AMPtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMPtk, load one of these modules using a module load command like:

                  module load AMPtk/1.5.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMPtk/1.5.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/ANTLR/", "title": "ANTLR", "text": ""}, {"location": "available_software/detail/ANTLR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ANTLR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ANTLR, load one of these modules using a module load command like:

                  module load ANTLR/2.7.7-GCCcore-10.3.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ANTLR/2.7.7-GCCcore-10.3.0-Java-11 - x x - x x ANTLR/2.7.7-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ANTs/", "title": "ANTs", "text": ""}, {"location": "available_software/detail/ANTs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ANTs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ANTs, load one of these modules using a module load command like:

                  module load ANTs/2.3.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ANTs/2.3.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/APR-util/", "title": "APR-util", "text": ""}, {"location": "available_software/detail/APR-util/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which APR-util installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using APR-util, load one of these modules using a module load command like:

                  module load APR-util/1.6.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty APR-util/1.6.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/APR/", "title": "APR", "text": ""}, {"location": "available_software/detail/APR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which APR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using APR, load one of these modules using a module load command like:

                  module load APR/1.7.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty APR/1.7.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ARAGORN/", "title": "ARAGORN", "text": ""}, {"location": "available_software/detail/ARAGORN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ARAGORN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ARAGORN, load one of these modules using a module load command like:

                  module load ARAGORN/1.2.41-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ARAGORN/1.2.41-foss-2021b x x x - x x ARAGORN/1.2.38-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/ASCAT/", "title": "ASCAT", "text": ""}, {"location": "available_software/detail/ASCAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ASCAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ASCAT, load one of these modules using a module load command like:

                  module load ASCAT/3.1.2-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ASCAT/3.1.2-foss-2022b-R-4.2.2 x x x x x x ASCAT/3.1.2-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ASE/", "title": "ASE", "text": ""}, {"location": "available_software/detail/ASE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ASE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ASE, load one of these modules using a module load command like:

                  module load ASE/3.22.1-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ASE/3.22.1-intel-2022a x x x x x x ASE/3.22.1-intel-2021b x x x - x x ASE/3.22.1-gomkl-2021a x x x x x x ASE/3.22.1-foss-2022a x x x x x x ASE/3.22.1-foss-2021b x x x - x x ASE/3.21.1-fosscuda-2020b - - - - x - ASE/3.21.1-foss-2020b - - x x x - ASE/3.20.1-intel-2020a-Python-3.8.2 x x x x x x ASE/3.20.1-fosscuda-2020b - - - - x - ASE/3.20.1-foss-2020b - x x x x x ASE/3.19.0-intel-2019b-Python-3.7.4 - x x - x x ASE/3.19.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ATK/", "title": "ATK", "text": ""}, {"location": "available_software/detail/ATK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ATK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ATK, load one of these modules using a module load command like:

                  module load ATK/2.38.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ATK/2.38.0-GCCcore-12.3.0 x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x ATK/2.38.0-GCCcore-11.3.0 x x x x x x ATK/2.36.0-GCCcore-11.2.0 x x x x x x ATK/2.36.0-GCCcore-10.3.0 x x x - x x ATK/2.36.0-GCCcore-10.2.0 x x x x x x ATK/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/AUGUSTUS/", "title": "AUGUSTUS", "text": ""}, {"location": "available_software/detail/AUGUSTUS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AUGUSTUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AUGUSTUS, load one of these modules using a module load command like:

                  module load AUGUSTUS/3.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AUGUSTUS/3.4.0-foss-2021b x x x x x x AUGUSTUS/3.4.0-foss-2020b x x x x x x AUGUSTUS/3.3.3-intel-2019b - x x - x x AUGUSTUS/3.3.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/Abseil/", "title": "Abseil", "text": ""}, {"location": "available_software/detail/Abseil/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Abseil installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Abseil, load one of these modules using a module load command like:

                  module load Abseil/20230125.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Abseil/20230125.3-GCCcore-12.3.0 x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/AdapterRemoval/", "title": "AdapterRemoval", "text": ""}, {"location": "available_software/detail/AdapterRemoval/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AdapterRemoval installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AdapterRemoval, load one of these modules using a module load command like:

                  module load AdapterRemoval/2.3.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AdapterRemoval/2.3.3-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/Albumentations/", "title": "Albumentations", "text": ""}, {"location": "available_software/detail/Albumentations/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Albumentations installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Albumentations, load one of these modules using a module load command like:

                  module load Albumentations/1.1.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Albumentations/1.1.0-foss-2021b x x x - x x Albumentations/1.1.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/AlphaFold/", "title": "AlphaFold", "text": ""}, {"location": "available_software/detail/AlphaFold/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AlphaFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AlphaFold, load one of these modules using a module load command like:

                  module load AlphaFold/2.3.4-foss-2022a-ColabFold\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AlphaFold/2.3.4-foss-2022a-ColabFold - - x - x - AlphaFold/2.3.4-foss-2022a-CUDA-11.7.0-ColabFold x - - - x - AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0 x - - - x - AlphaFold/2.3.1-foss-2022a x x x x x x AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1 x - - - x - AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.2.2-foss-2021a - x x - x x AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.1.2-foss-2021a - x x - x x AlphaFold/2.1.1-fosscuda-2020b x - - - x - AlphaFold/2.0.0-fosscuda-2020b x - - - x - AlphaFold/2.0.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/AlphaPulldown/", "title": "AlphaPulldown", "text": ""}, {"location": "available_software/detail/AlphaPulldown/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AlphaPulldown installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AlphaPulldown, load one of these modules using a module load command like:

                  module load AlphaPulldown/0.30.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AlphaPulldown/0.30.7-foss-2022a - - x - x - AlphaPulldown/0.30.4-fosscuda-2020b x - - - x - AlphaPulldown/0.30.4-foss-2020b x x x x x x"}, {"location": "available_software/detail/Altair-EDEM/", "title": "Altair-EDEM", "text": ""}, {"location": "available_software/detail/Altair-EDEM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Altair-EDEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Altair-EDEM, load one of these modules using a module load command like:

                  module load Altair-EDEM/2021.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Altair-EDEM/2021.2 - x x - x -"}, {"location": "available_software/detail/Amber/", "title": "Amber", "text": ""}, {"location": "available_software/detail/Amber/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Amber installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Amber, load one of these modules using a module load command like:

                  module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/AmberMini/", "title": "AmberMini", "text": ""}, {"location": "available_software/detail/AmberMini/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AmberMini installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AmberMini, load one of these modules using a module load command like:

                  module load AmberMini/16.16.0-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AmberMini/16.16.0-intel-2020a - x x - x x"}, {"location": "available_software/detail/AmberTools/", "title": "AmberTools", "text": ""}, {"location": "available_software/detail/AmberTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AmberTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AmberTools, load one of these modules using a module load command like:

                  module load AmberTools/20-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AmberTools/20-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Anaconda3/", "title": "Anaconda3", "text": ""}, {"location": "available_software/detail/Anaconda3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Anaconda3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Anaconda3, load one of these modules using a module load command like:

                  module load Anaconda3/2023.03-1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Anaconda3/2023.03-1 x x x x x x Anaconda3/2020.11 - x x - x - Anaconda3/2020.07 - x - - - - Anaconda3/2020.02 - x x - x -"}, {"location": "available_software/detail/Annocript/", "title": "Annocript", "text": ""}, {"location": "available_software/detail/Annocript/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Annocript installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Annocript, load one of these modules using a module load command like:

                  module load Annocript/2.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Annocript/2.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ArchR/", "title": "ArchR", "text": ""}, {"location": "available_software/detail/ArchR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ArchR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ArchR, load one of these modules using a module load command like:

                  module load ArchR/1.0.2-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ArchR/1.0.2-foss-2023a-R-4.3.2 x x x x x x ArchR/1.0.1-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Archive-Zip/", "title": "Archive-Zip", "text": ""}, {"location": "available_software/detail/Archive-Zip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Archive-Zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Archive-Zip, load one of these modules using a module load command like:

                  module load Archive-Zip/1.68-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Archive-Zip/1.68-GCCcore-11.3.0 x x x - x x Archive-Zip/1.68-GCCcore-11.2.0 x x x - x x Archive-Zip/1.68-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Arlequin/", "title": "Arlequin", "text": ""}, {"location": "available_software/detail/Arlequin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Arlequin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Arlequin, load one of these modules using a module load command like:

                  module load Arlequin/3.5.2.2-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Arlequin/3.5.2.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Armadillo/", "title": "Armadillo", "text": ""}, {"location": "available_software/detail/Armadillo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Armadillo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Armadillo, load one of these modules using a module load command like:

                  module load Armadillo/12.6.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Armadillo/12.6.2-foss-2023a x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/Arrow/", "title": "Arrow", "text": ""}, {"location": "available_software/detail/Arrow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Arrow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Arrow, load one of these modules using a module load command like:

                  module load Arrow/14.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Arrow/14.0.1-gfbf-2023a x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x Arrow/8.0.0-foss-2022a x x x x x x Arrow/6.0.0-foss-2021b x x x x x x Arrow/6.0.0-foss-2021a - x x - x x Arrow/0.17.1-intel-2020b - x x - x x Arrow/0.17.1-intel-2020a-Python-3.8.2 - x x - x x Arrow/0.17.1-fosscuda-2020b - - - - x - Arrow/0.17.1-foss-2020a-Python-3.8.2 - x x - x x Arrow/0.16.0-intel-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/ArviZ/", "title": "ArviZ", "text": ""}, {"location": "available_software/detail/ArviZ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ArviZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ArviZ, load one of these modules using a module load command like:

                  module load ArviZ/0.16.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ArviZ/0.16.1-foss-2023a x x x x x x ArviZ/0.12.1-foss-2021a x x x x x x ArviZ/0.11.4-intel-2021b x x x - x x ArviZ/0.11.1-intel-2020b - x x - x x ArviZ/0.7.0-intel-2019b-Python-3.7.4 - x x - x x ArviZ/0.7.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Aspera-CLI/", "title": "Aspera-CLI", "text": ""}, {"location": "available_software/detail/Aspera-CLI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Aspera-CLI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Aspera-CLI, load one of these modules using a module load command like:

                  module load Aspera-CLI/3.9.6.1467.159c5b1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Aspera-CLI/3.9.6.1467.159c5b1 - x x - x -"}, {"location": "available_software/detail/AutoDock-Vina/", "title": "AutoDock-Vina", "text": ""}, {"location": "available_software/detail/AutoDock-Vina/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AutoDock-Vina installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoDock-Vina, load one of these modules using a module load command like:

                  module load AutoDock-Vina/1.2.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoDock-Vina/1.2.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/AutoGeneS/", "title": "AutoGeneS", "text": ""}, {"location": "available_software/detail/AutoGeneS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AutoGeneS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoGeneS, load one of these modules using a module load command like:

                  module load AutoGeneS/1.0.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoGeneS/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/AutoMap/", "title": "AutoMap", "text": ""}, {"location": "available_software/detail/AutoMap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which AutoMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoMap, load one of these modules using a module load command like:

                  module load AutoMap/1.0-foss-2019b-20200324\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoMap/1.0-foss-2019b-20200324 - x x - x x"}, {"location": "available_software/detail/Autoconf/", "title": "Autoconf", "text": ""}, {"location": "available_software/detail/Autoconf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Autoconf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Autoconf, load one of these modules using a module load command like:

                  module load Autoconf/2.71-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Autoconf/2.71-GCCcore-13.2.0 x x x x x x Autoconf/2.71-GCCcore-12.3.0 x x x x x x Autoconf/2.71-GCCcore-12.2.0 x x x x x x Autoconf/2.71-GCCcore-11.3.0 x x x x x x Autoconf/2.71-GCCcore-11.2.0 x x x x x x Autoconf/2.71-GCCcore-10.3.0 x x x x x x Autoconf/2.71 x x x x x x Autoconf/2.69-GCCcore-10.2.0 x x x x x x Autoconf/2.69-GCCcore-9.3.0 x x x x x x Autoconf/2.69-GCCcore-8.3.0 x x x x x x Autoconf/2.69-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Automake/", "title": "Automake", "text": ""}, {"location": "available_software/detail/Automake/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Automake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Automake, load one of these modules using a module load command like:

                  module load Automake/1.16.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Automake/1.16.5-GCCcore-13.2.0 x x x x x x Automake/1.16.5-GCCcore-12.3.0 x x x x x x Automake/1.16.5-GCCcore-12.2.0 x x x x x x Automake/1.16.5-GCCcore-11.3.0 x x x x x x Automake/1.16.5 x x x x x x Automake/1.16.4-GCCcore-11.2.0 x x x x x x Automake/1.16.3-GCCcore-10.3.0 x x x x x x Automake/1.16.2-GCCcore-10.2.0 x x x x x x Automake/1.16.1-GCCcore-9.3.0 x x x x x x Automake/1.16.1-GCCcore-8.3.0 x x x x x x Automake/1.16.1-GCCcore-8.2.0 - x - - - - Automake/1.15.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Autotools/", "title": "Autotools", "text": ""}, {"location": "available_software/detail/Autotools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Autotools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Autotools, load one of these modules using a module load command like:

                  module load Autotools/20220317-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Autotools/20220317-GCCcore-13.2.0 x x x x x x Autotools/20220317-GCCcore-12.3.0 x x x x x x Autotools/20220317-GCCcore-12.2.0 x x x x x x Autotools/20220317-GCCcore-11.3.0 x x x x x x Autotools/20220317 x x x x x x Autotools/20210726-GCCcore-11.2.0 x x x x x x Autotools/20210128-GCCcore-10.3.0 x x x x x x Autotools/20200321-GCCcore-10.2.0 x x x x x x Autotools/20180311-GCCcore-9.3.0 x x x x x x Autotools/20180311-GCCcore-8.3.0 x x x x x x Autotools/20180311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Avogadro2/", "title": "Avogadro2", "text": ""}, {"location": "available_software/detail/Avogadro2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Avogadro2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Avogadro2, load one of these modules using a module load command like:

                  module load Avogadro2/1.97.0-linux-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Avogadro2/1.97.0-linux-x86_64 x x x - x x"}, {"location": "available_software/detail/BAMSurgeon/", "title": "BAMSurgeon", "text": ""}, {"location": "available_software/detail/BAMSurgeon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BAMSurgeon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BAMSurgeon, load one of these modules using a module load command like:

                  module load BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16 - x x - x -"}, {"location": "available_software/detail/BBMap/", "title": "BBMap", "text": ""}, {"location": "available_software/detail/BBMap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BBMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BBMap, load one of these modules using a module load command like:

                  module load BBMap/39.01-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BBMap/39.01-GCC-12.2.0 x x x x x x BBMap/38.98-GCC-11.2.0 x x x - x x BBMap/38.87-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/BCFtools/", "title": "BCFtools", "text": ""}, {"location": "available_software/detail/BCFtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BCFtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BCFtools, load one of these modules using a module load command like:

                  module load BCFtools/1.18-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BCFtools/1.18-GCC-12.3.0 x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x BCFtools/1.15.1-GCC-11.3.0 x x x x x x BCFtools/1.14-GCC-11.2.0 x x x x x x BCFtools/1.12-GCC-10.3.0 x x x - x x BCFtools/1.12-GCC-10.2.0 - x x - x - BCFtools/1.11-GCC-10.2.0 x x x x x x BCFtools/1.10.2-iccifort-2019.5.281 - x x - x x BCFtools/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BDBag/", "title": "BDBag", "text": ""}, {"location": "available_software/detail/BDBag/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BDBag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BDBag, load one of these modules using a module load command like:

                  module load BDBag/1.6.3-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BDBag/1.6.3-intel-2021b x x x - x x"}, {"location": "available_software/detail/BEDOPS/", "title": "BEDOPS", "text": ""}, {"location": "available_software/detail/BEDOPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BEDOPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BEDOPS, load one of these modules using a module load command like:

                  module load BEDOPS/2.4.41-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BEDOPS/2.4.41-foss-2021b x x x x x x"}, {"location": "available_software/detail/BEDTools/", "title": "BEDTools", "text": ""}, {"location": "available_software/detail/BEDTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BEDTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BEDTools, load one of these modules using a module load command like:

                  module load BEDTools/2.31.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BEDTools/2.31.0-GCC-12.3.0 x x x x x x BEDTools/2.30.0-GCC-12.2.0 x x x x x x BEDTools/2.30.0-GCC-11.3.0 x x x x x x BEDTools/2.30.0-GCC-11.2.0 x x x x x x BEDTools/2.30.0-GCC-10.2.0 - x x x x x BEDTools/2.29.2-GCC-9.3.0 - x x - x x BEDTools/2.29.2-GCC-8.3.0 - x x - x x BEDTools/2.19.1-GCC-8.3.0 - - - - - x"}, {"location": "available_software/detail/BLAST%2B/", "title": "BLAST+", "text": ""}, {"location": "available_software/detail/BLAST%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BLAST+ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLAST+, load one of these modules using a module load command like:

                  module load BLAST+/2.14.1-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLAST+/2.14.1-gompi-2023a x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x BLAST+/2.13.0-gompi-2022a x x x x x x BLAST+/2.12.0-gompi-2021b x x x x x x BLAST+/2.11.0-gompi-2021a - x x x x x BLAST+/2.11.0-gompi-2020b x x x x x x BLAST+/2.10.1-iimpi-2020a - x x - x x BLAST+/2.10.1-gompi-2020a - x x - x x BLAST+/2.9.0-iimpi-2019b - x x - x x BLAST+/2.9.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/BLAT/", "title": "BLAT", "text": ""}, {"location": "available_software/detail/BLAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BLAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLAT, load one of these modules using a module load command like:

                  module load BLAT/3.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLAT/3.7-GCC-11.3.0 x x x x x x BLAT/3.5-GCC-9.3.0 - x x - x - BLAT/3.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BLIS/", "title": "BLIS", "text": ""}, {"location": "available_software/detail/BLIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BLIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLIS, load one of these modules using a module load command like:

                  module load BLIS/0.9.0-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLIS/0.9.0-GCC-13.2.0 x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x BLIS/0.9.0-GCC-11.3.0 x x x x x x BLIS/0.8.1-GCC-11.2.0 x x x x x x BLIS/0.8.1-GCC-10.3.0 x x x x x x BLIS/0.8.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/BRAKER/", "title": "BRAKER", "text": ""}, {"location": "available_software/detail/BRAKER/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BRAKER installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BRAKER, load one of these modules using a module load command like:

                  module load BRAKER/2.1.6-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BRAKER/2.1.6-foss-2021b x x x x x x BRAKER/2.1.6-foss-2020b x x x - x x BRAKER/2.1.5-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BSMAPz/", "title": "BSMAPz", "text": ""}, {"location": "available_software/detail/BSMAPz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BSMAPz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BSMAPz, load one of these modules using a module load command like:

                  module load BSMAPz/1.1.1-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BSMAPz/1.1.1-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/BSseeker2/", "title": "BSseeker2", "text": ""}, {"location": "available_software/detail/BSseeker2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BSseeker2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BSseeker2, load one of these modules using a module load command like:

                  module load BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16 - x - - - - BSseeker2/2.1.8-GCC-8.3.0-Python-2.7.16 - x - - - -"}, {"location": "available_software/detail/BUSCO/", "title": "BUSCO", "text": ""}, {"location": "available_software/detail/BUSCO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BUSCO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BUSCO, load one of these modules using a module load command like:

                  module load BUSCO/5.4.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BUSCO/5.4.3-foss-2021b x x x - x x BUSCO/5.1.2-foss-2020b - x x x x - BUSCO/4.1.2-foss-2020b - x x - x x BUSCO/4.0.6-foss-2020b - x x x x x BUSCO/4.0.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BUStools/", "title": "BUStools", "text": ""}, {"location": "available_software/detail/BUStools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BUStools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BUStools, load one of these modules using a module load command like:

                  module load BUStools/0.43.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BUStools/0.43.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/BWA/", "title": "BWA", "text": ""}, {"location": "available_software/detail/BWA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BWA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BWA, load one of these modules using a module load command like:

                  module load BWA/0.7.17-iccifort-2019.5.281\n
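
                  Purely as a sketch (the reference and read file names are placeholders), indexing a reference and mapping reads with BWA-MEM could then look like:

                  # illustrative only: file names are placeholders\nbwa index reference.fa\nbwa mem -t 4 reference.fa reads.fq > aln.sam\n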

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BWA/0.7.17-iccifort-2019.5.281 - x - - - - BWA/0.7.17-GCCcore-12.3.0 x x x x x x BWA/0.7.17-GCCcore-12.2.0 x x x x x x BWA/0.7.17-GCCcore-11.3.0 x x x x x x BWA/0.7.17-GCCcore-11.2.0 x x x x x x BWA/0.7.17-GCC-10.2.0 - x x x x x BWA/0.7.17-GCC-9.3.0 - x x - x x BWA/0.7.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BamTools/", "title": "BamTools", "text": ""}, {"location": "available_software/detail/BamTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BamTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BamTools, load one of these modules using a module load command like:

                  module load BamTools/2.5.2-GCC-12.3.0\n
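
                  For example, after loading the module, basic statistics for a BAM file can be printed with the bamtools command-line tool (the input file name below is a placeholder):

                  # illustrative only: sample.bam is a placeholder\nbamtools stats -in sample.bam\n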

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BamTools/2.5.2-GCC-12.3.0 x x x x x x BamTools/2.5.2-GCC-12.2.0 x x x x x x BamTools/2.5.2-GCC-11.3.0 x x x x x x BamTools/2.5.2-GCC-11.2.0 x x x x x x BamTools/2.5.1-iccifort-2019.5.281 - x x - x x BamTools/2.5.1-GCC-10.2.0 x x x x x x BamTools/2.5.1-GCC-9.3.0 - x x - x x BamTools/2.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bambi/", "title": "Bambi", "text": ""}, {"location": "available_software/detail/Bambi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bambi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bambi, load one of these modules using a module load command like:

                  module load Bambi/0.7.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bambi/0.7.1-intel-2021b x x x - x x"}, {"location": "available_software/detail/Bandage/", "title": "Bandage", "text": ""}, {"location": "available_software/detail/Bandage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bandage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bandage, load one of these modules using a module load command like:

                  module load Bandage/0.9.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bandage/0.9.0-GCCcore-11.2.0 x x x - x x Bandage/0.8.1_Centos - x x x x x"}, {"location": "available_software/detail/BatMeth2/", "title": "BatMeth2", "text": ""}, {"location": "available_software/detail/BatMeth2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BatMeth2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BatMeth2, load one of these modules using a module load command like:

                  module load BatMeth2/2.1-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BatMeth2/2.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/BayeScEnv/", "title": "BayeScEnv", "text": ""}, {"location": "available_software/detail/BayeScEnv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayeScEnv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayeScEnv, load one of these modules using a module load command like:

                  module load BayeScEnv/1.1-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayeScEnv/1.1-iccifort-2019.5.281 - x - - - - BayeScEnv/1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/BayeScan/", "title": "BayeScan", "text": ""}, {"location": "available_software/detail/BayeScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayeScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayeScan, load one of these modules using a module load command like:

                  module load BayeScan/2.1-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayeScan/2.1-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/BayesAss3-SNPs/", "title": "BayesAss3-SNPs", "text": ""}, {"location": "available_software/detail/BayesAss3-SNPs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayesAss3-SNPs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayesAss3-SNPs, load one of these modules using a module load command like:

                  module load BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/BayesPrism/", "title": "BayesPrism", "text": ""}, {"location": "available_software/detail/BayesPrism/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayesPrism installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayesPrism, load one of these modules using a module load command like:

                  module load BayesPrism/2.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayesPrism/2.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Bazel/", "title": "Bazel", "text": ""}, {"location": "available_software/detail/Bazel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bazel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bazel, load one of these modules using a module load command like:

                  module load Bazel/6.3.1-GCCcore-12.3.0\n
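
                  After loading the module you can confirm which Bazel is active and build a target; the //main:hello label below is a placeholder for a target in your own workspace:

                  # illustrative only: //main:hello is a placeholder target\nbazel version\nbazel build //main:hello\n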

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bazel/6.3.1-GCCcore-12.3.0 x x x x x x Bazel/6.3.1-GCCcore-12.2.0 x x x x x x Bazel/5.1.1-GCCcore-11.3.0 x x x x x x Bazel/4.2.2-GCCcore-11.2.0 - - - x - - Bazel/3.7.2-GCCcore-11.2.0 x x x x x x Bazel/3.7.2-GCCcore-10.3.0 x x x x x x Bazel/3.7.2-GCCcore-10.2.0 x x x x x x Bazel/3.6.0-GCCcore-9.3.0 - x x - x x Bazel/3.4.1-GCCcore-8.3.0 - - x - x x Bazel/2.0.0-GCCcore-10.2.0 - x x x x x Bazel/2.0.0-GCCcore-8.3.0 - x x - x x Bazel/0.29.1-GCCcore-8.3.0 - x x - x x Bazel/0.26.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Beast/", "title": "Beast", "text": ""}, {"location": "available_software/detail/Beast/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Beast installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Beast, load one of these modules using a module load command like:

                  module load Beast/2.7.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Beast/2.7.3-GCC-11.3.0 x x x x x x Beast/2.6.4-GCC-10.2.0 - x x - x - Beast/1.10.5pre1-GCC-11.3.0 x x x - x x Beast/1.10.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/BeautifulSoup/", "title": "BeautifulSoup", "text": ""}, {"location": "available_software/detail/BeautifulSoup/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BeautifulSoup, load one of these modules using a module load command like:

                  module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x BeautifulSoup/4.11.1-GCCcore-12.2.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.3.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.2.0 x x x - x x BeautifulSoup/4.10.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/BerkeleyGW/", "title": "BerkeleyGW", "text": ""}, {"location": "available_software/detail/BerkeleyGW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BerkeleyGW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BerkeleyGW, load one of these modules using a module load command like:

                  module load BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4 - x x - x x BerkeleyGW/2.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BiG-SCAPE/", "title": "BiG-SCAPE", "text": ""}, {"location": "available_software/detail/BiG-SCAPE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BiG-SCAPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BiG-SCAPE, load one of these modules using a module load command like:

                  module load BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BigDFT/", "title": "BigDFT", "text": ""}, {"location": "available_software/detail/BigDFT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BigDFT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BigDFT, load one of these modules using a module load command like:

                  module load BigDFT/1.9.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BigDFT/1.9.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/BinSanity/", "title": "BinSanity", "text": ""}, {"location": "available_software/detail/BinSanity/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BinSanity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BinSanity, load one of these modules using a module load command like:

                  module load BinSanity/0.3.5-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BinSanity/0.3.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Bio-DB-HTS/", "title": "Bio-DB-HTS", "text": ""}, {"location": "available_software/detail/Bio-DB-HTS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-DB-HTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bio-DB-HTS, load one of these modules using a module load command like:

                  module load Bio-DB-HTS/3.01-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-DB-HTS/3.01-GCC-11.3.0 x x x - x x Bio-DB-HTS/3.01-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Bio-EUtilities/", "title": "Bio-EUtilities", "text": ""}, {"location": "available_software/detail/Bio-EUtilities/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-EUtilities installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bio-EUtilities, load one of these modules using a module load command like:

                  module load Bio-EUtilities/1.76-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-EUtilities/1.76-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bio-SearchIO-hmmer/", "title": "Bio-SearchIO-hmmer", "text": ""}, {"location": "available_software/detail/Bio-SearchIO-hmmer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-SearchIO-hmmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

                  module load Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/BioPerl/", "title": "BioPerl", "text": ""}, {"location": "available_software/detail/BioPerl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BioPerl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BioPerl, load one of these modules using a module load command like:

                  module load BioPerl/1.7.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BioPerl/1.7.8-GCCcore-11.3.0 x x x x x x BioPerl/1.7.8-GCCcore-11.2.0 x x x x x x BioPerl/1.7.8-GCCcore-10.2.0 - x x x x x BioPerl/1.7.7-GCCcore-9.3.0 - x x - x x BioPerl/1.7.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Biopython/", "title": "Biopython", "text": ""}, {"location": "available_software/detail/Biopython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Biopython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Biopython, load one of these modules using a module load command like:

                  module load Biopython/1.83-foss-2023a\n
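
                  A quick way to verify that the loaded module works in your environment (a minimal sketch, not part of the generated overview) is to import the Bio package and print its version:

                  python -c 'import Bio; print(Bio.__version__)'\n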

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Biopython/1.83-foss-2023a x x x x x x Biopython/1.81-foss-2022b x x x x x x Biopython/1.79-foss-2022a x x x x x x Biopython/1.79-foss-2021b x x x x x x Biopython/1.79-foss-2021a x x x x x x Biopython/1.78-intel-2020b - x x - x x Biopython/1.78-intel-2020a-Python-3.8.2 - x x - x x Biopython/1.78-fosscuda-2020b x - - - x - Biopython/1.78-foss-2020b x x x x x x Biopython/1.78-foss-2020a-Python-3.8.2 - x x - x x Biopython/1.76-foss-2021b-Python-2.7.18 x x x x x x Biopython/1.76-foss-2020b-Python-2.7.18 - x x x x x Biopython/1.75-intel-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Bismark/", "title": "Bismark", "text": ""}, {"location": "available_software/detail/Bismark/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bismark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bismark, load one of these modules using a module load command like:

                  module load Bismark/0.23.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bismark/0.23.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/Bison/", "title": "Bison", "text": ""}, {"location": "available_software/detail/Bison/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bison installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bison, load one of these modules using a module load command like:

                  module load Bison/3.8.2-GCCcore-13.2.0\n
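
                  As a minimal illustration (parser.y is a placeholder grammar file), you can check the version and generate a parser together with its header file:

                  # illustrative only: parser.y is a placeholder grammar\nbison --version\nbison -d -o parser.tab.c parser.y\n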

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bison/3.8.2-GCCcore-13.2.0 x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x Bison/3.8.2-GCCcore-11.3.0 x x x x x x Bison/3.8.2 x x x x x x Bison/3.7.6-GCCcore-11.2.0 x x x x x x Bison/3.7.6-GCCcore-10.3.0 x x x x x x Bison/3.7.6 x x x - x - Bison/3.7.1-GCCcore-10.2.0 x x x x x x Bison/3.7.1 x x x - x - Bison/3.5.3-GCCcore-9.3.0 x x x x x x Bison/3.5.3 x x x - x - Bison/3.3.2-GCCcore-8.3.0 x x x x x x Bison/3.3.2 x x x x x x Bison/3.0.5-GCCcore-8.2.0 - x - - - - Bison/3.0.5 - x - - - x Bison/3.0.4 x x x x x x"}, {"location": "available_software/detail/Blender/", "title": "Blender", "text": ""}, {"location": "available_software/detail/Blender/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Blender, load one of these modules using a module load command like:

                  module load Blender/3.5.0-linux-x86_64-CUDA-11.7.0\n
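
                  On the clusters Blender is normally used headless; a sketch of rendering a single frame in background mode (scene.blend and the output pattern are placeholders) could look like:

                  # illustrative only: scene file and output pattern are placeholders\nblender -b scene.blend -o //render_#### -f 1\n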

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blender/3.5.0-linux-x86_64-CUDA-11.7.0 x x x x x x Blender/3.3.1-linux-x86_64-CUDA-11.7.0 x - - - x - Blender/3.3.1-linux-x86_64 x x x - x x Blender/2.81-intel-2019b-Python-3.7.4 - x x - x x Blender/2.81-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Block/", "title": "Block", "text": ""}, {"location": "available_software/detail/Block/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Block installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Block, load one of these modules using a module load command like:

                  module load Block/1.5.3-20200525-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Block/1.5.3-20200525-foss-2022b x x x x x x Block/1.5.3-20200525-foss-2022a - x x x x x"}, {"location": "available_software/detail/Blosc/", "title": "Blosc", "text": ""}, {"location": "available_software/detail/Blosc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blosc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Blosc, load one of these modules using a module load command like:

                  module load Blosc/1.21.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blosc/1.21.3-GCCcore-11.3.0 x x x x x x Blosc/1.21.1-GCCcore-11.2.0 x x x x x x Blosc/1.21.0-GCCcore-10.3.0 x x x x x x Blosc/1.21.0-GCCcore-10.2.0 - x x x x x Blosc/1.17.1-GCCcore-9.3.0 x x x x x x Blosc/1.17.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Blosc2/", "title": "Blosc2", "text": ""}, {"location": "available_software/detail/Blosc2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blosc2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Blosc2, load one of these modules using a module load command like:

                  module load Blosc2/2.6.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blosc2/2.6.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Bonito/", "title": "Bonito", "text": ""}, {"location": "available_software/detail/Bonito/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bonito installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bonito, load one of these modules using a module load command like:

                  module load Bonito/0.4.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bonito/0.4.0-fosscuda-2020b - - - - x - Bonito/0.3.8-fosscuda-2020b - - - - x - Bonito/0.1.0-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/Bonnie%2B%2B/", "title": "Bonnie++", "text": ""}, {"location": "available_software/detail/Bonnie%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bonnie++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bonnie++, load one of these modules using a module load command like:

                  module load Bonnie++/2.00a-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bonnie++/2.00a-GCC-10.3.0 - x - - - -"}, {"location": "available_software/detail/Boost.MPI/", "title": "Boost.MPI", "text": ""}, {"location": "available_software/detail/Boost.MPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost.MPI, load one of these modules using a module load command like:

                  module load Boost.MPI/1.81.0-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.MPI/1.81.0-gompi-2022b x x x x x x Boost.MPI/1.79.0-gompi-2022a - x x x x x Boost.MPI/1.77.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Boost.Python-NumPy/", "title": "Boost.Python-NumPy", "text": ""}, {"location": "available_software/detail/Boost.Python-NumPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.Python-NumPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost.Python-NumPy, load one of these modules using a module load command like:

                  module load Boost.Python-NumPy/1.79.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.Python-NumPy/1.79.0-foss-2022a - - x - x -"}, {"location": "available_software/detail/Boost.Python/", "title": "Boost.Python", "text": ""}, {"location": "available_software/detail/Boost.Python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost.Python, load one of these modules using a module load command like:

                  module load Boost.Python/1.79.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.Python/1.79.0-GCC-11.3.0 x x x x x x Boost.Python/1.77.0-GCC-11.2.0 x x x - x x Boost.Python/1.72.0-iimpi-2020a - x x - x x Boost.Python/1.71.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Boost/", "title": "Boost", "text": ""}, {"location": "available_software/detail/Boost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost, load one of these modules using a module load command like:

                  module load Boost/1.82.0-GCC-12.3.0\n
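
                  Assuming the loaded Boost module exposes its headers to the compiler through the environment variables set by the module system (an assumption, not stated in this overview), compiling a program that only uses header-only Boost libraries can be as simple as:

                  # sketch only: example.cpp is a placeholder source using header-only Boost; header visibility via the module is assumed\ng++ -std=c++17 example.cpp -o example\n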

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost/1.82.0-GCC-12.3.0 x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x Boost/1.79.0-GCC-11.3.0 x x x x x x Boost/1.79.0-GCC-11.2.0 x x x x x x Boost/1.77.0-intel-compilers-2021.4.0 x x x x x x Boost/1.77.0-GCC-11.2.0 x x x x x x Boost/1.76.0-intel-compilers-2021.2.0 - x x - x x Boost/1.76.0-GCC-10.3.0 x x x x x x Boost/1.75.0-GCC-11.2.0 x x x x x x Boost/1.74.0-iccifort-2020.4.304 - x x x x x Boost/1.74.0-GCC-10.2.0 x x x x x x Boost/1.72.0-iompi-2020a - x - - - - Boost/1.72.0-iimpi-2020a x x x x x x Boost/1.72.0-gompi-2020a - x x - x x Boost/1.71.0-iimpi-2019b - x x - x x Boost/1.71.0-gompi-2019b x x x - x x"}, {"location": "available_software/detail/Bottleneck/", "title": "Bottleneck", "text": ""}, {"location": "available_software/detail/Bottleneck/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bottleneck installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bottleneck, load one of these modules using a module load command like:

                  module load Bottleneck/1.3.2-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bottleneck/1.3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Bowtie/", "title": "Bowtie", "text": ""}, {"location": "available_software/detail/Bowtie/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bowtie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bowtie, load one of these modules using a module load command like:

                  module load Bowtie/1.3.1-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bowtie/1.3.1-GCC-11.3.0 x x x x x x Bowtie/1.3.1-GCC-11.2.0 x x x x x x Bowtie/1.3.0-GCC-10.2.0 - x x - x - Bowtie/1.2.3-iccifort-2019.5.281 - x - - - - Bowtie/1.2.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bowtie2/", "title": "Bowtie2", "text": ""}, {"location": "available_software/detail/Bowtie2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bowtie2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bowtie2, load one of these modules using a module load command like:

                  module load Bowtie2/2.4.5-GCC-11.3.0\n
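
                  As an illustrative sketch (the reference and read file names are placeholders), building an index and aligning single-end reads looks like:

                  # illustrative only: file names are placeholders\nbowtie2-build reference.fa ref_index\nbowtie2 -p 4 -x ref_index -U reads.fq -S aln.sam\n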

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bowtie2/2.4.5-GCC-11.3.0 x x x x x x Bowtie2/2.4.4-GCC-11.2.0 x x x - x x Bowtie2/2.4.2-GCC-10.2.0 - x x x x x Bowtie2/2.4.1-GCC-9.3.0 - x x - x x Bowtie2/2.3.5.1-iccifort-2019.5.281 - x - - - - Bowtie2/2.3.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bracken/", "title": "Bracken", "text": ""}, {"location": "available_software/detail/Bracken/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bracken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bracken, load one of these modules using a module load command like:

                  module load Bracken/2.9-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bracken/2.9-GCCcore-10.3.0 x x x x x x Bracken/2.7-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Brotli-python/", "title": "Brotli-python", "text": ""}, {"location": "available_software/detail/Brotli-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brotli-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Brotli-python, load one of these modules using a module load command like:

                  module load Brotli-python/1.0.9-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brotli-python/1.0.9-GCCcore-11.3.0 x x x x x x Brotli-python/1.0.9-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Brotli/", "title": "Brotli", "text": ""}, {"location": "available_software/detail/Brotli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brotli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Brotli, load one of these modules using a module load command like:

                  module load Brotli/1.1.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brotli/1.1.0-GCCcore-13.2.0 x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x Brotli/1.0.9-GCCcore-11.3.0 x x x x x x Brotli/1.0.9-GCCcore-11.2.0 x x x x x x Brotli/1.0.9-GCCcore-10.3.0 x x x x x x Brotli/1.0.9-GCCcore-10.2.0 x - x x x x"}, {"location": "available_software/detail/Brunsli/", "title": "Brunsli", "text": ""}, {"location": "available_software/detail/Brunsli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brunsli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Brunsli, load one of these modules using a module load command like:

                  module load Brunsli/0.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brunsli/0.1-GCCcore-12.3.0 x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x Brunsli/0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CASPR/", "title": "CASPR", "text": ""}, {"location": "available_software/detail/CASPR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CASPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CASPR, load one of these modules using a module load command like:

                  module load CASPR/20200730-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CASPR/20200730-foss-2022a x x x x x x"}, {"location": "available_software/detail/CCL/", "title": "CCL", "text": ""}, {"location": "available_software/detail/CCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CCL, load one of these modules using a module load command like:

                  module load CCL/1.12.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CCL/1.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/CD-HIT/", "title": "CD-HIT", "text": ""}, {"location": "available_software/detail/CD-HIT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CD-HIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CD-HIT, load one of these modules using a module load command like:

                  module load CD-HIT/4.8.1-iccifort-2019.5.281\n
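
                  A hedged example of clustering protein sequences (the file names and the 90% identity threshold are illustrative, not prescribed here):

                  # illustrative only: input/output names and threshold are placeholders\ncd-hit -i proteins.fa -o proteins_nr90.fa -c 0.9 -T 4\n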

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CD-HIT/4.8.1-iccifort-2019.5.281 - x x - x x CD-HIT/4.8.1-GCC-12.2.0 x x x x x x CD-HIT/4.8.1-GCC-11.2.0 x x x - x x CD-HIT/4.8.1-GCC-10.2.0 - x x x x x CD-HIT/4.8.1-GCC-9.3.0 - x x - x x CD-HIT/4.8.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/CDAT/", "title": "CDAT", "text": ""}, {"location": "available_software/detail/CDAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CDAT, load one of these modules using a module load command like:

                  module load CDAT/8.2.1-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDAT/8.2.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/CDBtools/", "title": "CDBtools", "text": ""}, {"location": "available_software/detail/CDBtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDBtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CDBtools, load one of these modules using a module load command like:

                  module load CDBtools/0.99-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDBtools/0.99-GCC-10.2.0 x x x - x x"}, {"location": "available_software/detail/CDO/", "title": "CDO", "text": ""}, {"location": "available_software/detail/CDO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CDO, load one of these modules using a module load command like:

                  module load CDO/2.0.5-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDO/2.0.5-gompi-2021b x x x x x x CDO/1.9.10-gompi-2021a x x x - x x CDO/1.9.8-intel-2019b - x x - x x"}, {"location": "available_software/detail/CENSO/", "title": "CENSO", "text": ""}, {"location": "available_software/detail/CENSO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CENSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CENSO, load one of these modules using a module load command like:

                  module load CENSO/1.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CENSO/1.2.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/CESM-deps/", "title": "CESM-deps", "text": ""}, {"location": "available_software/detail/CESM-deps/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CESM-deps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CESM-deps, load one of these modules using a module load command like:

                  module load CESM-deps/2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CESM-deps/2-foss-2021b x x x - x x"}, {"location": "available_software/detail/CFDEMcoupling/", "title": "CFDEMcoupling", "text": ""}, {"location": "available_software/detail/CFDEMcoupling/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CFDEMcoupling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CFDEMcoupling, load one of these modules using a module load command like:

                  module load CFDEMcoupling/3.8.0-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CFDEMcoupling/3.8.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/CFITSIO/", "title": "CFITSIO", "text": ""}, {"location": "available_software/detail/CFITSIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CFITSIO, load one of these modules using a module load command like:

                  module load CFITSIO/4.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x CFITSIO/4.2.0-GCCcore-11.3.0 x x x x x x CFITSIO/4.1.0-GCCcore-11.3.0 x x x x x x CFITSIO/3.49-GCCcore-11.2.0 x x x x x x CFITSIO/3.47-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CGAL/", "title": "CGAL", "text": ""}, {"location": "available_software/detail/CGAL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CGAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CGAL, load one of these modules using a module load command like:

                  module load CGAL/5.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CGAL/5.6-GCCcore-12.3.0 x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x CGAL/5.2-iimpi-2020b - x - - - - CGAL/5.2-gompi-2020b x x x x x x CGAL/4.14.3-iimpi-2021a - x x - x x CGAL/4.14.3-gompi-2022a x x x x x x CGAL/4.14.3-gompi-2021b x x x x x x CGAL/4.14.3-gompi-2021a x x x x x x CGAL/4.14.3-gompi-2020a-Python-3.8.2 - x x - x x CGAL/4.14.1-foss-2019b-Python-3.7.4 x x x - x x CGAL/4.14.1-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/CGmapTools/", "title": "CGmapTools", "text": ""}, {"location": "available_software/detail/CGmapTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CGmapTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CGmapTools, load one of these modules using a module load command like:

                  module load CGmapTools/0.1.2-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CGmapTools/0.1.2-intel-2019b - x x - x x"}, {"location": "available_software/detail/CIRCexplorer2/", "title": "CIRCexplorer2", "text": ""}, {"location": "available_software/detail/CIRCexplorer2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRCexplorer2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CIRCexplorer2, load one of these modules using a module load command like:

                  module load CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18 x x x x x x CIRCexplorer2/2.3.8-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CIRI-long/", "title": "CIRI-long", "text": ""}, {"location": "available_software/detail/CIRI-long/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRI-long installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CIRI-long, load one of these modules using a module load command like:

                  module load CIRI-long/1.0.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRI-long/1.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/CIRIquant/", "title": "CIRIquant", "text": ""}, {"location": "available_software/detail/CIRIquant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRIquant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CIRIquant, load one of these modules using a module load command like:

                  module load CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CITE-seq-Count/", "title": "CITE-seq-Count", "text": ""}, {"location": "available_software/detail/CITE-seq-Count/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CITE-seq-Count installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CITE-seq-Count, load one of these modules using a module load command like:

                  module load CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/CLEAR/", "title": "CLEAR", "text": ""}, {"location": "available_software/detail/CLEAR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CLEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CLEAR, load one of these modules using a module load command like:

                  module load CLEAR/20210117-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CLEAR/20210117-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CLHEP/", "title": "CLHEP", "text": ""}, {"location": "available_software/detail/CLHEP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CLHEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CLHEP, load one of these modules using a module load command like:

                  module load CLHEP/2.4.6.4-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CLHEP/2.4.6.4-GCC-12.2.0 x x x x x x CLHEP/2.4.5.3-GCC-11.3.0 x x x x x x CLHEP/2.4.5.1-GCC-11.2.0 x x x x x x CLHEP/2.4.4.0-GCC-11.2.0 x x x x x x CLHEP/2.4.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/CMAverse/", "title": "CMAverse", "text": ""}, {"location": "available_software/detail/CMAverse/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMAverse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CMAverse, load one of these modules using a module load command like:

                  module load CMAverse/20220112-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMAverse/20220112-foss-2021b x x x - x x"}, {"location": "available_software/detail/CMSeq/", "title": "CMSeq", "text": ""}, {"location": "available_software/detail/CMSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CMSeq, load one of these modules using a module load command like:

                  module load CMSeq/1.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMSeq/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CMake/", "title": "CMake", "text": ""}, {"location": "available_software/detail/CMake/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CMake, load one of these modules using a module load command like:

                  module load CMake/3.27.6-GCCcore-13.2.0\n
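
                  With a recent CMake loaded, a generic out-of-source build of a project (paths and options below are placeholders; the exact options depend on the project) follows the usual two-step pattern:

                  # illustrative only: assumes a CMake-based project in the current directory\ncmake -S . -B build -DCMAKE_BUILD_TYPE=Release\ncmake --build build -j 4\n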

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMake/3.27.6-GCCcore-13.2.0 x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x CMake/3.24.3-GCCcore-11.3.0 x x x x x x CMake/3.23.1-GCCcore-11.3.0 x x x x x x CMake/3.22.1-GCCcore-11.2.0 x x x x x x CMake/3.21.1-GCCcore-11.2.0 x x x x x x CMake/3.20.1-GCCcore-10.3.0 x x x x x x CMake/3.20.1-GCCcore-10.2.0 x - - - - - CMake/3.18.4-GCCcore-10.2.0 x x x x x x CMake/3.16.4-GCCcore-9.3.0 x x x x x x CMake/3.15.3-GCCcore-8.3.0 x x x x x x CMake/3.13.3-GCCcore-8.2.0 - x - - - - CMake/3.12.1 x x x x x x CMake/3.11.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/COLMAP/", "title": "COLMAP", "text": ""}, {"location": "available_software/detail/COLMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which COLMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using COLMAP, load one of these modules using a module load command like:

                  module load COLMAP/3.8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty COLMAP/3.8-foss-2022b x x x x x x"}, {"location": "available_software/detail/CONCOCT/", "title": "CONCOCT", "text": ""}, {"location": "available_software/detail/CONCOCT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CONCOCT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CONCOCT, load one of these modules using a module load command like:

                  module load CONCOCT/1.1.0-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CONCOCT/1.1.0-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CP2K/", "title": "CP2K", "text": ""}, {"location": "available_software/detail/CP2K/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CP2K installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CP2K, load one of these modules using a module load command like:

                  module load CP2K/2023.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CP2K/2023.1-foss-2023a x x x x x x CP2K/2023.1-foss-2022b x x x x x x CP2K/2022.1-foss-2022a x x x x x x CP2K/9.1-foss-2022a x x x x x x CP2K/8.2-foss-2021a - x x x x - CP2K/8.1-foss-2020b - x x x x - CP2K/7.1-intel-2020a - x x - x x CP2K/7.1-foss-2020a - x x - x x CP2K/6.1-intel-2020a - x x - x x CP2K/5.1-iomkl-2020a - x - - - - CP2K/5.1-intel-2020a-O1 - x - - - - CP2K/5.1-intel-2020a - x x - x x CP2K/5.1-intel-2019b - x - - - - CP2K/5.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/CPC2/", "title": "CPC2", "text": ""}, {"location": "available_software/detail/CPC2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPC2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CPC2, load one of these modules using a module load command like:

                  module load CPC2/1.0.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPC2/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CPLEX/", "title": "CPLEX", "text": ""}, {"location": "available_software/detail/CPLEX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPLEX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CPLEX, load one of these modules using a module load command like:

                  module load CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4 x x x x x x"}, {"location": "available_software/detail/CPPE/", "title": "CPPE", "text": ""}, {"location": "available_software/detail/CPPE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CPPE, load one of these modules using a module load command like:

                  module load CPPE/0.3.1-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPPE/0.3.1-GCC-12.2.0 x x x x x x CPPE/0.3.1-GCC-11.3.0 - x x x x x"}, {"location": "available_software/detail/CREST/", "title": "CREST", "text": ""}, {"location": "available_software/detail/CREST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CREST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CREST, load one of these modules using a module load command like:

                  module load CREST/2.12-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CREST/2.12-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CRISPR-DAV/", "title": "CRISPR-DAV", "text": ""}, {"location": "available_software/detail/CRISPR-DAV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRISPR-DAV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CRISPR-DAV, load one of these modules using a module load command like:

                  module load CRISPR-DAV/2.3.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRISPR-DAV/2.3.4-foss-2020b - x x x x -"}, {"location": "available_software/detail/CRISPResso2/", "title": "CRISPResso2", "text": ""}, {"location": "available_software/detail/CRISPResso2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRISPResso2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CRISPResso2, load one of these modules using a module load command like:

                  module load CRISPResso2/2.2.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRISPResso2/2.2.1-foss-2020b - x x x x x CRISPResso2/2.1.2-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CRYSTAL17/", "title": "CRYSTAL17", "text": ""}, {"location": "available_software/detail/CRYSTAL17/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRYSTAL17 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CRYSTAL17, load one of these modules using a module load command like:

                  module load CRYSTAL17/1.0.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRYSTAL17/1.0.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/CSBDeep/", "title": "CSBDeep", "text": ""}, {"location": "available_software/detail/CSBDeep/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CSBDeep installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CSBDeep, load one of these modules using a module load command like:

                  module load CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0 x - - - x - CSBDeep/0.7.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CUDA/", "title": "CUDA", "text": ""}, {"location": "available_software/detail/CUDA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CUDA, load one of these modules using a module load command like:

                  module load CUDA/12.1.1\n
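                  A minimal sketch of verifying a loaded CUDA module (assuming you run this on a node that actually has a GPU allocated; on a login node only the compiler check applies):

                  module load CUDA/12.1.1
                  # the CUDA toolkit, including the nvcc compiler, is now on your PATH
                  nvcc --version
                  # on a GPU node this should list the allocated device(s)
                  nvidia-smi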

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUDA/12.1.1 x - x - x - CUDA/11.7.0 x x x x x x CUDA/11.4.1 x - - - x - CUDA/11.3.1 x x x - x x CUDA/11.1.1-iccifort-2020.4.304 - - - - x - CUDA/11.1.1-GCC-10.2.0 x x x x x x CUDA/11.0.2-iccifort-2020.1.217 - - - - x - CUDA/10.1.243-iccifort-2019.5.281 - - - - x - CUDA/10.1.243-GCC-8.3.0 x - - - x -"}, {"location": "available_software/detail/CUDAcore/", "title": "CUDAcore", "text": ""}, {"location": "available_software/detail/CUDAcore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUDAcore installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CUDAcore, load one of these modules using a module load command like:

                  module load CUDAcore/11.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUDAcore/11.2.1 x - x - x - CUDAcore/11.1.1 x x x x x x CUDAcore/11.0.2 - - - - x -"}, {"location": "available_software/detail/CUnit/", "title": "CUnit", "text": ""}, {"location": "available_software/detail/CUnit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUnit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CUnit, load one of these modules using a module load command like:

                  module load CUnit/2.1-3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUnit/2.1-3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/CVXOPT/", "title": "CVXOPT", "text": ""}, {"location": "available_software/detail/CVXOPT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CVXOPT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CVXOPT, load one of these modules using a module load command like:

                  module load CVXOPT/1.3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CVXOPT/1.3.1-foss-2022a x x x x x x CVXOPT/1.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Calib/", "title": "Calib", "text": ""}, {"location": "available_software/detail/Calib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Calib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Calib, load one of these modules using a module load command like:

                  module load Calib/0.3.4-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Calib/0.3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/Cantera/", "title": "Cantera", "text": ""}, {"location": "available_software/detail/Cantera/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cantera installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cantera, load one of these modules using a module load command like:

                  module load Cantera/3.0.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cantera/3.0.0-foss-2023a x x x x x x Cantera/2.6.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/CapnProto/", "title": "CapnProto", "text": ""}, {"location": "available_software/detail/CapnProto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CapnProto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CapnProto, load one of these modules using a module load command like:

                  module load CapnProto/1.0.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x CapnProto/0.9.1-GCCcore-11.2.0 x x x - x x CapnProto/0.8.0-GCCcore-9.3.0 - x x x - x"}, {"location": "available_software/detail/Cartopy/", "title": "Cartopy", "text": ""}, {"location": "available_software/detail/Cartopy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cartopy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cartopy, load one of these modules using a module load command like:

                  module load Cartopy/0.22.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cartopy/0.22.0-foss-2023a x x x x x x Cartopy/0.20.3-foss-2022a x x x x x x Cartopy/0.20.3-foss-2021b x x x x x x Cartopy/0.19.0.post1-intel-2020b - x x - x x Cartopy/0.19.0.post1-foss-2020b - x x x x x Cartopy/0.18.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Casanovo/", "title": "Casanovo", "text": ""}, {"location": "available_software/detail/Casanovo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Casanovo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Casanovo, load one of these modules using a module load command like:

                  module load Casanovo/3.3.0-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Casanovo/3.3.0-foss-2022a-CUDA-11.7.0 x - - - x - Casanovo/3.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CatBoost/", "title": "CatBoost", "text": ""}, {"location": "available_software/detail/CatBoost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatBoost installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CatBoost, load one of these modules using a module load command like:

                  module load CatBoost/1.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatBoost/1.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CatLearn/", "title": "CatLearn", "text": ""}, {"location": "available_software/detail/CatLearn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatLearn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CatLearn, load one of these modules using a module load command like:

                  module load CatLearn/0.6.2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatLearn/0.6.2-intel-2022a x x x x x x"}, {"location": "available_software/detail/CatMAP/", "title": "CatMAP", "text": ""}, {"location": "available_software/detail/CatMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CatMAP, load one of these modules using a module load command like:

                  module load CatMAP/20220519-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatMAP/20220519-foss-2022a x x x x x x"}, {"location": "available_software/detail/Catch2/", "title": "Catch2", "text": ""}, {"location": "available_software/detail/Catch2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Catch2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Catch2, load one of these modules using a module load command like:

                  module load Catch2/2.13.9-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Catch2/2.13.9-GCCcore-13.2.0 x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Cbc/", "title": "Cbc", "text": ""}, {"location": "available_software/detail/Cbc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cbc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cbc, load one of these modules using a module load command like:

                  module load Cbc/2.10.11-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cbc/2.10.11-foss-2023a x x x x x x Cbc/2.10.5-foss-2022b x x x x x x"}, {"location": "available_software/detail/CellBender/", "title": "CellBender", "text": ""}, {"location": "available_software/detail/CellBender/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellBender installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellBender, load one of these modules using a module load command like:

                  module load CellBender/0.3.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellBender/0.3.1-foss-2022a-CUDA-11.7.0 x - x - x - CellBender/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellOracle/", "title": "CellOracle", "text": ""}, {"location": "available_software/detail/CellOracle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellOracle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellOracle, load one of these modules using a module load command like:

                  module load CellOracle/0.12.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellOracle/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellProfiler/", "title": "CellProfiler", "text": ""}, {"location": "available_software/detail/CellProfiler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellProfiler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellProfiler, load one of these modules using a module load command like:

                  module load CellProfiler/4.2.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellProfiler/4.2.4-foss-2021a x x x - x x"}, {"location": "available_software/detail/CellRanger-ATAC/", "title": "CellRanger-ATAC", "text": ""}, {"location": "available_software/detail/CellRanger-ATAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRanger-ATAC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellRanger-ATAC, load one of these modules using a module load command like:

                  module load CellRanger-ATAC/2.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRanger-ATAC/2.1.0 x x x x x x CellRanger-ATAC/2.0.0 - x x - x -"}, {"location": "available_software/detail/CellRanger/", "title": "CellRanger", "text": ""}, {"location": "available_software/detail/CellRanger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRanger installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellRanger, load one of these modules using a module load command like:

                  module load CellRanger/7.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRanger/7.0.0 - x x x x x CellRanger/6.1.2 - x x - x x CellRanger/6.0.1 - x x - x - CellRanger/4.0.0 - - x - x - CellRanger/3.1.0 - - x - x -"}, {"location": "available_software/detail/CellRank/", "title": "CellRank", "text": ""}, {"location": "available_software/detail/CellRank/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRank installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellRank, load one of these modules using a module load command like:

                  module load CellRank/2.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRank/2.0.2-foss-2022a x x x x x x CellRank/1.4.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/CellTypist/", "title": "CellTypist", "text": ""}, {"location": "available_software/detail/CellTypist/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellTypist installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellTypist, load one of these modules using a module load command like:

                  module load CellTypist/1.6.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellTypist/1.6.2-foss-2023a x x x x x x CellTypist/1.0.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Cellpose/", "title": "Cellpose", "text": ""}, {"location": "available_software/detail/Cellpose/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cellpose installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cellpose, load one of these modules using a module load command like:

                  module load Cellpose/2.2.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cellpose/2.2.2-foss-2022a-CUDA-11.7.0 x - - - x - Cellpose/2.2.2-foss-2022a x - x x x x"}, {"location": "available_software/detail/Centrifuge/", "title": "Centrifuge", "text": ""}, {"location": "available_software/detail/Centrifuge/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Centrifuge installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Centrifuge, load one of these modules using a module load command like:

                  module load Centrifuge/1.0.4-beta-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Centrifuge/1.0.4-beta-gompi-2020a - x x - x x"}, {"location": "available_software/detail/Cereal/", "title": "Cereal", "text": ""}, {"location": "available_software/detail/Cereal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cereal installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cereal, load one of these modules using a module load command like:

                  module load Cereal/1.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cereal/1.3.0 x x x x x x"}, {"location": "available_software/detail/Ceres-Solver/", "title": "Ceres-Solver", "text": ""}, {"location": "available_software/detail/Ceres-Solver/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Ceres-Solver installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ceres-Solver, load one of these modules using a module load command like:

                  module load Ceres-Solver/2.2.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ceres-Solver/2.2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Cgl/", "title": "Cgl", "text": ""}, {"location": "available_software/detail/Cgl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cgl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cgl, load one of these modules using a module load command like:

                  module load Cgl/0.60.8-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cgl/0.60.8-foss-2023a x x x x x x Cgl/0.60.7-foss-2022b x x x x x x"}, {"location": "available_software/detail/CharLS/", "title": "CharLS", "text": ""}, {"location": "available_software/detail/CharLS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CharLS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CharLS, load one of these modules using a module load command like:

                  module load CharLS/2.4.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CharLS/2.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CheMPS2/", "title": "CheMPS2", "text": ""}, {"location": "available_software/detail/CheMPS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CheMPS2, load one of these modules using a module load command like:

                  module load CheMPS2/1.8.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CheMPS2/1.8.12-foss-2022b x x x x x x CheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/Check/", "title": "Check", "text": ""}, {"location": "available_software/detail/Check/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Check installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Check, load one of these modules using a module load command like:

                  module load Check/0.15.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Check/0.15.2-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/CheckM/", "title": "CheckM", "text": ""}, {"location": "available_software/detail/CheckM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CheckM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CheckM, load one of these modules using a module load command like:

                  module load CheckM/1.1.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CheckM/1.1.3-intel-2020a-Python-3.8.2 - x x - x x CheckM/1.1.3-foss-2021b x x x - x x CheckM/1.1.2-intel-2019b-Python-3.7.4 - x x - x x CheckM/1.1.2-foss-2019b-Python-3.7.4 - x x - x x CheckM/1.0.18-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/Chimera/", "title": "Chimera", "text": ""}, {"location": "available_software/detail/Chimera/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Chimera installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Chimera, load one of these modules using a module load command like:

                  module load Chimera/1.16-linux_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Chimera/1.16-linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Circlator/", "title": "Circlator", "text": ""}, {"location": "available_software/detail/Circlator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Circlator installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Circlator, load one of these modules using a module load command like:

                  module load Circlator/1.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Circlator/1.5.5-foss-2023a x x x x x x"}, {"location": "available_software/detail/Circuitscape/", "title": "Circuitscape", "text": ""}, {"location": "available_software/detail/Circuitscape/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Circuitscape installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Circuitscape, load one of these modules using a module load command like:

                  module load Circuitscape/5.12.3-Julia-1.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Circuitscape/5.12.3-Julia-1.7.2 x x x x x x"}, {"location": "available_software/detail/Clair3/", "title": "Clair3", "text": ""}, {"location": "available_software/detail/Clair3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clair3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clair3, load one of these modules using a module load command like:

                  module load Clair3/1.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clair3/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/Clang/", "title": "Clang", "text": ""}, {"location": "available_software/detail/Clang/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clang installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clang, load one of these modules using a module load command like:

                  module load Clang/16.0.6-GCCcore-12.3.0\n
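                  A minimal sketch of compiling a test program with the loaded Clang module (the file name hello.c is just an example):

                  module load Clang/16.0.6-GCCcore-12.3.0
                  # write a trivial C program and compile it with the Clang provided by the module
                  printf '#include <stdio.h>\nint main(void) { puts("hello from clang"); return 0; }\n' > hello.c
                  clang -O2 -o hello hello.c
                  ./hello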

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clang/16.0.6-GCCcore-12.3.0 x x x x x x Clang/15.0.5-GCCcore-11.3.0 x x x x x x Clang/13.0.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - Clang/13.0.1-GCCcore-11.3.0 x x x x x x Clang/12.0.1-GCCcore-11.2.0 x x x x x x Clang/12.0.1-GCCcore-10.3.0 x x x x x x Clang/11.0.1-gcccuda-2020b - - - - x - Clang/11.0.1-GCCcore-10.2.0 - x x x x x Clang/10.0.0-GCCcore-9.3.0 - x x - x x Clang/9.0.1-GCCcore-8.3.0 - x x - x x Clang/9.0.1-GCC-8.3.0-CUDA-10.1.243 x - - - x -"}, {"location": "available_software/detail/Clp/", "title": "Clp", "text": ""}, {"location": "available_software/detail/Clp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clp, load one of these modules using a module load command like:

                  module load Clp/1.17.9-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clp/1.17.9-foss-2023a x x x x x x Clp/1.17.8-foss-2022b x x x x x x Clp/1.17.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/Clustal-Omega/", "title": "Clustal-Omega", "text": ""}, {"location": "available_software/detail/Clustal-Omega/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clustal-Omega installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clustal-Omega, load one of these modules using a module load command like:

                  module load Clustal-Omega/1.2.4-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clustal-Omega/1.2.4-intel-compilers-2021.2.0 - x x - x x Clustal-Omega/1.2.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/ClustalW2/", "title": "ClustalW2", "text": ""}, {"location": "available_software/detail/ClustalW2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ClustalW2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ClustalW2, load one of these modules using a module load command like:

                  module load ClustalW2/2.1-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ClustalW2/2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/CmdStanR/", "title": "CmdStanR", "text": ""}, {"location": "available_software/detail/CmdStanR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CmdStanR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CmdStanR, load one of these modules using a module load command like:

                  module load CmdStanR/0.7.1-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CmdStanR/0.7.1-foss-2023a-R-4.3.2 x x x x x x CmdStanR/0.5.2-foss-2022a-R-4.2.1 x x x x x x CmdStanR/0.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/CodAn/", "title": "CodAn", "text": ""}, {"location": "available_software/detail/CodAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CodAn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CodAn, load one of these modules using a module load command like:

                  module load CodAn/1.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CodAn/1.2-foss-2021b x x x x x x"}, {"location": "available_software/detail/CoinUtils/", "title": "CoinUtils", "text": ""}, {"location": "available_software/detail/CoinUtils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CoinUtils, load one of these modules using a module load command like:

                  module load CoinUtils/2.11.10-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CoinUtils/2.11.10-GCC-12.3.0 x x x x x x CoinUtils/2.11.9-GCC-12.2.0 x x x x x x CoinUtils/2.11.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/ColabFold/", "title": "ColabFold", "text": ""}, {"location": "available_software/detail/ColabFold/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ColabFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ColabFold, load one of these modules using a module load command like:

                  module load ColabFold/1.5.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ColabFold/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - ColabFold/1.5.2-foss-2022a - - x - x -"}, {"location": "available_software/detail/CompareM/", "title": "CompareM", "text": ""}, {"location": "available_software/detail/CompareM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CompareM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CompareM, load one of these modules using a module load command like:

                  module load CompareM/0.1.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CompareM/0.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Compress-Raw-Zlib/", "title": "Compress-Raw-Zlib", "text": ""}, {"location": "available_software/detail/Compress-Raw-Zlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Compress-Raw-Zlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Compress-Raw-Zlib, load one of these modules using a module load command like:

                  module load Compress-Raw-Zlib/2.202-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Compress-Raw-Zlib/2.202-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Concorde/", "title": "Concorde", "text": ""}, {"location": "available_software/detail/Concorde/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Concorde installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Concorde, load one of these modules using a module load command like:

                  module load Concorde/20031219-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Concorde/20031219-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/CoordgenLibs/", "title": "CoordgenLibs", "text": ""}, {"location": "available_software/detail/CoordgenLibs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CoordgenLibs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CoordgenLibs, load one of these modules using a module load command like:

                  module load CoordgenLibs/3.0.1-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CoordgenLibs/3.0.1-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/CopyKAT/", "title": "CopyKAT", "text": ""}, {"location": "available_software/detail/CopyKAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CopyKAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CopyKAT, load one of these modules using a module load command like:

                  module load CopyKAT/1.1.0-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CopyKAT/1.1.0-foss-2022b-R-4.2.2 x x x x x x CopyKAT/1.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Coreutils/", "title": "Coreutils", "text": ""}, {"location": "available_software/detail/Coreutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Coreutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Coreutils, load one of these modules using a module load command like:

                  module load Coreutils/8.32-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Coreutils/8.32-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CppUnit/", "title": "CppUnit", "text": ""}, {"location": "available_software/detail/CppUnit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CppUnit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CppUnit, load one of these modules using a module load command like:

                  module load CppUnit/1.15.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CppUnit/1.15.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/CuPy/", "title": "CuPy", "text": ""}, {"location": "available_software/detail/CuPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CuPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CuPy, load one of these modules using a module load command like:

                  module load CuPy/8.5.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CuPy/8.5.0-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Cufflinks/", "title": "Cufflinks", "text": ""}, {"location": "available_software/detail/Cufflinks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cufflinks installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cufflinks, load one of these modules using a module load command like:

                  module load Cufflinks/20190706-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cufflinks/20190706-GCC-11.2.0 x x x x x x Cufflinks/20190706-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Cython/", "title": "Cython", "text": ""}, {"location": "available_software/detail/Cython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cython, load one of these modules using a module load command like:

                  module load Cython/3.0.8-GCCcore-12.2.0\n
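                  A minimal sketch of compiling a small Cython extension in place (file names are examples; if the python command is not available after loading Cython, also load a matching Python module from the same toolchain):

                  module load Cython/3.0.8-GCCcore-12.2.0
                  # write a tiny Cython source file and compile it in place
                  printf 'def greet():\n    return "hello from cython"\n' > hello.pyx
                  cythonize -i hello.pyx
                  python -c "import hello; print(hello.greet())"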

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cython/3.0.8-GCCcore-12.2.0 x x x x x x Cython/3.0.7-GCCcore-12.3.0 x x x x x x Cython/0.29.33-GCCcore-11.3.0 x x x x x x Cython/0.29.22-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/DALI/", "title": "DALI", "text": ""}, {"location": "available_software/detail/DALI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DALI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DALI, load one of these modules using a module load command like:

                  module load DALI/2.1.2-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DALI/2.1.2-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/DAS_Tool/", "title": "DAS_Tool", "text": ""}, {"location": "available_software/detail/DAS_Tool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DAS_Tool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DAS_Tool, load one of these modules using a module load command like:

                  module load DAS_Tool/1.1.1-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DAS_Tool/1.1.1-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/DB/", "title": "DB", "text": ""}, {"location": "available_software/detail/DB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DB, load one of these modules using a module load command like:

                  module load DB/18.1.40-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DB/18.1.40-GCCcore-12.2.0 x x x x x x DB/18.1.40-GCCcore-11.3.0 x x x x x x DB/18.1.40-GCCcore-11.2.0 x x x x x x DB/18.1.40-GCCcore-10.3.0 x x x x x x DB/18.1.40-GCCcore-10.2.0 x x x x x x DB/18.1.32-GCCcore-9.3.0 x x x x x x DB/18.1.32-GCCcore-8.3.0 x x x x x x DB/18.1.32-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/DBD-mysql/", "title": "DBD-mysql", "text": ""}, {"location": "available_software/detail/DBD-mysql/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBD-mysql installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DBD-mysql, load one of these modules using a module load command like:

                  module load DBD-mysql/4.050-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBD-mysql/4.050-GCC-11.3.0 x x x x x x DBD-mysql/4.050-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/DBG2OLC/", "title": "DBG2OLC", "text": ""}, {"location": "available_software/detail/DBG2OLC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBG2OLC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DBG2OLC, load one of these modules using a module load command like:

                  module load DBG2OLC/20200724-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBG2OLC/20200724-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/DB_File/", "title": "DB_File", "text": ""}, {"location": "available_software/detail/DB_File/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DB_File installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DB_File, load one of these modules using a module load command like:

                  module load DB_File/1.858-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DB_File/1.858-GCCcore-11.3.0 x x x x x x DB_File/1.857-GCCcore-11.2.0 x x x x x x DB_File/1.855-GCCcore-10.2.0 - x x x x x DB_File/1.835-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/DBus/", "title": "DBus", "text": ""}, {"location": "available_software/detail/DBus/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBus installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DBus, load one of these modules using a module load command like:

                  module load DBus/1.15.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBus/1.15.4-GCCcore-12.3.0 x x x x x x DBus/1.15.2-GCCcore-12.2.0 x x x x x x DBus/1.14.0-GCCcore-11.3.0 x x x x x x DBus/1.13.18-GCCcore-11.2.0 x x x x x x DBus/1.13.18-GCCcore-10.3.0 x x x x x x DBus/1.13.18-GCCcore-10.2.0 x x x x x x DBus/1.13.12-GCCcore-9.3.0 - x x - x x DBus/1.13.12-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/DETONATE/", "title": "DETONATE", "text": ""}, {"location": "available_software/detail/DETONATE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DETONATE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DETONATE, load one of these modules using a module load command like:

                  module load DETONATE/1.11-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DETONATE/1.11-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/DFT-D3/", "title": "DFT-D3", "text": ""}, {"location": "available_software/detail/DFT-D3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DFT-D3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DFT-D3, load one of these modules using a module load command like:

                  module load DFT-D3/3.2.0-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DFT-D3/3.2.0-intel-compilers-2021.2.0 - x x - x x DFT-D3/3.2.0-iccifort-2020.4.304 - x x x x x"}, {"location": "available_software/detail/DIA-NN/", "title": "DIA-NN", "text": ""}, {"location": "available_software/detail/DIA-NN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIA-NN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIA-NN, load one of these modules using a module load command like:

                  module load DIA-NN/1.8.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIA-NN/1.8.1 x x x - x x"}, {"location": "available_software/detail/DIALOGUE/", "title": "DIALOGUE", "text": ""}, {"location": "available_software/detail/DIALOGUE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIALOGUE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIALOGUE, load one of these modules using a module load command like:

                  module load DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0 x x x x x x"}, {"location": "available_software/detail/DIAMOND/", "title": "DIAMOND", "text": ""}, {"location": "available_software/detail/DIAMOND/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIAMOND installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIAMOND, load one of these modules using a module load command like:

                  module load DIAMOND/2.1.8-GCC-12.3.0\n
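                  A minimal sketch of a typical DIAMOND protein search with the loaded module (the FASTA file names are placeholders):

                  module load DIAMOND/2.1.8-GCC-12.3.0
                  # build a DIAMOND database from a protein FASTA file
                  diamond makedb --in reference_proteins.fasta --db refdb
                  # align query proteins against it, writing a BLAST-style tabular report
                  diamond blastp --db refdb --query queries.fasta --out hits.tsv --outfmt 6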

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIAMOND/2.1.8-GCC-12.3.0 x x x x x x DIAMOND/2.1.8-GCC-12.2.0 x x x x x x DIAMOND/2.1.0-GCC-11.3.0 x x x x x x DIAMOND/2.0.13-GCC-11.2.0 x x x x x x DIAMOND/2.0.11-GCC-10.3.0 - x x - x x DIAMOND/2.0.7-GCC-10.2.0 x x x x x x DIAMOND/2.0.6-GCC-10.2.0 - x - - - - DIAMOND/0.9.30-iccifort-2019.5.281 - x x - x x DIAMOND/0.9.30-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/DIANA/", "title": "DIANA", "text": ""}, {"location": "available_software/detail/DIANA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIANA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIANA, load one of these modules using a module load command like:

                  module load DIANA/10.5\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIANA/10.5 - x x - x - DIANA/10.4 - - x - x -"}, {"location": "available_software/detail/DIRAC/", "title": "DIRAC", "text": ""}, {"location": "available_software/detail/DIRAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIRAC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIRAC, load one of these modules using a module load command like:

                  module load DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64 - x x - x - DIRAC/19.0-intel-2020a-Python-2.7.18-int64 - x x - x x"}, {"location": "available_software/detail/DL_POLY_Classic/", "title": "DL_POLY_Classic", "text": ""}, {"location": "available_software/detail/DL_POLY_Classic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DL_POLY_Classic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DL_POLY_Classic, load one of these modules using a module load command like:

                  module load DL_POLY_Classic/1.10-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DL_POLY_Classic/1.10-intel-2019b - x x - x x DL_POLY_Classic/1.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/DMCfun/", "title": "DMCfun", "text": ""}, {"location": "available_software/detail/DMCfun/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DMCfun installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DMCfun, load one of these modules using a module load command like:

                  module load DMCfun/1.3.0-foss-2019b-R-3.6.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DMCfun/1.3.0-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/DOLFIN/", "title": "DOLFIN", "text": ""}, {"location": "available_software/detail/DOLFIN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DOLFIN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DOLFIN, load one of these modules using a module load command like:

                  module load DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/DRAGMAP/", "title": "DRAGMAP", "text": ""}, {"location": "available_software/detail/DRAGMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DRAGMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DRAGMAP, load one of these modules using a module load command like:

                  module load DRAGMAP/1.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DRAGMAP/1.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/DROP/", "title": "DROP", "text": ""}, {"location": "available_software/detail/DROP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DROP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DROP, load one of these modules using a module load command like:

                  module load DROP/1.1.0-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DROP/1.1.0-foss-2020b-R-4.0.3 - x x x x x DROP/1.0.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/DUBStepR/", "title": "DUBStepR", "text": ""}, {"location": "available_software/detail/DUBStepR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DUBStepR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DUBStepR, load one of these modules using a module load command like:

                  module load DUBStepR/1.2.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DUBStepR/1.2.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Dakota/", "title": "Dakota", "text": ""}, {"location": "available_software/detail/Dakota/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dakota installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Dakota, load one of these modules using a module load command like:

                  module load Dakota/6.16.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dakota/6.16.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Dalton/", "title": "Dalton", "text": ""}, {"location": "available_software/detail/Dalton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dalton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Dalton, load one of these modules using a module load command like:

                  module load Dalton/2020.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dalton/2020.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/DeepLoc/", "title": "DeepLoc", "text": ""}, {"location": "available_software/detail/DeepLoc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DeepLoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DeepLoc, load one of these modules using a module load command like:

                  module load DeepLoc/2.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DeepLoc/2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Delly/", "title": "Delly", "text": ""}, {"location": "available_software/detail/Delly/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Delly installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Delly, load one of these modules using a module load command like:

                  module load Delly/0.8.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Delly/0.8.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/DendroPy/", "title": "DendroPy", "text": ""}, {"location": "available_software/detail/DendroPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DendroPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DendroPy, load one of these modules using a module load command like:

                  module load DendroPy/4.6.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.2.0 x x x - x x DendroPy/4.5.2-GCCcore-10.2.0-Python-2.7.18 - x x x x x DendroPy/4.5.2-GCCcore-10.2.0 - x x x x x DendroPy/4.4.0-GCCcore-9.3.0 - x x - x x DendroPy/4.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/DensPart/", "title": "DensPart", "text": ""}, {"location": "available_software/detail/DensPart/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DensPart installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DensPart, load one of these modules using a module load command like:

                  module load DensPart/20220603-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DensPart/20220603-intel-2022a x x x x x x"}, {"location": "available_software/detail/Deprecated/", "title": "Deprecated", "text": ""}, {"location": "available_software/detail/Deprecated/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Deprecated installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Deprecated, load one of these modules using a module load command like:

                  module load Deprecated/1.2.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Deprecated/1.2.13-foss-2022a x x x x x x Deprecated/1.2.13-foss-2021a x x x x x x"}, {"location": "available_software/detail/DiCE-ML/", "title": "DiCE-ML", "text": ""}, {"location": "available_software/detail/DiCE-ML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DiCE-ML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DiCE-ML, load one of these modules using a module load command like:

                  module load DiCE-ML/0.9-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DiCE-ML/0.9-foss-2022a x x x x x x"}, {"location": "available_software/detail/Dice/", "title": "Dice", "text": ""}, {"location": "available_software/detail/Dice/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dice installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Dice, load one of these modules using a module load command like:

                  module load Dice/20240101-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dice/20240101-foss-2022b x x x x x x Dice/20221025-foss-2022a - x x x x x"}, {"location": "available_software/detail/DoubletFinder/", "title": "DoubletFinder", "text": ""}, {"location": "available_software/detail/DoubletFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DoubletFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DoubletFinder, load one of these modules using a module load command like:

                  module load DoubletFinder/2.0.3-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DoubletFinder/2.0.3-foss-2020a-R-4.0.0 - - x - x - DoubletFinder/2.0.3-20230819-foss-2022b-R-4.2.2 x x x x x x DoubletFinder/2.0.3-20230131-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Doxygen/", "title": "Doxygen", "text": ""}, {"location": "available_software/detail/Doxygen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Doxygen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Doxygen, load one of these modules using a module load command like:

                  module load Doxygen/1.9.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x Doxygen/1.9.4-GCCcore-11.3.0 x x x x x x Doxygen/1.9.1-GCCcore-11.2.0 x x x x x x Doxygen/1.9.1-GCCcore-10.3.0 x x x x x x Doxygen/1.8.20-GCCcore-10.2.0 x x x x x x Doxygen/1.8.17-GCCcore-9.3.0 x x x x x x Doxygen/1.8.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Dsuite/", "title": "Dsuite", "text": ""}, {"location": "available_software/detail/Dsuite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dsuite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Dsuite, load one of these modules using a module load command like:

                  module load Dsuite/20210718-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dsuite/20210718-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/DualSPHysics/", "title": "DualSPHysics", "text": ""}, {"location": "available_software/detail/DualSPHysics/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DualSPHysics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DualSPHysics, load one of these modules using a module load command like:

                  module load DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1 x - - - x -"}, {"location": "available_software/detail/DyMat/", "title": "DyMat", "text": ""}, {"location": "available_software/detail/DyMat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DyMat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using DyMat, load one of these modules using a module load command like:

                  module load DyMat/0.7-foss-2021b-2020-12-12\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DyMat/0.7-foss-2021b-2020-12-12 x x x - x x"}, {"location": "available_software/detail/EDirect/", "title": "EDirect", "text": ""}, {"location": "available_software/detail/EDirect/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EDirect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using EDirect, load one of these modules using a module load command like:

                  module load EDirect/20.5.20231006-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EDirect/20.5.20231006-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ELPA/", "title": "ELPA", "text": ""}, {"location": "available_software/detail/ELPA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ELPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ELPA, load one of these modules using a module load command like:

                  module load ELPA/2021.05.001-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ELPA/2021.05.001-intel-2021b x x x - x x ELPA/2021.05.001-intel-2021a - x x - x x ELPA/2021.05.001-foss-2021b x x x - x x ELPA/2020.11.001-intel-2020b - x x x x x ELPA/2019.11.001-intel-2019b - x x - x x ELPA/2019.11.001-foss-2019b - x x - x x"}, {"location": "available_software/detail/EMBOSS/", "title": "EMBOSS", "text": ""}, {"location": "available_software/detail/EMBOSS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EMBOSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using EMBOSS, load one of these modules using a module load command like:

                  module load EMBOSS/6.6.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EMBOSS/6.6.0-foss-2021b x x x - x x EMBOSS/6.6.0-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ESM-2/", "title": "ESM-2", "text": ""}, {"location": "available_software/detail/ESM-2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESM-2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ESM-2, load one of these modules using a module load command like:

                  module load ESM-2/2.0.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESM-2/2.0.0-foss-2022b x x x x x x ESM-2/2.0.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/ESMF/", "title": "ESMF", "text": ""}, {"location": "available_software/detail/ESMF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESMF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ESMF, load one of these modules using a module load command like:

                  module load ESMF/8.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESMF/8.2.0-foss-2021b x x x - x x ESMF/8.1.1-foss-2021a - x x - x x ESMF/8.0.1-intel-2020b - x x x x x ESMF/8.0.1-foss-2020a - x x - x x ESMF/8.0.0-intel-2019b - x x - x x"}, {"location": "available_software/detail/ESMPy/", "title": "ESMPy", "text": ""}, {"location": "available_software/detail/ESMPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ESMPy, load one of these modules using a module load command like:

                  module load ESMPy/8.0.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESMPy/8.0.1-intel-2020b - x x - x x ESMPy/8.0.1-foss-2020a-Python-3.8.2 - x x - x x ESMPy/8.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ETE/", "title": "ETE", "text": ""}, {"location": "available_software/detail/ETE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ETE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ETE, load one of these modules using a module load command like:

                  module load ETE/3.1.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ETE/3.1.3-foss-2022b x x x x x x ETE/3.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/EUKulele/", "title": "EUKulele", "text": ""}, {"location": "available_software/detail/EUKulele/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EUKulele installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using EUKulele, load one of these modules using a module load command like:

                  module load EUKulele/2.0.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EUKulele/2.0.6-foss-2022a x x x x x x EUKulele/1.0.4-foss-2020b - x x - x x"}, {"location": "available_software/detail/EasyBuild/", "title": "EasyBuild", "text": ""}, {"location": "available_software/detail/EasyBuild/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using EasyBuild, load one of these modules using a module load command like:

                  module load EasyBuild/4.9.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

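                  A minimal sketch of loading and verifying one of the EasyBuild modules listed here in an interactive shell, assuming an Lmod-style module command as used in the examples above (the version is the one from the example command):

                  module load EasyBuild/4.9.0
                  module list     # lists the currently loaded modules, which should now include EasyBuild/4.9.0
                  eb --version    # prints the EasyBuild version, confirming the eb command is available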
                  accelgor doduo donphan gallade joltik skitty EasyBuild/4.9.0 x x x x x x EasyBuild/4.8.2 x x x x x x EasyBuild/4.8.1 x x x x x x EasyBuild/4.8.0 x x x x x x EasyBuild/4.7.1 x x x x x x EasyBuild/4.7.0 x x x x x x EasyBuild/4.6.2 x x x x x x EasyBuild/4.6.1 x x x x x x EasyBuild/4.6.0 x x x x x x EasyBuild/4.5.5 x x x x x x EasyBuild/4.5.4 x x x x x x EasyBuild/4.5.3 x x x x x x EasyBuild/4.5.2 x x x x x x EasyBuild/4.5.1 x x x x x x EasyBuild/4.5.0 x x x x x x EasyBuild/4.4.2 x x x x x x EasyBuild/4.4.1 x x x x x x EasyBuild/4.4.0 x x x x x x EasyBuild/4.3.4 x x x x x x EasyBuild/4.3.3 x x x x x x EasyBuild/4.3.2 x x x x x x EasyBuild/4.3.1 x x x x x x EasyBuild/4.3.0 x x x x x x EasyBuild/4.2.2 x x x x x x EasyBuild/4.2.1 x x x x x x EasyBuild/4.2.0 x x x x x x"}, {"location": "available_software/detail/Eigen/", "title": "Eigen", "text": ""}, {"location": "available_software/detail/Eigen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Eigen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Eigen, load one of these modules using a module load command like:

                  module load Eigen/3.4.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Eigen/3.4.0-GCCcore-13.2.0 x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x Eigen/3.4.0-GCCcore-11.3.0 x x x x x x Eigen/3.4.0-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-10.3.0 x x x x x x Eigen/3.3.9-GCCcore-10.2.0 - - x x x x Eigen/3.3.8-GCCcore-10.2.0 x x x x x x Eigen/3.3.7-GCCcore-9.3.0 x x x x x x Eigen/3.3.7 x x x x x x"}, {"location": "available_software/detail/Elk/", "title": "Elk", "text": ""}, {"location": "available_software/detail/Elk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Elk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Elk, load one of these modules using a module load command like:

                  module load Elk/7.0.12-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Elk/7.0.12-foss-2020b - x x x x x"}, {"location": "available_software/detail/EpiSCORE/", "title": "EpiSCORE", "text": ""}, {"location": "available_software/detail/EpiSCORE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EpiSCORE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using EpiSCORE, load one of these modules using a module load command like:

                  module load EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Excel-Writer-XLSX/", "title": "Excel-Writer-XLSX", "text": ""}, {"location": "available_software/detail/Excel-Writer-XLSX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Excel-Writer-XLSX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Excel-Writer-XLSX, load one of these modules using a module load command like:

                  module load Excel-Writer-XLSX/1.09-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Excel-Writer-XLSX/1.09-foss-2020b - x x x x x"}, {"location": "available_software/detail/Exonerate/", "title": "Exonerate", "text": ""}, {"location": "available_software/detail/Exonerate/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Exonerate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Exonerate, load one of these modules using a module load command like:

                  module load Exonerate/2.4.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Exonerate/2.4.0-iccifort-2019.5.281 - x x - x x Exonerate/2.4.0-GCC-12.2.0 x x x x x x Exonerate/2.4.0-GCC-11.2.0 x x x x x x Exonerate/2.4.0-GCC-10.2.0 x x x - x x Exonerate/2.4.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ExtremeLy/", "title": "ExtremeLy", "text": ""}, {"location": "available_software/detail/ExtremeLy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ExtremeLy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ExtremeLy, load one of these modules using a module load command like:

                  module load ExtremeLy/2.3.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ExtremeLy/2.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/FALCON/", "title": "FALCON", "text": ""}, {"location": "available_software/detail/FALCON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FALCON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FALCON, load one of these modules using a module load command like:

                  module load FALCON/1.8.8-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FALCON/1.8.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/FASTA/", "title": "FASTA", "text": ""}, {"location": "available_software/detail/FASTA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FASTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FASTA, load one of these modules using a module load command like:

                  module load FASTA/36.3.8i-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FASTA/36.3.8i-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/FASTX-Toolkit/", "title": "FASTX-Toolkit", "text": ""}, {"location": "available_software/detail/FASTX-Toolkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FASTX-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FASTX-Toolkit, load one of these modules using a module load command like:

                  module load FASTX-Toolkit/0.0.14-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FASTX-Toolkit/0.0.14-GCC-11.3.0 x x x x x x FASTX-Toolkit/0.0.14-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/FDS/", "title": "FDS", "text": ""}, {"location": "available_software/detail/FDS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FDS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FDS, load one of these modules using a module load command like:

                  module load FDS/6.8.0-intel-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FDS/6.8.0-intel-2022b x x x x x x FDS/6.7.9-intel-2022a x x x - x x FDS/6.7.7-intel-2021b x x x - x x FDS/6.7.6-intel-2020b - x x x x x FDS/6.7.5-intel-2020b - - x - x - FDS/6.7.5-intel-2020a - x x - x x FDS/6.7.4-intel-2020a - x x - x x"}, {"location": "available_software/detail/FEniCS/", "title": "FEniCS", "text": ""}, {"location": "available_software/detail/FEniCS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FEniCS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FEniCS, load one of these modules using a module load command like:

                  module load FEniCS/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FEniCS/2019.1.0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FFAVES/", "title": "FFAVES", "text": ""}, {"location": "available_software/detail/FFAVES/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFAVES installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FFAVES, load one of these modules using a module load command like:

                  module load FFAVES/2022.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFAVES/2022.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/FFC/", "title": "FFC", "text": ""}, {"location": "available_software/detail/FFC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FFC, load one of these modules using a module load command like:

                  module load FFC/2019.1.0.post0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFC/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FFTW.MPI/", "title": "FFTW.MPI", "text": ""}, {"location": "available_software/detail/FFTW.MPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FFTW.MPI, load one of these modules using a module load command like:

                  module load FFTW.MPI/3.3.10-gompi-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFTW.MPI/3.3.10-gompi-2023b x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x FFTW.MPI/3.3.10-gompi-2022a x x x x x x"}, {"location": "available_software/detail/FFTW/", "title": "FFTW", "text": ""}, {"location": "available_software/detail/FFTW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FFTW, load one of these modules using a module load command like:

                  module load FFTW/3.3.10-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFTW/3.3.10-gompi-2021b x x x x x x FFTW/3.3.10-GCC-13.2.0 x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x FFTW/3.3.10-GCC-11.3.0 x x x x x x FFTW/3.3.9-intel-2021a - x x - x x FFTW/3.3.9-gompi-2021a x x x x x x FFTW/3.3.8-iomkl-2020a - x - - - - FFTW/3.3.8-intelcuda-2020b - - - - x - FFTW/3.3.8-intel-2020b - x x x x x FFTW/3.3.8-intel-2020a - x x - x x FFTW/3.3.8-intel-2019b - x x - x x FFTW/3.3.8-iimpi-2020b - x - - - - FFTW/3.3.8-gompic-2020b x - - - x - FFTW/3.3.8-gompi-2020b x x x x x x FFTW/3.3.8-gompi-2020a - x x - x x FFTW/3.3.8-gompi-2019b x x x - x x"}, {"location": "available_software/detail/FFmpeg/", "title": "FFmpeg", "text": ""}, {"location": "available_software/detail/FFmpeg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FFmpeg, load one of these modules using a module load command like:

                  module load FFmpeg/6.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFmpeg/6.0-GCCcore-12.3.0 x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x FFmpeg/4.4.2-GCCcore-11.3.0 x x x x x x FFmpeg/4.3.2-GCCcore-11.2.0 x x x x x x FFmpeg/4.3.2-GCCcore-10.3.0 x x x x x x FFmpeg/4.3.1-GCCcore-10.2.0 x x x x x x FFmpeg/4.2.2-GCCcore-9.3.0 - x x - x x FFmpeg/4.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FIAT/", "title": "FIAT", "text": ""}, {"location": "available_software/detail/FIAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FIAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FIAT, load one of these modules using a module load command like:

                  module load FIAT/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FIAT/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FIGARO/", "title": "FIGARO", "text": ""}, {"location": "available_software/detail/FIGARO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FIGARO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FIGARO, load one of these modules using a module load command like:

                  module load FIGARO/1.1.2-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FIGARO/1.1.2-intel-2020b - - x - x x"}, {"location": "available_software/detail/FLAC/", "title": "FLAC", "text": ""}, {"location": "available_software/detail/FLAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FLAC, load one of these modules using a module load command like:

                  module load FLAC/1.4.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLAC/1.4.2-GCCcore-12.3.0 x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x FLAC/1.3.4-GCCcore-11.3.0 x x x x x x FLAC/1.3.3-GCCcore-11.2.0 x x x x x x FLAC/1.3.3-GCCcore-10.3.0 x x x x x x FLAC/1.3.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/FLAIR/", "title": "FLAIR", "text": ""}, {"location": "available_software/detail/FLAIR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLAIR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FLAIR, load one of these modules using a module load command like:

                  module load FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4 - x x - x - FLAIR/1.5-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FLANN/", "title": "FLANN", "text": ""}, {"location": "available_software/detail/FLANN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLANN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FLANN, load one of these modules using a module load command like:

                  module load FLANN/1.9.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLANN/1.9.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/FLASH/", "title": "FLASH", "text": ""}, {"location": "available_software/detail/FLASH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FLASH, load one of these modules using a module load command like:

                  module load FLASH/2.2.00-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLASH/2.2.00-foss-2020b - x x x x x FLASH/2.2.00-GCC-11.2.0 x x x - x x FLASH/1.2.11-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/FLTK/", "title": "FLTK", "text": ""}, {"location": "available_software/detail/FLTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FLTK, load one of these modules using a module load command like:

                  module load FLTK/1.3.5-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLTK/1.3.5-GCCcore-10.2.0 - x x x x x FLTK/1.3.5-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/FLUENT/", "title": "FLUENT", "text": ""}, {"location": "available_software/detail/FLUENT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLUENT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FLUENT, load one of these modules using a module load command like:

                  module load FLUENT/2023R1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLUENT/2023R1 x x x x x x FLUENT/2022R1 - x x - x x FLUENT/2021R2 x x x x x x FLUENT/2019R3 - x x - x x"}, {"location": "available_software/detail/FMM3D/", "title": "FMM3D", "text": ""}, {"location": "available_software/detail/FMM3D/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FMM3D installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FMM3D, load one of these modules using a module load command like:

                  module load FMM3D/20211018-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FMM3D/20211018-foss-2020b - x x x x x"}, {"location": "available_software/detail/FMPy/", "title": "FMPy", "text": ""}, {"location": "available_software/detail/FMPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FMPy, load one of these modules using a module load command like:

                  module load FMPy/0.3.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FMPy/0.3.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/FSL/", "title": "FSL", "text": ""}, {"location": "available_software/detail/FSL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FSL, load one of these modules using a module load command like:

                  module load FSL/6.0.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FSL/6.0.7.2 x x x x x x FSL/6.0.5.1-foss-2021a - x x - x x FSL/6.0.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FabIO/", "title": "FabIO", "text": ""}, {"location": "available_software/detail/FabIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FabIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FabIO, load one of these modules using a module load command like:

                  module load FabIO/0.11.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FabIO/0.11.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Faiss/", "title": "Faiss", "text": ""}, {"location": "available_software/detail/Faiss/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Faiss installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Faiss, load one of these modules using a module load command like:

                  module load Faiss/1.7.2-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Faiss/1.7.2-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/FastANI/", "title": "FastANI", "text": ""}, {"location": "available_software/detail/FastANI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastANI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FastANI, load one of these modules using a module load command like:

                  module load FastANI/1.34-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastANI/1.34-GCC-12.3.0 x x x x x x FastANI/1.33-intel-compilers-2021.4.0 x x x - x x FastANI/1.33-iccifort-2020.4.304 - x x x x x FastANI/1.33-GCC-11.2.0 x x x - x x FastANI/1.33-GCC-10.2.0 - x x - x - FastANI/1.31-iccifort-2020.1.217 - x x - x x FastANI/1.3-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/FastME/", "title": "FastME", "text": ""}, {"location": "available_software/detail/FastME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FastME, load one of these modules using a module load command like:

                  module load FastME/2.1.6.3-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastME/2.1.6.3-GCC-12.3.0 x x x x x x FastME/2.1.6.1-iccifort-2019.5.281 - x x - x x FastME/2.1.6.1-GCC-10.2.0 - x x x x x FastME/2.1.6.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastQC/", "title": "FastQC", "text": ""}, {"location": "available_software/detail/FastQC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FastQC, load one of these modules using a module load command like:

                  module load FastQC/0.11.9-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastQC/0.11.9-Java-11 x x x x x x"}, {"location": "available_software/detail/FastQ_Screen/", "title": "FastQ_Screen", "text": ""}, {"location": "available_software/detail/FastQ_Screen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastQ_Screen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FastQ_Screen, load one of these modules using a module load command like:

                  module load FastQ_Screen/0.14.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastQ_Screen/0.14.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/FastTree/", "title": "FastTree", "text": ""}, {"location": "available_software/detail/FastTree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FastTree, load one of these modules using a module load command like:

                  module load FastTree/2.1.11-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastTree/2.1.11-GCCcore-12.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.2.0 x x x - x x FastTree/2.1.11-GCCcore-10.2.0 - x x x x x FastTree/2.1.11-GCCcore-9.3.0 - x x - x x FastTree/2.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastViromeExplorer/", "title": "FastViromeExplorer", "text": ""}, {"location": "available_software/detail/FastViromeExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastViromeExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FastViromeExplorer, load one of these modules using a module load command like:

                  module load FastViromeExplorer/20180422-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastViromeExplorer/20180422-foss-2019b - x x - x x"}, {"location": "available_software/detail/Fastaq/", "title": "Fastaq", "text": ""}, {"location": "available_software/detail/Fastaq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fastaq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Fastaq, load one of these modules using a module load command like:

                  module load Fastaq/3.17.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fastaq/3.17.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Fiji/", "title": "Fiji", "text": ""}, {"location": "available_software/detail/Fiji/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fiji installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Fiji, load one of these modules using a module load command like:

                  module load Fiji/2.9.0-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fiji/2.9.0-Java-1.8 x x x - x x"}, {"location": "available_software/detail/Filtlong/", "title": "Filtlong", "text": ""}, {"location": "available_software/detail/Filtlong/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Filtlong installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Filtlong, load one of these modules using a module load command like:

                  module load Filtlong/0.2.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Filtlong/0.2.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Fiona/", "title": "Fiona", "text": ""}, {"location": "available_software/detail/Fiona/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fiona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Fiona, load one of these modules using a module load command like:

                  module load Fiona/1.9.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fiona/1.9.5-foss-2023a x x x x x x Fiona/1.9.2-foss-2022b x x x x x x Fiona/1.8.21-foss-2022a x x x x x x Fiona/1.8.21-foss-2021b x x x x x x Fiona/1.8.20-intel-2020b - x x - x x Fiona/1.8.20-foss-2020b - x x x x x Fiona/1.8.16-foss-2020a-Python-3.8.2 - x x - x x Fiona/1.8.13-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Flask/", "title": "Flask", "text": ""}, {"location": "available_software/detail/Flask/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Flask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Flask, load one of these modules using a module load command like:

                  module load Flask/2.2.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Flask/2.2.2-GCCcore-11.3.0 x x x x x x Flask/2.0.2-GCCcore-11.2.0 x x x - x x Flask/1.1.4-GCCcore-10.3.0 x x x x x x Flask/1.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FlexiBLAS/", "title": "FlexiBLAS", "text": ""}, {"location": "available_software/detail/FlexiBLAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FlexiBLAS, load one of these modules using a module load command like:

                  module load FlexiBLAS/3.3.1-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x FlexiBLAS/3.2.0-GCC-11.3.0 x x x x x x FlexiBLAS/3.0.4-GCC-11.2.0 x x x x x x FlexiBLAS/3.0.4-GCC-10.3.0 x x x x x x"}, {"location": "available_software/detail/Flye/", "title": "Flye", "text": ""}, {"location": "available_software/detail/Flye/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Flye installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Flye, load one of these modules using a module load command like:

                  module load Flye/2.9.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Flye/2.9.2-GCC-11.3.0 x x x x x x Flye/2.9-intel-compilers-2021.2.0 - x x - x x Flye/2.9-GCC-10.3.0 x x x x x - Flye/2.8.3-iccifort-2020.4.304 - x x - x - Flye/2.8.3-GCC-10.2.0 - x x - x - Flye/2.8.1-intel-2020a-Python-3.8.2 - x x - x x Flye/2.7-intel-2019b-Python-3.7.4 - x - - - - Flye/2.6-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FragGeneScan/", "title": "FragGeneScan", "text": ""}, {"location": "available_software/detail/FragGeneScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FragGeneScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FragGeneScan, load one of these modules using a module load command like:

                  module load FragGeneScan/1.31-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FragGeneScan/1.31-GCCcore-11.3.0 x x x x x x FragGeneScan/1.31-GCCcore-11.2.0 x x x - x x FragGeneScan/1.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FreeBarcodes/", "title": "FreeBarcodes", "text": ""}, {"location": "available_software/detail/FreeBarcodes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FreeBarcodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FreeBarcodes, load one of these modules using a module load command like:

                  module load FreeBarcodes/3.0.a5-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeBarcodes/3.0.a5-foss-2021b x x x - x x"}, {"location": "available_software/detail/FreeFEM/", "title": "FreeFEM", "text": ""}, {"location": "available_software/detail/FreeFEM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FreeFEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FreeFEM, load one of these modules using a module load command like:

                  module load FreeFEM/4.5-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeFEM/4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FreeImage/", "title": "FreeImage", "text": ""}, {"location": "available_software/detail/FreeImage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FreeImage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FreeImage, load one of these modules using a module load command like:

                  module load FreeImage/3.18.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeImage/3.18.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/FreeSurfer/", "title": "FreeSurfer", "text": ""}, {"location": "available_software/detail/FreeSurfer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FreeSurfer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FreeSurfer, load one of these modules using a module load command like:

                  module load FreeSurfer/7.3.2-centos8_x86_64\n
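
                  As an illustration only: FreeSurfer needs a valid license file and a subjects directory before recon-all will do anything useful. The paths, subject name and input image below are placeholders; adjust them to your own data.

                  export FS_LICENSE=$VSC_DATA/freesurfer/license.txt   # placeholder path to your FreeSurfer license file
                  export SUBJECTS_DIR=$VSC_DATA/freesurfer/subjects    # placeholder directory where recon-all writes its results
                  recon-all -s sub01 -i T1.nii.gz -all                 # full cortical reconstruction for one subject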

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeSurfer/7.3.2-centos8_x86_64 x x x - x x FreeSurfer/7.2.0-centos8_x86_64 - x x - x x"}, {"location": "available_software/detail/FreeXL/", "title": "FreeXL", "text": ""}, {"location": "available_software/detail/FreeXL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FreeXL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FreeXL, load one of these modules using a module load command like:

                  module load FreeXL/1.0.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeXL/1.0.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/FriBidi/", "title": "FriBidi", "text": ""}, {"location": "available_software/detail/FriBidi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FriBidi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FriBidi, load one of these modules using a module load command like:

                  module load FriBidi/1.0.12-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x FriBidi/1.0.12-GCCcore-11.3.0 x x x x x x FriBidi/1.0.10-GCCcore-11.2.0 x x x x x x FriBidi/1.0.10-GCCcore-10.3.0 x x x x x x FriBidi/1.0.10-GCCcore-10.2.0 x x x x x x FriBidi/1.0.9-GCCcore-9.3.0 - x x - x x FriBidi/1.0.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FuSeq/", "title": "FuSeq", "text": ""}, {"location": "available_software/detail/FuSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FuSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FuSeq, load one of these modules using a module load command like:

                  module load FuSeq/1.1.2-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FuSeq/1.1.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/FusionCatcher/", "title": "FusionCatcher", "text": ""}, {"location": "available_software/detail/FusionCatcher/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FusionCatcher installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FusionCatcher, load one of these modules using a module load command like:

                  module load FusionCatcher/1.30-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FusionCatcher/1.30-foss-2019b-Python-2.7.16 - x x - x x FusionCatcher/1.20-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/GAPPadder/", "title": "GAPPadder", "text": ""}, {"location": "available_software/detail/GAPPadder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GAPPadder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GAPPadder, load one of these modules using a module load command like:

                  module load GAPPadder/20170601-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GAPPadder/20170601-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/GATB-Core/", "title": "GATB-Core", "text": ""}, {"location": "available_software/detail/GATB-Core/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GATB-Core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GATB-Core, load one of these modules using a module load command like:

                  module load GATB-Core/1.4.2-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATB-Core/1.4.2-gompi-2022a x x x x x x"}, {"location": "available_software/detail/GATE/", "title": "GATE", "text": ""}, {"location": "available_software/detail/GATE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GATE, load one of these modules using a module load command like:

                  module load GATE/9.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATE/9.2-foss-2022a x x x x x x GATE/9.2-foss-2021b x x x x x x GATE/9.1-foss-2021b x x x x x x GATE/9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GATK/", "title": "GATK", "text": ""}, {"location": "available_software/detail/GATK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GATK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GATK, load one of these modules using a module load command like:

                  module load GATK/4.4.0.0-GCCcore-12.3.0-Java-17\n
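
                  After loading the module, the gatk wrapper script is on your PATH. The reference, BAM and output names below are placeholders for a typical germline variant-calling step; this is a sketch, not a complete pipeline.

                  gatk --list                                                              # list all tools shipped with this GATK release
                  gatk HaplotypeCaller -R reference.fasta -I sample.bam -O sample.vcf.gz   # call variants on one sample (placeholder file names)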

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATK/4.4.0.0-GCCcore-12.3.0-Java-17 x x x x x x GATK/4.3.0.0-GCCcore-11.3.0-Java-11 x x x x x x GATK/4.2.0.0-GCCcore-10.2.0-Java-11 - x x x x x GATK/4.1.8.1-GCCcore-9.3.0-Java-1.8 - x x - x x"}, {"location": "available_software/detail/GBprocesS/", "title": "GBprocesS", "text": ""}, {"location": "available_software/detail/GBprocesS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GBprocesS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GBprocesS, load one of these modules using a module load command like:

                  module load GBprocesS/4.0.0.post1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GBprocesS/4.0.0.post1-foss-2022a x x x x x x GBprocesS/2.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GCC/", "title": "GCC", "text": ""}, {"location": "available_software/detail/GCC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GCC, load one of these modules using a module load command like:

                  module load GCC/13.2.0\n
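
                  Loading this module puts the gcc, g++ and gfortran compilers of that release on your PATH. A minimal compile-and-run check, with hello.c as a placeholder source file:

                  gcc --version              # confirm which GCC release is active
                  gcc -O2 -o hello hello.c   # compile a small C program with optimisation
                  ./hello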

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GCC/13.2.0 x x x x x x GCC/12.3.0 x x x x x x GCC/12.2.0 x x x x x x GCC/11.3.0 x x x x x x GCC/11.2.0 x x x x x x GCC/10.3.0 x x x x x x GCC/10.2.0 x x x x x x GCC/9.3.0 - x x x x x GCC/8.3.0 x x x x x x"}, {"location": "available_software/detail/GCCcore/", "title": "GCCcore", "text": ""}, {"location": "available_software/detail/GCCcore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GCCcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GCCcore, load one of these modules using a module load command like:

                  module load GCCcore/13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GCCcore/13.2.0 x x x x x x GCCcore/12.3.0 x x x x x x GCCcore/12.2.0 x x x x x x GCCcore/11.3.0 x x x x x x GCCcore/11.2.0 x x x x x x GCCcore/10.3.0 x x x x x x GCCcore/10.2.0 x x x x x x GCCcore/9.3.0 x x x x x x GCCcore/8.3.0 x x x x x x GCCcore/8.2.0 - x - - - -"}, {"location": "available_software/detail/GConf/", "title": "GConf", "text": ""}, {"location": "available_software/detail/GConf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GConf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GConf, load one of these modules using a module load command like:

                  module load GConf/3.2.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GConf/3.2.6-GCCcore-11.2.0 x x x x x x GConf/3.2.6-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GDAL/", "title": "GDAL", "text": ""}, {"location": "available_software/detail/GDAL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GDAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDAL, load one of these modules using a module load command like:

                  module load GDAL/3.7.1-foss-2023a\n
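
                  With the module loaded, the usual GDAL/OGR command-line utilities become available. The file names below are placeholders:

                  gdalinfo --version                               # check which GDAL release is active
                  gdalinfo input.tif                               # inspect a raster dataset
                  ogr2ogr -f GeoJSON parcels.geojson parcels.shp   # convert a vector dataset to GeoJSON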

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDAL/3.7.1-foss-2023a x x x x x x GDAL/3.6.2-foss-2022b x x x x x x GDAL/3.5.0-foss-2022a x x x x x x GDAL/3.3.2-foss-2021b x x x x x x GDAL/3.3.0-foss-2021a x x x x x x GDAL/3.2.1-intel-2020b - x x - x x GDAL/3.2.1-fosscuda-2020b - - - - x - GDAL/3.2.1-foss-2020b - x x x x x GDAL/3.0.4-foss-2020a-Python-3.8.2 - x x - x x GDAL/3.0.2-intel-2019b-Python-3.7.4 - - x - x x GDAL/3.0.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDB/", "title": "GDB", "text": ""}, {"location": "available_software/detail/GDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDB, load one of these modules using a module load command like:

                  module load GDB/9.1-GCCcore-8.3.0-Python-3.7.4\n
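
                  A short debugging session as a sketch; myprog.c is a placeholder and should be compiled with debug symbols first:

                  gcc -g -O0 -o myprog myprog.c   # build with debug info and without optimisation
                  gdb -q ./myprog                 # start GDB; then use commands such as break main, run, backtrace, quit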

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDB/9.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDCM/", "title": "GDCM", "text": ""}, {"location": "available_software/detail/GDCM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GDCM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDCM, load one of these modules using a module load command like:

                  module load GDCM/3.0.21-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDCM/3.0.21-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/GDGraph/", "title": "GDGraph", "text": ""}, {"location": "available_software/detail/GDGraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GDGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDGraph, load one of these modules using a module load command like:

                  module load GDGraph/1.56-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDGraph/1.56-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GDRCopy/", "title": "GDRCopy", "text": ""}, {"location": "available_software/detail/GDRCopy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDRCopy, load one of these modules using a module load command like:

                  module load GDRCopy/2.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDRCopy/2.3.1-GCCcore-12.3.0 x - x - x - GDRCopy/2.3-GCCcore-11.3.0 x x x - x x GDRCopy/2.3-GCCcore-11.2.0 x x x - x x GDRCopy/2.2-GCCcore-10.3.0 x - - - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x"}, {"location": "available_software/detail/GEGL/", "title": "GEGL", "text": ""}, {"location": "available_software/detail/GEGL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GEGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GEGL, load one of these modules using a module load command like:

                  module load GEGL/0.4.30-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GEGL/0.4.30-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GEOS/", "title": "GEOS", "text": ""}, {"location": "available_software/detail/GEOS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GEOS, load one of these modules using a module load command like:

                  module load GEOS/3.12.0-GCC-12.3.0\n
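
                  GEOS is a library rather than an end-user tool, so loading the module mainly matters when you build your own code against it. The geos-config helper reports the relevant flags:

                  geos-config --version   # GEOS release provided by this module
                  geos-config --cflags    # compiler flags for building against GEOS
                  geos-config --libs      # linker flags for the GEOS library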

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GEOS/3.12.0-GCC-12.3.0 x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x GEOS/3.10.3-GCC-11.3.0 x x x x x x GEOS/3.9.1-iccifort-2020.4.304 - x x x x x GEOS/3.9.1-GCC-11.2.0 x x x x x x GEOS/3.9.1-GCC-10.3.0 x x x x x x GEOS/3.9.1-GCC-10.2.0 - x x x x x GEOS/3.8.1-GCC-9.3.0-Python-3.8.2 - x x - x x GEOS/3.8.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x GEOS/3.8.0-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GFF3-toolkit/", "title": "GFF3-toolkit", "text": ""}, {"location": "available_software/detail/GFF3-toolkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GFF3-toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GFF3-toolkit, load one of these modules using a module load command like:

                  module load GFF3-toolkit/2.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GFF3-toolkit/2.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/GIMP/", "title": "GIMP", "text": ""}, {"location": "available_software/detail/GIMP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GIMP, load one of these modules using a module load command like:

                  module load GIMP/2.10.24-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GIMP/2.10.24-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/GL2PS/", "title": "GL2PS", "text": ""}, {"location": "available_software/detail/GL2PS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GL2PS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GL2PS, load one of these modules using a module load command like:

                  module load GL2PS/1.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GL2PS/1.4.2-GCCcore-11.3.0 x x x x x x GL2PS/1.4.2-GCCcore-11.2.0 x x x x x x GL2PS/1.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLFW/", "title": "GLFW", "text": ""}, {"location": "available_software/detail/GLFW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GLFW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLFW, load one of these modules using a module load command like:

                  module load GLFW/3.3.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLFW/3.3.8-GCCcore-12.3.0 x x x x x x GLFW/3.3.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/GLIMPSE/", "title": "GLIMPSE", "text": ""}, {"location": "available_software/detail/GLIMPSE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GLIMPSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLIMPSE, load one of these modules using a module load command like:

                  module load GLIMPSE/2.0.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLIMPSE/2.0.0-GCC-12.2.0 x x x x x x GLIMPSE/2.0.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GLM/", "title": "GLM", "text": ""}, {"location": "available_software/detail/GLM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GLM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLM, load one of these modules using a module load command like:

                  module load GLM/0.9.9.8-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLM/0.9.9.8-GCCcore-10.2.0 x x x x x x GLM/0.9.9.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLPK/", "title": "GLPK", "text": ""}, {"location": "available_software/detail/GLPK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GLPK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLPK, load one of these modules using a module load command like:

                  module load GLPK/5.0-GCCcore-12.3.0\n
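
                  GLPK ships the stand-alone solver glpsol. As a sketch, with model.lp (a problem in CPLEX LP format) and solution.txt as placeholder names:

                  glpsol --version                       # check the GLPK release
                  glpsol --lp model.lp -o solution.txt   # solve the model and write the report to a file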

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLPK/5.0-GCCcore-12.3.0 x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x GLPK/5.0-GCCcore-11.3.0 x x x x x x GLPK/5.0-GCCcore-11.2.0 x x x x x x GLPK/5.0-GCCcore-10.3.0 x x x x x x GLPK/4.65-GCCcore-10.2.0 x x x x x x GLPK/4.65-GCCcore-9.3.0 - x x - x x GLPK/4.65-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLib/", "title": "GLib", "text": ""}, {"location": "available_software/detail/GLib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLib, load one of these modules using a module load command like:

                  module load GLib/2.77.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLib/2.77.1-GCCcore-12.3.0 x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x GLib/2.72.1-GCCcore-11.3.0 x x x x x x GLib/2.69.1-GCCcore-11.2.0 x x x x x x GLib/2.68.2-GCCcore-10.3.0 x x x x x x GLib/2.66.1-GCCcore-10.2.0 x x x x x x GLib/2.64.1-GCCcore-9.3.0 x x x x x x GLib/2.62.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/GLibmm/", "title": "GLibmm", "text": ""}, {"location": "available_software/detail/GLibmm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GLibmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLibmm, load one of these modules using a module load command like:

                  module load GLibmm/2.66.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLibmm/2.66.4-GCCcore-10.3.0 - x x - x x GLibmm/2.49.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GMAP-GSNAP/", "title": "GMAP-GSNAP", "text": ""}, {"location": "available_software/detail/GMAP-GSNAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GMAP-GSNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GMAP-GSNAP, load one of these modules using a module load command like:

                  module load GMAP-GSNAP/2023-04-20-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GMAP-GSNAP/2023-04-20-GCC-12.2.0 x x x x x x GMAP-GSNAP/2023-02-17-GCC-11.3.0 x x x x x x GMAP-GSNAP/2019-09-12-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/GMP/", "title": "GMP", "text": ""}, {"location": "available_software/detail/GMP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GMP, load one of these modules using a module load command like:

                  module load GMP/6.2.1-GCCcore-12.3.0\n
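
                  GMP is a library, so loading the module mainly makes its headers and shared libraries visible to your own builds. A minimal sketch for compiling a program that includes gmp.h (bignum.c is a placeholder):

                  gcc bignum.c -o bignum -lgmp   # the module already puts gmp.h and libgmp on the compiler and linker search paths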

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GMP/6.2.1-GCCcore-12.3.0 x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x GMP/6.2.1-GCCcore-11.3.0 x x x x x x GMP/6.2.1-GCCcore-11.2.0 x x x x x x GMP/6.2.1-GCCcore-10.3.0 x x x x x x GMP/6.2.0-GCCcore-10.2.0 x x x x x x GMP/6.2.0-GCCcore-9.3.0 x x x x x x GMP/6.1.2-GCCcore-8.3.0 x x x x x x GMP/6.1.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/GOATOOLS/", "title": "GOATOOLS", "text": ""}, {"location": "available_software/detail/GOATOOLS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GOATOOLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GOATOOLS, load one of these modules using a module load command like:

                  module load GOATOOLS/1.3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GOATOOLS/1.3.1-foss-2022a x x x x x x GOATOOLS/1.3.1-foss-2021b x x x x x x GOATOOLS/1.1.6-foss-2020b - x x x x x"}, {"location": "available_software/detail/GObject-Introspection/", "title": "GObject-Introspection", "text": ""}, {"location": "available_software/detail/GObject-Introspection/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GObject-Introspection, load one of these modules using a module load command like:

                  module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x GObject-Introspection/1.72.0-GCCcore-11.3.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-11.2.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-10.3.0 x x x x x x GObject-Introspection/1.66.1-GCCcore-10.2.0 x x x x x x GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x GObject-Introspection/1.63.1-GCCcore-8.3.0-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/GPAW-setups/", "title": "GPAW-setups", "text": ""}, {"location": "available_software/detail/GPAW-setups/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GPAW-setups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPAW-setups, load one of these modules using a module load command like:

                  module load GPAW-setups/0.9.20000\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPAW-setups/0.9.20000 x x x x x x"}, {"location": "available_software/detail/GPAW/", "title": "GPAW", "text": ""}, {"location": "available_software/detail/GPAW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GPAW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPAW, load one of these modules using a module load command like:

                  module load GPAW/22.8.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPAW/22.8.0-intel-2022a x x x x x x GPAW/22.8.0-intel-2021b x x x - x x GPAW/22.8.0-foss-2021b x x x - x x GPAW/20.1.0-intel-2019b-Python-3.7.4 - x x - x x GPAW/20.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GPy/", "title": "GPy", "text": ""}, {"location": "available_software/detail/GPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPy, load one of these modules using a module load command like:

                  module load GPy/1.10.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPy/1.10.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/GPyOpt/", "title": "GPyOpt", "text": ""}, {"location": "available_software/detail/GPyOpt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GPyOpt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPyOpt, load one of these modules using a module load command like:

                  module load GPyOpt/1.2.6-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPyOpt/1.2.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/GPyTorch/", "title": "GPyTorch", "text": ""}, {"location": "available_software/detail/GPyTorch/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GPyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPyTorch, load one of these modules using a module load command like:

                  module load GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1 x - - - x - GPyTorch/1.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/GRASP-suite/", "title": "GRASP-suite", "text": ""}, {"location": "available_software/detail/GRASP-suite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GRASP-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GRASP-suite, load one of these modules using a module load command like:

                  module load GRASP-suite/2023-05-09-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GRASP-suite/2023-05-09-Java-17 x x x x x x"}, {"location": "available_software/detail/GRASS/", "title": "GRASS", "text": ""}, {"location": "available_software/detail/GRASS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GRASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GRASS, load one of these modules using a module load command like:

                  module load GRASS/8.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GRASS/8.2.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/GROMACS/", "title": "GROMACS", "text": ""}, {"location": "available_software/detail/GROMACS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GROMACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GROMACS, load one of these modules using a module load command like:

                  module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2\n
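
                  A typical two-step GROMACS run, as a sketch: the .mdp/.gro/.top inputs are placeholders, and note that the CUDA builds in the table below are only available on the GPU clusters (accelgor, joltik).

                  gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr   # preprocess the inputs into a portable run file
                  gmx mdrun -deffnm topol -ntomp 8                             # run the simulation with 8 OpenMP threads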

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2 x - - - x - GROMACS/2021.3-foss-2021a-CUDA-11.3.1 x - - - x - GROMACS/2021.2-fosscuda-2020b x - - - x - GROMACS/2021-foss-2020b - x x x x x GROMACS/2020-foss-2019b - x x - x - GROMACS/2019.4-foss-2019b - x x - x - GROMACS/2019.3-foss-2019b - x x - x -"}, {"location": "available_software/detail/GSL/", "title": "GSL", "text": ""}, {"location": "available_software/detail/GSL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GSL, load one of these modules using a module load command like:

                  module load GSL/2.7-intel-compilers-2021.4.0\n
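
                  Like most libraries in this list, GSL is used from your own code. The gsl-config helper reports the flags needed to compile and link against it; example.c is a placeholder:

                  gsl-config --version                                                   # GSL release provided by the module
                  gcc example.c $(gsl-config --cflags) $(gsl-config --libs) -o example   # compile and link against GSL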

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GSL/2.7-intel-compilers-2021.4.0 x x x - x x GSL/2.7-GCC-12.3.0 x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x GSL/2.7-GCC-11.3.0 x x x x x x GSL/2.7-GCC-11.2.0 x x x x x x GSL/2.7-GCC-10.3.0 x x x x x x GSL/2.6-iccifort-2020.4.304 - x x x x x GSL/2.6-iccifort-2020.1.217 - x x - x x GSL/2.6-iccifort-2019.5.281 - x x - x x GSL/2.6-GCC-10.2.0 x x x x x x GSL/2.6-GCC-9.3.0 - x x x x x GSL/2.6-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GST-plugins-bad/", "title": "GST-plugins-bad", "text": ""}, {"location": "available_software/detail/GST-plugins-bad/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GST-plugins-bad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GST-plugins-bad, load one of these modules using a module load command like:

                  module load GST-plugins-bad/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GST-plugins-bad/1.20.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GST-plugins-base/", "title": "GST-plugins-base", "text": ""}, {"location": "available_software/detail/GST-plugins-base/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GST-plugins-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GST-plugins-base, load one of these modules using a module load command like:

                  module load GST-plugins-base/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GST-plugins-base/1.20.2-GCC-11.3.0 x x x x x x GST-plugins-base/1.18.5-GCC-11.2.0 x x x x x x GST-plugins-base/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GStreamer/", "title": "GStreamer", "text": ""}, {"location": "available_software/detail/GStreamer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GStreamer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GStreamer, load one of these modules using a module load command like:

                  module load GStreamer/1.20.2-GCC-11.3.0\n
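
                  A quick sanity check that the GStreamer tools are working; the test pipeline below only generates and discards dummy audio, so it also runs on a node without sound hardware:

                  gst-inspect-1.0 --version                                # show the GStreamer core version
                  gst-launch-1.0 audiotestsrc num-buffers=100 ! fakesink   # run a trivial test pipeline and exit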

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GStreamer/1.20.2-GCC-11.3.0 x x x x x x GStreamer/1.18.5-GCC-11.2.0 x x x x x x GStreamer/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTDB-Tk/", "title": "GTDB-Tk", "text": ""}, {"location": "available_software/detail/GTDB-Tk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GTDB-Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTDB-Tk, load one of these modules using a module load command like:

                  module load GTDB-Tk/2.3.2-foss-2023a\n
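
                  A hedged sketch of the standard classification workflow: genomes/ and gtdbtk_out/ are placeholder directories, the GTDB reference data must be available (its location is usually taken from the GTDBTK_DATA_PATH environment variable), and the exact options differ between GTDB-Tk releases, so check gtdbtk classify_wf --help first.

                  gtdbtk classify_wf --genome_dir genomes/ --out_dir gtdbtk_out/ --cpus 8 --skip_ani_screen   # classify all genomes in a directory (--skip_ani_screen only exists in 2.x releases)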

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTDB-Tk/2.3.2-foss-2023a x x x x x x GTDB-Tk/2.0.0-intel-2021b x x x - x x GTDB-Tk/1.7.0-intel-2020b - x x - x x GTDB-Tk/1.5.0-intel-2020b - x x - x x GTDB-Tk/1.3.0-intel-2020a-Python-3.8.2 - x x - x x GTDB-Tk/1.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GTK%2B/", "title": "GTK+", "text": ""}, {"location": "available_software/detail/GTK%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK+, load one of these modules using a module load command like:

                  module load GTK+/3.24.23-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK+/3.24.23-GCCcore-10.2.0 x x x x x x GTK+/3.24.13-GCCcore-8.3.0 - x x - x x GTK+/2.24.33-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GTK2/", "title": "GTK2", "text": ""}, {"location": "available_software/detail/GTK2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GTK2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK2, load one of these modules using a module load command like:

                  module load GTK2/2.24.33-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK2/2.24.33-GCCcore-11.3.0 x x x x x x GTK2/2.24.33-GCCcore-10.3.0 - - x - x -"}, {"location": "available_software/detail/GTK3/", "title": "GTK3", "text": ""}, {"location": "available_software/detail/GTK3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GTK3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK3, load one of these modules using a module load command like:

                  module load GTK3/3.24.37-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK3/3.24.37-GCCcore-12.3.0 x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x GTK3/3.24.31-GCCcore-11.2.0 x x x x x x GTK3/3.24.29-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTK4/", "title": "GTK4", "text": ""}, {"location": "available_software/detail/GTK4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GTK4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK4, load one of these modules using a module load command like:

                  module load GTK4/4.7.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK4/4.7.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GTS/", "title": "GTS", "text": ""}, {"location": "available_software/detail/GTS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTS, load one of these modules using a module load command like:

                  module load GTS/0.7.6-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTS/0.7.6-foss-2019b - x x - x x GTS/0.7.6-GCCcore-12.3.0 x x x x x x GTS/0.7.6-GCCcore-11.3.0 x x x x x x GTS/0.7.6-GCCcore-11.2.0 x x x x x x GTS/0.7.6-GCCcore-10.3.0 x x x x x x GTS/0.7.6-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/GUSHR/", "title": "GUSHR", "text": ""}, {"location": "available_software/detail/GUSHR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GUSHR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GUSHR, load one of these modules using a module load command like:

                  module load GUSHR/2020-09-28-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GUSHR/2020-09-28-foss-2021b x x x x x x"}, {"location": "available_software/detail/GapFiller/", "title": "GapFiller", "text": ""}, {"location": "available_software/detail/GapFiller/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GapFiller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GapFiller, load one of these modules using a module load command like:

                  module load GapFiller/2.1.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GapFiller/2.1.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Gaussian/", "title": "Gaussian", "text": ""}, {"location": "available_software/detail/Gaussian/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gaussian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gaussian, load one of these modules using a module load command like:

                  module load Gaussian/g16_C.01-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gaussian/g16_C.01-intel-2022a x x x x x x Gaussian/g16_C.01-intel-2019b - x x - x x Gaussian/g16_C.01-iimpi-2020b x x x x x x"}, {"location": "available_software/detail/Gblocks/", "title": "Gblocks", "text": ""}, {"location": "available_software/detail/Gblocks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gblocks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gblocks, load one of these modules using a module load command like:

                  module load Gblocks/0.91b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gblocks/0.91b x x x x x x"}, {"location": "available_software/detail/Gdk-Pixbuf/", "title": "Gdk-Pixbuf", "text": ""}, {"location": "available_software/detail/Gdk-Pixbuf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gdk-Pixbuf, load one of these modules using a module load command like:

                  module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x Gdk-Pixbuf/2.42.8-GCCcore-11.3.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-11.2.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-10.3.0 x x x x x x Gdk-Pixbuf/2.40.0-GCCcore-10.2.0 x x x x x x Gdk-Pixbuf/2.38.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Geant4/", "title": "Geant4", "text": ""}, {"location": "available_software/detail/Geant4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Geant4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Geant4, load one of these modules using a module load command like:

                  module load Geant4/11.0.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Geant4/11.0.2-GCC-11.3.0 x x x x x x Geant4/11.0.2-GCC-11.2.0 x x x - x x Geant4/11.0.1-GCC-11.2.0 x x x x x x Geant4/10.7.1-GCC-11.2.0 x x x x x x Geant4/10.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/GeneMark-ET/", "title": "GeneMark-ET", "text": ""}, {"location": "available_software/detail/GeneMark-ET/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GeneMark-ET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GeneMark-ET, load one of these modules using a module load command like:

                  module load GeneMark-ET/4.71-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GeneMark-ET/4.71-GCCcore-11.3.0 x x x x x x GeneMark-ET/4.71-GCCcore-11.2.0 x x x x x x GeneMark-ET/4.65-GCCcore-10.2.0 x x x x x x GeneMark-ET/4.57-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GenomeThreader/", "title": "GenomeThreader", "text": ""}, {"location": "available_software/detail/GenomeThreader/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GenomeThreader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GenomeThreader, load one of these modules using a module load command like:

                  module load GenomeThreader/1.7.3-Linux_x86_64-64bit\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GenomeThreader/1.7.3-Linux_x86_64-64bit x x x x x x"}, {"location": "available_software/detail/GenomeWorks/", "title": "GenomeWorks", "text": ""}, {"location": "available_software/detail/GenomeWorks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GenomeWorks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GenomeWorks, load one of these modules using a module load command like:

                  module load GenomeWorks/2021.02.2-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GenomeWorks/2021.02.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Gerris/", "title": "Gerris", "text": ""}, {"location": "available_software/detail/Gerris/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gerris installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gerris, load one of these modules using a module load command like:

                  module load Gerris/20131206-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gerris/20131206-gompi-2023a x x x x x x"}, {"location": "available_software/detail/GetOrganelle/", "title": "GetOrganelle", "text": ""}, {"location": "available_software/detail/GetOrganelle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GetOrganelle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GetOrganelle, load one of these modules using a module load command like:

                  module load GetOrganelle/1.7.5.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GetOrganelle/1.7.5.3-foss-2021b x x x - x x GetOrganelle/1.7.4-pre2-foss-2020b - x x x x x GetOrganelle/1.7.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GffCompare/", "title": "GffCompare", "text": ""}, {"location": "available_software/detail/GffCompare/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GffCompare installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GffCompare, load one of these modules using a module load command like:

                  module load GffCompare/0.12.6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GffCompare/0.12.6-GCC-11.2.0 x x x x x x GffCompare/0.11.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Ghostscript/", "title": "Ghostscript", "text": ""}, {"location": "available_software/detail/Ghostscript/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ghostscript, load one of these modules using a module load command like:

                  module load Ghostscript/10.01.2-GCCcore-12.3.0\n
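
                  Two common Ghostscript one-liners once the gs binary is available; the PDF names are placeholders:

                  gs -sDEVICE=pdfwrite -o merged.pdf part1.pdf part2.pdf   # concatenate PDFs into one file
                  gs -sDEVICE=png16m -r300 -o page-%03d.png document.pdf   # rasterise a PDF to PNG images at 300 dpi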

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x Ghostscript/9.56.1-GCCcore-11.3.0 x x x x x x Ghostscript/9.54.0-GCCcore-11.2.0 x x x x x x Ghostscript/9.54.0-GCCcore-10.3.0 x x x x x x Ghostscript/9.53.3-GCCcore-10.2.0 x x x x x x Ghostscript/9.52-GCCcore-9.3.0 - x x - x x Ghostscript/9.50-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GimmeMotifs/", "title": "GimmeMotifs", "text": ""}, {"location": "available_software/detail/GimmeMotifs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GimmeMotifs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GimmeMotifs, load one of these modules using a module load command like:

                  module load GimmeMotifs/0.17.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GimmeMotifs/0.17.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Giotto-Suite/", "title": "Giotto-Suite", "text": ""}, {"location": "available_software/detail/Giotto-Suite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Giotto-Suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Giotto-Suite, load one of these modules using a module load command like:

                  module load Giotto-Suite/3.0.1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Giotto-Suite/3.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/GitPython/", "title": "GitPython", "text": ""}, {"location": "available_software/detail/GitPython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GitPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GitPython, load one of these modules using a module load command like:

                  module load GitPython/3.1.40-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GitPython/3.1.40-GCCcore-12.3.0 x x x x x x GitPython/3.1.31-GCCcore-12.2.0 x x x x x x GitPython/3.1.27-GCCcore-11.3.0 x x x x x x GitPython/3.1.24-GCCcore-11.2.0 x x x - x x GitPython/3.1.14-GCCcore-10.2.0 - x x x x x GitPython/3.1.9-GCCcore-9.3.0-Python-3.8.2 - x x - x x GitPython/3.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GlimmerHMM/", "title": "GlimmerHMM", "text": ""}, {"location": "available_software/detail/GlimmerHMM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GlimmerHMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GlimmerHMM, load one of these modules using a module load command like:

                  module load GlimmerHMM/3.0.4c-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GlimmerHMM/3.0.4c-GCC-10.2.0 - x x x x x GlimmerHMM/3.0.4c-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GlobalArrays/", "title": "GlobalArrays", "text": ""}, {"location": "available_software/detail/GlobalArrays/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GlobalArrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GlobalArrays, load one of these modules using a module load command like:

                  module load GlobalArrays/5.8-iomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GlobalArrays/5.8-iomkl-2021a x x x x x x GlobalArrays/5.8-intel-2021a - x x - x x"}, {"location": "available_software/detail/GnuTLS/", "title": "GnuTLS", "text": ""}, {"location": "available_software/detail/GnuTLS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GnuTLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GnuTLS, load one of these modules using a module load command like:

                  module load GnuTLS/3.7.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GnuTLS/3.7.3-GCCcore-11.2.0 x x x x x x GnuTLS/3.7.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Go/", "title": "Go", "text": ""}, {"location": "available_software/detail/Go/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Go installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Go, load one of these modules using a module load command like:

                  module load Go/1.21.6\n
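
                  As a quick sanity check after loading the module, you can confirm the toolchain version and build a trivial program. This is a minimal sketch; the file name hello.go is only a placeholder for illustration:

                  module load Go/1.21.6
                  go version                # should report go1.21.6
                  go build hello.go         # assumes a hello.go source file in the current directory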

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Go/1.21.6 x x x x x x Go/1.21.2 x x x x x x Go/1.17.6 x x x - x x Go/1.17.3 - x x - x - Go/1.14 - - x - x -"}, {"location": "available_software/detail/Gradle/", "title": "Gradle", "text": ""}, {"location": "available_software/detail/Gradle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gradle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Gradle, load one of these modules using a module load command like:

                  module load Gradle/8.6-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gradle/8.6-Java-17 x x x x x x"}, {"location": "available_software/detail/GraphMap/", "title": "GraphMap", "text": ""}, {"location": "available_software/detail/GraphMap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GraphMap, load one of these modules using a module load command like:

                  module load GraphMap/0.5.2-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphMap/0.5.2-foss-2019b - - x - x x"}, {"location": "available_software/detail/GraphMap2/", "title": "GraphMap2", "text": ""}, {"location": "available_software/detail/GraphMap2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphMap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GraphMap2, load one of these modules using a module load command like:

                  module load GraphMap2/0.6.4-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphMap2/0.6.4-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphene/", "title": "Graphene", "text": ""}, {"location": "available_software/detail/Graphene/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Graphene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Graphene, load one of these modules using a module load command like:

                  module load Graphene/1.10.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Graphene/1.10.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GraphicsMagick/", "title": "GraphicsMagick", "text": ""}, {"location": "available_software/detail/GraphicsMagick/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphicsMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GraphicsMagick, load one of these modules using a module load command like:

                  module load GraphicsMagick/1.3.34-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphicsMagick/1.3.34-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphviz/", "title": "Graphviz", "text": ""}, {"location": "available_software/detail/Graphviz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Graphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Graphviz, load one of these modules using a module load command like:

                  module load Graphviz/8.1.0-GCCcore-12.3.0\n
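
                  Once the module is loaded, the dot layout tool that ships with Graphviz is on your PATH. A minimal sketch (the output file name graph.png is just an example):

                  module load Graphviz/8.1.0-GCCcore-12.3.0
                  echo 'digraph G { a -> b }' | dot -Tpng -o graph.png   # render a two-node graph to PNG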

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Graphviz/8.1.0-GCCcore-12.3.0 x x x x x x Graphviz/5.0.0-GCCcore-11.3.0 x x x x x x Graphviz/2.50.0-GCCcore-11.2.0 x x x x x x Graphviz/2.47.2-GCCcore-10.3.0 x x x x x x Graphviz/2.47.0-GCCcore-10.2.0-Java-11 - x x x x x Graphviz/2.42.2-foss-2019b-Python-3.7.4 - x x - x x Graphviz/2.42.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Greenlet/", "title": "Greenlet", "text": ""}, {"location": "available_software/detail/Greenlet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Greenlet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Greenlet, load one of these modules using a module load command like:

                  module load Greenlet/2.0.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Greenlet/2.0.2-foss-2022b x x x x x x Greenlet/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/GroIMP/", "title": "GroIMP", "text": ""}, {"location": "available_software/detail/GroIMP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GroIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GroIMP, load one of these modules using a module load command like:

                  module load GroIMP/1.5-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GroIMP/1.5-Java-1.8 - x x - x x"}, {"location": "available_software/detail/Guile/", "title": "Guile", "text": ""}, {"location": "available_software/detail/Guile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Guile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Guile, load one of these modules using a module load command like:

                  module load Guile/3.0.7-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Guile/3.0.7-GCCcore-11.2.0 x x x x x x Guile/2.2.7-GCCcore-10.3.0 - x x - x x Guile/1.8.8-GCCcore-9.3.0 - x x - x x Guile/1.8.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Guppy/", "title": "Guppy", "text": ""}, {"location": "available_software/detail/Guppy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Guppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Guppy, load one of these modules using a module load command like:

                  module load Guppy/6.5.7-gpu\n
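
                  Note that the overview below lists separate -gpu and -cpu builds with different cluster availability, so pick the variant that matches the cluster you submit to. A minimal sketch, assuming you are on a cluster where the corresponding build is marked as available:

                  module load Guppy/6.5.7-gpu   # on a cluster where the GPU build is listed as available
                  # or
                  module load Guppy/6.5.7-cpu   # on a CPU-only cluster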

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Guppy/6.5.7-gpu x - x - x - Guppy/6.5.7-cpu x x - x - x Guppy/6.4.6-gpu x - x - x - Guppy/6.4.6-cpu - x x x x x Guppy/6.4.2-gpu x - - - x - Guppy/6.4.2-cpu - x x - x x Guppy/6.3.8-gpu x - - - x - Guppy/6.3.8-cpu - x x - x x Guppy/6.3.7-gpu x - - - x - Guppy/6.3.7-cpu - x x - x x Guppy/6.1.7-gpu x - - - x - Guppy/6.1.7-cpu - x x - x x Guppy/6.1.2-gpu x - - - x - Guppy/6.1.2-cpu - x x - x x Guppy/6.0.1-gpu x - - - x - Guppy/6.0.1-cpu - x x - x x Guppy/5.0.16-gpu x - - - x - Guppy/5.0.16-cpu - x x - x - Guppy/5.0.15-gpu x - - - x - Guppy/5.0.15-cpu - x x - x x Guppy/5.0.14-gpu - - - - x - Guppy/5.0.14-cpu - x x - x x Guppy/5.0.11-gpu - - - - x - Guppy/5.0.11-cpu - x x - x x Guppy/5.0.7-gpu - - - - x - Guppy/5.0.7-cpu - x x - x x Guppy/4.4.1-cpu - x x - x - Guppy/4.2.2-cpu - x x - x - Guppy/4.0.15-cpu - x x - x - Guppy/3.5.2-cpu - - x - x -"}, {"location": "available_software/detail/Gurobi/", "title": "Gurobi", "text": ""}, {"location": "available_software/detail/Gurobi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Gurobi, load one of these modules using a module load command like:

                  module load Gurobi/11.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gurobi/11.0.0-GCCcore-12.3.0 x x x x x x Gurobi/9.5.2-GCCcore-11.3.0 x x x x x x Gurobi/9.5.0-GCCcore-11.2.0 x x x x x x Gurobi/9.1.1-GCCcore-10.2.0 - x x x x x Gurobi/9.1.0 - x x - x -"}, {"location": "available_software/detail/HAL/", "title": "HAL", "text": ""}, {"location": "available_software/detail/HAL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HAL, load one of these modules using a module load command like:

                  module load HAL/2.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HAL/2.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/HDBSCAN/", "title": "HDBSCAN", "text": ""}, {"location": "available_software/detail/HDBSCAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDBSCAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HDBSCAN, load one of these modules using a module load command like:

                  module load HDBSCAN/0.8.29-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDBSCAN/0.8.29-foss-2022a x x x x x x"}, {"location": "available_software/detail/HDDM/", "title": "HDDM", "text": ""}, {"location": "available_software/detail/HDDM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDDM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HDDM, load one of these modules using a module load command like:

                  module load HDDM/0.7.5-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDDM/0.7.5-intel-2019b-Python-3.7.4 - x - - - x HDDM/0.7.5-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/HDF/", "title": "HDF", "text": ""}, {"location": "available_software/detail/HDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HDF, load one of these modules using a module load command like:

                  module load HDF/4.2.16-2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x HDF/4.2.15-GCCcore-11.3.0 x x x x x x HDF/4.2.15-GCCcore-11.2.0 x x x x x x HDF/4.2.15-GCCcore-10.3.0 x x x x x x HDF/4.2.15-GCCcore-10.2.0 - x x x x x HDF/4.2.15-GCCcore-9.3.0 - - x - x x HDF/4.2.14-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/HDF5/", "title": "HDF5", "text": ""}, {"location": "available_software/detail/HDF5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDF5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HDF5, load one of these modules using a module load command like:

                  module load HDF5/1.14.0-gompi-2023a\n
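
                  Loading the HDF5 module also puts the command-line utilities shipped with HDF5 on your PATH. A minimal sketch for inspecting a file; data.h5 is only a placeholder name:

                  module load HDF5/1.14.0-gompi-2023a
                  h5dump -H data.h5     # print only the header/metadata of an HDF5 file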

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDF5/1.14.0-gompi-2023a x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x HDF5/1.13.1-gompi-2022a x x x - x x HDF5/1.12.2-iimpi-2022a x x x x x x HDF5/1.12.2-gompi-2022a x x x x x x HDF5/1.12.1-iimpi-2021b x x x x x x HDF5/1.12.1-gompi-2021b x x x x x x HDF5/1.10.8-gompi-2021b x x x - x x HDF5/1.10.7-iompi-2021a x x x x x x HDF5/1.10.7-iimpi-2021a - x x - x x HDF5/1.10.7-iimpi-2020b - x x x x x HDF5/1.10.7-gompic-2020b x - - - x - HDF5/1.10.7-gompi-2021a x x x x x x HDF5/1.10.7-gompi-2020b x x x x x x HDF5/1.10.6-iimpi-2020a x x x x x x HDF5/1.10.6-gompi-2020a - x x - x x HDF5/1.10.5-iimpi-2019b - x x - x x HDF5/1.10.5-gompi-2019b x x x - x x"}, {"location": "available_software/detail/HH-suite/", "title": "HH-suite", "text": ""}, {"location": "available_software/detail/HH-suite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HH-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HH-suite, load one of these modules using a module load command like:

                  module load HH-suite/3.3.0-gompic-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HH-suite/3.3.0-gompic-2020b x - - - x - HH-suite/3.3.0-gompi-2022a x x x x x x HH-suite/3.3.0-gompi-2021b x - x - x - HH-suite/3.3.0-gompi-2021a x x x - x x HH-suite/3.3.0-gompi-2020b - x x x x x HH-suite/3.2.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/HISAT2/", "title": "HISAT2", "text": ""}, {"location": "available_software/detail/HISAT2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HISAT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HISAT2, load one of these modules using a module load command like:

                  module load HISAT2/2.2.1-gompi-2022a\n
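
                  A minimal sketch of a typical HISAT2 workflow after loading the module; the file names genome.fa and reads.fq are placeholders for illustration:

                  module load HISAT2/2.2.1-gompi-2022a
                  hisat2-build genome.fa genome_index            # build an index from a reference genome
                  hisat2 -x genome_index -U reads.fq -S out.sam  # align single-end reads against that index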

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HISAT2/2.2.1-gompi-2022a x x x x x x HISAT2/2.2.1-gompi-2021b x x x x x x HISAT2/2.2.1-gompi-2020b - x x x x x"}, {"location": "available_software/detail/HMMER/", "title": "HMMER", "text": ""}, {"location": "available_software/detail/HMMER/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HMMER installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HMMER, load one of these modules using a module load command like:

                  module load HMMER/3.4-gompi-2023a\n
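
                  A minimal sketch of running a profile search after loading the module; profile.hmm and sequences.fasta are placeholder names:

                  module load HMMER/3.4-gompi-2023a
                  hmmsearch profile.hmm sequences.fasta > hits.txt   # search a sequence database with a profile HMM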

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HMMER/3.4-gompi-2023a x x x x x x HMMER/3.3.2-iimpi-2021b x x x - x x HMMER/3.3.2-iimpi-2020b - x x x x x HMMER/3.3.2-gompic-2020b x - - - x - HMMER/3.3.2-gompi-2022b x x x x x x HMMER/3.3.2-gompi-2022a x x x x x x HMMER/3.3.2-gompi-2021b x x x - x x HMMER/3.3.2-gompi-2021a x x x - x x HMMER/3.3.2-gompi-2020b x x x x x x HMMER/3.3.2-gompi-2020a - x x - x x HMMER/3.3.2-gompi-2019b - x x - x x HMMER/3.3.1-iimpi-2020a - x x - x x HMMER/3.3.1-gompi-2020a - x x - x x HMMER/3.2.1-iimpi-2019b - x x - x x HMMER/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/HMMER2/", "title": "HMMER2", "text": ""}, {"location": "available_software/detail/HMMER2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HMMER2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HMMER2, load one of these modules using a module load command like:

                  module load HMMER2/2.3.2-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HMMER2/2.3.2-GCC-10.3.0 - x x - x x HMMER2/2.3.2-GCC-10.2.0 - x x x x x HMMER2/2.3.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HPL/", "title": "HPL", "text": ""}, {"location": "available_software/detail/HPL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HPL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HPL, load one of these modules using a module load command like:

                  module load HPL/2.3-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HPL/2.3-intel-2019b - x x - x x HPL/2.3-iibff-2020b - x - - - - HPL/2.3-gobff-2020b - x - - - - HPL/2.3-foss-2023b x x x x x x HPL/2.3-foss-2019b - x x - x x HPL/2.0.15-intel-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/HTSeq/", "title": "HTSeq", "text": ""}, {"location": "available_software/detail/HTSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HTSeq, load one of these modules using a module load command like:

                  module load HTSeq/2.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSeq/2.0.2-foss-2022a x x x x x x HTSeq/0.11.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/HTSlib/", "title": "HTSlib", "text": ""}, {"location": "available_software/detail/HTSlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HTSlib, load one of these modules using a module load command like:

                  module load HTSlib/1.18-GCC-12.3.0\n
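
                  Loading HTSlib also gives you its command-line tools such as bgzip and tabix. A minimal sketch (variants.vcf is a placeholder file name):

                  module load HTSlib/1.18-GCC-12.3.0
                  bgzip variants.vcf                # block-compress the VCF to variants.vcf.gz
                  tabix -p vcf variants.vcf.gz      # build a tabix index for region queries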

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSlib/1.18-GCC-12.3.0 x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x HTSlib/1.15.1-GCC-11.3.0 x x x x x x HTSlib/1.14-GCC-11.2.0 x x x x x x HTSlib/1.12-GCC-10.3.0 x x x - x x HTSlib/1.12-GCC-10.2.0 - x x - x x HTSlib/1.11-GCC-10.2.0 x x x x x x HTSlib/1.10.2-iccifort-2019.5.281 - x x - x x HTSlib/1.10.2-GCC-9.3.0 - x x - x x HTSlib/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HTSplotter/", "title": "HTSplotter", "text": ""}, {"location": "available_software/detail/HTSplotter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSplotter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HTSplotter, load one of these modules using a module load command like:

                  module load HTSplotter/2.11-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSplotter/2.11-foss-2022b x x x x x x HTSplotter/0.15-foss-2022a x x x x x x"}, {"location": "available_software/detail/Hadoop/", "title": "Hadoop", "text": ""}, {"location": "available_software/detail/Hadoop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hadoop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Hadoop, load one of these modules using a module load command like:

                  module load Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8 - - x - x - Hadoop/2.10.0-GCCcore-10.2.0-native - x - - - - Hadoop/2.10.0-GCCcore-8.3.0-native - x x - x x"}, {"location": "available_software/detail/HarfBuzz/", "title": "HarfBuzz", "text": ""}, {"location": "available_software/detail/HarfBuzz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HarfBuzz, load one of these modules using a module load command like:

                  module load HarfBuzz/5.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x HarfBuzz/4.2.1-GCCcore-11.3.0 x x x x x x HarfBuzz/2.8.2-GCCcore-11.2.0 x x x x x x HarfBuzz/2.8.1-GCCcore-10.3.0 x x x x x x HarfBuzz/2.6.7-GCCcore-10.2.0 x x x x x x HarfBuzz/2.6.4-GCCcore-9.3.0 - x x - x x HarfBuzz/2.6.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/HiCExplorer/", "title": "HiCExplorer", "text": ""}, {"location": "available_software/detail/HiCExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HiCExplorer, load one of these modules using a module load command like:

                  module load HiCExplorer/3.7.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HiCExplorer/3.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/HiCMatrix/", "title": "HiCMatrix", "text": ""}, {"location": "available_software/detail/HiCMatrix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HiCMatrix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HiCMatrix, load one of these modules using a module load command like:

                  module load HiCMatrix/17-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HiCMatrix/17-foss-2022a x x x x x x"}, {"location": "available_software/detail/HighFive/", "title": "HighFive", "text": ""}, {"location": "available_software/detail/HighFive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HighFive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HighFive, load one of these modules using a module load command like:

                  module load HighFive/2.7.1-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HighFive/2.7.1-gompi-2023a x x x x x x"}, {"location": "available_software/detail/Highway/", "title": "Highway", "text": ""}, {"location": "available_software/detail/Highway/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Highway installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Highway, load one of these modules using a module load command like:

                  module load Highway/1.0.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Highway/1.0.4-GCCcore-12.3.0 x x x x x x Highway/1.0.4-GCCcore-11.3.0 x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x Highway/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Horovod/", "title": "Horovod", "text": ""}, {"location": "available_software/detail/Horovod/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Horovod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Horovod, load one of these modules using a module load command like:

                  module load Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Horovod/0.23.0-foss-2021a-CUDA-11.3.1-PyTorch-1.10.0 x - - - - - Horovod/0.22.0-fosscuda-2020b-PyTorch-1.8.1 x - - - - - Horovod/0.21.3-fosscuda-2020b-PyTorch-1.7.1 x - - - x - Horovod/0.21.1-fosscuda-2020b-TensorFlow-2.4.1 x - - - x -"}, {"location": "available_software/detail/HyPo/", "title": "HyPo", "text": ""}, {"location": "available_software/detail/HyPo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HyPo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using HyPo, load one of these modules using a module load command like:

                  module load HyPo/1.0.3-GCC-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HyPo/1.0.3-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/Hybpiper/", "title": "Hybpiper", "text": ""}, {"location": "available_software/detail/Hybpiper/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hybpiper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Hybpiper, load one of these modules using a module load command like:

                  module load Hybpiper/2.1.6-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hybpiper/2.1.6-foss-2022b x x x x x x"}, {"location": "available_software/detail/Hydra/", "title": "Hydra", "text": ""}, {"location": "available_software/detail/Hydra/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hydra installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Hydra, load one of these modules using a module load command like:

                  module load Hydra/1.1.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hydra/1.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Hyperopt/", "title": "Hyperopt", "text": ""}, {"location": "available_software/detail/Hyperopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Hyperopt, load one of these modules using a module load command like:

                  module load Hyperopt/0.2.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hyperopt/0.2.7-foss-2022a x x x x x x Hyperopt/0.2.7-foss-2021a x x x - x x"}, {"location": "available_software/detail/Hypre/", "title": "Hypre", "text": ""}, {"location": "available_software/detail/Hypre/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hypre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Hypre, load one of these modules using a module load command like:

                  module load Hypre/2.25.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hypre/2.25.0-foss-2022a x x x x x x Hypre/2.24.0-intel-2021b x x x x x x Hypre/2.21.0-foss-2021a - x x - x x Hypre/2.20.0-foss-2020b - x x x x x Hypre/2.18.2-intel-2019b - x x - x x Hypre/2.18.2-foss-2020a - x x - x x Hypre/2.18.2-foss-2019b x x x - x x"}, {"location": "available_software/detail/ICU/", "title": "ICU", "text": ""}, {"location": "available_software/detail/ICU/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ICU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ICU, load one of these modules using a module load command like:

                  module load ICU/73.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ICU/73.2-GCCcore-12.3.0 x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x ICU/71.1-GCCcore-11.3.0 x x x x x x ICU/69.1-GCCcore-11.2.0 x x x x x x ICU/69.1-GCCcore-10.3.0 x x x x x x ICU/67.1-GCCcore-10.2.0 x x x x x x ICU/66.1-GCCcore-9.3.0 - x x - x x ICU/64.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/IDBA-UD/", "title": "IDBA-UD", "text": ""}, {"location": "available_software/detail/IDBA-UD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IDBA-UD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IDBA-UD, load one of these modules using a module load command like:

                  module load IDBA-UD/1.1.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IDBA-UD/1.1.3-GCC-11.2.0 x x x - x x IDBA-UD/1.1.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/IGMPlot/", "title": "IGMPlot", "text": ""}, {"location": "available_software/detail/IGMPlot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IGMPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IGMPlot, load one of these modules using a module load command like:

                  module load IGMPlot/2.4.2-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IGMPlot/2.4.2-iccifort-2019.5.281 - x - - - - IGMPlot/2.4.2-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/IGV/", "title": "IGV", "text": ""}, {"location": "available_software/detail/IGV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IGV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IGV, load one of these modules using a module load command like:

                  module load IGV/2.9.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IGV/2.9.4-Java-11 - x x - x x IGV/2.8.0-Java-11 - x x - x x"}, {"location": "available_software/detail/IOR/", "title": "IOR", "text": ""}, {"location": "available_software/detail/IOR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IOR, load one of these modules using a module load command like:

                  module load IOR/3.2.1-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IOR/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/IPython/", "title": "IPython", "text": ""}, {"location": "available_software/detail/IPython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IPython, load one of these modules using a module load command like:

                  module load IPython/8.14.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IPython/8.14.0-GCCcore-12.3.0 x x x x x x IPython/8.14.0-GCCcore-12.2.0 x x x x x x IPython/8.5.0-GCCcore-11.3.0 x x x x x x IPython/7.26.0-GCCcore-11.2.0 x x x x x x IPython/7.25.0-GCCcore-10.3.0 x x x x x x IPython/7.18.1-GCCcore-10.2.0 x x x x x x IPython/7.15.0-intel-2020a-Python-3.8.2 x x x x x x IPython/7.15.0-foss-2020a-Python-3.8.2 - x x - x x IPython/7.9.0-intel-2019b-Python-3.7.4 - x x - x x IPython/7.9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/IQ-TREE/", "title": "IQ-TREE", "text": ""}, {"location": "available_software/detail/IQ-TREE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IQ-TREE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IQ-TREE, load one of these modules using a module load command like:

                  module load IQ-TREE/2.2.2.6-gompi-2022b\n
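
                  A minimal sketch of a run after loading the module; the 2.x releases typically install the binary as iqtree2, and alignment.phy is a placeholder input file:

                  module load IQ-TREE/2.2.2.6-gompi-2022b
                  iqtree2 -s alignment.phy -m MFP -T 4   # infer a tree with automatic model selection on 4 threads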

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IQ-TREE/2.2.2.6-gompi-2022b x x x x x x IQ-TREE/2.2.2.6-gompi-2022a x x x x x x IQ-TREE/2.2.2.3-gompi-2022a x x x x x x IQ-TREE/2.2.1-gompi-2021b x x x - x x IQ-TREE/1.6.12-intel-2019b - x x - x x"}, {"location": "available_software/detail/IRkernel/", "title": "IRkernel", "text": ""}, {"location": "available_software/detail/IRkernel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IRkernel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IRkernel, load one of these modules using a module load command like:

                  module load IRkernel/1.2-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IRkernel/1.2-foss-2021a-R-4.1.0 - x x - x x IRkernel/1.1-foss-2019b-R-3.6.2-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ISA-L/", "title": "ISA-L", "text": ""}, {"location": "available_software/detail/ISA-L/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ISA-L installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ISA-L, load one of these modules using a module load command like:

                  module load ISA-L/2.30.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ISA-L/2.30.0-GCCcore-11.3.0 x x x x x x ISA-L/2.30.0-GCCcore-11.2.0 x x x - x x ISA-L/2.30.0-GCCcore-10.3.0 x x x - x x ISA-L/2.30.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ITK/", "title": "ITK", "text": ""}, {"location": "available_software/detail/ITK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ITK, load one of these modules using a module load command like:

                  module load ITK/5.2.1-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ITK/5.2.1-fosscuda-2020b x - - - x - ITK/5.2.1-foss-2022a x x x x x x ITK/5.2.1-foss-2020b - x x x x x ITK/5.1.2-fosscuda-2020b - - - - x - ITK/5.0.1-foss-2019b-Python-3.7.4 - x x - x x ITK/4.13.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ImageMagick/", "title": "ImageMagick", "text": ""}, {"location": "available_software/detail/ImageMagick/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ImageMagick, load one of these modules using a module load command like:

                  module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n
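
                  ImageMagick 7 exposes its tools through the magick front-end. A minimal sketch after loading the module (input.png is a placeholder file name):

                  module load ImageMagick/7.1.1-15-GCCcore-12.3.0
                  magick input.png -resize 50% output.png   # downscale an image to half its size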

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x ImageMagick/7.1.0-37-GCCcore-11.3.0 x x x x x x ImageMagick/7.1.0-4-GCCcore-11.2.0 x x x x x x ImageMagick/7.0.11-14-GCCcore-10.3.0 x x x x x x ImageMagick/7.0.10-35-GCCcore-10.2.0 x x x x x x ImageMagick/7.0.10-1-GCCcore-9.3.0 - x x - x x ImageMagick/7.0.9-5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Imath/", "title": "Imath", "text": ""}, {"location": "available_software/detail/Imath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Imath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Imath, load one of these modules using a module load command like:

                  module load Imath/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Imath/3.1.7-GCCcore-12.3.0 x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x Imath/3.1.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Inferelator/", "title": "Inferelator", "text": ""}, {"location": "available_software/detail/Inferelator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Inferelator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Inferelator, load one of these modules using a module load command like:

                  module load Inferelator/0.6.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Inferelator/0.6.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/Infernal/", "title": "Infernal", "text": ""}, {"location": "available_software/detail/Infernal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Infernal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Infernal, load one of these modules using a module load command like:

                  module load Infernal/1.1.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Infernal/1.1.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/InterProScan/", "title": "InterProScan", "text": ""}, {"location": "available_software/detail/InterProScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which InterProScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using InterProScan, load one of these modules using a module load command like:

                  module load InterProScan/5.62-94.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty InterProScan/5.62-94.0-foss-2022b x x x x x x InterProScan/5.52-86.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/IonQuant/", "title": "IonQuant", "text": ""}, {"location": "available_software/detail/IonQuant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IonQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IonQuant, load one of these modules using a module load command like:

                  module load IonQuant/1.10.12-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IonQuant/1.10.12-Java-11 x x x x x x"}, {"location": "available_software/detail/IsoQuant/", "title": "IsoQuant", "text": ""}, {"location": "available_software/detail/IsoQuant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IsoQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IsoQuant, load one of these modules using a module load command like:

                  module load IsoQuant/3.3.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IsoQuant/3.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/IsoSeq/", "title": "IsoSeq", "text": ""}, {"location": "available_software/detail/IsoSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IsoSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using IsoSeq, load one of these modules using a module load command like:

                  module load IsoSeq/4.0.0-linux-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IsoSeq/4.0.0-linux-x86_64 x x x x x x IsoSeq/3.8.2-linux-x86_64 x x x x x x"}, {"location": "available_software/detail/JAGS/", "title": "JAGS", "text": ""}, {"location": "available_software/detail/JAGS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JAGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JAGS, load one of these modules using a module load command like:

                  module load JAGS/4.3.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JAGS/4.3.2-foss-2022b x x x x x x JAGS/4.3.1-foss-2022a x x x x x x JAGS/4.3.0-foss-2021b x x x - x x JAGS/4.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/JSON-GLib/", "title": "JSON-GLib", "text": ""}, {"location": "available_software/detail/JSON-GLib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JSON-GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JSON-GLib, load one of these modules using a module load command like:

                  module load JSON-GLib/1.6.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JSON-GLib/1.6.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Jansson/", "title": "Jansson", "text": ""}, {"location": "available_software/detail/Jansson/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Jansson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Jansson, load one of these modules using a module load command like:

                  module load Jansson/2.13.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Jansson/2.13.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/JasPer/", "title": "JasPer", "text": ""}, {"location": "available_software/detail/JasPer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JasPer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JasPer, load one of these modules using a module load command like:

                  module load JasPer/4.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JasPer/4.0.0-GCCcore-12.3.0 x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x JasPer/2.0.33-GCCcore-11.3.0 x x x x x x JasPer/2.0.33-GCCcore-11.2.0 x x x x x x JasPer/2.0.28-GCCcore-10.3.0 x x x x x x JasPer/2.0.24-GCCcore-10.2.0 x x x x x x JasPer/2.0.14-GCCcore-9.3.0 - x x - x x JasPer/2.0.14-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Java/", "title": "Java", "text": ""}, {"location": "available_software/detail/Java/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Java, load one of these modules using a module load command like:

                  module load Java/17.0.6\n
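
                  The overview below also lists version aliases such as Java/17(@Java/17.0.6); loading the short name should resolve to the pinned release it points at. A quick check:

                  module load Java/17     # alias, resolves to Java/17.0.6 according to the overview below
                  java -version           # confirm which JDK ended up on your PATH
                  module list             # shows the fully resolved module name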

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Java/17.0.6 x x x x x x Java/17(@Java/17.0.6) x x x x x x Java/13.0.2 - x x - x x Java/13(@Java/13.0.2) - x x - x x Java/11.0.20 x x x x x x Java/11.0.18 x - - x x - Java/11.0.16 x x x x x x Java/11.0.2 x x x - x x Java/11(@Java/11.0.20) x x x x x x Java/1.8.0_311 x - x x x x Java/1.8.0_241 - x - - - - Java/1.8.0_221 - x - - - - Java/1.8(@Java/1.8.0_311) x - x x x x Java/1.8(@Java/1.8.0_241) - x - - - -"}, {"location": "available_software/detail/Jellyfish/", "title": "Jellyfish", "text": ""}, {"location": "available_software/detail/Jellyfish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Jellyfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Jellyfish, load one of these modules using a module load command like:

                  module load Jellyfish/2.3.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Jellyfish/2.3.0-GCC-11.3.0 x x x x x x Jellyfish/2.3.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/JsonCpp/", "title": "JsonCpp", "text": ""}, {"location": "available_software/detail/JsonCpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JsonCpp, load one of these modules using a module load command like:

                  module load JsonCpp/1.9.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x JsonCpp/1.9.5-GCCcore-12.2.0 x x x x x x JsonCpp/1.9.5-GCCcore-11.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-11.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-9.3.0 - x x - x x JsonCpp/1.9.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Judy/", "title": "Judy", "text": ""}, {"location": "available_software/detail/Judy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Judy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Judy, load one of these modules using a module load command like:

                  module load Judy/1.0.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Judy/1.0.5-GCCcore-11.3.0 x x x x x x Judy/1.0.5-GCCcore-11.2.0 x x x x x x Judy/1.0.5-GCCcore-10.3.0 x x x - x x Judy/1.0.5-GCCcore-10.2.0 - x x x x x Judy/1.0.5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Julia/", "title": "Julia", "text": ""}, {"location": "available_software/detail/Julia/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Julia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Julia, load one of these modules using a module load command like:

                  module load Julia/1.9.3-linux-x86_64\n
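
                  A quick check that the interpreter is available after loading the module:

                  module load Julia/1.9.3-linux-x86_64
                  julia -e 'println(VERSION)'   # prints the Julia version, e.g. 1.9.3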

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Julia/1.9.3-linux-x86_64 x x x x x x Julia/1.7.2-linux-x86_64 x x x x x x Julia/1.6.2-linux-x86_64 - x x - x x"}, {"location": "available_software/detail/JupyterHub/", "title": "JupyterHub", "text": ""}, {"location": "available_software/detail/JupyterHub/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterHub installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JupyterHub, load one of these modules using a module load command like:

                  module load JupyterHub/4.0.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterHub/4.0.1-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/JupyterLab/", "title": "JupyterLab", "text": ""}, {"location": "available_software/detail/JupyterLab/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JupyterLab, load one of these modules using a module load command like:

                  module load JupyterLab/4.0.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x JupyterLab/4.0.3-GCCcore-12.2.0 x x x x x x JupyterLab/3.5.0-GCCcore-11.3.0 x x x x x x JupyterLab/3.1.6-GCCcore-11.2.0 x x x - x x JupyterLab/3.0.16-GCCcore-10.3.0 x - x - x - JupyterLab/2.2.8-GCCcore-10.2.0 x x x x x x JupyterLab/1.2.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/JupyterNotebook/", "title": "JupyterNotebook", "text": ""}, {"location": "available_software/detail/JupyterNotebook/#available-modules", "title": "Available modules", "text": "

The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JupyterNotebook, load one of these modules using a module load command like:

                  module load JupyterNotebook/7.0.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterNotebook/7.0.3-GCCcore-12.2.0 x x x x x x JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x JupyterNotebook/6.4.12-SAGE-10.2 x x x x x x JupyterNotebook/6.4.12-SAGE-10.1 x x x x x x JupyterNotebook/6.4.12-SAGE-9.8 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.2.0-IPython-7.26.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-10.3.0-IPython-7.25.0 x x x x x x JupyterNotebook/6.1.4-GCCcore-10.2.0-IPython-7.18.1 x x x x x x JupyterNotebook/6.0.3-intel-2020a-Python-3.8.2-IPython-7.15.0 x x x x x x JupyterNotebook/6.0.3-foss-2020a-Python-3.8.2-IPython-7.15.0 - x x - x x JupyterNotebook/6.0.2-intel-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x JupyterNotebook/6.0.2-foss-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x"}, {"location": "available_software/detail/KMC/", "title": "KMC", "text": ""}, {"location": "available_software/detail/KMC/#available-modules", "title": "Available modules", "text": "

The overview below shows which KMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KMC, load one of these modules using a module load command like:

                  module load KMC/3.2.1-GCC-11.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KMC/3.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x KMC/3.2.1-GCC-11.2.0 x x x - x x KMC/3.1.2rc1-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/KaHIP/", "title": "KaHIP", "text": ""}, {"location": "available_software/detail/KaHIP/#available-modules", "title": "Available modules", "text": "

The overview below shows which KaHIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KaHIP, load one of these modules using a module load command like:

                  module load KaHIP/3.14-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KaHIP/3.14-gompi-2022a - - - x - -"}, {"location": "available_software/detail/Kaleido/", "title": "Kaleido", "text": ""}, {"location": "available_software/detail/Kaleido/#available-modules", "title": "Available modules", "text": "

The overview below shows which Kaleido installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kaleido, load one of these modules using a module load command like:

                  module load Kaleido/0.1.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kaleido/0.1.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Kalign/", "title": "Kalign", "text": ""}, {"location": "available_software/detail/Kalign/#available-modules", "title": "Available modules", "text": "

The overview below shows which Kalign installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kalign, load one of these modules using a module load command like:

                  module load Kalign/3.3.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kalign/3.3.5-GCCcore-11.3.0 x x x x x x Kalign/3.3.2-GCCcore-11.2.0 x - x - x - Kalign/3.3.1-GCCcore-10.3.0 x x x - x x Kalign/3.3.1-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Kent_tools/", "title": "Kent_tools", "text": ""}, {"location": "available_software/detail/Kent_tools/#available-modules", "title": "Available modules", "text": "

The overview below shows which Kent_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kent_tools, load one of these modules using a module load command like:

                  module load Kent_tools/20190326-linux.x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kent_tools/20190326-linux.x86_64 - - x - x - Kent_tools/422-GCC-11.2.0 x x x x x x Kent_tools/411-GCC-10.2.0 - x x x x x Kent_tools/401-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Keras/", "title": "Keras", "text": ""}, {"location": "available_software/detail/Keras/#available-modules", "title": "Available modules", "text": "

The overview below shows which Keras installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Keras, load one of these modules using a module load command like:

                  module load Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Keras/2.4.3-fosscuda-2020b - - - - x - Keras/2.4.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/KerasTuner/", "title": "KerasTuner", "text": ""}, {"location": "available_software/detail/KerasTuner/#available-modules", "title": "Available modules", "text": "

The overview below shows which KerasTuner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KerasTuner, load one of these modules using a module load command like:

                  module load KerasTuner/1.3.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KerasTuner/1.3.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/Kraken/", "title": "Kraken", "text": ""}, {"location": "available_software/detail/Kraken/#available-modules", "title": "Available modules", "text": "

The overview below shows which Kraken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kraken, load one of these modules using a module load command like:

                  module load Kraken/1.1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kraken/1.1.1-GCCcore-10.2.0 - x x x x x Kraken/1.1.1-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/Kraken2/", "title": "Kraken2", "text": ""}, {"location": "available_software/detail/Kraken2/#available-modules", "title": "Available modules", "text": "

The overview below shows which Kraken2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kraken2, load one of these modules using a module load command like:

                  module load Kraken2/2.1.2-gompi-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kraken2/2.1.2-gompi-2021a - x x x x x Kraken2/2.0.9-beta-gompi-2020a-Perl-5.30.2 - x x - x x"}, {"location": "available_software/detail/KrakenUniq/", "title": "KrakenUniq", "text": ""}, {"location": "available_software/detail/KrakenUniq/#available-modules", "title": "Available modules", "text": "

The overview below shows which KrakenUniq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KrakenUniq, load one of these modules using a module load command like:

                  module load KrakenUniq/1.0.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KrakenUniq/1.0.3-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/KronaTools/", "title": "KronaTools", "text": ""}, {"location": "available_software/detail/KronaTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which KronaTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KronaTools, load one of these modules using a module load command like:

                  module load KronaTools/2.8.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x KronaTools/2.8.1-GCCcore-11.3.0 x x x x x x KronaTools/2.8-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/LAME/", "title": "LAME", "text": ""}, {"location": "available_software/detail/LAME/#available-modules", "title": "Available modules", "text": "

The overview below shows which LAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LAME, load one of these modules using a module load command like:

                  module load LAME/3.100-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAME/3.100-GCCcore-12.3.0 x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x LAME/3.100-GCCcore-11.3.0 x x x x x x LAME/3.100-GCCcore-11.2.0 x x x x x x LAME/3.100-GCCcore-10.3.0 x x x x x x LAME/3.100-GCCcore-10.2.0 x x x x x x LAME/3.100-GCCcore-9.3.0 - x x - x x LAME/3.100-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/LAMMPS/", "title": "LAMMPS", "text": ""}, {"location": "available_software/detail/LAMMPS/#available-modules", "title": "Available modules", "text": "

The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LAMMPS, load one of these modules using a module load command like:

                  module load LAMMPS/patch_20Nov2019-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAMMPS/patch_20Nov2019-intel-2019b - x - - - - LAMMPS/23Jun2022-foss-2021b-kokkos-CUDA-11.4.1 x - - - x - LAMMPS/23Jun2022-foss-2021b-kokkos x x x - x x LAMMPS/23Jun2022-foss-2021a-kokkos - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos-OCTP - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos - - x - x x LAMMPS/7Aug2019-foss-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-intel-2020a-Python-3.8.2-kokkos - x x - x x LAMMPS/3Mar2020-intel-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-foss-2019b-Python-3.7.4-kokkos - x x - x x"}, {"location": "available_software/detail/LAST/", "title": "LAST", "text": ""}, {"location": "available_software/detail/LAST/#available-modules", "title": "Available modules", "text": "

The overview below shows which LAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LAST, load one of these modules using a module load command like:

                  module load LAST/1179-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAST/1179-GCC-10.2.0 - x x x x x LAST/1045-intel-2019b - x x - x x"}, {"location": "available_software/detail/LASTZ/", "title": "LASTZ", "text": ""}, {"location": "available_software/detail/LASTZ/#available-modules", "title": "Available modules", "text": "

The overview below shows which LASTZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LASTZ, load one of these modules using a module load command like:

                  module load LASTZ/1.04.22-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LASTZ/1.04.22-GCC-12.3.0 x x x x x x LASTZ/1.04.03-foss-2019b - x x - x x"}, {"location": "available_software/detail/LDC/", "title": "LDC", "text": ""}, {"location": "available_software/detail/LDC/#available-modules", "title": "Available modules", "text": "

The overview below shows which LDC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LDC, load one of these modules using a module load command like:

                  module load LDC/1.30.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LDC/1.30.0-GCCcore-11.3.0 x x x x x x LDC/1.25.1-GCCcore-10.2.0 - x x x x x LDC/1.24.0-x86_64 x x x x x x LDC/0.17.6-x86_64 - x x x x x"}, {"location": "available_software/detail/LERC/", "title": "LERC", "text": ""}, {"location": "available_software/detail/LERC/#available-modules", "title": "Available modules", "text": "

The overview below shows which LERC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LERC, load one of these modules using a module load command like:

                  module load LERC/4.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LERC/4.0.0-GCCcore-12.3.0 x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x LERC/4.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LIANA%2B/", "title": "LIANA+", "text": ""}, {"location": "available_software/detail/LIANA%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which LIANA+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LIANA+, load one of these modules using a module load command like:

                  module load LIANA+/1.0.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LIANA+/1.0.1-foss-2022a x x x x - x"}, {"location": "available_software/detail/LIBSVM/", "title": "LIBSVM", "text": ""}, {"location": "available_software/detail/LIBSVM/#available-modules", "title": "Available modules", "text": "

The overview below shows which LIBSVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LIBSVM, load one of these modules using a module load command like:

                  module load LIBSVM/3.30-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LIBSVM/3.30-GCCcore-11.3.0 x x x x x x LIBSVM/3.25-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/LLVM/", "title": "LLVM", "text": ""}, {"location": "available_software/detail/LLVM/#available-modules", "title": "Available modules", "text": "

The overview below shows which LLVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LLVM, load one of these modules using a module load command like:

                  module load LLVM/16.0.6-GCCcore-12.3.0\n
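
Only one LLVM version can be active at a time. If you need to switch versions in an interactive session, here is a minimal sketch using the standard Lmod swap command (both versions are just examples taken from the table below):

module load LLVM/14.0.3-GCCcore-11.3.0\nmodule swap LLVM/16.0.6-GCCcore-12.3.0  # replaces the loaded LLVM with the newer one\n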

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LLVM/16.0.6-GCCcore-12.3.0 x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x LLVM/14.0.6-GCCcore-12.2.0-llvmlite x x x x x x LLVM/14.0.3-GCCcore-11.3.0 x x x x x x LLVM/12.0.1-GCCcore-11.2.0 x x x x x x LLVM/11.1.0-GCCcore-10.3.0 x x x x x x LLVM/11.0.0-GCCcore-10.2.0 x x x x x x LLVM/10.0.1-GCCcore-10.2.0 - x x x x x LLVM/9.0.1-GCCcore-9.3.0 - x x - x x LLVM/9.0.0-GCCcore-8.3.0 x x x - x x LLVM/8.0.1-GCCcore-8.3.0 x x x - x x LLVM/7.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/LMDB/", "title": "LMDB", "text": ""}, {"location": "available_software/detail/LMDB/#available-modules", "title": "Available modules", "text": "

The overview below shows which LMDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LMDB, load one of these modules using a module load command like:

                  module load LMDB/0.9.31-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LMDB/0.9.31-GCCcore-12.3.0 x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x LMDB/0.9.29-GCCcore-11.3.0 x x x x x x LMDB/0.9.29-GCCcore-11.2.0 x x x x x x LMDB/0.9.28-GCCcore-10.3.0 x x x x x x LMDB/0.9.24-GCCcore-10.2.0 x x x x x x LMDB/0.9.24-GCCcore-9.3.0 - x x - x x LMDB/0.9.24-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LMfit/", "title": "LMfit", "text": ""}, {"location": "available_software/detail/LMfit/#available-modules", "title": "Available modules", "text": "

The overview below shows which LMfit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LMfit, load one of these modules using a module load command like:

                  module load LMfit/1.0.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LMfit/1.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LPJmL/", "title": "LPJmL", "text": ""}, {"location": "available_software/detail/LPJmL/#available-modules", "title": "Available modules", "text": "

The overview below shows which LPJmL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LPJmL, load one of these modules using a module load command like:

                  module load LPJmL/4.0.003-iimpi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LPJmL/4.0.003-iimpi-2020b - x x x x x"}, {"location": "available_software/detail/LPeg/", "title": "LPeg", "text": ""}, {"location": "available_software/detail/LPeg/#available-modules", "title": "Available modules", "text": "

The overview below shows which LPeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LPeg, load one of these modules using a module load command like:

                  module load LPeg/1.0.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LPeg/1.0.2-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/LSD2/", "title": "LSD2", "text": ""}, {"location": "available_software/detail/LSD2/#available-modules", "title": "Available modules", "text": "

The overview below shows which LSD2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LSD2, load one of these modules using a module load command like:

                  module load LSD2/2.4.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LSD2/2.4.1-GCCcore-12.2.0 x x x x x x LSD2/2.3-GCCcore-11.3.0 x x x x x x LSD2/2.3-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/LUMPY/", "title": "LUMPY", "text": ""}, {"location": "available_software/detail/LUMPY/#available-modules", "title": "Available modules", "text": "

The overview below shows which LUMPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LUMPY, load one of these modules using a module load command like:

                  module load LUMPY/0.3.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LUMPY/0.3.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/LZO/", "title": "LZO", "text": ""}, {"location": "available_software/detail/LZO/#available-modules", "title": "Available modules", "text": "

The overview below shows which LZO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LZO, load one of these modules using a module load command like:

                  module load LZO/2.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LZO/2.10-GCCcore-12.3.0 x x x x x x LZO/2.10-GCCcore-11.3.0 x x x x x x LZO/2.10-GCCcore-11.2.0 x x x x x x LZO/2.10-GCCcore-10.3.0 x x x x x x LZO/2.10-GCCcore-10.2.0 - x x x x x LZO/2.10-GCCcore-9.3.0 x x x x x x LZO/2.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/L_RNA_scaffolder/", "title": "L_RNA_scaffolder", "text": ""}, {"location": "available_software/detail/L_RNA_scaffolder/#available-modules", "title": "Available modules", "text": "

The overview below shows which L_RNA_scaffolder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using L_RNA_scaffolder, load one of these modules using a module load command like:

                  module load L_RNA_scaffolder/20190530-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty L_RNA_scaffolder/20190530-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Lace/", "title": "Lace", "text": ""}, {"location": "available_software/detail/Lace/#available-modules", "title": "Available modules", "text": "

The overview below shows which Lace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Lace, load one of these modules using a module load command like:

                  module load Lace/1.14.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lace/1.14.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/LevelDB/", "title": "LevelDB", "text": ""}, {"location": "available_software/detail/LevelDB/#available-modules", "title": "Available modules", "text": "

The overview below shows which LevelDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LevelDB, load one of these modules using a module load command like:

                  module load LevelDB/1.22-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LevelDB/1.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Levenshtein/", "title": "Levenshtein", "text": ""}, {"location": "available_software/detail/Levenshtein/#available-modules", "title": "Available modules", "text": "

The overview below shows which Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Levenshtein, load one of these modules using a module load command like:

                  module load Levenshtein/0.24.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Levenshtein/0.24.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/LiBis/", "title": "LiBis", "text": ""}, {"location": "available_software/detail/LiBis/#available-modules", "title": "Available modules", "text": "

The overview below shows which LiBis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LiBis, load one of these modules using a module load command like:

                  module load LiBis/20200428-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LiBis/20200428-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LibLZF/", "title": "LibLZF", "text": ""}, {"location": "available_software/detail/LibLZF/#available-modules", "title": "Available modules", "text": "

The overview below shows which LibLZF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LibLZF, load one of these modules using a module load command like:

                  module load LibLZF/3.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibLZF/3.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LibSoup/", "title": "LibSoup", "text": ""}, {"location": "available_software/detail/LibSoup/#available-modules", "title": "Available modules", "text": "

The overview below shows which LibSoup installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LibSoup, load one of these modules using a module load command like:

                  module load LibSoup/3.0.7-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibSoup/3.0.7-GCC-11.2.0 x x x x x x LibSoup/2.74.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/LibTIFF/", "title": "LibTIFF", "text": ""}, {"location": "available_software/detail/LibTIFF/#available-modules", "title": "Available modules", "text": "

The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LibTIFF, load one of these modules using a module load command like:

                  module load LibTIFF/4.6.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.3.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.2.0 x x x x x x LibTIFF/4.2.0-GCCcore-10.3.0 x x x x x x LibTIFF/4.1.0-GCCcore-10.2.0 x x x x x x LibTIFF/4.1.0-GCCcore-9.3.0 - x x - x x LibTIFF/4.0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Libint/", "title": "Libint", "text": ""}, {"location": "available_software/detail/Libint/#available-modules", "title": "Available modules", "text": "

The overview below shows which Libint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Libint, load one of these modules using a module load command like:

                  module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-12.2.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-11.3.0-lmax-6-cp2k x x x x x x Libint/2.6.0-iimpi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-iimpi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-iccifort-2020.4.304-lmax-6-cp2k - x x - x - Libint/2.6.0-gompi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-gompi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-GCC-10.3.0-lmax-6-cp2k - x x x x x Libint/2.6.0-GCC-10.2.0-lmax-6-cp2k - x x x x x Libint/1.1.6-iomkl-2020a - x - - - - Libint/1.1.6-intel-2020a - x x - x x Libint/1.1.6-intel-2019b - x - - - - Libint/1.1.6-foss-2020a - x - - - -"}, {"location": "available_software/detail/Lighter/", "title": "Lighter", "text": ""}, {"location": "available_software/detail/Lighter/#available-modules", "title": "Available modules", "text": "

The overview below shows which Lighter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Lighter, load one of these modules using a module load command like:

                  module load Lighter/1.1.2-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lighter/1.1.2-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/LittleCMS/", "title": "LittleCMS", "text": ""}, {"location": "available_software/detail/LittleCMS/#available-modules", "title": "Available modules", "text": "

The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LittleCMS, load one of these modules using a module load command like:

                  module load LittleCMS/2.15-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LittleCMS/2.15-GCCcore-12.3.0 x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x LittleCMS/2.13.1-GCCcore-11.3.0 x x x x x x LittleCMS/2.12-GCCcore-11.2.0 x x x x x x LittleCMS/2.12-GCCcore-10.3.0 x x x x x x LittleCMS/2.11-GCCcore-10.2.0 x x x x x x LittleCMS/2.9-GCCcore-9.3.0 - x x - x x LittleCMS/2.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LncLOOM/", "title": "LncLOOM", "text": ""}, {"location": "available_software/detail/LncLOOM/#available-modules", "title": "Available modules", "text": "

The overview below shows which LncLOOM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LncLOOM, load one of these modules using a module load command like:

                  module load LncLOOM/2.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LncLOOM/2.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/LoRDEC/", "title": "LoRDEC", "text": ""}, {"location": "available_software/detail/LoRDEC/#available-modules", "title": "Available modules", "text": "

The overview below shows which LoRDEC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LoRDEC, load one of these modules using a module load command like:

                  module load LoRDEC/0.9-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LoRDEC/0.9-gompi-2022a x x x x x x"}, {"location": "available_software/detail/Longshot/", "title": "Longshot", "text": ""}, {"location": "available_software/detail/Longshot/#available-modules", "title": "Available modules", "text": "

The overview below shows which Longshot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Longshot, load one of these modules using a module load command like:

                  module load Longshot/0.4.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Longshot/0.4.5-GCCcore-11.3.0 x x x x x x Longshot/0.4.3-GCCcore-10.2.0 - - x - x - Longshot/0.4.1-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/LtrDetector/", "title": "LtrDetector", "text": ""}, {"location": "available_software/detail/LtrDetector/#available-modules", "title": "Available modules", "text": "

The overview below shows which LtrDetector installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LtrDetector, load one of these modules using a module load command like:

                  module load LtrDetector/1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LtrDetector/1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Lua/", "title": "Lua", "text": ""}, {"location": "available_software/detail/Lua/#available-modules", "title": "Available modules", "text": "

The overview below shows which Lua installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Lua, load one of these modules using a module load command like:

                  module load Lua/5.4.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lua/5.4.6-GCCcore-12.3.0 x x x x x x Lua/5.4.4-GCCcore-11.3.0 x x x x x x Lua/5.4.3-GCCcore-11.2.0 x x x x x x Lua/5.4.3-GCCcore-10.3.0 x x x x x x Lua/5.4.2-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-9.3.0 - x x - x x Lua/5.1.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/M1QN3/", "title": "M1QN3", "text": ""}, {"location": "available_software/detail/M1QN3/#available-modules", "title": "Available modules", "text": "

The overview below shows which M1QN3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using M1QN3, load one of these modules using a module load command like:

                  module load M1QN3/3.3-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty M1QN3/3.3-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/M4/", "title": "M4", "text": ""}, {"location": "available_software/detail/M4/#available-modules", "title": "Available modules", "text": "

The overview below shows which M4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using M4, load one of these modules using a module load command like:

                  module load M4/1.4.19-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty M4/1.4.19-GCCcore-13.2.0 x x x x x x M4/1.4.19-GCCcore-12.3.0 x x x x x x M4/1.4.19-GCCcore-12.2.0 x x x x x x M4/1.4.19-GCCcore-11.3.0 x x x x x x M4/1.4.19-GCCcore-11.2.0 x x x x x x M4/1.4.19 x x x x x x M4/1.4.18-GCCcore-10.3.0 x x x x x x M4/1.4.18-GCCcore-10.2.0 x x x x x x M4/1.4.18-GCCcore-9.3.0 x x x x x x M4/1.4.18-GCCcore-8.3.0 x x x x x x M4/1.4.18-GCCcore-8.2.0 - x - - - - M4/1.4.18 x x x x x x M4/1.4.17 x x x x x x"}, {"location": "available_software/detail/MACS2/", "title": "MACS2", "text": ""}, {"location": "available_software/detail/MACS2/#available-modules", "title": "Available modules", "text": "

The overview below shows which MACS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MACS2, load one of these modules using a module load command like:

                  module load MACS2/2.2.7.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MACS2/2.2.7.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/MACS3/", "title": "MACS3", "text": ""}, {"location": "available_software/detail/MACS3/#available-modules", "title": "Available modules", "text": "

The overview below shows which MACS3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MACS3, load one of these modules using a module load command like:

                  module load MACS3/3.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MACS3/3.0.1-gfbf-2023a x x x x x x MACS3/3.0.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/MAFFT/", "title": "MAFFT", "text": ""}, {"location": "available_software/detail/MAFFT/#available-modules", "title": "Available modules", "text": "

The overview below shows which MAFFT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MAFFT, load one of these modules using a module load command like:

                  module load MAFFT/7.520-GCC-12.3.0-with-extensions\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x MAFFT/7.505-GCC-11.3.0-with-extensions x x x x x x MAFFT/7.490-gompi-2021b-with-extensions x x x - x x MAFFT/7.475-gompi-2020b-with-extensions - x x x x x MAFFT/7.475-GCC-10.2.0-with-extensions - x x x x x MAFFT/7.453-iimpi-2020a-with-extensions - x x - x x MAFFT/7.453-iccifort-2019.5.281-with-extensions - x x - x x MAFFT/7.453-GCC-9.3.0-with-extensions - x x - x x MAFFT/7.453-GCC-8.3.0-with-extensions - x x - x x"}, {"location": "available_software/detail/MAGeCK/", "title": "MAGeCK", "text": ""}, {"location": "available_software/detail/MAGeCK/#available-modules", "title": "Available modules", "text": "

The overview below shows which MAGeCK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MAGeCK, load one of these modules using a module load command like:

                  module load MAGeCK/0.5.9.5-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MAGeCK/0.5.9.5-gfbf-2022b x x x x x x MAGeCK/0.5.9.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/MARS/", "title": "MARS", "text": ""}, {"location": "available_software/detail/MARS/#available-modules", "title": "Available modules", "text": "

The overview below shows which MARS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MARS, load one of these modules using a module load command like:

                  module load MARS/20191101-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MARS/20191101-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATIO/", "title": "MATIO", "text": ""}, {"location": "available_software/detail/MATIO/#available-modules", "title": "Available modules", "text": "

The overview below shows which MATIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MATIO, load one of these modules using a module load command like:

                  module load MATIO/1.5.17-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MATIO/1.5.17-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATLAB/", "title": "MATLAB", "text": ""}, {"location": "available_software/detail/MATLAB/#available-modules", "title": "Available modules", "text": "

The overview below shows which MATLAB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MATLAB, load one of these modules using a module load command like:

                  module load MATLAB/2022b-r5\n
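
As a hedged example of a quick non-interactive check (assuming the matlab launcher is on your PATH once the module is loaded and a licence is reachable; -batch requires R2019a or newer):

module load MATLAB/2022b-r5\nmatlab -batch 'disp(version)'  # run a single statement without opening the GUI\n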

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MATLAB/2022b-r5 x x x x x x MATLAB/2021b x x x - x x MATLAB/2019b - x x - x x"}, {"location": "available_software/detail/MBROLA/", "title": "MBROLA", "text": ""}, {"location": "available_software/detail/MBROLA/#available-modules", "title": "Available modules", "text": "

The overview below shows which MBROLA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MBROLA, load one of these modules using a module load command like:

                  module load MBROLA/3.3-GCCcore-9.3.0-voices-20200330\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MBROLA/3.3-GCCcore-9.3.0-voices-20200330 - x x - x x"}, {"location": "available_software/detail/MCL/", "title": "MCL", "text": ""}, {"location": "available_software/detail/MCL/#available-modules", "title": "Available modules", "text": "

The overview below shows which MCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MCL, load one of these modules using a module load command like:

                  module load MCL/22.282-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MCL/22.282-GCCcore-12.3.0 x x x x x x MCL/14.137-GCCcore-10.2.0 - x x x x x MCL/14.137-GCCcore-9.3.0 - x x - x x MCL/14.137-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MDAnalysis/", "title": "MDAnalysis", "text": ""}, {"location": "available_software/detail/MDAnalysis/#available-modules", "title": "Available modules", "text": "

The overview below shows which MDAnalysis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MDAnalysis, load one of these modules using a module load command like:

                  module load MDAnalysis/2.4.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MDAnalysis/2.4.2-foss-2022b x x x x x x MDAnalysis/2.4.2-foss-2021a x x x x x x"}, {"location": "available_software/detail/MDTraj/", "title": "MDTraj", "text": ""}, {"location": "available_software/detail/MDTraj/#available-modules", "title": "Available modules", "text": "

The overview below shows which MDTraj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MDTraj, load one of these modules using a module load command like:

                  module load MDTraj/1.9.7-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MDTraj/1.9.7-intel-2022a x x x - x x MDTraj/1.9.7-intel-2021b x x x - x x MDTraj/1.9.7-foss-2022a x x x - x x MDTraj/1.9.7-foss-2021a x x x - x x MDTraj/1.9.5-intel-2020b - x x - x x MDTraj/1.9.5-fosscuda-2020b x - - - x - MDTraj/1.9.5-foss-2020b - x x x x x MDTraj/1.9.4-intel-2020a-Python-3.8.2 - x x - x x MDTraj/1.9.3-intel-2019b-Python-3.7.4 - x x - x x MDTraj/1.9.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MEGA/", "title": "MEGA", "text": ""}, {"location": "available_software/detail/MEGA/#available-modules", "title": "Available modules", "text": "

The overview below shows which MEGA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEGA, load one of these modules using a module load command like:

                  module load MEGA/11.0.10\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGA/11.0.10 - x x - x -"}, {"location": "available_software/detail/MEGAHIT/", "title": "MEGAHIT", "text": ""}, {"location": "available_software/detail/MEGAHIT/#available-modules", "title": "Available modules", "text": "

The overview below shows which MEGAHIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEGAHIT, load one of these modules using a module load command like:

                  module load MEGAHIT/1.2.9-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGAHIT/1.2.9-GCCcore-12.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.2.0 x x x - x x MEGAHIT/1.2.9-GCCcore-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MEGAN/", "title": "MEGAN", "text": ""}, {"location": "available_software/detail/MEGAN/#available-modules", "title": "Available modules", "text": "

The overview below shows which MEGAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEGAN, load one of these modules using a module load command like:

                  module load MEGAN/6.25.3-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGAN/6.25.3-Java-17 x x x x x x"}, {"location": "available_software/detail/MEM/", "title": "MEM", "text": ""}, {"location": "available_software/detail/MEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which MEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEM, load one of these modules using a module load command like:

                  module load MEM/20191023-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEM/20191023-foss-2020a-R-4.0.0 - - x - x - MEM/20191023-foss-2019b - x x - x -"}, {"location": "available_software/detail/MEME/", "title": "MEME", "text": ""}, {"location": "available_software/detail/MEME/#available-modules", "title": "Available modules", "text": "

The overview below shows which MEME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEME, load one of these modules using a module load command like:

                  module load MEME/5.5.4-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEME/5.5.4-gompi-2022b x x x x x x MEME/5.4.1-gompi-2021b-Python-2.7.18 x x x - x x"}, {"location": "available_software/detail/MESS/", "title": "MESS", "text": ""}, {"location": "available_software/detail/MESS/#available-modules", "title": "Available modules", "text": "

The overview below shows which MESS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MESS, load one of these modules using a module load command like:

                  module load MESS/0.1.6-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MESS/0.1.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/METIS/", "title": "METIS", "text": ""}, {"location": "available_software/detail/METIS/#available-modules", "title": "Available modules", "text": "

The overview below shows which METIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using METIS, load one of these modules using a module load command like:

                  module load METIS/5.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty METIS/5.1.0-GCCcore-12.3.0 x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x METIS/5.1.0-GCCcore-11.3.0 x x x x x x METIS/5.1.0-GCCcore-11.2.0 x x x x x x METIS/5.1.0-GCCcore-10.3.0 x x x x x x METIS/5.1.0-GCCcore-10.2.0 x x x x x x METIS/5.1.0-GCCcore-9.3.0 - x x - x x METIS/5.1.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MIGRATE-N/", "title": "MIGRATE-N", "text": ""}, {"location": "available_software/detail/MIGRATE-N/#available-modules", "title": "Available modules", "text": "

The overview below shows which MIGRATE-N installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MIGRATE-N, load one of these modules using a module load command like:

                  module load MIGRATE-N/5.0.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MIGRATE-N/5.0.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/MMseqs2/", "title": "MMseqs2", "text": ""}, {"location": "available_software/detail/MMseqs2/#available-modules", "title": "Available modules", "text": "

The overview below shows which MMseqs2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MMseqs2, load one of these modules using a module load command like:

                  module load MMseqs2/14-7e284-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MMseqs2/14-7e284-gompi-2023a x x x x x x MMseqs2/14-7e284-gompi-2022a x x x x x x MMseqs2/13-45111-gompi-2021b x x x - x x MMseqs2/13-45111-gompi-2021a x x x - x x MMseqs2/13-45111-gompi-2020b x x x x x x MMseqs2/13-45111-20211019-gompi-2020b - x x x x x MMseqs2/13-45111-20211006-gompi-2020b - x x x x - MMseqs2/12-113e3-gompi-2020b - x - - - - MMseqs2/11-e1a1c-iimpi-2019b - x - - - x MMseqs2/10-6d92c-iimpi-2019b - x x - x x MMseqs2/10-6d92c-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MOABS/", "title": "MOABS", "text": ""}, {"location": "available_software/detail/MOABS/#available-modules", "title": "Available modules", "text": "

The overview below shows which MOABS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MOABS, load one of these modules using a module load command like:

                  module load MOABS/1.3.9.6-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MOABS/1.3.9.6-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MONAI/", "title": "MONAI", "text": ""}, {"location": "available_software/detail/MONAI/#available-modules", "title": "Available modules", "text": "

The overview below shows which MONAI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MONAI, load one of these modules using a module load command like:

                  module load MONAI/1.0.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MONAI/1.0.1-foss-2022a-CUDA-11.7.0 x - - - x - MONAI/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MOOSE/", "title": "MOOSE", "text": ""}, {"location": "available_software/detail/MOOSE/#available-modules", "title": "Available modules", "text": "

The overview below shows which MOOSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MOOSE, load one of these modules using a module load command like:

                  module load MOOSE/2022-06-10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MOOSE/2022-06-10-foss-2022a x x x - x x MOOSE/2021-05-18-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MPC/", "title": "MPC", "text": ""}, {"location": "available_software/detail/MPC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MPC, load one of these modules using a module load command like:

                  module load MPC/1.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MPC/1.3.1-GCCcore-12.3.0 x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x MPC/1.2.1-GCCcore-11.3.0 x x x x x x MPC/1.2.1-GCCcore-11.2.0 x x x x x x MPC/1.2.1-GCCcore-10.2.0 - x x x x x MPC/1.1.0-GCC-9.3.0 - x x - x x MPC/1.1.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/MPFR/", "title": "MPFR", "text": ""}, {"location": "available_software/detail/MPFR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MPFR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MPFR, load one of these modules using a module load command like:

                  module load MPFR/4.2.0-GCCcore-12.3.0\n
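
                  MPFR is a library rather than a command-line tool; a minimal sketch of compiling a C program against it after loading the module (my_prog.c is a placeholder source file, and GMP is assumed to be available as an MPFR dependency) might be:

                  gcc my_prog.c -lmpfr -lgmp -o my_prog\n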

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MPFR/4.2.0-GCCcore-12.3.0 x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x MPFR/4.1.0-GCCcore-11.3.0 x x x x x x MPFR/4.1.0-GCCcore-11.2.0 x x x x x x MPFR/4.1.0-GCCcore-10.3.0 x x x x x x MPFR/4.1.0-GCCcore-10.2.0 x x x x x x MPFR/4.0.2-GCCcore-9.3.0 - x x - x x MPFR/4.0.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MRtrix/", "title": "MRtrix", "text": ""}, {"location": "available_software/detail/MRtrix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MRtrix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MRtrix, load one of these modules using a module load command like:

                  module load MRtrix/3.0.4-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MRtrix/3.0.4-foss-2022b x x x x x x MRtrix/3.0.3-foss-2021a - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-3.7.4 - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MSFragger/", "title": "MSFragger", "text": ""}, {"location": "available_software/detail/MSFragger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MSFragger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MSFragger, load one of these modules using a module load command like:

                  module load MSFragger/4.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MSFragger/4.0-Java-11 x x x x x x"}, {"location": "available_software/detail/MUMPS/", "title": "MUMPS", "text": ""}, {"location": "available_software/detail/MUMPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUMPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MUMPS, load one of these modules using a module load command like:

                  module load MUMPS/5.6.1-foss-2023a-metis\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUMPS/5.6.1-foss-2023a-metis x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x MUMPS/5.5.1-foss-2022a-metis x x x x x x MUMPS/5.4.1-intel-2021b-metis x x x x x x MUMPS/5.4.1-foss-2021b-metis x x x - x x MUMPS/5.4.0-foss-2021a-metis - x x - x x MUMPS/5.3.5-foss-2020b-metis - x x x x x MUMPS/5.2.1-intel-2020a-metis - x x - x x MUMPS/5.2.1-intel-2019b-metis - x x - x x MUMPS/5.2.1-foss-2020a-metis - x x - x x MUMPS/5.2.1-foss-2019b-metis x x x - x x"}, {"location": "available_software/detail/MUMmer/", "title": "MUMmer", "text": ""}, {"location": "available_software/detail/MUMmer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUMmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MUMmer, load one of these modules using a module load command like:

                  module load MUMmer/4.0.0rc1-GCCcore-12.3.0\n
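
                  After loading the module, a whole-genome alignment with MUMmer's nucmer tool might look like this (reference.fasta and query.fasta are placeholder inputs):

                  nucmer --prefix=ref_qry reference.fasta query.fasta\n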

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUMmer/4.0.0rc1-GCCcore-12.3.0 x x x x x x MUMmer/4.0.0beta2-GCCcore-11.2.0 x x x - x x MUMmer/4.0.0beta2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MUSCLE/", "title": "MUSCLE", "text": ""}, {"location": "available_software/detail/MUSCLE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUSCLE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MUSCLE, load one of these modules using a module load command like:

                  module load MUSCLE/5.1.0-GCCcore-12.3.0\n
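
                  With a MUSCLE 5 module loaded, a basic multiple sequence alignment could be run as follows (seqs.fasta is a placeholder input; note that the older MUSCLE 3.x versions use -in/-out options instead):

                  muscle -align seqs.fasta -output seqs.afa\n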

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUSCLE/5.1.0-GCCcore-12.3.0 x x x x x x MUSCLE/5.1.0-GCCcore-11.3.0 x x x x x x MUSCLE/5.1-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.1551-GCC-10.2.0 - x x - x x MUSCLE/3.8.1551-GCC-8.3.0 - x x - x x MUSCLE/3.8.31-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MXNet/", "title": "MXNet", "text": ""}, {"location": "available_software/detail/MXNet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MXNet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MXNet, load one of these modules using a module load command like:

                  module load MXNet/1.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MXNet/1.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MaSuRCA/", "title": "MaSuRCA", "text": ""}, {"location": "available_software/detail/MaSuRCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MaSuRCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MaSuRCA, load one of these modules using a module load command like:

                  module load MaSuRCA/4.1.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MaSuRCA/4.1.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Mako/", "title": "Mako", "text": ""}, {"location": "available_software/detail/Mako/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mako installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mako, load one of these modules using a module load command like:

                  module load Mako/1.2.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mako/1.2.4-GCCcore-12.3.0 x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x Mako/1.2.0-GCCcore-11.3.0 x x x x x x Mako/1.1.4-GCCcore-11.2.0 x x x x x x Mako/1.1.4-GCCcore-10.3.0 x x x x x x Mako/1.1.3-GCCcore-10.2.0 x x x x x x Mako/1.1.2-GCCcore-9.3.0 - x x - x x Mako/1.1.0-GCCcore-8.3.0 x x x - x x Mako/1.0.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/MariaDB-connector-c/", "title": "MariaDB-connector-c", "text": ""}, {"location": "available_software/detail/MariaDB-connector-c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MariaDB-connector-c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MariaDB-connector-c, load one of these modules using a module load command like:

                  module load MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MariaDB-connector-c/3.1.7-GCCcore-9.3.0 - x x - x x MariaDB-connector-c/2.3.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MariaDB/", "title": "MariaDB", "text": ""}, {"location": "available_software/detail/MariaDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MariaDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MariaDB, load one of these modules using a module load command like:

                  module load MariaDB/10.9.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MariaDB/10.9.3-GCC-11.3.0 x x x x x x MariaDB/10.6.4-GCC-11.2.0 x x x x x x MariaDB/10.6.4-GCC-10.3.0 x x x - x x MariaDB/10.5.8-GCC-10.2.0 - x x x x x MariaDB/10.4.13-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Mash/", "title": "Mash", "text": ""}, {"location": "available_software/detail/Mash/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mash, load one of these modules using a module load command like:

                  module load Mash/2.3-intel-compilers-2021.4.0\n
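
                  Once loaded, Mash can estimate the distance between two genomes directly from their sequence files; a minimal sketch (genome1.fna and genome2.fna are placeholder files) is:

                  mash dist genome1.fna genome2.fna\n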

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mash/2.3-intel-compilers-2021.4.0 x x x - x x Mash/2.3-GCC-12.3.0 x x x x x x Mash/2.3-GCC-11.2.0 x x x - x x Mash/2.2-GCC-9.3.0 - x x x - x"}, {"location": "available_software/detail/Maven/", "title": "Maven", "text": ""}, {"location": "available_software/detail/Maven/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Maven installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Maven, load one of these modules using a module load command like:

                  module load Maven/3.6.3\n
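
                  After loading the module, Maven is available as mvn; for example, to check the version and build a project from a directory that contains a pom.xml:

                  mvn -version\n

                  mvn package\n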

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Maven/3.6.3 x x x x x x Maven/3.6.0 - - x - x -"}, {"location": "available_software/detail/MaxBin/", "title": "MaxBin", "text": ""}, {"location": "available_software/detail/MaxBin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MaxBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MaxBin, load one of these modules using a module load command like:

                  module load MaxBin/2.2.7-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MaxBin/2.2.7-gompi-2021b x x x - x x MaxBin/2.2.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MedPy/", "title": "MedPy", "text": ""}, {"location": "available_software/detail/MedPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MedPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MedPy, load one of these modules using a module load command like:

                  module load MedPy/0.4.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MedPy/0.4.0-fosscuda-2020b x - - - x - MedPy/0.4.0-foss-2020b - x x x x x MedPy/0.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Megalodon/", "title": "Megalodon", "text": ""}, {"location": "available_software/detail/Megalodon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Megalodon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Megalodon, load one of these modules using a module load command like:

                  module load Megalodon/2.3.5-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Megalodon/2.3.5-fosscuda-2020b x - - - x - Megalodon/2.3.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/Mercurial/", "title": "Mercurial", "text": ""}, {"location": "available_software/detail/Mercurial/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mercurial installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mercurial, load one of these modules using a module load command like:

                  module load Mercurial/6.2-GCCcore-11.3.0\n
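
                  With the module loaded, the hg command becomes available; for example, to check the version and clone a repository (the URL below is a placeholder):

                  hg version\n

                  hg clone https://example.org/some-repo\n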

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mercurial/6.2-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Mesa/", "title": "Mesa", "text": ""}, {"location": "available_software/detail/Mesa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mesa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mesa, load one of these modules using a module load command like:

                  module load Mesa/23.1.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mesa/23.1.4-GCCcore-12.3.0 x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x Mesa/22.0.3-GCCcore-11.3.0 x x x x x x Mesa/21.1.7-GCCcore-11.2.0 x x x x x x Mesa/21.1.1-GCCcore-10.3.0 x x x x x x Mesa/20.2.1-GCCcore-10.2.0 x x x x x x Mesa/20.0.2-GCCcore-9.3.0 - x x - x x Mesa/19.2.1-GCCcore-8.3.0 - x x - x x Mesa/19.1.7-GCCcore-8.3.0 x x x - x x Mesa/19.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Meson/", "title": "Meson", "text": ""}, {"location": "available_software/detail/Meson/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Meson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Meson, load one of these modules using a module load command like:

                  module load Meson/1.2.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Meson/1.2.3-GCCcore-13.2.0 x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x Meson/0.62.1-GCCcore-11.3.0 x x x x x x Meson/0.59.1-GCCcore-8.3.0-Python-3.7.4 x - x - x x Meson/0.58.2-GCCcore-11.2.0 x x x x x x Meson/0.58.0-GCCcore-10.3.0 x x x x x x Meson/0.55.3-GCCcore-10.2.0 x x x x x x Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x Meson/0.53.2-GCCcore-9.3.0-Python-3.8.2 - x x - x x Meson/0.51.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x Meson/0.50.0-GCCcore-8.2.0-Python-3.7.2 - x - - - -"}, {"location": "available_software/detail/Mesquite/", "title": "Mesquite", "text": ""}, {"location": "available_software/detail/Mesquite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mesquite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mesquite, load one of these modules using a module load command like:

                  module load Mesquite/2.3.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mesquite/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MetaBAT/", "title": "MetaBAT", "text": ""}, {"location": "available_software/detail/MetaBAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaBAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MetaBAT, load one of these modules using a module load command like:

                  module load MetaBAT/2.15-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaBAT/2.15-gompi-2021b x x x - x x MetaBAT/2.15-gompi-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MetaEuk/", "title": "MetaEuk", "text": ""}, {"location": "available_software/detail/MetaEuk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaEuk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MetaEuk, load one of these modules using a module load command like:

                  module load MetaEuk/6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaEuk/6-GCC-11.2.0 x x x - x x MetaEuk/4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/MetaPhlAn/", "title": "MetaPhlAn", "text": ""}, {"location": "available_software/detail/MetaPhlAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MetaPhlAn, load one of these modules using a module load command like:

                  module load MetaPhlAn/4.0.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaPhlAn/4.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/Metagenome-Atlas/", "title": "Metagenome-Atlas", "text": ""}, {"location": "available_software/detail/Metagenome-Atlas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Metagenome-Atlas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Metagenome-Atlas, load one of these modules using a module load command like:

                  module load Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/MethylDackel/", "title": "MethylDackel", "text": ""}, {"location": "available_software/detail/MethylDackel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MethylDackel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MethylDackel, load one of these modules using a module load command like:

                  module load MethylDackel/0.5.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MethylDackel/0.5.0-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/MiXCR/", "title": "MiXCR", "text": ""}, {"location": "available_software/detail/MiXCR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MiXCR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MiXCR, load one of these modules using a module load command like:

                  module load MiXCR/4.6.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MiXCR/4.6.0-Java-17 x x x x x x MiXCR/3.0.13-Java-11 - x x - x -"}, {"location": "available_software/detail/MicrobeAnnotator/", "title": "MicrobeAnnotator", "text": ""}, {"location": "available_software/detail/MicrobeAnnotator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MicrobeAnnotator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MicrobeAnnotator, load one of these modules using a module load command like:

                  module load MicrobeAnnotator/2.0.5-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MicrobeAnnotator/2.0.5-foss-2021a - x x - x x"}, {"location": "available_software/detail/Mikado/", "title": "Mikado", "text": ""}, {"location": "available_software/detail/Mikado/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mikado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mikado, load one of these modules using a module load command like:

                  module load Mikado/2.3.4-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mikado/2.3.4-foss-2022b x x x x x x"}, {"location": "available_software/detail/MinCED/", "title": "MinCED", "text": ""}, {"location": "available_software/detail/MinCED/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MinCED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MinCED, load one of these modules using a module load command like:

                  module load MinCED/0.4.2-GCCcore-8.3.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MinCED/0.4.2-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/MinPath/", "title": "MinPath", "text": ""}, {"location": "available_software/detail/MinPath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MinPath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MinPath, load one of these modules using a module load command like:

                  module load MinPath/1.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MinPath/1.6-GCCcore-11.2.0 x x x - x x MinPath/1.4-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Miniconda3/", "title": "Miniconda3", "text": ""}, {"location": "available_software/detail/Miniconda3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Miniconda3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Miniconda3, load one of these modules using a module load command like:

                  module load Miniconda3/23.5.2-0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Miniconda3/23.5.2-0 x x x x x x Miniconda3/22.11.1-1 x x x x x x Miniconda3/4.9.2 - x x - x x Miniconda3/4.8.3 - x x - x x Miniconda3/4.7.10 - - - - - x"}, {"location": "available_software/detail/Minipolish/", "title": "Minipolish", "text": ""}, {"location": "available_software/detail/Minipolish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Minipolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Minipolish, load one of these modules using a module load command like:

                  module load Minipolish/0.1.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Minipolish/0.1.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/MitoHiFi/", "title": "MitoHiFi", "text": ""}, {"location": "available_software/detail/MitoHiFi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MitoHiFi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MitoHiFi, load one of these modules using a module load command like:

                  module load MitoHiFi/3.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MitoHiFi/3.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/ModelTest-NG/", "title": "ModelTest-NG", "text": ""}, {"location": "available_software/detail/ModelTest-NG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ModelTest-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ModelTest-NG, load one of these modules using a module load command like:

                  module load ModelTest-NG/0.1.7-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ModelTest-NG/0.1.7-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Molden/", "title": "Molden", "text": ""}, {"location": "available_software/detail/Molden/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Molden installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Molden, load one of these modules using a module load command like:

                  module load Molden/6.8-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Molden/6.8-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Molekel/", "title": "Molekel", "text": ""}, {"location": "available_software/detail/Molekel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Molekel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Molekel, load one of these modules using a module load command like:

                  module load Molekel/5.4.0-Linux_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Molekel/5.4.0-Linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Mono/", "title": "Mono", "text": ""}, {"location": "available_software/detail/Mono/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mono installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mono, load one of these modules using a module load command like:

                  module load Mono/6.8.0.105-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mono/6.8.0.105-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Monocle3/", "title": "Monocle3", "text": ""}, {"location": "available_software/detail/Monocle3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Monocle3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Monocle3, load one of these modules using a module load command like:

                  module load Monocle3/1.3.1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Monocle3/1.3.1-foss-2022a-R-4.2.1 x x x x x x Monocle3/0.2.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/MrBayes/", "title": "MrBayes", "text": ""}, {"location": "available_software/detail/MrBayes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MrBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MrBayes, load one of these modules using a module load command like:

                  module load MrBayes/3.2.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MrBayes/3.2.7-gompi-2020b - x x x x x MrBayes/3.2.6-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MuJoCo/", "title": "MuJoCo", "text": ""}, {"location": "available_software/detail/MuJoCo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MuJoCo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MuJoCo, load one of these modules using a module load command like:

                  module load MuJoCo/2.3.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MuJoCo/2.3.7-GCCcore-12.3.0 x x x x x x MuJoCo/2.1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/MultiQC/", "title": "MultiQC", "text": ""}, {"location": "available_software/detail/MultiQC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MultiQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MultiQC, load one of these modules using a module load command like:

                  module load MultiQC/1.14-foss-2022a\n
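
                  After loading the module, MultiQC can aggregate all the analysis reports it finds under the current directory into a single HTML report:

                  multiqc .\n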

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MultiQC/1.14-foss-2022a x x x x x x MultiQC/1.9-intel-2020a-Python-3.8.2 - x x - x x MultiQC/1.8-intel-2019b-Python-3.7.4 - x x - x x MultiQC/1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MultilevelEstimators/", "title": "MultilevelEstimators", "text": ""}, {"location": "available_software/detail/MultilevelEstimators/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MultilevelEstimators installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MultilevelEstimators, load one of these modules using a module load command like:

                  module load MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2 x x x - x x"}, {"location": "available_software/detail/Multiwfn/", "title": "Multiwfn", "text": ""}, {"location": "available_software/detail/Multiwfn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Multiwfn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Multiwfn, load one of these modules using a module load command like:

                  module load Multiwfn/3.6-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Multiwfn/3.6-intel-2019b - x x - x x"}, {"location": "available_software/detail/MyCC/", "title": "MyCC", "text": ""}, {"location": "available_software/detail/MyCC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MyCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MyCC, load one of these modules using a module load command like:

                  module load MyCC/2017-03-01-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MyCC/2017-03-01-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Myokit/", "title": "Myokit", "text": ""}, {"location": "available_software/detail/Myokit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Myokit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Myokit, load one of these modules using a module load command like:

                  module load Myokit/1.32.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Myokit/1.32.0-fosscuda-2020b - - - - x - Myokit/1.32.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/NAMD/", "title": "NAMD", "text": ""}, {"location": "available_software/detail/NAMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NAMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NAMD, load one of these modules using a module load command like:

                  module load NAMD/2.14-foss-2023a-mpi\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NAMD/2.14-foss-2023a-mpi x x x x x x NAMD/2.14-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/NASM/", "title": "NASM", "text": ""}, {"location": "available_software/detail/NASM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NASM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NASM, load one of these modules using a module load command like:

                  module load NASM/2.16.01-GCCcore-13.2.0\n
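
                  With the module loaded, NASM can assemble a source file into an object file; a minimal sketch (hello.asm is a placeholder 64-bit Linux assembly source) is:

                  nasm -f elf64 hello.asm -o hello.o\n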

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NASM/2.16.01-GCCcore-13.2.0 x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x NASM/2.15.05-GCCcore-11.3.0 x x x x x x NASM/2.15.05-GCCcore-11.2.0 x x x x x x NASM/2.15.05-GCCcore-10.3.0 x x x x x x NASM/2.15.05-GCCcore-10.2.0 x x x x x x NASM/2.14.02-GCCcore-9.3.0 - x x - x x NASM/2.14.02-GCCcore-8.3.0 x x x - x x NASM/2.14.02-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/NCCL/", "title": "NCCL", "text": ""}, {"location": "available_software/detail/NCCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NCCL, load one of these modules using a module load command like:

                  module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - NCCL/2.10.3-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - NCCL/2.10.3-GCCcore-10.3.0-CUDA-11.3.1 x - - - x - NCCL/2.8.3-GCCcore-10.2.0-CUDA-11.1.1 x - - - x x NCCL/2.8.3-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/NCL/", "title": "NCL", "text": ""}, {"location": "available_software/detail/NCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NCL, load one of these modules using a module load command like:

                  module load NCL/6.6.2-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCL/6.6.2-intel-2019b - - x - x x"}, {"location": "available_software/detail/NCO/", "title": "NCO", "text": ""}, {"location": "available_software/detail/NCO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NCO, load one of these modules using a module load command like:

                  module load NCO/5.0.6-intel-2019b\n
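
                  Once loaded, the NCO operators (ncks, ncra, ...) are on the PATH; for example, to print the metadata of a netCDF file (file.nc is a placeholder):

                  ncks -M file.nc\n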

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCO/5.0.6-intel-2019b - x x - x x NCO/5.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/NECI/", "title": "NECI", "text": ""}, {"location": "available_software/detail/NECI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NECI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NECI, load one of these modules using a module load command like:

                  module load NECI/20230620-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NECI/20230620-foss-2022b x x x x x x NECI/20220711-foss-2022a - x x x x x"}, {"location": "available_software/detail/NEURON/", "title": "NEURON", "text": ""}, {"location": "available_software/detail/NEURON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NEURON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NEURON, load one of these modules using a module load command like:

                  module load NEURON/7.8.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NEURON/7.8.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/NGS/", "title": "NGS", "text": ""}, {"location": "available_software/detail/NGS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NGS, load one of these modules using a module load command like:

                  module load NGS/2.11.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NGS/2.11.2-GCCcore-11.2.0 x x x x x x NGS/2.10.9-GCCcore-10.2.0 - x x x x x NGS/2.10.5-GCCcore-9.3.0 - x x - x x NGS/2.10.4-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/NGSpeciesID/", "title": "NGSpeciesID", "text": ""}, {"location": "available_software/detail/NGSpeciesID/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NGSpeciesID installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NGSpeciesID, load one of these modules using a module load command like:

                  module load NGSpeciesID/0.1.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NGSpeciesID/0.1.2.1-foss-2021b x x x - x x NGSpeciesID/0.1.1.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NLMpy/", "title": "NLMpy", "text": ""}, {"location": "available_software/detail/NLMpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLMpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NLMpy, load one of these modules using a module load command like:

                  module load NLMpy/0.1.5-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLMpy/0.1.5-intel-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/NLTK/", "title": "NLTK", "text": ""}, {"location": "available_software/detail/NLTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NLTK, load one of these modules using a module load command like:

                  module load NLTK/3.8.1-foss-2022b\n
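
                  NLTK is a Python package, so after loading the module it is used from Python; a quick check that it imports correctly from the module's Python is:

                  python -c "import nltk; print(nltk.__version__)"\n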

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLTK/3.8.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/NLopt/", "title": "NLopt", "text": ""}, {"location": "available_software/detail/NLopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NLopt, load one of these modules using a module load command like:

                  module load NLopt/2.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLopt/2.7.1-GCCcore-12.3.0 x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x NLopt/2.7.1-GCCcore-11.3.0 x x x x x x NLopt/2.7.0-GCCcore-11.2.0 x x x x x x NLopt/2.7.0-GCCcore-10.3.0 x x x x x x NLopt/2.6.2-GCCcore-10.2.0 x x x x x x NLopt/2.6.1-GCCcore-9.3.0 - x x - x x NLopt/2.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/NOVOPlasty/", "title": "NOVOPlasty", "text": ""}, {"location": "available_software/detail/NOVOPlasty/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NOVOPlasty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NOVOPlasty, load one of these modules using a module load command like:

                  module load NOVOPlasty/3.7-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NOVOPlasty/3.7-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/NSPR/", "title": "NSPR", "text": ""}, {"location": "available_software/detail/NSPR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NSPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NSPR, load one of these modules using a module load command like:

                  module load NSPR/4.35-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NSPR/4.35-GCCcore-12.3.0 x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x NSPR/4.34-GCCcore-11.3.0 x x x x x x NSPR/4.32-GCCcore-11.2.0 x x x x x x NSPR/4.30-GCCcore-10.3.0 x x x x x x NSPR/4.29-GCCcore-10.2.0 x x x x x x NSPR/4.25-GCCcore-9.3.0 - x x - x x NSPR/4.21-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NSS/", "title": "NSS", "text": ""}, {"location": "available_software/detail/NSS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NSS, load one of these modules using a module load command like:

                  module load NSS/3.89.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NSS/3.89.1-GCCcore-12.3.0 x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x NSS/3.79-GCCcore-11.3.0 x x x x x x NSS/3.69-GCCcore-11.2.0 x x x x x x NSS/3.65-GCCcore-10.3.0 x x x x x x NSS/3.57-GCCcore-10.2.0 x x x x x x NSS/3.51-GCCcore-9.3.0 - x x - x x NSS/3.45-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NVHPC/", "title": "NVHPC", "text": ""}, {"location": "available_software/detail/NVHPC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NVHPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NVHPC, load one of these modules using a module load command like:

                  module load NVHPC/21.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NVHPC/21.2 x - x - x - NVHPC/20.9 - - - - x -"}, {"location": "available_software/detail/NanoCaller/", "title": "NanoCaller", "text": ""}, {"location": "available_software/detail/NanoCaller/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoCaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoCaller, load one of these modules using a module load command like:

                  module load NanoCaller/3.4.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoCaller/3.4.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/NanoComp/", "title": "NanoComp", "text": ""}, {"location": "available_software/detail/NanoComp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoComp, load one of these modules using a module load command like:

                  module load NanoComp/1.13.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoComp/1.13.1-intel-2020b - x x - x x NanoComp/1.10.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoFilt/", "title": "NanoFilt", "text": ""}, {"location": "available_software/detail/NanoFilt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoFilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoFilt, load one of these modules using a module load command like:

                  module load NanoFilt/2.6.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoFilt/2.6.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoPlot/", "title": "NanoPlot", "text": ""}, {"location": "available_software/detail/NanoPlot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoPlot, load one of these modules using a module load command like:

                  module load NanoPlot/1.33.0-intel-2020b\n
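
                  After loading the module, a basic NanoPlot run on a set of long reads might look like this (reads.fastq.gz and the output directory name are placeholders):

                  NanoPlot --fastq reads.fastq.gz -o nanoplot_output\n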

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoPlot/1.33.0-intel-2020b - x x - x x NanoPlot/1.28.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoStat/", "title": "NanoStat", "text": ""}, {"location": "available_software/detail/NanoStat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoStat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoStat, load one of these modules using a module load command like:

                  module load NanoStat/1.6.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoStat/1.6.0-foss-2022a x x x x x x NanoStat/1.6.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/NanopolishComp/", "title": "NanopolishComp", "text": ""}, {"location": "available_software/detail/NanopolishComp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanopolishComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanopolishComp, load one of these modules using a module load command like:

                  module load NanopolishComp/0.6.11-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanopolishComp/0.6.11-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/NetPyNE/", "title": "NetPyNE", "text": ""}, {"location": "available_software/detail/NetPyNE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NetPyNE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NetPyNE, load one of these modules using a module load command like:

                  module load NetPyNE/1.0.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NetPyNE/1.0.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/NewHybrids/", "title": "NewHybrids", "text": ""}, {"location": "available_software/detail/NewHybrids/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NewHybrids installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NewHybrids, load one of these modules using a module load command like:

                  module load NewHybrids/1.1_Beta3-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NewHybrids/1.1_Beta3-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/NextGenMap/", "title": "NextGenMap", "text": ""}, {"location": "available_software/detail/NextGenMap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NextGenMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NextGenMap, load one of these modules using a module load command like:

                  module load NextGenMap/0.5.5-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NextGenMap/0.5.5-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Nextflow/", "title": "Nextflow", "text": ""}, {"location": "available_software/detail/Nextflow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Nextflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Nextflow, load one of these modules using a module load command like:

                  module load Nextflow/23.10.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nextflow/23.10.0 x x x x x x Nextflow/23.04.2 x x x x x x Nextflow/22.10.5 x x x x x x Nextflow/22.10.0 x x x - x x Nextflow/21.10.6 - x x - x x Nextflow/21.08.0 - - - - - x Nextflow/21.03.0 - x x - x x Nextflow/20.10.0 - x x - x x Nextflow/20.04.1 - - x - x x Nextflow/20.01.0 - - x - x x Nextflow/19.12.0 - - x - x x"}, {"location": "available_software/detail/NiBabel/", "title": "NiBabel", "text": ""}, {"location": "available_software/detail/NiBabel/#available-modules", "title": "Available modules", "text": "
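                  As a quick check that a loaded Nextflow module is picked up correctly, a minimal sketch (the version string is taken from the list above; run it in a shell on a login or compute node):

                  module load Nextflow/23.10.0
                  nextflow -version    # prints the Nextflow version provided by the loaded module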

                  The overview below shows which NiBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NiBabel, load one of these modules using a module load command like:

                  module load NiBabel/4.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NiBabel/4.0.2-foss-2022a x x x x x x NiBabel/3.2.1-fosscuda-2020b x - - - x - NiBabel/3.2.1-foss-2021a x x x - x x NiBabel/3.2.1-foss-2020b - x x x x x NiBabel/3.1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Nim/", "title": "Nim", "text": ""}, {"location": "available_software/detail/Nim/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Nim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Nim, load one of these modules using a module load command like:

                  module load Nim/1.6.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nim/1.6.6-GCCcore-11.2.0 x x x - x x Nim/1.4.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Ninja/", "title": "Ninja", "text": ""}, {"location": "available_software/detail/Ninja/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Ninja installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Ninja, load one of these modules using a module load command like:

                  module load Ninja/1.11.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ninja/1.11.1-GCCcore-13.2.0 x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x Ninja/1.10.2-GCCcore-11.3.0 x x x x x x Ninja/1.10.2-GCCcore-11.2.0 x x x x x x Ninja/1.10.2-GCCcore-10.3.0 x x x x x x Ninja/1.10.1-GCCcore-10.2.0 x x x x x x Ninja/1.10.0-GCCcore-9.3.0 x x x x x x Ninja/1.9.0-GCCcore-8.3.0 x x x - x x Ninja/1.9.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Nipype/", "title": "Nipype", "text": ""}, {"location": "available_software/detail/Nipype/#available-modules", "title": "Available modules", "text": "
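                  A minimal sketch of verifying a loaded Ninja module (version taken from the list above):

                  module load Ninja/1.11.1-GCCcore-13.2.0
                  ninja --version    # should report 1.11.1 while the module is active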

                  The overview below shows which Nipype installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Nipype, load one of these modules using a module load command like:

                  module load Nipype/1.8.5-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nipype/1.8.5-foss-2021a x x x - x x Nipype/1.4.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OBITools3/", "title": "OBITools3", "text": ""}, {"location": "available_software/detail/OBITools3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OBITools3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OBITools3, load one of these modules using a module load command like:

                  module load OBITools3/3.0.1b26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OBITools3/3.0.1b26-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ONNX-Runtime/", "title": "ONNX-Runtime", "text": ""}, {"location": "available_software/detail/ONNX-Runtime/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ONNX-Runtime installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ONNX-Runtime, load one of these modules using a module load command like:

                  module load ONNX-Runtime/1.16.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ONNX-Runtime/1.16.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/ONNX/", "title": "ONNX", "text": ""}, {"location": "available_software/detail/ONNX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ONNX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ONNX, load one of these modules using a module load command like:

                  module load ONNX/1.15.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ONNX/1.15.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/OPERA-MS/", "title": "OPERA-MS", "text": ""}, {"location": "available_software/detail/OPERA-MS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OPERA-MS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OPERA-MS, load one of these modules using a module load command like:

                  module load OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ORCA/", "title": "ORCA", "text": ""}, {"location": "available_software/detail/ORCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ORCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ORCA, load one of these modules using a module load command like:

                  module load ORCA/5.0.4-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ORCA/5.0.4-gompi-2022a x x x x x x ORCA/5.0.3-gompi-2021b x x x x x x ORCA/5.0.2-gompi-2021b x x x x x x ORCA/4.2.1-gompi-2019b - x x - x x ORCA/4.2.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OSU-Micro-Benchmarks/", "title": "OSU-Micro-Benchmarks", "text": ""}, {"location": "available_software/detail/OSU-Micro-Benchmarks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

                  module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x OSU-Micro-Benchmarks/7.1-1-iimpi-2023a x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a - x - - - - OSU-Micro-Benchmarks/5.8-iimpi-2021b x x x - x x OSU-Micro-Benchmarks/5.7.1-iompi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-iimpi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-gompi-2021b x x x - x x OSU-Micro-Benchmarks/5.7-iimpi-2020b - - x x x x OSU-Micro-Benchmarks/5.7-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020b - x x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-iimpi-2019b - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-gompi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Oases/", "title": "Oases", "text": ""}, {"location": "available_software/detail/Oases/#available-modules", "title": "Available modules", "text": "
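                  A minimal sketch of running one of the benchmarks from a loaded OSU-Micro-Benchmarks module (version taken from the list above; this assumes the module puts benchmark binaries such as osu_latency on the PATH and that the MPI launcher from the gompi toolchain is available):

                  module load OSU-Micro-Benchmarks/7.2-gompi-2023b
                  mpirun -np 2 osu_latency    # point-to-point latency test between two MPI ranks (assumes osu_latency is on PATH via the module)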

                  The overview below shows which Oases installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Oases, load one of these modules using a module load command like:

                  module load Oases/20180312-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Oases/20180312-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Omnipose/", "title": "Omnipose", "text": ""}, {"location": "available_software/detail/Omnipose/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Omnipose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Omnipose, load one of these modules using a module load command like:

                  module load Omnipose/0.4.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Omnipose/0.4.4-foss-2022a-CUDA-11.7.0 x - - - x - Omnipose/0.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/OpenAI-Gym/", "title": "OpenAI-Gym", "text": ""}, {"location": "available_software/detail/OpenAI-Gym/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenAI-Gym installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenAI-Gym, load one of these modules using a module load command like:

                  module load OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenBLAS/", "title": "OpenBLAS", "text": ""}, {"location": "available_software/detail/OpenBLAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenBLAS, load one of these modules using a module load command like:

                  module load OpenBLAS/0.3.24-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x OpenBLAS/0.3.20-GCC-11.3.0 x x x x x x OpenBLAS/0.3.18-GCC-11.2.0 x x x x x x OpenBLAS/0.3.15-GCC-10.3.0 x x x x x x OpenBLAS/0.3.12-GCC-10.2.0 x x x x x x OpenBLAS/0.3.9-GCC-9.3.0 - x x - x x OpenBLAS/0.3.7-GCC-8.3.0 x x x - x x"}, {"location": "available_software/detail/OpenBabel/", "title": "OpenBabel", "text": ""}, {"location": "available_software/detail/OpenBabel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenBabel, load one of these modules using a module load command like:

                  module load OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/OpenCV/", "title": "OpenCV", "text": ""}, {"location": "available_software/detail/OpenCV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenCV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenCV, load one of these modules using a module load command like:

                  module load OpenCV/4.6.0-foss-2022a-contrib\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenCV/4.6.0-foss-2022a-contrib x x x x x x OpenCV/4.6.0-foss-2022a-CUDA-11.7.0-contrib x - x - x - OpenCV/4.5.5-foss-2021b-contrib x x x - x x OpenCV/4.5.3-foss-2021a-contrib - x x - x x OpenCV/4.5.3-foss-2021a-CUDA-11.3.1-contrib x - - - x - OpenCV/4.5.1-fosscuda-2020b-contrib x - - - x - OpenCV/4.5.1-foss-2020b-contrib - x x - x x OpenCV/4.2.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenCoarrays/", "title": "OpenCoarrays", "text": ""}, {"location": "available_software/detail/OpenCoarrays/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenCoarrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenCoarrays, load one of these modules using a module load command like:

                  module load OpenCoarrays/2.8.0-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenCoarrays/2.8.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenEXR/", "title": "OpenEXR", "text": ""}, {"location": "available_software/detail/OpenEXR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenEXR, load one of these modules using a module load command like:

                  module load OpenEXR/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x OpenEXR/3.1.5-GCCcore-11.3.0 x x x x x x OpenEXR/3.1.1-GCCcore-11.2.0 x x x - x x OpenEXR/3.0.1-GCCcore-10.3.0 x x x - x x OpenEXR/2.5.5-GCCcore-10.2.0 x x x x x x OpenEXR/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenFOAM-Extend/", "title": "OpenFOAM-Extend", "text": ""}, {"location": "available_software/detail/OpenFOAM-Extend/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFOAM-Extend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenFOAM-Extend, load one of these modules using a module load command like:

                  module load OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16 - x x - x x OpenFOAM-Extend/4.1-20191120-intel-2019b-Python-2.7.16 - x x - x - OpenFOAM-Extend/4.0-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/OpenFOAM/", "title": "OpenFOAM", "text": ""}, {"location": "available_software/detail/OpenFOAM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenFOAM, load one of these modules using a module load command like:

                  module load OpenFOAM/v2206-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFOAM/v2206-foss-2022a x x x x x x OpenFOAM/v2112-foss-2021b x x x x x x OpenFOAM/v2106-foss-2021a x x x x x x OpenFOAM/v2012-foss-2020a - x x - x x OpenFOAM/v2006-foss-2020a - x x - x x OpenFOAM/v1912-foss-2019b - x x - x x OpenFOAM/v1906-foss-2019b - x x - x x OpenFOAM/10-foss-2023a x x x x x x OpenFOAM/10-foss-2022a x x x x x x OpenFOAM/9-intel-2021a - x x - x x OpenFOAM/9-foss-2021a x x x x x x OpenFOAM/8-intel-2020b - x - - - - OpenFOAM/8-foss-2020b x x x x x x OpenFOAM/8-foss-2020a - x x - x x OpenFOAM/7-foss-2019b-20200508 x x x - x x OpenFOAM/7-foss-2019b - x x - x x OpenFOAM/6-foss-2019b - x x - x x OpenFOAM/5.0-20180606-foss-2019b - x x - x x OpenFOAM/2.3.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/OpenFace/", "title": "OpenFace", "text": ""}, {"location": "available_software/detail/OpenFace/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenFace, load one of these modules using a module load command like:

                  module load OpenFace/2.2.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFace/2.2.0-foss-2021a-CUDA-11.3.1 - - - - x - OpenFace/2.2.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/OpenFold/", "title": "OpenFold", "text": ""}, {"location": "available_software/detail/OpenFold/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenFold, load one of these modules using a module load command like:

                  module load OpenFold/1.0.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFold/1.0.1-foss-2022a-CUDA-11.7.0 - - x - - - OpenFold/1.0.1-foss-2021a-CUDA-11.3.1 x - - - x - OpenFold/1.0.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/OpenForceField/", "title": "OpenForceField", "text": ""}, {"location": "available_software/detail/OpenForceField/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenForceField installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenForceField, load one of these modules using a module load command like:

                  module load OpenForceField/0.7.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenForceField/0.7.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenImageIO/", "title": "OpenImageIO", "text": ""}, {"location": "available_software/detail/OpenImageIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenImageIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenImageIO, load one of these modules using a module load command like:

                  module load OpenImageIO/2.0.12-iimpi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenImageIO/2.0.12-iimpi-2019b - x x - x x OpenImageIO/2.0.12-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenJPEG/", "title": "OpenJPEG", "text": ""}, {"location": "available_software/detail/OpenJPEG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenJPEG, load one of these modules using a module load command like:

                  module load OpenJPEG/2.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x OpenJPEG/2.5.0-GCCcore-11.3.0 x x x x x x OpenJPEG/2.4.0-GCCcore-11.2.0 x x x x x x OpenJPEG/2.4.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/OpenMM-PLUMED/", "title": "OpenMM-PLUMED", "text": ""}, {"location": "available_software/detail/OpenMM-PLUMED/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMM-PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenMM-PLUMED, load one of these modules using a module load command like:

                  module load OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMM/", "title": "OpenMM", "text": ""}, {"location": "available_software/detail/OpenMM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenMM, load one of these modules using a module load command like:

                  module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMM/8.0.0-foss-2022a-CUDA-11.7.0 x - - - x - OpenMM/8.0.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2022a-CUDA-11.7.0 - - x - - - OpenMM/7.7.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2021a-CUDA-11.3.1 x - - - x - OpenMM/7.7.0-foss-2021a x x x - x x OpenMM/7.5.1-fosscuda-2020b x - - - x - OpenMM/7.5.1-foss-2021b-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021b-CUDA-11.4.1-DeepMind-patch x - - - x - OpenMM/7.5.1-foss-2021a-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021a-CUDA-11.3.1-DeepMind-patch x - - - x - OpenMM/7.5.0-intel-2020b - x x - x x OpenMM/7.5.0-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.5.0-fosscuda-2020b x - - - x - OpenMM/7.5.0-foss-2020b x x x x x x OpenMM/7.4.2-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.4.1-intel-2019b-Python-3.7.4 - x x - x x OpenMM/7.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenMMTools/", "title": "OpenMMTools", "text": ""}, {"location": "available_software/detail/OpenMMTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMMTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenMMTools, load one of these modules using a module load command like:

                  module load OpenMMTools/0.20.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMMTools/0.20.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMPI/", "title": "OpenMPI", "text": ""}, {"location": "available_software/detail/OpenMPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenMPI, load one of these modules using a module load command like:

                  module load OpenMPI/4.1.6-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMPI/4.1.6-GCC-13.2.0 x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x OpenMPI/4.1.4-GCC-11.3.0 x x x x x x OpenMPI/4.1.1-intel-compilers-2021.2.0 x x x x x x OpenMPI/4.1.1-GCC-11.2.0 x x x x x x OpenMPI/4.1.1-GCC-10.3.0 x x x x x x OpenMPI/4.0.5-iccifort-2020.4.304 x x x x x x OpenMPI/4.0.5-gcccuda-2020b x x x x x x OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1 x - x - x - OpenMPI/4.0.5-GCC-10.2.0 x x x x x x OpenMPI/4.0.3-iccifort-2020.1.217 - x - - - - OpenMPI/4.0.3-GCC-9.3.0 - x x x x x OpenMPI/3.1.4-GCC-8.3.0-ucx - x - - - - OpenMPI/3.1.4-GCC-8.3.0 x x x x x x"}, {"location": "available_software/detail/OpenMolcas/", "title": "OpenMolcas", "text": ""}, {"location": "available_software/detail/OpenMolcas/#available-modules", "title": "Available modules", "text": "
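                  A minimal sketch of checking the compiler wrapper and launcher that come with a loaded OpenMPI module (version taken from the list above):

                  module load OpenMPI/4.1.6-GCC-13.2.0
                  mpicc --version     # GCC compiler behind the MPI wrapper
                  mpirun --version    # Open MPI runtime version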

                  The overview below shows which OpenMolcas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenMolcas, load one of these modules using a module load command like:

                  module load OpenMolcas/21.06-iomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMolcas/21.06-iomkl-2021a x x x x x x OpenMolcas/21.06-intel-2021a - x x - x x"}, {"location": "available_software/detail/OpenPGM/", "title": "OpenPGM", "text": ""}, {"location": "available_software/detail/OpenPGM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenPGM, load one of these modules using a module load command like:

                  module load OpenPGM/5.2.122-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-12.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-9.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenPIV/", "title": "OpenPIV", "text": ""}, {"location": "available_software/detail/OpenPIV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenPIV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenPIV, load one of these modules using a module load command like:

                  module load OpenPIV/0.21.8-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenPIV/0.21.8-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenSSL/", "title": "OpenSSL", "text": ""}, {"location": "available_software/detail/OpenSSL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenSSL, load one of these modules using a module load command like:

                  module load OpenSSL/1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSSL/1.1 x x x x x x"}, {"location": "available_software/detail/OpenSees/", "title": "OpenSees", "text": ""}, {"location": "available_software/detail/OpenSees/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSees installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenSees, load one of these modules using a module load command like:

                  module load OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel - x x - x x OpenSees/3.2.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenSlide-Java/", "title": "OpenSlide-Java", "text": ""}, {"location": "available_software/detail/OpenSlide-Java/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSlide-Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenSlide-Java, load one of these modules using a module load command like:

                  module load OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/OpenSlide/", "title": "OpenSlide", "text": ""}, {"location": "available_software/detail/OpenSlide/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSlide installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OpenSlide, load one of these modules using a module load command like:

                  module load OpenSlide/3.4.1-GCCcore-12.3.0-largefiles\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSlide/3.4.1-GCCcore-12.3.0-largefiles x x x x x x OpenSlide/3.4.1-GCCcore-11.3.0-largefiles x - x - x - OpenSlide/3.4.1-GCCcore-11.2.0 x x x - x x OpenSlide/3.4.1-GCCcore-10.3.0-largefiles x x x - x x"}, {"location": "available_software/detail/Optuna/", "title": "Optuna", "text": ""}, {"location": "available_software/detail/Optuna/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Optuna installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Optuna, load one of these modules using a module load command like:

                  module load Optuna/3.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Optuna/3.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/OrthoFinder/", "title": "OrthoFinder", "text": ""}, {"location": "available_software/detail/OrthoFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OrthoFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using OrthoFinder, load one of these modules using a module load command like:

                  module load OrthoFinder/2.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OrthoFinder/2.5.5-foss-2023a x x x x x x OrthoFinder/2.5.4-foss-2020b - x x x x x OrthoFinder/2.5.2-foss-2020b - x x x x x OrthoFinder/2.3.11-intel-2019b-Python-3.7.4 - x x - x x OrthoFinder/2.3.8-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Osi/", "title": "Osi", "text": ""}, {"location": "available_software/detail/Osi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Osi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Osi, load one of these modules using a module load command like:

                  module load Osi/0.108.9-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Osi/0.108.9-GCC-12.3.0 x x x x x x Osi/0.108.8-GCC-12.2.0 x x x x x x Osi/0.108.7-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PASA/", "title": "PASA", "text": ""}, {"location": "available_software/detail/PASA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PASA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PASA, load one of these modules using a module load command like:

                  module load PASA/2.5.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PASA/2.5.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/PBGZIP/", "title": "PBGZIP", "text": ""}, {"location": "available_software/detail/PBGZIP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PBGZIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PBGZIP, load one of these modules using a module load command like:

                  module load PBGZIP/20160804-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PBGZIP/20160804-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PCRE/", "title": "PCRE", "text": ""}, {"location": "available_software/detail/PCRE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PCRE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PCRE, load one of these modules using a module load command like:

                  module load PCRE/8.45-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PCRE/8.45-GCCcore-12.3.0 x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x PCRE/8.45-GCCcore-11.3.0 x x x x x x PCRE/8.45-GCCcore-11.2.0 x x x x x x PCRE/8.44-GCCcore-10.3.0 x x x x x x PCRE/8.44-GCCcore-10.2.0 x x x x x x PCRE/8.44-GCCcore-9.3.0 x x x x x x PCRE/8.43-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/PCRE2/", "title": "PCRE2", "text": ""}, {"location": "available_software/detail/PCRE2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PCRE2, load one of these modules using a module load command like:

                  module load PCRE2/10.42-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PCRE2/10.42-GCCcore-12.3.0 x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x PCRE2/10.40-GCCcore-11.3.0 x x x x x x PCRE2/10.37-GCCcore-11.2.0 x x x x x x PCRE2/10.36-GCCcore-10.3.0 x x x x x x PCRE2/10.36 - x x - x - PCRE2/10.35-GCCcore-10.2.0 x x x x x x PCRE2/10.34-GCCcore-9.3.0 - x x - x x PCRE2/10.33-GCCcore-8.3.0 x x x - x x PCRE2/10.32 - - x - x -"}, {"location": "available_software/detail/PEAR/", "title": "PEAR", "text": ""}, {"location": "available_software/detail/PEAR/#available-modules", "title": "Available modules", "text": "
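                  A minimal sketch of using the pcre2grep tool that ships with PCRE2 after loading one of the modules above (this assumes the module installs pcre2grep into its bin directory):

                  module load PCRE2/10.42-GCCcore-12.3.0
                  pcre2grep --version
                  echo "toolchain GCCcore-12.3.0" | pcre2grep 'GCCcore-\d+\.\d+\.\d+'    # simple regex match as a smoke test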

                  The overview below shows which PEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PEAR, load one of these modules using a module load command like:

                  module load PEAR/0.9.11-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PEAR/0.9.11-GCCcore-9.3.0 - x x - x x PEAR/0.9.11-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/PETSc/", "title": "PETSc", "text": ""}, {"location": "available_software/detail/PETSc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PETSc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PETSc, load one of these modules using a module load command like:

                  module load PETSc/3.18.4-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PETSc/3.18.4-intel-2021b x x x x x x PETSc/3.17.4-foss-2022a x x x x x x PETSc/3.15.1-foss-2021a - x x - x x PETSc/3.14.4-foss-2020b - x x x x x PETSc/3.12.4-intel-2019b-Python-3.7.4 - - x - x - PETSc/3.12.4-intel-2019b-Python-2.7.16 - x x - x x PETSc/3.12.4-foss-2020a-Python-3.8.2 - x x - x x PETSc/3.12.4-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/PHYLIP/", "title": "PHYLIP", "text": ""}, {"location": "available_software/detail/PHYLIP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PHYLIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PHYLIP, load one of these modules using a module load command like:

                  module load PHYLIP/3.697-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PHYLIP/3.697-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/PICRUSt2/", "title": "PICRUSt2", "text": ""}, {"location": "available_software/detail/PICRUSt2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PICRUSt2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PICRUSt2, load one of these modules using a module load command like:

                  module load PICRUSt2/2.5.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PICRUSt2/2.5.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/PLAMS/", "title": "PLAMS", "text": ""}, {"location": "available_software/detail/PLAMS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLAMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PLAMS, load one of these modules using a module load command like:

                  module load PLAMS/1.5.1-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLAMS/1.5.1-intel-2022a x x x x x x"}, {"location": "available_software/detail/PLINK/", "title": "PLINK", "text": ""}, {"location": "available_software/detail/PLINK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLINK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PLINK, load one of these modules using a module load command like:

                  module load PLINK/2.00a3.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLINK/2.00a3.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PLUMED/", "title": "PLUMED", "text": ""}, {"location": "available_software/detail/PLUMED/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PLUMED, load one of these modules using a module load command like:

                  module load PLUMED/2.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLUMED/2.9.0-foss-2023a x x x x x x PLUMED/2.9.0-foss-2022b x x x x x x PLUMED/2.8.1-foss-2022a x x x x x x PLUMED/2.7.3-foss-2021b x x x - x x PLUMED/2.7.2-foss-2021a x x x x x x PLUMED/2.6.2-intelcuda-2020b - - - - x - PLUMED/2.6.2-intel-2020b - x x - x - PLUMED/2.6.2-foss-2020b - x x x x x PLUMED/2.6.0-iomkl-2020a-Python-3.8.2 - x - - - - PLUMED/2.6.0-intel-2020a-Python-3.8.2 - x x - x x PLUMED/2.6.0-foss-2020a-Python-3.8.2 - x x - x x PLUMED/2.5.3-intel-2019b-Python-3.7.4 - x x - x x PLUMED/2.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PLY/", "title": "PLY", "text": ""}, {"location": "available_software/detail/PLY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PLY, load one of these modules using a module load command like:

                  module load PLY/3.11-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLY/3.11-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PMIx/", "title": "PMIx", "text": ""}, {"location": "available_software/detail/PMIx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PMIx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PMIx, load one of these modules using a module load command like:

                  module load PMIx/4.2.6-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PMIx/4.2.6-GCCcore-13.2.0 x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x PMIx/4.1.2-GCCcore-11.3.0 x x x x x x PMIx/4.1.0-GCCcore-11.2.0 x x x x x x PMIx/3.2.3-GCCcore-10.3.0 x x x x x x PMIx/3.1.5-GCCcore-10.2.0 x x x x x x PMIx/3.1.5-GCCcore-9.3.0 x x x x x x PMIx/3.1.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/POT/", "title": "POT", "text": ""}, {"location": "available_software/detail/POT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which POT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using POT, load one of these modules using a module load command like:

                  module load POT/0.9.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty POT/0.9.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/POV-Ray/", "title": "POV-Ray", "text": ""}, {"location": "available_software/detail/POV-Ray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which POV-Ray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using POV-Ray, load one of these modules using a module load command like:

                  module load POV-Ray/3.7.0.8-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty POV-Ray/3.7.0.8-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/PPanGGOLiN/", "title": "PPanGGOLiN", "text": ""}, {"location": "available_software/detail/PPanGGOLiN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PPanGGOLiN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PPanGGOLiN, load one of these modules using a module load command like:

                  module load PPanGGOLiN/1.1.136-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PPanGGOLiN/1.1.136-foss-2021b x x x - x x"}, {"location": "available_software/detail/PRANK/", "title": "PRANK", "text": ""}, {"location": "available_software/detail/PRANK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PRANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PRANK, load one of these modules using a module load command like:

                  module load PRANK/170427-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRANK/170427-GCC-10.2.0 - x x x x x PRANK/170427-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/PRINSEQ/", "title": "PRINSEQ", "text": ""}, {"location": "available_software/detail/PRINSEQ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PRINSEQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PRINSEQ, load one of these modules using a module load command like:

                  module load PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0 x x x - x x PRINSEQ/0.20.4-foss-2020b-Perl-5.32.0 - x x x x -"}, {"location": "available_software/detail/PRISMS-PF/", "title": "PRISMS-PF", "text": ""}, {"location": "available_software/detail/PRISMS-PF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PRISMS-PF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PRISMS-PF, load one of these modules using a module load command like:

                  module load PRISMS-PF/2.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRISMS-PF/2.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/PROJ/", "title": "PROJ", "text": ""}, {"location": "available_software/detail/PROJ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PROJ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PROJ, load one of these modules using a module load command like:

                  module load PROJ/9.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PROJ/9.2.0-GCCcore-12.3.0 x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x PROJ/9.0.0-GCCcore-11.3.0 x x x x x x PROJ/8.1.0-GCCcore-11.2.0 x x x x x x PROJ/8.0.1-GCCcore-10.3.0 x x x x x x PROJ/7.2.1-GCCcore-10.2.0 - x x x x x PROJ/7.0.0-GCCcore-9.3.0 - x x - x x PROJ/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pandoc/", "title": "Pandoc", "text": ""}, {"location": "available_software/detail/Pandoc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pandoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pandoc, load one of these modules using a module load command like:

                  module load Pandoc/2.13\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pandoc/2.13 - x x x x x"}, {"location": "available_software/detail/Pango/", "title": "Pango", "text": ""}, {"location": "available_software/detail/Pango/#available-modules", "title": "Available modules", "text": "
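                  A minimal sketch of using a loaded Pandoc module for a document conversion (the input file name is a placeholder, not part of this overview):

                  module load Pandoc/2.13
                  pandoc --version
                  pandoc README.md -o README.html    # hypothetical input file; converts Markdown to HTML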

                  The overview below shows which Pango installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pango, load one of these modules using a module load command like:

                  module load Pango/1.50.14-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pango/1.50.14-GCCcore-12.3.0 x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x Pango/1.50.7-GCCcore-11.3.0 x x x x x x Pango/1.48.8-GCCcore-11.2.0 x x x x x x Pango/1.48.5-GCCcore-10.3.0 x x x x x x Pango/1.47.0-GCCcore-10.2.0 x x x x x x Pango/1.44.7-GCCcore-9.3.0 - x x - x x Pango/1.44.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/ParMETIS/", "title": "ParMETIS", "text": ""}, {"location": "available_software/detail/ParMETIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParMETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ParMETIS, load one of these modules using a module load command like:

                  module load ParMETIS/4.0.3-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParMETIS/4.0.3-iimpi-2020a - x x - x x ParMETIS/4.0.3-iimpi-2019b - x x - x x ParMETIS/4.0.3-gompi-2022a x x x x x x ParMETIS/4.0.3-gompi-2021a - x x - x x ParMETIS/4.0.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParMGridGen/", "title": "ParMGridGen", "text": ""}, {"location": "available_software/detail/ParMGridGen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParMGridGen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ParMGridGen, load one of these modules using a module load command like:

                  module load ParMGridGen/1.0-iimpi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParMGridGen/1.0-iimpi-2019b - x x - x x ParMGridGen/1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParaView/", "title": "ParaView", "text": ""}, {"location": "available_software/detail/ParaView/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParaView installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ParaView, load one of these modules using a module load command like:

                  module load ParaView/5.11.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParaView/5.11.2-foss-2023a x x x x x x ParaView/5.10.1-foss-2022a-mpi x x x x x x ParaView/5.9.1-intel-2021a-mpi - x x - x x ParaView/5.9.1-foss-2021b-mpi x x x x x x ParaView/5.9.1-foss-2021a-mpi x x x x x x ParaView/5.8.1-intel-2020b-mpi - x - - - - ParaView/5.8.1-foss-2020b-mpi x x x x x x ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi - x x - x x ParaView/5.6.2-foss-2019b-Python-3.7.4-mpi x x x - x x ParaView/5.4.1-foss-2019b-Python-2.7.16-mpi - x x - x x"}, {"location": "available_software/detail/ParmEd/", "title": "ParmEd", "text": ""}, {"location": "available_software/detail/ParmEd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParmEd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ParmEd, load one of these modules using a module load command like:

                  module load ParmEd/3.2.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParmEd/3.2.0-intel-2020a-Python-3.8.2 - x x - x x ParmEd/3.2.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Parsl/", "title": "Parsl", "text": ""}, {"location": "available_software/detail/Parsl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Parsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Parsl, load one of these modules using a module load command like:

                  module load Parsl/2023.7.17-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Parsl/2023.7.17-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PartitionFinder/", "title": "PartitionFinder", "text": ""}, {"location": "available_software/detail/PartitionFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PartitionFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PartitionFinder, load one of these modules using a module load command like:

                  module load PartitionFinder/2.1.1-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PartitionFinder/2.1.1-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/Perl-bundle-CPAN/", "title": "Perl-bundle-CPAN", "text": ""}, {"location": "available_software/detail/Perl-bundle-CPAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Perl-bundle-CPAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

                  module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Perl/", "title": "Perl", "text": ""}, {"location": "available_software/detail/Perl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Perl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Perl, load one of these modules using a module load command like:

                  module load Perl/5.38.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Perl/5.38.0-GCCcore-13.2.0 x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x Perl/5.34.1-GCCcore-11.3.0-minimal x x x x x x Perl/5.34.1-GCCcore-11.3.0 x x x x x x Perl/5.34.0-GCCcore-11.2.0-minimal x x x x x x Perl/5.34.0-GCCcore-11.2.0 x x x x x x Perl/5.32.1-GCCcore-10.3.0-minimal x x x x x x Perl/5.32.1-GCCcore-10.3.0 x x x x x x Perl/5.32.0-GCCcore-10.2.0-minimal x x x x x x Perl/5.32.0-GCCcore-10.2.0 x x x x x x Perl/5.30.2-GCCcore-9.3.0-minimal x x x x x x Perl/5.30.2-GCCcore-9.3.0 x x x x x x Perl/5.30.0-GCCcore-8.3.0-minimal x x x x x x Perl/5.30.0-GCCcore-8.3.0 x x x x x x Perl/5.28.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Phenoflow/", "title": "Phenoflow", "text": ""}, {"location": "available_software/detail/Phenoflow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Phenoflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Phenoflow, load one of these modules using a module load command like:

                  module load Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/PhyloPhlAn/", "title": "PhyloPhlAn", "text": ""}, {"location": "available_software/detail/PhyloPhlAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PhyloPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PhyloPhlAn, load one of these modules using a module load command like:

                  module load PhyloPhlAn/3.0.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PhyloPhlAn/3.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Pillow-SIMD/", "title": "Pillow-SIMD", "text": ""}, {"location": "available_software/detail/Pillow-SIMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pillow-SIMD, load one of these modules using a module load command like:

                  module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x Pillow-SIMD/9.5.0-GCCcore-12.2.0 x x x x x x Pillow-SIMD/9.2.0-GCCcore-11.3.0 x x x x x x Pillow-SIMD/8.2.0-GCCcore-10.3.0 x x x - x x Pillow-SIMD/7.1.2-GCCcore-10.2.0 x x x x x x Pillow-SIMD/6.0.x.post0-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/Pillow/", "title": "Pillow", "text": ""}, {"location": "available_software/detail/Pillow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pillow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pillow, load one of these modules using a module load command like:

                  module load Pillow/10.2.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pillow/10.2.0-GCCcore-13.2.0 x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x Pillow/9.1.1-GCCcore-11.3.0 x x x x x x Pillow/8.3.2-GCCcore-11.2.0 x x x x x x Pillow/8.3.1-GCCcore-11.2.0 x x x - x x Pillow/8.2.0-GCCcore-10.3.0 x x x x x x Pillow/8.0.1-GCCcore-10.2.0 x x x x x x Pillow/7.0.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x Pillow/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pilon/", "title": "Pilon", "text": ""}, {"location": "available_software/detail/Pilon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pilon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pilon, load one of these modules using a module load command like:

                  module load Pilon/1.23-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pilon/1.23-Java-11 x x x x x x Pilon/1.23-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Pint/", "title": "Pint", "text": ""}, {"location": "available_software/detail/Pint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pint, load one of these modules using a module load command like:

                  module load Pint/0.22-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pint/0.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PnetCDF/", "title": "PnetCDF", "text": ""}, {"location": "available_software/detail/PnetCDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PnetCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PnetCDF, load one of these modules using a module load command like:

                  module load PnetCDF/1.12.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PnetCDF/1.12.3-gompi-2022a x - x - x - PnetCDF/1.12.3-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Porechop/", "title": "Porechop", "text": ""}, {"location": "available_software/detail/Porechop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Porechop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Porechop, load one of these modules using a module load command like:

                  module load Porechop/0.2.4-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Porechop/0.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PostgreSQL/", "title": "PostgreSQL", "text": ""}, {"location": "available_software/detail/PostgreSQL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PostgreSQL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PostgreSQL, load one of these modules using a module load command like:

                  module load PostgreSQL/16.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x PostgreSQL/14.4-GCCcore-11.3.0 x x x x x x PostgreSQL/13.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/Primer3/", "title": "Primer3", "text": ""}, {"location": "available_software/detail/Primer3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Primer3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Primer3, load one of these modules using a module load command like:

                  module load Primer3/2.5.0-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Primer3/2.5.0-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/ProBiS/", "title": "ProBiS", "text": ""}, {"location": "available_software/detail/ProBiS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ProBiS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ProBiS, load one of these modules using a module load command like:

                  module load ProBiS/20230403-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ProBiS/20230403-gompi-2022b x x x x x x"}, {"location": "available_software/detail/ProtHint/", "title": "ProtHint", "text": ""}, {"location": "available_software/detail/ProtHint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ProtHint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ProtHint, load one of these modules using a module load command like:

                  module load ProtHint/2.6.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ProtHint/2.6.0-GCC-11.3.0 x x x x x x ProtHint/2.6.0-GCC-11.2.0 x x x x x x ProtHint/2.6.0-GCC-10.2.0 x x x x x x ProtHint/2.4.0-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/PsiCLASS/", "title": "PsiCLASS", "text": ""}, {"location": "available_software/detail/PsiCLASS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PsiCLASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PsiCLASS, load one of these modules using a module load command like:

                  module load PsiCLASS/1.0.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PsiCLASS/1.0.3-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/PuLP/", "title": "PuLP", "text": ""}, {"location": "available_software/detail/PuLP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PuLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PuLP, load one of these modules using a module load command like:

                  module load PuLP/2.8.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PuLP/2.8.0-foss-2023a x x x x x x PuLP/2.7.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/PyBerny/", "title": "PyBerny", "text": ""}, {"location": "available_software/detail/PyBerny/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyBerny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyBerny, load one of these modules using a module load command like:

                  module load PyBerny/0.6.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyBerny/0.6.3-foss-2022b x x x x x x PyBerny/0.6.3-foss-2022a - x x x x x PyBerny/0.6.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyCairo/", "title": "PyCairo", "text": ""}, {"location": "available_software/detail/PyCairo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyCairo, load one of these modules using a module load command like:

                  module load PyCairo/1.21.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCairo/1.21.0-GCCcore-11.3.0 x x x x x x PyCairo/1.20.1-GCCcore-11.2.0 x x x x x x PyCairo/1.20.1-GCCcore-10.3.0 x x x x x x PyCairo/1.20.0-GCCcore-10.2.0 - x x x x x PyCairo/1.18.2-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/PyCalib/", "title": "PyCalib", "text": ""}, {"location": "available_software/detail/PyCalib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCalib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyCalib, load one of these modules using a module load command like:

                  module load PyCalib/20230531-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCalib/20230531-gfbf-2022b x x x x x x PyCalib/0.1.0.dev0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyCheMPS2/", "title": "PyCheMPS2", "text": ""}, {"location": "available_software/detail/PyCheMPS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyCheMPS2, load one of these modules using a module load command like:

                  module load PyCheMPS2/1.8.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCheMPS2/1.8.12-foss-2022b x x x x x x PyCheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/PyFoam/", "title": "PyFoam", "text": ""}, {"location": "available_software/detail/PyFoam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyFoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyFoam, load one of these modules using a module load command like:

                  module load PyFoam/2020.5-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyFoam/2020.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyGEOS/", "title": "PyGEOS", "text": ""}, {"location": "available_software/detail/PyGEOS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyGEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyGEOS, load one of these modules using a module load command like:

                  module load PyGEOS/0.8-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyGEOS/0.8-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyGObject/", "title": "PyGObject", "text": ""}, {"location": "available_software/detail/PyGObject/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyGObject installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyGObject, load one of these modules using a module load command like:

                  module load PyGObject/3.42.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyGObject/3.42.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyInstaller/", "title": "PyInstaller", "text": ""}, {"location": "available_software/detail/PyInstaller/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyInstaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyInstaller, load one of these modules using a module load command like:

                  module load PyInstaller/6.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyInstaller/6.3.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/PyKeOps/", "title": "PyKeOps", "text": ""}, {"location": "available_software/detail/PyKeOps/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyKeOps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyKeOps, load one of these modules using a module load command like:

                  module load PyKeOps/2.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyKeOps/2.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/PyMC/", "title": "PyMC", "text": ""}, {"location": "available_software/detail/PyMC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyMC, load one of these modules using a module load command like:

                  module load PyMC/5.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMC/5.9.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/PyMC3/", "title": "PyMC3", "text": ""}, {"location": "available_software/detail/PyMC3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMC3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyMC3, load one of these modules using a module load command like:

                  module load PyMC3/3.11.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMC3/3.11.1-intel-2021b x x x - x x PyMC3/3.11.1-intel-2020b - - x - x x PyMC3/3.11.1-fosscuda-2020b - - - - x - PyMC3/3.8-intel-2019b-Python-3.7.4 - - x - x x PyMC3/3.8-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyMDE/", "title": "PyMDE", "text": ""}, {"location": "available_software/detail/PyMDE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyMDE, load one of these modules using a module load command like:

                  module load PyMDE/0.1.18-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMDE/0.1.18-foss-2022a-CUDA-11.7.0 x - x - x - PyMDE/0.1.18-foss-2022a x x x x x x"}, {"location": "available_software/detail/PyMOL/", "title": "PyMOL", "text": ""}, {"location": "available_software/detail/PyMOL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMOL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyMOL, load one of these modules using a module load command like:

                  module load PyMOL/2.5.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMOL/2.5.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/PyOD/", "title": "PyOD", "text": ""}, {"location": "available_software/detail/PyOD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyOD, load one of these modules using a module load command like:

                  module load PyOD/0.8.7-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOD/0.8.7-intel-2020b - x x - x x PyOD/0.8.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenCL/", "title": "PyOpenCL", "text": ""}, {"location": "available_software/detail/PyOpenCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOpenCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyOpenCL, load one of these modules using a module load command like:

                  module load PyOpenCL/2023.1.4-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOpenCL/2023.1.4-foss-2023a x x x x x x PyOpenCL/2023.1.4-foss-2022a-CUDA-11.7.0 x - - - x - PyOpenCL/2023.1.4-foss-2022a x x x x x x PyOpenCL/2021.2.13-foss-2021b-CUDA-11.4.1 x - - - x - PyOpenCL/2021.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenGL/", "title": "PyOpenGL", "text": ""}, {"location": "available_software/detail/PyOpenGL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOpenGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyOpenGL, load one of these modules using a module load command like:

                  module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.2.0 x x x - x x PyOpenGL/3.1.5-GCCcore-10.3.0 - x x - x x PyOpenGL/3.1.5-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/PyPy/", "title": "PyPy", "text": ""}, {"location": "available_software/detail/PyPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyPy, load one of these modules using a module load command like:

                  module load PyPy/7.3.12-3.10\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyPy/7.3.12-3.10 x x x x x x"}, {"location": "available_software/detail/PyQt5/", "title": "PyQt5", "text": ""}, {"location": "available_software/detail/PyQt5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyQt5, load one of these modules using a module load command like:

                  module load PyQt5/5.15.7-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyQt5/5.15.7-GCCcore-12.2.0 x x x x x x PyQt5/5.15.5-GCCcore-11.3.0 x x x x x x PyQt5/5.15.4-GCCcore-11.2.0 x x x x x x PyQt5/5.15.4-GCCcore-10.3.0 - x x - x x PyQt5/5.15.1-GCCcore-10.2.0 x x x x x x PyQt5/5.15.1-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyQtGraph/", "title": "PyQtGraph", "text": ""}, {"location": "available_software/detail/PyQtGraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyQtGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyQtGraph, load one of these modules using a module load command like:

                  module load PyQtGraph/0.13.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyQtGraph/0.13.3-foss-2022a x x x x x x PyQtGraph/0.12.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/PyRETIS/", "title": "PyRETIS", "text": ""}, {"location": "available_software/detail/PyRETIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyRETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyRETIS, load one of these modules using a module load command like:

                  module load PyRETIS/2.5.0-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyRETIS/2.5.0-intel-2020b - x x - x x PyRETIS/2.5.0-intel-2020a-Python-3.8.2 - - x - x x PyRETIS/2.5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyRe/", "title": "PyRe", "text": ""}, {"location": "available_software/detail/PyRe/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyRe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyRe, load one of these modules using a module load command like:

                  module load PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4 - x - - - x PyRe/5.0.3-20190221-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PySCF/", "title": "PySCF", "text": ""}, {"location": "available_software/detail/PySCF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PySCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PySCF, load one of these modules using a module load command like:

                  module load PySCF/2.4.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PySCF/2.4.0-foss-2022b x x x x x x PySCF/2.1.1-foss-2022a - x x x x x PySCF/1.7.6-gomkl-2021a x x x - x x PySCF/1.7.6-foss-2021a x x x - x x"}, {"location": "available_software/detail/PyStan/", "title": "PyStan", "text": ""}, {"location": "available_software/detail/PyStan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyStan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyStan, load one of these modules using a module load command like:

                  module load PyStan/2.19.1.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyStan/2.19.1.1-intel-2020b - x x - x x"}, {"location": "available_software/detail/PyTables/", "title": "PyTables", "text": ""}, {"location": "available_software/detail/PyTables/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTables installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyTables, load one of these modules using a module load command like:

                  module load PyTables/3.8.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTables/3.8.0-foss-2022a x x x x x x PyTables/3.6.1-intel-2020b - x x - x x PyTables/3.6.1-intel-2020a-Python-3.8.2 x x x x x x PyTables/3.6.1-fosscuda-2020b - - - - x - PyTables/3.6.1-foss-2021b x x x x x x PyTables/3.6.1-foss-2021a x x x x x x PyTables/3.6.1-foss-2020b - x x x x x PyTables/3.6.1-foss-2020a-Python-3.8.2 - x x - x x PyTables/3.6.1-foss-2019b-Python-3.7.4 - x x - x x PyTables/3.5.2-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/PyTensor/", "title": "PyTensor", "text": ""}, {"location": "available_software/detail/PyTensor/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTensor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyTensor, load one of these modules using a module load command like:

                  module load PyTensor/2.17.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTensor/2.17.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/PyTorch-Geometric/", "title": "PyTorch-Geometric", "text": ""}, {"location": "available_software/detail/PyTorch-Geometric/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Geometric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyTorch-Geometric, load one of these modules using a module load command like:

                  module load PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1 - - - - x - PyTorch-Geometric/1.7.0-foss-2020b-numba-0.53.1 - x x - x x PyTorch-Geometric/1.6.3-fosscuda-2020b - - - - x - PyTorch-Geometric/1.4.2-foss-2019b-Python-3.7.4-PyTorch-1.4.0 - x x - x x PyTorch-Geometric/1.3.2-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PyTorch-Ignite/", "title": "PyTorch-Ignite", "text": ""}, {"location": "available_software/detail/PyTorch-Ignite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Ignite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyTorch-Ignite, load one of these modules using a module load command like:

                  module load PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/PyTorch-Lightning/", "title": "PyTorch-Lightning", "text": ""}, {"location": "available_software/detail/PyTorch-Lightning/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Lightning installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyTorch-Lightning, load one of these modules using a module load command like:

                  module load PyTorch-Lightning/2.1.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Lightning/2.1.3-foss-2023a x x x x x x PyTorch-Lightning/2.1.2-foss-2022b x x x x x x PyTorch-Lightning/1.8.4-foss-2022a-CUDA-11.7.0 x - - - x - PyTorch-Lightning/1.8.4-foss-2022a x x x x x x PyTorch-Lightning/1.7.7-foss-2022a-CUDA-11.7.0 - - x - - - PyTorch-Lightning/1.5.9-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch-Lightning/1.5.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/PyTorch/", "title": "PyTorch", "text": ""}, {"location": "available_software/detail/PyTorch/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyTorch, load one of these modules using a module load command like:

                  module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\n
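
                  The overview below lists both CPU-only and CUDA builds of PyTorch. As a minimal sketch (assuming a GPU cluster and the CUDA-suffixed module taken from the overview below), you can load the GPU build and verify that PyTorch detects a GPU:

                  module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\npython -c 'import torch; print(torch.cuda.is_available())'\n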

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch/2.1.2-foss-2023a-CUDA-12.1.1 x - x - x - PyTorch/2.1.2-foss-2023a x x x x x x PyTorch/1.13.1-foss-2022b x x x x x x PyTorch/1.13.1-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.1-foss-2022a-CUDA-11.7.0 - - x - x - PyTorch/1.12.1-foss-2022a x x x x - x PyTorch/1.12.1-foss-2021b - x x x x x PyTorch/1.12.0-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.0-foss-2022a x x x x x x PyTorch/1.11.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-fosscuda-2020b x - - - - - PyTorch/1.10.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-foss-2021a x x x x x x PyTorch/1.9.0-fosscuda-2020b x - - - - - PyTorch/1.8.1-fosscuda-2020b x - - - - - PyTorch/1.7.1-fosscuda-2020b x - - - x - PyTorch/1.7.1-foss-2020b - x x x x x PyTorch/1.6.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.4.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.3.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyVCF/", "title": "PyVCF", "text": ""}, {"location": "available_software/detail/PyVCF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyVCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyVCF, load one of these modules using a module load command like:

                  module load PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16 - - x - x - PyVCF/0.6.8-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/PyVCF3/", "title": "PyVCF3", "text": ""}, {"location": "available_software/detail/PyVCF3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyVCF3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyVCF3, load one of these modules using a module load command like:

                  module load PyVCF3/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyVCF3/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyWBGT/", "title": "PyWBGT", "text": ""}, {"location": "available_software/detail/PyWBGT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyWBGT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyWBGT, load one of these modules using a module load command like:

                  module load PyWBGT/1.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyWBGT/1.0.0-foss-2022a x x x x x x PyWBGT/1.0.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyWavelets/", "title": "PyWavelets", "text": ""}, {"location": "available_software/detail/PyWavelets/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyWavelets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyWavelets, load one of these modules using a module load command like:

                  module load PyWavelets/1.1.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyWavelets/1.1.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyYAML/", "title": "PyYAML", "text": ""}, {"location": "available_software/detail/PyYAML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyYAML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyYAML, load one of these modules using a module load command like:

                  module load PyYAML/6.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyYAML/6.0-GCCcore-12.3.0 x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x PyYAML/6.0-GCCcore-11.3.0 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0 x x x x x x PyYAML/5.4.1-GCCcore-10.3.0 x x x x x x PyYAML/5.3.1-GCCcore-10.2.0 x x x x x x PyYAML/5.3-GCCcore-9.3.0 x x x x x x PyYAML/5.1.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/PyZMQ/", "title": "PyZMQ", "text": ""}, {"location": "available_software/detail/PyZMQ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PyZMQ, load one of these modules using a module load command like:

                  module load PyZMQ/25.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x PyZMQ/24.0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PycURL/", "title": "PycURL", "text": ""}, {"location": "available_software/detail/PycURL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PycURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PycURL, load one of these modules using a module load command like:

                  module load PycURL/7.45.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PycURL/7.45.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Pychopper/", "title": "Pychopper", "text": ""}, {"location": "available_software/detail/Pychopper/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pychopper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pychopper, load one of these modules using a module load command like:

                  module load Pychopper/2.3.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pychopper/2.3.1-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Pyomo/", "title": "Pyomo", "text": ""}, {"location": "available_software/detail/Pyomo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pyomo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pyomo, load one of these modules using a module load command like:

                  module load Pyomo/6.4.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pyomo/6.4.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/Pysam/", "title": "Pysam", "text": ""}, {"location": "available_software/detail/Pysam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pysam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pysam, load one of these modules using a module load command like:

                  module load Pysam/0.22.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pysam/0.22.0-GCC-12.3.0 x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x Pysam/0.19.1-GCC-11.3.0 x x x x x x Pysam/0.18.0-GCC-11.2.0 x x x - x x Pysam/0.17.0-GCC-11.2.0-Python-2.7.18 x x x x x x Pysam/0.17.0-GCC-11.2.0 x x x - x x Pysam/0.16.0.1-iccifort-2020.4.304 - x x x x x Pysam/0.16.0.1-iccifort-2020.1.217 - x x - x x Pysam/0.16.0.1-GCC-10.3.0 x x x x x x Pysam/0.16.0.1-GCC-10.2.0-Python-2.7.18 - x x x x x Pysam/0.16.0.1-GCC-10.2.0 x x x x x x Pysam/0.16.0.1-GCC-9.3.0 - x x - x x Pysam/0.16.0.1-GCC-8.3.0 - x x - x x Pysam/0.15.3-iccifort-2019.5.281 - x x - x x Pysam/0.15.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Python-bundle-PyPI/", "title": "Python-bundle-PyPI", "text": ""}, {"location": "available_software/detail/Python-bundle-PyPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Python-bundle-PyPI, load one of these modules using a module load command like:

                  module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Python/", "title": "Python", "text": ""}, {"location": "available_software/detail/Python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Python, load one of these modules using a module load command like:

                  module load Python/3.11.5-GCCcore-13.2.0\n
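
                  After loading a Python module you can confirm which interpreter is active (a minimal sketch, assuming the module shown above):

                  module load Python/3.11.5-GCCcore-13.2.0\npython -V\nwhich python\n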

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Python/3.11.5-GCCcore-13.2.0 x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x Python/3.10.4-GCCcore-11.3.0-bare x x x x x x Python/3.10.4-GCCcore-11.3.0 x x x x x x Python/3.9.6-GCCcore-11.2.0-bare x x x x x x Python/3.9.6-GCCcore-11.2.0 x x x x x x Python/3.9.5-GCCcore-10.3.0-bare x x x x x x Python/3.9.5-GCCcore-10.3.0 x x x x x x Python/3.8.6-GCCcore-10.2.0 x x x x x x Python/3.8.2-GCCcore-9.3.0 x x x x x x Python/3.7.4-GCCcore-8.3.0 x x x x x x Python/3.7.2-GCCcore-8.2.0 - x - - - - Python/2.7.18-GCCcore-12.3.0 x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.3.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0 x x x x x x Python/2.7.18-GCCcore-10.3.0-bare x x x x x x Python/2.7.18-GCCcore-10.2.0 x x x x x x Python/2.7.18-GCCcore-9.3.0 x x x x x x Python/2.7.16-GCCcore-8.3.0 x x x - x x Python/2.7.15-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/QCA/", "title": "QCA", "text": ""}, {"location": "available_software/detail/QCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QCA, load one of these modules using a module load command like:

                  module load QCA/2.3.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QCA/2.3.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QCxMS/", "title": "QCxMS", "text": ""}, {"location": "available_software/detail/QCxMS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QCxMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QCxMS, load one of these modules using a module load command like:

                  module load QCxMS/5.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QCxMS/5.0.3 x x x x x x"}, {"location": "available_software/detail/QD/", "title": "QD", "text": ""}, {"location": "available_software/detail/QD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QD, load one of these modules using a module load command like:

                  module load QD/2.3.17-NVHPC-21.2-20160110\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QD/2.3.17-NVHPC-21.2-20160110 x - x - x -"}, {"location": "available_software/detail/QGIS/", "title": "QGIS", "text": ""}, {"location": "available_software/detail/QGIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QGIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QGIS, load one of these modules using a module load command like:

                  module load QGIS/3.28.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QGIS/3.28.1-foss-2021b x x x x x x"}, {"location": "available_software/detail/QIIME2/", "title": "QIIME2", "text": ""}, {"location": "available_software/detail/QIIME2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QIIME2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QIIME2, load one of these modules using a module load command like:

                  module load QIIME2/2023.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QIIME2/2023.5.1-foss-2022a x x x x x x QIIME2/2022.11 x x x x x x QIIME2/2021.8 - - - - - x QIIME2/2020.11 - x x - x x QIIME2/2020.8 - x x - x x QIIME2/2019.7 - - - - - x"}, {"location": "available_software/detail/QScintilla/", "title": "QScintilla", "text": ""}, {"location": "available_software/detail/QScintilla/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QScintilla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QScintilla, load one of these modules using a module load command like:

                  module load QScintilla/2.11.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QScintilla/2.11.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QUAST/", "title": "QUAST", "text": ""}, {"location": "available_software/detail/QUAST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QUAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QUAST, load one of these modules using a module load command like:

                  module load QUAST/5.2.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QUAST/5.2.0-foss-2022a x x x x x x QUAST/5.0.2-foss-2020b-Python-2.7.18 - x x x x x QUAST/5.0.2-foss-2020b - x x x x x QUAST/5.0.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qhull/", "title": "Qhull", "text": ""}, {"location": "available_software/detail/Qhull/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Qhull installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Qhull, load one of these modules using a module load command like:

                  module load Qhull/2020.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qhull/2020.2-GCCcore-12.3.0 x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x Qhull/2020.2-GCCcore-11.3.0 x x x x x x Qhull/2020.2-GCCcore-11.2.0 x x x x x x Qhull/2020.2-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/Qt5/", "title": "Qt5", "text": ""}, {"location": "available_software/detail/Qt5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Qt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Qt5, load one of these modules using a module load command like:

                  module load Qt5/5.15.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qt5/5.15.10-GCCcore-12.3.0 x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x Qt5/5.15.5-GCCcore-11.3.0 x x x x x x Qt5/5.15.2-GCCcore-11.2.0 x x x x x x Qt5/5.15.2-GCCcore-10.3.0 x x x x x x Qt5/5.14.2-GCCcore-10.2.0 x x x x x x Qt5/5.14.1-GCCcore-9.3.0 - x x - x x Qt5/5.13.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Qt5Webkit/", "title": "Qt5Webkit", "text": ""}, {"location": "available_software/detail/Qt5Webkit/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Qt5Webkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qt5Webkit, load one of these modules using a module load command like:

                  module load Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtKeychain/", "title": "QtKeychain", "text": ""}, {"location": "available_software/detail/QtKeychain/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which QtKeychain installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QtKeychain, load one of these modules using a module load command like:

                  module load QtKeychain/0.13.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QtKeychain/0.13.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtPy/", "title": "QtPy", "text": ""}, {"location": "available_software/detail/QtPy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which QtPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QtPy, load one of these modules using a module load command like:

                  module load QtPy/2.3.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QtPy/2.3.0-GCCcore-11.3.0 x x x x x x QtPy/2.2.1-GCCcore-11.2.0 x x x - x x QtPy/1.9.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Qtconsole/", "title": "Qtconsole", "text": ""}, {"location": "available_software/detail/Qtconsole/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Qtconsole installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qtconsole, load one of these modules using a module load command like:

                  module load Qtconsole/5.4.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qtconsole/5.4.0-GCCcore-11.3.0 x x x x x x Qtconsole/5.3.2-GCCcore-11.2.0 x x x - x x Qtconsole/5.0.2-foss-2020b - x - - - - Qtconsole/5.0.2-GCCcore-10.2.0 - - x x x x"}, {"location": "available_software/detail/QuPath/", "title": "QuPath", "text": ""}, {"location": "available_software/detail/QuPath/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which QuPath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QuPath, load one of these modules using a module load command like:

                  module load QuPath/0.5.0-GCCcore-12.3.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuPath/0.5.0-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/Qualimap/", "title": "Qualimap", "text": ""}, {"location": "available_software/detail/Qualimap/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Qualimap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qualimap, load one of these modules using a module load command like:

                  module load Qualimap/2.2.1-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qualimap/2.2.1-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/QuantumESPRESSO/", "title": "QuantumESPRESSO", "text": ""}, {"location": "available_software/detail/QuantumESPRESSO/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QuantumESPRESSO, load one of these modules using a module load command like:

                  module load QuantumESPRESSO/7.0-intel-2021b\n
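                   A minimal sketch of running the pw.x executable from this module inside a job script; the MPI launcher, the input file name pw.in and the output file name are assumptions to adapt to your own job setup:

                   module load QuantumESPRESSO/7.0-intel-2021b\nmpirun pw.x -in pw.in > pw.out\n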

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuantumESPRESSO/7.0-intel-2021b x x x - x x QuantumESPRESSO/6.5-intel-2019b - x x - x x"}, {"location": "available_software/detail/QuickFF/", "title": "QuickFF", "text": ""}, {"location": "available_software/detail/QuickFF/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which QuickFF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QuickFF, load one of these modules using a module load command like:

                  module load QuickFF/2.2.7-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuickFF/2.2.7-intel-2020a-Python-3.8.2 x x x x x x QuickFF/2.2.4-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qwt/", "title": "Qwt", "text": ""}, {"location": "available_software/detail/Qwt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Qwt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qwt, load one of these modules using a module load command like:

                  module load Qwt/6.2.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qwt/6.2.0-GCCcore-11.2.0 x x x x x x Qwt/6.2.0-GCCcore-10.3.0 - x x - x x Qwt/6.1.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/R-INLA/", "title": "R-INLA", "text": ""}, {"location": "available_software/detail/R-INLA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which R-INLA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R-INLA, load one of these modules using a module load command like:

                  module load R-INLA/24.01.18-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-INLA/24.01.18-foss-2023a x x x x x x R-INLA/21.05.02-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/R-bundle-Bioconductor/", "title": "R-bundle-Bioconductor", "text": ""}, {"location": "available_software/detail/R-bundle-Bioconductor/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which R-bundle-Bioconductor installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

                  module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n
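                   A minimal sketch of using a package from this bundle; Biostrings is only an illustrative package name assumed to be included, so replace it with the library you actually need:

                   module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\nRscript -e 'library(Biostrings)'\n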

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x R-bundle-Bioconductor/3.15-foss-2022a-R-4.2.1 x x x x x x R-bundle-Bioconductor/3.15-foss-2021b-R-4.2.0 x x x x x x R-bundle-Bioconductor/3.14-foss-2021b-R-4.1.2 x x x x x x R-bundle-Bioconductor/3.13-foss-2021a-R-4.1.0 - x x - x x R-bundle-Bioconductor/3.12-foss-2020b-R-4.0.3 x x x x x x R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0 - x x - x x R-bundle-Bioconductor/3.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/R-bundle-CRAN/", "title": "R-bundle-CRAN", "text": ""}, {"location": "available_software/detail/R-bundle-CRAN/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which R-bundle-CRAN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R-bundle-CRAN, load one of these modules using a module load command like:

                  module load R-bundle-CRAN/2023.12-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-bundle-CRAN/2023.12-foss-2023a x x x x x x"}, {"location": "available_software/detail/R/", "title": "R", "text": ""}, {"location": "available_software/detail/R/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which R installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R, load one of these modules using a module load command like:

                  module load R/4.3.2-gfbf-2023a\n
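                   A minimal sketch of a quick check that the R module is usable, using the Rscript command that comes with R:

                   module load R/4.3.2-gfbf-2023a\nRscript -e 'R.version.string'\n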

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R/4.3.2-gfbf-2023a x x x x x x R/4.2.2-foss-2022b x x x x x x R/4.2.1-foss-2022a x x x x x x R/4.2.0-foss-2021b x x x x x x R/4.1.2-foss-2021b x x x x x x R/4.1.0-foss-2021a x x x x x x R/4.0.5-fosscuda-2020b - - - - x - R/4.0.5-foss-2020b - x x x x x R/4.0.4-fosscuda-2020b - - - - x - R/4.0.4-foss-2020b - x x x x x R/4.0.3-fosscuda-2020b - - - - x - R/4.0.3-foss-2020b x x x x x x R/4.0.0-foss-2020a - x x - x x R/3.6.3-foss-2020a - - x - x x R/3.6.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/R2jags/", "title": "R2jags", "text": ""}, {"location": "available_software/detail/R2jags/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which R2jags installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R2jags, load one of these modules using a module load command like:

                  module load R2jags/0.7-1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R2jags/0.7-1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/RASPA2/", "title": "RASPA2", "text": ""}, {"location": "available_software/detail/RASPA2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RASPA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RASPA2, load one of these modules using a module load command like:

                  module load RASPA2/2.0.41-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RASPA2/2.0.41-foss-2020b - x x x x x"}, {"location": "available_software/detail/RAxML-NG/", "title": "RAxML-NG", "text": ""}, {"location": "available_software/detail/RAxML-NG/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RAxML-NG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RAxML-NG, load one of these modules using a module load command like:

                  module load RAxML-NG/1.2.0-GCC-12.3.0\n
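                   A minimal sketch of verifying the installation, assuming the raxml-ng command provided by this module:

                   module load RAxML-NG/1.2.0-GCC-12.3.0\nraxml-ng --version\n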

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RAxML-NG/1.2.0-GCC-12.3.0 x x x x x x RAxML-NG/1.0.3-GCC-10.2.0 - x x - x - RAxML-NG/0.9.0-gompi-2019b - x x - x x RAxML-NG/0.9.0-GCC-8.3.0 - - x - x -"}, {"location": "available_software/detail/RAxML/", "title": "RAxML", "text": ""}, {"location": "available_software/detail/RAxML/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RAxML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RAxML, load one of these modules using a module load command like:

                  module load RAxML/8.2.12-iimpi-2021b-hybrid-avx2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RAxML/8.2.12-iimpi-2021b-hybrid-avx2 x x x - x x RAxML/8.2.12-iimpi-2019b-hybrid-avx2 - x x - x x"}, {"location": "available_software/detail/RDFlib/", "title": "RDFlib", "text": ""}, {"location": "available_software/detail/RDFlib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RDFlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RDFlib, load one of these modules using a module load command like:

                  module load RDFlib/6.2.0-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDFlib/6.2.0-GCCcore-10.3.0 x x x - x x RDFlib/5.0.0-GCCcore-10.2.0 - x x - x x RDFlib/4.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RDKit/", "title": "RDKit", "text": ""}, {"location": "available_software/detail/RDKit/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RDKit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RDKit, load one of these modules using a module load command like:

                  module load RDKit/2022.09.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDKit/2022.09.4-foss-2022a x x x x x x RDKit/2022.03.5-foss-2021b x x x - x x RDKit/2020.09.3-foss-2019b-Python-3.7.4 - x x - x x RDKit/2020.03.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/RDP-Classifier/", "title": "RDP-Classifier", "text": ""}, {"location": "available_software/detail/RDP-Classifier/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RDP-Classifier installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RDP-Classifier, load one of these modules using a module load command like:

                  module load RDP-Classifier/2.13-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDP-Classifier/2.13-Java-11 x x x - x x RDP-Classifier/2.12-Java-1.8 - - - - - x"}, {"location": "available_software/detail/RE2/", "title": "RE2", "text": ""}, {"location": "available_software/detail/RE2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RE2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RE2, load one of these modules using a module load command like:

                  module load RE2/2023-08-01-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RE2/2023-08-01-GCCcore-12.3.0 x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x RE2/2022-06-01-GCCcore-11.3.0 x x x x x x RE2/2022-02-01-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/RLCard/", "title": "RLCard", "text": ""}, {"location": "available_software/detail/RLCard/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RLCard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RLCard, load one of these modules using a module load command like:

                  module load RLCard/1.0.9-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RLCard/1.0.9-foss-2022a x x x - x x"}, {"location": "available_software/detail/RMBlast/", "title": "RMBlast", "text": ""}, {"location": "available_software/detail/RMBlast/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RMBlast installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RMBlast, load one of these modules using a module load command like:

                  module load RMBlast/2.11.0-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RMBlast/2.11.0-gompi-2020b x x x x x x"}, {"location": "available_software/detail/RNA-Bloom/", "title": "RNA-Bloom", "text": ""}, {"location": "available_software/detail/RNA-Bloom/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RNA-Bloom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RNA-Bloom, load one of these modules using a module load command like:

                  module load RNA-Bloom/2.0.1-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RNA-Bloom/2.0.1-GCC-12.3.0 x x x x x x RNA-Bloom/1.2.3-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ROOT/", "title": "ROOT", "text": ""}, {"location": "available_software/detail/ROOT/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ROOT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ROOT, load one of these modules using a module load command like:

                  module load ROOT/6.26.06-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ROOT/6.26.06-foss-2022a x x x x x x ROOT/6.24.06-foss-2021b x x x x x x ROOT/6.20.04-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/RSEM/", "title": "RSEM", "text": ""}, {"location": "available_software/detail/RSEM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RSEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RSEM, load one of these modules using a module load command like:

                  module load RSEM/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RSEM/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/RSeQC/", "title": "RSeQC", "text": ""}, {"location": "available_software/detail/RSeQC/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RSeQC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RSeQC, load one of these modules using a module load command like:

                  module load RSeQC/4.0.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RSeQC/4.0.0-foss-2021b x x x - x x RSeQC/4.0.0-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/RStudio-Server/", "title": "RStudio-Server", "text": ""}, {"location": "available_software/detail/RStudio-Server/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RStudio-Server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RStudio-Server, load one of these modules using a module load command like:

                  module load RStudio-Server/2022.02.0-443-rhel-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RStudio-Server/2022.02.0-443-rhel-x86_64 x x x x x - RStudio-Server/1.3.959-foss-2020a-Java-11-R-4.0.0 - - - - - x"}, {"location": "available_software/detail/RTG-Tools/", "title": "RTG-Tools", "text": ""}, {"location": "available_software/detail/RTG-Tools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RTG-Tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RTG-Tools, load one of these modules using a module load command like:

                  module load RTG-Tools/3.12.1-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RTG-Tools/3.12.1-Java-11 x x x x x x"}, {"location": "available_software/detail/Racon/", "title": "Racon", "text": ""}, {"location": "available_software/detail/Racon/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Racon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Racon, load one of these modules using a module load command like:

                  module load Racon/1.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Racon/1.5.0-GCCcore-12.3.0 x x x x x x Racon/1.5.0-GCCcore-11.3.0 x x x x x x Racon/1.5.0-GCCcore-11.2.0 x x x - x x Racon/1.4.21-GCCcore-10.3.0 x x x - x x Racon/1.4.21-GCCcore-10.2.0 - x x x x x Racon/1.4.13-GCCcore-9.3.0 - x x - x x Racon/1.4.13-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RagTag/", "title": "RagTag", "text": ""}, {"location": "available_software/detail/RagTag/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RagTag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RagTag, load one of these modules using a module load command like:

                  module load RagTag/2.0.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RagTag/2.0.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/Ragout/", "title": "Ragout", "text": ""}, {"location": "available_software/detail/Ragout/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Ragout installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ragout, load one of these modules using a module load command like:

                  module load Ragout/2.3-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ragout/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/RapidJSON/", "title": "RapidJSON", "text": ""}, {"location": "available_software/detail/RapidJSON/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RapidJSON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RapidJSON, load one of these modules using a module load command like:

                  module load RapidJSON/1.1.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.3.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-9.3.0 x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Raven/", "title": "Raven", "text": ""}, {"location": "available_software/detail/Raven/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Raven installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Raven, load one of these modules using a module load command like:

                  module load Raven/1.8.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Raven/1.8.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/Ray-project/", "title": "Ray-project", "text": ""}, {"location": "available_software/detail/Ray-project/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Ray-project installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ray-project, load one of these modules using a module load command like:

                  module load Ray-project/1.13.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ray-project/1.13.0-foss-2021b x x x - x x Ray-project/1.13.0-foss-2021a x x x - x x Ray-project/0.8.4-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Ray/", "title": "Ray", "text": ""}, {"location": "available_software/detail/Ray/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Ray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ray, load one of these modules using a module load command like:

                  module load Ray/0.8.4-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ray/0.8.4-foss-2019b-Python-3.7.4 - x - - - -"}, {"location": "available_software/detail/ReFrame/", "title": "ReFrame", "text": ""}, {"location": "available_software/detail/ReFrame/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ReFrame installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ReFrame, load one of these modules using a module load command like:

                  module load ReFrame/4.2.0\n
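                   A minimal sketch of a first check after loading, assuming the reframe command provided by this module:

                   module load ReFrame/4.2.0\nreframe --version\n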

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ReFrame/4.2.0 x x x x x x ReFrame/3.11.2 - x x x x x ReFrame/3.11.1 - x x - x x ReFrame/3.9.1 - x x - x x ReFrame/3.5.2 - x x - x x"}, {"location": "available_software/detail/Redis/", "title": "Redis", "text": ""}, {"location": "available_software/detail/Redis/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Redis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Redis, load one of these modules using a module load command like:

                  module load Redis/7.0.8-GCC-11.3.0\n
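                   A minimal sketch of checking the server binary that comes with this module:

                   module load Redis/7.0.8-GCC-11.3.0\nredis-server --version\n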

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Redis/7.0.8-GCC-11.3.0 x x x x x x Redis/6.2.6-GCC-11.2.0 x x x - x x Redis/6.2.6-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/RegTools/", "title": "RegTools", "text": ""}, {"location": "available_software/detail/RegTools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RegTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RegTools, load one of these modules using a module load command like:

                  module load RegTools/1.0.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RegTools/1.0.0-foss-2022b x x x x x x RegTools/0.5.2-foss-2021b x x x x x x RegTools/0.5.2-foss-2020b - x x x x x RegTools/0.4.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/RepeatMasker/", "title": "RepeatMasker", "text": ""}, {"location": "available_software/detail/RepeatMasker/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RepeatMasker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RepeatMasker, load one of these modules using a module load command like:

                  module load RepeatMasker/4.1.2-p1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RepeatMasker/4.1.2-p1-foss-2020b x x x x x x"}, {"location": "available_software/detail/ResistanceGA/", "title": "ResistanceGA", "text": ""}, {"location": "available_software/detail/ResistanceGA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ResistanceGA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ResistanceGA, load one of these modules using a module load command like:

                  module load ResistanceGA/4.2-5-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ResistanceGA/4.2-5-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/RevBayes/", "title": "RevBayes", "text": ""}, {"location": "available_software/detail/RevBayes/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RevBayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RevBayes, load one of these modules using a module load command like:

                  module load RevBayes/1.2.1-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RevBayes/1.2.1-gompi-2022a x x x x x x RevBayes/1.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Rgurobi/", "title": "Rgurobi", "text": ""}, {"location": "available_software/detail/Rgurobi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Rgurobi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Rgurobi, load one of these modules using a module load command like:

                  module load Rgurobi/9.5.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rgurobi/9.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/RheoTool/", "title": "RheoTool", "text": ""}, {"location": "available_software/detail/RheoTool/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RheoTool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RheoTool, load one of these modules using a module load command like:

                  module load RheoTool/5.0-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RheoTool/5.0-foss-2019b x x x - x x"}, {"location": "available_software/detail/Rmath/", "title": "Rmath", "text": ""}, {"location": "available_software/detail/Rmath/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Rmath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Rmath, load one of these modules using a module load command like:

                  module load Rmath/4.3.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rmath/4.3.2-foss-2023a x x x x x x Rmath/4.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/RnBeads/", "title": "RnBeads", "text": ""}, {"location": "available_software/detail/RnBeads/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which RnBeads installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RnBeads, load one of these modules using a module load command like:

                  module load RnBeads/2.6.0-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RnBeads/2.6.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/Roary/", "title": "Roary", "text": ""}, {"location": "available_software/detail/Roary/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Roary installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Roary, load one of these modules using a module load command like:

                  module load Roary/3.13.0-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Roary/3.13.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/Ruby/", "title": "Ruby", "text": ""}, {"location": "available_software/detail/Ruby/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Ruby installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ruby, load one of these modules using a module load command like:

                  module load Ruby/3.0.1-GCCcore-11.2.0\n
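                   A minimal sketch of a quick check, using the ruby interpreter provided by this module:

                   module load Ruby/3.0.1-GCCcore-11.2.0\nruby --version\n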

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ruby/3.0.1-GCCcore-11.2.0 x x x x x x Ruby/3.0.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Rust/", "title": "Rust", "text": ""}, {"location": "available_software/detail/Rust/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Rust installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Rust, load one of these modules using a module load command like:

                  module load Rust/1.75.0-GCCcore-12.3.0\n
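                   A minimal sketch of checking the toolchain commands (rustc and cargo) that come with this module:

                   module load Rust/1.75.0-GCCcore-12.3.0\nrustc --version\ncargo --version\n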

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rust/1.75.0-GCCcore-12.3.0 x x x x x x Rust/1.75.0-GCCcore-12.2.0 x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x Rust/1.65.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-10.3.0 x x x - x x Rust/1.56.0-GCCcore-11.2.0 x x x - x x Rust/1.54.0-GCCcore-11.2.0 x x x x x x Rust/1.52.1-GCCcore-10.3.0 x x x x x x Rust/1.52.1-GCCcore-10.2.0 - - x - x - Rust/1.42.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SAMtools/", "title": "SAMtools", "text": ""}, {"location": "available_software/detail/SAMtools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SAMtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SAMtools, load one of these modules using a module load command like:

                  module load SAMtools/1.18-GCC-12.3.0\n
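                   A minimal sketch of a quick check after loading, using the samtools command provided by this module:

                   module load SAMtools/1.18-GCC-12.3.0\nsamtools --version\n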

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SAMtools/1.18-GCC-12.3.0 x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x SAMtools/1.16.1-GCC-11.3.0 x x x x x x SAMtools/1.15-GCC-11.2.0 x x x - x x SAMtools/1.14-GCC-11.2.0 x x x x x x SAMtools/1.13-GCC-11.3.0 x x x x x x SAMtools/1.13-GCC-10.3.0 x x x - x x SAMtools/1.11-GCC-10.2.0 x x x x x x SAMtools/1.10-iccifort-2019.5.281 - x x - x x SAMtools/1.10-GCC-9.3.0 - x x - x x SAMtools/1.10-GCC-8.3.0 - x x - x x SAMtools/0.1.20-intel-2019b - x x - x x SAMtools/0.1.20-GCC-12.3.0 x x x x x x SAMtools/0.1.20-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SBCL/", "title": "SBCL", "text": ""}, {"location": "available_software/detail/SBCL/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SBCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SBCL, load one of these modules using a module load command like:

                  module load SBCL/2.2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SBCL/2.2.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/SCENIC/", "title": "SCENIC", "text": ""}, {"location": "available_software/detail/SCENIC/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SCENIC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCENIC, load one of these modules using a module load command like:

                  module load SCENIC/1.2.4-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCENIC/1.2.4-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/SCGid/", "title": "SCGid", "text": ""}, {"location": "available_software/detail/SCGid/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SCGid installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCGid, load one of these modules using a module load command like:

                  module load SCGid/0.9b0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCGid/0.9b0-foss-2021b x x x - x x"}, {"location": "available_software/detail/SCOTCH/", "title": "SCOTCH", "text": ""}, {"location": "available_software/detail/SCOTCH/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCOTCH, load one of these modules using a module load command like:

                  module load SCOTCH/7.0.3-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCOTCH/7.0.3-gompi-2023a x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x SCOTCH/7.0.1-gompi-2022a x x x x x x SCOTCH/6.1.2-iimpi-2021b x x x x x x SCOTCH/6.1.2-gompi-2021b x x x x x x SCOTCH/6.1.0-iimpi-2021a - x x - x x SCOTCH/6.1.0-iimpi-2020b - x - - - - SCOTCH/6.1.0-gompi-2021a x x x x x x SCOTCH/6.1.0-gompi-2020b x x x x x x SCOTCH/6.0.9-iimpi-2020a - x x - x x SCOTCH/6.0.9-iimpi-2019b - x x - x x SCOTCH/6.0.9-gompi-2020a - x x - x x SCOTCH/6.0.9-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SCons/", "title": "SCons", "text": ""}, {"location": "available_software/detail/SCons/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SCons installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCons, load one of these modules using a module load command like:

                  module load SCons/4.5.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCons/4.5.2-GCCcore-12.3.0 x x x x x x SCons/4.4.0-GCCcore-11.3.0 - - x - x - SCons/4.2.0-GCCcore-11.2.0 x x x - x x SCons/4.1.0.post1-GCCcore-10.3.0 - x x - x x SCons/4.1.0.post1-GCCcore-10.2.0 - x x - x x SCons/3.1.2-GCCcore-9.3.0 - x x - x x SCons/3.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SCopeLoomR/", "title": "SCopeLoomR", "text": ""}, {"location": "available_software/detail/SCopeLoomR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SCopeLoomR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCopeLoomR, load one of these modules using a module load command like:

                  module load SCopeLoomR/0.13.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCopeLoomR/0.13.0-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/SDL2/", "title": "SDL2", "text": ""}, {"location": "available_software/detail/SDL2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SDL2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SDL2, load one of these modules using a module load command like:

                  module load SDL2/2.28.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SDL2/2.28.2-GCCcore-12.3.0 x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x SDL2/2.0.20-GCCcore-11.2.0 x x x x x x SDL2/2.0.14-GCCcore-10.3.0 - x x - x x SDL2/2.0.14-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SDSL/", "title": "SDSL", "text": ""}, {"location": "available_software/detail/SDSL/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SDSL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SDSL, load one of these modules using a module load command like:

                  module load SDSL/2.1.1-20191211-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SDSL/2.1.1-20191211-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SEACells/", "title": "SEACells", "text": ""}, {"location": "available_software/detail/SEACells/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SEACells installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SEACells, load one of these modules using a module load command like:

                  module load SEACells/20230731-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SEACells/20230731-foss-2021a x x x x x x"}, {"location": "available_software/detail/SECAPR/", "title": "SECAPR", "text": ""}, {"location": "available_software/detail/SECAPR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SECAPR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SECAPR, load one of these modules using a module load command like:

                  module load SECAPR/1.1.15-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SECAPR/1.1.15-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/SELFIES/", "title": "SELFIES", "text": ""}, {"location": "available_software/detail/SELFIES/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SELFIES installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SELFIES, load one of these modules using a module load command like:

                  module load SELFIES/2.1.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SELFIES/2.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/SEPP/", "title": "SEPP", "text": ""}, {"location": "available_software/detail/SEPP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SEPP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SEPP, load one of these modules using a module load command like:

                  module load SEPP/4.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SEPP/4.5.1-foss-2022a x x x x x x SEPP/4.5.1-foss-2021b x x x - x x SEPP/4.4.0-foss-2020b - x x x x x SEPP/4.3.10-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SHAP/", "title": "SHAP", "text": ""}, {"location": "available_software/detail/SHAP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SHAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SHAP, load one of these modules using a module load command like:

                  module load SHAP/0.42.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SHAP/0.42.1-foss-2019b-Python-3.7.4 x x x - x x SHAP/0.41.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SISSO%2B%2B/", "title": "SISSO++", "text": ""}, {"location": "available_software/detail/SISSO%2B%2B/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SISSO++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SISSO++, load one of these modules using a module load command like:

                  module load SISSO++/1.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SISSO++/1.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/SISSO/", "title": "SISSO", "text": ""}, {"location": "available_software/detail/SISSO/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SISSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SISSO, load one of these modules using a module load command like:

                  module load SISSO/3.1-20220324-iimpi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SISSO/3.1-20220324-iimpi-2021b x x x - x x SISSO/3.0.2-iimpi-2021b x x x - x x"}, {"location": "available_software/detail/SKESA/", "title": "SKESA", "text": ""}, {"location": "available_software/detail/SKESA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SKESA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SKESA, load one of these modules using a module load command like:

                  module load SKESA/2.4.0-gompi-2021b_saute.1.3.0_1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SKESA/2.4.0-gompi-2021b_saute.1.3.0_1 x x x - x x"}, {"location": "available_software/detail/SLATEC/", "title": "SLATEC", "text": ""}, {"location": "available_software/detail/SLATEC/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SLATEC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SLATEC, load one of these modules using a module load command like:

                  module load SLATEC/4.1-GCC-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLATEC/4.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SLEPc/", "title": "SLEPc", "text": ""}, {"location": "available_software/detail/SLEPc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SLEPc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SLEPc, load one of these modules using a module load command like:

                  module load SLEPc/3.18.2-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLEPc/3.18.2-intel-2021b x x x x x x SLEPc/3.17.2-foss-2022a x x x x x x SLEPc/3.15.1-foss-2021a - x x - x x SLEPc/3.12.2-intel-2019b-Python-3.7.4 - - x - x - SLEPc/3.12.2-intel-2019b-Python-2.7.16 - x x - x x SLEPc/3.12.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SLiM/", "title": "SLiM", "text": ""}, {"location": "available_software/detail/SLiM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SLiM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SLiM, load one of these modules using a module load command like:

                  module load SLiM/3.4-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLiM/3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/SMAP/", "title": "SMAP", "text": ""}, {"location": "available_software/detail/SMAP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SMAP, load one of these modules using a module load command like:

                  module load SMAP/4.6.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMAP/4.6.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/SMC%2B%2B/", "title": "SMC++", "text": ""}, {"location": "available_software/detail/SMC%2B%2B/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SMC++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SMC++, load one of these modules using a module load command like:

                  module load SMC++/1.15.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMC++/1.15.4-foss-2022a x x x - x x"}, {"location": "available_software/detail/SMV/", "title": "SMV", "text": ""}, {"location": "available_software/detail/SMV/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SMV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SMV, load one of these modules using a module load command like:

                  module load SMV/6.7.17-iccifort-2020.4.304\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMV/6.7.17-iccifort-2020.4.304 - x x - x x"}, {"location": "available_software/detail/SNAP-ESA-python/", "title": "SNAP-ESA-python", "text": ""}, {"location": "available_software/detail/SNAP-ESA-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SNAP-ESA-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SNAP-ESA-python, load one of these modules using a module load command like:

                  module load SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18 x x x x x - SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-1.8-Python-2.7.18 x x x x - x"}, {"location": "available_software/detail/SNAP-ESA/", "title": "SNAP-ESA", "text": ""}, {"location": "available_software/detail/SNAP-ESA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SNAP-ESA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SNAP-ESA, load one of these modules using a module load command like:

                  module load SNAP-ESA/9.0.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP-ESA/9.0.0-Java-11 x x x x x x SNAP-ESA/9.0.0-Java-1.8 x x x x - x"}, {"location": "available_software/detail/SNAP/", "title": "SNAP", "text": ""}, {"location": "available_software/detail/SNAP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SNAP, load one of these modules using a module load command like:

                  module load SNAP/2.0.1-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP/2.0.1-GCC-12.2.0 x x x x x x SNAP/2.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/SOAPdenovo-Trans/", "title": "SOAPdenovo-Trans", "text": ""}, {"location": "available_software/detail/SOAPdenovo-Trans/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SOAPdenovo-Trans installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SOAPdenovo-Trans, load one of these modules using a module load command like:

                  module load SOAPdenovo-Trans/1.0.5-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SOAPdenovo-Trans/1.0.5-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/SPAdes/", "title": "SPAdes", "text": ""}, {"location": "available_software/detail/SPAdes/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SPAdes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SPAdes, load one of these modules using a module load command like:

                  module load SPAdes/3.15.5-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPAdes/3.15.5-GCC-11.3.0 x x x x x x SPAdes/3.15.4-GCC-12.3.0 x x x x x x SPAdes/3.15.4-GCC-12.2.0 x x x x x x SPAdes/3.15.3-GCC-11.2.0 x x x - x x SPAdes/3.15.2-GCC-10.2.0-Python-2.7.18 - x x x x x SPAdes/3.15.2-GCC-10.2.0 - x x x x x SPAdes/3.14.1-GCC-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SPM/", "title": "SPM", "text": ""}, {"location": "available_software/detail/SPM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SPM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SPM, load one of these modules using a module load command like:

                  module load SPM/12.5_r7771-MATLAB-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPM/12.5_r7771-MATLAB-2021b x x x - x x"}, {"location": "available_software/detail/SPOTPY/", "title": "SPOTPY", "text": ""}, {"location": "available_software/detail/SPOTPY/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SPOTPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SPOTPY, load one of these modules using a module load command like:

                  module load SPOTPY/1.5.14-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPOTPY/1.5.14-intel-2021b x x x - x x"}, {"location": "available_software/detail/SQLite/", "title": "SQLite", "text": ""}, {"location": "available_software/detail/SQLite/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SQLite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SQLite, load one of these modules using a module load command like:

                  module load SQLite/3.43.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SQLite/3.43.1-GCCcore-13.2.0 x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x SQLite/3.38.3-GCCcore-11.3.0 x x x x x x SQLite/3.36-GCCcore-11.2.0 x x x x x x SQLite/3.35.4-GCCcore-10.3.0 x x x x x x SQLite/3.33.0-GCCcore-10.2.0 x x x x x x SQLite/3.31.1-GCCcore-9.3.0 x x x x x x SQLite/3.29.0-GCCcore-8.3.0 x x x x x x SQLite/3.27.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/SRA-Toolkit/", "title": "SRA-Toolkit", "text": ""}, {"location": "available_software/detail/SRA-Toolkit/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SRA-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SRA-Toolkit, load one of these modules using a module load command like:

                  module load SRA-Toolkit/3.0.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRA-Toolkit/3.0.3-gompi-2022a x x x x x x SRA-Toolkit/3.0.0-gompi-2021b x x x x x x SRA-Toolkit/3.0.0-centos_linux64 x x x - x x SRA-Toolkit/2.10.9-gompi-2020b - x x - x x SRA-Toolkit/2.10.8-gompi-2020a - x x - x x SRA-Toolkit/2.10.4-gompi-2019b - x x - x x SRA-Toolkit/2.9.6-1-centos_linux64 - x x - x x"}, {"location": "available_software/detail/SRPRISM/", "title": "SRPRISM", "text": ""}, {"location": "available_software/detail/SRPRISM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SRPRISM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SRPRISM, load one of these modules using a module load command like:

                  module load SRPRISM/3.1.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRPRISM/3.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SRST2/", "title": "SRST2", "text": ""}, {"location": "available_software/detail/SRST2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SRST2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SRST2, load one of these modules using a module load command like:

                  module load SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SSPACE_Basic/", "title": "SSPACE_Basic", "text": ""}, {"location": "available_software/detail/SSPACE_Basic/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SSPACE_Basic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SSPACE_Basic, load one of these modules using a module load command like:

                  module load SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18 - x x - x -"}, {"location": "available_software/detail/SSW/", "title": "SSW", "text": ""}, {"location": "available_software/detail/SSW/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SSW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SSW, load one of these modules using a module load command like:

                  module load SSW/1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SSW/1.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/STACEY/", "title": "STACEY", "text": ""}, {"location": "available_software/detail/STACEY/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which STACEY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STACEY, load one of these modules using a module load command like:

                  module load STACEY/1.2.5-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STACEY/1.2.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/STAR/", "title": "STAR", "text": ""}, {"location": "available_software/detail/STAR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which STAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STAR, load one of these modules using a module load command like:

                  module load STAR/2.7.11a-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STAR/2.7.11a-GCC-12.3.0 x x x x x x STAR/2.7.10b-GCC-11.3.0 x x x x x x STAR/2.7.9a-GCC-11.2.0 x x x x x x STAR/2.7.6a-GCC-10.2.0 - x x x x x STAR/2.7.4a-GCC-9.3.0 - x x - x - STAR/2.7.3a-GCC-8.3.0 - x x - x - STAR/2.7.2b-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/STREAM/", "title": "STREAM", "text": ""}, {"location": "available_software/detail/STREAM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which STREAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STREAM, load one of these modules using a module load command like:

                  module load STREAM/5.10-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STREAM/5.10-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/STRique/", "title": "STRique", "text": ""}, {"location": "available_software/detail/STRique/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which STRique installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STRique, load one of these modules using a module load command like:

                  module load STRique/0.4.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STRique/0.4.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/SUNDIALS/", "title": "SUNDIALS", "text": ""}, {"location": "available_software/detail/SUNDIALS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SUNDIALS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SUNDIALS, load one of these modules using a module load command like:

                  module load SUNDIALS/6.6.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SUNDIALS/6.6.0-foss-2023a x x x x x x SUNDIALS/6.2.0-intel-2021b x x x - x x SUNDIALS/5.7.0-intel-2020b - x x x x x SUNDIALS/5.7.0-fosscuda-2020b - - - - x - SUNDIALS/5.7.0-foss-2020b - x x x x x SUNDIALS/5.1.0-intel-2019b - x x - x x SUNDIALS/5.1.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/SUPPA/", "title": "SUPPA", "text": ""}, {"location": "available_software/detail/SUPPA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SUPPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SUPPA, load one of these modules using a module load command like:

                  module load SUPPA/2.3-20231005-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SUPPA/2.3-20231005-foss-2022b x x x x x x"}, {"location": "available_software/detail/SVIM/", "title": "SVIM", "text": ""}, {"location": "available_software/detail/SVIM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SVIM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SVIM, load one of these modules using a module load command like:

                  module load SVIM/2.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SVIM/2.0.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SWIG/", "title": "SWIG", "text": ""}, {"location": "available_software/detail/SWIG/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SWIG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SWIG, load one of these modules using a module load command like:

                  module load SWIG/4.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SWIG/4.1.1-GCCcore-12.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.2.0 x x x x x x SWIG/4.0.2-GCCcore-10.3.0 x x x x x x SWIG/4.0.2-GCCcore-10.2.0 x x x x x x SWIG/4.0.1-GCCcore-9.3.0 x x x x x x SWIG/4.0.1-GCCcore-8.3.0 - x x - x x SWIG/3.0.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Sabre/", "title": "Sabre", "text": ""}, {"location": "available_software/detail/Sabre/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Sabre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sabre, load one of these modules using a module load command like:

                  module load Sabre/2013-09-28-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sabre/2013-09-28-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/Sailfish/", "title": "Sailfish", "text": ""}, {"location": "available_software/detail/Sailfish/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Sailfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sailfish, load one of these modules using a module load command like:

                  module load Sailfish/0.10.1-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sailfish/0.10.1-gompi-2019b - x - - - x"}, {"location": "available_software/detail/Salmon/", "title": "Salmon", "text": ""}, {"location": "available_software/detail/Salmon/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Salmon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Salmon, load one of these modules using a module load command like:

                  module load Salmon/1.9.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Salmon/1.9.0-GCC-11.3.0 x x x x x x Salmon/1.4.0-gompi-2020b - x x x x x Salmon/1.1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Sambamba/", "title": "Sambamba", "text": ""}, {"location": "available_software/detail/Sambamba/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Sambamba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sambamba, load one of these modules using a module load command like:

                  module load Sambamba/1.0.1-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sambamba/1.0.1-GCC-11.3.0 x x x x x x Sambamba/0.8.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Satsuma2/", "title": "Satsuma2", "text": ""}, {"location": "available_software/detail/Satsuma2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Satsuma2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Satsuma2, load one of these modules using a module load command like:

                  module load Satsuma2/20220304-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Satsuma2/20220304-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/ScaFaCoS/", "title": "ScaFaCoS", "text": ""}, {"location": "available_software/detail/ScaFaCoS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ScaFaCoS, load one of these modules using a module load command like:

                  module load ScaFaCoS/1.0.1-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ScaFaCoS/1.0.1-intel-2020a - x x - x x ScaFaCoS/1.0.1-foss-2021b x x x - x x ScaFaCoS/1.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/ScaLAPACK/", "title": "ScaLAPACK", "text": ""}, {"location": "available_software/detail/ScaLAPACK/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ScaLAPACK, load one of these modules using a module load command like:

                  module load ScaLAPACK/2.2.0-gompi-2023b-fb\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022a-fb x x x x x x ScaLAPACK/2.1.0-iimpi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompic-2020b x - - - x - ScaLAPACK/2.1.0-gompi-2021b-fb x x x x x x ScaLAPACK/2.1.0-gompi-2021a-fb x x x x x x ScaLAPACK/2.1.0-gompi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompi-2020b x x x x x x ScaLAPACK/2.1.0-gompi-2020a - x x - x x ScaLAPACK/2.0.2-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SciPy-bundle/", "title": "SciPy-bundle", "text": ""}, {"location": "available_software/detail/SciPy-bundle/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SciPy-bundle, load one of these modules using a module load command like:

                  module load SciPy-bundle/2023.11-gfbf-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SciPy-bundle/2023.11-gfbf-2023b x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x SciPy-bundle/2022.05-intel-2022a x x x x x x SciPy-bundle/2022.05-foss-2022a x x x x x x SciPy-bundle/2021.10-intel-2021b x x x x x x SciPy-bundle/2021.10-foss-2021b-Python-2.7.18 x x x x x x SciPy-bundle/2021.10-foss-2021b x x x x x x SciPy-bundle/2021.05-intel-2021a - x x - x x SciPy-bundle/2021.05-gomkl-2021a x x x x x x SciPy-bundle/2021.05-foss-2021a x x x x x x SciPy-bundle/2020.11-intelcuda-2020b - - - - x - SciPy-bundle/2020.11-intel-2020b - x x - x x SciPy-bundle/2020.11-fosscuda-2020b x - - - x - SciPy-bundle/2020.11-foss-2020b-Python-2.7.18 - x x x x x SciPy-bundle/2020.11-foss-2020b x x x x x x SciPy-bundle/2020.03-iomkl-2020a-Python-3.8.2 - x - - - - SciPy-bundle/2020.03-intel-2020a-Python-3.8.2 x x x x x x SciPy-bundle/2020.03-intel-2020a-Python-2.7.18 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-3.8.2 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-2.7.18 - - x - x x SciPy-bundle/2019.10-intel-2019b-Python-3.7.4 - x x - x x SciPy-bundle/2019.10-intel-2019b-Python-2.7.16 - x x - x x SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 x x x - x x SciPy-bundle/2019.10-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Seaborn/", "title": "Seaborn", "text": ""}, {"location": "available_software/detail/Seaborn/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Seaborn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Seaborn, load one of these modules using a module load command like:

                  module load Seaborn/0.13.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Seaborn/0.13.2-gfbf-2023a x x x x x x Seaborn/0.12.2-foss-2022b x x x x x x Seaborn/0.12.1-foss-2022a x x x x x x Seaborn/0.11.2-foss-2021b x x x x x x Seaborn/0.11.2-foss-2021a x x x x x x Seaborn/0.11.1-intel-2020b - x x - x x Seaborn/0.11.1-fosscuda-2020b x - - - x - Seaborn/0.11.1-foss-2020b - x x x x x Seaborn/0.10.1-intel-2020b - x x - x x Seaborn/0.10.1-intel-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.1-foss-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.0-intel-2019b-Python-3.7.4 - x x - x x Seaborn/0.10.0-foss-2019b-Python-3.7.4 - x x - x x Seaborn/0.9.1-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SemiBin/", "title": "SemiBin", "text": ""}, {"location": "available_software/detail/SemiBin/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SemiBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SemiBin, load one of these modules using a module load command like:

                  module load SemiBin/2.0.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SemiBin/2.0.2-foss-2022a-CUDA-11.7.0 x - x - x - SemiBin/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Sentence-Transformers/", "title": "Sentence-Transformers", "text": ""}, {"location": "available_software/detail/Sentence-Transformers/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Sentence-Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sentence-Transformers, load one of these modules using a module load command like:

                  module load Sentence-Transformers/2.2.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sentence-Transformers/2.2.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/SentencePiece/", "title": "SentencePiece", "text": ""}, {"location": "available_software/detail/SentencePiece/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SentencePiece installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SentencePiece, load one of these modules using a module load command like:

                  module load SentencePiece/0.1.99-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SentencePiece/0.1.99-GCC-12.2.0 x x x x x x SentencePiece/0.1.97-GCC-11.3.0 x x x x x x SentencePiece/0.1.96-GCC-10.3.0 x x x - x x SentencePiece/0.1.85-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/SeqAn/", "title": "SeqAn", "text": ""}, {"location": "available_software/detail/SeqAn/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SeqAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeqAn, load one of these modules using a module load command like:

                  module load SeqAn/2.4.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqAn/2.4.0-GCCcore-11.2.0 x x x - x x SeqAn/2.4.0-GCCcore-10.2.0 - x x x x x SeqAn/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SeqKit/", "title": "SeqKit", "text": ""}, {"location": "available_software/detail/SeqKit/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SeqKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeqKit, load one of these modules using a module load command like:

                  module load SeqKit/2.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqKit/2.1.0 - x x - x x"}, {"location": "available_software/detail/SeqLib/", "title": "SeqLib", "text": ""}, {"location": "available_software/detail/SeqLib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SeqLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeqLib, load one of these modules using a module load command like:

                  module load SeqLib/1.2.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqLib/1.2.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/Serf/", "title": "Serf", "text": ""}, {"location": "available_software/detail/Serf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Serf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Serf, load one of these modules using a module load command like:

                  module load Serf/1.3.9-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Serf/1.3.9-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Seurat/", "title": "Seurat", "text": ""}, {"location": "available_software/detail/Seurat/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Seurat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Seurat, load one of these modules using a module load command like:

                  module load Seurat/4.3.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Seurat/4.3.0-foss-2022a-R-4.2.1 x x x x x x Seurat/4.3.0-foss-2021b-R-4.1.2 x x x - x x Seurat/4.2.0-foss-2022a-R-4.2.1 x x x - x x Seurat/4.0.1-foss-2020b-R-4.0.3 - x x x x x Seurat/3.1.5-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/SeuratData/", "title": "SeuratData", "text": ""}, {"location": "available_software/detail/SeuratData/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SeuratData installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeuratData, load one of these modules using a module load command like:

                  module load SeuratData/20210514-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratData/20210514-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/SeuratDisk/", "title": "SeuratDisk", "text": ""}, {"location": "available_software/detail/SeuratDisk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SeuratDisk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeuratDisk, load one of these modules using a module load command like:

                  module load SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/SeuratWrappers/", "title": "SeuratWrappers", "text": ""}, {"location": "available_software/detail/SeuratWrappers/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SeuratWrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeuratWrappers, load one of these modules using a module load command like:

                  module load SeuratWrappers/20210528-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratWrappers/20210528-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/Shapely/", "title": "Shapely", "text": ""}, {"location": "available_software/detail/Shapely/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Shapely installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Shapely, load one of these modules using a module load command like:

                  module load Shapely/2.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Shapely/2.0.1-gfbf-2023a x x x x x x Shapely/2.0.1-foss-2022b x x x x x x Shapely/1.8a1-iccifort-2020.4.304 - x x x x x Shapely/1.8a1-GCC-10.3.0 x - - - x - Shapely/1.8a1-GCC-10.2.0 - x x x x x Shapely/1.8.2-foss-2022a x x x x x x Shapely/1.8.2-foss-2021b x x x x x x Shapely/1.8.1.post1-GCC-11.2.0 x x x - x x Shapely/1.7.1-GCC-9.3.0-Python-3.8.2 - x x - x x Shapely/1.7.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Shasta/", "title": "Shasta", "text": ""}, {"location": "available_software/detail/Shasta/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Shasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Shasta, load one of these modules using a module load command like:

                  module load Shasta/0.8.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Shasta/0.8.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Short-Pair/", "title": "Short-Pair", "text": ""}, {"location": "available_software/detail/Short-Pair/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Short-Pair installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Short-Pair, load one of these modules using a module load command like:

                  module load Short-Pair/20170125-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Short-Pair/20170125-foss-2021b x x x - x x"}, {"location": "available_software/detail/SiNVICT/", "title": "SiNVICT", "text": ""}, {"location": "available_software/detail/SiNVICT/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SiNVICT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SiNVICT, load one of these modules using a module load command like:

                  module load SiNVICT/1.0-20180817-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SiNVICT/1.0-20180817-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/Sibelia/", "title": "Sibelia", "text": ""}, {"location": "available_software/detail/Sibelia/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Sibelia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sibelia, load one of these modules using a module load command like:

                  module load Sibelia/3.0.7-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sibelia/3.0.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimNIBS/", "title": "SimNIBS", "text": ""}, {"location": "available_software/detail/SimNIBS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SimNIBS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimNIBS, load one of these modules using a module load command like:

                  module load SimNIBS/3.2.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimNIBS/3.2.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimPEG/", "title": "SimPEG", "text": ""}, {"location": "available_software/detail/SimPEG/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SimPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimPEG, load one of these modules using a module load command like:

                  module load SimPEG/0.18.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimPEG/0.18.1-intel-2021b x x x - x x SimPEG/0.18.1-foss-2021b x x x - x x SimPEG/0.14.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SimpleElastix/", "title": "SimpleElastix", "text": ""}, {"location": "available_software/detail/SimpleElastix/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SimpleElastix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimpleElastix, load one of these modules using a module load command like:

                  module load SimpleElastix/1.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimpleElastix/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SimpleITK/", "title": "SimpleITK", "text": ""}, {"location": "available_software/detail/SimpleITK/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SimpleITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimpleITK, load one of these modules using a module load command like:

                  module load SimpleITK/2.1.1.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimpleITK/2.1.1.2-foss-2022a x x x x x x SimpleITK/2.1.0-fosscuda-2020b x - - - x - SimpleITK/2.1.0-foss-2020b - x x x x x SimpleITK/1.2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SlamDunk/", "title": "SlamDunk", "text": ""}, {"location": "available_software/detail/SlamDunk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SlamDunk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SlamDunk, load one of these modules using a module load command like:

                  module load SlamDunk/0.4.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SlamDunk/0.4.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/Sniffles/", "title": "Sniffles", "text": ""}, {"location": "available_software/detail/Sniffles/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Sniffles installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sniffles, load one of these modules using a module load command like:

                  module load Sniffles/2.0.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sniffles/2.0.7-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/SoX/", "title": "SoX", "text": ""}, {"location": "available_software/detail/SoX/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SoX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SoX, load one of these modules using a module load command like:

                  module load SoX/14.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SoX/14.4.2-GCCcore-11.3.0 x x x x x x SoX/14.4.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Spark/", "title": "Spark", "text": ""}, {"location": "available_software/detail/Spark/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Spark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Spark, load one of these modules using a module load command like:

                  module load Spark/3.5.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Spark/3.5.0-foss-2023a x x x x x x Spark/3.2.1-foss-2021b x x x - x x Spark/3.1.1-fosscuda-2020b - - - - x - Spark/2.4.5-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/SpatialDE/", "title": "SpatialDE", "text": ""}, {"location": "available_software/detail/SpatialDE/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SpatialDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SpatialDE, load one of these modules using a module load command like:

                  module load SpatialDE/1.1.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SpatialDE/1.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Spyder/", "title": "Spyder", "text": ""}, {"location": "available_software/detail/Spyder/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Spyder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Spyder, load one of these modules using a module load command like:

                  module load Spyder/4.1.5-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Spyder/4.1.5-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SqueezeMeta/", "title": "SqueezeMeta", "text": ""}, {"location": "available_software/detail/SqueezeMeta/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SqueezeMeta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SqueezeMeta, load one of these modules using a module load command like:

                  module load SqueezeMeta/1.5.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SqueezeMeta/1.5.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Squidpy/", "title": "Squidpy", "text": ""}, {"location": "available_software/detail/Squidpy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Squidpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Squidpy, load one of these modules using a module load command like:

                  module load Squidpy/1.2.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Squidpy/1.2.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Stacks/", "title": "Stacks", "text": ""}, {"location": "available_software/detail/Stacks/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Stacks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Stacks, load one of these modules using a module load command like:

                  module load Stacks/2.53-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Stacks/2.53-iccifort-2019.5.281 - x x - x - Stacks/2.5-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/Stata/", "title": "Stata", "text": ""}, {"location": "available_software/detail/Stata/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Stata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Stata, load one of these modules using a module load command like:

                  module load Stata/15\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Stata/15 - x x x x x"}, {"location": "available_software/detail/Statistics-R/", "title": "Statistics-R", "text": ""}, {"location": "available_software/detail/Statistics-R/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Statistics-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Statistics-R, load one of these modules using a module load command like:

                  module load Statistics-R/0.34-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Statistics-R/0.34-foss-2020a - x x - x x"}, {"location": "available_software/detail/StringTie/", "title": "StringTie", "text": ""}, {"location": "available_software/detail/StringTie/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which StringTie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using StringTie, load one of these modules using a module load command like:

                  module load StringTie/2.2.1-GCC-11.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty StringTie/2.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x StringTie/2.2.1-GCC-11.2.0 x x x x x x StringTie/2.1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/Structure/", "title": "Structure", "text": ""}, {"location": "available_software/detail/Structure/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Structure installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Structure, load one of these modules using a module load command like:

                  module load Structure/2.3.4-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Structure/2.3.4-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/Structure_threader/", "title": "Structure_threader", "text": ""}, {"location": "available_software/detail/Structure_threader/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Structure_threader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Structure_threader, load one of these modules using a module load command like:

                  module load Structure_threader/1.3.10-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Structure_threader/1.3.10-foss-2022b x x x x x x"}, {"location": "available_software/detail/SuAVE-biomat/", "title": "SuAVE-biomat", "text": ""}, {"location": "available_software/detail/SuAVE-biomat/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SuAVE-biomat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuAVE-biomat, load one of these modules using a module load command like:

                  module load SuAVE-biomat/2.0.0-20230815-intel-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuAVE-biomat/2.0.0-20230815-intel-2023a x x x x x x"}, {"location": "available_software/detail/Subread/", "title": "Subread", "text": ""}, {"location": "available_software/detail/Subread/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Subread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Subread, load one of these modules using a module load command like:

                  module load Subread/2.0.3-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Subread/2.0.3-GCC-9.3.0 - x x - x - Subread/2.0.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Subversion/", "title": "Subversion", "text": ""}, {"location": "available_software/detail/Subversion/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Subversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Subversion, load one of these modules using a module load command like:

                  module load Subversion/1.14.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Subversion/1.14.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/SuiteSparse/", "title": "SuiteSparse", "text": ""}, {"location": "available_software/detail/SuiteSparse/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SuiteSparse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuiteSparse, load one of these modules using a module load command like:

                  module load SuiteSparse/7.1.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuiteSparse/7.1.0-foss-2023a x x x x x x SuiteSparse/5.13.0-foss-2022b-METIS-5.1.0 x x x x x x SuiteSparse/5.13.0-foss-2022a-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-intel-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021a-METIS-5.1.0 x x x x x x SuiteSparse/5.8.1-foss-2020b-METIS-5.1.0 x x x x x x SuiteSparse/5.7.1-intel-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.7.1-foss-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-intel-2019b-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-foss-2019b-METIS-5.1.0 x x x - x x"}, {"location": "available_software/detail/SuperLU/", "title": "SuperLU", "text": ""}, {"location": "available_software/detail/SuperLU/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SuperLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuperLU, load one of these modules using a module load command like:

                  module load SuperLU/5.2.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuperLU/5.2.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/SuperLU_DIST/", "title": "SuperLU_DIST", "text": ""}, {"location": "available_software/detail/SuperLU_DIST/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SuperLU_DIST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuperLU_DIST, load one of these modules using a module load command like:

                  module load SuperLU_DIST/8.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuperLU_DIST/8.1.0-foss-2022a x - - x - - SuperLU_DIST/5.4.0-intel-2020a-trisolve-merge - x x - x x"}, {"location": "available_software/detail/Szip/", "title": "Szip", "text": ""}, {"location": "available_software/detail/Szip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Szip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Szip, load one of these modules using a module load command like:

                  module load Szip/2.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Szip/2.1.1-GCCcore-12.3.0 x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x Szip/2.1.1-GCCcore-11.3.0 x x x x x x Szip/2.1.1-GCCcore-11.2.0 x x x x x x Szip/2.1.1-GCCcore-10.3.0 x x x x x x Szip/2.1.1-GCCcore-10.2.0 x x x x x x Szip/2.1.1-GCCcore-9.3.0 x x x x x x Szip/2.1.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/TALON/", "title": "TALON", "text": ""}, {"location": "available_software/detail/TALON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TALON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TALON, load one of these modules using a module load command like:

                  module load TALON/5.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TALON/5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/TAMkin/", "title": "TAMkin", "text": ""}, {"location": "available_software/detail/TAMkin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TAMkin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TAMkin, load one of these modules using a module load command like:

                  module load TAMkin/1.2.6-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TAMkin/1.2.6-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/TCLAP/", "title": "TCLAP", "text": ""}, {"location": "available_software/detail/TCLAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TCLAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TCLAP, load one of these modules using a module load command like:

                  module load TCLAP/1.2.4-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TCLAP/1.2.4-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/TELEMAC-MASCARET/", "title": "TELEMAC-MASCARET", "text": ""}, {"location": "available_software/detail/TELEMAC-MASCARET/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TELEMAC-MASCARET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TELEMAC-MASCARET, load one of these modules using a module load command like:

                  module load TELEMAC-MASCARET/8p3r1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TELEMAC-MASCARET/8p3r1-foss-2021b x x x - x x"}, {"location": "available_software/detail/TEtranscripts/", "title": "TEtranscripts", "text": ""}, {"location": "available_software/detail/TEtranscripts/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TEtranscripts installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TEtranscripts, load one of these modules using a module load command like:

                  module load TEtranscripts/2.2.0-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TEtranscripts/2.2.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/TOBIAS/", "title": "TOBIAS", "text": ""}, {"location": "available_software/detail/TOBIAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TOBIAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TOBIAS, load one of these modules using a module load command like:

                  module load TOBIAS/0.12.12-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TOBIAS/0.12.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/TOPAS/", "title": "TOPAS", "text": ""}, {"location": "available_software/detail/TOPAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TOPAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TOPAS, load one of these modules using a module load command like:

                  module load TOPAS/3.9-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TOPAS/3.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/TRF/", "title": "TRF", "text": ""}, {"location": "available_software/detail/TRF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TRF, load one of these modules using a module load command like:

                  module load TRF/4.09.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TRF/4.09.1-GCCcore-11.3.0 x x x x x x TRF/4.09.1-GCCcore-11.2.0 x x x - x x TRF/4.09.1-GCCcore-10.2.0 x x x x x x TRF/4.09-linux64 - - - - - x"}, {"location": "available_software/detail/TRUST4/", "title": "TRUST4", "text": ""}, {"location": "available_software/detail/TRUST4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TRUST4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TRUST4, load one of these modules using a module load command like:

                  module load TRUST4/1.0.6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TRUST4/1.0.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Tcl/", "title": "Tcl", "text": ""}, {"location": "available_software/detail/Tcl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tcl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Tcl, load one of these modules using a module load command like:

                  module load Tcl/8.6.13-GCCcore-13.2.0\n
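
                  To check that the loaded Tcl interpreter is the one being picked up, a minimal sketch is to ask tclsh for its version:

                  echo 'puts $tcl_version' | tclsh\n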

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tcl/8.6.13-GCCcore-13.2.0 x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x Tcl/8.6.12-GCCcore-11.3.0 x x x x x x Tcl/8.6.11-GCCcore-11.2.0 x x x x x x Tcl/8.6.11-GCCcore-10.3.0 x x x x x x Tcl/8.6.10-GCCcore-10.2.0 x x x x x x Tcl/8.6.10-GCCcore-9.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/TensorFlow/", "title": "TensorFlow", "text": ""}, {"location": "available_software/detail/TensorFlow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TensorFlow, load one of these modules using a module load command like:

                  module load TensorFlow/2.13.0-foss-2023a\n
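
                  If you need GPU support, pick one of the builds with a -CUDA- suffix from the table below. As a minimal sanity check after loading (using the Python interpreter that the module brings in):

                  python -c 'import tensorflow as tf; print(tf.__version__)'\n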

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TensorFlow/2.13.0-foss-2023a x x x x x x TensorFlow/2.13.0-foss-2022b x x x x x x TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0 x - x - x - TensorFlow/2.11.0-foss-2022a x x x x x x TensorFlow/2.8.4-foss-2021b - - - x - - TensorFlow/2.7.1-foss-2021b-CUDA-11.4.1 x - - - x - TensorFlow/2.7.1-foss-2021b x x x x x x TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1 x - - - x - TensorFlow/2.6.0-foss-2021a x x x x x x TensorFlow/2.5.3-foss-2021a x x x - x x TensorFlow/2.5.0-fosscuda-2020b x - - - x - TensorFlow/2.5.0-foss-2020b - x x x x x TensorFlow/2.4.1-fosscuda-2020b x - - - x - TensorFlow/2.4.1-foss-2020b x x x x x x TensorFlow/2.3.1-foss-2020a-Python-3.8.2 - x x - x x TensorFlow/2.2.3-foss-2020b - x x x x x TensorFlow/2.2.2-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.2.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.1.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/1.15.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Theano/", "title": "Theano", "text": ""}, {"location": "available_software/detail/Theano/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Theano installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Theano, load one of these modules using a module load command like:

                  module load Theano/1.1.2-intel-2021b-PyMC\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Theano/1.1.2-intel-2021b-PyMC x x x - x x Theano/1.1.2-intel-2020b-PyMC - - x - x x Theano/1.1.2-fosscuda-2020b-PyMC x - - - x - Theano/1.1.2-foss-2020b-PyMC - x x x x x Theano/1.0.4-intel-2019b-Python-3.7.4 - - x - x x Theano/1.0.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Tk/", "title": "Tk", "text": ""}, {"location": "available_software/detail/Tk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Tk, load one of these modules using a module load command like:

                  module load Tk/8.6.13-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tk/8.6.13-GCCcore-12.3.0 x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x Tk/8.6.12-GCCcore-11.3.0 x x x x x x Tk/8.6.11-GCCcore-11.2.0 x x x x x x Tk/8.6.11-GCCcore-10.3.0 x x x x x x Tk/8.6.10-GCCcore-10.2.0 x x x x x x Tk/8.6.10-GCCcore-9.3.0 x x x x x x Tk/8.6.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Tkinter/", "title": "Tkinter", "text": ""}, {"location": "available_software/detail/Tkinter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tkinter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Tkinter, load one of these modules using a module load command like:

                  module load Tkinter/3.11.3-GCCcore-12.3.0\n
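
                  A minimal check that the Tkinter bindings are importable with the loaded Python (importing does not require an X display; only opening a window does):

                  python -c 'import tkinter; print(tkinter.TkVersion)'\n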

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x Tkinter/3.10.4-GCCcore-11.3.0 x x x x x x Tkinter/3.9.6-GCCcore-11.2.0 x x x x x x Tkinter/3.9.5-GCCcore-10.3.0 x x x x x x Tkinter/3.8.6-GCCcore-10.2.0 x x x x x x Tkinter/3.8.2-GCCcore-9.3.0 x x x x x x Tkinter/3.7.4-GCCcore-8.3.0 - x x - x x Tkinter/2.7.18-GCCcore-10.2.0 - x x x x x Tkinter/2.7.18-GCCcore-9.3.0 - x x - x x Tkinter/2.7.16-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Togl/", "title": "Togl", "text": ""}, {"location": "available_software/detail/Togl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Togl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Togl, load one of these modules using a module load command like:

                  module load Togl/2.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Togl/2.0-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Tombo/", "title": "Tombo", "text": ""}, {"location": "available_software/detail/Tombo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tombo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Tombo, load one of these modules using a module load command like:

                  module load Tombo/1.5.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tombo/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/TopHat/", "title": "TopHat", "text": ""}, {"location": "available_software/detail/TopHat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TopHat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TopHat, load one of these modules using a module load command like:

                  module load TopHat/2.1.2-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TopHat/2.1.2-iimpi-2020a - x x - x x TopHat/2.1.2-gompi-2020a - x x - x x TopHat/2.1.2-GCC-11.3.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-11.2.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/TransDecoder/", "title": "TransDecoder", "text": ""}, {"location": "available_software/detail/TransDecoder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TransDecoder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TransDecoder, load one of these modules using a module load command like:

                  module load TransDecoder/5.5.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TransDecoder/5.5.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/TranscriptClean/", "title": "TranscriptClean", "text": ""}, {"location": "available_software/detail/TranscriptClean/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TranscriptClean installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TranscriptClean, load one of these modules using a module load command like:

                  module load TranscriptClean/2.0.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TranscriptClean/2.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/Transformers/", "title": "Transformers", "text": ""}, {"location": "available_software/detail/Transformers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Transformers, load one of these modules using a module load command like:

                  module load Transformers/4.30.2-foss-2022b\n
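
                  A minimal check that the library is importable with the Python provided by the module:

                  python -c 'import transformers; print(transformers.__version__)'\n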

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Transformers/4.30.2-foss-2022b x x x x x x Transformers/4.24.0-foss-2022a x x x x x x Transformers/4.21.1-foss-2021b x x x - x x Transformers/4.20.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/TreeMix/", "title": "TreeMix", "text": ""}, {"location": "available_software/detail/TreeMix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TreeMix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TreeMix, load one of these modules using a module load command like:

                  module load TreeMix/1.13-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TreeMix/1.13-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Trilinos/", "title": "Trilinos", "text": ""}, {"location": "available_software/detail/Trilinos/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trilinos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Trilinos, load one of these modules using a module load command like:

                  module load Trilinos/12.12.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trilinos/12.12.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Trim_Galore/", "title": "Trim_Galore", "text": ""}, {"location": "available_software/detail/Trim_Galore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trim_Galore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Trim_Galore, load one of these modules using a module load command like:

                  module load Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18 - x x x x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-3.7.4 - x x - x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Trimmomatic/", "title": "Trimmomatic", "text": ""}, {"location": "available_software/detail/Trimmomatic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trimmomatic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Trimmomatic, load one of these modules using a module load command like:

                  module load Trimmomatic/0.39-Java-11\n
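
                  Trimmomatic is distributed as a Java archive rather than a binary. With EasyBuild-installed modules the install directory is typically exposed through an $EBROOTTRIMMOMATIC environment variable; the exact jar name below is an assumption and may differ per version:

                  java -jar $EBROOTTRIMMOMATIC/trimmomatic-0.39.jar -version\n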

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trimmomatic/0.39-Java-11 x x x x x x Trimmomatic/0.38-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Trinity/", "title": "Trinity", "text": ""}, {"location": "available_software/detail/Trinity/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trinity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Trinity, load one of these modules using a module load command like:

                  module load Trinity/2.15.1-foss-2022a\n
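
                  A sketch of a typical paired-end assembly run (reads_1.fq and reads_2.fq are placeholder input files; adjust the CPU and memory values to what your job requested):

                  Trinity --seqType fq --left reads_1.fq --right reads_2.fq --CPU 8 --max_memory 20G\n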

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trinity/2.15.1-foss-2022a x x x x x x Trinity/2.10.0-foss-2019b-Python-3.7.4 - x x - x x Trinity/2.9.1-foss-2019b-Python-2.7.16 - x x - x x Trinity/2.8.5-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/Triton/", "title": "Triton", "text": ""}, {"location": "available_software/detail/Triton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Triton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Triton, load one of these modules using a module load command like:

                  module load Triton/1.1.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Triton/1.1.1-foss-2022a-CUDA-11.7.0 - - x - - -"}, {"location": "available_software/detail/Trycycler/", "title": "Trycycler", "text": ""}, {"location": "available_software/detail/Trycycler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trycycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Trycycler, load one of these modules using a module load command like:

                  module load Trycycler/0.3.3-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trycycler/0.3.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/TurboVNC/", "title": "TurboVNC", "text": ""}, {"location": "available_software/detail/TurboVNC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TurboVNC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using TurboVNC, load one of these modules using a module load command like:

                  module load TurboVNC/2.2.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TurboVNC/2.2.6-GCCcore-11.2.0 x x x x x x TurboVNC/2.2.3-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/UCC/", "title": "UCC", "text": ""}, {"location": "available_software/detail/UCC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UCC, load one of these modules using a module load command like:

                  module load UCC/1.2.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCC/1.2.0-GCCcore-13.2.0 x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x UCC/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/UCLUST/", "title": "UCLUST", "text": ""}, {"location": "available_software/detail/UCLUST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCLUST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UCLUST, load one of these modules using a module load command like:

                  module load UCLUST/1.2.22q-i86linux64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCLUST/1.2.22q-i86linux64 - x x - x x"}, {"location": "available_software/detail/UCX-CUDA/", "title": "UCX-CUDA", "text": ""}, {"location": "available_software/detail/UCX-CUDA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UCX-CUDA, load one of these modules using a module load command like:

                  module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - UCX-CUDA/1.12.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - UCX-CUDA/1.11.2-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - UCX-CUDA/1.10.0-GCCcore-10.3.0-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/UCX/", "title": "UCX", "text": ""}, {"location": "available_software/detail/UCX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UCX, load one of these modules using a module load command like:

                  module load UCX/1.15.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCX/1.15.0-GCCcore-13.2.0 x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x UCX/1.12.1-GCCcore-11.3.0 x x x x x x UCX/1.11.2-GCCcore-11.2.0 x x x x x x UCX/1.10.0-GCCcore-10.3.0 x x x x x x UCX/1.9.0-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - UCX/1.9.0-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x UCX/1.9.0-GCCcore-10.2.0 x x x x x x UCX/1.8.0-GCCcore-9.3.0 x x x x x x UCX/1.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UDUNITS/", "title": "UDUNITS", "text": ""}, {"location": "available_software/detail/UDUNITS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UDUNITS, load one of these modules using a module load command like:

                  module load UDUNITS/2.2.28-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-10.3.0 x x x x x x UDUNITS/2.2.26-foss-2020a - x x - x x UDUNITS/2.2.26-GCCcore-10.2.0 x x x x x x UDUNITS/2.2.26-GCCcore-9.3.0 - x x - x x UDUNITS/2.2.26-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UFL/", "title": "UFL", "text": ""}, {"location": "available_software/detail/UFL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UFL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UFL, load one of these modules using a module load command like:

                  module load UFL/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UFL/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UMI-tools/", "title": "UMI-tools", "text": ""}, {"location": "available_software/detail/UMI-tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UMI-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UMI-tools, load one of these modules using a module load command like:

                  module load UMI-tools/1.0.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UMI-tools/1.0.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UQTk/", "title": "UQTk", "text": ""}, {"location": "available_software/detail/UQTk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UQTk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UQTk, load one of these modules using a module load command like:

                  module load UQTk/3.1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UQTk/3.1.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/USEARCH/", "title": "USEARCH", "text": ""}, {"location": "available_software/detail/USEARCH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which USEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using USEARCH, load one of these modules using a module load command like:

                  module load USEARCH/11.0.667-i86linux32\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty USEARCH/11.0.667-i86linux32 x x x x x x"}, {"location": "available_software/detail/UnZip/", "title": "UnZip", "text": ""}, {"location": "available_software/detail/UnZip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UnZip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UnZip, load one of these modules using a module load command like:

                  module load UnZip/6.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UnZip/6.0-GCCcore-13.2.0 x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x UnZip/6.0-GCCcore-11.3.0 x x x x x x UnZip/6.0-GCCcore-11.2.0 x x x x x x UnZip/6.0-GCCcore-10.3.0 x x x x x x UnZip/6.0-GCCcore-10.2.0 x x x x x x UnZip/6.0-GCCcore-9.3.0 x x x x x x"}, {"location": "available_software/detail/UniFrac/", "title": "UniFrac", "text": ""}, {"location": "available_software/detail/UniFrac/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UniFrac installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using UniFrac, load one of these modules using a module load command like:

                  module load UniFrac/1.3.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UniFrac/1.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Unicycler/", "title": "Unicycler", "text": ""}, {"location": "available_software/detail/Unicycler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Unicycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Unicycler, load one of these modules using a module load command like:

                  module load Unicycler/0.4.8-gompi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Unicycler/0.4.8-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Unidecode/", "title": "Unidecode", "text": ""}, {"location": "available_software/detail/Unidecode/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Unidecode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Unidecode, load one of these modules using a module load command like:

                  module load Unidecode/1.3.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Unidecode/1.3.6-GCCcore-11.3.0 x x x x x x Unidecode/1.1.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VASP/", "title": "VASP", "text": ""}, {"location": "available_software/detail/VASP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VASP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VASP, load one of these modules using a module load command like:

                  module load VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-gomkl-2023a x x x x x x VASP/6.4.2-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.2-gomkl-2021a - x x x x x VASP/6.4.2-foss-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-foss-2023a x x x x x x VASP/6.4.2-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.4.1-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.1-gomkl-2021a - x x x x x VASP/6.4.1-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.3.1-gomkl-2021a-VASPsol-20210413-vtst-184-Wannier90-3.1.0 x x x x x x VASP/6.3.1-gomkl-2021a - x x x x x VASP/6.3.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.3.0-gomkl-2021a-VASPsol-20210413 - x x x x x VASP/6.2.1-gomkl-2021a - x x x x x VASP/6.2.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.2.0-intel-2020a - x x - x x VASP/6.2.0-gomkl-2020a - x x x x x VASP/6.2.0-foss-2020a - x x - x x VASP/6.1.2-intel-2020a - x x - x x VASP/6.1.2-gomkl-2020a - x x x x x VASP/6.1.2-foss-2020a - x x - x x VASP/5.4.4-iomkl-2020b-vtst-176-mt-20180516 x x x x x x VASP/5.4.4-intel-2019b-mt-20180516-ncl - x x - x x VASP/5.4.4-intel-2019b-mt-20180516 - x x - x x"}, {"location": "available_software/detail/VBZ-Compression/", "title": "VBZ-Compression", "text": ""}, {"location": "available_software/detail/VBZ-Compression/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VBZ-Compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VBZ-Compression, load one of these modules using a module load command like:

                  module load VBZ-Compression/1.0.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VBZ-Compression/1.0.3-gompi-2022a x x x x x x VBZ-Compression/1.0.1-gompi-2020b - - x x x x"}, {"location": "available_software/detail/VCFtools/", "title": "VCFtools", "text": ""}, {"location": "available_software/detail/VCFtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VCFtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VCFtools, load one of these modules using a module load command like:

                  module load VCFtools/0.1.16-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VCFtools/0.1.16-iccifort-2019.5.281 - x x - x x VCFtools/0.1.16-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/VEP/", "title": "VEP", "text": ""}, {"location": "available_software/detail/VEP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VEP, load one of these modules using a module load command like:

                  module load VEP/107-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VEP/107-GCC-11.3.0 x x x - x x VEP/105-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/VESTA/", "title": "VESTA", "text": ""}, {"location": "available_software/detail/VESTA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VESTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VESTA, load one of these modules using a module load command like:

                  module load VESTA/3.5.8-gtk3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VESTA/3.5.8-gtk3 x x x - x x"}, {"location": "available_software/detail/VMD/", "title": "VMD", "text": ""}, {"location": "available_software/detail/VMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VMD, load one of these modules using a module load command like:

                  module load VMD/1.9.4a51-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VMD/1.9.4a51-foss-2020b - x x x x x"}, {"location": "available_software/detail/VMTK/", "title": "VMTK", "text": ""}, {"location": "available_software/detail/VMTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VMTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VMTK, load one of these modules using a module load command like:

                  module load VMTK/1.4.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VMTK/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VSCode/", "title": "VSCode", "text": ""}, {"location": "available_software/detail/VSCode/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VSCode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VSCode, load one of these modules using a module load command like:

                  module load VSCode/1.85.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VSCode/1.85.0 x x x x x x"}, {"location": "available_software/detail/VSEARCH/", "title": "VSEARCH", "text": ""}, {"location": "available_software/detail/VSEARCH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VSEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VSEARCH, load one of these modules using a module load command like:

                  module load VSEARCH/2.22.1-GCC-11.3.0\n
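
                  After loading, the vsearch binary is available directly; to confirm the version:

                  vsearch --version\n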

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VSEARCH/2.22.1-GCC-11.3.0 x x x x x x VSEARCH/2.18.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/VTK/", "title": "VTK", "text": ""}, {"location": "available_software/detail/VTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VTK, load one of these modules using a module load command like:

                  module load VTK/9.2.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VTK/9.2.2-foss-2022a x x x x x x VTK/9.2.0.rc2-foss-2022a x x x - x x VTK/9.1.0-foss-2021b x x x - x x VTK/9.0.1-fosscuda-2020b x - - - x - VTK/9.0.1-foss-2021a - x x - x x VTK/9.0.1-foss-2020b - x x x x x VTK/8.2.0-foss-2020a-Python-3.8.2 - x x - x x VTK/8.2.0-foss-2019b-Python-3.7.4 - x x - x x VTK/8.2.0-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/VTune/", "title": "VTune", "text": ""}, {"location": "available_software/detail/VTune/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VTune installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VTune, load one of these modules using a module load command like:

                  module load VTune/2019_update2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VTune/2019_update2 - - - - - x"}, {"location": "available_software/detail/Vala/", "title": "Vala", "text": ""}, {"location": "available_software/detail/Vala/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Vala installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Vala, load one of these modules using a module load command like:

                  module load Vala/0.52.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Vala/0.52.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Valgrind/", "title": "Valgrind", "text": ""}, {"location": "available_software/detail/Valgrind/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Valgrind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Valgrind, load one of these modules using a module load command like:

                  module load Valgrind/3.20.0-gompi-2022a\n
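
                  A typical use after loading the module is running your own executable under the default memcheck tool (./my_program is a placeholder for your application):

                  valgrind --leak-check=full ./my_program\n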

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Valgrind/3.20.0-gompi-2022a x x x - x x Valgrind/3.19.0-gompi-2022a x x x - x x Valgrind/3.18.1-iimpi-2021b x x x - x x Valgrind/3.18.1-gompi-2021b x x x - x x Valgrind/3.17.0-gompi-2021a x x x - x x"}, {"location": "available_software/detail/VarScan/", "title": "VarScan", "text": ""}, {"location": "available_software/detail/VarScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VarScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VarScan, load one of these modules using a module load command like:

                  module load VarScan/2.4.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VarScan/2.4.4-Java-11 x x x - x x"}, {"location": "available_software/detail/Velvet/", "title": "Velvet", "text": ""}, {"location": "available_software/detail/Velvet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Velvet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Velvet, load one of these modules using a module load command like:

                  module load Velvet/1.2.10-foss-2023a-mt-kmer_191\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Velvet/1.2.10-foss-2023a-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-11.2.0-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-8.3.0-mt-kmer_191 - x x - x x"}, {"location": "available_software/detail/VirSorter2/", "title": "VirSorter2", "text": ""}, {"location": "available_software/detail/VirSorter2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VirSorter2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VirSorter2, load one of these modules using a module load command like:

                  module load VirSorter2/2.2.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VirSorter2/2.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/VisPy/", "title": "VisPy", "text": ""}, {"location": "available_software/detail/VisPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VisPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using VisPy, load one of these modules using a module load command like:

                  module load VisPy/0.12.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VisPy/0.12.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Voro%2B%2B/", "title": "Voro++", "text": ""}, {"location": "available_software/detail/Voro%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Voro++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Voro++, load one of these modules using a module load command like:

                  module load Voro++/0.4.6-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Voro++/0.4.6-intel-2019b - x x - x x Voro++/0.4.6-foss-2019b - x x - x x Voro++/0.4.6-GCCcore-11.2.0 x x x - x x Voro++/0.4.6-GCCcore-10.3.0 - x x - x x Voro++/0.4.6-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/WFA2/", "title": "WFA2", "text": ""}, {"location": "available_software/detail/WFA2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WFA2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using WFA2, load one of these modules using a module load command like:

                  module load WFA2/2.3.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WFA2/2.3.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/WHAM/", "title": "WHAM", "text": ""}, {"location": "available_software/detail/WHAM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WHAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using WHAM, load one of these modules using a module load command like:

                  module load WHAM/2.0.10.2-intel-2020a-kj_mol\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WHAM/2.0.10.2-intel-2020a-kj_mol - x x - x x WHAM/2.0.10.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/WIEN2k/", "title": "WIEN2k", "text": ""}, {"location": "available_software/detail/WIEN2k/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WIEN2k installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using WIEN2k, load one of these modules using a module load command like:

                  module load WIEN2k/21.1-intel-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WIEN2k/21.1-intel-2021a - x x - x x WIEN2k/19.2-intel-2020b - x x x x x"}, {"location": "available_software/detail/WPS/", "title": "WPS", "text": ""}, {"location": "available_software/detail/WPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using WPS, load one of these modules using a module load command like:

                  module load WPS/4.1-intel-2019b-dmpar\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WPS/4.1-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/WRF/", "title": "WRF", "text": ""}, {"location": "available_software/detail/WRF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using WRF, load one of these modules using a module load command like:

                  module load WRF/4.1.3-intel-2019b-dmpar\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WRF/4.1.3-intel-2019b-dmpar - x x - x x WRF/3.9.1.1-intel-2020b-dmpar - x x x x x WRF/3.8.0-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/Wannier90/", "title": "Wannier90", "text": ""}, {"location": "available_software/detail/Wannier90/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Wannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Wannier90, load one of these modules using a module load command like:

                  module load Wannier90/3.1.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Wannier90/3.1.0-intel-2022a - - x - x x Wannier90/3.1.0-intel-2020b - x x x x x Wannier90/3.1.0-intel-2020a - x x - x x Wannier90/3.1.0-gomkl-2023a x x x x x x Wannier90/3.1.0-gomkl-2021a x x x x x x Wannier90/3.1.0-foss-2023a x x x x x x Wannier90/3.1.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Wayland/", "title": "Wayland", "text": ""}, {"location": "available_software/detail/Wayland/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Wayland installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Wayland, load one of these modules using a module load command like:

                  module load Wayland/1.22.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Wayland/1.22.0-GCCcore-12.3.0 x x x x x x Wayland/1.21.0-GCCcore-11.2.0 x x x x x x Wayland/1.20.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Waylandpp/", "title": "Waylandpp", "text": ""}, {"location": "available_software/detail/Waylandpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Waylandpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Waylandpp, load one of these modules using a module load command like:

                  module load Waylandpp/1.0.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Waylandpp/1.0.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/WebKitGTK%2B/", "title": "WebKitGTK+", "text": ""}, {"location": "available_software/detail/WebKitGTK%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WebKitGTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using WebKitGTK+, load one of these modules using a module load command like:

                  module load WebKitGTK+/2.37.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WebKitGTK+/2.37.1-GCC-11.2.0 x x x x x x WebKitGTK+/2.27.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/WhatsHap/", "title": "WhatsHap", "text": ""}, {"location": "available_software/detail/WhatsHap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WhatsHap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using WhatsHap, load one of these modules using a module load command like:

                  module load WhatsHap/1.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WhatsHap/1.7-foss-2022a x x x x x x WhatsHap/1.4-foss-2021b x x x - x x WhatsHap/1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/Winnowmap/", "title": "Winnowmap", "text": ""}, {"location": "available_software/detail/Winnowmap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Winnowmap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Winnowmap, load one of these modules using a module load command like:

                  module load Winnowmap/1.0-GCC-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Winnowmap/1.0-GCC-8.3.0 - x - - - x"}, {"location": "available_software/detail/WisecondorX/", "title": "WisecondorX", "text": ""}, {"location": "available_software/detail/WisecondorX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WisecondorX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using WisecondorX, load one of these modules using a module load command like:

                  module load WisecondorX/1.1.6-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WisecondorX/1.1.6-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/X11/", "title": "X11", "text": ""}, {"location": "available_software/detail/X11/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which X11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using X11, load one of these modules using a module load command like:

                  module load X11/20230603-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty X11/20230603-GCCcore-12.3.0 x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x X11/20220504-GCCcore-11.3.0 x x x x x x X11/20210802-GCCcore-11.2.0 x x x x x x X11/20210518-GCCcore-10.3.0 x x x x x x X11/20201008-GCCcore-10.2.0 x x x x x x X11/20200222-GCCcore-9.3.0 x x x x x x X11/20190717-GCCcore-8.3.0 x x x - x x X11/20190311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/XCFun/", "title": "XCFun", "text": ""}, {"location": "available_software/detail/XCFun/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XCFun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XCFun, load one of these modules using a module load command like:

                  module load XCFun/2.1.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XCFun/2.1.1-GCCcore-12.2.0 x x x x x x XCFun/2.1.1-GCCcore-11.3.0 - x x x x x XCFun/2.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/XCrySDen/", "title": "XCrySDen", "text": ""}, {"location": "available_software/detail/XCrySDen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XCrySDen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XCrySDen, load one of these modules using a module load command like:

                  module load XCrySDen/1.6.2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XCrySDen/1.6.2-intel-2022a x x x - x x XCrySDen/1.6.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/XGBoost/", "title": "XGBoost", "text": ""}, {"location": "available_software/detail/XGBoost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XGBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XGBoost, load one of these modules using a module load command like:

                  module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\n
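
                  These XGBoost modules provide the xgboost Python package, so a quick sanity check after loading is to import it from Python. A minimal sketch (note that, as the table below shows, the CUDA build is only installed on the GPU cluster accelgor):

                  module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\npython -c 'import xgboost; print(xgboost.__version__)'\n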

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XGBoost/1.7.2-foss-2022a-CUDA-11.7.0 x - - - - - XGBoost/1.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/XML-Compile/", "title": "XML-Compile", "text": ""}, {"location": "available_software/detail/XML-Compile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XML-Compile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XML-Compile, load one of these modules using a module load command like:

                  module load XML-Compile/1.63-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XML-Compile/1.63-GCCcore-12.2.0 x x x x x x XML-Compile/1.63-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/XML-LibXML/", "title": "XML-LibXML", "text": ""}, {"location": "available_software/detail/XML-LibXML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XML-LibXML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XML-LibXML, load one of these modules using a module load command like:

                  module load XML-LibXML/2.0208-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.3.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.2.0 x x x x x x XML-LibXML/2.0206-GCCcore-10.2.0 - x x x x x XML-LibXML/2.0205-GCCcore-9.3.0 - x x - x x XML-LibXML/2.0201-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/XZ/", "title": "XZ", "text": ""}, {"location": "available_software/detail/XZ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XZ, load one of these modules using a module load command like:

                  module load XZ/5.4.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XZ/5.4.4-GCCcore-13.2.0 x x x x x x XZ/5.4.2-GCCcore-12.3.0 x x x x x x XZ/5.2.7-GCCcore-12.2.0 x x x x x x XZ/5.2.5-GCCcore-11.3.0 x x x x x x XZ/5.2.5-GCCcore-11.2.0 x x x x x x XZ/5.2.5-GCCcore-10.3.0 x x x x x x XZ/5.2.5-GCCcore-10.2.0 x x x x x x XZ/5.2.5-GCCcore-9.3.0 x x x x x x XZ/5.2.4-GCCcore-8.3.0 x x x x x x XZ/5.2.4-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Xerces-C%2B%2B/", "title": "Xerces-C++", "text": ""}, {"location": "available_software/detail/Xerces-C%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Xerces-C++, load one of these modules using a module load command like:

                  module load Xerces-C++/3.2.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/XlsxWriter/", "title": "XlsxWriter", "text": ""}, {"location": "available_software/detail/XlsxWriter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XlsxWriter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XlsxWriter, load one of these modules using a module load command like:

                  module load XlsxWriter/3.1.9-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XlsxWriter/3.1.9-GCCcore-13.2.0 x x x x x x XlsxWriter/3.1.3-GCCcore-12.3.0 x x x x x x XlsxWriter/3.1.2-GCCcore-12.2.0 x x x x x x XlsxWriter/3.0.8-GCCcore-11.3.0 x x x x x x XlsxWriter/3.0.2-GCCcore-11.2.0 x x x x x x XlsxWriter/1.4.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Xvfb/", "title": "Xvfb", "text": ""}, {"location": "available_software/detail/Xvfb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Xvfb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Xvfb, load one of these modules using a module load command like:

                  module load Xvfb/21.1.8-GCCcore-12.3.0\n
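
                  Xvfb provides a virtual X display, which lets GUI-dependent tools run in batch jobs without a physical screen. A minimal sketch, assuming display number :99 is free on the node (the number itself is arbitrary):

                  module load Xvfb/21.1.8-GCCcore-12.3.0\nXvfb :99 -screen 0 1280x1024x24 &\nexport DISPLAY=:99\n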

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x Xvfb/21.1.3-GCCcore-11.3.0 x x x x x x Xvfb/1.20.13-GCCcore-11.2.0 x x x x x x Xvfb/1.20.11-GCCcore-10.3.0 x x x x x x Xvfb/1.20.9-GCCcore-10.2.0 x x x x x x Xvfb/1.20.9-GCCcore-9.3.0 - x x - x x Xvfb/1.20.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/YACS/", "title": "YACS", "text": ""}, {"location": "available_software/detail/YACS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which YACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using YACS, load one of these modules using a module load command like:

                  module load YACS/0.1.8-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YACS/0.1.8-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/YANK/", "title": "YANK", "text": ""}, {"location": "available_software/detail/YANK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which YANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using YANK, load one of these modules using a module load command like:

                  module load YANK/0.25.2-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YANK/0.25.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/YAXT/", "title": "YAXT", "text": ""}, {"location": "available_software/detail/YAXT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which YAXT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using YAXT, load one of these modules using a module load command like:

                  module load YAXT/0.9.1-gompi-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YAXT/0.9.1-gompi-2021a x x x - x x YAXT/0.6.2-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/Yambo/", "title": "Yambo", "text": ""}, {"location": "available_software/detail/Yambo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Yambo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Yambo, load one of these modules using a module load command like:

                  module load Yambo/5.1.2-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Yambo/5.1.2-intel-2021b x x x x x x"}, {"location": "available_software/detail/Yasm/", "title": "Yasm", "text": ""}, {"location": "available_software/detail/Yasm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Yasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Yasm, load one of these modules using a module load command like:

                  module load Yasm/1.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Yasm/1.3.0-GCCcore-12.3.0 x x x x x x Yasm/1.3.0-GCCcore-12.2.0 x x x x x x Yasm/1.3.0-GCCcore-11.3.0 x x x x x x Yasm/1.3.0-GCCcore-11.2.0 x x x x x x Yasm/1.3.0-GCCcore-10.3.0 x x x x x x Yasm/1.3.0-GCCcore-10.2.0 x x x x x x Yasm/1.3.0-GCCcore-9.3.0 - x x - x x Yasm/1.3.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Z3/", "title": "Z3", "text": ""}, {"location": "available_software/detail/Z3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Z3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Z3, load one of these modules using a module load command like:

                  module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x Z3/4.10.2-GCCcore-11.3.0 x x x x x x Z3/4.8.12-GCCcore-11.2.0 x x x x x x Z3/4.8.11-GCCcore-10.3.0 x x x x x x Z3/4.8.10-GCCcore-10.2.0 - x x x x x Z3/4.8.9-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/Zeo%2B%2B/", "title": "Zeo++", "text": ""}, {"location": "available_software/detail/Zeo%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Zeo++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Zeo++, load one of these modules using a module load command like:

                  module load Zeo++/0.3-intel-compilers-2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zeo++/0.3-intel-compilers-2023.1.0 x x x x x x"}, {"location": "available_software/detail/ZeroMQ/", "title": "ZeroMQ", "text": ""}, {"location": "available_software/detail/ZeroMQ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ZeroMQ, load one of these modules using a module load command like:

                  module load ZeroMQ/4.3.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-12.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-10.3.0 x x x x x x ZeroMQ/4.3.3-GCCcore-10.2.0 x x x x x x ZeroMQ/4.3.2-GCCcore-9.3.0 x x x x x x ZeroMQ/4.3.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zip/", "title": "Zip", "text": ""}, {"location": "available_software/detail/Zip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Zip, load one of these modules using a module load command like:

                  module load Zip/3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zip/3.0-GCCcore-12.3.0 x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x Zip/3.0-GCCcore-11.3.0 x x x x x x Zip/3.0-GCCcore-11.2.0 x x x x x x Zip/3.0-GCCcore-10.3.0 x x x x x x Zip/3.0-GCCcore-10.2.0 x x x x x x Zip/3.0-GCCcore-9.3.0 - x x - x x Zip/3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zopfli/", "title": "Zopfli", "text": ""}, {"location": "available_software/detail/Zopfli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Zopfli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Zopfli, load one of these modules using a module load command like:

                  module load Zopfli/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zopfli/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/adjustText/", "title": "adjustText", "text": ""}, {"location": "available_software/detail/adjustText/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which adjustText installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using adjustText, load one of these modules using a module load command like:

                  module load adjustText/0.7.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty adjustText/0.7.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/aiohttp/", "title": "aiohttp", "text": ""}, {"location": "available_software/detail/aiohttp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which aiohttp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using aiohttp, load one of these modules using a module load command like:

                  module load aiohttp/3.8.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty aiohttp/3.8.5-GCCcore-12.3.0 x x x x - x aiohttp/3.8.5-GCCcore-12.2.0 x x x x x x aiohttp/3.8.3-GCCcore-11.3.0 x x x x x x aiohttp/3.8.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/alevin-fry/", "title": "alevin-fry", "text": ""}, {"location": "available_software/detail/alevin-fry/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alevin-fry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alevin-fry, load one of these modules using a module load command like:

                  module load alevin-fry/0.4.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alevin-fry/0.4.3-GCCcore-11.2.0 - x - - - -"}, {"location": "available_software/detail/alleleCount/", "title": "alleleCount", "text": ""}, {"location": "available_software/detail/alleleCount/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alleleCount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alleleCount, load one of these modules using a module load command like:

                  module load alleleCount/4.3.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alleleCount/4.3.0-GCC-12.2.0 x x x x x x alleleCount/4.2.1-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/alleleIntegrator/", "title": "alleleIntegrator", "text": ""}, {"location": "available_software/detail/alleleIntegrator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alleleIntegrator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alleleIntegrator, load one of these modules using a module load command like:

                  module load alleleIntegrator/0.8.8-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alleleIntegrator/0.8.8-foss-2022b-R-4.2.2 x x x x x x alleleIntegrator/0.8.8-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/alsa-lib/", "title": "alsa-lib", "text": ""}, {"location": "available_software/detail/alsa-lib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alsa-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alsa-lib, load one of these modules using a module load command like:

                  module load alsa-lib/1.2.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alsa-lib/1.2.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/anadama2/", "title": "anadama2", "text": ""}, {"location": "available_software/detail/anadama2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which anadama2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using anadama2, load one of these modules using a module load command like:

                  module load anadama2/0.10.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anadama2/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/angsd/", "title": "angsd", "text": ""}, {"location": "available_software/detail/angsd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which angsd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using angsd, load one of these modules using a module load command like:

                  module load angsd/0.940-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty angsd/0.940-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/anndata/", "title": "anndata", "text": ""}, {"location": "available_software/detail/anndata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which anndata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using anndata, load one of these modules using a module load command like:

                  module load anndata/0.10.5.post1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anndata/0.10.5.post1-foss-2023a x x x x x x anndata/0.9.2-foss-2021a x x x x x x anndata/0.8.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ant/", "title": "ant", "text": ""}, {"location": "available_software/detail/ant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ant, load one of these modules using a module load command like:

                  module load ant/1.10.12-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ant/1.10.12-Java-17 x x x x x x ant/1.10.12-Java-11 x x x x x x ant/1.10.11-Java-11 x x x - x x ant/1.10.9-Java-11 x x x x x x ant/1.10.8-Java-11 - x x - x x ant/1.10.7-Java-11 - x x - x x ant/1.10.6-Java-1.8 - x x - x x"}, {"location": "available_software/detail/antiSMASH/", "title": "antiSMASH", "text": ""}, {"location": "available_software/detail/antiSMASH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which antiSMASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using antiSMASH, load one of these modules using a module load command like:

                  module load antiSMASH/6.0.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty antiSMASH/6.0.1-foss-2020b - x x x x x antiSMASH/5.2.0-foss-2020b - x x x x x antiSMASH/5.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/anvio/", "title": "anvio", "text": ""}, {"location": "available_software/detail/anvio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which anvio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using anvio, load one of these modules using a module load command like:

                  module load anvio/8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anvio/8-foss-2022b x x x x x x anvio/6.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/any2fasta/", "title": "any2fasta", "text": ""}, {"location": "available_software/detail/any2fasta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which any2fasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using any2fasta, load one of these modules using a module load command like:

                  module load any2fasta/0.4.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty any2fasta/0.4.2-GCCcore-10.2.0 - x x - x x any2fasta/0.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/apex/", "title": "apex", "text": ""}, {"location": "available_software/detail/apex/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which apex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using apex, load one of these modules using a module load command like:

                  module load apex/20210420-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty apex/20210420-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/archspec/", "title": "archspec", "text": ""}, {"location": "available_software/detail/archspec/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which archspec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using archspec, load one of these modules using a module load command like:

                  module load archspec/0.1.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty archspec/0.1.3-GCCcore-11.2.0 x x x - x x archspec/0.1.2-GCCcore-10.3.0 - x x - x x archspec/0.1.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x archspec/0.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/argtable/", "title": "argtable", "text": ""}, {"location": "available_software/detail/argtable/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which argtable installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using argtable, load one of these modules using a module load command like:

                  module load argtable/2.13-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty argtable/2.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/aria2/", "title": "aria2", "text": ""}, {"location": "available_software/detail/aria2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which aria2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using aria2, load one of these modules using a module load command like:

                  module load aria2/1.35.0-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty aria2/1.35.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/arpack-ng/", "title": "arpack-ng", "text": ""}, {"location": "available_software/detail/arpack-ng/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using arpack-ng, load one of these modules using a module load command like:

                  module load arpack-ng/3.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arpack-ng/3.9.0-foss-2023a x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x arpack-ng/3.8.0-foss-2022a x x x x x x arpack-ng/3.8.0-foss-2021b x x x x x x arpack-ng/3.8.0-foss-2021a x x x x x x arpack-ng/3.7.0-intel-2020a - x x - x x arpack-ng/3.7.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/arrow-R/", "title": "arrow-R", "text": ""}, {"location": "available_software/detail/arrow-R/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which arrow-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using arrow-R, load one of these modules using a module load command like:

                  module load arrow-R/14.0.0.2-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arrow-R/14.0.0.2-foss-2023a-R-4.3.2 x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x arrow-R/8.0.0-foss-2022a-R-4.2.1 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.2.0 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.1.2 x x x x x x arrow-R/6.0.0.2-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/arrow/", "title": "arrow", "text": ""}, {"location": "available_software/detail/arrow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which arrow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using arrow, load one of these modules using a module load command like:

                  module load arrow/0.17.1-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arrow/0.17.1-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-atk/", "title": "at-spi2-atk", "text": ""}, {"location": "available_software/detail/at-spi2-atk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using at-spi2-atk, load one of these modules using a module load command like:

                  module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-10.3.0 x x x - x x at-spi2-atk/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-atk/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-core/", "title": "at-spi2-core", "text": ""}, {"location": "available_software/detail/at-spi2-core/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using at-spi2-core, load one of these modules using a module load command like:

                  module load at-spi2-core/2.49.90-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty at-spi2-core/2.49.90-GCCcore-12.3.0 x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x at-spi2-core/2.44.1-GCCcore-11.3.0 x x x x x x at-spi2-core/2.40.3-GCCcore-11.2.0 x x x x x x at-spi2-core/2.40.2-GCCcore-10.3.0 x x x - x x at-spi2-core/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-core/2.34.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/atools/", "title": "atools", "text": ""}, {"location": "available_software/detail/atools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which atools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using atools, load one of these modules using a module load command like:

                  module load atools/1.5.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty atools/1.5.1-GCCcore-11.2.0 x x x - x x atools/1.4.6-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/attr/", "title": "attr", "text": ""}, {"location": "available_software/detail/attr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which attr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using attr, load one of these modules using a module load command like:

                  module load attr/2.5.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attr/2.5.1-GCCcore-11.3.0 x x x x x x attr/2.5.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/attrdict/", "title": "attrdict", "text": ""}, {"location": "available_software/detail/attrdict/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which attrdict installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using attrdict, load one of these modules using a module load command like:

                  module load attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/attrdict3/", "title": "attrdict3", "text": ""}, {"location": "available_software/detail/attrdict3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which attrdict3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using attrdict3, load one of these modules using a module load command like:

                  module load attrdict3/2.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attrdict3/2.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/augur/", "title": "augur", "text": ""}, {"location": "available_software/detail/augur/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which augur installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using augur, load one of these modules using a module load command like:

                  module load augur/7.0.2-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty augur/7.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/autopep8/", "title": "autopep8", "text": ""}, {"location": "available_software/detail/autopep8/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which autopep8 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using autopep8, load one of these modules using a module load command like:

                  module load autopep8/2.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty autopep8/2.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/awscli/", "title": "awscli", "text": ""}, {"location": "available_software/detail/awscli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which awscli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using awscli, load one of these modules using a module load command like:

                  module load awscli/2.11.21-GCCcore-11.3.0\n
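
                  After loading the module, the aws command line client is available; a minimal sketch to verify the installation and inspect which credentials and region it picks up (credentials themselves are not provided by the module):

                  module load awscli/2.11.21-GCCcore-11.3.0\naws --version\naws configure list\n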

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty awscli/2.11.21-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/babl/", "title": "babl", "text": ""}, {"location": "available_software/detail/babl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which babl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using babl, load one of these modules using a module load command like:

                  module load babl/0.1.86-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty babl/0.1.86-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/bam-readcount/", "title": "bam-readcount", "text": ""}, {"location": "available_software/detail/bam-readcount/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bam-readcount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bam-readcount, load one of these modules using a module load command like:

                  module load bam-readcount/0.8.0-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bam-readcount/0.8.0-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/bamFilters/", "title": "bamFilters", "text": ""}, {"location": "available_software/detail/bamFilters/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bamFilters installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bamFilters, load one of these modules using a module load command like:

                  module load bamFilters/2022-06-30-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bamFilters/2022-06-30-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/barrnap/", "title": "barrnap", "text": ""}, {"location": "available_software/detail/barrnap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which barrnap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using barrnap, load one of these modules using a module load command like:

                  module load barrnap/0.9-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty barrnap/0.9-gompi-2021b x x x - x x barrnap/0.9-gompi-2020b - x x x x x"}, {"location": "available_software/detail/basemap/", "title": "basemap", "text": ""}, {"location": "available_software/detail/basemap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which basemap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using basemap, load one of these modules using a module load command like:

                  module load basemap/1.3.9-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty basemap/1.3.9-foss-2023a x x x x x x basemap/1.2.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/bcbio-gff/", "title": "bcbio-gff", "text": ""}, {"location": "available_software/detail/bcbio-gff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcbio-gff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcbio-gff, load one of these modules using a module load command like:

                  module load bcbio-gff/0.7.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcbio-gff/0.7.0-foss-2022b x x x x x x bcbio-gff/0.7.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/bcgTree/", "title": "bcgTree", "text": ""}, {"location": "available_software/detail/bcgTree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcgTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcgTree, load one of these modules using a module load command like:

                  module load bcgTree/1.2.0-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcgTree/1.2.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/bcl-convert/", "title": "bcl-convert", "text": ""}, {"location": "available_software/detail/bcl-convert/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcl-convert installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcl-convert, load one of these modules using a module load command like:

                  module load bcl-convert/4.0.3-2el7.x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcl-convert/4.0.3-2el7.x86_64 x x x - x x"}, {"location": "available_software/detail/bcl2fastq2/", "title": "bcl2fastq2", "text": ""}, {"location": "available_software/detail/bcl2fastq2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcl2fastq2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcl2fastq2, load one of these modules using a module load command like:

                  module load bcl2fastq2/2.20.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcl2fastq2/2.20.0-GCC-11.2.0 x x x - x x bcl2fastq2/2.20.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/beagle-lib/", "title": "beagle-lib", "text": ""}, {"location": "available_software/detail/beagle-lib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which beagle-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using beagle-lib, load one of these modules using a module load command like:

                  module load beagle-lib/4.0.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty beagle-lib/4.0.0-GCC-11.3.0 x x x x x x beagle-lib/3.1.2-gcccuda-2019b x - - - x - beagle-lib/3.1.2-GCC-11.3.0 x x x - x x beagle-lib/3.1.2-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/binutils/", "title": "binutils", "text": ""}, {"location": "available_software/detail/binutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which binutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using binutils, load one of these modules using a module load command like:

                  module load binutils/2.40-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty binutils/2.40-GCCcore-13.2.0 x x x x x x binutils/2.40-GCCcore-12.3.0 x x x x x x binutils/2.40 x x x x x x binutils/2.39-GCCcore-12.2.0 x x x x x x binutils/2.39 x x x x x x binutils/2.38-GCCcore-11.3.0 x x x x x x binutils/2.38 x x x x x x binutils/2.37-GCCcore-11.2.0 x x x x x x binutils/2.37 x x x x x x binutils/2.36.1-GCCcore-10.3.0 x x x x x x binutils/2.36.1 x x x x x x binutils/2.35-GCCcore-10.2.0 x x x x x x binutils/2.35 x x x x x x binutils/2.34-GCCcore-9.3.0 x x x x x x binutils/2.34 x x x x x x binutils/2.32-GCCcore-8.3.0 x x x x x x binutils/2.32 x x x x x x binutils/2.31.1-GCCcore-8.2.0 - x - - - - binutils/2.31.1 - x - - - x binutils/2.30 - - - - - x binutils/2.28 x x x x x x"}, {"location": "available_software/detail/biobakery-workflows/", "title": "biobakery-workflows", "text": ""}, {"location": "available_software/detail/biobakery-workflows/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biobakery-workflows installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biobakery-workflows, load one of these modules using a module load command like:

                  module load biobakery-workflows/3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biobakery-workflows/3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/biobambam2/", "title": "biobambam2", "text": ""}, {"location": "available_software/detail/biobambam2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biobambam2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biobambam2, load one of these modules using a module load command like:

                  module load biobambam2/2.0.185-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biobambam2/2.0.185-GCC-12.3.0 x x x x x x biobambam2/2.0.87-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/biogeme/", "title": "biogeme", "text": ""}, {"location": "available_software/detail/biogeme/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biogeme installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biogeme, load one of these modules using a module load command like:

                  module load biogeme/3.2.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biogeme/3.2.10-foss-2022a x x x - x x biogeme/3.2.6-foss-2022a x x x - x x"}, {"location": "available_software/detail/biom-format/", "title": "biom-format", "text": ""}, {"location": "available_software/detail/biom-format/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biom-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biom-format, load one of these modules using a module load command like:

                  module load biom-format/2.1.15-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biom-format/2.1.15-foss-2022b x x x x x x biom-format/2.1.14-foss-2022a x x x x x x biom-format/2.1.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/bmtagger/", "title": "bmtagger", "text": ""}, {"location": "available_software/detail/bmtagger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bmtagger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bmtagger, load one of these modules using a module load command like:

                  module load bmtagger/3.101-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bmtagger/3.101-gompi-2020b - x x x x x"}, {"location": "available_software/detail/bokeh/", "title": "bokeh", "text": ""}, {"location": "available_software/detail/bokeh/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bokeh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bokeh, load one of these modules using a module load command like:

                  module load bokeh/3.2.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bokeh/3.2.2-foss-2023a x x x x x x bokeh/2.4.3-foss-2022a x x x x x x bokeh/2.4.2-foss-2021b x x x x x x bokeh/2.4.1-foss-2021a x x x - x x bokeh/2.2.3-intel-2020b - x x - x x bokeh/2.2.3-fosscuda-2020b x - - - x - bokeh/2.2.3-foss-2020b - x x x x x bokeh/2.0.2-intel-2020a-Python-3.8.2 - x x - x x bokeh/2.0.2-foss-2020a-Python-3.8.2 - x x - x x bokeh/1.4.0-intel-2019b-Python-3.7.4 - x x - x x bokeh/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/boto3/", "title": "boto3", "text": ""}, {"location": "available_software/detail/boto3/#available-modules", "title": "Available modules", "text": "

The overview below shows which boto3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using boto3, load one of these modules using a module load command like:

                  module load boto3/1.34.10-GCCcore-12.2.0\n
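As a minimal smoke test (illustrative only; it does not contact AWS and needs no credentials), the loaded boto3 module can be checked like this:

module load boto3/1.34.10-GCCcore-12.2.0
# print the boto3 version to confirm the module is on the Python path
python -c "import boto3; print(boto3.__version__)"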

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty boto3/1.34.10-GCCcore-12.2.0 x x x x x x boto3/1.26.163-GCCcore-12.2.0 x x x x x x boto3/1.20.13-GCCcore-11.2.0 x x x - x x boto3/1.20.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/bpp/", "title": "bpp", "text": ""}, {"location": "available_software/detail/bpp/#available-modules", "title": "Available modules", "text": "

The overview below shows which bpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bpp, load one of these modules using a module load command like:

                  module load bpp/4.4.0-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bpp/4.4.0-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/btllib/", "title": "btllib", "text": ""}, {"location": "available_software/detail/btllib/#available-modules", "title": "Available modules", "text": "

The overview below shows which btllib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using btllib, load one of these modules using a module load command like:

                  module load btllib/1.7.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty btllib/1.7.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/build/", "title": "build", "text": ""}, {"location": "available_software/detail/build/#available-modules", "title": "Available modules", "text": "

The overview below shows which build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using build, load one of these modules using a module load command like:

                  module load build/0.10.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty build/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/buildenv/", "title": "buildenv", "text": ""}, {"location": "available_software/detail/buildenv/#available-modules", "title": "Available modules", "text": "

The overview below shows which buildenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using buildenv, load one of these modules using a module load command like:

                  module load buildenv/default-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty buildenv/default-intel-2019b - x x - x x buildenv/default-foss-2019b - x x - x x"}, {"location": "available_software/detail/buildingspy/", "title": "buildingspy", "text": ""}, {"location": "available_software/detail/buildingspy/#available-modules", "title": "Available modules", "text": "

The overview below shows which buildingspy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using buildingspy, load one of these modules using a module load command like:

                  module load buildingspy/4.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty buildingspy/4.0.0-foss-2022a x x x - x x"}, {"location": "available_software/detail/bwa-meth/", "title": "bwa-meth", "text": ""}, {"location": "available_software/detail/bwa-meth/#available-modules", "title": "Available modules", "text": "

The overview below shows which bwa-meth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bwa-meth, load one of these modules using a module load command like:

                  module load bwa-meth/0.2.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bwa-meth/0.2.6-GCC-11.3.0 x x x x x x bwa-meth/0.2.2-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/bwidget/", "title": "bwidget", "text": ""}, {"location": "available_software/detail/bwidget/#available-modules", "title": "Available modules", "text": "

The overview below shows which bwidget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bwidget, load one of these modules using a module load command like:

                  module load bwidget/1.9.15-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bwidget/1.9.15-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/bx-python/", "title": "bx-python", "text": ""}, {"location": "available_software/detail/bx-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which bx-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bx-python, load one of these modules using a module load command like:

                  module load bx-python/0.10.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bx-python/0.10.0-foss-2023a x x x x x x bx-python/0.9.0-foss-2022a x x x x x x bx-python/0.8.13-foss-2021b x x x - x x bx-python/0.8.9-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/bzip2/", "title": "bzip2", "text": ""}, {"location": "available_software/detail/bzip2/#available-modules", "title": "Available modules", "text": "

The overview below shows which bzip2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bzip2, load one of these modules using a module load command like:

                  module load bzip2/1.0.8-GCCcore-13.2.0\n
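A short illustrative use of the loaded bzip2 module (the file data.txt is a placeholder for your own file):

module load bzip2/1.0.8-GCCcore-13.2.0
bzip2 -k data.txt          # compress to data.txt.bz2, keeping the original (placeholder file name)
bzcat data.txt.bz2 | head  # inspect the compressed file without unpacking it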

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bzip2/1.0.8-GCCcore-13.2.0 x x x x x x bzip2/1.0.8-GCCcore-12.3.0 x x x x x x bzip2/1.0.8-GCCcore-12.2.0 x x x x x x bzip2/1.0.8-GCCcore-11.3.0 x x x x x x bzip2/1.0.8-GCCcore-11.2.0 x x x x x x bzip2/1.0.8-GCCcore-10.3.0 x x x x x x bzip2/1.0.8-GCCcore-10.2.0 x x x x x x bzip2/1.0.8-GCCcore-9.3.0 x x x x x x bzip2/1.0.8-GCCcore-8.3.0 x x x x x x bzip2/1.0.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/c-ares/", "title": "c-ares", "text": ""}, {"location": "available_software/detail/c-ares/#available-modules", "title": "Available modules", "text": "

The overview below shows which c-ares installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using c-ares, load one of these modules using a module load command like:

                  module load c-ares/1.18.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty c-ares/1.18.1-GCCcore-11.2.0 x x x x x x c-ares/1.17.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/cURL/", "title": "cURL", "text": ""}, {"location": "available_software/detail/cURL/#available-modules", "title": "Available modules", "text": "

The overview below shows which cURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cURL, load one of these modules using a module load command like:

                  module load cURL/8.3.0-GCCcore-13.2.0\n
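An illustrative example of using the loaded cURL module (the URL and output file name are placeholders):

module load cURL/8.3.0-GCCcore-13.2.0
# download a page, following redirects, into a local file (placeholder URL and file name)
curl -L -o index.html https://www.example.com/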

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cURL/8.3.0-GCCcore-13.2.0 x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x cURL/7.83.0-GCCcore-11.3.0 x x x x x x cURL/7.78.0-GCCcore-11.2.0 x x x x x x cURL/7.76.0-GCCcore-10.3.0 x x x x x x cURL/7.72.0-GCCcore-10.2.0 x x x x x x cURL/7.69.1-GCCcore-9.3.0 x x x x x x cURL/7.66.0-GCCcore-8.3.0 x x x x x x cURL/7.63.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/cairo/", "title": "cairo", "text": ""}, {"location": "available_software/detail/cairo/#available-modules", "title": "Available modules", "text": "

The overview below shows which cairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cairo, load one of these modules using a module load command like:

                  module load cairo/1.17.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cairo/1.17.8-GCCcore-12.3.0 x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x cairo/1.17.4-GCCcore-11.3.0 x x x x x x cairo/1.16.0-GCCcore-11.2.0 x x x x x x cairo/1.16.0-GCCcore-10.3.0 x x x x x x cairo/1.16.0-GCCcore-10.2.0 x x x x x x cairo/1.16.0-GCCcore-9.3.0 x x x x x x cairo/1.16.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/canu/", "title": "canu", "text": ""}, {"location": "available_software/detail/canu/#available-modules", "title": "Available modules", "text": "

The overview below shows which canu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using canu, load one of these modules using a module load command like:

                  module load canu/2.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty canu/2.2-GCCcore-11.2.0 x x x - x x canu/2.2-GCCcore-10.3.0 - x x - x x canu/2.1.1-GCCcore-10.2.0 - x x - x x canu/1.9-GCCcore-8.3.0-Java-11 - - x - x -"}, {"location": "available_software/detail/carputils/", "title": "carputils", "text": ""}, {"location": "available_software/detail/carputils/#available-modules", "title": "Available modules", "text": "

The overview below shows which carputils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using carputils, load one of these modules using a module load command like:

                  module load carputils/20210513-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty carputils/20210513-foss-2020b - x x x x x carputils/20200915-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ccache/", "title": "ccache", "text": ""}, {"location": "available_software/detail/ccache/#available-modules", "title": "Available modules", "text": "

The overview below shows which ccache installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ccache, load one of these modules using a module load command like:

                  module load ccache/4.6.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ccache/4.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/cctbx-base/", "title": "cctbx-base", "text": ""}, {"location": "available_software/detail/cctbx-base/#available-modules", "title": "Available modules", "text": "

The overview below shows which cctbx-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cctbx-base, load one of these modules using a module load command like:

                  module load cctbx-base/2023.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cctbx-base/2023.5-foss-2022a - - x - x - cctbx-base/2020.8-fosscuda-2020b x - - - x - cctbx-base/2020.8-foss-2020b x x x x x x"}, {"location": "available_software/detail/cdbfasta/", "title": "cdbfasta", "text": ""}, {"location": "available_software/detail/cdbfasta/#available-modules", "title": "Available modules", "text": "

The overview below shows which cdbfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdbfasta, load one of these modules using a module load command like:

                  module load cdbfasta/0.99-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdbfasta/0.99-iccifort-2019.5.281 - x x - x - cdbfasta/0.99-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/cdo-bindings/", "title": "cdo-bindings", "text": ""}, {"location": "available_software/detail/cdo-bindings/#available-modules", "title": "Available modules", "text": "

The overview below shows which cdo-bindings installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdo-bindings, load one of these modules using a module load command like:

                  module load cdo-bindings/1.5.7-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdo-bindings/1.5.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/cdsapi/", "title": "cdsapi", "text": ""}, {"location": "available_software/detail/cdsapi/#available-modules", "title": "Available modules", "text": "

The overview below shows which cdsapi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdsapi, load one of these modules using a module load command like:

                  module load cdsapi/0.5.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdsapi/0.5.1-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/cell2location/", "title": "cell2location", "text": ""}, {"location": "available_software/detail/cell2location/#available-modules", "title": "Available modules", "text": "

The overview below shows which cell2location installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cell2location, load one of these modules using a module load command like:

                  module load cell2location/0.05-alpha-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cell2location/0.05-alpha-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/cffi/", "title": "cffi", "text": ""}, {"location": "available_software/detail/cffi/#available-modules", "title": "Available modules", "text": "

The overview below shows which cffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cffi, load one of these modules using a module load command like:

                  module load cffi/1.15.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cffi/1.15.1-GCCcore-13.2.0 x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x cffi/1.15.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/chemprop/", "title": "chemprop", "text": ""}, {"location": "available_software/detail/chemprop/#available-modules", "title": "Available modules", "text": "

The overview below shows which chemprop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using chemprop, load one of these modules using a module load command like:

                  module load chemprop/1.5.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty chemprop/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - chemprop/1.5.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/chewBBACA/", "title": "chewBBACA", "text": ""}, {"location": "available_software/detail/chewBBACA/#available-modules", "title": "Available modules", "text": "

The overview below shows which chewBBACA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using chewBBACA, load one of these modules using a module load command like:

                  module load chewBBACA/2.5.5-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty chewBBACA/2.5.5-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/cicero/", "title": "cicero", "text": ""}, {"location": "available_software/detail/cicero/#available-modules", "title": "Available modules", "text": "

The overview below shows which cicero installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cicero, load one of these modules using a module load command like:

                  module load cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3 x x x x x x cicero/1.3.4.11-foss-2020b-R-4.0.3-Monocle3 - x x x x x"}, {"location": "available_software/detail/cimfomfa/", "title": "cimfomfa", "text": ""}, {"location": "available_software/detail/cimfomfa/#available-modules", "title": "Available modules", "text": "

The overview below shows which cimfomfa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cimfomfa, load one of these modules using a module load command like:

                  module load cimfomfa/22.273-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cimfomfa/22.273-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/code-cli/", "title": "code-cli", "text": ""}, {"location": "available_software/detail/code-cli/#available-modules", "title": "Available modules", "text": "

The overview below shows which code-cli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using code-cli, load one of these modules using a module load command like:

                  module load code-cli/1.85.1-x64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty code-cli/1.85.1-x64 x x x x x x"}, {"location": "available_software/detail/code-server/", "title": "code-server", "text": ""}, {"location": "available_software/detail/code-server/#available-modules", "title": "Available modules", "text": "

The overview below shows which code-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using code-server, load one of these modules using a module load command like:

                  module load code-server/4.9.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty code-server/4.9.1 x x x x x x"}, {"location": "available_software/detail/colossalai/", "title": "colossalai", "text": ""}, {"location": "available_software/detail/colossalai/#available-modules", "title": "Available modules", "text": "

The overview below shows which colossalai installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using colossalai, load one of these modules using a module load command like:

                  module load colossalai/0.1.8-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty colossalai/0.1.8-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/conan/", "title": "conan", "text": ""}, {"location": "available_software/detail/conan/#available-modules", "title": "Available modules", "text": "

The overview below shows which conan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using conan, load one of these modules using a module load command like:

                  module load conan/1.60.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty conan/1.60.2-GCCcore-12.3.0 x x x x x x conan/1.58.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/configurable-http-proxy/", "title": "configurable-http-proxy", "text": ""}, {"location": "available_software/detail/configurable-http-proxy/#available-modules", "title": "Available modules", "text": "

The overview below shows which configurable-http-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using configurable-http-proxy, load one of these modules using a module load command like:

                  module load configurable-http-proxy/4.5.5-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty configurable-http-proxy/4.5.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/cooler/", "title": "cooler", "text": ""}, {"location": "available_software/detail/cooler/#available-modules", "title": "Available modules", "text": "

The overview below shows which cooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cooler, load one of these modules using a module load command like:

                  module load cooler/0.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cooler/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/coverage/", "title": "coverage", "text": ""}, {"location": "available_software/detail/coverage/#available-modules", "title": "Available modules", "text": "

The overview below shows which coverage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using coverage, load one of these modules using a module load command like:

                  module load coverage/7.2.7-GCCcore-11.3.0\n
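An illustrative workflow with the loaded coverage module (my_script.py is a placeholder for your own Python script):

module load coverage/7.2.7-GCCcore-11.3.0
coverage run my_script.py   # run the script under coverage measurement (placeholder script name)
coverage report -m          # print a per-file report including missing line numbers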

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty coverage/7.2.7-GCCcore-11.3.0 x x x x x x coverage/5.5-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/cppy/", "title": "cppy", "text": ""}, {"location": "available_software/detail/cppy/#available-modules", "title": "Available modules", "text": "

The overview below shows which cppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cppy, load one of these modules using a module load command like:

                  module load cppy/1.2.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cppy/1.2.1-GCCcore-12.3.0 x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x cppy/1.2.1-GCCcore-11.3.0 x x x x x x cppy/1.1.0-GCCcore-11.2.0 x x x x x x cppy/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/cpu_features/", "title": "cpu_features", "text": ""}, {"location": "available_software/detail/cpu_features/#available-modules", "title": "Available modules", "text": "

The overview below shows which cpu_features installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cpu_features, load one of these modules using a module load command like:

                  module load cpu_features/0.6.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cpu_features/0.6.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/cryoDRGN/", "title": "cryoDRGN", "text": ""}, {"location": "available_software/detail/cryoDRGN/#available-modules", "title": "Available modules", "text": "

The overview below shows which cryoDRGN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cryoDRGN, load one of these modules using a module load command like:

                  module load cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1 x - - - x - cryoDRGN/0.3.5-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/cryptography/", "title": "cryptography", "text": ""}, {"location": "available_software/detail/cryptography/#available-modules", "title": "Available modules", "text": "

The overview below shows which cryptography installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cryptography, load one of these modules using a module load command like:

                  module load cryptography/41.0.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cryptography/41.0.5-GCCcore-13.2.0 x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/cuDNN/", "title": "cuDNN", "text": ""}, {"location": "available_software/detail/cuDNN/#available-modules", "title": "Available modules", "text": "

The overview below shows which cuDNN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuDNN, load one of these modules using a module load command like:

                  module load cuDNN/8.9.2.26-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuDNN/8.9.2.26-CUDA-12.1.1 x - x - x - cuDNN/8.4.1.50-CUDA-11.7.0 x - x - x - cuDNN/8.2.2.26-CUDA-11.4.1 x - - - x - cuDNN/8.2.1.32-CUDA-11.3.1 x x x - x x cuDNN/8.0.4.30-CUDA-11.1.1 x - - - x x"}, {"location": "available_software/detail/cuTENSOR/", "title": "cuTENSOR", "text": ""}, {"location": "available_software/detail/cuTENSOR/#available-modules", "title": "Available modules", "text": "

The overview below shows which cuTENSOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuTENSOR, load one of these modules using a module load command like:

                  module load cuTENSOR/1.2.2.5-CUDA-11.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuTENSOR/1.2.2.5-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/cutadapt/", "title": "cutadapt", "text": ""}, {"location": "available_software/detail/cutadapt/#available-modules", "title": "Available modules", "text": "

The overview below shows which cutadapt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cutadapt, load one of these modules using a module load command like:

                  module load cutadapt/4.2-GCCcore-11.3.0\n
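An illustrative invocation of the loaded cutadapt module (the adapter sequence and FASTQ file names are placeholders for your own data):

module load cutadapt/4.2-GCCcore-11.3.0
# trim a 3' adapter from single-end reads (placeholder adapter sequence and file names)
cutadapt -a AGATCGGAAGAGC -o trimmed.fastq reads.fastq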

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cutadapt/4.2-GCCcore-11.3.0 x x x x x x cutadapt/3.5-GCCcore-11.2.0 x x x - x x cutadapt/3.4-GCCcore-10.2.0 - x x x x x cutadapt/2.10-GCCcore-9.3.0-Python-3.8.2 - x x - x x cutadapt/2.7-GCCcore-8.3.0-Python-3.7.4 - x x - x x cutadapt/1.18-GCCcore-8.3.0-Python-2.7.16 - x x - x x cutadapt/1.18-GCCcore-8.3.0 - x x - x x cutadapt/1.18-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/cuteSV/", "title": "cuteSV", "text": ""}, {"location": "available_software/detail/cuteSV/#available-modules", "title": "Available modules", "text": "

The overview below shows which cuteSV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuteSV, load one of these modules using a module load command like:

                  module load cuteSV/2.0.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuteSV/2.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/cython-blis/", "title": "cython-blis", "text": ""}, {"location": "available_software/detail/cython-blis/#available-modules", "title": "Available modules", "text": "

The overview below shows which cython-blis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cython-blis, load one of these modules using a module load command like:

                  module load cython-blis/0.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cython-blis/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dask/", "title": "dask", "text": ""}, {"location": "available_software/detail/dask/#available-modules", "title": "Available modules", "text": "

The overview below shows which dask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dask, load one of these modules using a module load command like:

                  module load dask/2023.12.1-foss-2023a\n
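An illustrative smoke test of the loaded dask module (the array size and chunking are arbitrary):

module load dask/2023.12.1-foss-2023a
# build a chunked array lazily and compute its sum (arbitrary size and chunks)
python -c "import dask.array as da; x = da.ones((2000, 2000), chunks=(500, 500)); print(x.sum().compute())"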

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dask/2023.12.1-foss-2023a x x x x x x dask/2022.10.0-foss-2022a x x x x x x dask/2022.1.0-foss-2021b x x x x x x dask/2021.9.1-foss-2021a x x x - x x dask/2021.2.0-intel-2020b - x x - x x dask/2021.2.0-fosscuda-2020b x - - - x - dask/2021.2.0-foss-2020b - x x x x x dask/2.18.1-intel-2020a-Python-3.8.2 - x x - x x dask/2.18.1-foss-2020a-Python-3.8.2 - x x - x x dask/2.8.0-intel-2019b-Python-3.7.4 - x x - x x dask/2.8.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dbus-glib/", "title": "dbus-glib", "text": ""}, {"location": "available_software/detail/dbus-glib/#available-modules", "title": "Available modules", "text": "

The overview below shows which dbus-glib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dbus-glib, load one of these modules using a module load command like:

                  module load dbus-glib/0.112-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dbus-glib/0.112-GCCcore-11.2.0 x x x x x x dbus-glib/0.112-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/dclone/", "title": "dclone", "text": ""}, {"location": "available_software/detail/dclone/#available-modules", "title": "Available modules", "text": "

The overview below shows which dclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dclone, load one of these modules using a module load command like:

                  module load dclone/2.3-0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dclone/2.3-0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/deal.II/", "title": "deal.II", "text": ""}, {"location": "available_software/detail/deal.II/#available-modules", "title": "Available modules", "text": "

The overview below shows which deal.II installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deal.II, load one of these modules using a module load command like:

                  module load deal.II/9.3.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deal.II/9.3.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/decona/", "title": "decona", "text": ""}, {"location": "available_software/detail/decona/#available-modules", "title": "Available modules", "text": "

The overview below shows which decona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using decona, load one of these modules using a module load command like:

                  module load decona/0.1.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty decona/0.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepTools/", "title": "deepTools", "text": ""}, {"location": "available_software/detail/deepTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which deepTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deepTools, load one of these modules using a module load command like:

                  module load deepTools/3.5.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deepTools/3.5.1-foss-2021b x x x - x x deepTools/3.3.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepdiff/", "title": "deepdiff", "text": ""}, {"location": "available_software/detail/deepdiff/#available-modules", "title": "Available modules", "text": "

The overview below shows which deepdiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deepdiff, load one of these modules using a module load command like:

                  module load deepdiff/6.7.1-GCCcore-12.3.0\n
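An illustrative one-liner with the loaded deepdiff module (the two dictionaries are arbitrary example data):

module load deepdiff/6.7.1-GCCcore-12.3.0
# report the difference between two small dictionaries (arbitrary example data)
python -c "from deepdiff import DeepDiff; print(DeepDiff({'a': 1}, {'a': 2}))"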

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deepdiff/6.7.1-GCCcore-12.3.0 x x x x x x deepdiff/6.7.1-GCCcore-12.2.0 x x x x x x deepdiff/5.8.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/detectron2/", "title": "detectron2", "text": ""}, {"location": "available_software/detail/detectron2/#available-modules", "title": "Available modules", "text": "

The overview below shows which detectron2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using detectron2, load one of these modules using a module load command like:

                  module load detectron2/0.6-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty detectron2/0.6-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/devbio-napari/", "title": "devbio-napari", "text": ""}, {"location": "available_software/detail/devbio-napari/#available-modules", "title": "Available modules", "text": "

The overview below shows which devbio-napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using devbio-napari, load one of these modules using a module load command like:

                  module load devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0 x - - - x - devbio-napari/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dicom2nifti/", "title": "dicom2nifti", "text": ""}, {"location": "available_software/detail/dicom2nifti/#available-modules", "title": "Available modules", "text": "

The overview below shows which dicom2nifti installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dicom2nifti, load one of these modules using a module load command like:

                  module load dicom2nifti/2.3.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dicom2nifti/2.3.0-fosscuda-2020b x - - - x - dicom2nifti/2.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/dijitso/", "title": "dijitso", "text": ""}, {"location": "available_software/detail/dijitso/#available-modules", "title": "Available modules", "text": "

The overview below shows which dijitso installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dijitso, load one of these modules using a module load command like:

                  module load dijitso/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dijitso/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dill/", "title": "dill", "text": ""}, {"location": "available_software/detail/dill/#available-modules", "title": "Available modules", "text": "

The overview below shows which dill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dill, load one of these modules using a module load command like:

                  module load dill/0.3.7-GCCcore-12.3.0\n
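An illustrative one-liner with the loaded dill module, showing that it can serialize objects (here a lambda) that the standard pickle module cannot handle:

module load dill/0.3.7-GCCcore-12.3.0
# round-trip a lambda through dill and call the restored function (arbitrary example)
python -c "import dill; f = lambda x: x * x; print(dill.loads(dill.dumps(f))(4))"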

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dill/0.3.7-GCCcore-12.3.0 x x x x x x dill/0.3.7-GCCcore-12.2.0 x x x x x x dill/0.3.6-GCCcore-11.3.0 x x x x x x dill/0.3.4-GCCcore-11.2.0 x x x x x x dill/0.3.4-GCCcore-10.3.0 x x x - x x dill/0.3.3-GCCcore-10.2.0 - x x x x x dill/0.3.3-GCCcore-9.3.0 - x x - - x"}, {"location": "available_software/detail/dlib/", "title": "dlib", "text": ""}, {"location": "available_software/detail/dlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which dlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dlib, load one of these modules using a module load command like:

                  module load dlib/19.22-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dlib/19.22-foss-2021a-CUDA-11.3.1 - - - - x - dlib/19.22-foss-2021a - x x - x x"}, {"location": "available_software/detail/dm-haiku/", "title": "dm-haiku", "text": ""}, {"location": "available_software/detail/dm-haiku/#available-modules", "title": "Available modules", "text": "

The overview below shows which dm-haiku installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dm-haiku, load one of these modules using a module load command like:

                  module load dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/dm-tree/", "title": "dm-tree", "text": ""}, {"location": "available_software/detail/dm-tree/#available-modules", "title": "Available modules", "text": "

The overview below shows which dm-tree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dm-tree, load one of these modules using a module load command like:

                  module load dm-tree/0.1.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dm-tree/0.1.8-GCCcore-11.3.0 x x x x x x dm-tree/0.1.6-GCCcore-10.3.0 x x x x x x dm-tree/0.1.5-GCCcore-10.2.0 x x x x x x dm-tree/0.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/dorado/", "title": "dorado", "text": ""}, {"location": "available_software/detail/dorado/#available-modules", "title": "Available modules", "text": "

The overview below shows which dorado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dorado, load one of these modules using a module load command like:

                  module load dorado/0.5.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dorado/0.5.1-foss-2022a-CUDA-11.7.0 x - x - x - dorado/0.3.1-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.3.0-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.1.1-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/double-conversion/", "title": "double-conversion", "text": ""}, {"location": "available_software/detail/double-conversion/#available-modules", "title": "Available modules", "text": "

The overview below shows which double-conversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using double-conversion, load one of these modules using a module load command like:

                  module load double-conversion/3.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x double-conversion/3.2.0-GCCcore-11.3.0 x x x x x x double-conversion/3.1.5-GCCcore-11.2.0 x x x x x x double-conversion/3.1.5-GCCcore-10.3.0 x x x x x x double-conversion/3.1.5-GCCcore-10.2.0 x x x x x x double-conversion/3.1.5-GCCcore-9.3.0 - x x - x x double-conversion/3.1.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/drmaa-python/", "title": "drmaa-python", "text": ""}, {"location": "available_software/detail/drmaa-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which drmaa-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using drmaa-python, load one of these modules using a module load command like:

                  module load drmaa-python/0.7.9-GCCcore-12.2.0-slurm\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty drmaa-python/0.7.9-GCCcore-12.2.0-slurm x x x x x x"}, {"location": "available_software/detail/dtcwt/", "title": "dtcwt", "text": ""}, {"location": "available_software/detail/dtcwt/#available-modules", "title": "Available modules", "text": "

The overview below shows which dtcwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dtcwt, load one of these modules using a module load command like:

                  module load dtcwt/0.12.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dtcwt/0.12.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/duplex-tools/", "title": "duplex-tools", "text": ""}, {"location": "available_software/detail/duplex-tools/#available-modules", "title": "Available modules", "text": "

The overview below shows which duplex-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using duplex-tools, load one of these modules using a module load command like:

                  module load duplex-tools/0.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty duplex-tools/0.3.3-foss-2022a x x x x x x duplex-tools/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dynesty/", "title": "dynesty", "text": ""}, {"location": "available_software/detail/dynesty/#available-modules", "title": "Available modules", "text": "

The overview below shows which dynesty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dynesty, load one of these modules using a module load command like:

                  module load dynesty/2.1.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dynesty/2.1.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/eSpeak-NG/", "title": "eSpeak-NG", "text": ""}, {"location": "available_software/detail/eSpeak-NG/#available-modules", "title": "Available modules", "text": "

The overview below shows which eSpeak-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using eSpeak-NG, load one of these modules using a module load command like:

                  module load eSpeak-NG/1.50-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty eSpeak-NG/1.50-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ebGSEA/", "title": "ebGSEA", "text": ""}, {"location": "available_software/detail/ebGSEA/#available-modules", "title": "Available modules", "text": "

The overview below shows which ebGSEA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ebGSEA, load one of these modules using a module load command like:

                  module load ebGSEA/0.1.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ebGSEA/0.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ecCodes/", "title": "ecCodes", "text": ""}, {"location": "available_software/detail/ecCodes/#available-modules", "title": "Available modules", "text": "

The overview below shows which ecCodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ecCodes, load one of these modules using a module load command like:

                  module load ecCodes/2.24.2-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ecCodes/2.24.2-gompi-2021b x x x x x x ecCodes/2.22.1-gompi-2021a x x x - x x ecCodes/2.15.0-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/edlib/", "title": "edlib", "text": ""}, {"location": "available_software/detail/edlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which edlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using edlib, load one of these modules using a module load command like:

                  module load edlib/1.3.9-GCC-11.3.0\n
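An illustrative one-liner with the loaded edlib module (the two sequences are arbitrary example strings):

module load edlib/1.3.9-GCC-11.3.0
# compute the edit distance between two short sequences (arbitrary example strings)
python -c "import edlib; print(edlib.align('ACGT', 'ACGGT')['editDistance'])"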

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty edlib/1.3.9-GCC-11.3.0 x x x x x x edlib/1.3.9-GCC-11.2.0 x x x - x x edlib/1.3.9-GCC-10.3.0 x x x - x x edlib/1.3.9-GCC-10.2.0 - x x x x x edlib/1.3.8.post2-iccifort-2020.1.217-Python-3.8.2 - x x - x - edlib/1.3.8.post1-iccifort-2019.5.281-Python-3.7.4 - x x - x - edlib/1.3.8.post1-GCC-9.3.0-Python-3.8.2 - x x - x x edlib/1.3.8.post1-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/eggnog-mapper/", "title": "eggnog-mapper", "text": ""}, {"location": "available_software/detail/eggnog-mapper/#available-modules", "title": "Available modules", "text": "

The overview below shows which eggnog-mapper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using eggnog-mapper, load one of these modules using a module load command like:

                  module load eggnog-mapper/2.1.10-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty eggnog-mapper/2.1.10-foss-2020b x x x x x x eggnog-mapper/2.1.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/einops/", "title": "einops", "text": ""}, {"location": "available_software/detail/einops/#available-modules", "title": "Available modules", "text": "

The overview below shows which einops installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using einops, load one of these modules using a module load command like:

                  module load einops/0.4.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty einops/0.4.1-GCCcore-11.3.0 x x x x x x einops/0.4.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/elfutils/", "title": "elfutils", "text": ""}, {"location": "available_software/detail/elfutils/#available-modules", "title": "Available modules", "text": "

The overview below shows which elfutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using elfutils, load one of these modules using a module load command like:

                  module load elfutils/0.187-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty elfutils/0.187-GCCcore-11.3.0 x x x x x x elfutils/0.185-GCCcore-11.2.0 x x x x x x elfutils/0.185-GCCcore-10.3.0 x x x x x x elfutils/0.185-GCCcore-8.3.0 x - - - x - elfutils/0.183-GCCcore-10.2.0 - - x x x -"}, {"location": "available_software/detail/elprep/", "title": "elprep", "text": ""}, {"location": "available_software/detail/elprep/#available-modules", "title": "Available modules", "text": "

The overview below shows which elprep installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using elprep, load one of these modules using a module load command like:

                  module load elprep/5.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty elprep/5.1.1 - x x - x -"}, {"location": "available_software/detail/enchant-2/", "title": "enchant-2", "text": ""}, {"location": "available_software/detail/enchant-2/#available-modules", "title": "Available modules", "text": "

The overview below shows which enchant-2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using enchant-2, load one of these modules using a module load command like:

                  module load enchant-2/2.3.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty enchant-2/2.3.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/epiScanpy/", "title": "epiScanpy", "text": ""}, {"location": "available_software/detail/epiScanpy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which epiScanpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using epiScanpy, load one of these modules using a module load command like:

                  module load epiScanpy/0.4.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty epiScanpy/0.4.0-foss-2022a x x x x x x epiScanpy/0.3.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/exiv2/", "title": "exiv2", "text": ""}, {"location": "available_software/detail/exiv2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which exiv2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using exiv2, load one of these modules using a module load command like:

                  module load exiv2/0.27.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty exiv2/0.27.5-GCCcore-11.2.0 x x x x x x exiv2/0.27.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/expat/", "title": "expat", "text": ""}, {"location": "available_software/detail/expat/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which expat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using expat, load one of these modules using a module load command like:

                  module load expat/2.5.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty expat/2.5.0-GCCcore-13.2.0 x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x expat/2.4.8-GCCcore-11.3.0 x x x x x x expat/2.4.1-GCCcore-11.2.0 x x x x x x expat/2.2.9-GCCcore-10.3.0 x x x x x x expat/2.2.9-GCCcore-10.2.0 x x x x x x expat/2.2.9-GCCcore-9.3.0 x x x x x x expat/2.2.7-GCCcore-8.3.0 x x x x x x expat/2.2.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/expecttest/", "title": "expecttest", "text": ""}, {"location": "available_software/detail/expecttest/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which expecttest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using expecttest, load one of these modules using a module load command like:

                  module load expecttest/0.1.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty expecttest/0.1.5-GCCcore-12.3.0 x x x x x x expecttest/0.1.3-GCCcore-12.2.0 x x x x x x expecttest/0.1.3-GCCcore-11.3.0 x x x x x x expecttest/0.1.3-GCCcore-11.2.0 x x x x x x expecttest/0.1.3-GCCcore-10.3.0 x x x x x x expecttest/0.1.3-GCCcore-10.2.0 x - - - - -"}, {"location": "available_software/detail/fasta-reader/", "title": "fasta-reader", "text": ""}, {"location": "available_software/detail/fasta-reader/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fasta-reader installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fasta-reader, load one of these modules using a module load command like:

                  module load fasta-reader/3.0.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fasta-reader/3.0.2-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/fastahack/", "title": "fastahack", "text": ""}, {"location": "available_software/detail/fastahack/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fastahack installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastahack, load one of these modules using a module load command like:

                  module load fastahack/1.0.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastahack/1.0.0-GCCcore-11.3.0 x x x x x x fastahack/1.0.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/fastai/", "title": "fastai", "text": ""}, {"location": "available_software/detail/fastai/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fastai installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastai, load one of these modules using a module load command like:

                  module load fastai/2.7.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastai/2.7.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/fastp/", "title": "fastp", "text": ""}, {"location": "available_software/detail/fastp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fastp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastp, load one of these modules using a module load command like:

                  module load fastp/0.23.2-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastp/0.23.2-GCC-11.2.0 x x x - x x fastp/0.20.1-iccifort-2020.1.217 - x x - x - fastp/0.20.0-iccifort-2019.5.281 - x - - - - fastp/0.20.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/fermi-lite/", "title": "fermi-lite", "text": ""}, {"location": "available_software/detail/fermi-lite/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fermi-lite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fermi-lite, load one of these modules using a module load command like:

                  module load fermi-lite/20190320-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fermi-lite/20190320-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/festival/", "title": "festival", "text": ""}, {"location": "available_software/detail/festival/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which festival installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using festival, load one of these modules using a module load command like:

                  module load festival/2.5.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty festival/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/fetchMG/", "title": "fetchMG", "text": ""}, {"location": "available_software/detail/fetchMG/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fetchMG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fetchMG, load one of these modules using a module load command like:

                  module load fetchMG/1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fetchMG/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ffnvcodec/", "title": "ffnvcodec", "text": ""}, {"location": "available_software/detail/ffnvcodec/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ffnvcodec, load one of these modules using a module load command like:

                  module load ffnvcodec/12.0.16.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ffnvcodec/12.0.16.0 x x x x x x ffnvcodec/11.1.5.2 x x x x x x"}, {"location": "available_software/detail/file/", "title": "file", "text": ""}, {"location": "available_software/detail/file/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which file installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using file, load one of these modules using a module load command like:

                  module load file/5.43-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty file/5.43-GCCcore-11.3.0 x x x x x x file/5.41-GCCcore-11.2.0 x x x x x x file/5.39-GCCcore-10.2.0 - x x x x x file/5.38-GCCcore-9.3.0 - x x - x x file/5.38-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/filevercmp/", "title": "filevercmp", "text": ""}, {"location": "available_software/detail/filevercmp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which filevercmp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using filevercmp, load one of these modules using a module load command like:

                  module load filevercmp/20191210-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty filevercmp/20191210-GCCcore-11.3.0 x x x x x x filevercmp/20191210-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/finder/", "title": "finder", "text": ""}, {"location": "available_software/detail/finder/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which finder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using finder, load one of these modules using a module load command like:

                  module load finder/1.1.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty finder/1.1.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/flair-NLP/", "title": "flair-NLP", "text": ""}, {"location": "available_software/detail/flair-NLP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flair-NLP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flair-NLP, load one of these modules using a module load command like:

                  module load flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1 x - - - x - flair-NLP/0.11.3-foss-2021a x x x - x x"}, {"location": "available_software/detail/flatbuffers-python/", "title": "flatbuffers-python", "text": ""}, {"location": "available_software/detail/flatbuffers-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flatbuffers-python, load one of these modules using a module load command like:

                  module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers-python/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.3.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-10.3.0 x x x x x x flatbuffers-python/1.12-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/flatbuffers/", "title": "flatbuffers", "text": ""}, {"location": "available_software/detail/flatbuffers/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flatbuffers, load one of these modules using a module load command like:

                  module load flatbuffers/23.5.26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers/2.0.7-GCCcore-11.3.0 x x x x x x flatbuffers/2.0.0-GCCcore-11.2.0 x x x x x x flatbuffers/2.0.0-GCCcore-10.3.0 x x x x x x flatbuffers/1.12.0-GCCcore-10.2.0 x x x x x x flatbuffers/1.12.0-GCCcore-9.3.0 - x x - x x flatbuffers/1.12.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flex/", "title": "flex", "text": ""}, {"location": "available_software/detail/flex/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flex installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flex, load one of these modules using a module load command like:

                  module load flex/2.6.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flex/2.6.4-GCCcore-13.2.0 x x x x x x flex/2.6.4-GCCcore-12.3.0 x x x x x x flex/2.6.4-GCCcore-12.2.0 x x x x x x flex/2.6.4-GCCcore-11.3.0 x x x x x x flex/2.6.4-GCCcore-11.2.0 x x x x x x flex/2.6.4-GCCcore-10.3.0 x x x x x x flex/2.6.4-GCCcore-10.2.0 x x x x x x flex/2.6.4-GCCcore-9.3.0 x x x x x x flex/2.6.4-GCCcore-8.3.0 x x x x x x flex/2.6.4-GCCcore-8.2.0 - x - - - - flex/2.6.4 x x x x x x flex/2.6.3 x x x x x x flex/2.5.39-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flit/", "title": "flit", "text": ""}, {"location": "available_software/detail/flit/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flit, load one of these modules using a module load command like:

                  module load flit/3.9.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flit/3.9.0-GCCcore-13.2.0 x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/flowFDA/", "title": "flowFDA", "text": ""}, {"location": "available_software/detail/flowFDA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flowFDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flowFDA, load one of these modules using a module load command like:

                  module load flowFDA/0.99-20220602-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flowFDA/0.99-20220602-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/fmt/", "title": "fmt", "text": ""}, {"location": "available_software/detail/fmt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fmt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fmt, load one of these modules using a module load command like:

                  module load fmt/10.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fmt/10.1.0-GCCcore-12.3.0 x x x x x x fmt/8.1.1-GCCcore-11.2.0 x x x - x x fmt/7.1.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/fontconfig/", "title": "fontconfig", "text": ""}, {"location": "available_software/detail/fontconfig/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fontconfig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fontconfig, load one of these modules using a module load command like:

                  module load fontconfig/2.14.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x fontconfig/2.14.0-GCCcore-11.3.0 x x x x x x fontconfig/2.13.94-GCCcore-11.2.0 x x x x x x fontconfig/2.13.93-GCCcore-10.3.0 x x x x x x fontconfig/2.13.92-GCCcore-10.2.0 x x x x x x fontconfig/2.13.92-GCCcore-9.3.0 x x x x x x fontconfig/2.13.1-GCCcore-8.3.0 x x x - x x fontconfig/2.13.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/foss/", "title": "foss", "text": ""}, {"location": "available_software/detail/foss/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which foss installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using foss, load one of these modules using a module load command like:

                  module load foss/2023b\n
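
                   For illustration (a minimal sketch, not part of the generated overview above): foss is an EasyBuild toolchain that bundles GCC with Open MPI and open-source maths libraries (BLAS/LAPACK, FFTW, ScaLAPACK), so loading it also makes the compiler and MPI wrappers available in your shell:

                   module load foss/2023b\ngcc --version     # GCC compiler provided by the toolchain\nmpicc --version   # Open MPI compiler wrapper provided by the toolchain\n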

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty foss/2023b x x x x x x foss/2023a x x x x x x foss/2022b x x x x x x foss/2022a x x x x x x foss/2021b x x x x x x foss/2021a x x x x x x foss/2020b x x x x x x foss/2020a - x x - x x foss/2019b x x x - x x"}, {"location": "available_software/detail/fosscuda/", "title": "fosscuda", "text": ""}, {"location": "available_software/detail/fosscuda/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fosscuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fosscuda, load one of these modules using a module load command like:

                  module load fosscuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fosscuda/2020b x - - - x -"}, {"location": "available_software/detail/freebayes/", "title": "freebayes", "text": ""}, {"location": "available_software/detail/freebayes/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freebayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freebayes, load one of these modules using a module load command like:

                  module load freebayes/1.3.5-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freebayes/1.3.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/freeglut/", "title": "freeglut", "text": ""}, {"location": "available_software/detail/freeglut/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freeglut installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freeglut, load one of these modules using a module load command like:

                  module load freeglut/3.2.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freeglut/3.2.2-GCCcore-11.3.0 x x x x x x freeglut/3.2.1-GCCcore-11.2.0 x x x x x x freeglut/3.2.1-GCCcore-10.3.0 - x x - x x freeglut/3.2.1-GCCcore-10.2.0 - x x x x x freeglut/3.2.1-GCCcore-9.3.0 - x x - x x freeglut/3.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/freetype-py/", "title": "freetype-py", "text": ""}, {"location": "available_software/detail/freetype-py/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freetype-py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freetype-py, load one of these modules using a module load command like:

                  module load freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/freetype/", "title": "freetype", "text": ""}, {"location": "available_software/detail/freetype/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freetype installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freetype, load one of these modules using a module load command like:

                  module load freetype/2.13.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freetype/2.13.2-GCCcore-13.2.0 x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x freetype/2.12.1-GCCcore-11.3.0 x x x x x x freetype/2.11.0-GCCcore-11.2.0 x x x x x x freetype/2.10.4-GCCcore-10.3.0 x x x x x x freetype/2.10.3-GCCcore-10.2.0 x x x x x x freetype/2.10.1-GCCcore-9.3.0 x x x x x x freetype/2.10.1-GCCcore-8.3.0 x x x - x x freetype/2.9.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/fsom/", "title": "fsom", "text": ""}, {"location": "available_software/detail/fsom/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fsom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fsom, load one of these modules using a module load command like:

                  module load fsom/20151117-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fsom/20151117-GCCcore-11.3.0 x x x x x x fsom/20141119-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/funannotate/", "title": "funannotate", "text": ""}, {"location": "available_software/detail/funannotate/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which funannotate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using funannotate, load one of these modules using a module load command like:

                  module load funannotate/1.8.13-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty funannotate/1.8.13-foss-2021b x x x x x x"}, {"location": "available_software/detail/g2clib/", "title": "g2clib", "text": ""}, {"location": "available_software/detail/g2clib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which g2clib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2clib, load one of these modules using a module load command like:

                  module load g2clib/1.6.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2clib/1.6.0-GCCcore-9.3.0 - x x - x x g2clib/1.6.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2lib/", "title": "g2lib", "text": ""}, {"location": "available_software/detail/g2lib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which g2lib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2lib, load one of these modules using a module load command like:

                  module load g2lib/3.1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2lib/3.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2log/", "title": "g2log", "text": ""}, {"location": "available_software/detail/g2log/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which g2log installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2log, load one of these modules using a module load command like:

                  module load g2log/1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2log/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/garnett/", "title": "garnett", "text": ""}, {"location": "available_software/detail/garnett/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which garnett installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using garnett, load one of these modules using a module load command like:

                  module load garnett/0.1.20-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty garnett/0.1.20-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/gawk/", "title": "gawk", "text": ""}, {"location": "available_software/detail/gawk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gawk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gawk, load one of these modules using a module load command like:

                  module load gawk/5.1.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gawk/5.1.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/gbasis/", "title": "gbasis", "text": ""}, {"location": "available_software/detail/gbasis/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gbasis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gbasis, load one of these modules using a module load command like:

                  module load gbasis/20210904-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gbasis/20210904-intel-2022a x x x x x x"}, {"location": "available_software/detail/gc/", "title": "gc", "text": ""}, {"location": "available_software/detail/gc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gc, load one of these modules using a module load command like:

                  module load gc/8.2.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gc/8.2.0-GCCcore-11.2.0 x x x x x x gc/8.0.4-GCCcore-10.3.0 - x x - x x gc/7.6.12-GCCcore-9.3.0 - x x - x x gc/7.6.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gcccuda/", "title": "gcccuda", "text": ""}, {"location": "available_software/detail/gcccuda/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gcccuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcccuda, load one of these modules using a module load command like:

                  module load gcccuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcccuda/2020b x x x x x x gcccuda/2019b x - - - x -"}, {"location": "available_software/detail/gcloud/", "title": "gcloud", "text": ""}, {"location": "available_software/detail/gcloud/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gcloud installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcloud, load one of these modules using a module load command like:

                  module load gcloud/382.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcloud/382.0.0 - x x - x x"}, {"location": "available_software/detail/gcsfs/", "title": "gcsfs", "text": ""}, {"location": "available_software/detail/gcsfs/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gcsfs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcsfs, load one of these modules using a module load command like:

                  module load gcsfs/2023.12.2.post1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcsfs/2023.12.2.post1-foss-2023a x x x x x x"}, {"location": "available_software/detail/gdbm/", "title": "gdbm", "text": ""}, {"location": "available_software/detail/gdbm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gdbm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gdbm, load one of these modules using a module load command like:

                  module load gdbm/1.18.1-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gdbm/1.18.1-foss-2020a - x x - x x"}, {"location": "available_software/detail/gdc-client/", "title": "gdc-client", "text": ""}, {"location": "available_software/detail/gdc-client/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gdc-client installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gdc-client, load one of these modules using a module load command like:

                  module load gdc-client/1.6.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gdc-client/1.6.0-GCCcore-10.2.0 x x x x - x"}, {"location": "available_software/detail/gengetopt/", "title": "gengetopt", "text": ""}, {"location": "available_software/detail/gengetopt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gengetopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gengetopt, load one of these modules using a module load command like:

                  module load gengetopt/2.23-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gengetopt/2.23-GCCcore-10.2.0 - x x x x x gengetopt/2.23-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/genomepy/", "title": "genomepy", "text": ""}, {"location": "available_software/detail/genomepy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which genomepy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using genomepy, load one of these modules using a module load command like:

                  module load genomepy/0.15.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty genomepy/0.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/genozip/", "title": "genozip", "text": ""}, {"location": "available_software/detail/genozip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which genozip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using genozip, load one of these modules using a module load command like:

                  module load genozip/13.0.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty genozip/13.0.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/gensim/", "title": "gensim", "text": ""}, {"location": "available_software/detail/gensim/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gensim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gensim, load one of these modules using a module load command like:

                  module load gensim/4.2.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gensim/4.2.0-foss-2021a x x x - x x gensim/3.8.3-intel-2020b - x x - x x gensim/3.8.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/geopandas/", "title": "geopandas", "text": ""}, {"location": "available_software/detail/geopandas/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which geopandas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using geopandas, load one of these modules using a module load command like:

                  module load geopandas/0.12.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty geopandas/0.12.2-foss-2022b x x x x x x geopandas/0.8.1-intel-2019b-Python-3.7.4 - - x - x x geopandas/0.8.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/gettext/", "title": "gettext", "text": ""}, {"location": "available_software/detail/gettext/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gettext installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gettext, load one of these modules using a module load command like:

                  module load gettext/0.22-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gettext/0.22-GCCcore-13.2.0 x x x x x x gettext/0.22 x x x x x x gettext/0.21.1-GCCcore-12.3.0 x x x x x x gettext/0.21.1-GCCcore-12.2.0 x x x x x x gettext/0.21.1 x x x x x x gettext/0.21-GCCcore-11.3.0 x x x x x x gettext/0.21-GCCcore-11.2.0 x x x x x x gettext/0.21-GCCcore-10.3.0 x x x x x x gettext/0.21-GCCcore-10.2.0 x x x x x x gettext/0.21 x x x x x x gettext/0.20.1-GCCcore-9.3.0 x x x x x x gettext/0.20.1-GCCcore-8.3.0 x x x - x x gettext/0.20.1 x x x x x x gettext/0.19.8.1-GCCcore-8.2.0 - x - - - - gettext/0.19.8.1 x x x x x x"}, {"location": "available_software/detail/gexiv2/", "title": "gexiv2", "text": ""}, {"location": "available_software/detail/gexiv2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gexiv2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gexiv2, load one of these modules using a module load command like:

                  module load gexiv2/0.12.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gexiv2/0.12.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/gfbf/", "title": "gfbf", "text": ""}, {"location": "available_software/detail/gfbf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gfbf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gfbf, load one of these modules using a module load command like:

                  module load gfbf/2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gfbf/2023b x x x x x x gfbf/2023a x x x x x x gfbf/2022b x x x x x x"}, {"location": "available_software/detail/gffread/", "title": "gffread", "text": ""}, {"location": "available_software/detail/gffread/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gffread installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gffread, load one of these modules using a module load command like:

                  module load gffread/0.12.7-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gffread/0.12.7-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/gffutils/", "title": "gffutils", "text": ""}, {"location": "available_software/detail/gffutils/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gffutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gffutils, load one of these modules using a module load command like:

                  module load gffutils/0.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gffutils/0.12-foss-2022b x x x x x x"}, {"location": "available_software/detail/gflags/", "title": "gflags", "text": ""}, {"location": "available_software/detail/gflags/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gflags installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gflags, load one of these modules using a module load command like:

                  module load gflags/2.2.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gflags/2.2.2-GCCcore-12.2.0 x x x x x x gflags/2.2.2-GCCcore-11.3.0 x x x x x x gflags/2.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/giflib/", "title": "giflib", "text": ""}, {"location": "available_software/detail/giflib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which giflib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using giflib, load one of these modules using a module load command like:

                  module load giflib/5.2.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty giflib/5.2.1-GCCcore-12.3.0 x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x giflib/5.2.1-GCCcore-11.3.0 x x x x x x giflib/5.2.1-GCCcore-11.2.0 x x x x x x giflib/5.2.1-GCCcore-10.3.0 x x x x x x giflib/5.2.1-GCCcore-10.2.0 x x x x x x giflib/5.2.1-GCCcore-9.3.0 - x x - x x giflib/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/git-lfs/", "title": "git-lfs", "text": ""}, {"location": "available_software/detail/git-lfs/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which git-lfs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using git-lfs, load one of these modules using a module load command like:

                  module load git-lfs/3.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty git-lfs/3.2.0 x x x - x x"}, {"location": "available_software/detail/git/", "title": "git", "text": ""}, {"location": "available_software/detail/git/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which git installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using git, load one of these modules using a module load command like:

                  module load git/2.42.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty git/2.42.0-GCCcore-13.2.0 x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x git/2.36.0-GCCcore-11.3.0-nodocs x x x x x x git/2.33.1-GCCcore-11.2.0-nodocs x x x x x x git/2.32.0-GCCcore-10.3.0-nodocs x x x x x x git/2.28.0-GCCcore-10.2.0-nodocs x x x x x x git/2.23.0-GCCcore-9.3.0-nodocs x x x x x x git/2.23.0-GCCcore-8.3.0-nodocs - x x - x x git/2.23.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glew/", "title": "glew", "text": ""}, {"location": "available_software/detail/glew/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glew installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glew, load one of these modules using a module load command like:

                  module load glew/2.2.0-GCCcore-12.3.0-osmesa\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glew/2.2.0-GCCcore-12.3.0-osmesa x x x x x x glew/2.2.0-GCCcore-12.2.0-egl x x x x x x glew/2.2.0-GCCcore-11.2.0-osmesa x x x x x x glew/2.2.0-GCCcore-11.2.0-egl x x x x x x glew/2.1.0-GCCcore-10.2.0 x x x x x x glew/2.1.0-GCCcore-9.3.0 - x x - x x glew/2.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glib-networking/", "title": "glib-networking", "text": ""}, {"location": "available_software/detail/glib-networking/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glib-networking installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glib-networking, load one of these modules using a module load command like:

                  module load glib-networking/2.72.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glib-networking/2.72.1-GCCcore-11.2.0 x x x x x x glib-networking/2.68.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/glibc/", "title": "glibc", "text": ""}, {"location": "available_software/detail/glibc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glibc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glibc, load one of these modules using a module load command like:

                  module load glibc/2.30-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glibc/2.30-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/glog/", "title": "glog", "text": ""}, {"location": "available_software/detail/glog/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glog installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glog, load one of these modules using a module load command like:

                  module load glog/0.6.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glog/0.6.0-GCCcore-12.2.0 x x x x x x glog/0.6.0-GCCcore-11.3.0 x x x x x x glog/0.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmpy2/", "title": "gmpy2", "text": ""}, {"location": "available_software/detail/gmpy2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gmpy2, load one of these modules using a module load command like:

                  module load gmpy2/2.1.5-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gmpy2/2.1.5-GCC-12.3.0 x x x x x x gmpy2/2.1.5-GCC-12.2.0 x x x x x x gmpy2/2.1.2-intel-compilers-2022.1.0 x x x x x x gmpy2/2.1.2-intel-compilers-2021.4.0 x x x x x x gmpy2/2.1.2-GCC-11.3.0 x x x x x x gmpy2/2.1.2-GCC-11.2.0 x x x - x x gmpy2/2.1.0b5-GCC-10.2.0 - x x x x x gmpy2/2.1.0b5-GCC-9.3.0 - x x - x x gmpy2/2.1.0b4-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmsh/", "title": "gmsh", "text": ""}, {"location": "available_software/detail/gmsh/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gmsh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gmsh, load one of these modules using a module load command like:

                  module load gmsh/4.5.6-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gmsh/4.5.6-intel-2019b-Python-2.7.16 - x x - x x gmsh/4.5.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/gnuplot/", "title": "gnuplot", "text": ""}, {"location": "available_software/detail/gnuplot/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gnuplot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gnuplot, load one of these modules using a module load command like:

                  module load gnuplot/5.4.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x gnuplot/5.4.4-GCCcore-11.3.0 x x x x x x gnuplot/5.4.2-GCCcore-11.2.0 x x x x x x gnuplot/5.4.2-GCCcore-10.3.0 x x x x x x gnuplot/5.4.1-GCCcore-10.2.0 x x x x x x gnuplot/5.2.8-GCCcore-9.3.0 - x x - x x gnuplot/5.2.8-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/goalign/", "title": "goalign", "text": ""}, {"location": "available_software/detail/goalign/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which goalign installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using goalign, load one of these modules using a module load command like:

                  module load goalign/0.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty goalign/0.3.2 - - x - x -"}, {"location": "available_software/detail/gobff/", "title": "gobff", "text": ""}, {"location": "available_software/detail/gobff/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gobff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gobff, load one of these modules using a module load command like:

                  module load gobff/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gobff/2020b - x - - - -"}, {"location": "available_software/detail/gomkl/", "title": "gomkl", "text": ""}, {"location": "available_software/detail/gomkl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gomkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gomkl, load one of these modules using a module load command like:

                  module load gomkl/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gomkl/2023a x x x x x x gomkl/2021a x x x x x x gomkl/2020a - x x x x x"}, {"location": "available_software/detail/gompi/", "title": "gompi", "text": ""}, {"location": "available_software/detail/gompi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gompi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gompi, load one of these modules using a module load command like:

                  module load gompi/2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gompi/2023b x x x x x x gompi/2023a x x x x x x gompi/2022b x x x x x x gompi/2022a x x x x x x gompi/2021b x x x x x x gompi/2021a x x x x x x gompi/2020b x x x x x x gompi/2020a - x x x x x gompi/2019b x x x x x x"}, {"location": "available_software/detail/gompic/", "title": "gompic", "text": ""}, {"location": "available_software/detail/gompic/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gompic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gompic, load one of these modules using a module load command like:

                  module load gompic/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gompic/2020b x x - - x x"}, {"location": "available_software/detail/googletest/", "title": "googletest", "text": ""}, {"location": "available_software/detail/googletest/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which googletest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using googletest, load one of these modules using a module load command like:

                  module load googletest/1.13.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty googletest/1.13.0-GCCcore-12.3.0 x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x googletest/1.11.0-GCCcore-11.3.0 x x x x x x googletest/1.11.0-GCCcore-11.2.0 x x x - x x googletest/1.10.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gotree/", "title": "gotree", "text": ""}, {"location": "available_software/detail/gotree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gotree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gotree, load one of these modules using a module load command like:

                  module load gotree/0.4.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gotree/0.4.0 - - x - x -"}, {"location": "available_software/detail/gperf/", "title": "gperf", "text": ""}, {"location": "available_software/detail/gperf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gperf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gperf, load one of these modules using a module load command like:

                  module load gperf/3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gperf/3.1-GCCcore-12.3.0 x x x x x x gperf/3.1-GCCcore-12.2.0 x x x x x x gperf/3.1-GCCcore-11.3.0 x x x x x x gperf/3.1-GCCcore-11.2.0 x x x x x x gperf/3.1-GCCcore-10.3.0 x x x x x x gperf/3.1-GCCcore-10.2.0 x x x x x x gperf/3.1-GCCcore-9.3.0 x x x x x x gperf/3.1-GCCcore-8.3.0 x x x - x x gperf/3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/gperftools/", "title": "gperftools", "text": ""}, {"location": "available_software/detail/gperftools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gperftools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gperftools, load one of these modules using a module load command like:

                  module load gperftools/2.14-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gperftools/2.14-GCCcore-12.2.0 x x x x x x gperftools/2.10-GCCcore-11.3.0 x x x x x x gperftools/2.9.1-GCCcore-10.3.0 x x x - x x gperftools/2.7.90-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gpustat/", "title": "gpustat", "text": ""}, {"location": "available_software/detail/gpustat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gpustat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gpustat, load one of these modules using a module load command like:

                  module load gpustat/0.6.0-gcccuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gpustat/0.6.0-gcccuda-2020b - - - - x -"}, {"location": "available_software/detail/graphite2/", "title": "graphite2", "text": ""}, {"location": "available_software/detail/graphite2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which graphite2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using graphite2, load one of these modules using a module load command like:

                  module load graphite2/1.3.14-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty graphite2/1.3.14-GCCcore-12.3.0 x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x graphite2/1.3.14-GCCcore-11.3.0 x x x x x x graphite2/1.3.14-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/graphviz-python/", "title": "graphviz-python", "text": ""}, {"location": "available_software/detail/graphviz-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which graphviz-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using graphviz-python, load one of these modules using a module load command like:

                  module load graphviz-python/0.20.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty graphviz-python/0.20.1-GCCcore-12.3.0 x x x x x x graphviz-python/0.20.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/grid/", "title": "grid", "text": ""}, {"location": "available_software/detail/grid/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which grid installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using grid, load one of these modules using a module load command like:

                  module load grid/20220610-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty grid/20220610-intel-2022a x x x x x x"}, {"location": "available_software/detail/groff/", "title": "groff", "text": ""}, {"location": "available_software/detail/groff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which groff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using groff, load one of these modules using a module load command like:

                  module load groff/1.22.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty groff/1.22.4-GCCcore-12.3.0 x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x groff/1.22.4-GCCcore-11.3.0 x x x x x x groff/1.22.4-GCCcore-11.2.0 x x x x x x groff/1.22.4-GCCcore-10.3.0 x x x x x x groff/1.22.4-GCCcore-10.2.0 x x x x x x groff/1.22.4-GCCcore-9.3.0 x x x x x x groff/1.22.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/gzip/", "title": "gzip", "text": ""}, {"location": "available_software/detail/gzip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gzip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gzip, load one of these modules using a module load command like:

                  module load gzip/1.13-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gzip/1.13-GCCcore-13.2.0 x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x gzip/1.12-GCCcore-11.3.0 x x x x x x gzip/1.10-GCCcore-11.2.0 x x x x x x gzip/1.10-GCCcore-10.3.0 x x x x x x gzip/1.10-GCCcore-10.2.0 x x x x x x gzip/1.10-GCCcore-9.3.0 - x x x x x gzip/1.10-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/h5netcdf/", "title": "h5netcdf", "text": ""}, {"location": "available_software/detail/h5netcdf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which h5netcdf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using h5netcdf, load one of these modules using a module load command like:

                  module load h5netcdf/1.2.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty h5netcdf/1.2.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/h5py/", "title": "h5py", "text": ""}, {"location": "available_software/detail/h5py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which h5py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using h5py, load one of these modules using a module load command like:

                  module load h5py/3.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty h5py/3.9.0-foss-2023a x x x x x x h5py/3.8.0-foss-2022b x x x x x x h5py/3.7.0-intel-2022a x x x x x x h5py/3.7.0-foss-2022a x x x x x x h5py/3.6.0-intel-2021b x x x - x x h5py/3.6.0-foss-2021b x x x x x x h5py/3.2.1-gomkl-2021a x x x - x x h5py/3.2.1-foss-2021a x x x x x x h5py/3.1.0-intel-2020b - x x - x x h5py/3.1.0-fosscuda-2020b x - - - x - h5py/3.1.0-foss-2020b x x x x x x h5py/2.10.0-intel-2020a-Python-3.8.2 x x x x x x h5py/2.10.0-intel-2020a-Python-2.7.18 - x x - x x h5py/2.10.0-intel-2019b-Python-3.7.4 - x x - x x h5py/2.10.0-foss-2020a-Python-3.8.2 - x x - x x h5py/2.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/harmony/", "title": "harmony", "text": ""}, {"location": "available_software/detail/harmony/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which harmony installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using harmony, load one of these modules using a module load command like:

                  module load harmony/1.0.0-20200224-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty harmony/1.0.0-20200224-foss-2020a-R-4.0.0 - x x - x x harmony/0.1.0-20210528-foss-2020b-R-4.0.3 - x x - x x"}, {"location": "available_software/detail/hatchling/", "title": "hatchling", "text": ""}, {"location": "available_software/detail/hatchling/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hatchling installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hatchling, load one of these modules using a module load command like:

                  module load hatchling/1.18.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hatchling/1.18.0-GCCcore-13.2.0 x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/help2man/", "title": "help2man", "text": ""}, {"location": "available_software/detail/help2man/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which help2man installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using help2man, load one of these modules using a module load command like:

                  module load help2man/1.49.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty help2man/1.49.3-GCCcore-13.2.0 x x x x x x help2man/1.49.3-GCCcore-12.3.0 x x x x x x help2man/1.49.2-GCCcore-12.2.0 x x x x x x help2man/1.49.2-GCCcore-11.3.0 x x x x x x help2man/1.48.3-GCCcore-11.2.0 x x x x x x help2man/1.48.3-GCCcore-10.3.0 x x x x x x help2man/1.47.16-GCCcore-10.2.0 x x x x x x help2man/1.47.12-GCCcore-9.3.0 x x x x x x help2man/1.47.8-GCCcore-8.3.0 x x x x x x help2man/1.47.7-GCCcore-8.2.0 - x - - - - help2man/1.47.4 - x - - - -"}, {"location": "available_software/detail/hierfstat/", "title": "hierfstat", "text": ""}, {"location": "available_software/detail/hierfstat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hierfstat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hierfstat, load one of these modules using a module load command like:

                  module load hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/hifiasm/", "title": "hifiasm", "text": ""}, {"location": "available_software/detail/hifiasm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hifiasm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hifiasm, load one of these modules using a module load command like:

                  module load hifiasm/0.19.7-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hifiasm/0.19.7-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/hiredis/", "title": "hiredis", "text": ""}, {"location": "available_software/detail/hiredis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hiredis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hiredis, load one of these modules using a module load command like:

                  module load hiredis/1.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hiredis/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/histolab/", "title": "histolab", "text": ""}, {"location": "available_software/detail/histolab/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which histolab installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using histolab, load one of these modules using a module load command like:

                  module load histolab/0.4.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty histolab/0.4.1-foss-2021b x x x - x x histolab/0.4.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/hmmlearn/", "title": "hmmlearn", "text": ""}, {"location": "available_software/detail/hmmlearn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hmmlearn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hmmlearn, load one of these modules using a module load command like:

                  module load hmmlearn/0.3.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hmmlearn/0.3.0-gfbf-2023a x x x x x x hmmlearn/0.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/horton/", "title": "horton", "text": ""}, {"location": "available_software/detail/horton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which horton installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using horton, load one of these modules using a module load command like:

                  module load horton/2.1.1-intel-2020a-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty horton/2.1.1-intel-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/how_are_we_stranded_here/", "title": "how_are_we_stranded_here", "text": ""}, {"location": "available_software/detail/how_are_we_stranded_here/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which how_are_we_stranded_here installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using how_are_we_stranded_here, load one of these modules using a module load command like:

                  module load how_are_we_stranded_here/1.0.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty how_are_we_stranded_here/1.0.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/humann/", "title": "humann", "text": ""}, {"location": "available_software/detail/humann/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which humann installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using humann, load one of these modules using a module load command like:

                  module load humann/3.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty humann/3.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/hunspell/", "title": "hunspell", "text": ""}, {"location": "available_software/detail/hunspell/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hunspell installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hunspell, load one of these modules using a module load command like:

                  module load hunspell/1.7.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hunspell/1.7.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/hwloc/", "title": "hwloc", "text": ""}, {"location": "available_software/detail/hwloc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hwloc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hwloc, load one of these modules using a module load command like:

                  module load hwloc/2.9.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hwloc/2.9.2-GCCcore-13.2.0 x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x hwloc/2.7.1-GCCcore-11.3.0 x x x x x x hwloc/2.5.0-GCCcore-11.2.0 x x x x x x hwloc/2.4.1-GCCcore-10.3.0 x x x x x x hwloc/2.2.0-GCCcore-10.2.0 x x x x x x hwloc/2.2.0-GCCcore-9.3.0 x x x x x x hwloc/1.11.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/hyperopt/", "title": "hyperopt", "text": ""}, {"location": "available_software/detail/hyperopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hyperopt, load one of these modules using a module load command like:

                  module load hyperopt/0.2.5-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hyperopt/0.2.5-fosscuda-2020b - - - - x - hyperopt/0.2.4-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/hypothesis/", "title": "hypothesis", "text": ""}, {"location": "available_software/detail/hypothesis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hypothesis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using hypothesis, load one of these modules using a module load command like:

                  module load hypothesis/6.90.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x hypothesis/6.46.7-GCCcore-11.3.0 x x x x x x hypothesis/6.14.6-GCCcore-11.2.0 x x x x x x hypothesis/6.13.1-GCCcore-10.3.0 x x x x x x hypothesis/5.41.5-GCCcore-10.2.0 x x x x x x hypothesis/5.41.2-GCCcore-10.2.0 x x x x x x hypothesis/4.57.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x hypothesis/4.44.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/iccifort/", "title": "iccifort", "text": ""}, {"location": "available_software/detail/iccifort/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iccifort installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iccifort, load one of these modules using a module load command like:

                  module load iccifort/2020.4.304\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iccifort/2020.4.304 x x x x x x iccifort/2020.1.217 x x x x x x iccifort/2019.5.281 - x x - x x"}, {"location": "available_software/detail/iccifortcuda/", "title": "iccifortcuda", "text": ""}, {"location": "available_software/detail/iccifortcuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iccifortcuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iccifortcuda, load one of these modules using a module load command like:

                  module load iccifortcuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iccifortcuda/2020b - - - - x - iccifortcuda/2020a - - - - x - iccifortcuda/2019b - - - - x -"}, {"location": "available_software/detail/ichorCNA/", "title": "ichorCNA", "text": ""}, {"location": "available_software/detail/ichorCNA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ichorCNA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ichorCNA, load one of these modules using a module load command like:

                  module load ichorCNA/0.3.2-20191219-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ichorCNA/0.3.2-20191219-foss-2020a - x x - x x"}, {"location": "available_software/detail/idemux/", "title": "idemux", "text": ""}, {"location": "available_software/detail/idemux/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which idemux installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using idemux, load one of these modules using a module load command like:

                  module load idemux/0.1.6-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty idemux/0.1.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/igraph/", "title": "igraph", "text": ""}, {"location": "available_software/detail/igraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which igraph installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using igraph, load one of these modules using a module load command like:

                  module load igraph/0.10.10-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty igraph/0.10.10-foss-2023a x x x x x x igraph/0.10.3-foss-2022a x x x x x x igraph/0.9.5-foss-2021b x x x x x x igraph/0.9.4-foss-2021a x x x x x x igraph/0.9.1-fosscuda-2020b - - - - x - igraph/0.9.1-foss-2020b - x x x x x igraph/0.8.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/igvShiny/", "title": "igvShiny", "text": ""}, {"location": "available_software/detail/igvShiny/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which igvShiny installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using igvShiny, load one of these modules using a module load command like:

                  module load igvShiny/20240112-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty igvShiny/20240112-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/iibff/", "title": "iibff", "text": ""}, {"location": "available_software/detail/iibff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iibff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iibff, load one of these modules using a module load command like:

                  module load iibff/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iibff/2020b - x - - - -"}, {"location": "available_software/detail/iimpi/", "title": "iimpi", "text": ""}, {"location": "available_software/detail/iimpi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iimpi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iimpi, load one of these modules using a module load command like:

                  module load iimpi/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iimpi/2023a x x x x x x iimpi/2022b x x x x x x iimpi/2022a x x x x x x iimpi/2021b x x x x x x iimpi/2021a - x x - x x iimpi/2020b x x x x x x iimpi/2020a x x x x x x iimpi/2019b - x x - x x"}, {"location": "available_software/detail/iimpic/", "title": "iimpic", "text": ""}, {"location": "available_software/detail/iimpic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iimpic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iimpic, load one of these modules using a module load command like:

                  module load iimpic/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iimpic/2020b - - - - x - iimpic/2020a - - - - x - iimpic/2019b - - - - x -"}, {"location": "available_software/detail/imagecodecs/", "title": "imagecodecs", "text": ""}, {"location": "available_software/detail/imagecodecs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imagecodecs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using imagecodecs, load one of these modules using a module load command like:

                  module load imagecodecs/2022.9.26-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imagecodecs/2022.9.26-foss-2022a x x x x x x"}, {"location": "available_software/detail/imageio/", "title": "imageio", "text": ""}, {"location": "available_software/detail/imageio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imageio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using imageio, load one of these modules using a module load command like:

                  module load imageio/2.22.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imageio/2.22.2-foss-2022a x x x x x x imageio/2.13.5-foss-2021b x x x x x x imageio/2.10.5-foss-2021a x x x - x x imageio/2.9.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/imbalanced-learn/", "title": "imbalanced-learn", "text": ""}, {"location": "available_software/detail/imbalanced-learn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imbalanced-learn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using imbalanced-learn, load one of these modules using a module load command like:

                  module load imbalanced-learn/0.10.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imbalanced-learn/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/imgaug/", "title": "imgaug", "text": ""}, {"location": "available_software/detail/imgaug/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imgaug installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using imgaug, load one of these modules using a module load command like:

                  module load imgaug/0.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imgaug/0.4.0-foss-2021b x x x - x x imgaug/0.4.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/imkl-FFTW/", "title": "imkl-FFTW", "text": ""}, {"location": "available_software/detail/imkl-FFTW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imkl-FFTW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using imkl-FFTW, load one of these modules using a module load command like:

                  module load imkl-FFTW/2023.1.0-iimpi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imkl-FFTW/2023.1.0-iimpi-2023a x x x x x x imkl-FFTW/2022.2.1-iimpi-2022b x x x x x x imkl-FFTW/2022.1.0-iimpi-2022a x x x x x x imkl-FFTW/2021.4.0-iimpi-2021b x x x x x x"}, {"location": "available_software/detail/imkl/", "title": "imkl", "text": ""}, {"location": "available_software/detail/imkl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using imkl, load one of these modules using a module load command like:

                  module load imkl/2023.1.0-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imkl/2023.1.0-gompi-2023a - - x - x x imkl/2023.1.0 x x x x x x imkl/2022.2.1 x x x x x x imkl/2022.1.0 x x x x x x imkl/2021.4.0 x x x x x x imkl/2021.2.0-iompi-2021a x x x x x x imkl/2021.2.0-iimpi-2021a - x x - x x imkl/2021.2.0-gompi-2021a x - x - x x imkl/2020.4.304-iompi-2020b x - x x x x imkl/2020.4.304-iimpic-2020b - - - - x - imkl/2020.4.304-iimpi-2020b - - x x x x imkl/2020.4.304-NVHPC-21.2 - - x - x - imkl/2020.1.217-iimpic-2020a - - - - x - imkl/2020.1.217-iimpi-2020a x - x - x x imkl/2020.1.217-gompi-2020a - - x - x x imkl/2020.0.166-iompi-2020a - x - - - - imkl/2020.0.166-iimpi-2020b x x - x - - imkl/2020.0.166-iimpi-2020a - x - - - - imkl/2020.0.166-gompi-2023a x x - x - - imkl/2020.0.166-gompi-2020a - x - - - - imkl/2019.5.281-iimpic-2019b - - - - x - imkl/2019.5.281-iimpi-2019b - x x - x x imkl/2018.4.274-iompi-2020b - x - x - - imkl/2018.4.274-iompi-2020a - x - - - - imkl/2018.4.274-iimpi-2020b - x - x - - imkl/2018.4.274-iimpi-2020a x x - x - - imkl/2018.4.274-iimpi-2019b - x - - - - imkl/2018.4.274-gompi-2021a - x - x - - imkl/2018.4.274-gompi-2020a - x - x - - imkl/2018.4.274-NVHPC-21.2 x - - - - -"}, {"location": "available_software/detail/impi/", "title": "impi", "text": ""}, {"location": "available_software/detail/impi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which impi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using impi, load one of these modules using a module load command like:

                  module load impi/2021.9.0-intel-compilers-2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty impi/2021.9.0-intel-compilers-2023.1.0 x x x x x x impi/2021.7.1-intel-compilers-2022.2.1 x x x x x x impi/2021.6.0-intel-compilers-2022.1.0 x x x x x x impi/2021.4.0-intel-compilers-2021.4.0 x x x x x x impi/2021.2.0-intel-compilers-2021.2.0 - x x - x x impi/2019.9.304-iccifortcuda-2020b - - - - x - impi/2019.9.304-iccifort-2020.4.304 x x x x x x impi/2019.9.304-iccifort-2020.1.217 x x x x x x impi/2019.9.304-iccifort-2019.5.281 - x x - x x impi/2019.7.217-iccifortcuda-2020a - - - - x - impi/2019.7.217-iccifort-2020.1.217 - x x - x x impi/2019.7.217-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/imutils/", "title": "imutils", "text": ""}, {"location": "available_software/detail/imutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using imutils, load one of these modules using a module load command like:

                  module load imutils/0.5.4-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imutils/0.5.4-fosscuda-2020b x - - - x - imutils/0.5.4-foss-2022a-CUDA-11.7.0 x - x - x -"}, {"location": "available_software/detail/inferCNV/", "title": "inferCNV", "text": ""}, {"location": "available_software/detail/inferCNV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which inferCNV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using inferCNV, load one of these modules using a module load command like:

                  module load inferCNV/1.12.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty inferCNV/1.12.0-foss-2022a-R-4.2.1 x x x x x x inferCNV/1.12.0-foss-2021b-R-4.2.0 x x x - x x inferCNV/1.3.3-foss-2020b x x x x x x"}, {"location": "available_software/detail/infercnvpy/", "title": "infercnvpy", "text": ""}, {"location": "available_software/detail/infercnvpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which infercnvpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using infercnvpy, load one of these modules using a module load command like:

                  module load infercnvpy/0.4.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty infercnvpy/0.4.2-foss-2022a x x x x x x infercnvpy/0.4.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/inflection/", "title": "inflection", "text": ""}, {"location": "available_software/detail/inflection/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which inflection installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using inflection, load one of these modules using a module load command like:

                  module load inflection/1.3.5-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty inflection/1.3.5-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/intel-compilers/", "title": "intel-compilers", "text": ""}, {"location": "available_software/detail/intel-compilers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intel-compilers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using intel-compilers, load one of these modules using a module load command like:

                  module load intel-compilers/2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intel-compilers/2023.1.0 x x x x x x intel-compilers/2022.2.1 x x x x x x intel-compilers/2022.1.0 x x x x x x intel-compilers/2021.4.0 x x x x x x intel-compilers/2021.2.0 x x x x x x"}, {"location": "available_software/detail/intel/", "title": "intel", "text": ""}, {"location": "available_software/detail/intel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using intel, load one of these modules using a module load command like:

                  module load intel/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intel/2023a x x x x x x intel/2022b x x x x x x intel/2022a x x x x x x intel/2021b x x x x x x intel/2021a - x x - x x intel/2020b - x x x x x intel/2020a x x x x x x intel/2019b - x x - x x"}, {"location": "available_software/detail/intelcuda/", "title": "intelcuda", "text": ""}, {"location": "available_software/detail/intelcuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intelcuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using intelcuda, load one of these modules using a module load command like:

                  module load intelcuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intelcuda/2020b - - - - x - intelcuda/2020a - - - - x - intelcuda/2019b - - - - x -"}, {"location": "available_software/detail/intervaltree-python/", "title": "intervaltree-python", "text": ""}, {"location": "available_software/detail/intervaltree-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intervaltree-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using intervaltree-python, load one of these modules using a module load command like:

                  module load intervaltree-python/3.1.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intervaltree-python/3.1.0-GCCcore-11.3.0 x x x x x x intervaltree-python/3.1.0-GCCcore-11.2.0 x x x - x x intervaltree-python/3.1.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/intervaltree/", "title": "intervaltree", "text": ""}, {"location": "available_software/detail/intervaltree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intervaltree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using intervaltree, load one of these modules using a module load command like:

                  module load intervaltree/0.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intervaltree/0.1-GCCcore-11.3.0 x x x x x x intervaltree/0.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/intltool/", "title": "intltool", "text": ""}, {"location": "available_software/detail/intltool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intltool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using intltool, load one of these modules using a module load command like:

                  module load intltool/0.51.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intltool/0.51.0-GCCcore-12.3.0 x x x x x x intltool/0.51.0-GCCcore-12.2.0 x x x x x x intltool/0.51.0-GCCcore-11.3.0 x x x x x x intltool/0.51.0-GCCcore-11.2.0 x x x x x x intltool/0.51.0-GCCcore-10.3.0 x x x x x x intltool/0.51.0-GCCcore-10.2.0 x x x x x x intltool/0.51.0-GCCcore-9.3.0 x x x x x x intltool/0.51.0-GCCcore-8.3.0 x x x - x x intltool/0.51.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/iodata/", "title": "iodata", "text": ""}, {"location": "available_software/detail/iodata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iodata installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iodata, load one of these modules using a module load command like:

                  module load iodata/1.0.0a2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iodata/1.0.0a2-intel-2022a x x x x x x"}, {"location": "available_software/detail/iomkl/", "title": "iomkl", "text": ""}, {"location": "available_software/detail/iomkl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iomkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iomkl, load one of these modules using a module load command like:

                  module load iomkl/2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iomkl/2021a x x x x x x iomkl/2020b x x x x x x iomkl/2020a - x - - - -"}, {"location": "available_software/detail/iompi/", "title": "iompi", "text": ""}, {"location": "available_software/detail/iompi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iompi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using iompi, load one of these modules using a module load command like:

                  module load iompi/2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iompi/2021a x x x x x x iompi/2020b x x x x x x iompi/2020a - x - - - -"}, {"location": "available_software/detail/isoCirc/", "title": "isoCirc", "text": ""}, {"location": "available_software/detail/isoCirc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which isoCirc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using isoCirc, load one of these modules using a module load command like:

                  module load isoCirc/1.0.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty isoCirc/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/jax/", "title": "jax", "text": ""}, {"location": "available_software/detail/jax/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jax installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jax, load one of these modules using a module load command like:

                  module load jax/0.3.25-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jax/0.3.25-foss-2022a-CUDA-11.7.0 x - - - x - jax/0.3.25-foss-2022a x x x x x x jax/0.3.23-foss-2021b-CUDA-11.4.1 x - - - x - jax/0.3.9-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.3.9-foss-2021a x x x x x x jax/0.2.24-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.2.24-foss-2021a - x x - x x jax/0.2.19-fosscuda-2020b x - - - x - jax/0.2.19-foss-2020b x x x x x x"}, {"location": "available_software/detail/jbigkit/", "title": "jbigkit", "text": ""}, {"location": "available_software/detail/jbigkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jbigkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jbigkit, load one of these modules using a module load command like:

                  module load jbigkit/2.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jbigkit/2.1-GCCcore-13.2.0 x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x jbigkit/2.1-GCCcore-11.3.0 x x x x x x jbigkit/2.1-GCCcore-11.2.0 x x x x x x jbigkit/2.1-GCCcore-10.3.0 x x x x x x jbigkit/2.1-GCCcore-10.2.0 x - x x x x jbigkit/2.1-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/jemalloc/", "title": "jemalloc", "text": ""}, {"location": "available_software/detail/jemalloc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jemalloc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jemalloc, load one of these modules using a module load command like:

                  module load jemalloc/5.3.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jemalloc/5.3.0-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.2.0 x x x x x x jemalloc/5.2.1-GCCcore-10.3.0 x x x - x x jemalloc/5.2.1-GCCcore-10.2.0 - x x x x x jemalloc/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/jobcli/", "title": "jobcli", "text": ""}, {"location": "available_software/detail/jobcli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jobcli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jobcli, load one of these modules using a module load command like:

                  module load jobcli/0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jobcli/0.0 - x - - - -"}, {"location": "available_software/detail/joypy/", "title": "joypy", "text": ""}, {"location": "available_software/detail/joypy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which joypy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using joypy, load one of these modules using a module load command like:

                  module load joypy/0.2.4-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty joypy/0.2.4-intel-2020b - x x - x x joypy/0.2.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/json-c/", "title": "json-c", "text": ""}, {"location": "available_software/detail/json-c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which json-c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using json-c, load one of these modules using a module load command like:

                  module load json-c/0.16-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty json-c/0.16-GCCcore-12.3.0 x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x json-c/0.15-GCCcore-10.3.0 - x x - x x json-c/0.15-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/jupyter-contrib-nbextensions/", "title": "jupyter-contrib-nbextensions", "text": ""}, {"location": "available_software/detail/jupyter-contrib-nbextensions/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-contrib-nbextensions installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jupyter-contrib-nbextensions, load one of these modules using a module load command like:

                  module load jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server-proxy/", "title": "jupyter-server-proxy", "text": ""}, {"location": "available_software/detail/jupyter-server-proxy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-server-proxy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jupyter-server-proxy, load one of these modules using a module load command like:

                  module load jupyter-server-proxy/3.2.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-server-proxy/3.2.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server/", "title": "jupyter-server", "text": ""}, {"location": "available_software/detail/jupyter-server/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jupyter-server, load one of these modules using a module load command like:

                  module load jupyter-server/2.7.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x jupyter-server/2.7.0-GCCcore-12.2.0 x x x x x x jupyter-server/1.21.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jxrlib/", "title": "jxrlib", "text": ""}, {"location": "available_software/detail/jxrlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jxrlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using jxrlib, load one of these modules using a module load command like:

                  module load jxrlib/1.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jxrlib/1.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/kallisto/", "title": "kallisto", "text": ""}, {"location": "available_software/detail/kallisto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kallisto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using kallisto, load one of these modules using a module load command like:

                  module load kallisto/0.48.0-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kallisto/0.48.0-gompi-2022a x x x x x x kallisto/0.46.1-intel-2020a - x - - - - kallisto/0.46.1-iimpi-2020b - x x x x x kallisto/0.46.1-iimpi-2020a - x x - x x kallisto/0.46.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/kb-python/", "title": "kb-python", "text": ""}, {"location": "available_software/detail/kb-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which kb-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using kb-python, load one of these modules using a module load command like:

                  module load kb-python/0.27.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kb-python/0.27.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/kim-api/", "title": "kim-api", "text": ""}, {"location": "available_software/detail/kim-api/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which kim-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using kim-api, load one of these modules using a module load command like:

                  module load kim-api/2.3.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kim-api/2.3.0-GCCcore-11.2.0 x x x - x x kim-api/2.2.1-GCCcore-10.3.0 - x x - x x kim-api/2.1.3-intel-2020a - x x - x x kim-api/2.1.3-intel-2019b - x x - x x kim-api/2.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/kineto/", "title": "kineto", "text": ""}, {"location": "available_software/detail/kineto/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which kineto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using kineto, load one of these modules using a module load command like:

                  module load kineto/0.4.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kineto/0.4.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/kma/", "title": "kma", "text": ""}, {"location": "available_software/detail/kma/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which kma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using kma, load one of these modules using a module load command like:

                  module load kma/1.2.22-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kma/1.2.22-intel-2019b - x x - x x"}, {"location": "available_software/detail/kneaddata/", "title": "kneaddata", "text": ""}, {"location": "available_software/detail/kneaddata/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which kneaddata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using kneaddata, load one of these modules using a module load command like:

                  module load kneaddata/0.12.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kneaddata/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/krbalancing/", "title": "krbalancing", "text": ""}, {"location": "available_software/detail/krbalancing/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which krbalancing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using krbalancing, load one of these modules using a module load command like:

                  module load krbalancing/0.5.0b0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty krbalancing/0.5.0b0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/lancet/", "title": "lancet", "text": ""}, {"location": "available_software/detail/lancet/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lancet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lancet, load one of these modules using a module load command like:

                  module load lancet/1.1.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lancet/1.1.0-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/lavaan/", "title": "lavaan", "text": ""}, {"location": "available_software/detail/lavaan/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lavaan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lavaan, load one of these modules using a module load command like:

                  module load lavaan/0.6-9-foss-2021a-R-4.1.0\n
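                   For R packages such as this one, the module provides the package on top of the R version named in the suffix; a minimal sketch of checking it from the command line (assumption: the matching R module is loaded automatically as a dependency, which is the usual setup for these installations):

                   module load lavaan/0.6-9-foss-2021a-R-4.1.0
                   Rscript -e 'library(lavaan); print(packageVersion("lavaan"))'   # fails loudly if the package is not on the library path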

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lavaan/0.6-9-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/leafcutter/", "title": "leafcutter", "text": ""}, {"location": "available_software/detail/leafcutter/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which leafcutter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using leafcutter, load one of these modules using a module load command like:

                  module load leafcutter/0.2.9-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty leafcutter/0.2.9-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/legacy-job-wrappers/", "title": "legacy-job-wrappers", "text": ""}, {"location": "available_software/detail/legacy-job-wrappers/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which legacy-job-wrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using legacy-job-wrappers, load one of these modules using a module load command like:

                  module load legacy-job-wrappers/0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty legacy-job-wrappers/0.0 - x x - x -"}, {"location": "available_software/detail/leidenalg/", "title": "leidenalg", "text": ""}, {"location": "available_software/detail/leidenalg/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which leidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using leidenalg, load one of these modules using a module load command like:

                  module load leidenalg/0.10.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty leidenalg/0.10.2-foss-2023a x x x x x x leidenalg/0.9.1-foss-2022a x x x x x x leidenalg/0.8.8-foss-2021b x x x x x x leidenalg/0.8.7-foss-2021a x x x x x x leidenalg/0.8.3-fosscuda-2020b - - - - x - leidenalg/0.8.3-foss-2020b - x x x x x leidenalg/0.8.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/lftp/", "title": "lftp", "text": ""}, {"location": "available_software/detail/lftp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lftp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lftp, load one of these modules using a module load command like:

                  module load lftp/4.9.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lftp/4.9.2-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/libBigWig/", "title": "libBigWig", "text": ""}, {"location": "available_software/detail/libBigWig/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libBigWig, load one of these modules using a module load command like:

                  module load libBigWig/0.4.4-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libBigWig/0.4.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libFLAME/", "title": "libFLAME", "text": ""}, {"location": "available_software/detail/libFLAME/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libFLAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libFLAME, load one of these modules using a module load command like:

                  module load libFLAME/5.2.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libFLAME/5.2.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/libGLU/", "title": "libGLU", "text": ""}, {"location": "available_software/detail/libGLU/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libGLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libGLU, load one of these modules using a module load command like:

                  module load libGLU/9.0.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libGLU/9.0.3-GCCcore-12.3.0 x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x libGLU/9.0.2-GCCcore-11.3.0 x x x x x x libGLU/9.0.2-GCCcore-11.2.0 x x x x x x libGLU/9.0.1-GCCcore-10.3.0 x x x x x x libGLU/9.0.1-GCCcore-10.2.0 x x x x x x libGLU/9.0.1-GCCcore-9.3.0 - x x - x x libGLU/9.0.1-GCCcore-8.3.0 x x x - x x libGLU/9.0.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libRmath/", "title": "libRmath", "text": ""}, {"location": "available_software/detail/libRmath/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libRmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libRmath, load one of these modules using a module load command like:

                  module load libRmath/4.1.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libRmath/4.1.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libaec/", "title": "libaec", "text": ""}, {"location": "available_software/detail/libaec/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libaec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libaec, load one of these modules using a module load command like:

                  module load libaec/1.0.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libaec/1.0.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libaio/", "title": "libaio", "text": ""}, {"location": "available_software/detail/libaio/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libaio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libaio, load one of these modules using a module load command like:

                  module load libaio/0.3.113-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libaio/0.3.113-GCCcore-12.3.0 x x x x x x libaio/0.3.112-GCCcore-11.3.0 x x x x x x libaio/0.3.112-GCCcore-11.2.0 x x x x x x libaio/0.3.112-GCCcore-10.3.0 x x x - x x libaio/0.3.112-GCCcore-10.2.0 - x x x x x libaio/0.3.111-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libarchive/", "title": "libarchive", "text": ""}, {"location": "available_software/detail/libarchive/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libarchive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libarchive, load one of these modules using a module load command like:

                  module load libarchive/3.7.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libarchive/3.7.2-GCCcore-13.2.0 x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x libarchive/3.6.1-GCCcore-11.3.0 x x x x x x libarchive/3.5.1-GCCcore-11.2.0 x x x x x x libarchive/3.5.1-GCCcore-10.3.0 x x x x x x libarchive/3.5.1-GCCcore-8.3.0 x - - - x - libarchive/3.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libavif/", "title": "libavif", "text": ""}, {"location": "available_software/detail/libavif/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libavif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libavif, load one of these modules using a module load command like:

                  module load libavif/0.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libavif/0.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libcdms/", "title": "libcdms", "text": ""}, {"location": "available_software/detail/libcdms/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libcdms installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libcdms, load one of these modules using a module load command like:

                  module load libcdms/3.1.2-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcdms/3.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/libcerf/", "title": "libcerf", "text": ""}, {"location": "available_software/detail/libcerf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libcerf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libcerf, load one of these modules using a module load command like:

                  module load libcerf/2.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcerf/2.3-GCCcore-12.3.0 x x x x x x libcerf/2.1-GCCcore-11.3.0 x x x x x x libcerf/1.17-GCCcore-11.2.0 x x x x x x libcerf/1.17-GCCcore-10.3.0 x x x x x x libcerf/1.14-GCCcore-10.2.0 x x x x x x libcerf/1.13-GCCcore-9.3.0 - x x - x x libcerf/1.13-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libcint/", "title": "libcint", "text": ""}, {"location": "available_software/detail/libcint/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libcint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libcint, load one of these modules using a module load command like:

                  module load libcint/5.5.0-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcint/5.5.0-gfbf-2022b x x x x x x libcint/5.1.6-foss-2022a - x x x x x libcint/4.4.0-gomkl-2021a x x x - x x libcint/4.4.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/libdap/", "title": "libdap", "text": ""}, {"location": "available_software/detail/libdap/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libdap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libdap, load one of these modules using a module load command like:

                  module load libdap/3.20.7-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdap/3.20.7-GCCcore-10.3.0 - x x - x x libdap/3.20.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libde265/", "title": "libde265", "text": ""}, {"location": "available_software/detail/libde265/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libde265 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libde265, load one of these modules using a module load command like:

                  module load libde265/1.0.11-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libde265/1.0.11-GCC-11.3.0 x x x x x x libde265/1.0.8-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libdeflate/", "title": "libdeflate", "text": ""}, {"location": "available_software/detail/libdeflate/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libdeflate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libdeflate, load one of these modules using a module load command like:

                  module load libdeflate/1.19-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdeflate/1.19-GCCcore-13.2.0 x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x libdeflate/1.10-GCCcore-11.3.0 x x x x x x libdeflate/1.8-GCCcore-11.2.0 x x x x x x libdeflate/1.7-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libdrm/", "title": "libdrm", "text": ""}, {"location": "available_software/detail/libdrm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libdrm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libdrm, load one of these modules using a module load command like:

                  module load libdrm/2.4.115-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdrm/2.4.115-GCCcore-12.3.0 x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x libdrm/2.4.110-GCCcore-11.3.0 x x x x x x libdrm/2.4.107-GCCcore-11.2.0 x x x x x x libdrm/2.4.106-GCCcore-10.3.0 x x x x x x libdrm/2.4.102-GCCcore-10.2.0 x x x x x x libdrm/2.4.100-GCCcore-9.3.0 - x x - x x libdrm/2.4.99-GCCcore-8.3.0 x x x - x x libdrm/2.4.97-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libdrs/", "title": "libdrs", "text": ""}, {"location": "available_software/detail/libdrs/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libdrs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libdrs, load one of these modules using a module load command like:

                  module load libdrs/3.1.2-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdrs/3.1.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/libepoxy/", "title": "libepoxy", "text": ""}, {"location": "available_software/detail/libepoxy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libepoxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libepoxy, load one of these modules using a module load command like:

                  module load libepoxy/1.5.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x libepoxy/1.5.10-GCCcore-11.3.0 x x x x x x libepoxy/1.5.8-GCCcore-11.2.0 x x x x x x libepoxy/1.5.8-GCCcore-10.3.0 x x x - x x libepoxy/1.5.4-GCCcore-10.2.0 x x x x x x libepoxy/1.5.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libev/", "title": "libev", "text": ""}, {"location": "available_software/detail/libev/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libev installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libev, load one of these modules using a module load command like:

                  module load libev/4.33-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libev/4.33-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libevent/", "title": "libevent", "text": ""}, {"location": "available_software/detail/libevent/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libevent installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libevent, load one of these modules using a module load command like:

                  module load libevent/2.1.12-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libevent/2.1.12-GCCcore-13.2.0 x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x libevent/2.1.12-GCCcore-11.3.0 x x x x x x libevent/2.1.12-GCCcore-11.2.0 x x x x x x libevent/2.1.12-GCCcore-10.3.0 x x x x x x libevent/2.1.12-GCCcore-10.2.0 x x x x x x libevent/2.1.12 - x x - x x libevent/2.1.11-GCCcore-9.3.0 x x x x x x libevent/2.1.11-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libfabric/", "title": "libfabric", "text": ""}, {"location": "available_software/detail/libfabric/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libfabric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libfabric, load one of these modules using a module load command like:

                  module load libfabric/1.19.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libfabric/1.19.0-GCCcore-13.2.0 x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x libfabric/1.15.1-GCCcore-11.3.0 x x x x x x libfabric/1.13.2-GCCcore-11.2.0 x x x x x x libfabric/1.12.1-GCCcore-10.3.0 x x x x x x libfabric/1.11.0-GCCcore-10.2.0 x x x x x x libfabric/1.11.0-GCCcore-9.3.0 - x x x x x"}, {"location": "available_software/detail/libffi/", "title": "libffi", "text": ""}, {"location": "available_software/detail/libffi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libffi, load one of these modules using a module load command like:

                  module load libffi/3.4.4-GCCcore-13.2.0\n
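                   Library-only modules such as this one do not ship an executable; they are meant to be loaded next to a compiler from the same toolchain so that the include and library paths set by the module are picked up. A minimal sketch, under the assumptions that a GCC/12.3.0 compiler module is available, that the module exports its installation prefix via the usual EBROOT<NAME> variable, and with prog.c as a placeholder source file (verify what the module really sets with `module show` before relying on it):

                   module load GCC/12.3.0 libffi/3.4.4-GCCcore-12.3.0
                   module show libffi/3.4.4-GCCcore-12.3.0   # inspect exactly which environment variables the module sets
                   gcc prog.c -lffi -o prog                  # often enough, since CPATH/LIBRARY_PATH are typically extended by the module
                   # more explicit variant (assumption: $EBROOTLIBFFI points at the install prefix; the library dir may be lib64 instead of lib)
                   gcc prog.c -I"$EBROOTLIBFFI/include" -L"$EBROOTLIBFFI/lib" -lffi -o prog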

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libffi/3.4.4-GCCcore-13.2.0 x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x libffi/3.4.2-GCCcore-11.3.0 x x x x x x libffi/3.4.2-GCCcore-11.2.0 x x x x x x libffi/3.3-GCCcore-10.3.0 x x x x x x libffi/3.3-GCCcore-10.2.0 x x x x x x libffi/3.3-GCCcore-9.3.0 x x x x x x libffi/3.2.1-GCCcore-8.3.0 x x x x x x libffi/3.2.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgcrypt/", "title": "libgcrypt", "text": ""}, {"location": "available_software/detail/libgcrypt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libgcrypt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libgcrypt, load one of these modules using a module load command like:

                  module load libgcrypt/1.9.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgcrypt/1.9.3-GCCcore-11.2.0 x x x x x x libgcrypt/1.9.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgd/", "title": "libgd", "text": ""}, {"location": "available_software/detail/libgd/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libgd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libgd, load one of these modules using a module load command like:

                  module load libgd/2.3.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgd/2.3.3-GCCcore-12.3.0 x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x libgd/2.3.3-GCCcore-11.3.0 x x x x x x libgd/2.3.3-GCCcore-11.2.0 x x x x x x libgd/2.3.1-GCCcore-10.3.0 x x x x x x libgd/2.3.0-GCCcore-10.2.0 x x x x x x libgd/2.3.0-GCCcore-9.3.0 - x x - x x libgd/2.2.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libgeotiff/", "title": "libgeotiff", "text": ""}, {"location": "available_software/detail/libgeotiff/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libgeotiff, load one of these modules using a module load command like:

                  module load libgeotiff/1.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x libgeotiff/1.7.1-GCCcore-11.3.0 x x x x x x libgeotiff/1.7.0-GCCcore-11.2.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.3.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.2.0 - x x x x x libgeotiff/1.5.1-GCCcore-9.3.0 - x x - x x libgeotiff/1.5.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libgit2/", "title": "libgit2", "text": ""}, {"location": "available_software/detail/libgit2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libgit2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libgit2, load one of these modules using a module load command like:

                  module load libgit2/1.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgit2/1.7.1-GCCcore-12.3.0 x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x libgit2/1.4.3-GCCcore-11.3.0 x x x x x x libgit2/1.1.1-GCCcore-11.2.0 x x x x x x libgit2/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/libglvnd/", "title": "libglvnd", "text": ""}, {"location": "available_software/detail/libglvnd/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libglvnd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libglvnd, load one of these modules using a module load command like:

                  module load libglvnd/1.6.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x libglvnd/1.4.0-GCCcore-11.3.0 x x x x x x libglvnd/1.3.3-GCCcore-11.2.0 x x x x x x libglvnd/1.3.3-GCCcore-10.3.0 x x x x x x libglvnd/1.3.2-GCCcore-10.2.0 x x x x x x libglvnd/1.2.0-GCCcore-9.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgpg-error/", "title": "libgpg-error", "text": ""}, {"location": "available_software/detail/libgpg-error/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libgpg-error installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libgpg-error, load one of these modules using a module load command like:

                  module load libgpg-error/1.42-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgpg-error/1.42-GCCcore-11.2.0 x x x x x x libgpg-error/1.42-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgpuarray/", "title": "libgpuarray", "text": ""}, {"location": "available_software/detail/libgpuarray/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libgpuarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libgpuarray, load one of these modules using a module load command like:

                  module load libgpuarray/0.7.6-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgpuarray/0.7.6-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/libharu/", "title": "libharu", "text": ""}, {"location": "available_software/detail/libharu/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libharu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libharu, load one of these modules using a module load command like:

                  module load libharu/2.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libharu/2.3.0-foss-2021b x x x - x x libharu/2.3.0-GCCcore-10.3.0 - x x - x x libharu/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libheif/", "title": "libheif", "text": ""}, {"location": "available_software/detail/libheif/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libheif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libheif, load one of these modules using a module load command like:

                  module load libheif/1.16.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libheif/1.16.2-GCC-11.3.0 x x x x x x libheif/1.12.0-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libiconv/", "title": "libiconv", "text": ""}, {"location": "available_software/detail/libiconv/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libiconv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libiconv, load one of these modules using a module load command like:

                  module load libiconv/1.17-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libiconv/1.17-GCCcore-13.2.0 x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x libiconv/1.17-GCCcore-11.3.0 x x x x x x libiconv/1.16-GCCcore-11.2.0 x x x x x x libiconv/1.16-GCCcore-10.3.0 x x x x x x libiconv/1.16-GCCcore-10.2.0 x x x x x x libiconv/1.16-GCCcore-9.3.0 x x x x x x libiconv/1.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libidn/", "title": "libidn", "text": ""}, {"location": "available_software/detail/libidn/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libidn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libidn, load one of these modules using a module load command like:

                  module load libidn/1.38-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libidn/1.38-GCCcore-11.2.0 x x x x x x libidn/1.36-GCCcore-10.3.0 - x x - x x libidn/1.35-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/libidn2/", "title": "libidn2", "text": ""}, {"location": "available_software/detail/libidn2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libidn2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libidn2, load one of these modules using a module load command like:

                  module load libidn2/2.3.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libidn2/2.3.2-GCCcore-11.2.0 x x x x x x libidn2/2.3.0-GCCcore-10.3.0 - x x x x x libidn2/2.3.0-GCCcore-10.2.0 x x x x x x libidn2/2.3.0-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/libjpeg-turbo/", "title": "libjpeg-turbo", "text": ""}, {"location": "available_software/detail/libjpeg-turbo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libjpeg-turbo, load one of these modules using a module load command like:

                  module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x libjpeg-turbo/2.1.3-GCCcore-11.3.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-11.2.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-10.3.0 x x x x x x libjpeg-turbo/2.0.5-GCCcore-10.2.0 x x x x x x libjpeg-turbo/2.0.4-GCCcore-9.3.0 - x x - x x libjpeg-turbo/2.0.3-GCCcore-8.3.0 x x x - x x libjpeg-turbo/2.0.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libjxl/", "title": "libjxl", "text": ""}, {"location": "available_software/detail/libjxl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libjxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libjxl, load one of these modules using a module load command like:

                  module load libjxl/0.8.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libjxl/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libleidenalg/", "title": "libleidenalg", "text": ""}, {"location": "available_software/detail/libleidenalg/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libleidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libleidenalg, load one of these modules using a module load command like:

                  module load libleidenalg/0.11.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libleidenalg/0.11.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/libmad/", "title": "libmad", "text": ""}, {"location": "available_software/detail/libmad/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libmad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libmad, load one of these modules using a module load command like:

                  module load libmad/0.15.1b-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmad/0.15.1b-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmatheval/", "title": "libmatheval", "text": ""}, {"location": "available_software/detail/libmatheval/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libmatheval installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libmatheval, load one of these modules using a module load command like:

                  module load libmatheval/1.1.11-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmatheval/1.1.11-GCCcore-9.3.0 - x x - x x libmatheval/1.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libmaus2/", "title": "libmaus2", "text": ""}, {"location": "available_software/detail/libmaus2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libmaus2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libmaus2, load one of these modules using a module load command like:

                  module load libmaus2/2.0.813-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmaus2/2.0.813-GCC-12.3.0 x x x x x x libmaus2/2.0.499-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmypaint/", "title": "libmypaint", "text": ""}, {"location": "available_software/detail/libmypaint/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libmypaint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libmypaint, load one of these modules using a module load command like:

                  module load libmypaint/1.6.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmypaint/1.6.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/libobjcryst/", "title": "libobjcryst", "text": ""}, {"location": "available_software/detail/libobjcryst/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libobjcryst, load one of these modules using a module load command like:

                  module load libobjcryst/2021.1.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libobjcryst/2021.1.2-intel-2020a - - - - - x libobjcryst/2021.1.2-foss-2021b x x x - x x libobjcryst/2017.2.3-intel-2020a - x x - x x"}, {"location": "available_software/detail/libogg/", "title": "libogg", "text": ""}, {"location": "available_software/detail/libogg/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libogg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libogg, load one of these modules using a module load command like:

                  module load libogg/1.3.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libogg/1.3.5-GCCcore-12.3.0 x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x libogg/1.3.5-GCCcore-11.3.0 x x x x x x libogg/1.3.5-GCCcore-11.2.0 x x x x x x libogg/1.3.4-GCCcore-10.3.0 x x x x x x libogg/1.3.4-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libopus/", "title": "libopus", "text": ""}, {"location": "available_software/detail/libopus/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libopus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libopus, load one of these modules using a module load command like:

                  module load libopus/1.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libopus/1.4-GCCcore-12.3.0 x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x libopus/1.3.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libpciaccess/", "title": "libpciaccess", "text": ""}, {"location": "available_software/detail/libpciaccess/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libpciaccess, load one of these modules using a module load command like:

                  module load libpciaccess/0.17-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpciaccess/0.17-GCCcore-13.2.0 x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x libpciaccess/0.16-GCCcore-11.3.0 x x x x x x libpciaccess/0.16-GCCcore-11.2.0 x x x x x x libpciaccess/0.16-GCCcore-10.3.0 x x x x x x libpciaccess/0.16-GCCcore-10.2.0 x x x x x x libpciaccess/0.16-GCCcore-9.3.0 x x x x x x libpciaccess/0.14-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libpng/", "title": "libpng", "text": ""}, {"location": "available_software/detail/libpng/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libpng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libpng, load one of these modules using a module load command like:

                  module load libpng/1.6.40-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpng/1.6.40-GCCcore-13.2.0 x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x libpng/1.6.37-GCCcore-11.3.0 x x x x x x libpng/1.6.37-GCCcore-11.2.0 x x x x x x libpng/1.6.37-GCCcore-10.3.0 x x x x x x libpng/1.6.37-GCCcore-10.2.0 x x x x x x libpng/1.6.37-GCCcore-9.3.0 x x x x x x libpng/1.6.37-GCCcore-8.3.0 x x x - x x libpng/1.6.36-GCCcore-8.2.0 - x - - - - libpng/1.2.58 - x x x x x"}, {"location": "available_software/detail/libpsl/", "title": "libpsl", "text": ""}, {"location": "available_software/detail/libpsl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libpsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libpsl, load one of these modules using a module load command like:

                  module load libpsl/0.21.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpsl/0.21.1-GCCcore-11.2.0 x x x x x x libpsl/0.21.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libreadline/", "title": "libreadline", "text": ""}, {"location": "available_software/detail/libreadline/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libreadline installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libreadline, load one of these modules using a module load command like:

                  module load libreadline/8.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libreadline/8.2-GCCcore-13.2.0 x x x x x x libreadline/8.2-GCCcore-12.3.0 x x x x x x libreadline/8.2-GCCcore-12.2.0 x x x x x x libreadline/8.1.2-GCCcore-11.3.0 x x x x x x libreadline/8.1-GCCcore-11.2.0 x x x x x x libreadline/8.1-GCCcore-10.3.0 x x x x x x libreadline/8.0-GCCcore-10.2.0 x x x x x x libreadline/8.0-GCCcore-9.3.0 x x x x x x libreadline/8.0-GCCcore-8.3.0 x x x x x x libreadline/8.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/librosa/", "title": "librosa", "text": ""}, {"location": "available_software/detail/librosa/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which librosa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using librosa, load one of these modules using a module load command like:

                  module load librosa/0.7.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librosa/0.7.2-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/librsvg/", "title": "librsvg", "text": ""}, {"location": "available_software/detail/librsvg/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which librsvg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using librsvg, load one of these modules using a module load command like:

                  module load librsvg/2.51.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librsvg/2.51.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/librttopo/", "title": "librttopo", "text": ""}, {"location": "available_software/detail/librttopo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which librttopo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using librttopo, load one of these modules using a module load command like:

                  module load librttopo/1.1.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librttopo/1.1.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libsigc%2B%2B/", "title": "libsigc++", "text": ""}, {"location": "available_software/detail/libsigc%2B%2B/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libsigc++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libsigc++, load one of these modules using a module load command like:

                  module load libsigc++/2.10.8-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsigc++/2.10.8-GCCcore-10.3.0 - x x - x x libsigc++/2.10.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsndfile/", "title": "libsndfile", "text": ""}, {"location": "available_software/detail/libsndfile/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libsndfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libsndfile, load one of these modules using a module load command like:

                  module load libsndfile/1.2.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x libsndfile/1.1.0-GCCcore-11.3.0 x x x x x x libsndfile/1.0.31-GCCcore-11.2.0 x x x x x x libsndfile/1.0.31-GCCcore-10.3.0 x x x x x x libsndfile/1.0.28-GCCcore-10.2.0 x x x x x x libsndfile/1.0.28-GCCcore-9.3.0 - x x - x x libsndfile/1.0.28-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsodium/", "title": "libsodium", "text": ""}, {"location": "available_software/detail/libsodium/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libsodium installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libsodium, load one of these modules using a module load command like:

                  module load libsodium/1.0.18-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsodium/1.0.18-GCCcore-12.3.0 x x x x x x libsodium/1.0.18-GCCcore-12.2.0 x x x x x x libsodium/1.0.18-GCCcore-11.3.0 x x x x x x libsodium/1.0.18-GCCcore-11.2.0 x x x x x x libsodium/1.0.18-GCCcore-10.3.0 x x x x x x libsodium/1.0.18-GCCcore-10.2.0 x x x x x x libsodium/1.0.18-GCCcore-9.3.0 x x x x x x libsodium/1.0.18-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libspatialindex/", "title": "libspatialindex", "text": ""}, {"location": "available_software/detail/libspatialindex/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libspatialindex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libspatialindex, load one of these modules using a module load command like:

                  module load libspatialindex/1.9.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libspatialindex/1.9.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libspatialite/", "title": "libspatialite", "text": ""}, {"location": "available_software/detail/libspatialite/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libspatialite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libspatialite, load one of these modules using a module load command like:

                  module load libspatialite/5.0.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libspatialite/5.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libtasn1/", "title": "libtasn1", "text": ""}, {"location": "available_software/detail/libtasn1/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libtasn1 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libtasn1, load one of these modules using a module load command like:

                  module load libtasn1/4.18.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtasn1/4.18.0-GCCcore-11.2.0 x x x x x x libtasn1/4.17.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libtirpc/", "title": "libtirpc", "text": ""}, {"location": "available_software/detail/libtirpc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libtirpc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libtirpc, load one of these modules using a module load command like:

                  module load libtirpc/1.3.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x libtirpc/1.3.2-GCCcore-11.3.0 x x x x x x libtirpc/1.3.2-GCCcore-11.2.0 x x x x x x libtirpc/1.3.2-GCCcore-10.3.0 x x x x x x libtirpc/1.3.1-GCCcore-10.2.0 - x x x x x libtirpc/1.2.6-GCCcore-9.3.0 - - x - x x libtirpc/1.2.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libtool/", "title": "libtool", "text": ""}, {"location": "available_software/detail/libtool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libtool, load one of these modules using a module load command like:

                  module load libtool/2.4.7-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtool/2.4.7-GCCcore-13.2.0 x x x x x x libtool/2.4.7-GCCcore-12.3.0 x x x x x x libtool/2.4.7-GCCcore-12.2.0 x x x x x x libtool/2.4.7-GCCcore-11.3.0 x x x x x x libtool/2.4.7 x x x x x x libtool/2.4.6-GCCcore-11.2.0 x x x x x x libtool/2.4.6-GCCcore-10.3.0 x x x x x x libtool/2.4.6-GCCcore-10.2.0 x x x x x x libtool/2.4.6-GCCcore-9.3.0 x x x x x x libtool/2.4.6-GCCcore-8.3.0 x x x x x x libtool/2.4.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libunistring/", "title": "libunistring", "text": ""}, {"location": "available_software/detail/libunistring/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libunistring installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libunistring, load one of these modules using a module load command like:

                  module load libunistring/1.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libunistring/1.0-GCCcore-11.2.0 x x x x x x libunistring/0.9.10-GCCcore-10.3.0 x x x - x x libunistring/0.9.10-GCCcore-9.3.0 - x x - x x libunistring/0.9.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libunwind/", "title": "libunwind", "text": ""}, {"location": "available_software/detail/libunwind/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libunwind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libunwind, load one of these modules using a module load command like:

                  module load libunwind/1.6.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libunwind/1.6.2-GCCcore-12.3.0 x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x libunwind/1.6.2-GCCcore-11.3.0 x x x x x x libunwind/1.5.0-GCCcore-11.2.0 x x x x x x libunwind/1.4.0-GCCcore-10.3.0 x x x x x x libunwind/1.4.0-GCCcore-10.2.0 x x x x x x libunwind/1.3.1-GCCcore-9.3.0 - x x - x x libunwind/1.3.1-GCCcore-8.3.0 x x x - x x libunwind/1.3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libvdwxc/", "title": "libvdwxc", "text": ""}, {"location": "available_software/detail/libvdwxc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libvdwxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libvdwxc, load one of these modules using a module load command like:

                  module load libvdwxc/0.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvdwxc/0.4.0-foss-2021b x x x - x x libvdwxc/0.4.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/libvorbis/", "title": "libvorbis", "text": ""}, {"location": "available_software/detail/libvorbis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libvorbis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libvorbis, load one of these modules using a module load command like:

                  module load libvorbis/1.3.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x libvorbis/1.3.7-GCCcore-11.3.0 x x x x x x libvorbis/1.3.7-GCCcore-11.2.0 x x x x x x libvorbis/1.3.7-GCCcore-10.3.0 x x x x x x libvorbis/1.3.7-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libvori/", "title": "libvori", "text": ""}, {"location": "available_software/detail/libvori/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libvori installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libvori, load one of these modules using a module load command like:

                  module load libvori/220621-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvori/220621-GCCcore-12.3.0 x x x x x x libvori/220621-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/libwebp/", "title": "libwebp", "text": ""}, {"location": "available_software/detail/libwebp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libwebp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libwebp, load one of these modules using a module load command like:

                  module load libwebp/1.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libwebp/1.3.1-GCCcore-12.3.0 x x x x x x libwebp/1.3.1-GCCcore-12.2.0 x x x x x x libwebp/1.2.4-GCCcore-11.3.0 x x x x x x libwebp/1.2.0-GCCcore-11.2.0 x x x x x x libwebp/1.2.0-GCCcore-10.3.0 x x x - x x libwebp/1.1.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libwpe/", "title": "libwpe", "text": ""}, {"location": "available_software/detail/libwpe/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libwpe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libwpe, load one of these modules using a module load command like:

                  module load libwpe/1.13.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libwpe/1.13.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libxc/", "title": "libxc", "text": ""}, {"location": "available_software/detail/libxc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libxc, load one of these modules using a module load command like:

                  module load libxc/6.2.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxc/6.2.2-GCC-12.3.0 x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x libxc/5.2.3-intel-compilers-2022.1.0 x x x x x x libxc/5.2.3-GCC-11.3.0 x x x x x x libxc/5.1.6-intel-compilers-2021.4.0 x x x x x x libxc/5.1.6-GCC-11.2.0 x x x - x x libxc/5.1.5-intel-compilers-2021.2.0 - x x - x x libxc/5.1.5-GCC-10.3.0 x x x x x x libxc/5.1.2-GCC-10.2.0 - x x x x x libxc/4.3.4-iccifort-2020.4.304 - x x x x x libxc/4.3.4-iccifort-2020.1.217 - x x - x x libxc/4.3.4-iccifort-2019.5.281 - x x - x x libxc/4.3.4-GCC-10.2.0 - x x x x x libxc/4.3.4-GCC-9.3.0 - x x - x x libxc/4.3.4-GCC-8.3.0 - x x - x x libxc/3.0.1-iomkl-2020a - x - - - - libxc/3.0.1-intel-2020a - x x - x x libxc/3.0.1-intel-2019b - x - - - - libxc/3.0.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/libxml%2B%2B/", "title": "libxml++", "text": ""}, {"location": "available_software/detail/libxml%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxml++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libxml++, load one of these modules using a module load command like:

                  module load libxml++/2.42.1-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxml++/2.42.1-GCC-10.3.0 - x x - x x libxml++/2.40.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxml2/", "title": "libxml2", "text": ""}, {"location": "available_software/detail/libxml2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxml2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libxml2, load one of these modules using a module load command like:

                  module load libxml2/2.11.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxml2/2.11.5-GCCcore-13.2.0 x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x libxml2/2.9.13-GCCcore-11.3.0 x x x x x x libxml2/2.9.10-GCCcore-11.2.0 x x x x x x libxml2/2.9.10-GCCcore-10.3.0 x x x x x x libxml2/2.9.10-GCCcore-10.2.0 x x x x x x libxml2/2.9.10-GCCcore-9.3.0 x x x x x x libxml2/2.9.9-GCCcore-8.3.0 x x x x x x libxml2/2.9.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libxslt/", "title": "libxslt", "text": ""}, {"location": "available_software/detail/libxslt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxslt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libxslt, load one of these modules using a module load command like:

                  module load libxslt/1.1.38-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxslt/1.1.38-GCCcore-13.2.0 x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x libxslt/1.1.34-GCCcore-11.3.0 x x x x x x libxslt/1.1.34-GCCcore-11.2.0 x x x x x x libxslt/1.1.34-GCCcore-10.3.0 x x x x x x libxslt/1.1.34-GCCcore-10.2.0 x x x x x x libxslt/1.1.34-GCCcore-9.3.0 - x x - x x libxslt/1.1.34-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxsmm/", "title": "libxsmm", "text": ""}, {"location": "available_software/detail/libxsmm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxsmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libxsmm, load one of these modules using a module load command like:

                  module load libxsmm/1.17-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxsmm/1.17-GCC-12.3.0 x x x x x x libxsmm/1.17-GCC-12.2.0 x x x x x x libxsmm/1.17-GCC-11.3.0 x x x x x x libxsmm/1.16.2-GCC-10.3.0 - x x x x x libxsmm/1.16.1-iccifort-2020.4.304 - x x - x - libxsmm/1.16.1-iccifort-2020.1.217 - x x - x x libxsmm/1.16.1-iccifort-2019.5.281 - x - - - - libxsmm/1.16.1-GCC-10.2.0 - x x x x x libxsmm/1.16.1-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/libyaml/", "title": "libyaml", "text": ""}, {"location": "available_software/detail/libyaml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libyaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libyaml, load one of these modules using a module load command like:

                  module load libyaml/0.2.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libyaml/0.2.5-GCCcore-12.3.0 x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x libyaml/0.2.5-GCCcore-11.3.0 x x x x x x libyaml/0.2.5-GCCcore-11.2.0 x x x x x x libyaml/0.2.5-GCCcore-10.3.0 x x x x x x libyaml/0.2.5-GCCcore-10.2.0 x x x x x x libyaml/0.2.2-GCCcore-9.3.0 x x x x x x libyaml/0.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libzip/", "title": "libzip", "text": ""}, {"location": "available_software/detail/libzip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using libzip, load one of these modules using a module load command like:

                  module load libzip/1.7.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libzip/1.7.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/lifelines/", "title": "lifelines", "text": ""}, {"location": "available_software/detail/lifelines/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lifelines installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using lifelines, load one of these modules using a module load command like:

                  module load lifelines/0.27.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lifelines/0.27.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/likwid/", "title": "likwid", "text": ""}, {"location": "available_software/detail/likwid/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which likwid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using likwid, load one of these modules using a module load command like:

                  module load likwid/5.0.1-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty likwid/5.0.1-GCCcore-8.3.0 - - x - x -"}, {"location": "available_software/detail/lmoments3/", "title": "lmoments3", "text": ""}, {"location": "available_software/detail/lmoments3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lmoments3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using lmoments3, load one of these modules using a module load command like:

                  module load lmoments3/1.0.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lmoments3/1.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/longread_umi/", "title": "longread_umi", "text": ""}, {"location": "available_software/detail/longread_umi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which longread_umi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using longread_umi, load one of these modules using a module load command like:

                  module load longread_umi/0.3.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty longread_umi/0.3.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/loomR/", "title": "loomR", "text": ""}, {"location": "available_software/detail/loomR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which loomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using loomR, load one of these modules using a module load command like:

                  module load loomR/0.2.0-20180425-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty loomR/0.2.0-20180425-foss-2023a-R-4.3.2 x x x x x x loomR/0.2.0-20180425-foss-2022b-R-4.2.2 x x x x x x loomR/0.2.0-20180425-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/loompy/", "title": "loompy", "text": ""}, {"location": "available_software/detail/loompy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which loompy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using loompy, load one of these modules using a module load command like:

                  module load loompy/3.0.7-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty loompy/3.0.7-intel-2021b x x x - x x loompy/3.0.7-foss-2022a x x x x x x loompy/3.0.7-foss-2021b x x x - x x loompy/3.0.7-foss-2021a x x x x x x loompy/3.0.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/louvain/", "title": "louvain", "text": ""}, {"location": "available_software/detail/louvain/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using louvain, load one of these modules using a module load command like:

                  module load louvain/0.8.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty louvain/0.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/lpsolve/", "title": "lpsolve", "text": ""}, {"location": "available_software/detail/lpsolve/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lpsolve installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using lpsolve, load one of these modules using a module load command like:

                  module load lpsolve/5.5.2.11-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lpsolve/5.5.2.11-GCC-11.2.0 x x x x x x lpsolve/5.5.2.11-GCC-10.2.0 x x x x x x lpsolve/5.5.2.5-iccifort-2019.5.281 - x x - x x lpsolve/5.5.2.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/lxml/", "title": "lxml", "text": ""}, {"location": "available_software/detail/lxml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lxml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using lxml, load one of these modules using a module load command like:

                  module load lxml/4.9.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lxml/4.9.3-GCCcore-13.2.0 x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x lxml/4.9.2-GCCcore-12.2.0 x x x x x x lxml/4.9.1-GCCcore-11.3.0 x x x x x x lxml/4.6.3-GCCcore-11.2.0 x x x x x x lxml/4.6.3-GCCcore-10.3.0 x x x x x x lxml/4.6.2-GCCcore-10.2.0 x x x x x x lxml/4.5.2-GCCcore-9.3.0 - x x - x x lxml/4.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/lz4/", "title": "lz4", "text": ""}, {"location": "available_software/detail/lz4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lz4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using lz4, load one of these modules using a module load command like:

                  module load lz4/1.9.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lz4/1.9.4-GCCcore-13.2.0 x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x lz4/1.9.3-GCCcore-11.3.0 x x x x x x lz4/1.9.3-GCCcore-11.2.0 x x x x x x lz4/1.9.3-GCCcore-10.3.0 x x x x x x lz4/1.9.2-GCCcore-10.2.0 x x x x x x lz4/1.9.2-GCCcore-9.3.0 - x x x x x lz4/1.9.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/maeparser/", "title": "maeparser", "text": ""}, {"location": "available_software/detail/maeparser/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which maeparser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using maeparser, load one of these modules using a module load command like:

                  module load maeparser/1.3.0-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maeparser/1.3.0-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/magma/", "title": "magma", "text": ""}, {"location": "available_software/detail/magma/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which magma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using magma, load one of these modules using a module load command like:

                  module load magma/2.7.2-foss-2023a-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty magma/2.7.2-foss-2023a-CUDA-12.1.1 x - x - x - magma/2.6.2-foss-2022a-CUDA-11.7.0 x - x - x - magma/2.6.1-foss-2021a-CUDA-11.3.1 x - - - x - magma/2.5.4-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/mahotas/", "title": "mahotas", "text": ""}, {"location": "available_software/detail/mahotas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mahotas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mahotas, load one of these modules using a module load command like:

                  module load mahotas/1.4.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mahotas/1.4.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/make/", "title": "make", "text": ""}, {"location": "available_software/detail/make/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which make installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using make, load one of these modules using a module load command like:

                  module load make/4.4.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty make/4.4.1-GCCcore-13.2.0 x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x make/4.3-GCCcore-12.2.0 - x x - x - make/4.3-GCCcore-11.3.0 x x x - x - make/4.3-GCCcore-11.2.0 x x - x - - make/4.3-GCCcore-10.3.0 x x x - x x make/4.3-GCCcore-10.2.0 x x - - - - make/4.3-GCCcore-9.3.0 - x x - x x make/4.2.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/makedepend/", "title": "makedepend", "text": ""}, {"location": "available_software/detail/makedepend/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which makedepend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using makedepend, load one of these modules using a module load command like:

                  module load makedepend/1.0.6-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty makedepend/1.0.6-GCCcore-10.3.0 - x x - x x makedepend/1.0.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/makeinfo/", "title": "makeinfo", "text": ""}, {"location": "available_software/detail/makeinfo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which makeinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using makeinfo, load one of these modules using a module load command like:

                  module load makeinfo/7.0.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty makeinfo/7.0.3-GCCcore-12.3.0 x x x x x x makeinfo/6.7-GCCcore-10.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.3.0 - x x - x x makeinfo/6.7-GCCcore-10.2.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.2.0 - x x x x x makeinfo/6.7-GCCcore-9.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-9.3.0 - x x - x x makeinfo/6.7-GCCcore-8.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/manta/", "title": "manta", "text": ""}, {"location": "available_software/detail/manta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which manta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using manta, load one of these modules using a module load command like:

                  module load manta/1.6.0-gompi-2020a-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty manta/1.6.0-gompi-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/mapDamage/", "title": "mapDamage", "text": ""}, {"location": "available_software/detail/mapDamage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mapDamage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mapDamage, load one of these modules using a module load command like:

                  module load mapDamage/2.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mapDamage/2.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/matplotlib/", "title": "matplotlib", "text": ""}, {"location": "available_software/detail/matplotlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which matplotlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using matplotlib, load one of these modules using a module load command like:

                  module load matplotlib/3.7.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty matplotlib/3.7.2-gfbf-2023a x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x matplotlib/3.5.2-intel-2022a x x x x x x matplotlib/3.5.2-foss-2022a x x x x x x matplotlib/3.5.2-foss-2021b x - x - x - matplotlib/3.4.3-intel-2021b x x x - x x matplotlib/3.4.3-foss-2021b x x x x x x matplotlib/3.4.2-gomkl-2021a x x x x x x matplotlib/3.4.2-foss-2021a x x x x x x matplotlib/3.3.3-intel-2020b - x x - x x matplotlib/3.3.3-fosscuda-2020b x - - - x - matplotlib/3.3.3-foss-2020b x x x x x x matplotlib/3.2.1-intel-2020a-Python-3.8.2 x x x x x x matplotlib/3.2.1-foss-2020a-Python-3.8.2 - x x - x x matplotlib/3.1.1-intel-2019b-Python-3.7.4 - x x - x x matplotlib/3.1.1-foss-2019b-Python-3.7.4 - x x - x x matplotlib/2.2.5-intel-2020a-Python-2.7.18 - x x - x x matplotlib/2.2.5-foss-2020b-Python-2.7.18 - x x x x x matplotlib/2.2.4-intel-2019b-Python-2.7.16 - x x - x x matplotlib/2.2.4-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/maturin/", "title": "maturin", "text": ""}, {"location": "available_software/detail/maturin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which maturin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using maturin, load one of these modules using a module load command like:

                  module load maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x maturin/1.4.0-GCCcore-12.2.0-Rust-1.75.0 x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x maturin/1.1.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/mauveAligner/", "title": "mauveAligner", "text": ""}, {"location": "available_software/detail/mauveAligner/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mauveAligner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mauveAligner, load one of these modules using a module load command like:

                  module load mauveAligner/4736-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mauveAligner/4736-gompi-2020a - x x - x x"}, {"location": "available_software/detail/maze/", "title": "maze", "text": ""}, {"location": "available_software/detail/maze/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which maze installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using maze, load one of these modules using a module load command like:

                  module load maze/20170124-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maze/20170124-foss-2020b - x x x x x"}, {"location": "available_software/detail/mcu/", "title": "mcu", "text": ""}, {"location": "available_software/detail/mcu/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mcu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mcu, load one of these modules using a module load command like:

                  module load mcu/2021-04-06-gomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mcu/2021-04-06-gomkl-2021a x x x - x x"}, {"location": "available_software/detail/medImgProc/", "title": "medImgProc", "text": ""}, {"location": "available_software/detail/medImgProc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which medImgProc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using medImgProc, load one of these modules using a module load command like:

                  module load medImgProc/2.5.7-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty medImgProc/2.5.7-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/medaka/", "title": "medaka", "text": ""}, {"location": "available_software/detail/medaka/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which medaka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using medaka, load one of these modules using a module load command like:

                  module load medaka/1.11.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty medaka/1.11.3-foss-2022a x x x x x x medaka/1.9.1-foss-2022a x x x x x x medaka/1.8.1-foss-2022a x x x x x x medaka/1.6.0-foss-2021b x x x - x x medaka/1.4.3-foss-2020b - x x x x x medaka/1.4.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.2.6-foss-2019b-Python-3.7.4 - x - - - - medaka/1.1.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.1.1-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/meshalyzer/", "title": "meshalyzer", "text": ""}, {"location": "available_software/detail/meshalyzer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which meshalyzer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using meshalyzer, load one of these modules using a module load command like:

                  module load meshalyzer/20200308-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meshalyzer/20200308-foss-2020a-Python-3.8.2 - x x - x x meshalyzer/2.2-foss-2020b - x x x x x meshalyzer/2.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/meshtool/", "title": "meshtool", "text": ""}, {"location": "available_software/detail/meshtool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which meshtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using meshtool, load one of these modules using a module load command like:

                  module load meshtool/16-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meshtool/16-GCC-10.2.0 - x x x x x meshtool/16-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/meson-python/", "title": "meson-python", "text": ""}, {"location": "available_software/detail/meson-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which meson-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using meson-python, load one of these modules using a module load command like:

                  module load meson-python/0.15.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meson-python/0.15.0-GCCcore-13.2.0 x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/metaWRAP/", "title": "metaWRAP", "text": ""}, {"location": "available_software/detail/metaWRAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which metaWRAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using metaWRAP, load one of these modules using a module load command like:

                  module load metaWRAP/1.3-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty metaWRAP/1.3-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/metaerg/", "title": "metaerg", "text": ""}, {"location": "available_software/detail/metaerg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which metaerg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using metaerg, load one of these modules using a module load command like:

                  module load metaerg/1.2.3-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty metaerg/1.2.3-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/methylpy/", "title": "methylpy", "text": ""}, {"location": "available_software/detail/methylpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which methylpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using methylpy, load one of these modules using a module load command like:

                  module load methylpy/1.2.9-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty methylpy/1.2.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/mgen/", "title": "mgen", "text": ""}, {"location": "available_software/detail/mgen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mgen, load one of these modules using a module load command like:

                  module load mgen/1.2.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mgen/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/mgltools/", "title": "mgltools", "text": ""}, {"location": "available_software/detail/mgltools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mgltools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mgltools, load one of these modules using a module load command like:

                  module load mgltools/1.5.7\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mgltools/1.5.7 x x x - x x"}, {"location": "available_software/detail/mhcnuggets/", "title": "mhcnuggets", "text": ""}, {"location": "available_software/detail/mhcnuggets/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mhcnuggets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mhcnuggets, load one of these modules using a module load command like:

                  module load mhcnuggets/2.3-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mhcnuggets/2.3-fosscuda-2020b - - - - x - mhcnuggets/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/microctools/", "title": "microctools", "text": ""}, {"location": "available_software/detail/microctools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which microctools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using microctools, load one of these modules using a module load command like:

                  module load microctools/0.1.0-20201209-foss-2020b-R-4.0.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty microctools/0.1.0-20201209-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/minibar/", "title": "minibar", "text": ""}, {"location": "available_software/detail/minibar/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which minibar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using minibar, load one of these modules using a module load command like:

                  module load minibar/20200326-iccifort-2020.1.217-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minibar/20200326-iccifort-2020.1.217-Python-3.8.2 - x x - x - minibar/20200326-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/minimap2/", "title": "minimap2", "text": ""}, {"location": "available_software/detail/minimap2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which minimap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using minimap2, load one of these modules using a module load command like:

                  module load minimap2/2.26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minimap2/2.26-GCCcore-12.3.0 x x x x x x minimap2/2.26-GCCcore-12.2.0 x x x x x x minimap2/2.24-GCCcore-11.3.0 x x x x x x minimap2/2.24-GCCcore-11.2.0 x x x - x x minimap2/2.22-GCCcore-11.2.0 x x x - x x minimap2/2.20-GCCcore-10.3.0 x x x - x x minimap2/2.20-GCCcore-10.2.0 - x x - x x minimap2/2.18-GCCcore-10.2.0 - x x x x x minimap2/2.17-GCCcore-9.3.0 - x x - x x minimap2/2.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/minizip/", "title": "minizip", "text": ""}, {"location": "available_software/detail/minizip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which minizip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using minizip, load one of these modules using a module load command like:

                  module load minizip/1.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minizip/1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/misha/", "title": "misha", "text": ""}, {"location": "available_software/detail/misha/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which misha installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using misha, load one of these modules using a module load command like:

                  module load misha/4.0.10-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty misha/4.0.10-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/mkl-service/", "title": "mkl-service", "text": ""}, {"location": "available_software/detail/mkl-service/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mkl-service installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mkl-service, load one of these modules using a module load command like:

                  module load mkl-service/2.3.0-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mkl-service/2.3.0-intel-2021b x x x - x x mkl-service/2.3.0-intel-2020b - - x - x x mkl-service/2.3.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/mm-common/", "title": "mm-common", "text": ""}, {"location": "available_software/detail/mm-common/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mm-common installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mm-common, load one of these modules using a module load command like:

                  module load mm-common/1.0.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mm-common/1.0.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/molmod/", "title": "molmod", "text": ""}, {"location": "available_software/detail/molmod/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which molmod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using molmod, load one of these modules using a module load command like:

                  module load molmod/1.4.5-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty molmod/1.4.5-intel-2020a-Python-3.8.2 x x x x x x molmod/1.4.5-intel-2019b-Python-3.7.4 - x x - x x molmod/1.4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/mongolite/", "title": "mongolite", "text": ""}, {"location": "available_software/detail/mongolite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mongolite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mongolite, load one of these modules using a module load command like:

                  module load mongolite/2.3.0-foss-2020b-R-4.0.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mongolite/2.3.0-foss-2020b-R-4.0.4 - x x x x x mongolite/2.3.0-foss-2020b-R-4.0.3 - x x x x x mongolite/2.3.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/monitor/", "title": "monitor", "text": ""}, {"location": "available_software/detail/monitor/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which monitor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using monitor, load one of these modules using a module load command like:

                  module load monitor/1.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty monitor/1.1.2 - x x - x -"}, {"location": "available_software/detail/mosdepth/", "title": "mosdepth", "text": ""}, {"location": "available_software/detail/mosdepth/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mosdepth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mosdepth, load one of these modules using a module load command like:

                  module load mosdepth/0.3.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mosdepth/0.3.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/motionSegmentation/", "title": "motionSegmentation", "text": ""}, {"location": "available_software/detail/motionSegmentation/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which motionSegmentation installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using motionSegmentation, load one of these modules using a module load command like:

                  module load motionSegmentation/2.7.9-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty motionSegmentation/2.7.9-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/mpath/", "title": "mpath", "text": ""}, {"location": "available_software/detail/mpath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mpath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mpath, load one of these modules using a module load command like:

                  module load mpath/1.1.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mpath/1.1.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/mpi4py/", "title": "mpi4py", "text": ""}, {"location": "available_software/detail/mpi4py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mpi4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mpi4py, load one of these modules using a module load command like:

                  module load mpi4py/3.1.4-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mpi4py/3.1.4-gompi-2023a x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x"}, {"location": "available_software/detail/mrcfile/", "title": "mrcfile", "text": ""}, {"location": "available_software/detail/mrcfile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mrcfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mrcfile, load one of these modules using a module load command like:

                  module load mrcfile/1.3.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mrcfile/1.3.0-fosscuda-2020b x - - - x - mrcfile/1.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/muParser/", "title": "muParser", "text": ""}, {"location": "available_software/detail/muParser/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which muParser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using muParser, load one of these modules using a module load command like:

                  module load muParser/2.3.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty muParser/2.3.4-GCCcore-12.3.0 x x x x x x muParser/2.3.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/mujoco-py/", "title": "mujoco-py", "text": ""}, {"location": "available_software/detail/mujoco-py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mujoco-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mujoco-py, load one of these modules using a module load command like:

                  module load mujoco-py/2.3.7-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mujoco-py/2.3.7-foss-2023a x x x x x x mujoco-py/2.1.2.14-foss-2021b x x x x x x"}, {"location": "available_software/detail/multichoose/", "title": "multichoose", "text": ""}, {"location": "available_software/detail/multichoose/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which multichoose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using multichoose, load one of these modules using a module load command like:

                  module load multichoose/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty multichoose/1.0.3-GCCcore-11.3.0 x x x x x x multichoose/1.0.3-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/mygene/", "title": "mygene", "text": ""}, {"location": "available_software/detail/mygene/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mygene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mygene, load one of these modules using a module load command like:

                  module load mygene/3.2.2-foss-2022b\n
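
                  As an illustration, a minimal mygene query might look like this (note that it sends an HTTP request to the MyGene.info web service, so it assumes outbound internet access from the node where it runs):

                  # query MyGene.info for a human gene symbol
                  import mygene

                  mg = mygene.MyGeneInfo()
                  result = mg.query('symbol:CDK2', species='human', fields='symbol,name,entrezgene')
                  print(result.get('hits', []))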

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mygene/3.2.2-foss-2022b x x x x x x mygene/3.2.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/mysqlclient/", "title": "mysqlclient", "text": ""}, {"location": "available_software/detail/mysqlclient/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mysqlclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mysqlclient, load one of these modules using a module load command like:

                  module load mysqlclient/2.1.1-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mysqlclient/2.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/n2v/", "title": "n2v", "text": ""}, {"location": "available_software/detail/n2v/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which n2v installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using n2v, load one of these modules using a module load command like:

                  module load n2v/0.3.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty n2v/0.3.2-foss-2022a-CUDA-11.7.0 x - - - x - n2v/0.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/nanocompore/", "title": "nanocompore", "text": ""}, {"location": "available_software/detail/nanocompore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanocompore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanocompore, load one of these modules using a module load command like:

                  module load nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/nanofilt/", "title": "nanofilt", "text": ""}, {"location": "available_software/detail/nanofilt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanofilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanofilt, load one of these modules using a module load command like:

                  module load nanofilt/2.6.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanofilt/2.6.0-intel-2020a-Python-3.8.2 - x x - x x nanofilt/2.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanoget/", "title": "nanoget", "text": ""}, {"location": "available_software/detail/nanoget/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanoget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanoget, load one of these modules using a module load command like:

                  module load nanoget/1.18.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanoget/1.18.1-foss-2022a x x x x x x nanoget/1.18.1-foss-2021a x x x x x x nanoget/1.15.0-intel-2020b - x x - x x nanoget/1.12.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanomath/", "title": "nanomath", "text": ""}, {"location": "available_software/detail/nanomath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanomath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanomath, load one of these modules using a module load command like:

                  module load nanomath/1.3.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanomath/1.3.0-foss-2022a x x x x x x nanomath/1.2.1-foss-2021a x x x x x x nanomath/1.2.0-intel-2020b - x x - x x nanomath/0.23.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanopolish/", "title": "nanopolish", "text": ""}, {"location": "available_software/detail/nanopolish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanopolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanopolish, load one of these modules using a module load command like:

                  module load nanopolish/0.14.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanopolish/0.14.0-foss-2022a x x x x x x nanopolish/0.13.3-foss-2020b - x x x x x nanopolish/0.13.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/napari/", "title": "napari", "text": ""}, {"location": "available_software/detail/napari/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using napari, load one of these modules using a module load command like:

                  module load napari/0.4.18-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty napari/0.4.18-foss-2022a x x x x x x napari/0.4.15-foss-2021b x x x - x x"}, {"location": "available_software/detail/ncbi-vdb/", "title": "ncbi-vdb", "text": ""}, {"location": "available_software/detail/ncbi-vdb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncbi-vdb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncbi-vdb, load one of these modules using a module load command like:

                  module load ncbi-vdb/3.0.2-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncbi-vdb/3.0.2-gompi-2022a x x x x x x ncbi-vdb/3.0.0-gompi-2021b x x x x x x ncbi-vdb/2.11.2-gompi-2021b x x x x x x ncbi-vdb/2.10.9-gompi-2020b - x x x x x ncbi-vdb/2.10.7-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ncdf4/", "title": "ncdf4", "text": ""}, {"location": "available_software/detail/ncdf4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncdf4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncdf4, load one of these modules using a module load command like:

                  module load ncdf4/1.17-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncdf4/1.17-foss-2021a-R-4.1.0 - x x - x x ncdf4/1.17-foss-2020b-R-4.0.3 x x x x x x ncdf4/1.17-foss-2020a-R-4.0.0 - x x - x x ncdf4/1.17-foss-2019b - x x - x x"}, {"location": "available_software/detail/ncolor/", "title": "ncolor", "text": ""}, {"location": "available_software/detail/ncolor/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncolor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncolor, load one of these modules using a module load command like:

                  module load ncolor/1.2.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncolor/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/ncurses/", "title": "ncurses", "text": ""}, {"location": "available_software/detail/ncurses/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncurses installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncurses, load one of these modules using a module load command like:

                  module load ncurses/6.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncurses/6.4-GCCcore-13.2.0 x x x x x x ncurses/6.4-GCCcore-12.3.0 x x x x x x ncurses/6.4 x x x x x x ncurses/6.3-GCCcore-12.2.0 x x x x x x ncurses/6.3-GCCcore-11.3.0 x x x x x x ncurses/6.3 x x x x x x ncurses/6.2-GCCcore-11.2.0 x x x x x x ncurses/6.2-GCCcore-10.3.0 x x x x x x ncurses/6.2-GCCcore-10.2.0 x x x x x x ncurses/6.2-GCCcore-9.3.0 x x x x x x ncurses/6.2 x x x x x x ncurses/6.1-GCCcore-8.3.0 x x x x x x ncurses/6.1-GCCcore-8.2.0 - x - - - - ncurses/6.1 x x x x x x ncurses/6.0 x x x x x x"}, {"location": "available_software/detail/ncview/", "title": "ncview", "text": ""}, {"location": "available_software/detail/ncview/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncview, load one of these modules using a module load command like:

                  module load ncview/2.1.7-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncview/2.1.7-intel-2019b - x x - x x"}, {"location": "available_software/detail/netCDF-C%2B%2B4/", "title": "netCDF-C++4", "text": ""}, {"location": "available_software/detail/netCDF-C%2B%2B4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netCDF-C++4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netCDF-C++4, load one of these modules using a module load command like:

                  module load netCDF-C++4/4.3.1-iimpi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF-C++4/4.3.1-iimpi-2020b - x x x x x netCDF-C++4/4.3.1-iimpi-2019b - x x - x x netCDF-C++4/4.3.1-gompi-2021b x x x - x x netCDF-C++4/4.3.1-gompi-2021a - x x - x x netCDF-C++4/4.3.1-gompi-2020a - x x - x x"}, {"location": "available_software/detail/netCDF-Fortran/", "title": "netCDF-Fortran", "text": ""}, {"location": "available_software/detail/netCDF-Fortran/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netCDF-Fortran, load one of these modules using a module load command like:

                  module load netCDF-Fortran/4.6.0-iimpi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF-Fortran/4.6.0-iimpi-2022a - - x - x x netCDF-Fortran/4.6.0-gompi-2022a x - x - x - netCDF-Fortran/4.5.3-iimpi-2021b x x x x x x netCDF-Fortran/4.5.3-iimpi-2020b - x x x x x netCDF-Fortran/4.5.3-gompi-2021b x x x x x x netCDF-Fortran/4.5.3-gompi-2021a - x x - x x netCDF-Fortran/4.5.2-iimpi-2020a - x x - x x netCDF-Fortran/4.5.2-iimpi-2019b - x x - x x netCDF-Fortran/4.5.2-gompi-2020a - x x - x x netCDF-Fortran/4.5.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/netCDF/", "title": "netCDF", "text": ""}, {"location": "available_software/detail/netCDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netCDF, load one of these modules using a module load command like:

                  module load netCDF/4.9.2-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF/4.9.2-gompi-2023a x x x x x x netCDF/4.9.0-iimpi-2022a - - x - x x netCDF/4.9.0-gompi-2022b x x x x x x netCDF/4.9.0-gompi-2022a x x x x x x netCDF/4.8.1-iimpi-2021b x x x x x x netCDF/4.8.1-gompi-2021b x x x x x x netCDF/4.8.0-iimpi-2021a - x x - x x netCDF/4.8.0-gompi-2021a x x x x x x netCDF/4.7.4-iimpi-2020b - x x x x x netCDF/4.7.4-iimpi-2020a - x x - x x netCDF/4.7.4-gompic-2020b - - - - x - netCDF/4.7.4-gompi-2020b x x x x x x netCDF/4.7.4-gompi-2020a - x x - x x netCDF/4.7.1-iimpi-2019b - x x - x x netCDF/4.7.1-gompi-2019b x x x - x x"}, {"location": "available_software/detail/netcdf4-python/", "title": "netcdf4-python", "text": ""}, {"location": "available_software/detail/netcdf4-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netcdf4-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netcdf4-python, load one of these modules using a module load command like:

                  module load netcdf4-python/1.6.4-foss-2023a\n
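
                  As an illustration, a minimal netcdf4-python sketch might look like this (the file and variable names are hypothetical; numpy is assumed to be available, e.g. via a SciPy-bundle module):

                  # create a small NetCDF file with one variable, then read it back
                  import numpy as np
                  from netCDF4 import Dataset

                  with Dataset('example.nc', 'w') as ds:
                      ds.createDimension('time', 5)
                      var = ds.createVariable('temperature', 'f4', ('time',))
                      var[:] = np.arange(5, dtype='f4')

                  with Dataset('example.nc') as ds:
                      print(ds.variables['temperature'][:])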

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netcdf4-python/1.6.4-foss-2023a x x x x x x netcdf4-python/1.6.1-foss-2022a x x x x x x netcdf4-python/1.5.7-intel-2021b x x x - x x netcdf4-python/1.5.7-foss-2021b x x x x x x netcdf4-python/1.5.7-foss-2021a x x x x x x netcdf4-python/1.5.5.1-intel-2020b - x x - x x netcdf4-python/1.5.5.1-fosscuda-2020b - - - - x - netcdf4-python/1.5.3-intel-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-intel-2019b-Python-3.7.4 - x x - x x netcdf4-python/1.5.3-foss-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nettle/", "title": "nettle", "text": ""}, {"location": "available_software/detail/nettle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nettle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nettle, load one of these modules using a module load command like:

                  module load nettle/3.9.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nettle/3.9.1-GCCcore-12.3.0 x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x nettle/3.8-GCCcore-11.3.0 x x x x x x nettle/3.7.3-GCCcore-11.2.0 x x x x x x nettle/3.7.2-GCCcore-10.3.0 x x x x x x nettle/3.6-GCCcore-10.2.0 x x x x x x nettle/3.6-GCCcore-9.3.0 - x x - x x nettle/3.5.1-GCCcore-8.3.0 x x x - x x nettle/3.4.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/networkx/", "title": "networkx", "text": ""}, {"location": "available_software/detail/networkx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which networkx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using networkx, load one of these modules using a module load command like:

                  module load networkx/3.1-gfbf-2023a\n
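
                  As an illustration, a minimal networkx sketch might look like this (the graph is a made-up example):

                  # build a small undirected graph and compute a shortest path
                  import networkx as nx

                  G = nx.Graph()
                  G.add_edges_from([(1, 2), (2, 3), (3, 4), (1, 4)])
                  print(nx.shortest_path(G, source=1, target=3))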

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty networkx/3.1-gfbf-2023a x x x x x x networkx/3.0-gfbf-2022b x x x x x x networkx/3.0-foss-2022b x x x x x x networkx/2.8.4-intel-2022a x x x x x x networkx/2.8.4-foss-2022a x x x x x x networkx/2.6.3-foss-2021b x x x x x x networkx/2.5.1-foss-2021a x x x x x x networkx/2.5-fosscuda-2020b x - - - x - networkx/2.5-foss-2020b - x x x x x networkx/2.4-intel-2020a-Python-3.8.2 - x x - x x networkx/2.4-intel-2019b-Python-3.7.4 - x x - x x networkx/2.4-foss-2020a-Python-3.8.2 - x x - x x networkx/2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nghttp2/", "title": "nghttp2", "text": ""}, {"location": "available_software/detail/nghttp2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nghttp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nghttp2, load one of these modules using a module load command like:

                  module load nghttp2/1.48.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nghttp2/1.48.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nghttp3/", "title": "nghttp3", "text": ""}, {"location": "available_software/detail/nghttp3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nghttp3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nghttp3, load one of these modules using a module load command like:

                  module load nghttp3/0.6.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nghttp3/0.6.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/nglview/", "title": "nglview", "text": ""}, {"location": "available_software/detail/nglview/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nglview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nglview, load one of these modules using a module load command like:

                  module load nglview/2.7.7-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nglview/2.7.7-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/ngtcp2/", "title": "ngtcp2", "text": ""}, {"location": "available_software/detail/ngtcp2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ngtcp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ngtcp2, load one of these modules using a module load command like:

                  module load ngtcp2/0.7.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ngtcp2/0.7.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nichenetr/", "title": "nichenetr", "text": ""}, {"location": "available_software/detail/nichenetr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nichenetr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nichenetr, load one of these modules using a module load command like:

                  module load nichenetr/2.0.4-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nichenetr/2.0.4-foss-2022b-R-4.2.2 x x x x x x nichenetr/1.1.1-20230223-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/nlohmann_json/", "title": "nlohmann_json", "text": ""}, {"location": "available_software/detail/nlohmann_json/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nlohmann_json, load one of these modules using a module load command like:

                  module load nlohmann_json/3.11.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x nlohmann_json/3.10.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/nnU-Net/", "title": "nnU-Net", "text": ""}, {"location": "available_software/detail/nnU-Net/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nnU-Net installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nnU-Net, load one of these modules using a module load command like:

                  module load nnU-Net/1.7.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nnU-Net/1.7.0-fosscuda-2020b x - - - x - nnU-Net/1.7.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/nodejs/", "title": "nodejs", "text": ""}, {"location": "available_software/detail/nodejs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nodejs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nodejs, load one of these modules using a module load command like:

                  module load nodejs/18.17.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nodejs/18.17.1-GCCcore-12.3.0 x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x nodejs/16.15.1-GCCcore-11.3.0 x x x x x x nodejs/14.17.6-GCCcore-11.2.0 x x x x x x nodejs/14.17.0-GCCcore-10.3.0 x x x x x x nodejs/12.19.0-GCCcore-10.2.0 x x x x x x nodejs/12.16.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/noise/", "title": "noise", "text": ""}, {"location": "available_software/detail/noise/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which noise installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using noise, load one of these modules using a module load command like:

                  module load noise/1.2.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty noise/1.2.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/nsync/", "title": "nsync", "text": ""}, {"location": "available_software/detail/nsync/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nsync installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nsync, load one of these modules using a module load command like:

                  module load nsync/1.26.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nsync/1.26.0-GCCcore-12.3.0 x x x x x x nsync/1.26.0-GCCcore-12.2.0 x x x x x x nsync/1.25.0-GCCcore-11.3.0 x x x x x x nsync/1.24.0-GCCcore-11.2.0 x x x x x x nsync/1.24.0-GCCcore-10.3.0 x x x x x x nsync/1.24.0-GCCcore-10.2.0 x x x x x x nsync/1.24.0-GCCcore-9.3.0 - x x - x x nsync/1.24.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ntCard/", "title": "ntCard", "text": ""}, {"location": "available_software/detail/ntCard/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ntCard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ntCard, load one of these modules using a module load command like:

                  module load ntCard/1.2.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ntCard/1.2.2-GCC-12.3.0 x x x x x x ntCard/1.2.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/num2words/", "title": "num2words", "text": ""}, {"location": "available_software/detail/num2words/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which num2words installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using num2words, load one of these modules using a module load command like:

                  module load num2words/0.5.10-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty num2words/0.5.10-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/numactl/", "title": "numactl", "text": ""}, {"location": "available_software/detail/numactl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which numactl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using numactl, load one of these modules using a module load command like:

                  module load numactl/2.0.16-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numactl/2.0.16-GCCcore-13.2.0 x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x numactl/2.0.14-GCCcore-11.3.0 x x x x x x numactl/2.0.14-GCCcore-11.2.0 x x x x x x numactl/2.0.14-GCCcore-10.3.0 x x x x x x numactl/2.0.13-GCCcore-10.2.0 x x x x x x numactl/2.0.13-GCCcore-9.3.0 x x x x x x numactl/2.0.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/numba/", "title": "numba", "text": ""}, {"location": "available_software/detail/numba/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which numba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using numba, load one of these modules using a module load command like:

                  module load numba/0.58.1-foss-2023a\n
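
                  As an illustration, a minimal numba sketch might look like this (numpy is assumed to be available, e.g. via a SciPy-bundle module):

                  # JIT-compile a simple numerical loop with numba
                  import numpy as np
                  from numba import njit

                  @njit
                  def total(values):
                      s = 0.0
                      for v in values:
                          s += v
                      return s

                  print(total(np.arange(1_000_000, dtype=np.float64)))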

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numba/0.58.1-foss-2023a x x x x x x numba/0.58.1-foss-2022b x x x x x x numba/0.56.4-foss-2022a-CUDA-11.7.0 x - x - x - numba/0.56.4-foss-2022a x x x x x x numba/0.54.1-intel-2021b x x x - x x numba/0.54.1-foss-2021b-CUDA-11.4.1 x - - - x - numba/0.54.1-foss-2021b x x x x x x numba/0.53.1-fosscuda-2020b - - - - x - numba/0.53.1-foss-2021a x x x x x x numba/0.53.1-foss-2020b - x x x x x numba/0.52.0-intel-2020b - x x - x x numba/0.52.0-fosscuda-2020b - - - - x - numba/0.52.0-foss-2020b - x x x x x numba/0.50.0-intel-2020a-Python-3.8.2 - x x - x x numba/0.50.0-foss-2020a-Python-3.8.2 - x x - x x numba/0.47.0-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/numexpr/", "title": "numexpr", "text": ""}, {"location": "available_software/detail/numexpr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which numexpr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using numexpr, load one of these modules using a module load command like:

                  module load numexpr/2.7.1-intel-2020a-Python-3.8.2\n
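
                  As an illustration, a minimal numexpr sketch might look like this (numpy is assumed to be available, e.g. via a SciPy-bundle module):

                  # evaluate an array expression with numexpr
                  import numpy as np
                  import numexpr as ne

                  a = np.arange(1_000_000, dtype=np.float64)
                  b = np.ones_like(a)
                  print(ne.evaluate('2*a + b**2')[:5])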

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numexpr/2.7.1-intel-2020a-Python-3.8.2 x x x x x x numexpr/2.7.1-intel-2019b-Python-2.7.16 - x - - - x numexpr/2.7.1-foss-2020a-Python-3.8.2 - x x - x x numexpr/2.7.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nvtop/", "title": "nvtop", "text": ""}, {"location": "available_software/detail/nvtop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nvtop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nvtop, load one of these modules using a module load command like:

                  module load nvtop/1.2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nvtop/1.2.1-GCCcore-10.3.0 x - - - - -"}, {"location": "available_software/detail/olaFlow/", "title": "olaFlow", "text": ""}, {"location": "available_software/detail/olaFlow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which olaFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using olaFlow, load one of these modules using a module load command like:

                  module load olaFlow/20210820-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty olaFlow/20210820-foss-2021b x x x - x x"}, {"location": "available_software/detail/olego/", "title": "olego", "text": ""}, {"location": "available_software/detail/olego/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which olego installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using olego, load one of these modules using a module load command like:

                  module load olego/1.1.9-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty olego/1.1.9-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/onedrive/", "title": "onedrive", "text": ""}, {"location": "available_software/detail/onedrive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which onedrive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using onedrive, load one of these modules using a module load command like:

                  module load onedrive/2.4.21-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty onedrive/2.4.21-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/ont-fast5-api/", "title": "ont-fast5-api", "text": ""}, {"location": "available_software/detail/ont-fast5-api/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ont-fast5-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ont-fast5-api, load one of these modules using a module load command like:

                  module load ont-fast5-api/4.1.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ont-fast5-api/4.1.1-foss-2022b x x x x x x ont-fast5-api/4.1.1-foss-2022a x x x x x x ont-fast5-api/4.0.2-foss-2021b x x x - x x ont-fast5-api/4.0.0-foss-2021a x x x - x x ont-fast5-api/3.3.0-fosscuda-2020b - - - - x - ont-fast5-api/3.3.0-foss-2020b - x x x x x ont-fast5-api/3.3.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/openCARP/", "title": "openCARP", "text": ""}, {"location": "available_software/detail/openCARP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openCARP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openCARP, load one of these modules using a module load command like:

                  module load openCARP/6.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openCARP/6.0-foss-2020b - x x x x x openCARP/3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/openkim-models/", "title": "openkim-models", "text": ""}, {"location": "available_software/detail/openkim-models/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openkim-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openkim-models, load one of these modules using a module load command like:

                  module load openkim-models/20190725-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openkim-models/20190725-intel-2019b - x x - x x openkim-models/20190725-foss-2019b - x x - x x"}, {"location": "available_software/detail/openpyxl/", "title": "openpyxl", "text": ""}, {"location": "available_software/detail/openpyxl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openpyxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openpyxl, load one of these modules using a module load command like:

                  module load openpyxl/3.1.2-GCCcore-13.2.0\n
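
                  As an illustration, a minimal openpyxl sketch might look like this (the file name results.xlsx is hypothetical):

                  # write a small .xlsx spreadsheet
                  from openpyxl import Workbook

                  wb = Workbook()
                  ws = wb.active
                  ws.title = 'results'
                  ws.append(['sample', 'value'])
                  ws.append(['A', 3.14])
                  wb.save('results.xlsx')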

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openpyxl/3.1.2-GCCcore-13.2.0 x x x x x x openpyxl/3.1.2-GCCcore-12.3.0 x x x x x x openpyxl/3.1.2-GCCcore-12.2.0 x x x x x x openpyxl/3.0.10-GCCcore-11.3.0 x x x x x x openpyxl/3.0.9-GCCcore-11.2.0 x x x x x x openpyxl/3.0.7-GCCcore-10.3.0 x x x x x x openpyxl/2.6.4-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/openslide-python/", "title": "openslide-python", "text": ""}, {"location": "available_software/detail/openslide-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openslide-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openslide-python, load one of these modules using a module load command like:

                  module load openslide-python/1.2.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openslide-python/1.2.0-GCCcore-11.3.0 x - x - x - openslide-python/1.1.2-GCCcore-11.2.0 x x x - x x openslide-python/1.1.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/orca/", "title": "orca", "text": ""}, {"location": "available_software/detail/orca/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using orca, load one of these modules using a module load command like:

                  module load orca/1.3.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty orca/1.3.1-GCCcore-10.2.0 - x - - - - orca/1.3.0-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/p11-kit/", "title": "p11-kit", "text": ""}, {"location": "available_software/detail/p11-kit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which p11-kit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using p11-kit, load one of these modules using a module load command like:

                  module load p11-kit/0.24.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p11-kit/0.24.1-GCCcore-11.2.0 x x x x x x p11-kit/0.24.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/p4est/", "title": "p4est", "text": ""}, {"location": "available_software/detail/p4est/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which p4est installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using p4est, load one of these modules using a module load command like:

                  module load p4est/2.8-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p4est/2.8-foss-2021a - x x - x x"}, {"location": "available_software/detail/p7zip/", "title": "p7zip", "text": ""}, {"location": "available_software/detail/p7zip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which p7zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using p7zip, load one of these modules using a module load command like:

                  module load p7zip/17.03-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p7zip/17.03-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/pIRS/", "title": "pIRS", "text": ""}, {"location": "available_software/detail/pIRS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pIRS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pIRS, load one of these modules using a module load command like:

                  module load pIRS/2.0.2-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pIRS/2.0.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/packmol/", "title": "packmol", "text": ""}, {"location": "available_software/detail/packmol/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which packmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using packmol, load one of these modules using a module load command like:

                  module load packmol/v20.2.2-iccifort-2020.1.217\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty packmol/v20.2.2-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/pagmo/", "title": "pagmo", "text": ""}, {"location": "available_software/detail/pagmo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pagmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pagmo, load one of these modules using a module load command like:

                  module load pagmo/2.17.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pagmo/2.17.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/pairtools/", "title": "pairtools", "text": ""}, {"location": "available_software/detail/pairtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pairtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pairtools, load one of these modules using a module load command like:

                  module load pairtools/0.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pairtools/0.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/panaroo/", "title": "panaroo", "text": ""}, {"location": "available_software/detail/panaroo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which panaroo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using panaroo, load one of these modules using a module load command like:

                  module load panaroo/1.2.8-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty panaroo/1.2.8-foss-2020b - x x x x x"}, {"location": "available_software/detail/pandas/", "title": "pandas", "text": ""}, {"location": "available_software/detail/pandas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pandas, load one of these modules using a module load command like:

                  module load pandas/1.1.2-foss-2020a-Python-3.8.2\n
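
                  As an illustration, a minimal pandas sketch might look like this (the data is a made-up example):

                  # build a small DataFrame and compute a grouped mean
                  import pandas as pd

                  df = pd.DataFrame({'group': ['a', 'a', 'b'], 'value': [1.0, 2.0, 3.0]})
                  print(df.groupby('group')['value'].mean())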

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pandas/1.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel-fastq-dump/", "title": "parallel-fastq-dump", "text": ""}, {"location": "available_software/detail/parallel-fastq-dump/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which parallel-fastq-dump installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using parallel-fastq-dump, load one of these modules using a module load command like:

                  module load parallel-fastq-dump/0.6.7-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parallel-fastq-dump/0.6.7-gompi-2022a x x x x x x parallel-fastq-dump/0.6.7-gompi-2020b - x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-SRA-Toolkit-3.0.0-Python-3.8.2 x x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel/", "title": "parallel", "text": ""}, {"location": "available_software/detail/parallel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which parallel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using parallel, load one of these modules using a module load command like:

                  module load parallel/20230722-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parallel/20230722-GCCcore-12.2.0 x x x x x x parallel/20220722-GCCcore-11.3.0 x x x x x x parallel/20210722-GCCcore-11.2.0 - x x x x x parallel/20210622-GCCcore-10.3.0 - x x x x x parallel/20210322-GCCcore-10.2.0 - x x x x x parallel/20200522-GCCcore-9.3.0 - x x - x x parallel/20190922-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/parasail/", "title": "parasail", "text": ""}, {"location": "available_software/detail/parasail/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using parasail, load one of these modules using a module load command like:

                  module load parasail/2.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parasail/2.6-GCC-11.3.0 x x x x x x parasail/2.5-GCC-11.2.0 x x x - x x parasail/2.4.3-GCC-10.3.0 x x x - x x parasail/2.4.3-GCC-10.2.0 - - x - x - parasail/2.4.2-iccifort-2020.1.217 - x x - x x parasail/2.4.1-intel-2019b - x x - x x parasail/2.4.1-foss-2019b - x - - - - parasail/2.4.1-GCC-8.3.0 - - x - x x"}, {"location": "available_software/detail/patchelf/", "title": "patchelf", "text": ""}, {"location": "available_software/detail/patchelf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which patchelf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using patchelf, load one of these modules using a module load command like:

                  module load patchelf/0.18.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty patchelf/0.18.0-GCCcore-13.2.0 x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x patchelf/0.17.2-GCCcore-12.2.0 x x x x x x patchelf/0.15.0-GCCcore-11.3.0 x x x x x x patchelf/0.13-GCCcore-11.2.0 x x x x x x patchelf/0.12-GCCcore-10.3.0 - x x - x x patchelf/0.12-GCCcore-9.3.0 - x x - x x patchelf/0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pauvre/", "title": "pauvre", "text": ""}, {"location": "available_software/detail/pauvre/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pauvre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pauvre, load one of these modules using a module load command like:

                  module load pauvre/0.1924-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pauvre/0.1924-intel-2020b - x x - x x pauvre/0.1923-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pblat/", "title": "pblat", "text": ""}, {"location": "available_software/detail/pblat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pblat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pblat, load one of these modules using a module load command like:

                  module load pblat/2.5.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pblat/2.5.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/pdsh/", "title": "pdsh", "text": ""}, {"location": "available_software/detail/pdsh/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pdsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pdsh, load one of these modules using a module load command like:

                  module load pdsh/2.34-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pdsh/2.34-GCCcore-12.3.0 x x x x x x pdsh/2.34-GCCcore-12.2.0 x x x x x x pdsh/2.34-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/peakdetect/", "title": "peakdetect", "text": ""}, {"location": "available_software/detail/peakdetect/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which peakdetect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using peakdetect, load one of these modules using a module load command like:

                  module load peakdetect/1.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty peakdetect/1.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/petsc4py/", "title": "petsc4py", "text": ""}, {"location": "available_software/detail/petsc4py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which petsc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using petsc4py, load one of these modules using a module load command like:

                  module load petsc4py/3.17.4-foss-2022a\n
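
                  As an illustration, a minimal petsc4py sketch might look like this (a small sequential vector; for parallel runs the script would be launched through the MPI runtime of the same toolchain):

                  # create a sequential PETSc vector, fill it and print its sum
                  from petsc4py import PETSc

                  x = PETSc.Vec().createSeq(10)   # vector of length 10
                  x.set(1.0)                      # fill with ones
                  print(x.sum())                  # expected: 10.0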

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty petsc4py/3.17.4-foss-2022a x x x x x x petsc4py/3.15.0-foss-2021a - x x - x x petsc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pftoolsV3/", "title": "pftoolsV3", "text": ""}, {"location": "available_software/detail/pftoolsV3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pftoolsV3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pftoolsV3, load one of these modules using a module load command like:

                  module load pftoolsV3/3.2.11-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pftoolsV3/3.2.11-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phonemizer/", "title": "phonemizer", "text": ""}, {"location": "available_software/detail/phonemizer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phonemizer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phonemizer, load one of these modules using a module load command like:

                  module load phonemizer/2.2.1-gompi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phonemizer/2.2.1-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/phonopy/", "title": "phonopy", "text": ""}, {"location": "available_software/detail/phonopy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phonopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phonopy, load one of these modules using a module load command like:

                  module load phonopy/2.7.1-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phonopy/2.7.1-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/phototonic/", "title": "phototonic", "text": ""}, {"location": "available_software/detail/phototonic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phototonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phototonic, load one of these modules using a module load command like:

                  module load phototonic/2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phototonic/2.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phyluce/", "title": "phyluce", "text": ""}, {"location": "available_software/detail/phyluce/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phyluce installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phyluce, load one of these modules using a module load command like:

                  module load phyluce/1.7.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phyluce/1.7.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/picard/", "title": "picard", "text": ""}, {"location": "available_software/detail/picard/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which picard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using picard, load one of these modules using a module load command like:

                  module load picard/2.25.1-Java-11\n
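
                  As an illustrative aside (not part of the auto-generated data above): picard is a Java tool, so after loading the module it is typically invoked through java -jar. The $EBROOTPICARD variable and the jar location below are assumptions based on the usual EasyBuild installation layout.

                  # print the usage of one Picard tool to verify the installation works
                  module load picard/2.25.1-Java-11
                  java -jar $EBROOTPICARD/picard.jar MarkDuplicates --help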

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty picard/2.25.1-Java-11 x x x x x x picard/2.25.0-Java-11 - x x x x x picard/2.21.6-Java-11 - x x - x x picard/2.21.1-Java-11 - - x - x x picard/2.18.27-Java-1.8 - - - - - x"}, {"location": "available_software/detail/pigz/", "title": "pigz", "text": ""}, {"location": "available_software/detail/pigz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pigz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pigz, load one of these modules using a module load command like:

                  module load pigz/2.8-GCCcore-12.3.0\n
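
                  As an illustrative aside (not part of the auto-generated data above): pigz is a drop-in parallel replacement for gzip; the file name and thread count below are placeholders.

                  # compress with 4 threads and keep the original file (produces data.txt.gz)
                  module load pigz/2.8-GCCcore-12.3.0
                  pigz -p 4 -k data.txt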

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pigz/2.8-GCCcore-12.3.0 x x x x x x pigz/2.7-GCCcore-11.3.0 x x x x x x pigz/2.6-GCCcore-11.2.0 x x x - x x pigz/2.6-GCCcore-10.2.0 - x x x x x pigz/2.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pixman/", "title": "pixman", "text": ""}, {"location": "available_software/detail/pixman/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pixman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pixman, load one of these modules using a module load command like:

                  module load pixman/0.42.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pixman/0.42.2-GCCcore-12.3.0 x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x pixman/0.40.0-GCCcore-11.3.0 x x x x x x pixman/0.40.0-GCCcore-11.2.0 x x x x x x pixman/0.40.0-GCCcore-10.3.0 x x x x x x pixman/0.40.0-GCCcore-10.2.0 x x x x x x pixman/0.38.4-GCCcore-9.3.0 x x x x x x pixman/0.38.4-GCCcore-8.3.0 x x x - x x pixman/0.38.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/pkg-config/", "title": "pkg-config", "text": ""}, {"location": "available_software/detail/pkg-config/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pkg-config installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pkg-config, load one of these modules using a module load command like:

                  module load pkg-config/0.29.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkg-config/0.29.2-GCCcore-12.2.0 x x x x x x pkg-config/0.29.2-GCCcore-11.3.0 x x x x x x pkg-config/0.29.2-GCCcore-11.2.0 x x x x x x pkg-config/0.29.2-GCCcore-10.3.0 x x x x x x pkg-config/0.29.2-GCCcore-10.2.0 x x x x x x pkg-config/0.29.2-GCCcore-9.3.0 x x x x x x pkg-config/0.29.2-GCCcore-8.3.0 x x x - x x pkg-config/0.29.2-GCCcore-8.2.0 - x - - - - pkg-config/0.29.2 x x x - x x"}, {"location": "available_software/detail/pkgconf/", "title": "pkgconf", "text": ""}, {"location": "available_software/detail/pkgconf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pkgconf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pkgconf, load one of these modules using a module load command like:

                  module load pkgconf/2.0.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x pkgconf/1.8.0-GCCcore-11.3.0 x x x x x x pkgconf/1.8.0-GCCcore-11.2.0 x x x x x x pkgconf/1.8.0 x x x x x x"}, {"location": "available_software/detail/pkgconfig/", "title": "pkgconfig", "text": ""}, {"location": "available_software/detail/pkgconfig/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pkgconfig, load one of these modules using a module load command like:

                  module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.2.0-python x x x x x x pkgconfig/1.5.4-GCCcore-10.3.0-python x x x x x x pkgconfig/1.5.1-GCCcore-10.2.0-python x x x x x x pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x pkgconfig/1.5.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/plot1cell/", "title": "plot1cell", "text": ""}, {"location": "available_software/detail/plot1cell/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which plot1cell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using plot1cell, load one of these modules using a module load command like:

                  module load plot1cell/0.0.1-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plot1cell/0.0.1-foss-2022b-R-4.2.2 x x x x x x plot1cell/0.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/plotly-orca/", "title": "plotly-orca", "text": ""}, {"location": "available_software/detail/plotly-orca/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which plotly-orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using plotly-orca, load one of these modules using a module load command like:

                  module load plotly-orca/1.3.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plotly-orca/1.3.1-GCCcore-10.2.0 - x x x x x plotly-orca/1.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/plotly.py/", "title": "plotly.py", "text": ""}, {"location": "available_software/detail/plotly.py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which plotly.py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using plotly.py, load one of these modules using a module load command like:

                  module load plotly.py/5.16.0-GCCcore-12.3.0\n
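
                  As an illustrative aside (not part of the auto-generated data above): since compute nodes have no display, writing figures to an HTML file is a reasonable pattern; the data and output name below are placeholders.

                  # build a simple figure and write it to a standalone HTML file
                  module load plotly.py/5.16.0-GCCcore-12.3.0
                  python -c "import plotly.graph_objects as go; fig = go.Figure(go.Scatter(y=[1, 3, 2])); fig.write_html('plot.html')"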

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plotly.py/5.16.0-GCCcore-12.3.0 x x x x x x plotly.py/5.13.1-GCCcore-12.2.0 x x x x x x plotly.py/5.12.0-GCCcore-11.3.0 x x x x x x plotly.py/5.10.0-GCCcore-11.3.0 x x x - x x plotly.py/5.4.0-GCCcore-11.2.0 x x x - x x plotly.py/5.1.0-GCCcore-10.3.0 x x x - x x plotly.py/4.14.3-GCCcore-10.2.0 - x x x x x plotly.py/4.8.1-GCCcore-9.3.0 - x x - x x plotly.py/4.4.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/pocl/", "title": "pocl", "text": ""}, {"location": "available_software/detail/pocl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pocl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pocl, load one of these modules using a module load command like:

                  module load pocl/4.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pocl/4.0-GCC-12.3.0 x x x x x x pocl/3.0-GCC-11.3.0 x x x - x x pocl/1.8-GCC-11.3.0-CUDA-11.7.0 x - - - x - pocl/1.8-GCC-11.3.0 x x x x x x pocl/1.8-GCC-11.2.0 x x x - x x pocl/1.6-gcccuda-2020b - - - - x - pocl/1.6-GCC-10.2.0 - x x x x x pocl/1.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/pod5-file-format/", "title": "pod5-file-format", "text": ""}, {"location": "available_software/detail/pod5-file-format/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pod5-file-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pod5-file-format, load one of these modules using a module load command like:

                  module load pod5-file-format/0.1.8-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pod5-file-format/0.1.8-foss-2022a x x x x x x"}, {"location": "available_software/detail/poetry/", "title": "poetry", "text": ""}, {"location": "available_software/detail/poetry/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which poetry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using poetry, load one of these modules using a module load command like:

                  module load poetry/1.7.1-GCCcore-12.3.0\n
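
                  Not part of the generated overview, just a sketch: a quick way to try the installation is to scaffold a throw-away project. The project and dependency names are placeholders, and adding dependencies needs network access (e.g. on a login node).

                  # create a new pyproject.toml-based project skeleton and add a dependency
                  module load poetry/1.7.1-GCCcore-12.3.0
                  poetry new demo-project
                  cd demo-project && poetry add requests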

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty poetry/1.7.1-GCCcore-12.3.0 x x x x x x poetry/1.6.1-GCCcore-13.2.0 x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/polars/", "title": "polars", "text": ""}, {"location": "available_software/detail/polars/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which polars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using polars, load one of these modules using a module load command like:

                  module load polars/0.15.6-foss-2022a\n
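
                  Not part of the generated overview, just a sketch of the polars DataFrame API after loading the module; the column name and values are arbitrary.

                  # build a tiny DataFrame and compute a column sum
                  module load polars/0.15.6-foss-2022a
                  python -c "import polars as pl; df = pl.DataFrame({'x': [1, 2, 3]}); print(df.select([pl.col('x').sum()]))"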

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty polars/0.15.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/poppler/", "title": "poppler", "text": ""}, {"location": "available_software/detail/poppler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which poppler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using poppler, load one of these modules using a module load command like:

                  module load poppler/23.09.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty poppler/23.09.0-GCC-12.3.0 x x x x x x poppler/22.01.0-GCC-11.2.0 x x x - x x poppler/21.06.1-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/popscle/", "title": "popscle", "text": ""}, {"location": "available_software/detail/popscle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which popscle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using popscle, load one of these modules using a module load command like:

                  module load popscle/0.1-beta-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty popscle/0.1-beta-foss-2019b - x x - x x popscle/0.1-beta-20210505-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/porefoam/", "title": "porefoam", "text": ""}, {"location": "available_software/detail/porefoam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which porefoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using porefoam, load one of these modules using a module load command like:

                  module load porefoam/2021-09-21-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty porefoam/2021-09-21-foss-2020a - x x - x x"}, {"location": "available_software/detail/powerlaw/", "title": "powerlaw", "text": ""}, {"location": "available_software/detail/powerlaw/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which powerlaw installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using powerlaw, load one of these modules using a module load command like:

                  module load powerlaw/1.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty powerlaw/1.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/pplacer/", "title": "pplacer", "text": ""}, {"location": "available_software/detail/pplacer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pplacer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pplacer, load one of these modules using a module load command like:

                  module load pplacer/1.1.alpha19\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pplacer/1.1.alpha19 x x x x x x"}, {"location": "available_software/detail/preseq/", "title": "preseq", "text": ""}, {"location": "available_software/detail/preseq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which preseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using preseq, load one of these modules using a module load command like:

                  module load preseq/3.2.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty preseq/3.2.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/presto/", "title": "presto", "text": ""}, {"location": "available_software/detail/presto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which presto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using presto, load one of these modules using a module load command like:

                  module load presto/1.0.0-20230501-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty presto/1.0.0-20230501-foss-2023a-R-4.3.2 x x x x x x presto/1.0.0-20230113-foss-2022a-R-4.2.1 x x x x x x presto/1.0.0-20200718-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/pretty-yaml/", "title": "pretty-yaml", "text": ""}, {"location": "available_software/detail/pretty-yaml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pretty-yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pretty-yaml, load one of these modules using a module load command like:

                  module load pretty-yaml/21.10.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pretty-yaml/21.10.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/prodigal/", "title": "prodigal", "text": ""}, {"location": "available_software/detail/prodigal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which prodigal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using prodigal, load one of these modules using a module load command like:

                  module load prodigal/2.6.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty prodigal/2.6.3-GCCcore-12.3.0 x x x x x x prodigal/2.6.3-GCCcore-12.2.0 x x x x x x prodigal/2.6.3-GCCcore-11.3.0 x x x x x x prodigal/2.6.3-GCCcore-11.2.0 x x x x x x prodigal/2.6.3-GCCcore-10.2.0 x x x x x x prodigal/2.6.3-GCCcore-9.3.0 - x x - x x prodigal/2.6.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/prokka/", "title": "prokka", "text": ""}, {"location": "available_software/detail/prokka/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which prokka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using prokka, load one of these modules using a module load command like:

                  module load prokka/1.14.5-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty prokka/1.14.5-gompi-2020b - x x x x x prokka/1.14.5-gompi-2019b - x x - x x"}, {"location": "available_software/detail/protobuf-python/", "title": "protobuf-python", "text": ""}, {"location": "available_software/detail/protobuf-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using protobuf-python, load one of these modules using a module load command like:

                  module load protobuf-python/4.24.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x protobuf-python/4.23.0-GCCcore-12.2.0 x x x x x x protobuf-python/3.19.4-GCCcore-11.3.0 x x x x x x protobuf-python/3.17.3-GCCcore-11.2.0 x x x x x x protobuf-python/3.17.3-GCCcore-10.3.0 x x x x x x protobuf-python/3.14.0-GCCcore-10.2.0 x x x x x x protobuf-python/3.13.0-foss-2020a-Python-3.8.2 - x x - x x protobuf-python/3.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/protobuf/", "title": "protobuf", "text": ""}, {"location": "available_software/detail/protobuf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which protobuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using protobuf, load one of these modules using a module load command like:

                  module load protobuf/24.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty protobuf/24.0-GCCcore-12.3.0 x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x protobuf/3.19.4-GCCcore-11.3.0 x x x x x x protobuf/3.17.3-GCCcore-11.2.0 x x x x x x protobuf/3.17.3-GCCcore-10.3.0 x x x x x x protobuf/3.14.0-GCCcore-10.2.0 x x x x x x protobuf/3.13.0-GCCcore-9.3.0 - x x - x x protobuf/3.10.0-GCCcore-8.3.0 - x x - x x protobuf/2.5.0-GCCcore-10.2.0 - x x - x x protobuf/2.5.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/psutil/", "title": "psutil", "text": ""}, {"location": "available_software/detail/psutil/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which psutil installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using psutil, load one of these modules using a module load command like:

                  module load psutil/5.9.5-GCCcore-12.2.0\n
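
                  As an illustrative aside (not part of the auto-generated data above): a one-liner that reports the core count and total memory of the node you are on.

                  # query CPU and memory information through the psutil API
                  module load psutil/5.9.5-GCCcore-12.2.0
                  python -c "import psutil; print(psutil.cpu_count(), psutil.virtual_memory().total)"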

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty psutil/5.9.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/psycopg2/", "title": "psycopg2", "text": ""}, {"location": "available_software/detail/psycopg2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which psycopg2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using psycopg2, load one of these modules using a module load command like:

                  module load psycopg2/2.9.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty psycopg2/2.9.6-GCCcore-11.3.0 x x x x x x psycopg2/2.9.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pugixml/", "title": "pugixml", "text": ""}, {"location": "available_software/detail/pugixml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pugixml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pugixml, load one of these modules using a module load command like:

                  module load pugixml/1.12.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pugixml/1.12.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pullseq/", "title": "pullseq", "text": ""}, {"location": "available_software/detail/pullseq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pullseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pullseq, load one of these modules using a module load command like:

                  module load pullseq/1.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pullseq/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/purge_dups/", "title": "purge_dups", "text": ""}, {"location": "available_software/detail/purge_dups/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which purge_dups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using purge_dups, load one of these modules using a module load command like:

                  module load purge_dups/1.2.5-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty purge_dups/1.2.5-foss-2021b x x x - x x"}, {"location": "available_software/detail/pv/", "title": "pv", "text": ""}, {"location": "available_software/detail/pv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pv, load one of these modules using a module load command like:

                  module load pv/1.7.24-GCCcore-12.3.0\n
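
                  As an illustrative aside (not part of the auto-generated data above): pv shows a progress bar for data moving through a pipe; the archive name below is a placeholder.

                  # watch the progress of extracting a large archive
                  module load pv/1.7.24-GCCcore-12.3.0
                  pv archive.tar.gz | tar -xzf -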

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pv/1.7.24-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/py-cpuinfo/", "title": "py-cpuinfo", "text": ""}, {"location": "available_software/detail/py-cpuinfo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which py-cpuinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using py-cpuinfo, load one of these modules using a module load command like:

                  module load py-cpuinfo/9.0.0-GCCcore-12.2.0\n
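
                  As an illustrative aside (not part of the auto-generated data above): the package is imported as cpuinfo and can report the CPU model of the node.

                  # print the CPU brand string of the current node
                  module load py-cpuinfo/9.0.0-GCCcore-12.2.0
                  python -c "import cpuinfo; print(cpuinfo.get_cpu_info()['brand_raw'])"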

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty py-cpuinfo/9.0.0-GCCcore-12.2.0 x x x x x x py-cpuinfo/9.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/py3Dmol/", "title": "py3Dmol", "text": ""}, {"location": "available_software/detail/py3Dmol/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which py3Dmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using py3Dmol, load one of these modules using a module load command like:

                  module load py3Dmol/2.0.1.post1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty py3Dmol/2.0.1.post1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pyBigWig/", "title": "pyBigWig", "text": ""}, {"location": "available_software/detail/pyBigWig/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyBigWig, load one of these modules using a module load command like:

                  module load pyBigWig/0.3.18-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyBigWig/0.3.18-foss-2022a x x x x x x pyBigWig/0.3.18-foss-2021b x x x - x x pyBigWig/0.3.18-GCCcore-10.2.0 - x x x x x pyBigWig/0.3.17-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/pyEGA3/", "title": "pyEGA3", "text": ""}, {"location": "available_software/detail/pyEGA3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyEGA3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyEGA3, load one of these modules using a module load command like:

                  module load pyEGA3/5.0.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyEGA3/5.0.2-GCCcore-12.3.0 x x x x x x pyEGA3/4.0.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/pyGenomeTracks/", "title": "pyGenomeTracks", "text": ""}, {"location": "available_software/detail/pyGenomeTracks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyGenomeTracks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyGenomeTracks, load one of these modules using a module load command like:

                  module load pyGenomeTracks/3.8-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyGenomeTracks/3.8-foss-2022a x x x x x x pyGenomeTracks/3.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/pySCENIC/", "title": "pySCENIC", "text": ""}, {"location": "available_software/detail/pySCENIC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pySCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pySCENIC, load one of these modules using a module load command like:

                  module load pySCENIC/0.10.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pySCENIC/0.10.3-intel-2020a-Python-3.8.2 - x x - x x pySCENIC/0.10.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyWannier90/", "title": "pyWannier90", "text": ""}, {"location": "available_software/detail/pyWannier90/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyWannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyWannier90, load one of these modules using a module load command like:

                  module load pyWannier90/2021-12-07-gomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyWannier90/2021-12-07-gomkl-2021a x x x - x x pyWannier90/2021-12-07-foss-2021a x x x - x x"}, {"location": "available_software/detail/pybedtools/", "title": "pybedtools", "text": ""}, {"location": "available_software/detail/pybedtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pybedtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pybedtools, load one of these modules using a module load command like:

                  module load pybedtools/0.9.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pybedtools/0.9.0-GCC-12.2.0 x x x x x x pybedtools/0.9.0-GCC-11.3.0 x x x x x x pybedtools/0.8.2-GCC-11.2.0-Python-2.7.18 x x x x x x pybedtools/0.8.2-GCC-11.2.0 x x x - x x pybedtools/0.8.2-GCC-10.2.0-Python-2.7.18 - x x x x x pybedtools/0.8.2-GCC-10.2.0 - x x x x x pybedtools/0.8.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/pybind11/", "title": "pybind11", "text": ""}, {"location": "available_software/detail/pybind11/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pybind11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pybind11, load one of these modules using a module load command like:

                  module load pybind11/2.11.1-GCCcore-13.2.0\n
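
                  As an illustrative aside (not part of the auto-generated data above): a quick sanity check is to ask pybind11 for its header directory, which you would pass to your compiler when building a C++ extension.

                  # print the include path that C++ extension builds need
                  module load pybind11/2.11.1-GCCcore-13.2.0
                  python -c "import pybind11; print(pybind11.get_include())"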

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pybind11/2.11.1-GCCcore-13.2.0 x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x pybind11/2.9.2-GCCcore-11.3.0 x x x x x x pybind11/2.7.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x pybind11/2.7.1-GCCcore-11.2.0 x x x x x x pybind11/2.6.2-GCCcore-10.3.0 x x x x x x pybind11/2.6.0-GCCcore-10.2.0 x x x x x x pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2 x x x x x x pybind11/2.4.3-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycocotools/", "title": "pycocotools", "text": ""}, {"location": "available_software/detail/pycocotools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pycocotools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pycocotools, load one of these modules using a module load command like:

                  module load pycocotools/2.0.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pycocotools/2.0.4-foss-2021a x x x - x x pycocotools/2.0.1-foss-2019b-Python-3.7.4 - x x - x x pycocotools/2.0.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycodestyle/", "title": "pycodestyle", "text": ""}, {"location": "available_software/detail/pycodestyle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pycodestyle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pycodestyle, load one of these modules using a module load command like:

                  module load pycodestyle/2.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pycodestyle/2.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/pydantic/", "title": "pydantic", "text": ""}, {"location": "available_software/detail/pydantic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pydantic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pydantic, load one of these modules using a module load command like:

                  module load pydantic/2.5.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydantic/2.5.3-GCCcore-12.3.0 x x x x x x pydantic/2.5.3-GCCcore-12.2.0 x x x x x x pydantic/1.10.13-GCCcore-12.3.0 x x x x x x pydantic/1.10.4-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/pydicom/", "title": "pydicom", "text": ""}, {"location": "available_software/detail/pydicom/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pydicom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pydicom, load one of these modules using a module load command like:

                  module load pydicom/2.3.0-GCCcore-11.3.0\n
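
                  Not part of the generated overview, just a sketch of reading a DICOM file; image.dcm is a placeholder and the printed tags must actually be present in the file.

                  # read a DICOM file and print two common header fields
                  module load pydicom/2.3.0-GCCcore-11.3.0
                  python -c "import pydicom; ds = pydicom.dcmread('image.dcm'); print(ds.Modality, ds.PatientID)"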

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydicom/2.3.0-GCCcore-11.3.0 x x x x x x pydicom/2.2.2-GCCcore-10.3.0 x x x - x x pydicom/2.1.2-GCCcore-10.2.0 x x x x x x pydicom/1.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pydot/", "title": "pydot", "text": ""}, {"location": "available_software/detail/pydot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pydot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pydot, load one of these modules using a module load command like:

                  module load pydot/1.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydot/1.4.2-GCCcore-11.3.0 x x x x x x pydot/1.4.2-GCCcore-11.2.0 x x x x x x pydot/1.4.2-GCCcore-10.3.0 x x x x x x pydot/1.4.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/pyfaidx/", "title": "pyfaidx", "text": ""}, {"location": "available_software/detail/pyfaidx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyfaidx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyfaidx, load one of these modules using a module load command like:

                  module load pyfaidx/0.7.2.1-GCCcore-12.2.0\n
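
                  Not part of the generated overview, just a sketch: pyfaidx gives random access to FASTA files; genome.fa and the sequence name chr1 are placeholders.

                  # index a FASTA file (on first use) and slice the first 50 bases of one sequence
                  module load pyfaidx/0.7.2.1-GCCcore-12.2.0
                  python -c "from pyfaidx import Fasta; genome = Fasta('genome.fa'); print(genome['chr1'][:50])"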

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x pyfaidx/0.7.1-GCCcore-11.3.0 x x x x x x pyfaidx/0.7.0-GCCcore-11.2.0 x x x - x x pyfaidx/0.6.3.1-GCCcore-10.3.0 x x x - x x pyfaidx/0.5.9.5-GCCcore-10.2.0 - x x x x x pyfaidx/0.5.9.5-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyfasta/", "title": "pyfasta", "text": ""}, {"location": "available_software/detail/pyfasta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyfasta, load one of these modules using a module load command like:

                  module load pyfasta/0.5.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyfasta/0.5.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygmo/", "title": "pygmo", "text": ""}, {"location": "available_software/detail/pygmo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pygmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pygmo, load one of these modules using a module load command like:

                  module load pygmo/2.16.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pygmo/2.16.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygraphviz/", "title": "pygraphviz", "text": ""}, {"location": "available_software/detail/pygraphviz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pygraphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pygraphviz, load one of these modules using a module load command like:

                  module load pygraphviz/1.11-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pygraphviz/1.11-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pyiron/", "title": "pyiron", "text": ""}, {"location": "available_software/detail/pyiron/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyiron installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyiron, load one of these modules using a module load command like:

                  module load pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2 x x x x x x pyiron/0.2.6-hpcugent-2022c-intel-2020a-Python-3.8.2 - - - - - x pyiron/0.2.6-hpcugent-2022b-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2022-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2021-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2020-intel-2020a-Python-3.8.2 - x x - x -"}, {"location": "available_software/detail/pymatgen/", "title": "pymatgen", "text": ""}, {"location": "available_software/detail/pymatgen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pymatgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pymatgen, load one of these modules using a module load command like:

                  module load pymatgen/2022.9.21-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymatgen/2022.9.21-foss-2022a x x x - x x pymatgen/2022.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/pymbar/", "title": "pymbar", "text": ""}, {"location": "available_software/detail/pymbar/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pymbar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pymbar, load one of these modules using a module load command like:

                  module load pymbar/3.0.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymbar/3.0.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pymca/", "title": "pymca", "text": ""}, {"location": "available_software/detail/pymca/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pymca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pymca, load one of these modules using a module load command like:

                  module load pymca/5.6.3-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymca/5.6.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/pyobjcryst/", "title": "pyobjcryst", "text": ""}, {"location": "available_software/detail/pyobjcryst/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyobjcryst, load one of these modules using a module load command like:

                  module load pyobjcryst/2.2.1-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyobjcryst/2.2.1-intel-2020a-Python-3.8.2 - - - - - x pyobjcryst/2.2.1-foss-2021b x x x - x x pyobjcryst/2.1.0.post2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyodbc/", "title": "pyodbc", "text": ""}, {"location": "available_software/detail/pyodbc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyodbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyodbc, load one of these modules using a module load command like:

                  module load pyodbc/4.0.39-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyodbc/4.0.39-foss-2022b x x x x x x"}, {"location": "available_software/detail/pyparsing/", "title": "pyparsing", "text": ""}, {"location": "available_software/detail/pyparsing/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyparsing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyparsing, load one of these modules using a module load command like:

                  module load pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/pyproj/", "title": "pyproj", "text": ""}, {"location": "available_software/detail/pyproj/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyproj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyproj, load one of these modules using a module load command like:

                  module load pyproj/3.6.0-GCCcore-12.3.0\n
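
                  As an illustrative aside (not part of the auto-generated data above): a minimal coordinate transformation from WGS84 to Web Mercator; the longitude/latitude pair is arbitrary.

                  # transform lon/lat (EPSG:4326) to Web Mercator (EPSG:3857)
                  module load pyproj/3.6.0-GCCcore-12.3.0
                  python -c "from pyproj import Transformer; t = Transformer.from_crs('EPSG:4326', 'EPSG:3857', always_xy=True); print(t.transform(4.40, 51.22))"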

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyproj/3.6.0-GCCcore-12.3.0 x x x x x x pyproj/3.5.0-GCCcore-12.2.0 x x x x x x pyproj/3.4.0-GCCcore-11.3.0 x x x x x x pyproj/3.3.1-GCCcore-11.2.0 x x x - x x pyproj/3.0.1-GCCcore-10.2.0 - x x x x x pyproj/2.6.1.post1-GCCcore-9.3.0-Python-3.8.2 - x x - x x pyproj/2.4.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyro-api/", "title": "pyro-api", "text": ""}, {"location": "available_software/detail/pyro-api/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyro-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyro-api, load one of these modules using a module load command like:

                  module load pyro-api/0.1.2-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyro-api/0.1.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pyro-ppl/", "title": "pyro-ppl", "text": ""}, {"location": "available_software/detail/pyro-ppl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyro-ppl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyro-ppl, load one of these modules using a module load command like:

                  module load pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0 x - x - x - pyro-ppl/1.8.4-foss-2022a x x x x x x pyro-ppl/1.5.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pysamstats/", "title": "pysamstats", "text": ""}, {"location": "available_software/detail/pysamstats/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pysamstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pysamstats, load one of these modules using a module load command like:

                  module load pysamstats/1.1.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pysamstats/1.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pysndfx/", "title": "pysndfx", "text": ""}, {"location": "available_software/detail/pysndfx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pysndfx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pysndfx, load one of these modules using a module load command like:

                  module load pysndfx/0.3.6-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pysndfx/0.3.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyspoa/", "title": "pyspoa", "text": ""}, {"location": "available_software/detail/pyspoa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyspoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyspoa, load one of these modules using a module load command like:

                  module load pyspoa/0.0.9-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyspoa/0.0.9-GCC-11.3.0 x x x x x x pyspoa/0.0.8-GCC-11.2.0 x x x - x x pyspoa/0.0.8-GCC-10.3.0 x x x - x x pyspoa/0.0.8-GCC-10.2.0 - x x x x x pyspoa/0.0.4-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pytest-flakefinder/", "title": "pytest-flakefinder", "text": ""}, {"location": "available_software/detail/pytest-flakefinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest-flakefinder, load one of these modules using a module load command like:

                  module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pytest-rerunfailures/", "title": "pytest-rerunfailures", "text": ""}, {"location": "available_software/detail/pytest-rerunfailures/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest-rerunfailures, load one of these modules using a module load command like:

                  module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x pytest-rerunfailures/12.0-GCCcore-12.2.0 x x x x x x pytest-rerunfailures/11.1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-shard/", "title": "pytest-shard", "text": ""}, {"location": "available_software/detail/pytest-shard/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest-shard, load one of these modules using a module load command like:

                  module load pytest-shard/0.1.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x pytest-shard/0.1.2-GCCcore-12.2.0 x x x x x x pytest-shard/0.1.2-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-xdist/", "title": "pytest-xdist", "text": ""}, {"location": "available_software/detail/pytest-xdist/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-xdist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest-xdist, load one of these modules using a module load command like:

                  module load pytest-xdist/3.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-xdist/3.3.1-GCCcore-12.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.2.0 x - x - x - pytest-xdist/2.3.0-GCCcore-10.3.0 x x x x x x pytest-xdist/2.3.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/pytest/", "title": "pytest", "text": ""}, {"location": "available_software/detail/pytest/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest, load one of these modules using a module load command like:

                  module load pytest/7.4.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest/7.4.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pythermalcomfort/", "title": "pythermalcomfort", "text": ""}, {"location": "available_software/detail/pythermalcomfort/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pythermalcomfort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pythermalcomfort, load one of these modules using a module load command like:

                  module load pythermalcomfort/2.8.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pythermalcomfort/2.8.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-Levenshtein/", "title": "python-Levenshtein", "text": ""}, {"location": "available_software/detail/python-Levenshtein/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-Levenshtein, load one of these modules using a module load command like:

                  module load python-Levenshtein/0.12.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-Levenshtein/0.12.1-foss-2020b - x x x x x python-Levenshtein/0.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-igraph/", "title": "python-igraph", "text": ""}, {"location": "available_software/detail/python-igraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-igraph, load one of these modules using a module load command like:

                  module load python-igraph/0.11.4-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-igraph/0.11.4-foss-2023a x x x x x x python-igraph/0.10.3-foss-2022a x x x x x x python-igraph/0.9.8-foss-2021b x x x x x x python-igraph/0.9.6-foss-2021a x x x x x x python-igraph/0.9.0-fosscuda-2020b - - - - x - python-igraph/0.9.0-foss-2020b - x x x x x python-igraph/0.8.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/python-irodsclient/", "title": "python-irodsclient", "text": ""}, {"location": "available_software/detail/python-irodsclient/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-irodsclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-irodsclient, load one of these modules using a module load command like:

                  module load python-irodsclient/1.1.4-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-irodsclient/1.1.4-GCCcore-11.2.0 x x x - x x python-irodsclient/1.1.4-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-isal/", "title": "python-isal", "text": ""}, {"location": "available_software/detail/python-isal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-isal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-isal, load one of these modules using a module load command like:

                  module load python-isal/1.1.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-isal/1.1.0-GCCcore-11.3.0 x x x x x x python-isal/0.11.1-GCCcore-11.2.0 x x x - x x python-isal/0.11.1-GCCcore-10.2.0 - x x x x x python-isal/0.11.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-louvain/", "title": "python-louvain", "text": ""}, {"location": "available_software/detail/python-louvain/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-louvain, load one of these modules using a module load command like:

                  module load python-louvain/0.16-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-louvain/0.16-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-parasail/", "title": "python-parasail", "text": ""}, {"location": "available_software/detail/python-parasail/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-parasail, load one of these modules using a module load command like:

                  module load python-parasail/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-parasail/1.3.3-foss-2022a x x x x x x python-parasail/1.2.4-fosscuda-2020b - - - - x - python-parasail/1.2.4-foss-2021b x x x - x x python-parasail/1.2.4-foss-2021a x x x - x x python-parasail/1.2.2-intel-2020a-Python-3.8.2 - x x - x x python-parasail/1.2-intel-2019b-Python-3.7.4 - x x - x x python-parasail/1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-telegram-bot/", "title": "python-telegram-bot", "text": ""}, {"location": "available_software/detail/python-telegram-bot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-telegram-bot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-telegram-bot, load one of these modules using a module load command like:

                  module load python-telegram-bot/20.0a0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-telegram-bot/20.0a0-GCCcore-10.2.0 x x x - x x"}, {"location": "available_software/detail/python-weka-wrapper3/", "title": "python-weka-wrapper3", "text": ""}, {"location": "available_software/detail/python-weka-wrapper3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-weka-wrapper3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-weka-wrapper3, load one of these modules using a module load command like:

                  module load python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pythran/", "title": "pythran", "text": ""}, {"location": "available_software/detail/pythran/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pythran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pythran, load one of these modules using a module load command like:

                  module load pythran/0.9.4.post1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pythran/0.9.4.post1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qcat/", "title": "qcat", "text": ""}, {"location": "available_software/detail/qcat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which qcat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using qcat, load one of these modules using a module load command like:

                  module load qcat/1.1.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty qcat/1.1.0-intel-2020a-Python-3.8.2 - x x - x x qcat/1.1.0-intel-2019b-Python-3.7.4 - x x - x x qcat/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qnorm/", "title": "qnorm", "text": ""}, {"location": "available_software/detail/qnorm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which qnorm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using qnorm, load one of these modules using a module load command like:

                  module load qnorm/0.8.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty qnorm/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/rMATS-turbo/", "title": "rMATS-turbo", "text": ""}, {"location": "available_software/detail/rMATS-turbo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rMATS-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rMATS-turbo, load one of these modules using a module load command like:

                  module load rMATS-turbo/4.1.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rMATS-turbo/4.1.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/radian/", "title": "radian", "text": ""}, {"location": "available_software/detail/radian/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which radian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using radian, load one of these modules using a module load command like:

                  module load radian/0.6.9-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty radian/0.6.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/rasterio/", "title": "rasterio", "text": ""}, {"location": "available_software/detail/rasterio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rasterio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rasterio, load one of these modules using a module load command like:

                  module load rasterio/1.3.8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rasterio/1.3.8-foss-2022b x x x x x x rasterio/1.2.10-foss-2021b x x x - x x rasterio/1.1.7-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rasterstats/", "title": "rasterstats", "text": ""}, {"location": "available_software/detail/rasterstats/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rasterstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rasterstats, load one of these modules using a module load command like:

                  module load rasterstats/0.15.0-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rasterstats/0.15.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rclone/", "title": "rclone", "text": ""}, {"location": "available_software/detail/rclone/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rclone, load one of these modules using a module load command like:

                  module load rclone/1.65.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rclone/1.65.2 x x x x x x"}, {"location": "available_software/detail/re2c/", "title": "re2c", "text": ""}, {"location": "available_software/detail/re2c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which re2c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using re2c, load one of these modules using a module load command like:

                  module load re2c/3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty re2c/3.1-GCCcore-12.3.0 x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x re2c/2.2-GCCcore-11.3.0 x x x x x x re2c/2.2-GCCcore-11.2.0 x x x x x x re2c/2.1.1-GCCcore-10.3.0 x x x x x x re2c/2.0.3-GCCcore-10.2.0 x x x x x x re2c/1.3-GCCcore-9.3.0 - x x - x x re2c/1.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/redis-py/", "title": "redis-py", "text": ""}, {"location": "available_software/detail/redis-py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which redis-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using redis-py, load one of these modules using a module load command like:

                  module load redis-py/4.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty redis-py/4.5.1-foss-2022a x x x x x x redis-py/4.3.3-foss-2021b x x x - x x redis-py/4.3.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/regionmask/", "title": "regionmask", "text": ""}, {"location": "available_software/detail/regionmask/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which regionmask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using regionmask, load one of these modules using a module load command like:

                  module load regionmask/0.10.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty regionmask/0.10.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/request/", "title": "request", "text": ""}, {"location": "available_software/detail/request/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which request installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using request, load one of these modules using a module load command like:

                  module load request/2.88.1-fosscuda-2020b-nodejs-12.19.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty request/2.88.1-fosscuda-2020b-nodejs-12.19.0 - - - - x -"}, {"location": "available_software/detail/rethinking/", "title": "rethinking", "text": ""}, {"location": "available_software/detail/rethinking/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rethinking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rethinking, load one of these modules using a module load command like:

                  module load rethinking/2.40-20230914-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rethinking/2.40-20230914-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/rgdal/", "title": "rgdal", "text": ""}, {"location": "available_software/detail/rgdal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rgdal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rgdal, load one of these modules using a module load command like:

                  module load rgdal/1.5-23-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rgdal/1.5-23-foss-2021a-R-4.1.0 - x x - x x rgdal/1.5-23-foss-2020b-R-4.0.4 - x x x x x rgdal/1.5-16-foss-2020a-R-4.0.0 - x x - x x rgdal/1.4-8-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rgeos/", "title": "rgeos", "text": ""}, {"location": "available_software/detail/rgeos/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rgeos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rgeos, load one of these modules using a module load command like:

                  module load rgeos/0.5-5-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rgeos/0.5-5-foss-2021a-R-4.1.0 - x x - x x rgeos/0.5-5-foss-2020a-R-4.0.0 - x x - x x rgeos/0.5-2-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rickflow/", "title": "rickflow", "text": ""}, {"location": "available_software/detail/rickflow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rickflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rickflow, load one of these modules using a module load command like:

                  module load rickflow/0.7.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rickflow/0.7.0-intel-2019b-Python-3.7.4 - x x - x x rickflow/0.7.0-20200529-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rioxarray/", "title": "rioxarray", "text": ""}, {"location": "available_software/detail/rioxarray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rioxarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rioxarray, load one of these modules using a module load command like:

                  module load rioxarray/0.11.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rioxarray/0.11.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/rjags/", "title": "rjags", "text": ""}, {"location": "available_software/detail/rjags/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rjags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rjags, load one of these modules using a module load command like:

                  module load rjags/4-13-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rjags/4-13-foss-2022a-R-4.2.1 x x x x x x rjags/4-13-foss-2021b-R-4.2.0 x x x - x x rjags/4-10-foss-2020b-R-4.0.3 x x x x x x"}, {"location": "available_software/detail/rmarkdown/", "title": "rmarkdown", "text": ""}, {"location": "available_software/detail/rmarkdown/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rmarkdown installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rmarkdown, load one of these modules using a module load command like:

                  module load rmarkdown/2.20-foss-2021a-R-4.1.0\n
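                  A minimal usage sketch (an illustration, not part of the generated data), assuming the module's R dependency is loaded along with it and that a Pandoc installation is available for rendering; report.Rmd is a placeholder for your own document:

                  module load rmarkdown/2.20-foss-2021a-R-4.1.0
                  Rscript -e 'rmarkdown::render("report.Rmd")'    # renders the placeholder R Markdown file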

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rmarkdown/2.20-foss-2021a-R-4.1.0 - x x x x x"}, {"location": "available_software/detail/rpy2/", "title": "rpy2", "text": ""}, {"location": "available_software/detail/rpy2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rpy2, load one of these modules using a module load command like:

                  module load rpy2/3.5.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rpy2/3.5.10-foss-2022a x x x x x x rpy2/3.4.5-foss-2021b x x x x x x rpy2/3.4.5-foss-2021a x x x x x x rpy2/3.2.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rstanarm/", "title": "rstanarm", "text": ""}, {"location": "available_software/detail/rstanarm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rstanarm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rstanarm, load one of these modules using a module load command like:

                  module load rstanarm/2.19.3-foss-2019b-R-3.6.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rstanarm/2.19.3-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rstudio/", "title": "rstudio", "text": ""}, {"location": "available_software/detail/rstudio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rstudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rstudio, load one of these modules using a module load command like:

                  module load rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0 - x - - - -"}, {"location": "available_software/detail/ruamel.yaml/", "title": "ruamel.yaml", "text": ""}, {"location": "available_software/detail/ruamel.yaml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ruamel.yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ruamel.yaml, load one of these modules using a module load command like:

                  module load ruamel.yaml/0.17.32-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ruamel.yaml/0.17.32-GCCcore-12.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ruffus/", "title": "ruffus", "text": ""}, {"location": "available_software/detail/ruffus/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ruffus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ruffus, load one of these modules using a module load command like:

                  module load ruffus/2.8.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ruffus/2.8.4-foss-2021b x x x x x x"}, {"location": "available_software/detail/s3fs/", "title": "s3fs", "text": ""}, {"location": "available_software/detail/s3fs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which s3fs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using s3fs, load one of these modules using a module load command like:

                  module load s3fs/2023.12.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty s3fs/2023.12.2-foss-2023a x x x x x x"}, {"location": "available_software/detail/samblaster/", "title": "samblaster", "text": ""}, {"location": "available_software/detail/samblaster/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which samblaster installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using samblaster, load one of these modules using a module load command like:

                  module load samblaster/0.1.26-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty samblaster/0.1.26-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/samclip/", "title": "samclip", "text": ""}, {"location": "available_software/detail/samclip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which samclip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using samclip, load one of these modules using a module load command like:

                  module load samclip/0.4.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty samclip/0.4.0-GCCcore-11.2.0 x x x - x x samclip/0.4.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/sansa/", "title": "sansa", "text": ""}, {"location": "available_software/detail/sansa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sansa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sansa, load one of these modules using a module load command like:

                  module load sansa/0.0.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sansa/0.0.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/sbt/", "title": "sbt", "text": ""}, {"location": "available_software/detail/sbt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sbt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sbt, load one of these modules using a module load command like:

                  module load sbt/1.3.13-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sbt/1.3.13-Java-1.8 - - x - x -"}, {"location": "available_software/detail/scArches/", "title": "scArches", "text": ""}, {"location": "available_software/detail/scArches/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scArches installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scArches, load one of these modules using a module load command like:

                  module load scArches/0.5.6-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scArches/0.5.6-foss-2021a-CUDA-11.3.1 x - - - x - scArches/0.5.6-foss-2021a x x x x x x"}, {"location": "available_software/detail/scCODA/", "title": "scCODA", "text": ""}, {"location": "available_software/detail/scCODA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scCODA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scCODA, load one of these modules using a module load command like:

                  module load scCODA/0.1.9-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scCODA/0.1.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/scGeneFit/", "title": "scGeneFit", "text": ""}, {"location": "available_software/detail/scGeneFit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scGeneFit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scGeneFit, load one of these modules using a module load command like:

                  module load scGeneFit/1.0.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scGeneFit/1.0.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/scHiCExplorer/", "title": "scHiCExplorer", "text": ""}, {"location": "available_software/detail/scHiCExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scHiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scHiCExplorer, load one of these modules using a module load command like:

                  module load scHiCExplorer/7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scHiCExplorer/7-foss-2022a x x x x x x"}, {"location": "available_software/detail/scPred/", "title": "scPred", "text": ""}, {"location": "available_software/detail/scPred/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scPred installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scPred, load one of these modules using a module load command like:

                  module load scPred/1.9.2-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scPred/1.9.2-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/scVelo/", "title": "scVelo", "text": ""}, {"location": "available_software/detail/scVelo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scVelo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scVelo, load one of these modules using a module load command like:

                  module load scVelo/0.2.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scVelo/0.2.5-foss-2022a x x x x x x scVelo/0.2.3-foss-2021a - x x - x x scVelo/0.1.24-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scanpy/", "title": "scanpy", "text": ""}, {"location": "available_software/detail/scanpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scanpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scanpy, load one of these modules using a module load command like:

                  module load scanpy/1.9.8-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scanpy/1.9.8-foss-2023a x x x x x x scanpy/1.9.1-foss-2022a x x x x x x scanpy/1.9.1-foss-2021b x x x x x x scanpy/1.8.2-foss-2021b x x x x x x scanpy/1.8.1-foss-2021a x x x x x x scanpy/1.8.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/sceasy/", "title": "sceasy", "text": ""}, {"location": "available_software/detail/sceasy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sceasy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sceasy, load one of these modules using a module load command like:

                  module load sceasy/0.0.7-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sceasy/0.0.7-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/scib-metrics/", "title": "scib-metrics", "text": ""}, {"location": "available_software/detail/scib-metrics/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scib-metrics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scib-metrics, load one of these modules using a module load command like:

                  module load scib-metrics/0.3.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scib-metrics/0.3.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scib/", "title": "scib", "text": ""}, {"location": "available_software/detail/scib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scib, load one of these modules using a module load command like:

                  module load scib/1.1.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scib/1.1.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-bio/", "title": "scikit-bio", "text": ""}, {"location": "available_software/detail/scikit-bio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-bio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-bio, load one of these modules using a module load command like:

                  module load scikit-bio/0.5.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-bio/0.5.7-foss-2022a x x x x x x scikit-bio/0.5.7-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-build/", "title": "scikit-build", "text": ""}, {"location": "available_software/detail/scikit-build/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-build, load one of these modules using a module load command like:

                  module load scikit-build/0.17.6-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x scikit-build/0.17.2-GCCcore-12.2.0 x x x x x x scikit-build/0.15.0-GCCcore-11.3.0 x x x x x x scikit-build/0.11.1-fosscuda-2020b x - - - x - scikit-build/0.11.1-foss-2020b - x x x x x scikit-build/0.11.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/scikit-extremes/", "title": "scikit-extremes", "text": ""}, {"location": "available_software/detail/scikit-extremes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-extremes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-extremes, load one of these modules using a module load command like:

                  module load scikit-extremes/2022.4.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-extremes/2022.4.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/scikit-image/", "title": "scikit-image", "text": ""}, {"location": "available_software/detail/scikit-image/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-image installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-image, load one of these modules using a module load command like:

                  module load scikit-image/0.19.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-image/0.19.3-foss-2022a x x x x x x scikit-image/0.19.1-foss-2021b x x x x x x scikit-image/0.18.3-foss-2021a x x x - x x scikit-image/0.18.1-fosscuda-2020b x - - - x - scikit-image/0.18.1-foss-2020b - x x x x x scikit-image/0.17.1-foss-2020a-Python-3.8.2 - x x - x x scikit-image/0.16.2-intel-2019b-Python-3.7.4 - x x - x x scikit-image/0.16.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scikit-learn/", "title": "scikit-learn", "text": ""}, {"location": "available_software/detail/scikit-learn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-learn, load one of these modules using a module load command like:

                  module load scikit-learn/1.4.0-gfbf-2023b\n
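                  A quick way to check that the loaded module works (a sketch, assuming the module also brings the matching Python interpreter, as EasyBuild-generated modules typically do):

                  module load scikit-learn/1.4.0-gfbf-2023b
                  python -c 'import sklearn; print(sklearn.__version__)'    # should print the loaded version, e.g. 1.4.0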

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-learn/1.4.0-gfbf-2023b x x x x x x scikit-learn/1.3.2-gfbf-2023b x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x scikit-learn/1.2.1-gfbf-2022b x x x x x x scikit-learn/1.1.2-intel-2022a x x x x x x scikit-learn/1.1.2-foss-2022a x x x x x x scikit-learn/1.0.1-intel-2021b x x x - x x scikit-learn/1.0.1-foss-2021b x x x x x x scikit-learn/0.24.2-foss-2021a x x x x x x scikit-learn/0.23.2-intel-2020b - x x - x x scikit-learn/0.23.2-fosscuda-2020b x - - - x - scikit-learn/0.23.2-foss-2020b - x x x x x scikit-learn/0.23.1-intel-2020a-Python-3.8.2 x x x x x x scikit-learn/0.23.1-foss-2020a-Python-3.8.2 - x x - x x scikit-learn/0.21.3-intel-2019b-Python-3.7.4 - x x - x x scikit-learn/0.21.3-foss-2019b-Python-3.7.4 x x x - x x scikit-learn/0.20.4-intel-2019b-Python-2.7.16 - x x - x x scikit-learn/0.20.4-foss-2021b-Python-2.7.18 x x x x x x scikit-learn/0.20.4-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/scikit-misc/", "title": "scikit-misc", "text": ""}, {"location": "available_software/detail/scikit-misc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-misc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-misc, load one of these modules using a module load command like:

                  module load scikit-misc/0.1.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-misc/0.1.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-optimize/", "title": "scikit-optimize", "text": ""}, {"location": "available_software/detail/scikit-optimize/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-optimize installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-optimize, load one of these modules using a module load command like:

                  module load scikit-optimize/0.9.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-optimize/0.9.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/scipy/", "title": "scipy", "text": ""}, {"location": "available_software/detail/scipy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scipy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scipy, load one of these modules using a module load command like:

                  module load scipy/1.4.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scipy/1.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scrublet/", "title": "scrublet", "text": ""}, {"location": "available_software/detail/scrublet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scrublet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scrublet, load one of these modules using a module load command like:

                  module load scrublet/0.2.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scrublet/0.2.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/scvi-tools/", "title": "scvi-tools", "text": ""}, {"location": "available_software/detail/scvi-tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scvi-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scvi-tools, load one of these modules using a module load command like:

                  module load scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1 x - - - x - scvi-tools/0.16.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/segemehl/", "title": "segemehl", "text": ""}, {"location": "available_software/detail/segemehl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which segemehl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using segemehl, load one of these modules using a module load command like:

                  module load segemehl/0.3.4-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty segemehl/0.3.4-GCC-11.2.0 x x x x x x segemehl/0.3.4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/segmentation-models/", "title": "segmentation-models", "text": ""}, {"location": "available_software/detail/segmentation-models/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which segmentation-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using segmentation-models, load one of these modules using a module load command like:

                  module load segmentation-models/1.0.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty segmentation-models/1.0.1-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/semla/", "title": "semla", "text": ""}, {"location": "available_software/detail/semla/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which semla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using semla, load one of these modules using a module load command like:

                  module load semla/1.1.6-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty semla/1.1.6-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/seqtk/", "title": "seqtk", "text": ""}, {"location": "available_software/detail/seqtk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which seqtk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using seqtk, load one of these modules using a module load command like:

                  module load seqtk/1.4-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty seqtk/1.4-GCC-12.3.0 x x x x x x seqtk/1.3-GCC-11.2.0 x x x - x x seqtk/1.3-GCC-10.2.0 - x x x x x seqtk/1.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/setuptools-rust/", "title": "setuptools-rust", "text": ""}, {"location": "available_software/detail/setuptools-rust/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using setuptools-rust, load one of these modules using a module load command like:

                  module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/setuptools/", "title": "setuptools", "text": ""}, {"location": "available_software/detail/setuptools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which setuptools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using setuptools, load one of these modules using a module load command like:

                  module load setuptools/64.0.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty setuptools/64.0.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/sf/", "title": "sf", "text": ""}, {"location": "available_software/detail/sf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sf, load one of these modules using a module load command like:

                  module load sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/shovill/", "title": "shovill", "text": ""}, {"location": "available_software/detail/shovill/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which shovill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using shovill, load one of these modules using a module load command like:

                  module load shovill/1.1.0-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty shovill/1.1.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/silhouetteRank/", "title": "silhouetteRank", "text": ""}, {"location": "available_software/detail/silhouetteRank/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which silhouetteRank installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using silhouetteRank, load one of these modules using a module load command like:

                  module load silhouetteRank/1.0.5.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty silhouetteRank/1.0.5.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/silx/", "title": "silx", "text": ""}, {"location": "available_software/detail/silx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which silx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using silx, load one of these modules using a module load command like:

                  module load silx/0.14.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty silx/0.14.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/slepc4py/", "title": "slepc4py", "text": ""}, {"location": "available_software/detail/slepc4py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which slepc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using slepc4py, load one of these modules using a module load command like:

                  module load slepc4py/3.17.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slepc4py/3.17.2-foss-2022a x x x x x x slepc4py/3.15.1-foss-2021a - x x - x x slepc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/slow5tools/", "title": "slow5tools", "text": ""}, {"location": "available_software/detail/slow5tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which slow5tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using slow5tools, load one of these modules using a module load command like:

                  module load slow5tools/0.4.0-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slow5tools/0.4.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/slurm-drmaa/", "title": "slurm-drmaa", "text": ""}, {"location": "available_software/detail/slurm-drmaa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which slurm-drmaa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using slurm-drmaa, load one of these modules using a module load command like:

                  module load slurm-drmaa/1.1.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slurm-drmaa/1.1.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/smfishHmrf/", "title": "smfishHmrf", "text": ""}, {"location": "available_software/detail/smfishHmrf/#available-modules", "title": "Available modules", "text": "

The overview below shows which smfishHmrf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smfishHmrf, load one of these modules using a module load command like:

                  module load smfishHmrf/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smfishHmrf/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/smithwaterman/", "title": "smithwaterman", "text": ""}, {"location": "available_software/detail/smithwaterman/#available-modules", "title": "Available modules", "text": "

The overview below shows which smithwaterman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smithwaterman, load one of these modules using a module load command like:

                  module load smithwaterman/20160702-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smithwaterman/20160702-GCCcore-11.3.0 x x x x x x smithwaterman/20160702-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/smooth-topk/", "title": "smooth-topk", "text": ""}, {"location": "available_software/detail/smooth-topk/#available-modules", "title": "Available modules", "text": "

The overview below shows which smooth-topk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smooth-topk, load one of these modules using a module load command like:

                  module load smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1 x - - - x - smooth-topk/1.0-20210817-foss-2021a - x x - x x"}, {"location": "available_software/detail/snakemake/", "title": "snakemake", "text": ""}, {"location": "available_software/detail/snakemake/#available-modules", "title": "Available modules", "text": "

The overview below shows which snakemake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snakemake, load one of these modules using a module load command like:

                  module load snakemake/8.4.2-foss-2023a\n
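
As a minimal sketch (assuming a Snakefile in the current working directory; the core count is illustrative), running a workflow after loading the module could look like:

module load snakemake/8.4.2-foss-2023a
snakemake --cores 4    # execute the workflow defined in ./Snakefile using 4 cores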

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snakemake/8.4.2-foss-2023a x x x x x x snakemake/7.32.3-foss-2022b x x x x x x snakemake/7.22.0-foss-2022a x x x x x x snakemake/7.18.2-foss-2021b x x x - x x snakemake/6.10.0-foss-2021b x x x - x x snakemake/6.1.0-foss-2020b - x x x x x snakemake/5.26.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/snappy/", "title": "snappy", "text": ""}, {"location": "available_software/detail/snappy/#available-modules", "title": "Available modules", "text": "

The overview below shows which snappy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snappy, load one of these modules using a module load command like:

                  module load snappy/1.1.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snappy/1.1.10-GCCcore-12.3.0 x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x snappy/1.1.9-GCCcore-11.3.0 x x x x x x snappy/1.1.9-GCCcore-11.2.0 x x x x x x snappy/1.1.8-GCCcore-10.3.0 x x x x x x snappy/1.1.8-GCCcore-10.2.0 x x x x x x snappy/1.1.8-GCCcore-9.3.0 - x x - x x snappy/1.1.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/snippy/", "title": "snippy", "text": ""}, {"location": "available_software/detail/snippy/#available-modules", "title": "Available modules", "text": "

The overview below shows which snippy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snippy, load one of these modules using a module load command like:

                  module load snippy/4.6.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snippy/4.6.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/snp-sites/", "title": "snp-sites", "text": ""}, {"location": "available_software/detail/snp-sites/#available-modules", "title": "Available modules", "text": "

The overview below shows which snp-sites installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snp-sites, load one of these modules using a module load command like:

                  module load snp-sites/2.5.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snp-sites/2.5.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/snpEff/", "title": "snpEff", "text": ""}, {"location": "available_software/detail/snpEff/#available-modules", "title": "Available modules", "text": "

The overview below shows which snpEff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snpEff, load one of these modules using a module load command like:

                  module load snpEff/5.0e-GCCcore-10.2.0-Java-13\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snpEff/5.0e-GCCcore-10.2.0-Java-13 - x x - x x"}, {"location": "available_software/detail/solo/", "title": "solo", "text": ""}, {"location": "available_software/detail/solo/#available-modules", "title": "Available modules", "text": "

The overview below shows which solo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using solo, load one of these modules using a module load command like:

                  module load solo/1.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty solo/1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/sonic/", "title": "sonic", "text": ""}, {"location": "available_software/detail/sonic/#available-modules", "title": "Available modules", "text": "

The overview below shows which sonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sonic, load one of these modules using a module load command like:

                  module load sonic/20180202-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sonic/20180202-gompi-2020a - x x - x x"}, {"location": "available_software/detail/spaCy/", "title": "spaCy", "text": ""}, {"location": "available_software/detail/spaCy/#available-modules", "title": "Available modules", "text": "

The overview below shows which spaCy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spaCy, load one of these modules using a module load command like:

                  module load spaCy/3.4.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spaCy/3.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/spaln/", "title": "spaln", "text": ""}, {"location": "available_software/detail/spaln/#available-modules", "title": "Available modules", "text": "

The overview below shows which spaln installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spaln, load one of these modules using a module load command like:

                  module load spaln/2.4.13f-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spaln/2.4.13f-GCC-11.3.0 x x x x x x spaln/2.4.12-GCC-11.2.0 x x x x x x spaln/2.4.12-GCC-10.2.0 x x x x x x spaln/2.4.03-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/sparse-neighbors-search/", "title": "sparse-neighbors-search", "text": ""}, {"location": "available_software/detail/sparse-neighbors-search/#available-modules", "title": "Available modules", "text": "

The overview below shows which sparse-neighbors-search installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sparse-neighbors-search, load one of these modules using a module load command like:

                  module load sparse-neighbors-search/0.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sparse-neighbors-search/0.7-foss-2022a x x x x x x"}, {"location": "available_software/detail/sparsehash/", "title": "sparsehash", "text": ""}, {"location": "available_software/detail/sparsehash/#available-modules", "title": "Available modules", "text": "

The overview below shows which sparsehash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sparsehash, load one of these modules using a module load command like:

                  module load sparsehash/2.0.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sparsehash/2.0.4-GCCcore-12.3.0 x x x x x x sparsehash/2.0.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/spatialreg/", "title": "spatialreg", "text": ""}, {"location": "available_software/detail/spatialreg/#available-modules", "title": "Available modules", "text": "

The overview below shows which spatialreg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spatialreg, load one of these modules using a module load command like:

                  module load spatialreg/1.1-8-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spatialreg/1.1-8-foss-2021a-R-4.1.0 - x x - x x spatialreg/1.1-5-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/speech_tools/", "title": "speech_tools", "text": ""}, {"location": "available_software/detail/speech_tools/#available-modules", "title": "Available modules", "text": "

The overview below shows which speech_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using speech_tools, load one of these modules using a module load command like:

                  module load speech_tools/2.5.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty speech_tools/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/spglib-python/", "title": "spglib-python", "text": ""}, {"location": "available_software/detail/spglib-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which spglib-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spglib-python, load one of these modules using a module load command like:

                  module load spglib-python/2.0.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spglib-python/2.0.0-intel-2022a x x x x x x spglib-python/2.0.0-foss-2022a x x x x x x spglib-python/1.16.3-intel-2021b x x x - x x spglib-python/1.16.3-foss-2021b x x x - x x spglib-python/1.16.1-gomkl-2021a x x x x x x spglib-python/1.16.0-intel-2020a-Python-3.8.2 x x x x x x spglib-python/1.16.0-fosscuda-2020b - - - - x - spglib-python/1.16.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/spoa/", "title": "spoa", "text": ""}, {"location": "available_software/detail/spoa/#available-modules", "title": "Available modules", "text": "

The overview below shows which spoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spoa, load one of these modules using a module load command like:

                  module load spoa/4.0.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spoa/4.0.7-GCC-11.3.0 x x x x x x spoa/4.0.7-GCC-11.2.0 x x x - x x spoa/4.0.7-GCC-10.3.0 x x x - x x spoa/4.0.7-GCC-10.2.0 - x x x x x spoa/4.0.0-GCC-8.3.0 - x x - x x spoa/3.4.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/stardist/", "title": "stardist", "text": ""}, {"location": "available_software/detail/stardist/#available-modules", "title": "Available modules", "text": "

The overview below shows which stardist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using stardist, load one of these modules using a module load command like:

                  module load stardist/0.8.3-foss-2021b-CUDA-11.4.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty stardist/0.8.3-foss-2021b-CUDA-11.4.1 x - - - x - stardist/0.8.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/stars/", "title": "stars", "text": ""}, {"location": "available_software/detail/stars/#available-modules", "title": "Available modules", "text": "

The overview below shows which stars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using stars, load one of these modules using a module load command like:

                  module load stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/statsmodels/", "title": "statsmodels", "text": ""}, {"location": "available_software/detail/statsmodels/#available-modules", "title": "Available modules", "text": "

The overview below shows which statsmodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using statsmodels, load one of these modules using a module load command like:

                  module load statsmodels/0.14.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty statsmodels/0.14.1-gfbf-2023a x x x x x x statsmodels/0.14.0-gfbf-2022b x x x x x x statsmodels/0.13.1-intel-2021b x x x - x x statsmodels/0.13.1-foss-2022a x x x x x x statsmodels/0.13.1-foss-2021b x x x x x x statsmodels/0.12.2-foss-2021a x x x x x x statsmodels/0.12.1-intel-2020b - x x - x x statsmodels/0.12.1-fosscuda-2020b - - - - x - statsmodels/0.12.1-foss-2020b - x x x x x statsmodels/0.11.1-intel-2020a-Python-3.8.2 - x x - x x statsmodels/0.11.0-intel-2019b-Python-3.7.4 - x x - x x statsmodels/0.11.0-foss-2019b-Python-3.7.4 - x x - x x statsmodels/0.9.0-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/suave/", "title": "suave", "text": ""}, {"location": "available_software/detail/suave/#available-modules", "title": "Available modules", "text": "

The overview below shows which suave installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using suave, load one of these modules using a module load command like:

                  module load suave/20160529-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty suave/20160529-foss-2020b - x x x x x"}, {"location": "available_software/detail/supernova/", "title": "supernova", "text": ""}, {"location": "available_software/detail/supernova/#available-modules", "title": "Available modules", "text": "

The overview below shows which supernova installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using supernova, load one of these modules using a module load command like:

                  module load supernova/2.0.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty supernova/2.0.1 - - - - - x"}, {"location": "available_software/detail/swissknife/", "title": "swissknife", "text": ""}, {"location": "available_software/detail/swissknife/#available-modules", "title": "Available modules", "text": "

The overview below shows which swissknife installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using swissknife, load one of these modules using a module load command like:

                  module load swissknife/1.80-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty swissknife/1.80-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/sympy/", "title": "sympy", "text": ""}, {"location": "available_software/detail/sympy/#available-modules", "title": "Available modules", "text": "

The overview below shows which sympy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sympy, load one of these modules using a module load command like:

                  module load sympy/1.12-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sympy/1.12-gfbf-2023a x x x x x x sympy/1.12-gfbf-2022b x x x x x x sympy/1.11.1-intel-2022a x x x x x x sympy/1.11.1-foss-2022a x x x - x x sympy/1.10.1-intel-2022a x x x x x x sympy/1.10.1-foss-2022a x x x - x x sympy/1.9-intel-2021b x x x x x x sympy/1.9-foss-2021b x x x - x x sympy/1.7.1-foss-2020b - x x x x x sympy/1.6.2-foss-2020a-Python-3.8.2 - x x - x x sympy/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/synapseclient/", "title": "synapseclient", "text": ""}, {"location": "available_software/detail/synapseclient/#available-modules", "title": "Available modules", "text": "

The overview below shows which synapseclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using synapseclient, load one of these modules using a module load command like:

                  module load synapseclient/3.0.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty synapseclient/3.0.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/synthcity/", "title": "synthcity", "text": ""}, {"location": "available_software/detail/synthcity/#available-modules", "title": "Available modules", "text": "

The overview below shows which synthcity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using synthcity, load one of these modules using a module load command like:

                  module load synthcity/0.2.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty synthcity/0.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/tMAE/", "title": "tMAE", "text": ""}, {"location": "available_software/detail/tMAE/#available-modules", "title": "Available modules", "text": "

The overview below shows which tMAE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tMAE, load one of these modules using a module load command like:

                  module load tMAE/1.0.0-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tMAE/1.0.0-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/tabixpp/", "title": "tabixpp", "text": ""}, {"location": "available_software/detail/tabixpp/#available-modules", "title": "Available modules", "text": "

The overview below shows which tabixpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tabixpp, load one of these modules using a module load command like:

                  module load tabixpp/1.1.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tabixpp/1.1.2-GCC-11.3.0 x x x x x x tabixpp/1.1.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/task-spooler/", "title": "task-spooler", "text": ""}, {"location": "available_software/detail/task-spooler/#available-modules", "title": "Available modules", "text": "

The overview below shows which task-spooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using task-spooler, load one of these modules using a module load command like:

                  module load task-spooler/1.0.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty task-spooler/1.0.2-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/taxator-tk/", "title": "taxator-tk", "text": ""}, {"location": "available_software/detail/taxator-tk/#available-modules", "title": "Available modules", "text": "

The overview below shows which taxator-tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using taxator-tk, load one of these modules using a module load command like:

                  module load taxator-tk/1.3.3-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty taxator-tk/1.3.3-gompi-2020b - x - - - - taxator-tk/1.3.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/tbb/", "title": "tbb", "text": ""}, {"location": "available_software/detail/tbb/#available-modules", "title": "Available modules", "text": "

The overview below shows which tbb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tbb, load one of these modules using a module load command like:

                  module load tbb/2021.5.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tbb/2021.5.0-GCCcore-11.3.0 x x x x x x tbb/2020.3-GCCcore-11.2.0 x x x x x x tbb/2020.3-GCCcore-10.3.0 - x x - x x tbb/2020.3-GCCcore-10.2.0 - x x x x x tbb/2020.1-GCCcore-9.3.0 - x x - x x tbb/2019_U9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tbl2asn/", "title": "tbl2asn", "text": ""}, {"location": "available_software/detail/tbl2asn/#available-modules", "title": "Available modules", "text": "

The overview below shows which tbl2asn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tbl2asn, load one of these modules using a module load command like:

                  module load tbl2asn/20220427-linux64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tbl2asn/20220427-linux64 - x x x x x tbl2asn/25.8-linux64 - - - - - x"}, {"location": "available_software/detail/tcsh/", "title": "tcsh", "text": ""}, {"location": "available_software/detail/tcsh/#available-modules", "title": "Available modules", "text": "

The overview below shows which tcsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tcsh, load one of these modules using a module load command like:

                  module load tcsh/6.24.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tcsh/6.24.10-GCCcore-12.3.0 x x x x x x tcsh/6.22.04-GCCcore-10.3.0 x - - - x - tcsh/6.22.03-GCCcore-10.2.0 - x x x x x tcsh/6.22.02-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tensorboard/", "title": "tensorboard", "text": ""}, {"location": "available_software/detail/tensorboard/#available-modules", "title": "Available modules", "text": "

The overview below shows which tensorboard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorboard, load one of these modules using a module load command like:

                  module load tensorboard/2.10.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorboard/2.10.0-foss-2022a x x x x x x tensorboard/2.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/tensorboardX/", "title": "tensorboardX", "text": ""}, {"location": "available_software/detail/tensorboardX/#available-modules", "title": "Available modules", "text": "

The overview below shows which tensorboardX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorboardX, load one of these modules using a module load command like:

                  module load tensorboardX/2.6.2.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorboardX/2.6.2.2-foss-2023a x x x x x x tensorboardX/2.6.2.2-foss-2022b x x x x x x tensorboardX/2.5.1-foss-2022a x x x x x x tensorboardX/2.2-fosscuda-2020b-PyTorch-1.7.1 - - - - x - tensorboardX/2.2-foss-2020b-PyTorch-1.7.1 - x x x x x tensorboardX/2.1-fosscuda-2020b-PyTorch-1.7.1 - - - - x -"}, {"location": "available_software/detail/tensorflow-probability/", "title": "tensorflow-probability", "text": ""}, {"location": "available_software/detail/tensorflow-probability/#available-modules", "title": "Available modules", "text": "

The overview below shows which tensorflow-probability installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorflow-probability, load one of these modules using a module load command like:

                  module load tensorflow-probability/0.19.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorflow-probability/0.19.0-foss-2022a x x x x x x tensorflow-probability/0.14.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/texinfo/", "title": "texinfo", "text": ""}, {"location": "available_software/detail/texinfo/#available-modules", "title": "Available modules", "text": "

The overview below shows which texinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using texinfo, load one of these modules using a module load command like:

                  module load texinfo/6.7-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty texinfo/6.7-GCCcore-9.3.0 - x x - x x texinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/texlive/", "title": "texlive", "text": ""}, {"location": "available_software/detail/texlive/#available-modules", "title": "Available modules", "text": "

The overview below shows which texlive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using texlive, load one of these modules using a module load command like:

                  module load texlive/20230313-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty texlive/20230313-GCC-12.3.0 x x x x x x texlive/20210324-GCC-11.2.0 - x x - x x"}, {"location": "available_software/detail/tidymodels/", "title": "tidymodels", "text": ""}, {"location": "available_software/detail/tidymodels/#available-modules", "title": "Available modules", "text": "

The overview below shows which tidymodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tidymodels, load one of these modules using a module load command like:

                  module load tidymodels/1.1.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tidymodels/1.1.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/time/", "title": "time", "text": ""}, {"location": "available_software/detail/time/#available-modules", "title": "Available modules", "text": "

The overview below shows which time installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using time, load one of these modules using a module load command like:

                  module load time/1.9-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty time/1.9-GCCcore-10.2.0 - x x x x x time/1.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/timm/", "title": "timm", "text": ""}, {"location": "available_software/detail/timm/#available-modules", "title": "Available modules", "text": "

The overview below shows which timm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using timm, load one of these modules using a module load command like:

                  module load timm/0.9.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty timm/0.9.2-foss-2022a-CUDA-11.7.0 x - - - x - timm/0.6.13-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/tmux/", "title": "tmux", "text": ""}, {"location": "available_software/detail/tmux/#available-modules", "title": "Available modules", "text": "

The overview below shows which tmux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tmux, load one of these modules using a module load command like:

                  module load tmux/3.2a\n
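
As a minimal sketch (the session name is illustrative), a named session can be started and later re-attached to from the same node:

module load tmux/3.2a
tmux new -s mysession        # start a new session named "mysession"
tmux attach -t mysession     # re-attach to it later on the same node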

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tmux/3.2a - x x - x x"}, {"location": "available_software/detail/tokenizers/", "title": "tokenizers", "text": ""}, {"location": "available_software/detail/tokenizers/#available-modules", "title": "Available modules", "text": "

The overview below shows which tokenizers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tokenizers, load one of these modules using a module load command like:

                  module load tokenizers/0.13.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tokenizers/0.13.3-GCCcore-12.2.0 x x x x x x tokenizers/0.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/torchaudio/", "title": "torchaudio", "text": ""}, {"location": "available_software/detail/torchaudio/#available-modules", "title": "Available modules", "text": "

The overview below shows which torchaudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchaudio, load one of these modules using a module load command like:

                  module load torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0 x - x - x - torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchtext/", "title": "torchtext", "text": ""}, {"location": "available_software/detail/torchtext/#available-modules", "title": "Available modules", "text": "

The overview below shows which torchtext installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchtext, load one of these modules using a module load command like:

                  module load torchtext/0.14.1-foss-2022a-PyTorch-1.12.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchtext/0.14.1-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchvf/", "title": "torchvf", "text": ""}, {"location": "available_software/detail/torchvf/#available-modules", "title": "Available modules", "text": "

The overview below shows which torchvf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchvf, load one of these modules using a module load command like:

                  module load torchvf/0.1.3-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchvf/0.1.3-foss-2022a-CUDA-11.7.0 x - - - x - torchvf/0.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/torchvision/", "title": "torchvision", "text": ""}, {"location": "available_software/detail/torchvision/#available-modules", "title": "Available modules", "text": "

The overview below shows which torchvision installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchvision, load one of these modules using a module load command like:

                  module load torchvision/0.14.1-foss-2022b\n
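
As a minimal sketch (assuming the module pulls in its Python dependency, as EasyBuild-built modules typically do), a quick smoke test after loading could be:

module load torchvision/0.14.1-foss-2022b
python -c "import torchvision; print(torchvision.__version__)"    # verify the installation imports and report its version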

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchvision/0.14.1-foss-2022b x x x x x x torchvision/0.13.1-foss-2022a-CUDA-11.7.0 x - x - x - torchvision/0.13.1-foss-2022a x x x x x x torchvision/0.11.3-foss-2021a - x x - x x torchvision/0.11.1-foss-2021a-CUDA-11.3.1 x - - - x - torchvision/0.11.1-foss-2021a - x x - x x torchvision/0.8.2-fosscuda-2020b-PyTorch-1.7.1 x - - - x - torchvision/0.8.2-foss-2020b-PyTorch-1.7.1 - x x x x x torchvision/0.7.0-foss-2019b-Python-3.7.4-PyTorch-1.6.0 - - x - x x"}, {"location": "available_software/detail/tornado/", "title": "tornado", "text": ""}, {"location": "available_software/detail/tornado/#available-modules", "title": "Available modules", "text": "

The overview below shows which tornado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tornado, load one of these modules using a module load command like:

                  module load tornado/6.3.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tornado/6.3.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/tqdm/", "title": "tqdm", "text": ""}, {"location": "available_software/detail/tqdm/#available-modules", "title": "Available modules", "text": "

The overview below shows which tqdm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tqdm, load one of these modules using a module load command like:

                  module load tqdm/4.66.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tqdm/4.66.1-GCCcore-12.3.0 x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x tqdm/4.64.0-GCCcore-11.3.0 x x x x x x tqdm/4.62.3-GCCcore-11.2.0 x x x x x x tqdm/4.61.2-GCCcore-10.3.0 x x x x x x tqdm/4.60.0-GCCcore-10.2.0 - x x - x x tqdm/4.56.2-GCCcore-10.2.0 x x x x x x tqdm/4.47.0-GCCcore-9.3.0 x x x x x x tqdm/4.41.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/treatSens/", "title": "treatSens", "text": ""}, {"location": "available_software/detail/treatSens/#available-modules", "title": "Available modules", "text": "

The overview below shows which treatSens installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using treatSens, load one of these modules using a module load command like:

                  module load treatSens/3.0-20201002-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty treatSens/3.0-20201002-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/trimAl/", "title": "trimAl", "text": ""}, {"location": "available_software/detail/trimAl/#available-modules", "title": "Available modules", "text": "

The overview below shows which trimAl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using trimAl, load one of these modules using a module load command like:

                  module load trimAl/1.4.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty trimAl/1.4.1-GCCcore-12.3.0 x x x x x x trimAl/1.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/tsne/", "title": "tsne", "text": ""}, {"location": "available_software/detail/tsne/#available-modules", "title": "Available modules", "text": "

The overview below shows which tsne installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tsne, load one of these modules using a module load command like:

                  module load tsne/0.1.8-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tsne/0.1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/typing-extensions/", "title": "typing-extensions", "text": ""}, {"location": "available_software/detail/typing-extensions/#available-modules", "title": "Available modules", "text": "

The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using typing-extensions, load one of these modules using a module load command like:

                  module load typing-extensions/4.9.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.9.0-GCCcore-12.2.0 x x x x x x typing-extensions/4.8.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.3.0-GCCcore-11.3.0 x x x x x x typing-extensions/3.10.0.2-GCCcore-11.2.0 x x x x x x typing-extensions/3.10.0.0-GCCcore-10.3.0 x x x x x x typing-extensions/3.7.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/umap-learn/", "title": "umap-learn", "text": ""}, {"location": "available_software/detail/umap-learn/#available-modules", "title": "Available modules", "text": "

The overview below shows which umap-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using umap-learn, load one of these modules using a module load command like:

                  module load umap-learn/0.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty umap-learn/0.5.5-foss-2023a x x x x x x umap-learn/0.5.3-foss-2022a x x x x x x umap-learn/0.5.3-foss-2021a x x x x x x umap-learn/0.4.6-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/umi4cPackage/", "title": "umi4cPackage", "text": ""}, {"location": "available_software/detail/umi4cPackage/#available-modules", "title": "Available modules", "text": "

The overview below shows which umi4cPackage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using umi4cPackage, load one of these modules using a module load command like:

                  module load umi4cPackage/20200116-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty umi4cPackage/20200116-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/uncertainties/", "title": "uncertainties", "text": ""}, {"location": "available_software/detail/uncertainties/#available-modules", "title": "Available modules", "text": "

The overview below shows which uncertainties installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using uncertainties, load one of these modules using a module load command like:

                  module load uncertainties/3.1.7-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty uncertainties/3.1.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/uncertainty-calibration/", "title": "uncertainty-calibration", "text": ""}, {"location": "available_software/detail/uncertainty-calibration/#available-modules", "title": "Available modules", "text": "

The overview below shows which uncertainty-calibration installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using uncertainty-calibration, load one of these modules using a module load command like:

                  module load uncertainty-calibration/0.0.9-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty uncertainty-calibration/0.0.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/unimap/", "title": "unimap", "text": ""}, {"location": "available_software/detail/unimap/#available-modules", "title": "Available modules", "text": "

The overview below shows which unimap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using unimap, load one of these modules using a module load command like:

                  module load unimap/0.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty unimap/0.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/unixODBC/", "title": "unixODBC", "text": ""}, {"location": "available_software/detail/unixODBC/#available-modules", "title": "Available modules", "text": "

The overview below shows which unixODBC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using unixODBC, load one of these modules using a module load command like:

                  module load unixODBC/2.3.11-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty unixODBC/2.3.11-foss-2022b x x x x x x"}, {"location": "available_software/detail/utf8proc/", "title": "utf8proc", "text": ""}, {"location": "available_software/detail/utf8proc/#available-modules", "title": "Available modules", "text": "

The overview below shows which utf8proc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using utf8proc, load one of these modules using a module load command like:

                  module load utf8proc/2.8.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x utf8proc/2.7.0-GCCcore-11.3.0 x x x x x x utf8proc/2.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/util-linux/", "title": "util-linux", "text": ""}, {"location": "available_software/detail/util-linux/#available-modules", "title": "Available modules", "text": "

The overview below shows which util-linux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using util-linux, load one of these modules using a module load command like:

                  module load util-linux/2.39-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty util-linux/2.39-GCCcore-12.3.0 x x x x x x util-linux/2.38.1-GCCcore-12.2.0 x x x x x x util-linux/2.38-GCCcore-11.3.0 x x x x x x util-linux/2.37-GCCcore-11.2.0 x x x x x x util-linux/2.36-GCCcore-10.3.0 x x x x x x util-linux/2.36-GCCcore-10.2.0 x x x x x x util-linux/2.35-GCCcore-9.3.0 x x x x x x util-linux/2.34-GCCcore-8.3.0 x x x - x x util-linux/2.33-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/vConTACT2/", "title": "vConTACT2", "text": ""}, {"location": "available_software/detail/vConTACT2/#available-modules", "title": "Available modules", "text": "

The overview below shows which vConTACT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vConTACT2, load one of these modules using a module load command like:

                  module load vConTACT2/0.11.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vConTACT2/0.11.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/vaeda/", "title": "vaeda", "text": ""}, {"location": "available_software/detail/vaeda/#available-modules", "title": "Available modules", "text": "

The overview below shows which vaeda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vaeda, load one of these modules using a module load command like:

                  module load vaeda/0.0.30-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vaeda/0.0.30-foss-2022a x x x x x x"}, {"location": "available_software/detail/vbz_compression/", "title": "vbz_compression", "text": ""}, {"location": "available_software/detail/vbz_compression/#available-modules", "title": "Available modules", "text": "

The overview below shows which vbz_compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vbz_compression, load one of these modules using a module load command like:

                  module load vbz_compression/1.0.1-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vbz_compression/1.0.1-gompi-2020b - x - - - -"}, {"location": "available_software/detail/vcflib/", "title": "vcflib", "text": ""}, {"location": "available_software/detail/vcflib/#available-modules", "title": "Available modules", "text": "

The overview below shows which vcflib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vcflib, load one of these modules using a module load command like:

                  module load vcflib/1.0.9-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vcflib/1.0.9-foss-2022a-R-4.2.1 x x x x x x vcflib/1.0.2-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/velocyto/", "title": "velocyto", "text": ""}, {"location": "available_software/detail/velocyto/#available-modules", "title": "Available modules", "text": "

The overview below shows which velocyto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using velocyto, load one of these modules using a module load command like:

                  module load velocyto/0.17.17-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty velocyto/0.17.17-intel-2020a-Python-3.8.2 - x x - x x velocyto/0.17.17-foss-2022a x x x x x x"}, {"location": "available_software/detail/virtualenv/", "title": "virtualenv", "text": ""}, {"location": "available_software/detail/virtualenv/#available-modules", "title": "Available modules", "text": "

The overview below shows which virtualenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using virtualenv, load one of these modules using a module load command like:

                  module load virtualenv/20.24.6-GCCcore-13.2.0\n
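
As a minimal sketch (the environment directory name is illustrative), creating and activating an isolated Python environment could look like:

module load virtualenv/20.24.6-GCCcore-13.2.0
virtualenv myenv              # create the environment in ./myenv
source myenv/bin/activate     # activate it; run 'deactivate' to leave it again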

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/vispr/", "title": "vispr", "text": ""}, {"location": "available_software/detail/vispr/#available-modules", "title": "Available modules", "text": "

The overview below shows which vispr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vispr, load one of these modules using a module load command like:

                  module load vispr/0.4.14-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vispr/0.4.14-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessce-python/", "title": "vitessce-python", "text": ""}, {"location": "available_software/detail/vitessce-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which vitessce-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vitessce-python, load one of these modules using a module load command like:

                  module load vitessce-python/20230222-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vitessce-python/20230222-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessceR/", "title": "vitessceR", "text": ""}, {"location": "available_software/detail/vitessceR/#available-modules", "title": "Available modules", "text": "

The overview below shows which vitessceR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vitessceR, load one of these modules using a module load command like:

                  module load vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/vsc-mympirun/", "title": "vsc-mympirun", "text": ""}, {"location": "available_software/detail/vsc-mympirun/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vsc-mympirun installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using vsc-mympirun, load one of these modules using a module load command like:

                  module load vsc-mympirun/5.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vsc-mympirun/5.3.1 x x x x x x vsc-mympirun/5.3.0 x x x x x x vsc-mympirun/5.2.11 x x x x x x vsc-mympirun/5.2.10 x x x - x x vsc-mympirun/5.2.9 x x x - x x vsc-mympirun/5.2.7 x x x - x x vsc-mympirun/5.2.6 x x x - x x vsc-mympirun/5.2.5 - x - - - - vsc-mympirun/5.2.4 - x - - - - vsc-mympirun/5.2.3 - x - - - - vsc-mympirun/5.2.2 - x - - - - vsc-mympirun/5.2.0 - x - - - - vsc-mympirun/5.1.0 - x - - - -"}, {"location": "available_software/detail/vt/", "title": "vt", "text": ""}, {"location": "available_software/detail/vt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using vt, load one of these modules using a module load command like:

                  module load vt/0.57721-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vt/0.57721-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/wandb/", "title": "wandb", "text": ""}, {"location": "available_software/detail/wandb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wandb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wandb, load one of these modules using a module load command like:

                  module load wandb/0.13.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wandb/0.13.6-GCC-11.3.0 x x x - x x wandb/0.13.4-GCCcore-11.3.0 - - x - x -"}, {"location": "available_software/detail/waves2Foam/", "title": "waves2Foam", "text": ""}, {"location": "available_software/detail/waves2Foam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which waves2Foam installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using waves2Foam, load one of these modules using a module load command like:

                  module load waves2Foam/20200703-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty waves2Foam/20200703-foss-2019b - x x - x x"}, {"location": "available_software/detail/wget/", "title": "wget", "text": ""}, {"location": "available_software/detail/wget/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wget installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wget, load one of these modules using a module load command like:

                  module load wget/1.21.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wget/1.21.1-GCCcore-10.3.0 - x x x x x wget/1.20.3-GCCcore-10.2.0 x x x x x x wget/1.20.3-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/wgsim/", "title": "wgsim", "text": ""}, {"location": "available_software/detail/wgsim/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wgsim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wgsim, load one of these modules using a module load command like:

                  module load wgsim/20111017-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wgsim/20111017-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/worker/", "title": "worker", "text": ""}, {"location": "available_software/detail/worker/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which worker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using worker, load one of these modules using a module load command like:

                  module load worker/1.6.13-iimpi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty worker/1.6.13-iimpi-2022b x x x x x x worker/1.6.13-iimpi-2021b x x x - x x worker/1.6.12-foss-2021b x x x - x x worker/1.6.11-intel-2019b - x x - x x"}, {"location": "available_software/detail/wpebackend-fdo/", "title": "wpebackend-fdo", "text": ""}, {"location": "available_software/detail/wpebackend-fdo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wpebackend-fdo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wpebackend-fdo, load one of these modules using a module load command like:

                  module load wpebackend-fdo/1.13.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wpebackend-fdo/1.13.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/wrapt/", "title": "wrapt", "text": ""}, {"location": "available_software/detail/wrapt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wrapt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wrapt, load one of these modules using a module load command like:

                  module load wrapt/1.15.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wrapt/1.15.0-gfbf-2023a x x x x x x wrapt/1.15.0-foss-2022b x x x x x x wrapt/1.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/wrf-python/", "title": "wrf-python", "text": ""}, {"location": "available_software/detail/wrf-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wrf-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wrf-python, load one of these modules using a module load command like:

                  module load wrf-python/1.3.4.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wrf-python/1.3.4.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/wtdbg2/", "title": "wtdbg2", "text": ""}, {"location": "available_software/detail/wtdbg2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wtdbg2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wtdbg2, load one of these modules using a module load command like:

                  module load wtdbg2/2.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wtdbg2/2.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/wxPython/", "title": "wxPython", "text": ""}, {"location": "available_software/detail/wxPython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wxPython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wxPython, load one of these modules using a module load command like:

                  module load wxPython/4.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wxPython/4.2.0-foss-2021b x x x x x x wxPython/4.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/wxWidgets/", "title": "wxWidgets", "text": ""}, {"location": "available_software/detail/wxWidgets/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wxWidgets installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wxWidgets, load one of these modules using a module load command like:

                  module load wxWidgets/3.2.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wxWidgets/3.2.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/x264/", "title": "x264", "text": ""}, {"location": "available_software/detail/x264/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which x264 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using x264, load one of these modules using a module load command like:

                  module load x264/20230226-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty x264/20230226-GCCcore-12.3.0 x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x x264/20220620-GCCcore-11.3.0 x x x x x x x264/20210613-GCCcore-11.2.0 x x x x x x x264/20210414-GCCcore-10.3.0 x x x x x x x264/20201026-GCCcore-10.2.0 x x x x x x x264/20191217-GCCcore-9.3.0 - x x - x x x264/20190925-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/x265/", "title": "x265", "text": ""}, {"location": "available_software/detail/x265/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which x265 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using x265, load one of these modules using a module load command like:

                  module load x265/3.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty x265/3.5-GCCcore-12.3.0 x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x x265/3.5-GCCcore-11.3.0 x x x x x x x265/3.5-GCCcore-11.2.0 x x x x x x x265/3.5-GCCcore-10.3.0 x x x x x x x265/3.3-GCCcore-10.2.0 x x x x x x x265/3.3-GCCcore-9.3.0 - x x - x x x265/3.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/xESMF/", "title": "xESMF", "text": ""}, {"location": "available_software/detail/xESMF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xESMF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xESMF, load one of these modules using a module load command like:

                  module load xESMF/0.3.0-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xESMF/0.3.0-intel-2020b - x x - x x xESMF/0.3.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/xarray/", "title": "xarray", "text": ""}, {"location": "available_software/detail/xarray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xarray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xarray, load one of these modules using a module load command like:

                  module load xarray/2023.9.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xarray/2023.9.0-gfbf-2023a x x x x x x xarray/2023.4.2-gfbf-2022b x x x x x x xarray/2022.6.0-foss-2022a x x x x x x xarray/0.20.1-intel-2021b x x x - x x xarray/0.20.1-foss-2021b x x x x x x xarray/0.19.0-foss-2021a x x x x x x xarray/0.16.2-intel-2020b - x x - x x xarray/0.16.2-fosscuda-2020b - - - - x - xarray/0.16.1-foss-2020a-Python-3.8.2 - x x - x x xarray/0.15.1-intel-2019b-Python-3.7.4 - x x - x x xarray/0.15.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/xorg-macros/", "title": "xorg-macros", "text": ""}, {"location": "available_software/detail/xorg-macros/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xorg-macros, load one of these modules using a module load command like:

                  module load xorg-macros/1.20.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-10.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-10.2.0 x x x x x x xorg-macros/1.19.2-GCCcore-9.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/xprop/", "title": "xprop", "text": ""}, {"location": "available_software/detail/xprop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xprop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xprop, load one of these modules using a module load command like:

                  module load xprop/1.2.5-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xprop/1.2.5-GCCcore-10.2.0 - x x x x x xprop/1.2.4-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/xproto/", "title": "xproto", "text": ""}, {"location": "available_software/detail/xproto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xproto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xproto, load one of these modules using a module load command like:

                  module load xproto/7.0.31-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xproto/7.0.31-GCCcore-10.3.0 - x x - x x xproto/7.0.31-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/xtb/", "title": "xtb", "text": ""}, {"location": "available_software/detail/xtb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xtb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xtb, load one of these modules using a module load command like:

                  module load xtb/6.6.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xtb/6.6.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/xxd/", "title": "xxd", "text": ""}, {"location": "available_software/detail/xxd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xxd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xxd, load one of these modules using a module load command like:

                  module load xxd/9.0.2112-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xxd/9.0.2112-GCCcore-12.3.0 x x x x x x xxd/9.0.1696-GCCcore-12.2.0 x x x x x x xxd/8.2.4220-GCCcore-11.3.0 x x x x x x xxd/8.2.4220-GCCcore-11.2.0 x x x - x x xxd/8.2.4220-GCCcore-10.3.0 - - - x - - xxd/8.2.4220-GCCcore-10.2.0 - - - x - -"}, {"location": "available_software/detail/yaff/", "title": "yaff", "text": ""}, {"location": "available_software/detail/yaff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which yaff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using yaff, load one of these modules using a module load command like:

                  module load yaff/1.6.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty yaff/1.6.0-intel-2020a-Python-3.8.2 x x x x x x yaff/1.6.0-intel-2019b-Python-3.7.4 - x x - x x yaff/1.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/yaml-cpp/", "title": "yaml-cpp", "text": ""}, {"location": "available_software/detail/yaml-cpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which yaml-cpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using yaml-cpp, load one of these modules using a module load command like:

                  module load yaml-cpp/0.7.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty yaml-cpp/0.7.0-GCCcore-12.3.0 x x x x x x yaml-cpp/0.7.0-GCCcore-11.2.0 x x x - x x yaml-cpp/0.6.3-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/zUMIs/", "title": "zUMIs", "text": ""}, {"location": "available_software/detail/zUMIs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zUMIs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zUMIs, load one of these modules using a module load command like:

                  module load zUMIs/2.9.7-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zUMIs/2.9.7-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/zarr/", "title": "zarr", "text": ""}, {"location": "available_software/detail/zarr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zarr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zarr, load one of these modules using a module load command like:

                  module load zarr/2.16.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zarr/2.16.0-foss-2022b x x x x x x zarr/2.13.3-foss-2022a x x x x x x zarr/2.13.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/zfp/", "title": "zfp", "text": ""}, {"location": "available_software/detail/zfp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zfp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zfp, load one of these modules using a module load command like:

                  module load zfp/1.0.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zfp/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib-ng/", "title": "zlib-ng", "text": ""}, {"location": "available_software/detail/zlib-ng/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zlib-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zlib-ng, load one of these modules using a module load command like:

                  module load zlib-ng/2.0.7-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zlib-ng/2.0.7-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib/", "title": "zlib", "text": ""}, {"location": "available_software/detail/zlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zlib, load one of these modules using a module load command like:

                  module load zlib/1.2.13-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zlib/1.2.13-GCCcore-13.2.0 x x x x x x zlib/1.2.13-GCCcore-12.3.0 x x x x x x zlib/1.2.13 x x x x x x zlib/1.2.12-GCCcore-12.2.0 x x x x x x zlib/1.2.12-GCCcore-11.3.0 x x x x x x zlib/1.2.12 x x x x x x zlib/1.2.11-GCCcore-11.2.0 x x x x x x zlib/1.2.11-GCCcore-10.3.0 x x x x x x zlib/1.2.11-GCCcore-10.2.0 x x x x x x zlib/1.2.11-GCCcore-9.3.0 x x x x x x zlib/1.2.11-GCCcore-8.3.0 x x x x x x zlib/1.2.11-GCCcore-8.2.0 - x - - - - zlib/1.2.11 x x x x x x"}, {"location": "available_software/detail/zstd/", "title": "zstd", "text": ""}, {"location": "available_software/detail/zstd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zstd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zstd, load one of these modules using a module load command like:

                  module load zstd/1.5.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zstd/1.5.5-GCCcore-13.2.0 x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x zstd/1.5.2-GCCcore-11.3.0 x x x x x x zstd/1.5.0-GCCcore-11.2.0 x x x x x x zstd/1.4.9-GCCcore-10.3.0 x x x x x x zstd/1.4.5-GCCcore-10.2.0 x x x x x x zstd/1.4.4-GCCcore-9.3.0 - x x x x x zstd/1.4.4-GCCcore-8.3.0 x - - - x -"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                  Or if you want to check whether a specific piece of software, a compiler or an application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
                  "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": "

                  Everyone can get access to and use the HPC-UGent supercomputing infrastructure and services. The conditions that apply depend on your affiliation.

                  "}, {"location": "sites/hpc_policies/#access-for-staff-and-academics", "title": "Access for staff and academics", "text": ""}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-flemish-university-associations", "title": "Researchers and staff affiliated with Flemish university associations", "text": "
                  • Includes externally funded researchers registered in the personnel database (FWO, SBO, VIB, IMEC, etc.).

                  • Includes researchers from all VSC partners.

                  • Usage is free of charge.

                  • Use your account credentials at your affiliated university to request a VSC-id and connect.

                  • See Getting an HPC Account.

                  "}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-other-flemish-or-federal-research-institutes", "title": "Researchers and staff affiliated with other Flemish or federal research institutes", "text": "
                  • Includes researchers from e.g. INBO, ILVO, RBINS, etc.

                  • HPC-UGent promotes using the Tier-1 services of the VSC.

                  • HPC-UGent can act as a liaison.

                  "}, {"location": "sites/hpc_policies/#students", "title": "Students", "text": "
                  • Students (Bachelor or Master) enrolled at one of the institutions mentioned above can also use HPC-UGent.

                  • The same conditions apply: usage is free of charge for all Flemish university associations.

                  • Use your university account credentials to request a VSC-id and connect.

                  "}, {"location": "sites/hpc_policies/#access-for-industry", "title": "Access for industry", "text": "

                  Researchers and developers from industry can use the VSC services and infrastructure that are tailored to industry.

                  "}, {"location": "sites/hpc_policies/#our-offer", "title": "Our offer", "text": "
                  • VSC has a dedicated service geared towards industry.

                  • HPC-UGent can act as a liaison to the VSC services.

                  "}, {"location": "sites/hpc_policies/#research-partnership", "title": "Research partnership:", "text": "
                  • Interested in collaborating in supercomputing with a UGent research group?

                  • We can help you look for a collaborative partner. Contact hpc@ugent.be.

                  "}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
                  $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

                  Or if you want to check whether a specific piece of software, a compiler or an application (e.g., LAMMPS) is installed on the HPC:

                  $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

                  As you may not be aware of the capital letters in the module name, we searched for the name case-insensitively using the \"-i\" option.

                  "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                  Or if you want to check whether a specific piece of software, a compiler or an application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
                  "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

                  (more info soon)

                  "}]} \ No newline at end of file diff --git a/HPC/Gent/Windows/sitemap.xml.gz b/HPC/Gent/Windows/sitemap.xml.gz index 1d20953bc28..5f0557a4e1f 100644 Binary files a/HPC/Gent/Windows/sitemap.xml.gz and b/HPC/Gent/Windows/sitemap.xml.gz differ diff --git a/HPC/Gent/Windows/useful_linux_commands/index.html b/HPC/Gent/Windows/useful_linux_commands/index.html index 2e0d6cf0950..8e0245d5289 100644 --- a/HPC/Gent/Windows/useful_linux_commands/index.html +++ b/HPC/Gent/Windows/useful_linux_commands/index.html @@ -1496,7 +1496,7 @@

                  How to get started with shell scr
                  nano foo
                   

                  or use the following commands:

                  -
                  echo "echo Hello! This is my hostname:" > foo
                  +
                  echo "echo 'Hello! This is my hostname:'" > foo
                   echo hostname >> foo
                   

                  The easiest ways to run a script is by starting the interpreter and pass @@ -1521,7 +1521,9 @@

                  How to get started with shell scr /bin/bash

                  We edit our script and change it with this information:

                  -
                  #!/bin/bash echo \"Hello! This is my hostname:\" hostname
                  +
                  #!/bin/bash
                  +echo "Hello! This is my hostname:"
                  +hostname
                   

                  Note that the "shebang" must be the first line of your script! Now the operating system knows which program should be started to run the diff --git a/HPC/Gent/macOS/search/search_index.json b/HPC/Gent/macOS/search/search_index.json index 8146efd56f2..26b6ed9f34f 100644 --- a/HPC/Gent/macOS/search/search_index.json +++ b/HPC/Gent/macOS/search/search_index.json @@ -1 +1 @@ -{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the HPC-UGent documentation", "text": "

                  Use the menu on the left to navigate, or use the search box on the top right.

                  You are viewing documentation intended for people using macOS.

                  Use the OS dropdown in the top bar to switch to a different operating system.

                  Quick links

                  • Getting Started | Getting Access
                  • Recording of HPC-UGent intro
                  • Linux Tutorial
                  • Hardware overview
                  • Migration of cluster and login nodes to RHEL9 (starting Sept'24)
                  • FAQ | Troubleshooting | Best practices | Known issues

                  If you find any problems in this documentation, please report them by mail to hpc@ugent.be or open a pull request.

                  If you still have any questions, you can contact the HPC-UGent team.

                  "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": "

                  New users should consult the Introduction to HPC to get started, which is a great resource for learning the basics, troubleshooting, and looking up specifics.

                  If you want to use software that's not yet installed on the HPC, send us a software installation request.

                  Overview of HPC-UGent Tier-2 infrastructure

                  "}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

                  An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.

                  See also: Running batch jobs.

                  "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

                  When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

                  Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

                  Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.
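
                  For example, a quick way to look these bundles up (add a specific version to the module name to see the full list of packages included in that version):

                  module spider SciPy-bundle\nmodule spider R-bundle-Bioconductor\n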

                  If the package or library you want is not available, send us a software installation request.

                  "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

                  Modules each come with a suffix that describes the toolchain used to install them.

                  Examples:

                  • AlphaFold/2.2.2-foss-2021a

                  • tqdm/4.61.2-GCCcore-10.3.0

                  • Python/3.9.5-GCCcore-10.3.0

                  • matplotlib/3.4.2-foss-2021a

                  Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

                  The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

                  You can use module avail [search_text] to see which versions on which toolchains are available to use.

                  If you need something that's not available yet, you can request it through a software installation request.

                  It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.
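
                  As a small illustration (using one of the example modules above), prefer the fully specified form; the version-less form silently picks a default that can change over time:

                  # explicit version and toolchain: reproducible\nmodule load matplotlib/3.4.2-foss-2021a\n\n# no version: the default may change as new modules are installed\n# module load matplotlib\n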

                  "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

                  When incompatible modules are loaded, you might encounter an error like this:

                  Lmod has detected the following error: A different version of the 'GCC' module\nis already loaded (see output of 'ml').\n

                  You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.

                  Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

                  An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

                  See also: How do I choose the job modules?

                  "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

                  The 72 hour walltime limit will not be extended. However, you can work around this barrier:

                  • Check that all available resources are being used. See also:
                    • How many cores/nodes should I request?.
                    • My job is slow.
                    • My job isn't using any GPUs.
                  • Use a faster cluster.
                  • Divide the job into more parallel processes.
                  • Divide the job into shorter processes, which you can submit as separate jobs.
                  • Use the built-in checkpointing of your software.
                  "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

                  Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

                  When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

                  Try requesting a bit more memory than your proportional share, and see if that solves the issue.
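
                  A minimal sketch of what that could look like in the job script header (the numbers are only illustrative; see the memory requirements chapter for the details):

                  #PBS -l nodes=1:ppn=8\n#PBS -l mem=20gb\n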

                  See also: Specifying memory requirements.

                  "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

                  When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the amount of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

                  It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

                  See also: Running interactive jobs.

                  "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

                  Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2

                  Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fosscuda toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.
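
                  As a rough sketch (the cluster name and resource numbers are illustrative, and gpu_job.sh is a placeholder for your own job script):

                  module swap cluster/joltik\nqsub gpu_job.sh    # the script itself requests GPUs, e.g. with: #PBS -l nodes=1:ppn=24:gpus=2\n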

                  See also: HPC-UGent GPU clusters.

                  "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

                  There are a few possible causes why a job can perform worse than expected.

                  Is your job using all the available cores you've requested? You can test this by increasing and decreasing the core amount: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

                  Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

                  Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are, relatively, very slow to access. Your jobs should rather use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: the job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
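
                  A minimal sketch of this copy-in/copy-out pattern inside a job script (input.dat, output.dat and myprogram are made-up names):

                  cp $VSC_DATA/input.dat $VSC_SCRATCH/\ncd $VSC_SCRATCH\nmyprogram input.dat > output.dat    # placeholder for the actual computation\ncp output.dat $VSC_DATA/\n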

                  "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

                  Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

                  To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
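
                  Putting both tips together, a minimal MPI job script could look roughly like this (mpi_program is a placeholder for your own executable; submit the script with qsub):

                  #!/bin/bash\n#PBS -l nodes=2:ppn=8\nmodule load vsc-mympirun\ncd $PBS_O_WORKDIR        # directory the job was submitted from\nmympirun ./mpi_program   # mympirun works out the requested nodes and cores itself\n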

                  See also: Multi core jobs/Parallel Computing and Mympirun.

                  "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

                  For example, we have a simple script (./hello.sh):

                  #!/bin/bash \necho \"hello world\"\n

                  And we run it like mympirun ./hello.sh --output output.txt.

                  To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

                  mympirun --output output.txt ./hello.sh\n
                  "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

                  See the explanation about how jobs get prioritized in When will my job start.

                  "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

                  When trying to create files, errors like this can occur:

                  No space left on device\n

                  The error \"No space left on device\" can mean two different things:

                  • all available storage quota on the file system in question has been used;
                  • the inode limit has been reached on that file system.

                  An inode can be seen as a \"file slot\", meaning that when the limit is reached, no more additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

                  Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
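
                  For example, a directory full of small files can be collapsed into a single archive like this (my_results is a made-up directory name; only remove the original after checking the archive):

                  tar czf my_results.tar.gz my_results/\ntar tzf my_results.tar.gz > /dev/null && rm -r my_results/    # remove the originals only if the archive lists cleanly\n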

                  If the problem persists, feel free to contact support.

                  "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

                  NO. You are not allowed to share your VSC account with anyone else; it is strictly personal.

                  See https://helpdesk.ugent.be/account/en/regels.php.

                  If you want to share data, there are alternatives (like shared directories in VO space, see Virtual organisations).

                  "}, {"location": "FAQ/#can-i-share-my-data-with-other-hpc-users", "title": "Can I share my data with other HPC users?", "text": "

                  Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

                  $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc40000 mygroup      40 Apr 12 15:00 dataset.txt\n
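
                  To inspect or undo such an access rule later, getfacl and setfacl -x can be used, for example:

                  getfacl dataset.txt                   # show the current access control list\nsetfacl -x u:otheruser dataset.txt    # remove the extra permission for otheruser again\n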

                  For more information about chmod or setfacl, see Linux tutorial.

                  "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

                  Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

                  "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

                  Please fill out the details about the software and why you need it in this form: https://www.ugent.be/hpc/en/support/software-installation-request. When submitting the form, a mail will be sent to hpc@ugent.be containing all the provided information. The HPC team will look into your request as soon as possible and contact you when the installation is done or if further information is required.

                  If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
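
                  A brief sketch of that manual route (the Python module version and mypackage are only placeholders; check module avail Python for what is actually installed):

                  module load Python/3.11.3-GCCcore-12.3.0\npython -m venv $VSC_DATA/venvs/myenv\nsource $VSC_DATA/venvs/myenv/bin/activate\npip install mypackage\n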

                  "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

                  On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

                  macOS & Linux (on Windows, only the second part is shown):

                  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

                  Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

                  "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

                  A Virtual Organisation consists of a number of members and moderators. A moderator can:

                  • Manage the VO members (but can't access/remove their data on the system).

                  • See how much storage each member has used, and set limits per member.

                  • Request additional storage for the VO.

                  One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

                  See also: Virtual Organisations.

                  "}, {"location": "FAQ/#my-ugent-shared-drives-dont-show-up", "title": "My UGent shared drives don't show up", "text": "

                  After mounting the UGent shared drives with kinit your_email@ugent.be, you might not see an entry with your username when listing ls /UGent. This is normal: try ls /UGent/your_username or cd /UGent/your_username, and you should be able to access the drives. Be sure to use your UGent username and not your VSC username here.

                  See also: Your UGent home drive and shares.

                  "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

                  Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

                  du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

                  The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into an egrep to filter the lines to the ones that matter the most.

                  The egrep command will only let entries that match with the specified regular expression [0-9]{3}M|[0-9]G through, which corresponds with files that consume more than 100 MB.
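
                  To see the hidden entries themselves, pass the -A flag to ls, for example:

                  ls -A $VSC_HOME    # also lists files and directories whose names start with a dot\n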

                  "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

                  By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

                  You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

                  "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

                  When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

                  sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

                  A lot of tasks can be performed without sudo, including installing software in your own account.

                  Installing software

                  • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
                  • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
                  "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

                  Who can I contact?

                  • General questions regarding HPC-UGent and VSC: hpc@ugent.be

                  • HPC-UGent Tier-2: hpc@ugent.be

                  • VSC Tier-1 compute: compute@vscentrum.be

                  • VSC Tier-1 cloud: cloud@vscentrum.be

                  "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

                  Hanythingondemand (or HOD for short) is a tool to run a Hadoop (Yarn) cluster on a traditional HPC system.

                  "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

                  The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

                  "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

                  Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception, for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

                  module load hod\n
                  "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

                  The hod modules are constructed such that they can be used on the HPC-UGent infrastructure login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

                  As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

                  For example, this will work as expected:

                  $ module swap cluster/donphan\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

                  Note that also modules named hanythingondemand/* are available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

                  "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

                  The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

                  $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

                  By defining these environment variables, we avoid that you have to specify --hod-module and --workdir when using hod batch or hod create, since these options would otherwise be strictly required.

                  If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
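
                  For example, a minimal sketch of redefining the parent working directory (the path used here is just an illustration):

                  export HOD_BATCH_WORKDIR=$VSC_SCRATCH/my_hod_workdir\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/my_hod_workdir\n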

                  Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

                  "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

                  After HOD clusters terminate, their local working directory and cluster information are typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

                  These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

                  You should occasionally clean this up using hod clean:

                  $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/doduo(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        123456         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/123456 for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/donphan\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.donphan.gent.vsc &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.donphan.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
                  Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

                  "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

                  If you have any questions, or are experiencing problems using HOD, you have a couple of options:

                  • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

                  • Contact the HPC-UGent team via hpc@ugent.be

                  • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

                  "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

                  Note

                  To run a MATLAB program on the HPC-UGent infrastructure you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

                  Compiling MATLAB programs is only possible on the interactive debug cluster, not on the HPC-UGent login nodes, where the resource limits w.r.t. memory and maximum number of processes are too strict.

                  "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

                  The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

                  Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

                  Only a limited number of MATLAB sessions can be active at the same time because there are only a limited number of MATLAB research licenses available on the UGent MATLAB license server. If every job needed a license, the licenses would quickly run out.

                  "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

                  Compiling MATLAB code can only be done from the login nodes, because only login nodes can access the MATLAB license server, workernodes on clusters cannot.

                  To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

                  $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

                  After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

                  To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

                  First, we copy the magicsquare.m example that comes with MATLAB to example.m:

                  cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

                  To compile a MATLAB program, use mcc -mv:

                  mcc -mv example.m\nOpening log file:  /user/home/gent/vsc400/vsc40000/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/home/gent/vsc400/vsc40000/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/home/gent/vsc400/vsc40000/readme.txt\".\nGenerating file \"run_example.sh\".\n
                  "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

                  To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

                  It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

                  For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.
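
                  As a concrete sketch (reusing example.m from above, and assuming the examplelib and datafiles directories exist in the current directory):

                  # compile example.m; also search examplelib for MATLAB files,\n# and bundle everything under datafiles into the standalone executable\nmcc -mv example.m -I examplelib -a datafiles\n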

                  "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

                  If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

                  export _JAVA_OPTIONS=\"-Xmx64M\"\n

                  The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

                  Another possible issue is that the heap size is too small. This could result in errors like:

                  Error: Out of memory\n

                  A possible solution is to set a larger maximum heap size:

                  export _JAVA_OPTIONS=\"-Xmx512M\"\n
                  "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

                  MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

                  The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

                  You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

                  parpool.m
                  % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

                  See also the parpool documentation.

                  "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

                  Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

                  MATLAB_LOG_DIR=<OUTPUT_DIR>\n

                  where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

                  # create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\n$ export MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

                  You should remove the directory at the end of your job script:

                  rm -rf $MATLAB_LOG_DIR\n
                  "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

                  When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

                  The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

                  export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024MB\n

                  So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

                  "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

                  All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

                  jobscript.sh
                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
                  "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

                  VNC is still available at the UGent site, but we encourage our users to replace VNC with the X2Go client. Please see Graphical applications with X2Go for more information.

                  Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

                  Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

                  "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

                  First log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

                  $ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'gligar07.gastly.os:6 (vsc40000)' desktop is gligar07.gastly.os:6\n\nCreating default startup script /user/home/gent/vsc400/vsc40000.vnc/xstartup\nCreating default config /user/home/gent/vsc400/vsc40000.vnc/config\nStarting applications specified in /user/home/gent/vsc400/vsc40000.vnc/xstartup\nLog file is /user/home/gent/vsc400/vsc40000.vnc/gligar07.gastly.os:6.log\n

                  When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

                  Note down the details in bold: the hostname (in the example: gligar07.gastly.os) and the (partial) port number (in the example: 6).

                  It's important to remember that VNC sessions are persistent: they survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (much like the terminal equivalents screen or tmux). This also means you don't have to start vncserver each time you want to connect.

                  "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

                  You can get a list of running VNC servers on a node with

                  $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

                  This only displays the running VNC servers on the login node you run the command on.

                  To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

                  $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/gligar07.gastly.os:6.pid\n.vnc/gligar08.gastly.os:8.pid\n

                  This shows that there is a VNC server running on gligar07.gastly.os on port 5906 and another one running on gligar08.gastly.os on port 5908 (see also Determining the source/destination port).

                  "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

                  The VNC server runs on a login node (in the example above, on gligar07.gastly.os).

                  In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

                  Login nodes are rebooted from time to time. You can check that the VNC server is still running on the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

                  To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

                  The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

                  "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

                  The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

                  The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend using the same value as the destination port.

                  So, in our running example, both the source and destination ports are 5906.
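
                  A quick way to compute the destination port in a shell, assuming the partial port number you noted down is 6:

                  display_nr=6\n# VNC port = 5900 + partial port number\necho $((5900 + display_nr))   # prints 5906\n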

                  "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

                  In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.ugent.be (see Setting up the SSH tunnel(s)).

                  If the login node you end up on is a different one than the one where your VNC server is running (i.e., gligar08.gastly.os rather than gligar07.gastly.os in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

                  In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

                  To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

                  Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to gligar07.gastly.os, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

                  In practice, if you pick a random number between $10000$ and $30000$, you have a good chance that the port will not be used yet.

                  We will proceed with $12345$ as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than $1025$).
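
                  A minimal sketch to pick a random value in the suggested range yourself (the result differs on every run):

                  # random intermediate port between 10000 and 29999\necho $((10000 + RANDOM % 20000))\n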

                  "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcugentbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.ugent.be", "text": "

                  First, we will set up the SSH tunnel from our workstation to login.hpc.ugent.be.

                  Use the settings specified in the sections above:

                  • source port: the port on which the VNC server is running (see Determining the source/destination port);

                  • destination host: localhost;

                  • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

                  Execute the following command to set up the SSH tunnel.

                  ssh -L 5906:localhost:12345  vsc40000@login.hpc.ugent.be\n

                  Replace the source port 5906, destination port 12345 and user ID vsc40000 with your own!

                  With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

                  Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

                  "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

                  Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

                  You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

                  netstat -an | grep -i listen | grep tcp | grep 12345\n

                  If you see no matching lines, then the port you picked is still available, and you can continue.

                  If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

                  $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
                  "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

                  In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.ugent.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (gligar07.gastly.os in our running example, see Starting a VNC server).

                  To do this, run the following command:

                  $ ssh -L 12345:localhost:5906 gligar07.gastly.os\n$ hostname\ngligar07.gastly.os\n

                  With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (gligar07.gastly.os).

                  Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

                  Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (gligar07.gastly.os) in the command shown above!

                  As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

                  "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

                  You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. You can download the latest version by clicking the top-most folder that has a version number in it that doesn't also have beta in the version. Then download a file ending in TurboVNC64-2.1.2.dmg (the version number can be different) and execute it.

                  Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

                  When prompted for a password, use the password you used to set up the VNC server.

                  When prompted for default or empty panel, choose default.

                  If you have an empty panel, you can reset your settings with the following commands:

                  xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
                  "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

                  The VNC server can be killed by running

                  vncserver -kill :6\n

                  where 6 is the (partial) port number we noted down earlier. If you forgot it, you can get it with vncserver -list (see List running VNC servers).

                  "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

                  You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).
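
                  For example, assuming your VNC server is running on display :6 as in the running example above:

                  vncserver -kill :6\nrm ~/.vnc/passwd\nvncserver -geometry 1920x1080 -localhost\n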

                  "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

                  All users of AUGent can request an account on the HPC, which is part of the Flemish Supercomputing Centre (VSC).

                  See HPC policies for more information on who is entitled to an account.

                  The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

                  There are two methods for connecting to HPC-UGent infrastructure:

                  • Using a terminal to connect via SSH.
                  • Using the web portal

                  The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

                  If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of utilizing the HPC-UGent web portal by reading Using the HPC-UGent web portal.

                  The HPC-UGent infrastructure clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the HPC. Access to the HPC is granted to anyone who can prove they have access to the corresponding private key on their local computer.

                  "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
                  • an SSH public/private key pair can be seen as a lock and a key

                  • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

                  • the SSH private key is like a physical key: you don't hand it out to other people.

                  • anyone who has the key (and the optional password) can unlock the door and log in to the account.

                  • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

                  Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). To open a Terminal window in macOS, open the Finder and choose

                  >> Applications > Utilities > Terminal

                  Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on macOS is using the OpenSSH client included with macOS, which you can then also use to log on to the clusters.

                  "}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

                  Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

                  \"Secure\" means that:

                  1. the User is authenticated to the System; and

                  2. the System is authenticated to the User; and

                  3. all data is encrypted during transfer.

                  OpenSSH is a FREE implementation of the SSH connectivity protocol. macOS comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

                  On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

                  $ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

                  To access the clusters and transfer your files, you will use the following commands (a brief illustration follows after this list):

                  1. ssh-keygen: to generate the SSH key pair (public + private key);

                  2. ssh: to open a shell on a remote machine;

                  3. sftp: a secure equivalent of ftp;

                  4. scp: a secure equivalent of the remote copy command rcp.
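
                  A brief illustration of these commands (replace vsc40000 with your own VSC id; the file name results.txt is just an example):

                  # generate an SSH key pair (see below)\nssh-keygen -t rsa -b 4096\n# open a shell on a login node\nssh vsc40000@login.hpc.ugent.be\n# copy a file to the cluster with scp\nscp results.txt vsc40000@login.hpc.ugent.be:\n# interactive secure file transfer with sftp\nsftp vsc40000@login.hpc.ugent.be\n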

                  "}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

                  A key pair might already be present in the default location inside your home directory. Therefore, we first check if a key is available with the \"ls\" (list) command:

                  ls ~/.ssh\n

                  If a key-pair is already available, you would normally get:

                  authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

                  Otherwise, the command will show:

                  ls: .ssh: No such file or directory\n

                  You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

                  You will need to generate a new key pair, when:

                  1. you don't have a key pair yet

                  2. you forgot the passphrase protecting your private key

                  3. your private key was compromised

                  4. your key pair is too short or not the right type

                  For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key, it is private and should stay private. You should not even copy it to one of your other machines, instead, you should create a new public/private key pair for each machine.

                  ssh-keygen -t rsa -b 4096\n

                  This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

                  Without your key pair, you won't be able to apply for a personal VSC account.

                  "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

                  Most recent Unix derivatives include an SSH agent by default to keep and manage the user's SSH keys. If you use one of these derivatives, you must add the new keys to the SSH agent keyring to be able to connect to the HPC cluster. If not, the SSH client will display an error message (see Connecting) similar to this:

                  Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

                  This could be fixed using the ssh-add command. You can include the new private keys' identities in your keyring with:

                  ssh-add\n

                  Tip

                  Without extra options, ssh-add adds any key located in the $HOME/.ssh directory, but you can specify the private key location as an argument, for example: ssh-add /path/to/my/id_rsa.

                  Check that your key is available from the keyring with:

                  ssh-add -l\n

                  After these changes, the SSH agent will keep your SSH key so you can connect to the clusters as usual.

                  Tip

                  You should execute the ssh-add command again if you generate a new SSH key.

                  "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

                  Visit https://account.vscentrum.be/

                  You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

                  Select \"UGent\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

                  Click Confirm

                  You will now be taken to the authentication page of your institute.

                  You will now have to log in with CAS using your UGent account.

                  You either have a login name of maximum 8 characters, or a (non-UGent) email address if you are an external user. In case of problems with your UGent password, please visit: https://password.ugent.be/. After logging in, you may be requested to share your information. Click \"Yes, continue\".

                  After you log in using your UGent login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

                  This file has been stored in the directory \"~/.ssh/\".

                  Tip

                  As \".ssh\" is an invisible directory, the Finder will not show it by default. The easiest way to access the folder, is by pressing Cmd+Shift+G (or Cmd+Shift+.), which will allow you to enter the name of a directory, which you would like to open in Finder. Here, type \"~/.ssh\" and press enter.

                  After you have uploaded your public key you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address the VSC staff will review and if applicable approve your account.

                  "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

                  Within one day, you should receive a Welcome e-mail with your VSC account details.

                  Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc40000\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

                  Now, you can start using the HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

                  "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

                  If you connect to the login nodes from different computers, it is advised to use a separate SSH public key for each computer. You should follow these steps.

                  1. Create a new public/private SSH key pair from the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH.

                  2. Go to https://account.vscentrum.be/django/account/edit

                  3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

                  4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

                  5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

                  "}, {"location": "account/#computation-workflow-on-the-hpc", "title": "Computation Workflow on the HPC", "text": "

                  A typical Computation workflow will be:

                  1. Connect to the HPC

                  2. Transfer your files to the HPC

                  3. Compile your code and test it

                  4. Create a job script

                  5. Submit your job

                  6. Wait while

                    1. your job gets into the queue

                    2. your job gets executed

                    3. your job finishes

                  7. Move your results

                  We'll take you through the different tasks one by one in the following chapters.

                  "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

                  AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

                  See https://www.vscentrum.be/alphafold for more information; there you can also find a getting-started video recording if you prefer that.

                  "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

                  This chapter focuses specifically on the use of AlphaFold on the HPC-UGent infrastructure. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

                  • AlphaFold website: https://alphafold.com/
                  • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
                  • AlphaFold FAQ: https://alphafold.com/faq
                  • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
                  • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
                  • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
                    • recording available on YouTube
                    • slides available here (PDF)
                    • see also https://www.vscentrum.be/alphafold
                  "}, {"location": "alphafold/#using-alphafold-on-hpc-ugent-infrastructure", "title": "Using AlphaFold on HPC-UGent infrastructure", "text": "

                  Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

                  $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

                  To use AlphaFold, you should load a particular module, for example:

                  module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

                  We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

                  Warning

                  When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

                  Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

                  $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

                  The directories located there indicate when the data was downloaded, which leaves room for providing updated datasets later.

                  At the time of writing, the latest version is 20230310.

                  Info

                  The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

                  The AlphaFold installations we provide have been modified a bit to facilitate the usage on HPC-UGent infrastructure.

                  "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

                  The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

                  export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

                  Use newest version

                  Do not forget to replace 20230310 with a more up to date version if available.

                  "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

                  AlphaFold provides a script called run_alphafold.py.

                  A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

                  The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

                  Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

                  For more information about the script and options see this section in the official README.

                  READ README

                  It is strongly advised to read the official README provided by DeepMind before continuing.

                  "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

                  The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

                  Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

                  Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
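
                  For example, a minimal sketch (the values are just an illustration; match them to the number of cores requested for your job):

                  # use 8 cores for hhblits (instead of the default 4) and keep 8 cores for jackhmmer\nexport ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n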

                  Info

                  Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

                  "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

                  The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

                  Using --db_preset=full_dbs, the following runtime data was collected:

                  • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
                  • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
                  • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
                  • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

                  This highlights a couple of important attention points:

                  • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
                  • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
                  • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

                  With --db_preset=casp14, it is clearly more demanding:

                  • On doduo, with 24 cores (1 node): still running after 48h...
                  • On joltik, 1 V100 GPU + 8 cores: 4h 48min

                  This highlights the difference between CPU and GPU performance even more.

                  "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

                  The following example comes from the official Examples section in the Alphafold README. The run command is slightly different (see above: Running AlphaFold).

                  Do not forget to set up the environment (see above: Setting up the environment).

                  "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

                  Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

                  >sequence_name\n<SEQUENCE>\n

                  Then run the following command in the same directory:

                  alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

                  See AlphaFold output, for information about the outputs.

                  Info

                  For more scenarios see the example section in the official README.

                  "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

                  The following two example job scripts can be used as a starting point for running AlphaFold.

                  The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

                  To run the job scripts you need to create a file named T1050.fasta with the following content:

                  >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
                  source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

                  "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

                  Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

                  Swap to the joltik GPU before submitting it:

                  module swap cluster/joltik\n
                  AlphaFold-gpu-joltik.sh
                  #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
                  "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

                  Jobscript that runs AlphaFold on CPU using 24 cores on one node.

                  AlphaFold-cpu-doduo.sh
                  #!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

                  In case of problems or questions, don't hesitate to contact us at hpc@ugent.be.

                  "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

                  Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

                  One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

                  For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

                  This documentation only covers aspects of using Apptainer on the HPC-UGent infrastructure.

                  "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

                  Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to avoid that the use of Apptainer impacts other users on the system.

                  The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

                  In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

                  If these limitations are a problem for you, please let us know via hpc@ugent.be.

                  "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

                  All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.
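
                  As a quick sanity check, a minimal sketch (assuming a container image is already available in $VSC_SCRATCH, like the tutorial image used in the examples below):

                  # list the contents of your data directory from inside the container\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ls $VSC_DATA\n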

                  "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

                  Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the HPC-UGent infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

                  Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, such as the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example to make an Apptainer/Singularity container image:

                  # avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# move container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
                  "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

                  For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

                  We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

                  "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

                  Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

                  cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

                  Create a job script like:

                  #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

                  Create an example my_script.sh:

                  #!/bin/bash\n\n# prime factors\nfactor 1234567\n
                  "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

                  We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.

                  Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

                  cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
                  #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

                  You can download linear_regression.py from the official Tensorflow repository.

                  "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

                  It is also possible to execute MPI jobs within a container, but the following requirements apply:

                  • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

                  • Use modules within the container (install the environment-modules or lmod package in your container)

                  • Load the required module(s) before apptainer execution.

                  • Set the C_INCLUDE_PATH variable in your container if it is required at compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

                  Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

                  cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

                  For example, to compile an MPI example:

                  module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

                  Example MPI job script:

                  #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
                  "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
                  1. Before starting, you should always check:

                    • Are there any errors in the script?

                    • Are the required modules loaded?

                    • Is the correct executable used?

                  2. Check your compute requirements upfront, and request the correct resources in your batch job script.

                    • Number of requested cores

                    • Amount of requested memory

                    • Requested network type

                  3. Check your jobs at runtime. You could log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you could run an interactive job (qsub -I).

                  4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

                  5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

                  6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It starts in your home directory $VSC_HOME, so changing to the directory you submitted from with cd $PBS_O_WORKDIR is the first thing to do. You will have your default environment, so don't forget to load the software with module load. A minimal job script that combines several of these points is sketched after this list.

                  7. Submit your job and wait (be patient) ...

                  8. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

                  9. The runtime is limited by the maximum walltime of the queues.

                  10. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

                  11. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

                  12. And above all, do not hesitate to contact the HPC staff at hpc@ugent.be. We're here to help you.
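
                  The following job script is a minimal sketch that combines several of the points above (requesting resources, cd $PBS_O_WORKDIR, loading software, and staging data on the local scratch $VSC_SCRATCH_NODE). The module name (foss), the input/output file names and the my_program executable are placeholders you should replace with your own:

                  #!/bin/sh\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=4:00:00\n#PBS -l mem=16gb\n\n# go to the directory the job was submitted from\ncd $PBS_O_WORKDIR\n\n# load the required software (placeholder module)\nmodule load foss\n\n# stage input data on the fast local scratch of the node\ncp input.dat $VSC_SCRATCH_NODE/\ncd $VSC_SCRATCH_NODE\n\n# run the (placeholder) executable and copy the results back\n$PBS_O_WORKDIR/my_program input.dat > output.dat\ncp output.dat $PBS_O_WORKDIR/\n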

                  "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

                  All nodes in the HPC cluster run the \"RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty)\" operating system, which is a specific version of Red Hat Enterprise Linux. This means that all software programs (executables) that the end-user wants to run on the HPC must first be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). It also means that you first have to install all the required external software packages on the HPC.

                  The most commonly used compilers are already pre-installed on the HPC and can be used straight away. Many popular external software packages that are regularly used in the scientific community are also pre-installed.

                  "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-hpc", "title": "Check the pre-installed software on the HPC", "text": "

                  To check all the available modules and their version numbers that are pre-installed on the HPC, enter:

                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                  Or, when you want to check whether some specific software, compiler or application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

                  When your required application is not available on the HPC, please contact any HPC member. Be aware of potential \"License Costs\"; \"Open Source\" software is often preferred.

                  "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

                  To port a software program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., Red Hat Enterprise Linux on our HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" your code is.

                  In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

                  In some cases, software usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

                  Software that is not portable in this sense will have to be transferred with modifications to support the environment on the destination machine.

                  Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

                  Porting your code to the RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty) platform is the responsibility of the end-user.

                  "}, {"location": "compiling_your_software/#compiling-and-building-on-the-hpc", "title": "Compiling and building on the HPC", "text": "

                  Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

                  All the HPC nodes run the same version of the Operating System, i.e. RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

                  A typical process looks like:

                  1. Copy your software to the login-node of the HPC

                  2. Start an interactive session on a compute node;

                  3. Compile it;

                  4. Test it locally;

                  5. Generate your job scripts;

                  6. Test it on the HPC

                  7. Run it (in parallel);

                  We assume you've copied your software to the HPC. The next step is to request your private compute node.

                  $ qsub -I\nqsub: waiting for job 123456 to start\n
                  "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

                  Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

                  cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

                  We now list the directory and explore the contents of the \"hello.c\" program:

                  $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

                  hello.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include \"stdio.h\"\nint main( int argc, char *argv[] )\n{\nint i;\nfor (i=0; i<500; i++)\n{\nprintf(\"Hello #%d\\n\", i);\nfflush(stdout);\nsleep(1);\n}\n}\n

                  The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

                  We first need to compile this C-file into an executable with the gcc-compiler.

                  First, check the command line options for \"gcc\" (the GNU C compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

                  $ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc40000 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc40000  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc40000  130 Sep 16 11:39 hello.pbs*\n

                  A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

                  Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

                  $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

                  It seems to work; now run it on the HPC:

                  qsub hello.pbs\n
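
                  The contents of hello.pbs are not shown here; a minimal sketch of what such a job script could look like (the actual example file in the examples directory may differ) is:

                  #!/bin/sh\n#PBS -o hello.output\n#PBS -e hello.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=0:15:00\n\ncd $PBS_O_WORKDIR\nmodule load foss\n./hello\n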

                  "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
                  cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

                  List the directory and explore the contents of the \"mpihello.c\" program:

                  $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

                  mpihello.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char **argv)\n{\nint node, i, j;\nfloat f;\n\nMPI_Init(&argc,&argv);\nMPI_Comm_rank(MPI_COMM_WORLD, &node);\n\nprintf(\"Hello World from Node %d.\\n\", node);\nfor (i=0; i<=100000; i++)\nf=i*2.718281828*i+i+i*3.141592654;\n\nMPI_Finalize();\n}\n

                  The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

                  Next, check the command line options for \"mpicc\" (the GNU C compiler with MPI extensions), then compile and list the contents of the directory again:

                  mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

                  A new file \"hello\" has been created. Note that this program has \"execute\" rights.

                  Let's test this program on the \"login\" node first:

                  $ ./mpihello\nHello World from Node 0.\n

                  It seems to work; now run it on the HPC:

                  qsub mpihello.pbs\n
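
                  Again, as a sketch, a job script like mpihello.pbs might look as follows (the actual example file may differ); mympirun from the vsc-mympirun module takes care of starting the MPI processes:

                  #!/bin/sh\n#PBS -o mpihello.output\n#PBS -e mpihello.error\n#PBS -l nodes=2:ppn=4\n#PBS -l walltime=0:30:00\n\ncd $PBS_O_WORKDIR\nmodule load foss vsc-mympirun\nmympirun ./mpihello\n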
                  "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

                  We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

                  cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

                  We will compile this C/MPI file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

                  module purge\nmodule load intel\n

                  Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

                  mpiicc -o mpihello mpihello.c\nls -l\n

                  Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

                  $ ./mpihello\nHello World from Node 0.\n

                  It seems to work; now run it on the HPC:

                  qsub mpihello.pbs\n

                  Note: The AUGent only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

                  Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Below is an overview of the C, C++ and Fortran compilers.

                  | Language | Sequential (GNU) | Sequential (Intel) | Parallel with MPI (GNU) | Parallel with MPI (Intel) |
                  |----------|------------------|--------------------|-------------------------|---------------------------|
                  | C        | gcc              | icc                | mpicc                   | mpiicc                    |
                  | C++      | g++              | icpc               | mpicxx                  | mpiicpc                   |
                  | Fortran  | gfortran         | ifort              | mpif90                  | mpiifort                  |
                  "}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

                  Before you can really start using the HPC clusters, there are several things you need to do or know:

                  1. You need to log on to the cluster using an SSH client to one of the login nodes, or by using the HPC web portal. This will give you command-line access. For the web portal, a standard web browser like Firefox or Chrome will suffice.

                  2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

                  3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

                  4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

                  "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

                  Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

                  VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

                  All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

                  • Use a VPN connection to connect to the UGent network (recommended). See https://helpdesk.ugent.be/vpn/en/ for more information.

                  • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your UGent account.

                    • While this web connection is active new SSH sessions can be started.

                    • Active SSH sessions will remain active even when this web page is closed.

                  • Contact your HPC support team (via hpc@ugent.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

                  Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

                  ssh_exchange_identification: read: Connection reset by peer\n
                  "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

                  The remaining content in this chapter is primarily aimed at people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

                  If you have any issues connecting to the HPC after you've followed these steps, see Issues connecting to login node to troubleshoot.

                  "}, {"location": "connecting/#connect", "title": "Connect", "text": "

                  Open up a terminal and enter the following command to connect to the HPC. You can open a terminal by navigating to Applications and then Utilities in the Finder and opening Terminal.app, or by entering Terminal in Spotlight Search.

                  ssh vsc40000@login.hpc.ugent.be\n

                  Here, user vsc40000 wants to make a connection to the \"hpcugent\" cluster at UGent via the login node \"login.hpc.ugent.be\", so replace vsc40000 with your own VSC id in the above command.

                  The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

                  A possible error message you can get if you previously saved your private key somewhere else than the default location ($HOME/.ssh/id_rsa):

                  Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

                  In this case, use the -i option for the ssh command to specify the location of your private key. For example:

                  ssh -i /home/example/my_keys vsc40000@login.hpc.ugent.be\n

                  Congratulations, you're on the HPC infrastructure now! To find out where you have landed you can print the current working directory:

                  $ pwd\n/user/home/gent/vsc400/vsc40000\n

                  Your new private home directory is \"/user/home/gent/vsc400/vsc40000\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the HPC.

                  $ cd /apps/gent/tutorials\n$ ls\nIntro-HPC/\n

                  This directory currently contains all training material for the Introduction to the HPC. More relevant training material to work with the HPC can always be added later in this directory.

                  You can now explore the contents of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands.

                  As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

                  $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

                  This directory contains:

                  1. This HPC Tutorial (in either a Mac, Linux or Windows version).

                  2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

                  cd examples\n

                  Tip

                  Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

                  Tip

                  For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands

                  The first action is to copy the contents of the HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

                  cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  Go to your home directory, check your own private examples directory, ...\u00a0and start working.

                  cd\nls -l\n

                  Upon connecting you will see a login message containing your last login time stamp and a basic overview of the current cluster utilisation.

                  Last login: Thu Mar 18 13:15:09 2021 from gligarha02.gastly.os\n\n STEVIN HPC-UGent infrastructure status on Mon, 19 Feb 2024 10:00:01\n      cluster         - full - free -  part - total - running - queued\n                        nodes  nodes   free   nodes   jobs      jobs\n -------------------------------------------------------------------------\n           skitty          39      0     26      68      1839     5588\n           joltik           6      0      1      10        29       18\n            doduo          22      0     75     128      1397    11933\n         accelgor           4      3      2       9        18        1\n          donphan           0      0     16      16        16       13\n          gallade           2      0      5      16        19      136\n\n\nFor a full view of the current loads and queues see:\nhttps://hpc.ugent.be/clusterstate/\nUpdates on current system status and planned maintenance can be found on https://www.ugent.be/hpc/en/infrastructure/status\n

                  You can exit the connection at any time by entering:

                  $ exit\nlogout\nConnection to login.hpc.ugent.be closed.\n

                  tip: Setting your Language right

                  You may encounter a warning message similar to the following one while connecting:

                  perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
                  or any other error message complaining about the locale.

                  This means that the correct \"locale\" has not yet been properly specified on your local machine. Try:

                  LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

                  A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.

                  Note

                  If you try to set a non-supported locale, then it will be automatically set to the default. Currently the default is en_US.UTF-8 or en_US, depending on whether your original (non-supported) locale was UTF-8 or not.

                  Open the .bashrc on your local machine with your favourite editor and add the following lines:

                  $ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

                  tip: vi

                  To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can exit vi and save your changes by entering \"ESC :wq\". To exit vi without saving your changes, enter \"ESC :q!\".

                  or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

                  echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

                  You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

                  "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

                  Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is by using scp or sftp via the secure OpenSSH protocol. macOS ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

                  "}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

                  Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

                  It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.

                  Open an additional terminal window and check that you're working on your local machine.

                  $ hostname\n<local-machine-name>\n

                  If you're still using the terminal that is connected to the HPC, close the connection by typing \"exit\" in the terminal window.

                  For example, we will copy the (local) file \"localfile.txt\" to your home directory on the HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc40000\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc40000@login.hpc.ugent.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

                  $ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc40000@login.hpc.ugent.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

                  Connect to the HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

                  $ pwd\n/user/home/gent/vsc400/vsc40000\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

                  The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-macOS-Gent.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

                  First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

                  $ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc40000 Sep 11 09:53 intro-HPC-macOS-Gent.pdf\n

                  Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

                  $ scp vsc40000@login.hpc.ugent.be:./docs/intro-HPC-macOS-Gent.pdf .\nintro-HPC-macOS-Gent.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

                  The file has been copied from the HPC to your local computer.

                  It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

                  scp -r dataset vsc40000@login.hpc.ugent.be:scratch\n

                  If you don't use the -r option to copy a directory, you will run into the following error:

                  $ scp dataset vsc40000@login.hpc.ugent.be:scratch\ndataset: not a regular file\n
                  "}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

                  The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

                  The sftp command is the equivalent of the ftp command, with the difference that it uses the secure SSH protocol to connect to the clusters.

                  One easy way of starting an sftp session is:

                  sftp vsc40000@login.hpc.ugent.be\n

                  Typical and popular commands inside an sftp session are:

                  | Command | Description |
                  |---------|-------------|
                  | cd ~/examples/fibo | Move to the examples/fibo subdirectory on the remote machine (i.e., the HPC). |
                  | ls | Get a list of the files in the current directory on the HPC. |
                  | get fibo.py | Copy the file \"fibo.py\" from the HPC. |
                  | get tutorial/HPC.pdf | Copy the file \"HPC.pdf\" from the HPC, which is in the \"tutorial\" subdirectory. |
                  | lcd test | Move to the \"test\" subdirectory on your local machine. |
                  | lcd .. | Move up one level in the local directory. |
                  | lls | Get a local directory listing. |
                  | put test.py | Copy the local file test.py to the HPC. |
                  | put test1.py test2.py | Copy the local file test1.py to the HPC and rename it to test2.py. |
                  | bye | Quit the sftp session. |
                  | mget *.cc | Copy all the remote files with extension \".cc\" to the local directory. |
                  | mput *.h | Copy all the local files with extension \".h\" to the HPC. |
                  "}, {"location": "connecting/#using-a-gui-cyberduck", "title": "Using a GUI (Cyberduck)", "text": "

                  Cyberduck is a graphical alternative to the scp command. It can be installed from https://cyberduck.io.

                  This is the one-time setup you will need to do before connecting:

                  1. After starting Cyberduck, the Bookmark tab will show up. To add a new bookmark, click on the \"+\" sign on the bottom left of the window. A new window will open.

                  2. In the drop-down menu on top, select \"SFTP (SSH File Transfer Protocol)\".

                  3. In the \"Server\" field, type in login.hpc.ugent.be. In the \"Username\" field, type in your VSC account id (this looks like vsc40000).

                  4. Select the location of your SSH private key in the \"SSH Private Key\" field.

                  5. Finally, type in a name for the bookmark in the \"Nickname\" field and close the window by pressing on the red circle in the top left corner of the window.

                  To open the connection, click on the \"Bookmarks\" icon (which resembles an open book) and double-click on the bookmark you just created.

                  "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

                  See the section on rsync in chapter 5 of the Linux intro manual.

                  "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

                  It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

                  For instance, if you want to switch to the login node named gligar07.gastly.os, you can use the following command while you are connected to the gligar08.gastly.os login node on the HPC:

                  ssh gligar07.gastly.os\n
                  This is also possible the other way around.

                  If you want to find out which login host you are connected to, you can use the hostname command.

                  $ hostname\ngligar07.gastly.os\n$ ssh gligar08.gastly.os\n\n$ hostname\ngligar08.gastly.os\n

                  Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can make sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or on other online sources):

                  • screen
                  • tmux
                  "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

                  It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high-availability setup, users should add their cron scripts on the same login node to avoid any cron job script duplication.

                  In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC user account (see section Connecting).

                  Check if any cron script is already set on the current login node with:

                  crontab -l\n

                  At this point you can add/edit (with the vi editor) any cron script by running the command:

                  crontab -e\n
                  "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
                   15 5 * * * ~/runscript.sh >& ~/job.out\n

                  where runscript.sh has these lines in this example:

                  runscript.sh
                  #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

                  In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.
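
                  As a quick reference, the five schedule fields are minute, hour, day of month, month and day of week. A sketch with two entries (the weekly.sh script is just a hypothetical placeholder):

                  # min  hour  dom  mon  dow  command\n  15    5     *    *    *   ~/runscript.sh >& ~/job.out      # every day at 05:15\n   0    8     *    *    1   ~/weekly.sh >& ~/weekly.out      # every Monday at 08:00 (hypothetical)\n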

                  Please note that you should log in to the same login node to edit your previously created crontab tasks. If that is not the case, you can always jump from one login node to another with:

                  ssh gligar07    # or gligar08\n
                  "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

                  You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

                  EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

                  "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

                  For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

                  • applying custom patches to the software that only you or your group are using

                  • evaluating new software versions prior to requesting a central software installation

                  • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

                  "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

                  Before you use EasyBuild, you need to configure it:

                  "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

                  This is where EasyBuild can find software sources:

                  export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
                  • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

                  • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

                  "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

                  This directory is where EasyBuild will build software. To have good performance, this needs to be on a fast filesystem.

                  export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

                  On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
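
                  For example (only do this inside a job on a compute node, since /dev/shm resides in memory and is shared with the memory your job uses):

                  export EASYBUILD_BUILDPATH=/dev/shm/$USER\n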

                  "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

                  This is where EasyBuild will install the software (and accompanying modules) to.

                  For example, to let it use $VSC_DATA/easybuild, use:

                  export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

                  Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

                  Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

                  To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.

                  "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

                  Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

                  module load EasyBuild\n
                  "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

                  EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

                  $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

                  For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

                  eb example-1.2.1-foss-2024a.eb --robot\n
                  "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

                  To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

                  To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

                  eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

                  To try to install example v1.2.5 with a different compiler toolchain:

                  eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
                  "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

                  To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

                  "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

                  To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

                  module use $EASYBUILD_INSTALLPATH/modules/all\n

                  It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux.
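
                  As a sketch, the EasyBuild-related part of your .bashrc could then look like this (using the configuration choices from the sections above):

                  # EasyBuild configuration\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n\n# make the modules installed with EasyBuild available for loading\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n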

                  "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

                  As HPC system administrators, we often observe that the HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

                  Users often tend to run their jobs without specifying specific PBS Job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can slow down the run time of your application, but also block HPC resources for other users.

                  Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

                  There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

                  Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

                  Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

                  This chapter shows you how to measure:

                  1. Walltime
                  2. Memory usage
                  3. CPU usage
                  4. Disk (storage) needs
                  5. Network bottlenecks

                  First, we allocate a compute node and move to our relevant directory:

                  qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
                  "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

                  One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

                  The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

                  Test the time command:

                  $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

                  It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

                  It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

                  The walltime can be specified in a job script as:

                  #PBS -l walltime=3:00:00:00\n

                  or on the command line

                  qsub -l walltime=3:00:00:00\n

                  It is recommended to always specify the walltime for a job.
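
                  For example, if a test run of your application finishes in about 2 hours and 30 minutes, requesting 3 hours (hours:minutes:seconds; the four-field form above uses days as the leading field) leaves a safe margin of roughly 20%:

                  #PBS -l walltime=3:00:00\n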

                  "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

                  In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

                  "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

                  The first point is to be aware of the available free memory in your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the options \"-m\" to see the results expressed in Mega-Bytes and the \"-t\" option to get totals.

                  $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

                  It is important to note the total amount of memory available in the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

                  It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

                  On the UGent clusters, there is no swap space available for jobs; you can only use physical memory, even though \"free\" will show swap.

                  "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

                  To monitor the memory consumption of a running application, you can use the \"top\" or the \"htop\" command.

                  top

                  provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

                  htop

                  is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.
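
                  For example, to limit the view to your own processes while your job is running on a node (replace vsc40000 with your own VSC id):

                  top -u vsc40000\nhtop -u vsc40000\n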

                  "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

                  Once you gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

                  The maximum amount of physical memory used by the job per node can be specified in a job script as:

                  #PBS -l mem=4gb\n

                  or on the command line

                  qsub -l mem=4gb\n
                  "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

                  Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are decently specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

                  "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

                  The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

                  The /proc/cpuinfo file stores info about your CPU architecture, like the number of CPUs, threads, cores, information about CPU caches, CPU family, model and much more. So, if you want to detect how many cores are available on a specific machine:

                  $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

                  Or if you want to see it in a more readable format, execute:

                  $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
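
                  Counting the processor lines, or simply running the nproc command, gives you the number of cores directly:

                  $ grep -c processor /proc/cpuinfo\n8\n$ nproc\n8\n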

                  Note

                  Unless you want information about the login nodes, you'll have to issue these commands on one of the worker nodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

                  In order to specify the number of nodes and the number of processors per node in your job script, use:

                  #PBS -l nodes=N:ppn=M\n

                  or with equivalent parameters on the command line

                  qsub -l nodes=N:ppn=M\n

                  This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

                  You can also use this statement in your job script:

                  #PBS -l nodes=N:ppn=all\n

                  to request all cores of a node, or

                  #PBS -l nodes=N:ppn=half\n

                  to request half of them.

                  Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

                  "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

                  This could also be monitored with the htop command:

                  htop\n
                  Example output:
                    1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

                  The advantage of htop is that it shows you the cpu utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"cpu_eat\" program in 4 different terminals, and inspect the cpu utilisation per processor with monitor and htop.

                  If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by top found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

                  "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

                  It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the HPC) would appreciate it if you use the CPU resources that are assigned to you to the fullest, and make sure that no CPUs in your node are left idle without reason.

                  But how can you maximise?

                  1. Configure your software (e.g., to use exactly the number of processors available in a node; see the sketch after this list).
                  2. Develop your parallel program in a smart way.
                  3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
                  4. Correct your request for CPUs in your job script.
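                  A minimal sketch of the first point: inside a job script you can derive the number of cores assigned to the job from the $PBS_NODEFILE file provided by PBS instead of hard-coding it (my_application and its --threads option are hypothetical placeholders):

                  #!/bin/bash\n#PBS -l nodes=1:ppn=8\n\n# $PBS_NODEFILE contains one line per core assigned to this job\nNP=$(wc -l < $PBS_NODEFILE)\n\n# pass the core count to your application (--threads is a hypothetical option)\nmy_application --threads $NP\n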
                  "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

                  On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

                  The system load is the number of applications running or waiting to run on the compute node. In a system with, for example, four CPUs, a load average of 3.61 would indicate that, on average, 3.61 processes were ready to run and each of them could be scheduled onto a CPU.

                  The load averages differ from CPU percentage in two significant ways:

                  1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
                  2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
                  "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

                  What is the \"optimal load\" rule of thumb?

                  The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is when the average matches the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. The optimal load should be between 0.7 and 1.0 per processor.

                  In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.
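                  A quick sanity check along these lines is to compare the load averages with the number of cores on the node; nproc prints the number of available cores, and the first three fields of /proc/loadavg are the 1-, 5- and 15-minute load averages:

                  nproc\ncat /proc/loadavg\n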

                  Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time might be more than one per processor.

                  The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

                  1. When you are running computational intensive applications, one application per processor will generate the optimal load.
                  2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

                  The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking which configuration yields the highest throughput. There is, however, currently no way on the HPC to dynamically specify the maximum number of applications that may run per core; the HPC scheduler will not launch more than one process per core.

                  How the cores are spread out over CPUs does not matter as far as the load is concerned: two quad-cores perform similarly to four dual-cores, which in turn perform similarly to eight single-cores. For these purposes, it is all just eight cores.

                  "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

                  The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

                  The uptime command will show us the average load

                  $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

                  Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

                  $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
                  You can also read the load averages in the htop output.

                  "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

                  It is good practice to perform a number of run time stress tests, and to check the system load of your nodes. We (and all other users of the HPC) would appreciate it if you make maximal use of the CPU resources that are assigned to you and ensure that no CPUs in your node sit idle without reason.

                  But how can you maximise?

                  1. Profile your software to improve its performance.
                  2. Configure your software (e.g., to exactly use the available amount of processors in a node).
                  3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
                  4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which have a specific number of cores.
                  5. Correct your request for CPUs in your job script.

                  And then check again.

                  "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

                  Some programs generate intermediate or output files, the size of which may also be a useful metric.

                  Remember that your available disk space on the HPC online storage is limited, and that environment variables are available which point to these directories (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

                  It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to section How much disk space do I get? on Quotas to check your quota, and for tools to find out which files consumed your quota.
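                  To keep an eye on the size of the files a job produces, a few standard commands suffice (output.dat and the myjob directory are hypothetical examples):

                  ls -lh output.dat                      # size of a single file, human-readable\ndu -sh $VSC_SCRATCH/myjob              # total size of a directory\nwatch -n 60 du -sh $VSC_SCRATCH/myjob  # refresh the total size every 60 seconds\n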

                  Several actions can be taken, to avoid storage problems:

                  1. Be aware of all the files that are generated by your program. Also check out the hidden files.
                  2. Check your quota consumption regularly.
                  3. Clean up your files regularly.
                  4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, move your files in one go to the $VSC_DATA directories (see the sketch after this list).
                  5. Make sure your programs clean up their temporary files after execution.
                  6. Move your output results to your own computer regularly.
                  7. Anyone can request more disk space from the HPC staff, but you will have to duly justify your request.
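                  A minimal sketch of point 4, assuming a hypothetical program my_program that reads input.txt and writes output.txt (all three names are placeholders):

                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:00:00\n\n# work on the node-local scratch space\ncd $VSC_SCRATCH_NODE\ncp $VSC_DATA/input.txt .\n\nmy_program input.txt output.txt\n\n# move the result back to $VSC_DATA in one go, with a unique name\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n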
                  "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

                  Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is most likely an indication that they lose a lot of time on inter-process communication.

                  Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high-bandwidth, low-latency network that enables large parallel jobs to run as efficiently as possible.

                  The parameter to add in your job script would be:

                  #PBS -l ib\n

                  If, for some reason, a user is fine with the gigabit Ethernet network, they can specify:

                  #PBS -l gbe\n
                  "}, {"location": "getting_started/", "title": "Getting Started", "text": "

                  Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the HPC-UGent infrastructure and submitting your very first job. We'll also walk you through the process step by step using a practical example.

                  In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

                  Before proceeding, read the introduction to HPC to gain an understanding of the HPC-UGent infrastructure and related terminology.

                  "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

                  To get access to the HPC-UGent infrastructure, visit Getting an HPC Account.

                  If you have not used Linux before, now would be a good time to follow our Linux Tutorial.

                  "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
                  1. Connect to the login nodes
                  2. Transfer your files to the HPC-UGent infrastructure
                  3. Optional: compile your code and test it
                  4. Create a job script and submit your job
                  5. Wait for job to be executed
                  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

                  We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

                  "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

                  There are two options to connect

                  • Using a terminal to connect via SSH (for power users) (see First Time connection to the HPC-UGent infrastructure)
                  • Using the web portal

                  Considering your operating system is Linux, it should be easy to make use of the ssh command in a terminal, but the web portal will work too.

                  The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

                  See shell access when using the web portal, or connection to the HPC-UGent infrastructure when using a terminal.

                  Make sure you can get shell access to the HPC-UGent infrastructure before proceeding with the next steps.

                  Info

                  If you run into problems, see the connection issues section on the troubleshooting page.

                  "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

                  Now that you can login, it is time to transfer files from your local computer to your home directory on the HPC-UGent infrastructure.

                  Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

                  On your local machine you can run:

                  curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

                  Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

                  scp tensorflow_mnist.py run.sh vsc40000@login.hpc.ugent.be:~\n

                  ssh  vsc40000@login.hpc.ugent.be\n

                  Use your own VSC account id

                  Replace vsc40000 with your VSC account id (see https://account.vscentrum.be)

                  Info

                  For more information about transferring files or scp, see transfer files from/to hpc.

                  When running ls in your session on the HPC-UGent infrastructure, you should see the two files listed in your home directory (~):

                  $ ls ~\nrun.sh tensorflow_mnist.py\n

                  When you do not see these files, make sure you uploaded the files to your home directory.

                  "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

                  Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

                  A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

                  Our job script looks like this:

                  run.sh

                  #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
                  As you can see this job script will run the Python script named tensorflow_mnist.py.

                  The jobs you submit are by default executed on cluster/doduo; you can swap to another cluster by issuing the following command.

                  module swap cluster/donphan\n

                  Tip

                  When submitting jobs with a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

                  To get a list of all clusters and their hardware, see https://www.ugent.be/hpc/en/infrastructure.

                  This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

                  $ qsub run.sh\n123456\n

                  This command returns a job identifier (123456) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.

                  Make sure you understand what the module command does

                  Note that the module commands only modify environment variables. For instance, running module swap cluster/donphan will update your shell environment so that qsub submits a job to the donphan cluster, but your active shell session is still running on the login node.

                  It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are executed: they will still run on the login node you are on.

                  When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like donphan).

                  For detailed information about module commands, read the running batch jobs chapter.

                  "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

                  Your job is put into a queue before being executed, so it may take a while before it actually starts. (see when will my job start? for scheduling policy).

                  You can get an overview of the active jobs using the qstat command:

                  $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:00  Q donphan\n

                  Eventually, after entering qstat again you should see that your job has started running:

                  $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:01  R donphan\n

                  If you don't see your job in the output of the qstat command anymore, your job has likely completed.

                  Read this section on how to interpret the output.

                  "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

                  When your job finishes, it generates 2 output files:

                  • One for normal output messages (stdout output channel).
                  • One for warning and error messages (stderr output channel).

                  By default, these are located in the directory where you issued qsub.

                  Info

                  For more information about the stdout and stderr output channels, see this section.

                  In our example when running ls in the current directory you should see 2 new files:

                  • run.sh.o123456, containing normal output messages produced by job 123456;
                  • run.sh.e123456, containing errors and warnings produced by job 123456.

                  Info

                  run.sh.e123456 should be empty (no errors or warnings).

                  Use your own job ID

                  Replace 123456 with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.
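                  One way to inspect both files from the shell, assuming the job ID 123456 used in this example:

                  cat run.sh.o123456   # normal output\ncat run.sh.e123456   # errors and warnings (should be empty here)\n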

                  When examining the contents of run.sh.o123456 you will see something like this:

                  Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

                  Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

                  Warning

                  When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

                  For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

                  "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
                  • Running interactive jobs
                  • Running jobs with input/output data
                  • Multi core jobs/Parallel Computing
                  • Interactive and debug cluster

                  For more examples see Program examples and Job script examples

                  "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

                  To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

                  module swap cluster/joltik\n

                  To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

                  module swap cluster/accelgor\n

                  Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

                  "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

                  To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).
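                  For example, to start an interactive session with a single GPU for 2 hours (the core count and walltime are just illustrative values):

                  qsub -I -l nodes=1:ppn=8:gpus=1 -l walltime=2:00:00\n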

                  Note that due to a bug in Slurm you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@ugent.be.

                  "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

                  See https://www.ugent.be/hpc/en/infrastructure.

                  "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

                  There are 2 main ways to ask for GPUs as part of a job:

                  • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z notation is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want full control or in multi-node cases like MPI jobs. If you do not specify the number of GPUs by just using -l gpus, you get 1 GPU by default. (See the example after this list.)

                  • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.
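                  To make the first point concrete, the following two submissions are equivalent single-node requests for one GPU with the default number of cores per GPU (myjob.sh is a hypothetical job script):

                  qsub -l nodes=1:gpus=1 myjob.sh\nqsub -l gpus=1 myjob.sh\n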

                  Some background:

                  • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

                  • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

                  "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

                  Some important attention points:

                  • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

                  • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

                  • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with regard to the requested resources (i.e. it also supports the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

                  • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

                  "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

                  Use module avail to check for centrally installed software.

                  The subsections below only cover a couple of installed software packages, more are available.

                  "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

                  Please consult module avail GROMACS for a list of installed versions.

                  "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

                  Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

                  Please consult module avail Horovod for a list of installed versions.

                  Horovod supports TensorFlow, Keras, PyTorch and MxNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; we are not sure whether it handles placement and other aspects correctly.)

                  At least for simple TensorFlow benchmarks, it looks like Horovod is a bit faster than regular multi-GPU TensorFlow with automatic device detection, but it comes at the cost of the code modifications needed to use Horovod.

                  "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

                  Please consult module avail PyTorch for a list of installed versions.

                  "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

                  Please consult module avail TensorFlow for a list of installed versions.

                  Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

                  "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
                  #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
                  "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

                  Please consult module avail AlphaFold for a list of installed versions.

                  For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

                  "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

                  In case of questions or problems, please contact the HPC-UGent team via hpc@ugent.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

                  "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

                  The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

                  This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

                  Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor) jobs on this cluster should normally start more or less immediately. The trade-off is that the submitted jobs should not be performance-critical. This means that typical workloads for this cluster should be limited to:

                  • Interactive jobs (see chapter\u00a0Running interactive jobs)

                  • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

                  • Jobs requiring few resources

                  • Debugging programs

                  • Testing and debugging job scripts

                  "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

                  To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

                  module swap cluster/donphan\n

                  Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

                  "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

                  Some limits are in place for this cluster:

                  • each user may have at most 5 jobs in the queue (both running and waiting to run);

                  • at most 3 jobs per user can be running at the same time;

                  • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

                  In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

                  Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

                  "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

                  Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

                  All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

                  "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

                  \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

                  While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

                  A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

                  The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

                  Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

                  Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

                  "}, {"location": "introduction/#what-is-the-hpc-ugent-infrastructure", "title": "What is the HPC-UGent infrastructure?", "text": "

                  The HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

                  The HPC-UGent infrastructure relies on parallel-processing technology to offer UGent researchers an extremely fast solution for all their data processing needs.

                  The HPC currently consists of:

                  a set of different compute clusters. For an up to date list of all clusters and their hardware, see https://vscdocumentation.readthedocs.io/en/latest/gent/tier2_hardware.html.

                  Job management and job scheduling are performed by Slurm with a Torque frontend. We advise users to adhere to Torque commands mentioned in this document.

                  "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

                  The HPC infrastructure is not a magic computer that automatically:

                  1. runs your PC-applications much faster for bigger problems;

                  2. develops your applications;

                  3. solves your bugs;

                  4. does your thinking;

                  5. ...

                  6. allows you to play games even faster.

                  The HPC does not replace your desktop computer.

                  "}, {"location": "introduction/#is-the-hpc-a-solution-for-my-computational-needs", "title": "Is the HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

                  Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

                  It is also possible to run programs on the HPC that require user interaction (pushing buttons, entering input data, etc.). Although technically possible, the use of the HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the HPC staff can unveil whether the HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

                  "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

                  In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

                  Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

                  "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

                  Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

                  Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

                  The two parallel programming paradigms most used in HPC are:

                  • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

                  • MPI for distributed memory systems (multiprocessing): on multiple nodes

                  Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
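                  As a minimal illustration of how these two paradigms are typically compiled and launched (omp_program.c and mpi_program.c are hypothetical source files, the thread count is just an example, and a compiler/MPI module is assumed to be loaded):

                  # OpenMP: one multithreaded process on a single node\ngcc -fopenmp omp_program.c -o omp_program\nexport OMP_NUM_THREADS=8\n./omp_program\n\n# MPI: multiple processes, possibly spread over several nodes\nmodule load vsc-mympirun\nmpicc mpi_program.c -o mpi_program\nmympirun ./mpi_program\n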

                  "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

                  Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

                  It is perfectly possible to also run purely sequential programs on the HPC.

                  Running your sequential programs on the most modern and fastest computers in the HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.
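                  A minimal sketch of such a parameter sweep, assuming a job script sweep.sh that reads the PARAM environment variable (both names are hypothetical); qsub's -v option passes environment variables to the job:

                  for PARAM in 0.1 0.2 0.5 1.0; do\n    qsub -v PARAM=$PARAM sweep.sh\ndone\n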

                  "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

                  You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

                  For the most common programming languages, a compiler is available on RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). Supported and common programming languages on the HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

                  Supported and commonly used compilers are GCC and Intel.

                  Additional software can be installed \"on demand\". Please contact the HPC staff to see whether the HPC can handle your specific requirements.

                  "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

                  All nodes in the HPC cluster run under RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty), which is a specific version of Red Hat Enterprise Linux. This means that all programs (executables) should be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

                  Users can connect from any computer in the UGent network to the HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the HPC.

                  A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

                  "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

                  A typical workflow looks like:

                  1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

                  2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

                  3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

                  4. Create a job script and submit your job (see Running batch jobs)

                  5. Get some coffee and be patient:

                    1. Your job gets into the queue

                    2. Your job gets executed

                    3. Your job finishes

                  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

                  "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

                  When you think that the HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the HPC cluster.

                  Do not hesitate to contact the HPC staff for any help.

                  1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

                  "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

                  This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

                  • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

                  • -m/-M: the -m option will send emails to your email address registered with VSC. Only if you want emails at some other address should you use the -M option.

                  • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

                  • To use a situational parameter, remove one '#' at the beginning of the line.

                  simple_jobscript.sh
                  #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
                  "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

                  Here's an example of a single-core job script:

                  single_core.sh
                  #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
                  1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

                  2. A module for Python 3.6 is loaded, see also section Modules.

                  3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

                  4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

                  5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to a unique file in $VSC_DATA. For a list of possible storage locations, see subsection Pre-defined user directories.

                  "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

                  Here's an example of a multi-core job script that uses mympirun:

                  multi_core.sh
                  #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

                  An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

                  "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

                  If you want to run a job but are not sure it will finish before the walltime runs out, and you want to copy data back before it does, you have to stop the main command before the walltime expires and then copy the data back.

                  This can be done with the timeout command. This command sets a limit of time a program can run for, and when this limit is exceeded, it kills the program. Here's an example job script using timeout:

                  timeout.sh
                  #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minute,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

                  The example program used in this script is a dummy script that simply sleeps a specified amount of minutes:

                  example_program.sh
                  #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
                  "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

                  A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plaintext. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs make it a useful tool for data analysis, machine learning and educational purposes.

                  "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

                  Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

                  After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

                  When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

                  and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

                  This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

                  "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

                  A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

                  To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters>>_login Shell Access.

                  We can see all available versions of the SciPy module by using module avail SciPy-bundle:

                  $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

                  Not all modules will work for every notebook, we need to use the one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

                  Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that this module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

                  $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

                  The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

                  It is also recommended to doublecheck the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

                  $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
                  This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

                  If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

                  $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

                  Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.

                  "}, {"location": "known_issues/", "title": "Known issues", "text": "

                  This page provides details on a couple of known problems, and the workarounds that are available for them.

                  If you have any questions related to these issues, please contact the HPC-UGent team.

                  • Operation not permitted error for MPI applications
                  "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

                  When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

                  Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

                  This error means that an internal problem has occurred in OpenMPI.

                  "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

                  This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

                  It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

                  "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

                  We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

                  "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

                  A workaround has been implemented in mympirun (version 5.4.0).

                  Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

                  module load vsc-mympirun\n

                  and launch your MPI application using the mympirun command.

                  For more information, see the mympirun documentation.

                  "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

                  If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

                  export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
                  "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

                  We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

                  "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

                  There are two important motivations to engage in parallel programming.

                  1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

                  2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that you can, in principle, split up your computations into groups and run each group on its own core.

There are multiple ways to achieve parallel programming. The table below gives a (non-exhaustive) overview of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

Tool | Available language bindings | Limitations

Raw threads (pthreads, boost::threading, ...) | Threading libraries are available for all common programming languages | Threads are limited to shared memory systems. They are more often used on single-node systems than for HPC. Thread management is hard.

OpenMP | Fortran/C/C++ | Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelised by simple insertion of compiler directives. Under the hood, threads are used. Hybrid approaches exist which use OpenMP to parallelise the workload on each node and MPI (see below) for communication between nodes.

Lightweight threads with clever scheduling (Intel TBB, Intel Cilk Plus) | C/C++ | Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler, enabling the programmer to focus on the parallelisation itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes.

MPI | Fortran/C/C++, Python | Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication.

Global Arrays library | C/C++, Python | Mimics a global address space on distributed memory systems by distributing arrays over many nodes and using one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

                  Tip

You can request more nodes/cores by adding the following line to your job script.

                  #PBS -l nodes=2:ppn=10\n
This requests 2 nodes with 10 cores per node, i.e., 20 cores in total.

                  Warning

                  Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

                  "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

                  Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

An advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be manipulated correctly, threads will often need to synchronise in time so that they process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) to prevent common data from being modified simultaneously, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

                  Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

                  Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

                  Go to the example directory:

                  cd ~/examples/Multi-core-jobs-Parallel-Computing\n

                  Note

                  If the example directory is not yet present, copy it to your home directory:

                  cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  Study the example first:

                  T_hello.c
/*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\n/* return 0 to signal success */\nreturn 0;\n}\n

Compile it (linking in the pthread library with -lpthread), then run and test it on the login node:

                  $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

                  Now, run it on the cluster and check the output:

                  $ qsub T_hello.pbs\n123456\n$ more T_hello.pbs.o123456\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

                  Tip

If you plan to engage in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers, by Cameron Hughes and Tracey Hughes, Wrox, 2008.

                  "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

                  OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

                  An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

                  Here is the general code structure of an OpenMP program:

                  #include <omp.h>\nmain ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

                  "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

                  By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

                  "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

Parallelising for loops is really simple (see the code below). By default, the loop iteration counter in an OpenMP loop construct (in this case the variable i) is treated as a private variable.

                  omp1.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

Compile it (enabling OpenMP support with the -fopenmp compiler flag), then run and test it on the login node:

                  $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

Now run it on the cluster and check the result again.

                  $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
                  "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

Using OpenMP, you can specify a so-called \"critical\" section of code. This is code that is executed by all threads, but by only one thread at a time (i.e., in serial). This provides a convenient way of doing things like updating a global variable with local results from each thread, without having to worry about other threads writing to that global variable at the same time (a collision).

                  omp2.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

Compile it (enabling OpenMP support with the -fopenmp compiler flag), then run and test it on the login node:

                  $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

Now run it on the cluster and check the result again.

                  $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
                  "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

                  Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). Indeed we used this paradigm in the code example above, where we used the \"critical code\" directive to accomplish this. The map-reduce paradigm is so common that OpenMP has a specific directive that allows you to more easily implement this.

                  omp3.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

Compile it (enabling OpenMP support with the -fopenmp compiler flag), then run and test it on the login node:

                  $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

Now run it on the cluster and check the result again.

                  $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
                  "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

                  There are a host of other directives you can issue using OpenMP.

                  Some other clauses of interest are:

                  1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

                  2. nowait: threads will not wait until everybody is finished

3. schedule(type, chunk): allows you to specify how loop iterations are divided among the threads in a for loop. The three main scheduling types you can specify are static, dynamic and guided; see the shell example after this list.

                  4. if: allows you to parallelise only if a certain condition is met

                  5. ...\u00a0and a host of others
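
As a shell-level illustration of the schedule clause: if the loop in your code is declared with schedule(runtime), the schedule type and chunk size can be chosen at run time through the OMP_SCHEDULE environment variable (a sketch, reusing the omp1 example; note that OMP_SCHEDULE has no effect on loops with a hard-coded schedule):

export OMP_NUM_THREADS=8\nexport OMP_SCHEDULE=\"dynamic,100\"\n./omp1\n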

                  Tip

If you plan to engage in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming, by Barbara Chapman, Gabriele Jost and Ruud van der Pas (Scientific and Engineering Computation series), 2005.

                  "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

                  The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

                  In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

The process numbers 0, 1 and 2 represent the process rank and have greater or lesser significance depending on the processing paradigm. At a minimum, process 0 handles the input/output and determines which other processes are running.

                  The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto multiple nodes, where each node can be a core within a single CPU, or CPUs within a single machine, or even across multiple machines (as long as they are networked together).

                  One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write a program using MPI that could run your program in parallel, across any collection of computers, as long as they are networked together.

                  Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

                  Study the MPI-programme and the PBS-file:

                  mpi_hello.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <mpi.h>\n\n#include <mpi.h>\n#include <stdio.h>\n#include <string.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this processes' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
                  mpi_hello.pbs
                  #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

                  and compile it:

                  $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

mpiicc is a wrapper around the Intel C compiler icc that adds the flags needed to compile MPI programs (see the chapter on compilation for details).
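
If you are curious which underlying icc command mpiicc runs, the Intel MPI compiler wrappers typically accept a -show option that prints the full compiler command line without executing it (shown here purely as an illustration):

$ mpiicc -show mpi_hello.c\n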

                  Run the parallel program:

                  $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc40000 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc40000 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc40000    0 Sep 16 14:22 mpi_hello.o123456\n-rw------- 1 vsc40000  697 Sep 16 14:22 mpi_hello.o123456\n-rw-r--r-- 1 vsc40000  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o123456\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different executables to be started in the same MPI job. Each process knows its own rank and the total number of processes in the world, and has the ability to communicate with the others, either with point-to-point (send/receive) communication or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.

MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it scales to the runtime configuration without recompilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.

                  Tip

mpirun does not always do the optimal core pinning and requires a few extra arguments to be as efficient as possible on a given system. At Ghent we have a wrapper around mpirun called mympirun. See the Mympirun chapter for more information.

You will generally just start an MPI program on the cluster by using mympirun instead of mpirun -n <nr of cores> <--other settings> <--other optimisations>.
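
For example, instead of spelling out the mpirun invocation yourself, you would typically just run (a sketch, reusing the mpi_hello example from above):

# instead of something like: mpirun -np 16 <other settings> ./mpi_hello\nmympirun ./mpi_hello\n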

                  Tip

If you plan to engage in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI, by Peter Pacheco, Morgan Kaufmann, 1996.

                  "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

A frequently occurring characteristic of scientific computations is their focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or (ii) different input files.

These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The user wants to run the job once for each instance of the parameter values.

One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs: such huge numbers of small jobs create a lot of overhead and can slow down the whole cluster. It is better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

                  The \"Worker framework\" has been developed to address this issue.

                  It can handle many small jobs determined by:

                  parameter variations

                  i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

                  job arrays

i.e., each individual job gets a unique numeric identifier.

                  Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

                  However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

                  "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

                  First go to the right directory:

                  cd ~/examples/Multi-job-submission/par_sweep\n

                  Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

                  $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

                  For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

                  par_sweep/weather
                  #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

                  A job script that would run this as a job for the first parameters (p01) would then look like:

                  par_sweep/weather_p01.pbs
                  #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

                  To submit the job, the user would use:

                   $ qsub weather_p01.pbs\n
However, the user wants to run this program for many parameter instances, e.g., for 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv), which can be generated using a spreadsheet program such as Microsoft Excel, exported from an RDBMS, or written by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

                  $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

                  It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.
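
Such a file does not have to be written by hand; a small script can generate it just as well. The following sketch reproduces the first lines shown above (the actual parameter values are of course up to you):

#!/bin/bash\necho \"temperature, pressure, volume\" > data.csv\nfor t in $(seq 293 392); do\necho \"$t, 1.0e5, $((400 - t))\" >> data.csv\ndone\n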

                  In order to make our PBS generic, the PBS file can be modified as follows:

                  par_sweep/weather.pbs
                  #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

                  Note that:

1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

                  2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

                  3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, i.e., a bit over 3 hours; the walltime is set to 4 hours to be on the safe side.

                  The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

                  $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 41\n123456\n

                  Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

                  Warning

                  When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

                  module swap env/slurm/donphan\n

                  instead of

                  module swap cluster/donphan\n
                  We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.
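
Putting this together for the donphan example, the order of commands would be something like the following sketch (reusing the parameter sweep example from above):

module swap env/slurm/donphan\nmodule load worker/1.6.12-foss-2021b\nwsub -batch weather.pbs -data data.csv\n# switch to the corresponding cluster module after submitting\nmodule swap cluster/donphan\n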

                  "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

                  First go to the right directory:

                  cd ~/examples/Multi-job-submission/job_array\n

                  As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

                  The following bash script would submit these jobs all one by one:

#!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output_$i -i input_$i myprog.pbs\ndone\n

This, as said before, would put an unnecessary burden on the job scheduler.

                  Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

                  Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

                  The details are

                  1. a job is submitted for each number in the range;

2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid, for easy killing etc.; and

3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise its behaviour for that job.

                  The job could have been submitted using:

                  qsub -t 1-100 my_prog.pbs\n

                  The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

                  To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

                  A typical job script for use with job arrays would look like this:

                  job_array/job_array.pbs
                  #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

                  In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

                  Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

$ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file #99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory, in the files output_1.dat, output_2.dat, ..., output_100.dat.

                  job_array/test_set
                  #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

                  Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

                  job_array/test_set.pbs
                  #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

                  Note that

                  1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

                  2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

                  The job is now submitted as follows:

                  $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n123456\n

                  The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

                  Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

                  $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n123456  test_set.pbs  vsc40000          0 Q\n\nAnd you can now check the generated output files:\n$ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
                  "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

                  Often, an embarrassingly parallel computation can be abstracted to three simple steps:

                  1. a preparation phase in which the data is split up into smaller, more manageable chunks;

                  2. on these chunks, the same algorithm is applied independently (these are the work items); and

                  3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

                  The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

                  cd ~/examples/Multi-job-submission/map_reduce\n

                  The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

                  First study the scripts:

                  map_reduce/pre.sh
#!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" >  ./input/input_$i.dat\necho \"Parameter #1 = $i\" >>  ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >>  ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >>  ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >>  ./input/input_$i.dat\ndone\n
                  map_reduce/post.sh
                  #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

                  Then one can submit a MapReduce style job as follows:

                  $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n123456\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

                  Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

                  "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

                  The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

                  The \"Worker Framework\" will be effective when

                  1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

                  2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

                  "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

                  Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log123456, assuming the job's ID is 123456. To keep an eye on the progress, one can use:

                  tail -f run.pbs.log123456\n

                  Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

                  watch -n 60 wsummarize run.pbs.log123456\n

                  This will summarise the log file every 60 seconds.

                  "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

                  Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

#!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 ./weather -t $temperature  -p $pressure  -v $volume\n

                  Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.
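
For example, assuming an extra column named time_limit is added to \"data.csv\" (a hypothetical name used only for illustration), the last line of the script could become:

# data.csv would then start with: temperature, pressure, volume, time_limit\ntimedrun -t $time_limit ./weather -t $temperature -p $pressure -v $volume\n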

                  Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

                  "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"123456\".

                  wresume -jobid 123456\n

                  This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

                  wresume -l walltime=1:30:00 -jobid 123456\n

Work items may fail to complete successfully for a variety of reasons, e.g., a missing data file, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate, either successfully or with a reported failure. It is also possible to retry work items that failed (preferably after the glitch that caused them to fail has been fixed).

                  wresume -jobid 123456 -retry\n

                  By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

                  "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

                  This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

$ wsub -help\n### usage: wsub  -batch <batch-file>          \n#                [-data <data-files>]         \n#                [-prolog <prolog-file>]      \n#                [-epilog <epilog-file>]      \n#                [-log <log-file>]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t <array-req>]             \n#                [<pbs-qsub-options>]\n#\n#   -batch <batch-file>   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data <data-files>    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog <prolog-file> : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog <epilog-file> : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t <array-req>        : qsub's PBS array request options, e.g., 1-10\n#   <pbs-qsub-options>    : options passed on to the queue submission\n#                           command\n
                  "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

                  When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

To check which versions of worker are available, use the following command:

                  $ module avail worker\n
                  1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

                  "}, {"location": "mympirun/", "title": "Mympirun", "text": "

mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

                  In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

                  "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

                  Before using mympirun, we first need to load its module:

                  module load vsc-mympirun\n

                  As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

                  The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

                  For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

                  "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

By default, mympirun starts one process per core on every node you were assigned. So if you were assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

                  "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

This is the most commonly used option for controlling the number of processes.

                  The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

                  $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpihello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
                  "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses twice the number of processes it normally would; and --multi, which does the same as --double but takes an arbitrary multiplier (instead of the implied factor 2 of --double).
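
For example (a sketch, reusing the mpi_hello program from above):

mympirun --universe 4 ./mpi_hello   # start exactly 4 processes\nmympirun --double ./mpi_hello       # start twice the default number of processes\nmympirun --multi 3 ./mpi_hello      # start three times the default number of processes\n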

                  See vsc-mympirun README for a detailed explanation of these options.

                  "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

                  You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

                  $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
                  "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

                  In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC HPC infrastructure.

                  "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

                  There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

                  • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

                    • see also http://openfoam.com/history/
                  • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

                    • see also https://openfoam.org/download/history/
                  • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

                  Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

                  "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

                  The best practices outlined here focus specifically on the use of OpenFOAM on the VSC HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

                  • OpenFOAM websites:

                    • https://openfoam.com

                    • https://openfoam.org

                    • http://wikki.gridcore.se/foam-extend

                  • OpenFOAM user guides:

                    • https://www.openfoam.com/documentation/user-guide

                    • https://cfd.direct/openfoam/user-guide/

                  • OpenFOAM C++ source code guide: https://cpp.openfoam.org

                  • tutorials: https://wiki.openfoam.com/Tutorials

                  • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

                  Other useful OpenFOAM documentation:

                  • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

                  • http://www.dicat.unige.it/guerrero/openfoam.html

                  "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

                  To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

                  "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

                  First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

                  $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

                  To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

                  To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

                  module load OpenFOAM/11-foss-2023a\n
                  "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

                  source $FOAM_BASH\n
                  "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

                  If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

                  source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

                  Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
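
                   Putting this together, a typical preparation sequence for an OpenFOAM shell session or job script could look as follows (the module version shown is just an example; pick one from the output of module avail OpenFOAM):

                   module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\n# only needed if you want to use the tutorial helper functions\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n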

                  "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

                  If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

                   unset FOAM_SIGFPE\n

                   Note that this only prevents OpenFOAM from propagating floating point exceptions, which would otherwise terminate the simulation. It does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating point errors are still occurring.

                  As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

                  "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

                  The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

                  • generate the mesh;

                  • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

                  After running the simulation, some post-processing steps are typically performed:

                  • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

                  • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

                   Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either separately before/after the job running the actual simulation (on the HPC infrastructure or elsewhere), or as a part of the job that runs the OpenFOAM simulation itself.

                  Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

                  One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.
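
                   As a quick sanity check, you can compare the number of subdomains in your case with the number of processor* directories that decomposePar created; a minimal sketch, to be run in the top-level case directory:

                   # number of subdomains the case was decomposed into\ngrep numberOfSubdomains system/decomposeParDict\n# number of processor* directories created by decomposePar\nls -d processor* | wc -l\n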

                   For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. The latter can be useful to avoid the overhead of downloading the results locally.

                  "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

                  For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

                  "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

                  When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

                   You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.
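
                   For example, assuming the output of the main command was written to a file named interFoam.out (as in the example job script further below), you could inspect the nProcs value with:

                   grep -m 1 nProcs interFoam.out\n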

                  "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

                   It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

                  See Basic usage for how to get started with mympirun.

                  To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

                  export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

                  Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
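
                   Putting this together, a parallel OpenFOAM run with mympirun could look like this (the icoFoam solver and the output file name are only illustrative):

                   export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\nmympirun --output=icoFoam.out icoFoam -parallel\n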

                  "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

                  To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

                  Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

                  number of processor directories = 4 is not equal to the number of processors = 16\n

                   In this example, the case was decomposed into 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

                  • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

                  • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar)

                   See Controlling number of processes to control the number of processes mympirun will start.

                   This can be useful if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has a significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.
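
                   As an illustration, the relevant entries in system/decomposeParDict for a decomposition into 8 subdomains using the scotch method could look like this (the values are just an example):

                   numberOfSubdomains  8;\n\nmethod              scotch;\n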

                  To visualise the processor domains, use the following command:

                  mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

                  and then load the VTK files generated in the VTK folder into ParaView.

                  "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

                  OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

                  Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).

                   • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc.\u00a0keywords;

                  • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

                   • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

                  • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid that OpenFOAM re-reads each of the system/*Dict files at every time step;

                  • if the results per individual time step are large, consider setting writeCompression to true;
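
                   As a sketch, the corresponding entries in system/controlDict could look like this (the values shown are purely illustrative; pick values that make sense for your simulation):

                   writeControl        timeStep;\nwriteInterval       100;\npurgeWrite          2;\nwriteCompression    true;\nrunTimeModifiable   false;\n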

                   For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

                  For large parallel OpenFOAM simulations on the UGent Tier-2 clusters, consider using the alternative shared scratch filesystem $VSC_SCRATCH_ARCANINE (see Pre-defined user directories).

                   These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple dozen processor cores.

                  "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

                  See https://cfd.direct/openfoam/user-guide/compiling-applications/.

                  "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

                  Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

                  OpenFOAM_damBreak.sh
                  #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not on available victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
                  "}, {"location": "program_examples/", "title": "Program examples", "text": "

                   If you have not done so already, copy our examples to your home directory by running the following command:

                   cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                   ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

                  Go to our examples:

                  cd ~/examples/Program-examples\n

                   Here, we have put together a number of examples for your convenience. We made an effort to put comments inside the source files, so the source code files should be self-explanatory.

                  1. 01_Python

                  2. 02_C_C++

                  3. 03_Matlab

                  4. 04_MPI_C

                  5. 05a_OMP_C

                  6. 05b_OMP_FORTRAN

                  7. 06_NWChem

                  8. 07_Wien2k

                  9. 08_Gaussian

                  10. 09_Fortran

                  11. 10_PQS

                   The two OMP directories above contain the following examples:

                  C Files Fortran Files Description omp_hello.c omp_hello.f Hello world omp_workshare1.c omp_workshare1.f Loop work-sharing omp_workshare2.c omp_workshare2.f Sections work-sharing omp_reduction.c omp_reduction.f Combined parallel loop reduction omp_orphan.c omp_orphan.f Orphaned parallel loop reduction omp_mm.c omp_mm.f Matrix multiply omp_getEnvInfo.c omp_getEnvInfo.f Get and print environment information omp_bug* omp_bug* Programs with bugs and their solution

                   Compile with any of the following commands:

                  Language Commands C: icc -openmp omp_hello.c -o hello pgcc -mp omp_hello.c -o hello gcc -fopenmp omp_hello.c -o hello Fortran: ifort -openmp omp_hello.f -o hello pgf90 -mp omp_hello.f -o hello gfortran -fopenmp omp_hello.f -o hello
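
                   For example, to compile and run the OpenMP hello world example in C with the GNU compiler using 4 threads (the thread count is just an example):

                   cd ~/examples/Program-examples/05a_OMP_C\ngcc -fopenmp omp_hello.c -o hello\nexport OMP_NUM_THREADS=4\n./hello\n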

                   Feel free to explore the examples.

                  "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

                   Remember to substitute the usernames, login nodes, file names, ... with your own.

                   Login Login ssh vsc40000@login.hpc.ugent.be Where am I? hostname Copy to HPC scp foo.txt vsc40000@login.hpc.ugent.be: Copy from HPC scp vsc40000@login.hpc.ugent.be:foo.txt Setup ftp session sftp vsc40000@login.hpc.ugent.be Modules List all available modules module avail List loaded modules module list Load module module load example Unload module module unload example Unload all modules module purge Help on use of module module help Command Description qsub script.pbs Submit job with job script script.pbs qstat 12345 Status of job with ID 12345 qstat -n 12345 Show compute node of job with ID 12345 qdel 12345 Delete job with ID 12345 qstat Status of all your jobs qstat -na Detailed status of your jobs + a list of nodes they are running on qsub -I Submit Interactive job Disk quota Check your disk quota see https://account.vscentrum.be Disk usage in current directory (.) du -h Worker Framework Load worker module module load worker/1.6.12-foss-2021b Don't forget to specify a version. To list available versions, use module avail worker/ Submit parameter sweep wsub -batch weather.pbs -data data.csv Submit job array wsub -t 1-100 -batch test_set.pbs Submit job array with prolog and epilog wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100"}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

                  Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

                  "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

                   Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

                  This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

                   It will also bring you the latest versions of operating system software, with more features, performance improvements, and enhanced security.

                  "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

                   As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

                  For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

                  $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

                   Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

                   When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

                  "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

                  To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

                  This includes (per user):

                  • max. of 2 CPU cores in use
                  • max. 8 GB of memory in use

                  For more intensive tasks you can use the interactive and debug clusters through the web portal.

                  "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

                   The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

                  However, there will be impact on the availability of software that is made available via modules.

                  Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

                  This includes all software installations on top of a compiler toolchain that is older than:

                  • GCC(core)/12.3.0
                  • foss/2023a
                  • intel/2023a
                  • gompi/2023a
                  • iimpi/2023a
                  • gfbf/2023a

                  (or another toolchain with a year-based version older than 2023a)

                  The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

                  foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

                  If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.

                   It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to verify that it still works. We will provide more RHEL 9 nodes on other clusters to test on soon.

                  "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

                  We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

                  cluster migration start migration completed on skitty Monday 30 September 2024 joltik October 2024 accelgor November 2024 gallade December 2024 donphan February 2025 doduo (default cluster) February 2025 login nodes switch February 2025

                   Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to RHEL 9 login nodes will be done at the same time.

                  We will keep this page up to date when more specific dates have been planned.

                  Warning

                   The planning below is subject to change; some clusters may get migrated later than originally planned.

                  Please check back regularly.

                  "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

                  If you have any questions related to the migration to the RHEL 9 operating system, please contact the HPC-UGent team.

                  "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

                  In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

                   When you connect to the HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decides when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly, and this is only allowed on the nodes where you have a job running. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the HPC the entire time.

                  The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

                  "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

                  Software installation and maintenance on a HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the HPC, which is able to easily activate or deactivate the software packages that you require for your program execution.

                  "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

                  The program environment on the HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

                  All the software packages that are installed on the HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

                  "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

                  In order to administer the active software and their environment variables, the module system has been developed, which:

                  1. Activates or deactivates software packages and their dependencies.

                  2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

                  3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

                   4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

                  5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

                  This is all managed with the module command, which is explained in the next sections.

                  There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

                  "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

                  A large number of software packages are installed on the HPC clusters. A list of all currently available software can be obtained by typing:

                  module available\n

                   It's also possible to execute module av or module avail; these are shorter to type and will do the same thing.

                  This will give some output such as:

                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                   You can also check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

                  This gives a full list of software packages that can be loaded.

                  The casing of module names is important: lowercase and uppercase letters matter in module names.

                  "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

                   The number of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

                   Therefore, the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, an MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

                  E.g., foss/2024a is the first version of the foss toolchain in 2024.

                  The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

                  "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

                  To \"activate\" a software package, you load the corresponding module file using the module load command:

                  module load example\n

                  This will load the most recent version of example.

                  For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

                   However, you should specify a particular version to avoid surprises when newer versions are installed:

                  module load secondexample/2.7-intel-2016b\n

                  The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

                  Modules need not be loaded one by one; the two module load commands can be combined as follows:

                  module load example/1.2.3 secondexample/2.7-intel-2016b\n

                  This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

                  "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

                  Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

                  $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

                  You can also just use the ml command without arguments to list loaded modules.

                  It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

                  "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

                  To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

                  $ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

                  To unload the secondexample module, you can also use ml -secondexample.

                  Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

                  "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

                  In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

                  module purge\n
                  This is always safe: the cluster module (the module that specifies which cluster jobs will get submitted to) will not be unloaded (because it's a so-called \"sticky\" module).

                  "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

                  Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

                  Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

                  module load example\n

                  rather than

                  module load example/1.2.3\n

                  Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

                  Consider the following example modules:

                  $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

                  Let's now generate a version conflict with the example module, and see what happens.

                  $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

                  Note: A module swap command combines the appropriate module unload and module load commands.

                  "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

                  With the module spider command, you can search for modules:

                  $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

                  It's also possible to get detailed information about a specific module:

                  $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \n\n    You will need to load all module(s) on any one of the lines below before the \"example/1.2.3\" module is available to load.\n\n        cluster/accelgor\n        cluster/doduo \n        cluster/donphan\n        cluster/gallade\n        cluster/joltik \n        cluster/skitty\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
                  "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

                  To get a list of all possible commands, type:

                  module help\n

                  Or to get more information about one specific module package:

                  $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
                  "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

                  If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

                  In each module command shown below, you can replace module with ml.

                  First, load all modules you want to include in the collections:

                  module load example/1.2.3 secondexample/2.7-intel-2016b\n

                  Now store it in a collection using module save. In this example, the collection is named my-collection.

                  module save my-collection\n

                  Later, for example in a jobscript or a new session, you can load all these modules with module restore:

                  module restore my-collection\n

                  You can get a list of all your saved collections with the module savelist command:

                  $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

                  To get a list of all modules a collection will load, you can use the module describe command:

                  $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

                  To remove a collection, remove the corresponding file in $HOME/.lmod.d:

                  rm $HOME/.lmod.d/my-collection\n
                  "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

                  To see how a module would change the environment, you can use the module show command:

                  $ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets youwork more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

                  It's also possible to use the ml show command instead: they are equivalent.

                  Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

                  You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

                  If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

                  "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

                  To check the general system state, check https://www.ugent.be/hpc/en/infrastructure/status. This has information about scheduled downtime, status of the system, ...

                  "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

                  You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

                  You can also get this information in text form (per cluster separately) with the pbsmon command:

                  $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

                   pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module; see the section on Specifying the cluster on which to run. It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

                  "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

                   Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

                   As an example, we will run a Perl script, which you will find in the examples subdirectory on the HPC. When you received an account on the HPC, a subdirectory with examples was automatically generated for you.

                  Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

                  cd\ncp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  First go to the directory with the first examples by entering the command:

                  cd ~/examples/Running-batch-jobs\n

                  Each time you want to execute a program on the HPC you'll need 2 things:

                   The executable: the end-user's program to execute, together with its peripheral input files, databases and/or command options.

                   A batch job script, which will define the computer resource requirements of the program and the required additional software packages, and which will start the actual executable. The HPC needs to know:

                  1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n

                   Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

                  List and check the contents with:

                  $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc40000 609 Sep 11 10:25 fibo.pl\n

                  In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

                  1. The Perl script calculates the first 30 Fibonacci numbers.

                  2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

                  We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

                  On the command line, you would run this using:

                  $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

                   Remark: Recall that you have now executed the Perl script locally on one of the login-nodes of the HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login-nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute-node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

                   The job script contains a description of the job by specifying the commands that need to be executed on the compute node:

                  fibo.pbs
                  #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

                  So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.
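
                   For example, a variant of fibo.pbs that specifies some of these parameters via #PBS directives at the top of the script could look like this (the walltime, resource values and mail option are purely illustrative):

                   #!/bin/bash -l\n#PBS -l walltime=0:10:0\n#PBS -l nodes=1:ppn=1\n#PBS -m abe\ncd $PBS_O_WORKDIR\n./fibo.pl\n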

                  This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

                  $ qsub fibo.pbs\n123456\n

                  The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"123456 \"); this is a unique identifier for the job and can be used to monitor and manage your job.

                  Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

                   To facilitate this, you can use a pre-defined module collection which you can restore using module restore; see the section on Save and load collections of modules for more information.
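
                   A minimal sketch of a job script that restores a previously saved module collection (here named my-collection, as in the example earlier; my_program is a placeholder for your own executable) could look like this:

                   #!/bin/bash -l\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=1\n# restore the modules saved in the my-collection collection\nmodule restore my-collection\ncd $PBS_O_WORKDIR\n./my_program\n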

                  Your job is now waiting in the queue for a free workernode to start on.

                  Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

                  After your job was started, and ended, check the contents of the directory:

                  $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc40000 vsc40000   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc40000 vsc40000    0 Feb 28 13:33 fibo.pbs.e123456\n-rw------- 1 vsc40000 vsc40000 1010 Feb 28 13:33 fibo.pbs.o123456\n-rwxrwxr-x 1 vsc40000 vsc40000  302 Feb 28 13:32 fibo.pl\n

                  Explore the contents of the 2 new files:

                  $ more fibo.pbs.o123456\n$ more fibo.pbs.e123456\n

                  These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('123456' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script)

                  "}, {"location": "running_batch_jobs/#when-will-my-job-start", "title": "When will my job start?", "text": "

                   In practice it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires, and new jobs may be submitted by other users that are assigned a higher priority than your job(s).

                  The HPC-UGent infrastructure clusters use a fair-share scheduling policy (see HPC Policies). There is no guarantee on when a job will start, since it depends on a number of factors. One of these factors is the priority of the job, which is determined by:

                  • Historical use: the aim is to balance usage over users, so infrequent (in terms of total compute time used) users get a higher priority

                  • Requested resources (amount of cores, walltime, memory, ...). The more resources you request, the more likely it is the job(s) will have to wait for a while until those resources become available.

                  • Time waiting in queue: queued jobs get a higher priority over time.

                  • User limits: this avoids having a single user use the entire cluster. This means that each user can only use a part of the cluster.

                  • Whether or not you are a member of a Virtual Organisation (VO).

                    Each VO gets assigned a fair share target, which has a big impact on the job priority. This is done to let the job scheduler balance usage across different research groups.

                    If you are not a member of a specific VO, you are sharing a fair share target with all other users who are not in a specific VO (which implies being in the (hidden) default VO). This can have a (strong) negative impact on the priority of your jobs compared to the jobs of users who are in a specific VO.

                    See Virtual Organisations for more information on how to join a VO, or request the creation of a new VO if there is none yet for your research group.

                  Some other factors are how busy the cluster is, how many workernodes are active, the resources (e.g., number of cores, memory) provided by each workernode, ...

                   It might be beneficial to request fewer resources (e.g., not requesting all cores in a workernode), since the scheduler often finds a \"gap\" to fit the job into more easily.
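
                   For example, instead of requesting a full workernode you could request only a few cores and a modest walltime on the command line (the values are purely illustrative):

                   qsub -l nodes=1:ppn=4 -l walltime=2:00:00 fibo.pbs\n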

                   Sometimes it happens that a couple of nodes are free while your job does not start: empty nodes are not necessarily available for your job(s). Just imagine that an N-node job (with a higher priority than your waiting job(s)) should run. It is quite unlikely that N nodes would be empty at the same moment to accommodate this job, so while fewer than N nodes are empty, they may appear free but are in fact being kept aside for that job. The moment the Nth node becomes empty, the waiting N-node job will consume these N free nodes.

                  "}, {"location": "running_batch_jobs/#specifying-the-cluster-on-which-to-run", "title": "Specifying the cluster on which to run", "text": "

                   To use other clusters, you can swap the cluster module. This is a special module that changes which modules are available to you, and which cluster your jobs will be queued in.

                  By default you are working on doduo. To switch to, e.g., donphan you need to redefine the environment so you get access to all modules installed on the donphan cluster, and to be able to submit jobs to the donphan scheduler so your jobs will start on donphan instead of the default doduo cluster.

                  module swap cluster/donphan\n

                   Note: the donphan modules may not work directly on the login nodes, because the login nodes do not have the same architecture as the donphan cluster; they do have the same architecture as the doduo cluster, which is why software for the default cluster works on the login nodes. See the section on Running software that is incompatible with host for why this is and how to fix this.

                  To list the available cluster modules, you can use the module avail cluster/ command:

                  $ module avail cluster/\n--------------------------------------- /etc/modulefiles/vsc ----------------------------------------\n   cluster/accelgor (S)    cluster/doduo   (S,L)    cluster/gallade (S)    cluster/skitty  (S)\n   cluster/default         cluster/donphan (S)      cluster/joltik  (S)\n\n  Where:\n   S:  Module is Sticky, requires --force to unload or purge\n   L:  Module is loaded\n   D:  Default Module\n\nIf you need software that is not listed, \nrequest it via https://www.ugent.be/hpc/en/support/software-installation-request\n

                  As indicated in the output above, each cluster module is a so-called sticky module, i.e., it will not be unloaded when module purge (see the section on purging modules) is used.

                   The output of the various commands for interacting with jobs (qsub, qstat, ...) depends on which cluster module is loaded.

                  "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

                   It is possible to submit jobs from within a job to a cluster different from the one the job is running on. This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters. An example of this is the wsub command of worker, see also here.

                   To submit jobs to the donphan cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using module swap env/slurm/donphan instead of using module swap cluster/donphan. The latter command would also activate the software modules that are installed specifically for donphan, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the donphan cluster. The same approach can be used to submit jobs to another cluster, of course.

                  Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the doduo cluster, loading the cluster/doduo module corresponds to loading 3 different env/ modules:

                  env/ module for doduo Purpose env/slurm/doduo Changes $SLURM_CLUSTERS which specifies the cluster where jobs are sent to. env/software/doduo Changes $MODULEPATH, which controls what software modules are available for loading. env/vsc/doduo Changes the set of $VSC_ environment variables that are specific to the doduo cluster

                  We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they do, since mixing cluster/ and env/ modules of different clusters can lead to surprises if you are not careful.

                  We also recommend running a module swap cluster command after submitting the jobs, to \"reset\" your environment to a sane state.
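
                  A possible workflow based on the commands above, assuming you are working in a doduo environment and want your jobs to run on donphan (job.pbs is a placeholder job script):

                  module swap env/slurm/donphan   # only change where jobs are sent, keep the current software stack\nqsub job.pbs                    # this job is queued on donphan\nmodule swap cluster/doduo       # \"reset\" the environment to a sane state\n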

                  "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

                  Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

                  qstat 12345\n

                  To show on which compute nodes your job is running (only meaningful once the job is actually running):

                  qstat -n 12345\n

                  To remove a job from the queue so that it will not run, or to stop a job that is already running, use:

                  qdel 12345\n

                  When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

                  $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n123456 ....     mpi  vsc40000     0    Q short\n

                  Here:

                  Job ID the job's unique identifier

                  Name the name of the job

                  User the user that owns the job

                  Time Use the elapsed walltime for the job

                  Queue the queue the job is in

                  The state S can be any of the following:

                  State Meaning Q The job is queued and is waiting to start. R The job is currently running. E The job is currently exiting after having run. C The job is completed after having run. H The job has a user or system hold on it and will not be eligible to run until the hold is removed.

                  User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.
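
                  As an illustration, a small shell loop that waits until a job disappears from the queue (123456 is just an example job ID; note that finished jobs can remain listed with state C for a few minutes):

                  while qstat 123456 >/dev/null 2>&1; do  # qstat exits with a non-zero status once the job is no longer listed\n    sleep 60                            # check once per minute\ndone\necho \"job 123456 has left the queue\"\n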

                  "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

                  There is currently (since May 2019) no way to get an overall view of the state of the cluster queues for the HPC-UGent infrastructure, due to changes to the cluster resource management software (and also because a general overview is mostly meaningless, since it does not give any indication of the resources requested by the queued jobs).

                  "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

                  If you do not give more information about your job when submitting it with qsub, default values will be assumed, and these are almost never appropriate for real jobs.

                  It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

                  "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

                  The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

                  qsub -l walltime=2:30:00 ...\n

                  For the simplest cases, only the maximum estimated execution time (called \"walltime\") really matters. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm in slightly overestimating the maximum execution time. If you omit this option, the queue manager will not complain but will use a default value (one hour on most clusters).

                  The maximum walltime for HPC-UGent clusters is 72 hours.

                  If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to kill the main command yourself before the walltime runs out and then copy the file back. See the section on Running a command with a maximum time limit for how to do this.
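
                  A minimal sketch of this pattern inside a job script, using the standard timeout utility; the program name, the 2-hour limit and the 30-minute margin are placeholders that you should adapt to your own job:

                  #!/bin/bash -l\n#PBS -l walltime=2:30:00\ncd $PBS_O_WORKDIR\n# give the main command at most 2 hours, keeping ~30 minutes of walltime as a margin\ntimeout 2h ./main_program > result.txt\n# these final steps still run within the requested walltime\ncp result.txt $VSC_DATA/\n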

                  qsub -l mem=4gb ...\n

                  The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.

                  The default memory reserved for a job on any given HPC-UGent cluster is the \"usable memory per node\" divided by the \"number of cores in a node\", multiplied by the number of requested processor cores (ppn). Jobs that do not define their memory requirements, either as a command line option or as a memory directive in the job script, will get this default amount. Please note that using the default memory is recommended. For the \"usable memory per node\" and \"number of cores in a node\", please consult https://www.ugent.be/hpc/en/infrastructure.
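
                  A worked example with purely hypothetical numbers (check the infrastructure page for the real values of your cluster): on a node with 250 GiB of usable memory and 96 cores, a job requesting ppn=8 would get roughly 250/96*8, i.e. about 20.8 GiB of memory, by default.

                  # hypothetical node: 250 GiB usable memory, 96 cores, job requests ppn=8\necho \"250/96*8\" | bc -l   # ~20.8 (GiB of default memory for this job)\n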

                  qsub -l nodes=5:ppn=2 ...\n

                  The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

                  qsub -l nodes=1:westmere\n

                  The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

                  These options can either be specified on the command line, e.g.

                  qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

                  or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

                  #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

                  Note that the resources requested on the command line will override those specified in the PBS file.
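
                  For example, reusing the fibo.pbs script above (which requests 2 GB via a #PBS directive), the value given on the command line wins:

                  qsub -l mem=4gb fibo.pbs   # the job gets 4 GB, overriding the 2 GB requested in the script\n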

                  "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

                  At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

                  When you navigate to that directory and list its contents, you should see them:

                  $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc40000  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc40000   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc40000   52 Sep 11 11:03 fibo.pbs.e123456\n-rw------- 1 vsc40000 1307 Sep 11 11:03 fibo.pbs.o123456\n

                  In our case, our job has created both an output file (fibo.pbs.o123456) and an error file (fibo.pbs.e123456), containing the info written to stdout and stderr respectively.

                  Inspect the generated output and error files:

                  $ cat fibo.pbs.o123456\n...\n$ cat fibo.pbs.e123456\n...\n
                  "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

                  You can instruct the HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

                  #PBS -m b \n#PBS -m e \n#PBS -m a\n

                  or

                  #PBS -m abe\n

                  These options can also be specified on the command line. Try it and see what happens:

                  qsub -m abe fibo.pbs\n

                  The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

                  qsub -m b -M john.smith@example.com fibo.pbs\n

                  will send an e-mail to john.smith@example.com when the job begins.
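
                  The same settings can be put in the job script itself; a minimal sketch (the e-mail address is a placeholder):

                  #!/bin/bash -l\n#PBS -l walltime=1:00:00\n#PBS -m ae                      # send mail when the job ends or aborts\n#PBS -M john.smith@example.com  # use this address instead of the one linked to your VSC account\ncd $PBS_O_WORKDIR\n./fibo.pl\n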

                  "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

                  If you submit two jobs expecting them to run one after another (for example because the first generates a file the second needs), there might be a problem, as they might both run at the same time.

                  So the following example might go wrong:

                  $ qsub job1.sh\n$ qsub job2.sh\n

                  You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

                  $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

                  afterok means \"After OK\", or in other words, after the first job successfully completed.

                  It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
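
                  As a sketch, a three-step pipeline built with these dependency types (the job script names are placeholders):

                  PREP_ID=$(qsub prepare.sh)                            # step 1: prepare the input\nRUN_ID=$(qsub -W depend=afterok:$PREP_ID compute.sh)  # step 2: only run if step 1 succeeded\nqsub -W depend=afterany:$RUN_ID cleanup.sh            # step 3: clean up, whether step 2 succeeded or not\n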

                  1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

                  "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

                  Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

                  Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script: the required PBS directives can be specified on the command line instead.

                  Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the HPC-UGent infrastructure. Waiting for user input takes a very long time in the life of a CPU and does not make efficient usage of the computing resources.

                  The syntax for qsub for submitting an interactive PBS job is:

                  $ qsub -I <... pbs directives ...>\n
                  "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

                  Tip

                  Find the code in \"~/examples/Running_interactive_jobs\"

                  First of all, in order to know on which computer you're working, enter:

                  $ hostname -f\ngligar07.gastly.os\n

                  This means that you're now working on the login node gligar07.gastly.os of the cluster.

                  The most basic way to start an interactive job is the following:

                  $ qsub -I\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n

                  There are two things of note here.

                  1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

                  2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes of all clusters.

                  To find out on which compute node you're working, enter again:

                  $ hostname -f\nnode3501.doduo.gent.vsc\n

                  Note that we are now working on the compute node called \"node3501.doduo.gent.vsc\". This is the compute node that was assigned to us by the scheduler after issuing the \"qsub -I\" command.

                  Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

                  $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

                  You can exit the interactive session with:

                  $ exit\n

                  Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

                  You can work for 3 hours by:

                  qsub -I -l walltime=03:00:00\n

                  If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.
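
                  Other resource requirements can be combined with -I in the same way as for batch jobs; for example, an illustrative request for 4 cores on a single node for 2 hours:

                  qsub -I -l nodes=1:ppn=4 -l walltime=02:00:00\n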

                  "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

                  To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

                  The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

                  Download the latest version of the XQuartz package from http://xquartz.macosforge.org/landing/ and install the XQuartz.pkg package.

                  The installer will take you through the installation procedure; just keep clicking Continue on the various screens that pop up until the installation has finished successfully.

                  A reboot is required before XQuartz will correctly open graphical applications.

                  "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

                  We have developed a little interactive program that demonstrates communication in two directions. It will send information to your local screen, but will also ask you to click a button.

                  Now run the message program:

                  cd ~/examples/Running_interactive_jobs\n./message.py\n

                  You should see the following message appearing.

                  Click any button and see what happens.

                  -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
                  "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

                  You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where do your standard output and error messages go, and where can you collect your results?

                  "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

                  First go to the directory:

                  cd ~/examples/Running_jobs_with_input_output_data\n

                  Note

                  If the example directory is not yet present, copy it to your home directory:

                  cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  List and check the contents with:

                  $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc40000   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc40000   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file3.py\n

                  Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

                  file1.py
                  #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

                  The code of the Python script is self-explanatory:

                  1. In step 1, we write something to the file Hello.txt in the current directory.

                  2. In step 2, we write some text to stdout.

                  3. In step 3, we write to stderr.

                  Check the contents of the first job script:

                  file1a.pbs
                  #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

                  You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

                  Submit it:

                  qsub file1a.pbs\n

                  After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

                  $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc40000   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc40000  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc40000  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc40000   91 Sep 13 13:13 file1a.pbs.e123456\n-rw------- 1 vsc40000  105 Sep 13 13:13 file1a.pbs.o123456\n-rw-rw-r-- 1 vsc40000  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc40000  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file3.py*\n

                  Some observations:

                  1. The file Hello.txt was created in the current directory.

                  2. The file file1a.pbs.o123456 contains all the text that was written to the standard output stream (\"stdout\").

                  3. The file file1a.pbs.e123456 contains all the text that was written to the standard error stream (\"stderr\").

                  Inspect their contents ... and remove the files:

                  $ cat Hello.txt\n$ cat file1a.pbs.o123456\n$ cat file1a.pbs.e123456\n$ rm Hello.txt file1a.pbs.o123456 file1a.pbs.e123456\n

                  Tip

                  Type cat H and press the Tab key; it will expand into cat Hello.txt.

                  "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

                  Check the contents of the job script and execute it.

                  file1b.pbs
                  #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

                  Inspect the contents again ... and remove the generated files:

                  $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e123456\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o123456\n$ rm Hello.txt my_serial_job.*\n

                  Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrote the JOBNAME variable, and resulted in a different name for the stdout and stderr files. This name is also shown in the second column of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.

                  "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

                  You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

                  file1c.pbs
                  #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
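
                  The same redirection can also be specified on the qsub command line instead of in the script; the file names below are arbitrary examples:

                  qsub -o my_stdout.log -e my_stderr.log file1a.pbs\n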
                  "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

                  The HPC cluster offers their users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

                  "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

                  Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

                  The following locations are available:

                  Variable Description Long-term storage slow filesystem, intended for smaller files $VSC_HOME For your configuration files and other small files, see the section on your home directory. The default directory is user/Gent/xxx/vsc40000. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites. $VSC_DATA A bigger \"workspace\", for datasets, results, logfiles, etc. see the section on your data directory. The default directory is data/Gent/xxx/vsc40000. The same file system is accessible from all sites. Fast temporary storage $VSC_SCRATCH_NODE For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content. $VSC_SCRATCH For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Gent/xxx/vsc40000. This directory is cluster- or site-specific: On different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content. $VSC_SCRATCH_SITE Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space. $VSC_SCRATCH_GLOBAL Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space. $VSC_SCRATCH_CLUSTER The scratch filesystem closest to the cluster. $VSC_SCRATCH_ARCANINE A separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads.

                  Since these directories are not necessarily mounted at the same locations on all sites, you should always (try to) use the environment variables that have been created for them.

                  We elaborate more on the specific function of these locations in the following sections.

                  Note: $VSC_SCRATCH_KYUKON and $VSC_SCRATCH are the same directories (\"kyukon\" is the name of the storage cluster where the default shared scratch filesystem is hosted).

                  For documentation about VO directories, see the section on VO directories.

                  "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

                  Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

                  The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

                  The operating system also creates a few files and folders here to manage your account. Examples are:

                  File or Directory Description .ssh/ This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing! .bash_profile When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt. .bashrc This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts. .bash_history This file contains the commands you typed at your shell prompt, in case you need them again."}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

                  In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

                  The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

                  If you are running out of quota on your $VSC_DATA filesystem, you can join an existing VO or request a new VO. See the section about virtual organisations on how to do this.

                  "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

                  To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

                  You should remove any data from these systems after your processing has finished. There are no guarantees about how long your data will be stored on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies will remain in place forever, and may change them if that seems necessary for the healthy operation of the cluster.

                  Each type of scratch has its own use:

                  Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

                  Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with a fast connection to all the cluster nodes, and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

                  Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

                  Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.

                  "}, {"location": "running_jobs_with_input_output_data/#your-ugent-home-drive-and-shares", "title": "Your UGent home drive and shares", "text": "

                  In order to access data on your UGent share(s), you need to stage in the data and stage it out again afterwards. On the login nodes, it is possible to access your UGent home drive and shares. To allow this, you need a ticket. This requires that you first authenticate yourself with your UGent username and password by running:

                  $ kinit yourugentusername@UGENT.BE\nPassword for yourugentusername@UGENT.BE:\n

                  Now you should be able to access your files running

                  $ ls /UGent/yourugentusername\nhome shares www\n

                  Please note the shares will only be mounted when you access this folder. You should specify your complete username - tab completion will not work.

                  If you want to use the UGent shares for longer than 24 hours, you can request a ticket valid for up to a week by running

                  kinit yourugentusername@UGENT.BE -r 7\n

                  You can verify your authentication ticket and its expiry dates yourself by running klist

                  $ klist\n...\nValid starting     Expires            Service principal\n14/07/20 15:19:13  15/07/20 01:19:13  krbtgt/UGENT.BE@UGENT.BE\n    renew until 21/07/20 15:19:13\n

                  Your ticket is valid for 10 hours, but you can renew it before it expires.

                  To renew your tickets, simply run

                  kinit -R\n

                  If you want your ticket to be renewed automatically up to the maximum expiry date, you can run

                  krenew -b -K 60\n

                  Each hour the process will check if your ticket should be renewed.

                  We strongly advise disabling access to your shares once it is no longer needed:

                  kdestroy\n

                  If you get an error \"Unknown credential cache type while getting default ccache\" (or similar) and you use conda, then please deactivate conda before you use the commands in this chapter.

                  conda deactivate\n
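
                  Putting the commands of this section together, a typical stage-in/stage-out session might look like the sketch below (the paths and file names are placeholders):

                  kinit yourugentusername@UGENT.BE                          # authenticate (asks for your UGent password)\ncp /UGent/yourugentusername/home/input.dat $VSC_DATA/     # stage in the input data\n# ... submit and run your jobs using $VSC_DATA/input.dat ...\ncp $VSC_DATA/results.txt /UGent/yourugentusername/home/   # stage out the results\nkdestroy                                                  # disable access to the shares again\n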
                  "}, {"location": "running_jobs_with_input_output_data/#ugent-shares-with-globus", "title": "UGent shares with globus", "text": "

                  In order to access your UGent home and shares inside the globus endpoint, you first have to generate authentication credentials on the endpoint. To do that, you have to ssh to the globus endpoint from a login node. You will be prompted for your UGent username and password to authenticate:

                  $ ssh globus\nUGent username:ugentusername\nPassword for ugentusername@UGENT.BE:\nShares are available in globus endpoint at /UGent/ugentusername/\nOverview of valid tickets:\nTicket cache: KEYRING:persistent:xxxxxxx:xxxxxxx\nDefault principal: ugentusername@UGENT.BE\n\nValid starting     Expires            Service principal\n29/07/20 15:56:43  30/07/20 01:56:43  krbtgt/UGENT.BE@UGENT.BE\n    renew until 05/08/20 15:56:40\nTickets will be automatically renewed for 1 week\nConnection to globus01 closed.\n

                  Your shares will then be available at /UGent/ugentusername/ under the globus VSC tier2 endpoint. Tickets will be renewed automatically for 1 week, after which you'll need to run this again. We advise disabling access to your shares within globus once access is no longer needed:

                  $ ssh globus01 destroy\nSuccesfully destroyed session\n
                  "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

                  Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files and the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

                  To see a list of your current quota, visit the VSC accountpage: https://account.vscentrum.be. VO moderators can see a list of VO quota usage per member of their VO via https://account.vscentrum.be/django/vo/.

                  The rules are:

                  1. You will only receive a warning when you have reached the soft limit of either quota.

                  2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files and for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

                  3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

                  We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. They also help to guarantee a fair use of all available resources for all users, and ensure that each folder is used for its intended purpose.

                  "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

                  Tip

                  Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

                  In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

                  1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

                  2. repeat this action 30,000 times;

                  3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

                  Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the HPC.

                  $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
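
                  For your own jobs, the usual pattern is to write large or temporary output to $VSC_SCRATCH while the job runs, and to copy the results you want to keep to $VSC_DATA at the end. A minimal sketch (the program and file names are placeholders, not the contents of file2.pbs):

                  #!/bin/bash\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\n./my_program > $VSC_SCRATCH/output.txt   # write to fast scratch storage during the job\ncp $VSC_SCRATCH/output.txt $VSC_DATA/    # keep the result on long-term storage\n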
                  "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

                  Tip

                  Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

                  In this exercise, you will

                  1. Generate the file \"primes_1.txt\" again as in the previous exercise;

                  2. open the file;

                  3. read it line by line;

                  4. calculate the average of primes in the line;

                  5. count the number of primes found per line;

                  6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

                  Check the Python and the PBS file, and submit the job:

                  $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
                  "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

                  The available disk space on the HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website. (https://vscdocumentation.readthedocs.io/en/latest/hardware.html) As explained in the section on predefined quota, this implies that there are also limits to:

                  • the amount of disk space; and

                  • the number of files

                  that can be made available to each individual HPC user.

                  The quota of disk space and number of files for each HPC user is:

                  Volume Max. disk space Max. # Files HOME 3 GB 20000 DATA 25 GB 100000 SCRATCH 25 GB 100000

                  Tip

                  The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.

                  Tip

                  If you obtained your VSC account via UGent, you can get (significantly) more storage quota in the DATA and SCRATCH volumes by joining a Virtual Organisation (VO), see the section on virtual organisations for more information. In case of questions, contact hpc@ugent.be.

                  "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

                  You can consult your current storage quota usage on the HPC-UGent infrastructure shared filesystems via the VSC accountpage, see the \"Usage\" section at https://account.vscentrum.be .

                  VO moderators can inspect storage quota for all VO members via https://account.vscentrum.be/django/vo/.

                  To check your storage usage on the local scratch filesystems on VSC sites other than UGent, you can use the \"show_quota\" command (when logged into the login nodes of that VSC site).

                  Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

                  $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632\n

                  This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

                  If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

                  $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

                  If the number of lower-level subdirectories becomes too large, you may not want to see the information at that depth; you can just ask for a summary of the current directory:

                  $ du -s\n5632 .\n$ du -s -h\n

                  If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

                  $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

                  Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

                  $ du -h --max-depth 1 $VSC_HOME\n22M /user/home/gent/vsc400/vsc40000/dataset01\n36M /user/home/gent/vsc400/vsc40000/dataset02\n22M /user/home/gent/vsc400/vsc40000/dataset03\n3.5M /user/home/gent/vsc400/vsc40000/primes.txt\n24M /user/home/gent/vsc400/vsc40000/.cache\n
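
                  To quickly spot the biggest consumers, you can combine du with sort (both are standard tools); for example, for your data directory:

                  du -h --max-depth 1 $VSC_DATA | sort -h   # the largest directories end up at the bottom\n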
                  "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

                  Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.

                  Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

                  To change the group of a directory and its underlying directories and files, you can use:

                  chgrp -R groupname directory\n
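
                  You can check which groups your account currently belongs to, and verify the new group ownership after a chgrp, with standard commands (directory is a placeholder):

                  id -Gn            # list all groups you are a member of\nls -ld directory  # shows the group owning the directory\n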
                  "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
                  1. Get the group name you want to belong to.

                  2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

                  3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

                  "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
                  1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

                  2. Fill out the group name. This cannot contain spaces.

                  3. Put a description of your group in the \"Info\" field.

                  4. You will now be a member and moderator of your newly created group.

                  "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

                  Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

                  "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

                  You can get details about the current state of groups on the HPC infrastructure with the following command (here, example is the name of the group we want to inspect):

                  $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

                  We can see that the VSC id number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

                  "}, {"location": "running_jobs_with_input_output_data/#virtual-organisations", "title": "Virtual Organisations", "text": "

                  A Virtual Organisation (VO) is a special type of group. You can only be a member of one single VO at a time (or not be in a VO at all). Being in a VO allows for larger storage quota to be obtained (but these requests should be well-motivated).

                  "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-vo", "title": "Joining an existing VO", "text": "
                  1. Get the VO id of the research group you belong to (this id is formed by the letters gvo, followed by 5 digits).

                  2. Go to https://account.vscentrum.be/django/vo/join and fill in the section named \"Join VO\". You will be asked to fill in the VO id and a message for the moderator of the VO, where you identify yourself. This should look something like in the image below.

                  3. After clicking the submit button, a message will be sent to the moderator of the VO, who will either approve or deny the request.

                  "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-vo", "title": "Creating a new VO", "text": "
                  1. Go to https://account.vscentrum.be/django/vo/new and scroll down to the section \"Request new VO\". This should look something like in the image below.

                  2. Fill in why you want to request a VO.

                  3. Fill out both the internal and public VO name. These cannot contain spaces, and should be 8-10 characters long. For example, genome25 is a valid VO name.

                  4. Fill out the rest of the form and press submit. This will send a message to the HPC administrators, who will then either approve or deny the request.

                  5. If the request is approved, you will now be a member and moderator of your newly created VO.

                  "}, {"location": "running_jobs_with_input_output_data/#requesting-more-storage-space", "title": "Requesting more storage space", "text": "

                  If you're a moderator of a VO, you can request additional quota for the VO and its members.

                  1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Request additional quota\". See the image below to see how this looks.

                  2. Fill out how much additional storage you want. In the screenshot below, we're asking for 500 GiB extra space for VSC_DATA, and for 1 TiB extra space on VSC_SCRATCH_KYUKON.

                  3. Add a comment explaining why you need additional storage space and submit the form.

                  4. An HPC administrator will review your request and approve or deny it.

                  "}, {"location": "running_jobs_with_input_output_data/#setting-per-member-vo-quota", "title": "Setting per-member VO quota", "text": "

                  VO moderators can tweak how much of the VO quota each member can use. By default, this is set to 50% for each user, but the moderator can change this: it is possible to give a particular user more than half of the VO quota (for example 80%), or significantly less (for example 10%).

                  Note that the total percentage can be above 100%: the percentages the moderator allocates per user are the maximum percentages of storage users can use.

                  1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Manage per-member quota share\". See the image below to see how this looks.

                  2. Fill out how much percent of the space you want each user to be able to use. Note that the total can be above 100%. In the screenshot below, there are four users. Alice and Bob can use up to 50% of the space, Carl can use up to 75% of the space, and Dave can only use 10% of the space. So in total, 185% of the space has been assigned, but of course only 100% can actually be used.

                  "}, {"location": "running_jobs_with_input_output_data/#vo-directories", "title": "VO directories", "text": "

                  When you're a member of a VO, there will be some additional directories on each of the shared filesystems available:

                  VO scratch ($VSC_SCRATCH_VO): A directory on the shared scratch filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_SCRATCH directory (see the section on your scratch space).

                  VO data ($VSC_DATA_VO): A directory on the shared data filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_DATA directory (see the section on your data directory).

                  If you put _USER after each of these variable names, you can see your personal folder in these filesystems. For example: $VSC_DATA_VO_USER is your personal folder in your VO data filesystem (this is equivalent to $VSC_DATA_VO/$USER), and analogous for $VSC_SCRATCH_VO_USER.
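
                  A small sketch of how these variables might be used, e.g., in a job script (the file and directory names are placeholders):

                  mkdir -p $VSC_SCRATCH_VO_USER/run01                             # your personal folder in the VO scratch\ncp $VSC_DATA_VO/shared_dataset.tar $VSC_SCRATCH_VO_USER/run01/  # data shared with the other VO members\n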

                  "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

                  A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

                  "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

                  This section will explain how to create, activate, use and deactivate Python virtual environments.

                  "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

                  A Python virtual environment can be created with the following command:

                  python -m venv myenv      # Create a new virtual environment named 'myenv'\n

                  This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

                  Warning

                  When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

                  "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

                  To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

                  source myenv/bin/activate                    # Activate the virtual environment\n
                  "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

                  After activating the virtual environment, you can install additional Python packages with pip install:

                  pip install example_package1\npip install example_package2\n

                  These packages will be scoped to the virtual environment and will not affect the system-wide Python installation, and are only available when the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

                  It is now possible to run Python scripts that use the installed packages in the virtual environment.

                  Tip

                  When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

                  Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

                  To check if a package is available as a module, use:

                  module av package_name\n

                  Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

                  module show module_name\n

                  to check which extensions are included in a module (if any).

                  "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

                  Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

                  example.py
                  import example_package1\nimport example_package2\n...\n
                  python example.py\n
                  "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

                  When you are done using the virtual environment, you can deactivate it. To do that, run:

                  deactivate\n
                  "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

                  You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

                  pytorch_poutyne.py
                  import torch\nimport poutyne\n\n...\n

                  We load PyTorch as a module and install Poutyne in a virtual environment:

                  module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

                  While the virtual environment is activated, we can run the script without any issues:

                  python pytorch_poutyne.py\n

                  Deactivate the virtual environment when you are done:

                  deactivate\n
                  "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

                  To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

                  module swap cluster/donphan\nqsub -I\n

                  After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

                  Naming a virtual environment

                  When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

                  python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
                  "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

                  This section will combine the concepts discussed in the previous sections to:

                  1. Create a virtual environment on a specific cluster.
                  2. Combine packages installed in the virtual environment with modules.
                  3. Submit a job script that uses the virtual environment.

                  The example script that we will run is the following:

                  pytorch_poutyne.py
                  import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

                  First, we create a virtual environment on the donphan cluster:

                  module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

                  Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

                  jobscript.pbs
                  #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

                  Next, we submit the job script:

                  qsub jobscript.pbs\n

                  Two files will be created in the directory where the job was submitted: python_job_example.o123456 and python_job_example.e123456, where 123456 is the id of your job. The .o file contains the output of the job.

                  "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

                  Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

                  For example, if we create a virtual environment on the skitty cluster,

                  $ module swap cluster/skitty\n$ qsub -I\n$ python -m venv myenv\n

                  return to the login node by pressing CTRL+D and try to use the virtual environment:

                  $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

                  we are presented with the illegal instruction error. More information on this can be found here.

                  "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

                  When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

                  python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

                  Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.

                  "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

                  There are two main reasons why this error could occur.

                  1. You have not loaded the Python module that was used to create the virtual environment.
                  2. You loaded or unloaded modules while the virtual environment was activated.
                  "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

                  If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

                  The following commands illustrate this issue:

                  $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

                  Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

                  module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
                  "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

                  You must not load or unload modules while inside a virtual environment. Loading and unloading modules modifies the $PATH variable of the current shell. When you activate a virtual environment, it saves the $PATH as it is at that moment. If you then load or unload modules inside the virtual environment, and afterwards deactivate it, $PATH is reset to the saved value, which may still point to modules that are no longer loaded. Trying to use those modules will then lead to errors:

                  $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

                  The solution is to only modify modules when not in a virtual environment.

                  "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

                  Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

                  One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

                  For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

                  This documentation only covers aspects of using Singularity on the infrastructure.

                  "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

                  Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to prevent the use of Singularity from impacting other users on the system.

                  The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

                  In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

                  If these limitations are a problem for you, please let us know by contacting the HPC support team.

                  "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

                  All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

                  "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

                  Creating new Singularity images or converting Docker images requires, by default, admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can create new Singularity images or convert Docker images.

                  When you create Singularity images or convert Docker images, some restrictions apply:

                  • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination.
                  "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

                  For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

                  We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.
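
                  As a rough sketch (the image name and tag are just examples, and we assume the centrally provided singularity command allows --fakeroot builds), converting an image from Docker Hub could look like this:

                  singularity build --fakeroot /tmp/alpine.sif docker://alpine:3.18   # convert a Docker Hub image to a local Singularity image\nmv /tmp/alpine.sif $VSC_SCRATCH/                                     # move it to a supported filesystem before running it\n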

                  "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

                  Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH.


                  Create a job script like:
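
                  A minimal sketch of such a job script, assuming the testing image was copied to $VSC_SCRATCH as example.sif (the actual image filename may differ) and that the myscript.sh shown below is stored next to it:

                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $VSC_SCRATCH\nsingularity exec example.sif ./myscript.sh   # run our own script inside the container\n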

                  Create an example myscript.sh:

                  #!/bin/bash\n\n# prime factors\nfactor 1234567\n

                  "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

                  We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

                  Copy the testing image from /apps/gent/tutorials to $VSC_SCRATCH.


                  You can download linear_regression.py from the official Tensorflow repository.

                  "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

                  It is also possible to execute MPI jobs within a container, but the following requirements apply:

                  • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

                  • Use modules within the container (install the environment-modules or lmod package in your container)

                  • Load the required module(s) before singularity execution.

                  • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

                  Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH.


                  You can then, for example, compile an MPI example program inside the container.


                  Example MPI job script:
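
                  The original example job script is not reproduced here; as a rough sketch only (the image name mpi.sif, the vsc-mympirun module, and the mpi_hello binary are assumptions), it could look like this:

                  #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=00:30:00\n\nmodule load vsc-mympirun                        # provides the mympirun command (assumed module name)\ncd $VSC_SCRATCH\nmympirun singularity exec mpi.sif ./mpi_hello   # let mympirun start the containerised MPI program on the allocated resources\n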

                  "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

                  The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

                  As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

                  In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

                  In order to prepare things, make a teaching request by contacting the HPC-UGent team with the following information (explained further below):

                  • Title and nickname
                  • Start and end date for your course or training
                  • VSC-ids of all teachers/trainers
                  • Participants based on UGent Course Code and/or list of VSC-ids
                  • Optional information
                    • Additional storage requirements
                      • Shared folder
                      • Groups folder for collaboration
                      • Quota
                    • Reservation for resource requirements beyond the interactive cluster
                    • Ticket number for specific software needed for your course/training
                    • Details for a custom Interactive Application in the webportal

                  In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

                  Please make these requests well in advance, several weeks before the start of your course/workshop.

                  "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

                  The title of the course or training can be used in e.g. reporting.

                  The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

                  When choosing the nickname, try to make it unique, although this is neither enforced nor checked.

                  "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

                  The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

                  The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

                  • Course group and subgroups will be deactivated
                  • Residual data in the course directories will be archived or deleted
                  • Custom Interactive Applications will be disabled
                  "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

                  A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also members of this group).

                  This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

                  Provide us with a list of the VSC-ids of all teachers or trainers so we can identify the moderators.

                  "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

                  The management of the list of students or participants depends on whether this is a UGent course or a training/workshop.

                  "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

                  Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

                  The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

                  Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

                  A course group will be automatically created for your course, with the VSC accounts of all registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

                  "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

                  (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as member. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

                  "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

                  For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

                  This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)

                  Every course directory will always contain the folders:

                  • input
                    • ideally suited to distribute input data such as common datasets
                    • moderators have read/write access
                    • group members (students) only have read access
                  • members
                    • this directory contains a personal folder (members/vsc<01234>) for every student in your course
                    • only this specific VSC-id will have read/write access to this folder
                    • moderators have read access to this folder
                  "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

                  Optionally, we can also create these folders:

                  • shared
                    • this is a folder for sharing files between any and all group members
                    • all group members and moderators have read/write access
                    • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
                  • groups
                    • a number of groups/group_<01> folders are created under the groups folder
                    • these folders are suitable if you want to let your students collaborate closely in smaller groups
                    • each of these group_<01> folders are owned by a dedicated group
                    • teachers are automatically made moderators of these dedicated groups
                    • moderators can populate these groups with the VSC-ids of group members in the VSC account page, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
                    • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

                  If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

                  • shared: yes
                  • subgroups: <number of (sub)groups>
                  "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

                  There are 4 quota settings that you can choose in your teaching request in the case the defaults are not sufficient:

                  • overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
                  • member quota (default: 5 GB volume and 10k files) applies per student/participant

                  Course data usage is not counted towards any other quota (like VO quota); it depends solely on these settings.

                  "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

                  The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only for the moderators (possibly in the form of an archive zipfile). One year after the end date, it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

                  "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

                  We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

                  Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

                  Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

                  Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

                  "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

                  In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

                  We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

                  Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

                  "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

                  HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

                  A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

                  If you would like this for your course, provide more details in your teaching request, including:

                  • what interactive application would you like to get launched (cluster desktop, Jupyter Notebook, ...)

                  • which cluster you want to use

                  • how many nodes/cores/GPUs are needed

                  • which software modules you are loading

                  • custom code you are launching (e.g. autostart a GUI)

                  • required environment variables that you are setting

                  • ...

                  We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

                  A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

                  "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

                  Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is no longer widely used, so since 2021 the HPC-UGent infrastructure no longer uses Torque in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers would not have to learn other commands to submit and manage jobs.

                  "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

                  Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

                  "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

                  Jobcli is a Python library that was developed by the HPC-UGent team to make it possible for the HPC-UGent infrastructure to use a Torque frontend with a Slurm backend. In addition, it adds some extra options to the Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

                  "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

                  Adding --help to a Torque command when using it on the HPC-UGent infrastructure will output an extensive overview of all supported options for that command (both the original Torque options and the ones added by jobcli), with a short description for each one.

                  For example:

                  $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

                  "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

                  Adding --dryrun to a Torque command when using it on the HPC-UGent infrastructure will show which Slurm commands jobcli generates for that Torque command. Using --dryrun will not actually execute the Slurm backend command.

                  See also the examples below.

                  "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

                  Similar to --dryrun, adding --debug to a Torque command when using it on the HPC-UGent infrastructure will show which Slurm commands jobcli generates for that Torque command. In contrast to --dryrun, however, --debug will actually run the Slurm backend command.

                  See also the examples below.

                  "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

                  The following examples illustrate how the --dryrun and --debug options work, using an example job script.

                  example.sh:

                  #/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
                  "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

                  Running the following command:

                  $ qsub --dryrun example.sh -N example\n

                  will generate this output:

                  Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc40000/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
                  This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque directives into Slurm directives. For example, the job name is the one we specified with the -N option in the command.

                  With this dry run, you can see that changes were only made to the header; the job script itself is not changed at all. Any PBS-related constructs used in the job script, like $PBS_JOBID, are retained. On the HPC-UGent infrastructure, Slurm is configured such that the common PBS_* environment variables are defined in the job environment, next to their Slurm equivalents.
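
                  As a quick sanity check (a sketch; $SLURM_JOB_ID is the standard Slurm variable name, not something jobcli-specific), you could add the following to a job script to see both variants:

                  echo \"PBS job id:   $PBS_JOBID\"\necho \"Slurm job id: $SLURM_JOB_ID\"\n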

                  "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

                  Similarly to the --dryrun example, we start by running the following command:

                  $ qsub --debug example.sh -N example\n

                  which generates this output:

                  DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
                  The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

                  "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

                  Below is a list of the most common and useful directives.

                  Option | System type | Description
                  ---|---|---
                  -k | All | Send \"stdout\" and/or \"stderr\" to your home directory when the job runs. #PBS -k o or #PBS -k e or #PBS -koe
                  -l | All | Precedes a resource request, e.g., processors, wallclock
                  -M | All | Send e-mail messages to an alternative e-mail address. #PBS -M me@mymail.be
                  -m | All | Send an e-mail when a job begins execution and/or ends or aborts. #PBS -m b or #PBS -m be or #PBS -m ba
                  mem | Shared Memory | Specifies the amount of memory you need for a job. #PBS -l mem=90gb
                  mpiprocs | Clusters | Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. #PBS -l mpiprocs=4
                  -N | All | Give your job a unique name. #PBS -N galaxies1234
                  ncpus | Shared Memory | The number of processors to use for a shared memory job. #PBS -l ncpus=4
                  -r | All | Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. #PBS -r n or #PBS -r y
                  select | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive. #PBS -l select=2
                  -V | All | Make sure that the environment in which the job runs is the same as the environment in which it was submitted. #PBS -V
                  walltime | All | The maximum time a job can run before being stopped. If not used, a default of a few minutes applies. Use this flag to prevent misbehaving jobs from running for hundreds of hours. Format is HH:MM:SS. #PBS -l walltime=12:00:00
                  "}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

                  TORQUE-related environment variables in batch job scripts.

                  # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

                  IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

                  When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.
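
                  Before the overview table below, here is a minimal sketch of a job script that uses a few of these variables (the resource requests and echo commands are arbitrary):

                  #!/bin/bash\n#PBS -N env_var_demo\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:05:00\n\ncd $PBS_O_WORKDIR                               # PBS starts the job in $HOME, so go back to the submission directory\necho \"Job $PBS_JOBID ($PBS_JOBNAME) was submitted from $PBS_O_HOST\"\necho \"Nodes assigned to this job:\"\ncat $PBS_NODEFILE\n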

                  Variable | Description
                  ---|---
                  PBS_ENVIRONMENT | set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job
                  PBS_JOBID | the job identifier assigned to the job by the batch system; this is the same number you see when you do qstat
                  PBS_JOBNAME | the job name supplied by the user
                  PBS_NODEFILE | the name of the file that contains the list of nodes assigned to the job; useful for parallel jobs if you want to refer to or count the nodes
                  PBS_QUEUE | the name of the queue from which the job is executed
                  PBS_O_HOME | value of the HOME variable in the environment in which qsub was executed
                  PBS_O_LANG | value of the LANG variable in the environment in which qsub was executed
                  PBS_O_LOGNAME | value of the LOGNAME variable in the environment in which qsub was executed
                  PBS_O_PATH | value of the PATH variable in the environment in which qsub was executed
                  PBS_O_MAIL | value of the MAIL variable in the environment in which qsub was executed
                  PBS_O_SHELL | value of the SHELL variable in the environment in which qsub was executed
                  PBS_O_TZ | value of the TZ variable in the environment in which qsub was executed
                  PBS_O_HOST | the name of the host upon which the qsub command is running
                  PBS_O_QUEUE | the name of the original queue to which the job was submitted
                  PBS_O_WORKDIR | the absolute path of the current working directory of the qsub command; this is the most useful one. Use it in every job script: the first thing you do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory
                  PBS_VERSION | version number of TORQUE, e.g., TORQUE-2.5.1
                  PBS_MOMPORT | active port for the MOM daemon
                  PBS_TASKNUM | number of tasks requested
                  PBS_JOBCOOKIE | job cookie
                  PBS_SERVER | server running TORQUE
                  "}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

                  Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. Various factors determine to what extent these extra resources can actually be used and how efficiently. More information on this can be found in the subsections below.

                  "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

                  When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

                  To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

                  Even if your software is able to use multiple cores, maybe there is no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the amount of cores step-wise, and look at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.

                  Other reasons why using more cores may not lead to a (significant) speedup include:

                  • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the amount of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program in too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

                  • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program cannot be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload (a worked version of this example is given in the formula below this list).

                  • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, one thread/process has to wait until the other is finished using that resource. When many threads keep using the same resource, the program will definitely run slower than when no thread has to wait for the others.

                  • Software limitations: It is possible that the software you are using is just not really optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that in Python, threads are implemented in a way that prevents multiple threads from running at the same time, due to the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing instead, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can, even though processes are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

                  • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

                  • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
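
                  For the Amdahl's Law example above (20 hours of work of which 1 hour cannot be parallelized, so the parallel fraction is p = 19/20), the theoretical speedup on n cores is:

                  $$ S(n) = \\frac{1}{(1 - p) + p/n}, \\qquad T(n) = 1\\,\\text{hour} + \\frac{19\\,\\text{hours}}{n}, \\qquad \\lim_{n \\to \\infty} S(n) = \\frac{1}{1 - p} = 20 $$

                  No matter how many cores n are used, T(n) stays above 1 hour and the speedup never exceeds a factor of 20.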

                  More info on running multi-core workloads on the HPC-UGent infrastructure can be found here.

                  "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

                  When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

                  Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

                  Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist libraries that do this for you.

                  Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

                  An example of how you can make beneficial use of multiple nodes can be found here.

                  You can also use MPI in Python, some useful packages that are also available on the HPC are:

                  • mpi4py
                  • Boost.MPI

                  We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with many cores per node, so we suggest that you first try to use all cores of a single node before expanding to more nodes. In addition, when running MPI software, we strongly advise using our mympirun tool.
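
                  A minimal sketch (the vsc-mympirun module name and the program name are assumptions; use module av mympirun to see what is available):

                  module load vsc-mympirun          # provides the mympirun command\nmympirun ./my_mpi_program         # mympirun determines the number of MPI processes from the job resources\n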

                  "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

                  If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

                  If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

                  "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

                  If your job output contains an error message similar to this:

                  =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

                  This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.

                  "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

                  Sometimes a job hangs or stops writing to disk. These errors are usually related to quota usage: you may have reached your quota limit at some storage endpoint. You should move (or remove) data to a different storage endpoint (or request more quota) to be able to write to disk again, and then resubmit the jobs.
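
                  To see where the space is going, standard Linux tools can help (a sketch; the directory name large_dataset is just an example, and the VSC-specific quota tools are covered in the section referenced below):

                  du -sh $VSC_HOME $VSC_DATA                  # show how much space each directory tree uses\nmv $VSC_DATA/large_dataset $VSC_SCRATCH/    # example: move large data to a different storage endpoint\n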

                  Another option is to ask the VO moderator(s) to request extra quota for your VO. See the section on Pre-defined user directories and Pre-defined quotas for more information about quotas and how to use the storage endpoints in an efficient way.

                  "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

                  If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

                  If you have errors that look like:

                  vsc40000@login.hpc.ugent.be: Permission denied\n

                  or you are experiencing problems with connecting, here is a list of things to do that should help:

                  1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

                  2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

                  3. Please double/triple check your VSC login ID. It should look something like vsc40000: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

                  4. Did you previously connect to the HPC from another machine, and are you now using a new machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

                  5. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh. id_rsa.pub is the usual filename of the public key, id_rsa is the usual filename of the private key. (See also section\u00a0Connect and the example below this list.)

                  6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

                  7. Please do not use someone else's private keys. You must never share your private key, they're called private for a good reason.
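
                  For item 5, a minimal example (the key file name is illustrative):

                  ssh -i ~/.ssh/id_rsa_vsc vsc40000@login.hpc.ugent.be    # -i points to the private key, not the .pub file\n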

                  If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@ugent.be and include the following information:

                  Please add -vvv as a flag to ssh like:

                  ssh -vvv vsc40000@login.hpc.ugent.be\n

                  and include the output of that command in the message.

                  "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

                  If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

                  You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

                  - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

                  Do not click \"Yes\" until you have verified the fingerprint. Do not press \"No\" in any case.

                  If the fingerprint matches, click \"Yes\".

                  If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@ugent.be.

                  Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

                  If you use X2Go client, you might get one of the following fingerprints:

                  • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
                  • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
                  • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c

                  If you get a message \"Host key for server changed\", do not click \"No\" until you have verified the fingerprint.

                  If the fingerprint matches, click \"No\", and in the next pop-up screen (\"if you accept the new host key...\"), press \"Yes\".

                  If it doesn't, or you are in doubt, take a screenshot, press \"Yes\" and contact hpc@ugent.be.

                  "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

                  If you get errors like:

                  $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

                  or

                  sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

                  It's probably because you transferred the files from a Windows computer. See the section about dos2unix in the Linux tutorial to fix this error.
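
                  As a quick illustration (assuming the dos2unix command is available in your session), converting the job script from the example above would look something like:

                  $ dos2unix fibo.pbs\n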

                  "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "

                  If you use X2Go, you might get a different fingerprint; in that case, make sure that the fingerprint displayed is one of the following:

                  • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
                  • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
                  • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c

                  If it does, type yes. If it doesn't, please contact support: hpc@ugent.be.

                  The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

                  Make sure the fingerprint in the alert matches one of the following:

                  - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

                  If it does, press Yes, if it doesn't, please contact hpc@ugent.be.

                  Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after it should be identical.

                  If you use X2Go, you might get a different fingerprint; in that case, make sure that the fingerprint displayed is one of the following:

                  • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
                  • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
                  • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c
                  "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

                  To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

                  Note

                  Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

                  "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

                  If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

                  Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

                  You can check the amount of virtual memory (in kilobytes) that is available to you via the ulimit -v command in your job script.
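
                  For example, a minimal sketch of a job script that simply reports this limit (the resource requests are placeholders, adjust them to your needs):

                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:05:00\n# print the virtual memory limit (in kilobytes) that applies inside this job\nulimit -v\n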

                  "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

                  See Generic resource requirements to set memory and other requirements; see Specifying memory requirements to fine-tune the amount of memory you request.

                  "}, {"location": "troubleshooting/#module-conflicts", "title": "Module conflicts", "text": "

                  Modules that are loaded together must use the same toolchain version or common dependencies. In the following example, we try to load a module that uses the intel-2018a toolchain together with one that uses the intel-2017a toolchain:

                  $ module load Python/2.7.14-intel-2018a\n$ module load  HMMER/3.1b2-intel-2017a\nLmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). \nYou should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. \nUse 'ml avail HMMER' to get an overview of the available versions.\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be \nWhile processing the following module(s):\n\n    Module fullname          Module Filename\n    ---------------          ---------------\n    HMMER/3.1b2-intel-2017a  /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua\n

                  This resulted in an error because we tried to load two modules with different versions of the intel toolchain.

                  To fix this, check if there are other versions of the modules you want to load that have the same version of common dependencies. You can list all versions of a module with module avail: for HMMER, this command is module avail HMMER.
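
                  For example, a hedged sketch of how you might look for and load a compatible version (the exact version shown here is hypothetical and depends on what is installed):

                  $ module avail HMMER\n$ module load HMMER/3.1b2-intel-2018a\n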

                  As a rule of thumb, toolchains in the same row are compatible with each other:

                  GCCcore-13.2.0 GCC-13.2.0 gfbf-2023b/gompi-2023b foss-2023b
                  GCCcore-13.2.0 intel-compilers-2023.2.1 iimkl-2023b/iimpi-2023b intel-2023b
                  GCCcore-12.3.0 GCC-12.3.0 gfbf-2023a/gompi-2023a foss-2023a
                  GCCcore-12.3.0 intel-compilers-2023.1.0 iimkl-2023a/iimpi-2023a intel-2023a
                  GCCcore-12.2.0 GCC-12.2.0 gfbf-2022b/gompi-2022b foss-2022b
                  GCCcore-12.2.0 intel-compilers-2022.2.1 iimkl-2022b/iimpi-2022b intel-2022b
                  GCCcore-11.3.0 GCC-11.3.0 gfbf-2022a/gompi-2022a foss-2022a
                  GCCcore-11.3.0 intel-compilers-2022.1.0 iimkl-2022a/iimpi-2022a intel-2022a
                  GCCcore-11.2.0 GCC-11.2.0 gfbf-2021b/gompi-2021b foss-2021b
                  GCCcore-11.2.0 intel-compilers-2021.4.0 iimkl-2021b/iimpi-2021b intel-2021b
                  GCCcore-10.3.0 GCC-10.3.0 gfbf-2021a/gompi-2021a foss-2021a
                  GCCcore-10.3.0 intel-compilers-2021.2.0 iimkl-2021a/iimpi-2021a intel-2021a
                  GCCcore-10.2.0 GCC-10.2.0 gfbf-2020b/gompi-2020b foss-2020b
                  GCCcore-10.2.0 iccifort-2020.4.304 iimkl-2020b/iimpi-2020b intel-2020b

                  Example

                  We could load the following modules together:

                  ml XGBoost/1.7.2-foss-2022a\nml scikit-learn/1.1.2-foss-2022a\nml cURL/7.83.0-GCCcore-11.3.0\nml JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0\n

                  Another common error is:

                  $ module load cluster/donphan\nLmod has detected the following error: A different version of the 'cluster' module is already loaded (see output of 'ml').\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be\n

                  This is because there can only be one cluster module active at a time. The correct command is module swap cluster/donphan. See also Specifying the cluster on which to run.

                  "}, {"location": "troubleshooting/#illegal-instruction-error", "title": "Illegal instruction error", "text": ""}, {"location": "troubleshooting/#running-software-that-is-incompatible-with-host", "title": "Running software that is incompatible with host", "text": "

                  When running software provided through modules (see Modules), you may run into errors like:

                  $ module swap cluster/donphan\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n\n$ module load Python/3.10.8-GCCcore-12.2.0\n$ python\nPlease verify that both the operating system and the processor support\nIntel(R) MOVBE, F16C, FMA, BMI, LZCNT and AVX2 instructions.\n

                  or errors like:

                  $ python\nIllegal instruction\n

                  When we swap to a different cluster, the available modules change so they work for that cluster. That means that if the cluster and the login nodes have a different CPU architecture, software loaded using modules might not work.

                  If you want to test software on the login nodes, make sure the cluster/doduo module is loaded (with module swap cluster/doduo, see Specifying the cluster on which to run), since the login nodes and the doduo cluster workernodes have the same CPU architecture.

                  If modules are already loaded, and then we swap to a different cluster, all our modules will get reloaded. This means that all current modules will be unloaded and then loaded again, so they'll work on the newly loaded cluster. Here's an example of what that would look like:

                  $ module load Python/3.10.8-GCCcore-12.2.0\n$ module swap cluster/donphan\n\nDue to MODULEPATH changes, the following have been reloaded:\n  1) GCCcore/12.2.0                   8) binutils/2.39-GCCcore-12.2.0\n  2) GMP/6.2.1-GCCcore-12.2.0         9) bzip2/1.0.8-GCCcore-12.2.0\n  3) OpenSSL/1.1                     10) libffi/3.4.4-GCCcore-12.2.0\n  4) Python/3.10.8-GCCcore-12.2.0    11) libreadline/8.2-GCCcore-12.2.0\n  5) SQLite/3.39.4-GCCcore-12.2.0    12) ncurses/6.3-GCCcore-12.2.0\n  6) Tcl/8.6.12-GCCcore-12.2.0       13) zlib/1.2.12-GCCcore-12.2.0\n  7) XZ/5.2.7-GCCcore-12.2.0\n\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n

                  This might result in the same problems as mentioned above. When swapping to a different cluster, you can run module purge to unload all modules to avoid problems (see Purging all modules).
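
                  A minimal sketch of that workflow, reusing the modules from the example above:

                  $ module purge\n$ module swap cluster/donphan\n$ module load Python/3.10.8-GCCcore-12.2.0\n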

                  "}, {"location": "troubleshooting/#multi-job-submissions-on-a-non-default-cluster", "title": "Multi-job submissions on a non-default cluster", "text": "

                  When using a tool that is made available via modules to submit jobs, for example Worker, you may run into the following error when targeting a non-default cluster:

                  $  wsub\n/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction     (core dumped) ${PERL} ${DIR}/../lib/wsub.pl \"$@\"\n

                  When executing the module swap cluster command, you are not only changing your session environment to submit to that specific cluster, but also to use the part of the central software stack that is specific to that cluster. In the case of the Worker example above, the latter implies that you are running the wsub command on top of a Perl installation that is optimized specifically for the CPUs of the workernodes of that cluster, which may not be compatible with the CPUs of the login nodes, triggering the Illegal instruction error.

                  The cluster modules are split up into several env/* \"submodules\" to help deal with this problem. For example, by using module swap env/slurm/donphan instead of module swap cluster/donphan (starting from the default environment, the doduo cluster), you can update your environment to submit jobs to donphan, while still using the software installations that are specific to the doduo cluster (which are compatible with the login nodes since the doduo cluster workernodes have the same CPUs). The same goes for the other clusters as well of course.

                  Tip

                  To submit a Worker job to a specific cluster, like the donphan interactive cluster for instance, use:

                  $ module swap env/slurm/donphan \n
                  instead of
                  $ module swap cluster/donphan \n

                  We recommend using a module swap cluster command after submitting the jobs.

                  This is to \"reset\" your environment to a sane state, since only having a different env/slurm module loaded can also lead to some surprises if you're not paying close attention.
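
                  Putting it together, a hedged sketch of the recommended workflow (the wsub arguments are placeholders for your actual Worker job):

                  $ module swap env/slurm/donphan\n$ wsub -batch run.pbs -data data.csv\n$ module swap cluster/doduo\n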

                  "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

                  All the HPC clusters run some variant of the \"Red Hat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

                  vsc40000@ln01[203] $\n

                  When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

                  Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen nano Text editor

                  Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

                  $ echo This is a test\nThis is a test\n

                  Note the \"$\" sign in front of the first line: it should not be typed, but is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

                  More commands will be used in the rest of this text, and will be explained there if necessary. You can usually get more information about a command, say \"ls\", by trying any of the following:

                  $ ls --help \n$ man ls\n$ info ls\n

                  (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

                  "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

                  In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

                  Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\". This is another program that understands the commands in the script, and converts them to machine code. There are many kinds of scripting languages, including Perl and Python.

                  Another very common scripting language is shell scripting, which is what we will use in the examples below.

                  In the following examples, each line typically contains a single command to be executed, although it is possible to put multiple commands on one line. A very simple example of a script may be:

                  echo \"Hello! This is my hostname:\" \nhostname\n

                  You can type both lines at your shell prompt, and the result will be the following:

                  $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\ngligar07.gastly.os\n

                  Suppose we want to call this script \"foo\". Open a new file named \"foo\" and edit it with your favourite editor:

                  nano foo\n

                  or use the following commands:

                  echo \"echo Hello! This is my hostname:\" > foo\necho hostname >> foo\n

                  The easiest way to run a script is to start the interpreter and pass the script as a parameter. In the case of our script, the interpreter may be either \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

                  $ bash foo\nHello! This is my hostname:\ngligar07.gastly.os\n

                  Congratulations, you just created and started your first shell script!

                  A more advanced way of executing your shell scripts is to make them executable on their own, so that you do not have to invoke the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to tell it this in some way. The easiest way is to use the so-called \"shebang\" notation, created explicitly for this purpose: you put the following line at the top of your shell script: \"#!/path/to/your/interpreter\".

                  You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

                  $ which bash\n/bin/bash\n

                  We edit our script and change it with this information:

                  #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

                  Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

                  Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

                  chmod +x foo\n

                  Now you can start your script by simply executing it:

                  $ ./foo\nHello! This is my hostname:\ngligar07.gastly.os\n

                  The same technique can be used for all other scripting languages, like Perl and Python.

                  Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...
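
                  For example, a small sketch of a commented script:

                  #!/bin/bash\n# This line is a comment: it is ignored by bash\necho \"Comments start with a # character\"  # a comment can also follow a command\n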

                  "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg A process running in the background will be processed in the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Modify properties for users"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

                  The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

                  Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

                  To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

                  Note that you may only see a \"Submitting...\" message appear for a couple of seconds, which is perfectly normal.

                  Through this web portal, you can:

                  • browse through the files & directories in your VSC account, and inspect, manage or change them;

                  • consult active jobs (across all HPC-UGent Tier-2 clusters);

                  • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

                  • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

                  • open a terminal session directly in your web browser;

                  More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

                  "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

                  All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

                  "}, {"location": "web_portal/#login", "title": "Login", "text": "

                  When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

                  "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

                  The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

                  Please click \"Authorize\" here.

                  This request will only be made once; you should not see it again afterwards.

                  "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

                  Once logged in, you should see this start page:

                  This page includes a menu bar at the top, with buttons on the left providing access to the different features supported by the web portal, and a Help menu, your VSC account name, and a Log Out button on the top right. The page also shows the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

                  If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

                  "}, {"location": "web_portal/#features", "title": "Features", "text": "

                  We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

                  "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

                  Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

                  The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

                  Here you can:

                  • Click a directory in the tree view on the left to open it;

                  • Use the buttons on the top to:

                    • go to a specific subdirectory by typing in the path (via Go To...);

                    • open the current directory in a terminal (shell) session (via Open in Terminal);

                    • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

                    • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

                    • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

                    • show the owner and permissions in the file listing (via Show Owner/Mode);

                  • Double-click a directory in the file listing to open that directory;

                  • Select one or more files and/or directories in the file listing, and:

                    • use the View button to see the contents (use the button at the top right to close the resulting popup window);

                    • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

                    • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

                    • use the Download button to download the selected files and directories from your VSC account to your local workstation;

                    • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

                    • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

                    • use the Delete button to (permanently!) remove the selected files and directories;

                  For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

                  "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

                  Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

                  For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

                  "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

                  To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

                  A new browser tab will be opened that shows all your current queued and/or running jobs:

                  You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

                  Jobs that are still queued or running can be deleted using the red button on the right.

                  Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

                  For each listed job, you can click on the arrow (>) symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

                  "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

                  To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

                  This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

                  You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

                  Don't forget to actually submit your job to the system via the green Submit button!

                  "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

                  In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

                  "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

                  Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

                  Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

                  To exit the shell session, type exit followed by Enter and then close the browser tab.

                  Note that you cannot access a shell session anymore after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

                  "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

                  To create a graphical desktop environment, use one of the desktop on... node buttons under the Interactive Apps menu item. For example:

                  You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

                  Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

                  To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

                  "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

                  See dedicated page on Jupyter notebooks

                  "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

                  In case of problems with the web portal, it could help to restart the web server running in your VSC account.

                  You can do this via the Restart Web Server button under the Help menu item:

                  Of course, this only affects your own web portal session (not those of others).

                  "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
                  • ABAQUS for CAE course
                  "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

                  X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

                  1. A graphical remote desktop that works well over low bandwidth connections.

                  2. Copy/paste support from client to server and vice-versa.

                  3. File sharing from client to server.

                  4. Support for sound.

                  5. Printer sharing from client to server.

                  6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

                  "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

                  X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

                  X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. That section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

                  "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

                  After the X2Go client installation just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

                  There are two ways to connect to the login node:

                  • Option A: A direct connection to \"login.hpc.ugent.be\". This is the simpler option; the system will decide which login node to use based on a load-balancing algorithm.

                  • Option B: You can use the node \"login.hpc.ugent.be\" as SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

                  "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

                  This is the easier way to setup X2Go, a direct connection to the login node.

                  1. Include a session name. This will help you identify the session if you have more than one; you can choose any name (in our example \"HPC login node\").

                  2. Set the login hostname (In our case: \"login.hpc.ugent.be\")

                  3. Set the Login name. In the example it is \"vsc40000\", but you must change it to your own VSC account.

                  4. Set the SSH port (22 by default).

                  5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

                    1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

                    2. Look for your private SSH key generated in Generating a public/private key pair. This file is stored in the directory \"~/.ssh/\" (by default named \"id_rsa\"). \".ssh\" is a hidden directory, so the Finder will not show it by default. The easiest way to access the folder is by pressing cmd+shift+g, which allows you to enter the name of the directory you would like to open in Finder. Here, type \"~/.ssh\" and press enter. Choose the private key file and click on open.

                  6. Check \"Try autologin\" option.

                  7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose a single application instead of a full desktop, like the Terminal or an internet browser (you can change this option later directly from the X2Go session tab if you want).

                    1. [optional]: Set a single application like Terminal instead of XFCE desktop.

                  8. [optional]: Change the session icon.

                  9. Click the OK button after these changes.

                  "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

                  This option is useful if you want to resume a previous session or if you want to set explicitly the login node to use. In this case you should include a few more options. Use the same Option A setup but with these changes:

                  1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

                  2. Set the login hostname. This is the login node that you want to use at the end (In our case: \"gligar07.gastly.os\")

                  3. Set \"Use Proxy server..\" to enable the proxy. Within the \"Proxy\" section, also set these options:

                    1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

                    2. Set Host to \"login.hpc.ugent.be\" within \"Proxy Server\" section as well.

                    3. Skip this step if you are using an SSH agent (see Install X2Go). Otherwise, add your private SSH key in the \"RSA/DSA key\" field within \"Proxy Server\", as you did for the server configuration (the \"RSA/DSA key\" field must be set in both sections).

                    4. Click the OK button after these changes.

                  "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

                  Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. A session is terminated if you log out from the currently open session or if you click on the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

                  X2Go will keep the session open for you (but only if the login node is not rebooted).

                  "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

                  If you want to re-connect to the same login node, or resume a previous session, you should know which login node were used at first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

                  hostname\n

                  This will give you the full hostname of the login node (like \"gligar07.gastly.os\", but the hostname in your situation may be slightly different). You should set the same name to resume the session the next time. Just add this full hostname in the \"login hostname\" field of your X2Go session (see Option B: use the login node as SSH proxy).

                  "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

                  If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select that session and terminate it. Then close this session, switch back to the XFCE session type (or whatever you use), and you should get your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

                  "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

                  The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

                  To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

                  Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

                  After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

                  Located in the upper right corner of the web page is the help button, taking you to the XDMoD User Manual. As things may change, we recommend checking out the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

                  "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

                  TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

                  Loads MNIST datasets and trains a neural network to recognize hand-written digits.

                  Runtime: ~1 min. on 8 cores (Intel Skylake)

                  See https://www.tensorflow.org/tutorials/quickstart/beginner

                  "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

                  Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

                  These skills are important for working on the HPC-UGent infrastructure, which operates on Red Hat Enterprise Linux. For more information see introduction to HPC.

                  The guide aims to make you familiar with the Linux command line environment quickly.

                  The tutorial goes through the following steps:

                  1. Getting Started
                  2. Navigating
                  3. Manipulating files and directories
                  4. Uploading files
                  5. Beyond the basics

                  Do not forget Common pitfalls, as this can save you some troubleshooting.

                  "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
                  • More on the HPC infrastructure.
                  • Cron Scripts: run scripts automatically at periodically fixed times, dates, or intervals.
                  "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

                  Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

                  To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

                  First, it's important to make a distinction between two different output channels:

                  1. stdout: standard output channel, for regular output

                  2. stderr: standard error channel, for errors and warnings

                  "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

                  > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

                  $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

                  >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

                  $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

                  < feeds the contents of a file to a command's standard input, as if you had typed it into the terminal. < somefile.txt is largely equivalent to cat somefile.txt |.

                  One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat it while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in the file list when you are done:

                  $ find . -name '*.txt' > files\n$ xargs grep banana < files\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

                  To redirect the stderr output (warnings, messages), you can use 2>, just like >

                  $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

                  To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

                  $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

                  Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

                  $ ls | wc -l\n    42\n

                  A common pattern is to pipe the output of a command to less so you can examine or search the output:

                  $ find . | less\n

                  Or to look through your command history:

                  $ history | less\n

                  You can put multiple pipes in the same line. For example, which cp commands have we run?

                  $ history | grep cp | less\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

                  The shell will expand certain things, including:

                  1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

                  2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

                  3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

                  4. square brackets can be used to list a number of options for a particular characters; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.
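
                  A short, hedged illustration of these expansions (the file names and output shown here are hypothetical):

                  $ echo \"I am $USER\"\nI am vsc40000\n$ ls t*txt\ntest.txt tmp.txt\n$ ls *.[oe][0-9]\nmyjob.e1 myjob.o1\n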

                  "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

                  ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

                  $ ps -fu $USER\n

                  To see all the processes:

                  $ ps -elf\n

                  To see all the processes in a forest view, use:

                  $ ps auxf\n

                  The last two will spit out a lot of data, so get in the habit of piping it to less.

                  pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

                  pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.
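
                  For example, a hedged sketch (the process name and PIDs shown are hypothetical):

                  $ pgrep misbehaving_process\n12345\n12346\n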

                  "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

                  ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill will send a signal (SIGTERM by default) to the process to ask it to stop.

                  $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

                  Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignored your signal, you can send it a different message (SIGKILL) which the OS will use to unceremoniously terminate the process:

                  $ kill -9 1234\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

                  top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

                  To see only your processes, type u and your username after starting top, (you can also do this with top -u $USER ). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

                  There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

                  To exit top, use q (for 'quit').

                  For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

                  ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

                  $ ulimit -a\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

                  To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

                  $ wc example.txt\n      90     468     3189   example.txt\n

                  The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

                  To only count the number of lines, use wc -l:

                  $ wc -l example.txt\n      90    example.txt\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

                  grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

                  $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

                  grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

                  cut is used to pull fields out of files or piped streams. It's a useful glue when you mix it with grep because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV (comma-separated values, so -d ',': delimited by ,) file, you can use the following:

                  $ cut -f 1 -d ',' mydata.csv\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

                  sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

                  $ sed 's/oldtext/newtext/g' myfile.txt\n

                  By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
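
                  As a hedged example, GNU sed lets you keep a backup copy of the original file when editing in place (the .bak suffix and the file name are placeholders):

                  $ sed -i.bak 's/oldtext/newtext/g' myfile.txt\n

                  The original content is then preserved in myfile.txt.bak.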

                  "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

                  awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

                  First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

                  $ awk '{print $4}' mydata.dat\n

                  You can use -F ':' to change the delimiter (F for field separator).

                  The next example is used to sum numbers from a field:

                  $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

                  The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do it for you. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

                  However, there are some rules you need to abide by.

                  Here is a very detailed guide should you need more information.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which command should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit weird, but you can simply copy-paste it; you need not worry about it further. It is, however, very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

                  #!/bin/sh\n
                  #!/bin/bash\n
                  #!/usr/bin/env bash\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

                  Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n

Or you only want to do something if a file exists:

if [ -f filename ]\nthen\necho \"it exists\"\nfi\n
                  Or only if a certain variable is bigger than one:
                  if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
Several pitfalls exist with this syntax. You need spaces surrounding the brackets, and the then needs to be at the beginning of a line (or preceded by a semicolon). It is best to just copy these examples and modify them.

                  In the initial example, we used -d to test if a directory existed. There are several more checks.
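
A few other commonly used checks are sketched below (the file names are placeholders; see man test for the full list): -f tests for a regular file, -s tests for a non-empty file.

if [ -f file.txt ]\nthen\necho \"file.txt is a regular file\"\nfi\nif [ -s output.log ]\nthen\necho \"output.log exists and is not empty\"\nfi\n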

Another useful example is to test whether a variable is empty (i.e., does not contain a value):

if [ -z \"$PBS_ARRAYID\" ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

the -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

                  Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

                  Let's look at a simple example:

                  for i in 1 2 3\ndo\necho $i\ndone\n
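
A common pattern (sketched here with a hypothetical set of .txt files) is to loop over all files matching a pattern instead of a fixed list of values:

for f in *.txt\ndo\necho \"Processing $f\"\nwc -l \"$f\"\ndone\n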

                  "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

Subcommands are used all the time in shell scripts. What they do is store the output of a command in a variable, so it can later be used in a conditional or a loop, for example.

                  CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

                  In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

                  Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

                  Firstly a useful thing to know for debugging and testing is that you can run any command like this:

command > output.log 2>&1   # one single output file, both output and errors\n

If you add > output.log 2>&1 at the end of any command, it will combine stdout and stderr and write both into a single file named output.log. Note that the order matters: 2>&1 must come after the > output.log redirection.

                  If you want regular and error output separated you can use:

                  command > output.log 2> output.err  # errors in a separate file\n

                  this will write regular output to output.log and error output to output.err.

                  You can then look for the errors with less or search for specific text with grep.
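
For example (assuming the output.err file from above), a case-insensitive search for lines mentioning errors:

$ grep -i error output.err\n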

                  In scripts, you can use:

                  set -e\n

This will tell the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failing command most likely causes the rest of the script to fail as well.
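
A minimal sketch of how this behaves (the file names are just placeholders):

#!/bin/bash\nset -e\ncp inputfile workcopy   # if this copy fails, the script stops here\necho \"copy succeeded\"    # this line is only reached if the copy worked\n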

                  "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds the exit status of that command. A value other than zero signifies that something went wrong. An example use case:

                  command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

If you want certain commands to be executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

                  Examples include:

                  • modifying your $PS1 (to tweak your shell prompt)

• printing information about the current session or job environment (echoing environment variables, etc.)

                  • selecting a specific cluster to run on with module swap cluster/...

                  Some recommendations:

                  • Avoid using module load statements in your $HOME/.bashrc file

• Don't directly edit your .bashrc file: if there's an error in your .bashrc file, you might not be able to log in again. To prevent that, test your changes in a separate file first, and only copy them over once you have verified that they work (see the sketch below).
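
A possible workflow (the file name bashrc_test is just a placeholder):

$ nano ~/bashrc_test               # add your new commands here\n$ source ~/bashrc_test             # test: any errors will show up now\n$ cat ~/bashrc_test >> ~/.bashrc   # append only once it works\n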

                  "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

                  When writing scripts to be submitted on the cluster there are some tricks you need to keep in mind.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
                  "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

                  The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

                  #PBS -l nodes=1:ppn=1 # single-core\n

                  For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

                  #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

                  We intend to submit it on the long queue:

                  #PBS -q long\n

                  We request a total running time of 48 hours (2 days).

                  #PBS -l walltime=48:00:00\n

                  We specify a desired name of our job:

                  #PBS -N FreeSurfer_per_subject-time-longitudinal\n
                  This specifies mail options:
                  #PBS -m abe\n

                  1. a means mail is sent when the job is aborted.

                  2. b means mail is sent when the job begins.

                  3. e means mail is sent when the job ends.

                  Joins error output with regular output:

                  #PBS -j oe\n

All of these options can also be specified on the command line, and they will override any pragmas present in the script.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
                  1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

                  2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

                  3. How many files and directories are in /tmp?

                  4. What's the name of the 5th file/directory in alphabetical order in /tmp?

                  5. List all files that start with t in /tmp.

                  6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

                  7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

                  "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

                  This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

                  "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

                  If you receive an error message which contains something like the following:

                  No such file or directory\n

                  It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

                  Try and figure out the correct location using ls, cd and using the different $VSC_* variables.

                  "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

                  Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

                  $ cat some file\nNo such file or directory 'some'\n

                  Spaces are permitted, however they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

                  $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

                  This is especially error-prone if you are piping results of find:

                  $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

                  This can be worked around using the -print0 flag:

                  $ find . -type f -print0 | xargs -0 cat\n...\n

                  But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

                  "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

                  If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

                  $ rm -r ~/$PROJETC/*\n
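
One defensive option (a sketch, using Bash's ${VAR:?} expansion; the variable name PROJECT is just an example) is to make the command abort with an error message when the variable is unset or empty, instead of silently expanding to nothing:

$ rm -r ~/\"${PROJECT:?variable PROJECT is not set}\"/*\n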

                  "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

                  A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

                  $ #rm -r ~/$POROJETC/*\n
                  Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

                  "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
                  $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

                  Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

                  $ chmod +x script_name.sh\n

                  "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

                  If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

                  If you need help about a certain command, you should consult its so-called \"man page\":

                  $ man command\n

This will open the manual of this command. The manual contains a detailed explanation of all the options the command has. Exit the manual by pressing 'q'.

                  Don't be afraid to contact hpc@ugent.be. They are here to help and will do so for even the smallest of problems!

                  "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
                  1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

                  2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

                  3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

                  4. basic shell usage

                  5. Bash for beginners

                  6. MOOC

Please don't hesitate to contact us in case of questions or problems.

                  "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

                  To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

                  You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

                  Details on connecting to the HPC infrastructure are available in HPC manual connecting section.

                  "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

                  To get help:

                  1. use the documentation available on the system, through the help, info and man commands (use q to exit).
                    help cd \ninfo ls \nman cp \n
                  2. use Google

                  3. contact hpc@ugent.be in case of problems or questions (even for basic things!)

                  "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining what went wrong. Read this carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@ugent.be.

                  "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

                  The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

                  You use the shell by executing commands, and hitting <enter>. For example:

                  $ echo hello \nhello \n

                  You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

                  To go through previous commands, use <up> and <down>, rather than retyping them.

                  "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

                  A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

                  $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

                  "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

                  If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

                  "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

                  At the prompt we also have access to shell variables, which have both a name and a value.

                  They can be thought of as placeholders for things we need to remember.

                  For example, to print the path to your home directory, we can use the shell variable named HOME:

                  $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

                  This prints the value of this variable.

                  "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

                  There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

                  For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

                  $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

                  You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

                  $ env | sort | grep VSC\n

But we can also define our own. This is done with the export command (note: variable names are always all-caps, as a convention):

                  $ export MYVARIABLE=\"value\"\n

                  It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

                  If we then do

                  $ echo $MYVARIABLE\n

                  this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

                  "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

                  You can change what your prompt looks like by redefining the special-purpose variable $PS1.

                  For example: to include the current location in your prompt:

                  $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

Note that ~ is a short representation of your home directory.

To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

                  $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

                  "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

                  One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

                  This may lead to surprising results, for example:

$ export WORKDIR=/tmp/test\n$ cd $WROKDIR   # note the typo: the variable WROKDIR is not defined\n$ pwd\n/user/home/gent/vsc400/vsc40000\n$ echo $HOME\n/user/home/gent/vsc400/vsc40000\n

                  To understand what's going on here, see the section on cd below.

                  The moral here is: be very careful to not use empty variables unintentionally.

                  Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

                  The -e option will result in the script getting stopped if any command fails.

                  The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)
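
Put together, the top of a defensive job script could look like this (a sketch only; the file and variable names are placeholders):

#!/bin/bash\nset -e -u\necho \"Using work directory: $WORKDIR\"   # stops here if WORKDIR is not defined (-u)\ncp data.csv $WORKDIR/                   # stops here if the copy fails (-e)\n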

                  More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

                  "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

                  If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

                  "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

                  Basic information about the system you are logged into can be obtained in a variety of ways.

                  We limit ourselves to determining the hostname:

                  $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

                  And querying some basic information about the Linux kernel:

                  $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

                  "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
                  • Print the full path to your home directory
                  • Determine the name of the environment variable to your personal scratch directory
• What's the name of the system you're logged into? Is it the same for everyone?
                  • Figure out how to print the value of a variable without including a newline
                  • How do you get help on using the man command?

The next chapter teaches you how to navigate.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

                  Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the HPC for a list of available locations.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#vo-storage", "title": "VO storage", "text": "

                  If you are a member of a (non-default) virtual organisation (VO), see section Virtual Organisations, you have access to additional directories (with more quota) on the data and scratch filesystems, which you can share with other members in the VO.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

                  Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

To figure out where your quota is being spent, the du (\"disk usage\") command can come in useful:

                  $ du -sh test\n59M test\n

                  Do not (frequently) run du on directories where large amounts of data are stored, since that will:

                  1. take a long time

                  2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

                  Software is provided through so-called environment modules.

                  The most commonly used commands are:

                  1. module avail: show all available modules

                  2. module avail <software name>: show available modules for a specific software name

                  3. module list: show list of loaded modules

                  4. module load <module name>: load a particular module

                  More information is available in section Modules.
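
A typical session could look like this (the module name and version are just an illustration; use module avail to see what is actually installed):

$ module avail Python\n$ module load Python/3.6.4-intel-2018a   # pick a specific version and toolchain\n$ module list\n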

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

                  Detailed information is available in section submitting your job.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

                  Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

                  Hint: python -c \"print(sum(range(1, 101)))\"

                  • How many modules are available for Python version 3.6.4?
                  • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
                  • Which cluster modules are available?

                  • What's the full path to your personal home/data/scratch directories?

                  • Determine how large your personal directories are.
                  • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

                  Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands short to type.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

                  To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

                  $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

                  To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
                  $ cp source target\n

                  This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

                  $ cp -r sourceDirectory target\n

                  A last more complicated example:

                  $ cp -a sourceDirectory target\n

Here we used the same cp command, but with the -a option, which tells cp to copy recursively while preserving timestamps, permissions, and other attributes.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
                  $ mkdir directory\n

                  which will create a directory with the given name inside the current directory.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
                  $ mv source target\n

mv will move the source path to the destination path. It works for both directories and files.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

                  Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

                  $ rm filename\n
                  rm will remove a file or directory. (rm -rf directory will remove every file inside a given directory). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

                  You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

                  $ rmdir directory\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

                  Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

                  1. User - a particular user (account)

                  2. Group - a particular group of users (may be user-specific group with only one member)

                  3. Other - other users in the system

                  The permission types are:

                  1. Read - For files, this gives permission to read the contents of a file

                  2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add or remove files to a directory.

3. Execute - For files, this gives permission to execute the file as though it were a script. For directories, it allows users to open the directory and look at its contents.

                  Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

                  $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

Here, we see that articleTable.csv is a file (the line begins with -) that has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as for all other users (r-- and r--).

The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx), so that user can look into the directory and add or remove files. Users in the group mygroup can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users have no permissions on the directory at all (---).

                  Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

                  $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

                  You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

                  You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your search using find and then pass the resulting list to chmod since it's not usual for all files in a directory structure to have the same permissions.
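
For example, a sketch that only adjusts the directories (not the files) under Project_GoldenDragon:

$ find Project_GoldenDragon -type d -exec chmod g+w {} \\;\n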

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

                  However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

                  $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

This will give the user otheruser permission to write to Project_GoldenDragon.

                  Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

                  Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

                  See https://linux.die.net/man/1/setfacl for more information.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

                  Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

                  $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

                  $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

                  Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

                  $ unzip myfile.zip\n

                  If we would like to make our own zip archive, we use zip:

                  $ zip myfiles.zip myfile1 myfile2 myfile3\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

                  Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

                  You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

                  $ tar -xf tarfile.tar\n

                  Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

                  $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n
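
Creating a compressed tarball yourself works the same way, combining -c (create) with -z (a sketch with placeholder names):

$ tar -czf results.tar.gz resultsdir\n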

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

                  Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

# cp, ln: <source(s)> <target>\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: <target> <source(s)>\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

                  If you use tar with the source files first then the first file will be overwritten. You can control the order of arguments of tar if it helps you remember:

                  $ tar -c source1 source2 source3 -f tarfile.tar\n
                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
                  1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

                  2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

                  3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

                  4. Remove the another/test directory with a single command.

                  5. Rename test to test2. Move test2/hostname.txt to your home directory.

                  6. Change the permission of test2 so only you can access it.

                  7. Create an empty job script named job.sh, and make it executable.

                  8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

                  The next chapter is on uploading files, especially important when using HPC-infrastructure.

                  "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

                  This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories. A very important skill.

                  "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

To print the current directory, use pwd or $PWD:

                  $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

                  "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

                  A very basic and commonly used command is ls, which can be used to list files and directories.

                  In its basic usage, it just prints the names of files and directories in the current directory. For example:

                  $ ls\nafile.txt some_directory \n

                  When provided an argument, it can be used to list the contents of a directory:

                  $ ls some_directory \none.txt two.txt\n

                  A couple of commonly used options include:

                  • detailed listing using ls -l:

                    $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • To print the size information in human-readable form, use the -h flag:

                    $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • also listing hidden files using the -a flag:

                    $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • ordering files by the most recent change using -rt:

                    $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

                  If you try to use ls on a file that doesn't exist, you will get a clear error message:

                  $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
                  "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

                  To change to a different directory, you can use the cd command:

                  $ cd some_directory\n

                  To change back to the previous directory you were in, there's a shortcut: cd -

                  Using cd without an argument results in returning back to your home directory:

                  $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

                  "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

                  The file command can be used to inspect what type of file you're dealing with:

                  $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
                  "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

An absolute filepath starts with / (or a variable whose value starts with /), which is also called the root of the filesystem.

                  Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

                  A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

                  Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

                  There are two special relative paths worth mentioning:

                  • . is a shorthand for the current directory
                  • .. is a shorthand for the parent of the current directory

                  You can also use .. when constructing relative paths, for example:

                  $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
                  "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

                  Each file and directory has particular permissions set on it, which can be queried using ls -l.

                  For example:

                  $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

The -rw-rw-r-- specifies both the type of file (- for files, d for directories; see the first character), and the permissions for user/group/others:

                  1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read and write permissions (not execute)
                  3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
                  4. the 3rd part r-- indicates that other users only have read permissions

                  The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

                  1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
                  2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

                  See also the chmod command later in this manual.

                  "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

                  find will crawl a series of directories and lists files matching given criteria.

                  For example, to look for the file named one.txt:

                  $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

To look for files using incomplete names, you can use a wildcard *; note that you need to escape the * by adding double quotes, to prevent Bash from expanding it into afile.txt:

                  $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

                  A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
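
For instance (just a sketch), to count the lines of every .txt file that find locates:

$ find . -name \"*.txt\" -exec wc -l {} \\;\n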

                  "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
                  • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
                  • When was your home directory created or last changed?
                  • Determine the name of the last changed file in /tmp.
                  • See how home directories are organised. Can you access the home directory of other users?

                  The next chapter will teach you how to interact with files and directories.

                  "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

                  To transfer files from and to the HPC, see the section about transferring files of the HPC manual

                  "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

                  After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

                  For example, you may see an error when submitting a job script that was edited on Windows:

                  sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

                  To fix this problem, you should run the dos2unix command on the file:

                  $ dos2unix filename\n
                  "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

As we end up in the home directory when connecting, it would be convenient if we could easily access our data and scratch storage. To facilitate this, we will create symbolic links to them in our home directory. This creates two symbolic links (they're like \"shortcuts\" on your desktop) pointing to the respective storage locations:

                  $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
                  "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, you use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ is the Control key, so ^O means Ctrl-O. The main commands are:

                  1. Open (\"Read\"): ^R

                  2. Save (\"Write Out\"): ^O

                  3. Exit: ^X

                  More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

                  "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

                  rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

You will need to run rsync from a computer where it is installed. Installing rsync is easiest on Linux: it comes pre-installed with a lot of distributions.

                  For example, to copy a folder with lots of CSV files:

                  $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

will copy the folder testfolder and its contents to $VSC_DATA, assuming the data symlink is present in your home directory (see the symlinks section).

                  The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

                  To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.
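
For example (same folder names as above, just with -P added):

$ rsync -rzvP testfolder vsc40000@login.hpc.ugent.be:data/\n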

                  To copy files to your local computer, you can also use rsync:

                  $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
                  This will copy the folder bioset and its contents on $VSC_DATA to a local folder named local_folder.

                  See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

                  "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
                  1. Download the file /etc/hostname to your local computer.

                  2. Upload a file to a subdirectory of your personal $VSC_DATA space.

                  3. Create a file named hello.txt and edit it using nano.

Now that you have a basic understanding, see the next chapter for some more in-depth concepts.

                  "}, {"location": "2023/donphan-gallade/", "title": "New Tier-2 clusters: donphan and gallade", "text": "

                  In April 2023, two new clusters were added to the HPC-UGent Tier-2 infrastructure: donphan and gallade.

                  This page provides some important information regarding these clusters, and how they differ from the clusters they are replacing (slaking and kirlia, respectively).

                  If you have any questions on using donphan or gallade, you can contact the HPC-UGent team.

                  For software installation requests, please use the request form.

                  "}, {"location": "2023/donphan-gallade/#donphan-debuginteractive-cluster", "title": "donphan: debug/interactive cluster", "text": "

                  donphan is the new debug/interactive cluster.

                  It replaces slaking, which will be retired on Monday 22 May 2023.

                  It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the HPC-UGent web portal, etc.

                  This cluster consists of 12 workernodes, each with:

                  • 2x 18-core Intel Xeon Gold 6240 (Cascade Lake @ 2.6 GHz) processor;
                  • one shared NVIDIA Ampere A2 GPU (16GB GPU memory)
                  • ~738 GiB of RAM memory;
                  • 1.6TB NVME local disk;
                  • HDR-100 InfiniBand interconnect;
                  • RHEL8 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/donphan\n

                  You can also start (interactive) sessions on donphan using the HPC-UGent web portal.

                  "}, {"location": "2023/donphan-gallade/#differences-compared-to-slaking", "title": "Differences compared to slaking", "text": ""}, {"location": "2023/donphan-gallade/#cpus", "title": "CPUs", "text": "

                  The most important difference between donphan and slaking workernodes is in the CPUs: while slaking workernodes featured Intel Haswell CPUs, which support SSE*, AVX, and AVX2 vector instructions, donphan features Intel Cascade Lake CPUs, which also support AVX-512 instructions, on top of SSE*, AVX, and AVX2.

                  Although software that was built on a slaking workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) should still run on a donphan workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions.

                  "}, {"location": "2023/donphan-gallade/#cluster-size", "title": "Cluster size", "text": "

                  The donphan cluster is significantly bigger than slaking, both in terms of number of workernodes and number of cores per workernode, and hence the potential performance impact of oversubscribed cores (see below) is less likely to occur in practice.

                  "}, {"location": "2023/donphan-gallade/#user-limits-and-oversubscription-on-donphan", "title": "User limits and oversubscription on donphan", "text": "

                  By imposing strict user limits and using oversubscription on this cluster, we ensure that anyone can get a job running without having to wait in the queue, albeit with limited resources.

The user limits for donphan include:

• max. 5 jobs in queue;
• max. 3 jobs running;
• max. 8 cores in total for running jobs;
• max. 27GB of memory in total for running jobs.

The job scheduler is configured to allow oversubscription of the available cores, which means that jobs will continue to start even if all cores are already occupied by running jobs. While this prevents waiting time in the queue, it does imply that performance will degrade when all cores are occupied and additional jobs continue to start running.

                  "}, {"location": "2023/donphan-gallade/#shared-gpu-on-donphan-workernodes", "title": "Shared GPU on donphan workernodes", "text": "

                  Each donphan workernode includes a single NVIDIA A2 GPU that can be used for light compute workloads, and to accelerate certain graphical tasks.

                  This GPU is shared across all jobs running on the workernode, and does not need to be requested explicitly (it is always available, similar to the local disk of the workernode).

                  Warning

                  Due to the shared nature of this GPU, you should assume that any data that is loaded in the GPU memory could potentially be accessed by other users, even after your processes have completed.

                  There are no strong security guarantees regarding data protection when using this shared GPU!

                  "}, {"location": "2023/donphan-gallade/#gallade-large-memory-cluster", "title": "gallade: large-memory cluster", "text": "

                  gallade is the new large-memory cluster.

                  It replaces kirlia, which will be retired on Monday 22 May 2023.

                  This cluster consists of 12 workernodes, each with:

• 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) processors;
• ~940 GiB of RAM memory;
• 1.5 TB NVMe local disk;
                  • HDR-100 InfiniBand interconnect;
                  • RHEL8 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/gallade\n

                  You can also start (interactive) sessions on gallade using the HPC-UGent web portal.

                  "}, {"location": "2023/donphan-gallade/#differences-compared-to-kirlia", "title": "Differences compared to kirlia", "text": ""}, {"location": "2023/donphan-gallade/#cpus_1", "title": "CPUs", "text": "

The most important difference between gallade and kirlia workernodes is in the CPUs: while kirlia workernodes featured Intel Cascade Lake CPUs, which support AVX-512 vector instructions (in addition to SSE*, AVX, and AVX2), gallade workernodes feature AMD Milan-X CPUs, which implement the Zen3 microarchitecture and hence do not support AVX-512 instructions (but do support SSE*, AVX, and AVX2).

As a result, software that was built on a kirlia workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) may no longer work on a gallade workernode, failing with Illegal instruction errors.

Therefore, you may need to recompile software in order to use it on gallade. Even if software built on kirlia does still run on gallade, it is strongly recommended to recompile it anyway, since there may be significant performance benefits.
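
A minimal sketch of rebuilding a simple C program so that the architecture-specific optimizations target gallade's CPUs; the source file name is a placeholder, and it assumes that a GCC module such as GCC/12.3.0 is available and that you compile on a gallade workernode (e.g. in an interactive session), so that -march=native picks up the AMD Milan-X architecture:

# run on a gallade workernode so that -march=native targets its AMD Milan-X CPUs\nmodule load GCC/12.3.0\ngcc -O2 -march=native -o mytool mytool.c\n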

                  "}, {"location": "2023/donphan-gallade/#memory-per-core", "title": "Memory per core", "text": "

Although gallade workernodes have significantly more RAM memory (~940 GiB) than kirlia workernodes had (~738 GiB), the average amount of memory per core is significantly lower on gallade than it was on kirlia, because a gallade workernode has 128 cores (so ~7.3 GiB per core on average), while a kirlia workernode had only 36 cores (so ~20.5 GiB per core on average).

It is important to take this aspect into account when submitting jobs to gallade, especially when requesting all cores via ppn=all. You may need to explicitly request more memory (see also here).
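
For example, a job whose processes need more than the default ~7.3 GiB per core can request memory explicitly. A minimal sketch of Torque-style job directives; the values are placeholders and should be adapted to your application:

# request 8 cores, but more memory than the default per-core share\n#PBS -l nodes=1:ppn=8\n#PBS -l mem=64gb\n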

                  "}, {"location": "2023/shinx/", "title": "New Tier-2 cluster: shinx", "text": "

                  In October 2023, a new pilot cluster was added to the HPC-UGent Tier-2 infrastructure: shinx.

                  This page provides some important information regarding this cluster, and how it differs from the clusters it is replacing (swalot and victini).

                  If you have any questions on using shinx, you can contact the HPC-UGent team.

                  For software installation requests, please use the request form.

                  "}, {"location": "2023/shinx/#shinx-generic-cpu-cluster", "title": "shinx: generic CPU cluster", "text": "

                  shinx is a new CPU-only cluster.

It replaces swalot, which was retired on Wednesday 01 November 2023, and victini, which was retired on Monday 05 February 2024.

                  It is primarily for regular CPU compute use.

                  This cluster consists of 48 workernodes, each with:

• 2x 96-core AMD EPYC 9654 (Genoa @ 2.4 GHz) processors;
• ~360 GiB of RAM memory;
• 400 GB local disk;
                  • NDR-200 InfiniBand interconnect;
                  • RHEL9 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/shinx\n

                  You can also start (interactive) sessions on shinx using the HPC-UGent web portal.

                  "}, {"location": "2023/shinx/#differences-compared-to-swalot-and-victini", "title": "Differences compared to swalot and victini.", "text": ""}, {"location": "2023/shinx/#cpus", "title": "CPUs", "text": "

                  The most important difference between shinx and swalot/victini workernodes is in the CPUs: while swalot and victini workernodes featured Intel CPUs, shinx workernodes have AMD Genoa CPUs.

                  Although software that was built on a swalot or victini workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing on swalot).

                  "}, {"location": "2023/shinx/#cluster-size", "title": "Cluster size", "text": "

The shinx cluster is significantly bigger than swalot and victini in total number of cores and in number of cores per workernode, but not in number of workernodes. In particular, requesting all cores via ppn=all might be something to reconsider.

The amount of available memory per core is 1.9 GiB, which is lower than on the swalot nodes (6.2 GiB per core) and the victini nodes (2.5 GiB per core).
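
If each of your processes needs more than ~1.9 GiB, one option is to request a full node but start fewer processes than there are cores. A minimal sketch assuming an MPI application launched with the vsc-mympirun wrapper; my_mpi_app is a placeholder:

#PBS -l nodes=1:ppn=all\n# start only 96 ranks on the 192-core node, so each rank gets roughly 3.8 GiB\nmympirun --hybrid 96 ./my_mpi_app\n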

                  "}, {"location": "2023/shinx/#comparison-with-doduo", "title": "Comparison with doduo", "text": "

Since doduo is currently the largest CPU cluster of the UGent Tier-2 infrastructure, and it is also based on AMD EPYC CPUs, it is worth pointing out that, roughly speaking, one shinx node is equivalent to two doduo nodes.

                  Although software that was built on a doduo workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing from doduo).

                  "}, {"location": "2023/shinx/#other-remarks", "title": "Other remarks", "text": "
• Possible issues with thread pinning: we have seen, especially on the Tier-1 dodrio cluster, that in certain cases thread pinning is invoked where it is not expected. A typical symptom is that all started processes end up pinned to a single core. You can try to mitigate this yourself by setting export OMP_PROC_BIND=false (see the sketch below), but always report the issue when it occurs so we can keep track of the problem. Do not set this workaround unconditionally; only apply it for the specific tools that are affected.
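
A minimal sketch of applying the workaround to a single affected command only, rather than exporting it for the whole job script; the tool name and arguments are placeholders:

# disable OpenMP thread binding only for the command that is affected\nOMP_PROC_BIND=false ./affected_tool input.dat\n
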
                  "}, {"location": "2023/shinx/#shinx-pilot-phase-23102023-15072024", "title": "Shinx pilot phase (23/10/2023-15/07/2024)", "text": "

As usual with any pilot phase, you need to be a member of the gpilot group, and to start using this cluster run:

                  module swap cluster/.shinx\n

Because the delivery time of the InfiniBand network is very long, we only expect to have all the material by the end of February 2024. However, all workernodes will already be delivered in the week of 20 October 2023.

                  As such, we will have an extended pilot phase in 3 stages:

                  "}, {"location": "2023/shinx/#stage-0-23102023-17112023", "title": "Stage 0: 23/10/2023-17/11/2023", "text": "
                  • Minimal cluster to test software and nodes

                    • Only 2 or 3 nodes available
                    • FDR or EDR infiniband network
                    • EL8 OS
                  • Retirement of swalot cluster (as of 01 November 2023)

                  • Racking of stage 1 nodes
                  "}, {"location": "2023/shinx/#stage-1-01122023-01032024", "title": "Stage 1: 01/12/2023-01/03/2024", "text": "
                  • 2/3 cluster size

                    • 32 nodes (with max job size of 16 nodes)
                    • EDR Infiniband
                    • EL8 OS
• Retirement of victini (as of 05 February 2024)

                  • Racking of last 16 nodes
                  • Installation of NDR/NDR-200 infiniband network
                  "}, {"location": "2023/shinx/#stage-2-19042024-15072024", "title": "Stage 2 (19/04/2024-15/07/2024)", "text": "
                  • Full size cluster

                    • 48 nodes (no job size limit)
                    • NDR-200 Infiniband (single switch Infiniband topology)
                    • EL9 OS
• We expect to plan a full Tier-2 downtime in May 2024 to clean up, refactor and renew the core networks (Ethernet and InfiniBand) and some core services. It makes no sense to put shinx in production before that period, and the testing of the EL9 operating system will also take some time.

                  "}, {"location": "2023/shinx/#stage-3-15072024-", "title": "Stage 3 (15/07/2024 - )", "text": "
                  • Cluster in production using EL9 (starting with 9.4). Any user can now submit jobs.
                  "}, {"location": "2023/shinx/#using-doduo-software", "title": "Using doduo software", "text": "

For benchmarking and/or compatibility testing, you can try to use the doduo software stack by adding the following line to the job script before the actual software is loaded:

                  module swap env/software/doduo\n

We mainly expect problems with this in stage 2 of the pilot phase (and in the later production phase), due to the change in OS.

                  "}, {"location": "available_software/", "title": "Available software (via modules)", "text": "

                  This table gives an overview of all the available software on the different clusters.
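
Rather than scanning the table, you can also query the module system directly from a shell on the cluster; for example, to look up which versions of a particular package are available (BCFtools is just an example here):

# list all available versions of a given package\nmodule spider BCFtools\n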

                  "}, {"location": "available_software/detail/ABAQUS/", "title": "ABAQUS", "text": ""}, {"location": "available_software/detail/ABAQUS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABAQUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABAQUS, load one of these modules using a module load command like:

                  module load ABAQUS/2023\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABAQUS/2023 x x x x x x ABAQUS/2022-hotfix-2214 - x x - x x ABAQUS/2022 - x x - x x ABAQUS/2021-hotfix-2132 - x x - x x"}, {"location": "available_software/detail/ABINIT/", "title": "ABINIT", "text": ""}, {"location": "available_software/detail/ABINIT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABINIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABINIT, load one of these modules using a module load command like:

                  module load ABINIT/9.10.3-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABINIT/9.10.3-intel-2022a - - x - x x ABINIT/9.4.1-intel-2020b - x x x x x ABINIT/9.2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/ABRA2/", "title": "ABRA2", "text": ""}, {"location": "available_software/detail/ABRA2/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABRA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABRA2, load one of these modules using a module load command like:

                  module load ABRA2/2.23-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABRA2/2.23-GCC-10.2.0 - x x x x x ABRA2/2.23-GCC-9.3.0 - x x - x x ABRA2/2.22-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/ABRicate/", "title": "ABRicate", "text": ""}, {"location": "available_software/detail/ABRicate/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABRicate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABRicate, load one of these modules using a module load command like:

                  module load ABRicate/0.9.9-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABRicate/0.9.9-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ABySS/", "title": "ABySS", "text": ""}, {"location": "available_software/detail/ABySS/#available-modules", "title": "Available modules", "text": "

The overview below shows which ABySS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABySS, load one of these modules using a module load command like:

                  module load ABySS/2.3.7-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABySS/2.3.7-foss-2023a x x x x x x ABySS/2.1.5-foss-2019b - x x - x x"}, {"location": "available_software/detail/ACTC/", "title": "ACTC", "text": ""}, {"location": "available_software/detail/ACTC/#available-modules", "title": "Available modules", "text": "

The overview below shows which ACTC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ACTC, load one of these modules using a module load command like:

                  module load ACTC/1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ACTC/1.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ADMIXTURE/", "title": "ADMIXTURE", "text": ""}, {"location": "available_software/detail/ADMIXTURE/#available-modules", "title": "Available modules", "text": "

The overview below shows which ADMIXTURE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ADMIXTURE, load one of these modules using a module load command like:

                  module load ADMIXTURE/1.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ADMIXTURE/1.3.0 - x x - x x"}, {"location": "available_software/detail/AICSImageIO/", "title": "AICSImageIO", "text": ""}, {"location": "available_software/detail/AICSImageIO/#available-modules", "title": "Available modules", "text": "

The overview below shows which AICSImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AICSImageIO, load one of these modules using a module load command like:

                  module load AICSImageIO/4.14.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AICSImageIO/4.14.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/AMAPVox/", "title": "AMAPVox", "text": ""}, {"location": "available_software/detail/AMAPVox/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMAPVox installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMAPVox, load one of these modules using a module load command like:

                  module load AMAPVox/1.9.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMAPVox/1.9.4-Java-11 x x x - x x"}, {"location": "available_software/detail/AMICA/", "title": "AMICA", "text": ""}, {"location": "available_software/detail/AMICA/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMICA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMICA, load one of these modules using a module load command like:

                  module load AMICA/2024.1.19-intel-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMICA/2024.1.19-intel-2023a x x x x x x"}, {"location": "available_software/detail/AMOS/", "title": "AMOS", "text": ""}, {"location": "available_software/detail/AMOS/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMOS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMOS, load one of these modules using a module load command like:

                  module load AMOS/3.1.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMOS/3.1.0-foss-2023a x x x x x x AMOS/3.1.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/AMPtk/", "title": "AMPtk", "text": ""}, {"location": "available_software/detail/AMPtk/#available-modules", "title": "Available modules", "text": "

The overview below shows which AMPtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMPtk, load one of these modules using a module load command like:

                  module load AMPtk/1.5.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMPtk/1.5.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/ANTLR/", "title": "ANTLR", "text": ""}, {"location": "available_software/detail/ANTLR/#available-modules", "title": "Available modules", "text": "

The overview below shows which ANTLR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ANTLR, load one of these modules using a module load command like:

                  module load ANTLR/2.7.7-GCCcore-10.3.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ANTLR/2.7.7-GCCcore-10.3.0-Java-11 - x x - x x ANTLR/2.7.7-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ANTs/", "title": "ANTs", "text": ""}, {"location": "available_software/detail/ANTs/#available-modules", "title": "Available modules", "text": "

The overview below shows which ANTs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ANTs, load one of these modules using a module load command like:

                  module load ANTs/2.3.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ANTs/2.3.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/APR-util/", "title": "APR-util", "text": ""}, {"location": "available_software/detail/APR-util/#available-modules", "title": "Available modules", "text": "

The overview below shows which APR-util installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using APR-util, load one of these modules using a module load command like:

                  module load APR-util/1.6.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty APR-util/1.6.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/APR/", "title": "APR", "text": ""}, {"location": "available_software/detail/APR/#available-modules", "title": "Available modules", "text": "

The overview below shows which APR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using APR, load one of these modules using a module load command like:

                  module load APR/1.7.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty APR/1.7.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ARAGORN/", "title": "ARAGORN", "text": ""}, {"location": "available_software/detail/ARAGORN/#available-modules", "title": "Available modules", "text": "

The overview below shows which ARAGORN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ARAGORN, load one of these modules using a module load command like:

                  module load ARAGORN/1.2.41-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ARAGORN/1.2.41-foss-2021b x x x - x x ARAGORN/1.2.38-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/ASCAT/", "title": "ASCAT", "text": ""}, {"location": "available_software/detail/ASCAT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ASCAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ASCAT, load one of these modules using a module load command like:

                  module load ASCAT/3.1.2-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ASCAT/3.1.2-foss-2022b-R-4.2.2 x x x x x x ASCAT/3.1.2-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ASE/", "title": "ASE", "text": ""}, {"location": "available_software/detail/ASE/#available-modules", "title": "Available modules", "text": "

The overview below shows which ASE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ASE, load one of these modules using a module load command like:

                  module load ASE/3.22.1-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ASE/3.22.1-intel-2022a x x x x x x ASE/3.22.1-intel-2021b x x x - x x ASE/3.22.1-gomkl-2021a x x x x x x ASE/3.22.1-foss-2022a x x x x x x ASE/3.22.1-foss-2021b x x x - x x ASE/3.21.1-fosscuda-2020b - - - - x - ASE/3.21.1-foss-2020b - - x x x - ASE/3.20.1-intel-2020a-Python-3.8.2 x x x x x x ASE/3.20.1-fosscuda-2020b - - - - x - ASE/3.20.1-foss-2020b - x x x x x ASE/3.19.0-intel-2019b-Python-3.7.4 - x x - x x ASE/3.19.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ATK/", "title": "ATK", "text": ""}, {"location": "available_software/detail/ATK/#available-modules", "title": "Available modules", "text": "

The overview below shows which ATK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ATK, load one of these modules using a module load command like:

                  module load ATK/2.38.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ATK/2.38.0-GCCcore-12.3.0 x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x ATK/2.38.0-GCCcore-11.3.0 x x x x x x ATK/2.36.0-GCCcore-11.2.0 x x x x x x ATK/2.36.0-GCCcore-10.3.0 x x x - x x ATK/2.36.0-GCCcore-10.2.0 x x x x x x ATK/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/AUGUSTUS/", "title": "AUGUSTUS", "text": ""}, {"location": "available_software/detail/AUGUSTUS/#available-modules", "title": "Available modules", "text": "

The overview below shows which AUGUSTUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AUGUSTUS, load one of these modules using a module load command like:

                  module load AUGUSTUS/3.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AUGUSTUS/3.4.0-foss-2021b x x x x x x AUGUSTUS/3.4.0-foss-2020b x x x x x x AUGUSTUS/3.3.3-intel-2019b - x x - x x AUGUSTUS/3.3.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/Abseil/", "title": "Abseil", "text": ""}, {"location": "available_software/detail/Abseil/#available-modules", "title": "Available modules", "text": "

The overview below shows which Abseil installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Abseil, load one of these modules using a module load command like:

                  module load Abseil/20230125.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Abseil/20230125.3-GCCcore-12.3.0 x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/AdapterRemoval/", "title": "AdapterRemoval", "text": ""}, {"location": "available_software/detail/AdapterRemoval/#available-modules", "title": "Available modules", "text": "

The overview below shows which AdapterRemoval installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AdapterRemoval, load one of these modules using a module load command like:

                  module load AdapterRemoval/2.3.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AdapterRemoval/2.3.3-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/Albumentations/", "title": "Albumentations", "text": ""}, {"location": "available_software/detail/Albumentations/#available-modules", "title": "Available modules", "text": "

The overview below shows which Albumentations installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Albumentations, load one of these modules using a module load command like:

                  module load Albumentations/1.1.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Albumentations/1.1.0-foss-2021b x x x - x x Albumentations/1.1.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/AlphaFold/", "title": "AlphaFold", "text": ""}, {"location": "available_software/detail/AlphaFold/#available-modules", "title": "Available modules", "text": "

The overview below shows which AlphaFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AlphaFold, load one of these modules using a module load command like:

                  module load AlphaFold/2.3.4-foss-2022a-ColabFold\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AlphaFold/2.3.4-foss-2022a-ColabFold - - x - x - AlphaFold/2.3.4-foss-2022a-CUDA-11.7.0-ColabFold x - - - x - AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0 x - - - x - AlphaFold/2.3.1-foss-2022a x x x x x x AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1 x - - - x - AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.2.2-foss-2021a - x x - x x AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.1.2-foss-2021a - x x - x x AlphaFold/2.1.1-fosscuda-2020b x - - - x - AlphaFold/2.0.0-fosscuda-2020b x - - - x - AlphaFold/2.0.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/AlphaPulldown/", "title": "AlphaPulldown", "text": ""}, {"location": "available_software/detail/AlphaPulldown/#available-modules", "title": "Available modules", "text": "

The overview below shows which AlphaPulldown installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AlphaPulldown, load one of these modules using a module load command like:

                  module load AlphaPulldown/0.30.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AlphaPulldown/0.30.7-foss-2022a - - x - x - AlphaPulldown/0.30.4-fosscuda-2020b x - - - x - AlphaPulldown/0.30.4-foss-2020b x x x x x x"}, {"location": "available_software/detail/Altair-EDEM/", "title": "Altair-EDEM", "text": ""}, {"location": "available_software/detail/Altair-EDEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which Altair-EDEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Altair-EDEM, load one of these modules using a module load command like:

                  module load Altair-EDEM/2021.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Altair-EDEM/2021.2 - x x - x -"}, {"location": "available_software/detail/Amber/", "title": "Amber", "text": ""}, {"location": "available_software/detail/Amber/#available-modules", "title": "Available modules", "text": "

The overview below shows which Amber installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Amber, load one of these modules using a module load command like:

                  module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/AmberMini/", "title": "AmberMini", "text": ""}, {"location": "available_software/detail/AmberMini/#available-modules", "title": "Available modules", "text": "

The overview below shows which AmberMini installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AmberMini, load one of these modules using a module load command like:

                  module load AmberMini/16.16.0-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AmberMini/16.16.0-intel-2020a - x x - x x"}, {"location": "available_software/detail/AmberTools/", "title": "AmberTools", "text": ""}, {"location": "available_software/detail/AmberTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which AmberTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AmberTools, load one of these modules using a module load command like:

                  module load AmberTools/20-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AmberTools/20-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Anaconda3/", "title": "Anaconda3", "text": ""}, {"location": "available_software/detail/Anaconda3/#available-modules", "title": "Available modules", "text": "

The overview below shows which Anaconda3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Anaconda3, load one of these modules using a module load command like:

                  module load Anaconda3/2023.03-1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Anaconda3/2023.03-1 x x x x x x Anaconda3/2020.11 - x x - x - Anaconda3/2020.07 - x - - - - Anaconda3/2020.02 - x x - x -"}, {"location": "available_software/detail/Annocript/", "title": "Annocript", "text": ""}, {"location": "available_software/detail/Annocript/#available-modules", "title": "Available modules", "text": "

The overview below shows which Annocript installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Annocript, load one of these modules using a module load command like:

                  module load Annocript/2.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Annocript/2.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ArchR/", "title": "ArchR", "text": ""}, {"location": "available_software/detail/ArchR/#available-modules", "title": "Available modules", "text": "

The overview below shows which ArchR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ArchR, load one of these modules using a module load command like:

                  module load ArchR/1.0.2-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ArchR/1.0.2-foss-2023a-R-4.3.2 x x x x x x ArchR/1.0.1-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Archive-Zip/", "title": "Archive-Zip", "text": ""}, {"location": "available_software/detail/Archive-Zip/#available-modules", "title": "Available modules", "text": "

The overview below shows which Archive-Zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Archive-Zip, load one of these modules using a module load command like:

                  module load Archive-Zip/1.68-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Archive-Zip/1.68-GCCcore-11.3.0 x x x - x x Archive-Zip/1.68-GCCcore-11.2.0 x x x - x x Archive-Zip/1.68-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Arlequin/", "title": "Arlequin", "text": ""}, {"location": "available_software/detail/Arlequin/#available-modules", "title": "Available modules", "text": "

The overview below shows which Arlequin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Arlequin, load one of these modules using a module load command like:

                  module load Arlequin/3.5.2.2-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Arlequin/3.5.2.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Armadillo/", "title": "Armadillo", "text": ""}, {"location": "available_software/detail/Armadillo/#available-modules", "title": "Available modules", "text": "

The overview below shows which Armadillo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Armadillo, load one of these modules using a module load command like:

                  module load Armadillo/12.6.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Armadillo/12.6.2-foss-2023a x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/Arrow/", "title": "Arrow", "text": ""}, {"location": "available_software/detail/Arrow/#available-modules", "title": "Available modules", "text": "

The overview below shows which Arrow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Arrow, load one of these modules using a module load command like:

                  module load Arrow/14.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Arrow/14.0.1-gfbf-2023a x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x Arrow/8.0.0-foss-2022a x x x x x x Arrow/6.0.0-foss-2021b x x x x x x Arrow/6.0.0-foss-2021a - x x - x x Arrow/0.17.1-intel-2020b - x x - x x Arrow/0.17.1-intel-2020a-Python-3.8.2 - x x - x x Arrow/0.17.1-fosscuda-2020b - - - - x - Arrow/0.17.1-foss-2020a-Python-3.8.2 - x x - x x Arrow/0.16.0-intel-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/ArviZ/", "title": "ArviZ", "text": ""}, {"location": "available_software/detail/ArviZ/#available-modules", "title": "Available modules", "text": "

The overview below shows which ArviZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ArviZ, load one of these modules using a module load command like:

                  module load ArviZ/0.16.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ArviZ/0.16.1-foss-2023a x x x x x x ArviZ/0.12.1-foss-2021a x x x x x x ArviZ/0.11.4-intel-2021b x x x - x x ArviZ/0.11.1-intel-2020b - x x - x x ArviZ/0.7.0-intel-2019b-Python-3.7.4 - x x - x x ArviZ/0.7.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Aspera-CLI/", "title": "Aspera-CLI", "text": ""}, {"location": "available_software/detail/Aspera-CLI/#available-modules", "title": "Available modules", "text": "

The overview below shows which Aspera-CLI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Aspera-CLI, load one of these modules using a module load command like:

                  module load Aspera-CLI/3.9.6.1467.159c5b1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Aspera-CLI/3.9.6.1467.159c5b1 - x x - x -"}, {"location": "available_software/detail/AutoDock-Vina/", "title": "AutoDock-Vina", "text": ""}, {"location": "available_software/detail/AutoDock-Vina/#available-modules", "title": "Available modules", "text": "

The overview below shows which AutoDock-Vina installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoDock-Vina, load one of these modules using a module load command like:

                  module load AutoDock-Vina/1.2.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoDock-Vina/1.2.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/AutoGeneS/", "title": "AutoGeneS", "text": ""}, {"location": "available_software/detail/AutoGeneS/#available-modules", "title": "Available modules", "text": "

The overview below shows which AutoGeneS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoGeneS, load one of these modules using a module load command like:

                  module load AutoGeneS/1.0.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoGeneS/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/AutoMap/", "title": "AutoMap", "text": ""}, {"location": "available_software/detail/AutoMap/#available-modules", "title": "Available modules", "text": "

The overview below shows which AutoMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoMap, load one of these modules using a module load command like:

                  module load AutoMap/1.0-foss-2019b-20200324\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoMap/1.0-foss-2019b-20200324 - x x - x x"}, {"location": "available_software/detail/Autoconf/", "title": "Autoconf", "text": ""}, {"location": "available_software/detail/Autoconf/#available-modules", "title": "Available modules", "text": "

The overview below shows which Autoconf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Autoconf, load one of these modules using a module load command like:

                  module load Autoconf/2.71-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Autoconf/2.71-GCCcore-13.2.0 x x x x x x Autoconf/2.71-GCCcore-12.3.0 x x x x x x Autoconf/2.71-GCCcore-12.2.0 x x x x x x Autoconf/2.71-GCCcore-11.3.0 x x x x x x Autoconf/2.71-GCCcore-11.2.0 x x x x x x Autoconf/2.71-GCCcore-10.3.0 x x x x x x Autoconf/2.71 x x x x x x Autoconf/2.69-GCCcore-10.2.0 x x x x x x Autoconf/2.69-GCCcore-9.3.0 x x x x x x Autoconf/2.69-GCCcore-8.3.0 x x x x x x Autoconf/2.69-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Automake/", "title": "Automake", "text": ""}, {"location": "available_software/detail/Automake/#available-modules", "title": "Available modules", "text": "

The overview below shows which Automake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Automake, load one of these modules using a module load command like:

                  module load Automake/1.16.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Automake/1.16.5-GCCcore-13.2.0 x x x x x x Automake/1.16.5-GCCcore-12.3.0 x x x x x x Automake/1.16.5-GCCcore-12.2.0 x x x x x x Automake/1.16.5-GCCcore-11.3.0 x x x x x x Automake/1.16.5 x x x x x x Automake/1.16.4-GCCcore-11.2.0 x x x x x x Automake/1.16.3-GCCcore-10.3.0 x x x x x x Automake/1.16.2-GCCcore-10.2.0 x x x x x x Automake/1.16.1-GCCcore-9.3.0 x x x x x x Automake/1.16.1-GCCcore-8.3.0 x x x x x x Automake/1.16.1-GCCcore-8.2.0 - x - - - - Automake/1.15.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Autotools/", "title": "Autotools", "text": ""}, {"location": "available_software/detail/Autotools/#available-modules", "title": "Available modules", "text": "

The overview below shows which Autotools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Autotools, load one of these modules using a module load command like:

                  module load Autotools/20220317-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Autotools/20220317-GCCcore-13.2.0 x x x x x x Autotools/20220317-GCCcore-12.3.0 x x x x x x Autotools/20220317-GCCcore-12.2.0 x x x x x x Autotools/20220317-GCCcore-11.3.0 x x x x x x Autotools/20220317 x x x x x x Autotools/20210726-GCCcore-11.2.0 x x x x x x Autotools/20210128-GCCcore-10.3.0 x x x x x x Autotools/20200321-GCCcore-10.2.0 x x x x x x Autotools/20180311-GCCcore-9.3.0 x x x x x x Autotools/20180311-GCCcore-8.3.0 x x x x x x Autotools/20180311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Avogadro2/", "title": "Avogadro2", "text": ""}, {"location": "available_software/detail/Avogadro2/#available-modules", "title": "Available modules", "text": "

The overview below shows which Avogadro2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Avogadro2, load one of these modules using a module load command like:

                  module load Avogadro2/1.97.0-linux-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Avogadro2/1.97.0-linux-x86_64 x x x - x x"}, {"location": "available_software/detail/BAMSurgeon/", "title": "BAMSurgeon", "text": ""}, {"location": "available_software/detail/BAMSurgeon/#available-modules", "title": "Available modules", "text": "

The overview below shows which BAMSurgeon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BAMSurgeon, load one of these modules using a module load command like:

                  module load BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16 - x x - x -"}, {"location": "available_software/detail/BBMap/", "title": "BBMap", "text": ""}, {"location": "available_software/detail/BBMap/#available-modules", "title": "Available modules", "text": "

The overview below shows which BBMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BBMap, load one of these modules using a module load command like:

                  module load BBMap/39.01-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BBMap/39.01-GCC-12.2.0 x x x x x x BBMap/38.98-GCC-11.2.0 x x x - x x BBMap/38.87-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/BCFtools/", "title": "BCFtools", "text": ""}, {"location": "available_software/detail/BCFtools/#available-modules", "title": "Available modules", "text": "

The overview below shows which BCFtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BCFtools, load one of these modules using a module load command like:

                  module load BCFtools/1.18-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BCFtools/1.18-GCC-12.3.0 x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x BCFtools/1.15.1-GCC-11.3.0 x x x x x x BCFtools/1.14-GCC-11.2.0 x x x x x x BCFtools/1.12-GCC-10.3.0 x x x - x x BCFtools/1.12-GCC-10.2.0 - x x - x - BCFtools/1.11-GCC-10.2.0 x x x x x x BCFtools/1.10.2-iccifort-2019.5.281 - x x - x x BCFtools/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BDBag/", "title": "BDBag", "text": ""}, {"location": "available_software/detail/BDBag/#available-modules", "title": "Available modules", "text": "

The overview below shows which BDBag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BDBag, load one of these modules using a module load command like:

                  module load BDBag/1.6.3-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BDBag/1.6.3-intel-2021b x x x - x x"}, {"location": "available_software/detail/BEDOPS/", "title": "BEDOPS", "text": ""}, {"location": "available_software/detail/BEDOPS/#available-modules", "title": "Available modules", "text": "

The overview below shows which BEDOPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BEDOPS, load one of these modules using a module load command like:

                  module load BEDOPS/2.4.41-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BEDOPS/2.4.41-foss-2021b x x x x x x"}, {"location": "available_software/detail/BEDTools/", "title": "BEDTools", "text": ""}, {"location": "available_software/detail/BEDTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which BEDTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BEDTools, load one of these modules using a module load command like:

                  module load BEDTools/2.31.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BEDTools/2.31.0-GCC-12.3.0 x x x x x x BEDTools/2.30.0-GCC-12.2.0 x x x x x x BEDTools/2.30.0-GCC-11.3.0 x x x x x x BEDTools/2.30.0-GCC-11.2.0 x x x x x x BEDTools/2.30.0-GCC-10.2.0 - x x x x x BEDTools/2.29.2-GCC-9.3.0 - x x - x x BEDTools/2.29.2-GCC-8.3.0 - x x - x x BEDTools/2.19.1-GCC-8.3.0 - - - - - x"}, {"location": "available_software/detail/BLAST%2B/", "title": "BLAST+", "text": ""}, {"location": "available_software/detail/BLAST%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which BLAST+ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLAST+, load one of these modules using a module load command like:

                  module load BLAST+/2.14.1-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLAST+/2.14.1-gompi-2023a x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x BLAST+/2.13.0-gompi-2022a x x x x x x BLAST+/2.12.0-gompi-2021b x x x x x x BLAST+/2.11.0-gompi-2021a - x x x x x BLAST+/2.11.0-gompi-2020b x x x x x x BLAST+/2.10.1-iimpi-2020a - x x - x x BLAST+/2.10.1-gompi-2020a - x x - x x BLAST+/2.9.0-iimpi-2019b - x x - x x BLAST+/2.9.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/BLAT/", "title": "BLAT", "text": ""}, {"location": "available_software/detail/BLAT/#available-modules", "title": "Available modules", "text": "

The overview below shows which BLAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLAT, load one of these modules using a module load command like:

                  module load BLAT/3.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLAT/3.7-GCC-11.3.0 x x x x x x BLAT/3.5-GCC-9.3.0 - x x - x - BLAT/3.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BLIS/", "title": "BLIS", "text": ""}, {"location": "available_software/detail/BLIS/#available-modules", "title": "Available modules", "text": "

The overview below shows which BLIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLIS, load one of these modules using a module load command like:

                  module load BLIS/0.9.0-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLIS/0.9.0-GCC-13.2.0 x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x BLIS/0.9.0-GCC-11.3.0 x x x x x x BLIS/0.8.1-GCC-11.2.0 x x x x x x BLIS/0.8.1-GCC-10.3.0 x x x x x x BLIS/0.8.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/BRAKER/", "title": "BRAKER", "text": ""}, {"location": "available_software/detail/BRAKER/#available-modules", "title": "Available modules", "text": "

The overview below shows which BRAKER installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BRAKER, load one of these modules using a module load command like:

                  module load BRAKER/2.1.6-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BRAKER/2.1.6-foss-2021b x x x x x x BRAKER/2.1.6-foss-2020b x x x - x x BRAKER/2.1.5-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BSMAPz/", "title": "BSMAPz", "text": ""}, {"location": "available_software/detail/BSMAPz/#available-modules", "title": "Available modules", "text": "

The overview below shows which BSMAPz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BSMAPz, load one of these modules using a module load command like:

                  module load BSMAPz/1.1.1-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BSMAPz/1.1.1-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/BSseeker2/", "title": "BSseeker2", "text": ""}, {"location": "available_software/detail/BSseeker2/#available-modules", "title": "Available modules", "text": "

The overview below shows which BSseeker2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BSseeker2, load one of these modules using a module load command like:

                  module load BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16 - x - - - - BSseeker2/2.1.8-GCC-8.3.0-Python-2.7.16 - x - - - -"}, {"location": "available_software/detail/BUSCO/", "title": "BUSCO", "text": ""}, {"location": "available_software/detail/BUSCO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BUSCO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BUSCO, load one of these modules using a module load command like:

                  module load BUSCO/5.4.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BUSCO/5.4.3-foss-2021b x x x - x x BUSCO/5.1.2-foss-2020b - x x x x - BUSCO/4.1.2-foss-2020b - x x - x x BUSCO/4.0.6-foss-2020b - x x x x x BUSCO/4.0.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BUStools/", "title": "BUStools", "text": ""}, {"location": "available_software/detail/BUStools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BUStools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BUStools, load one of these modules using a module load command like:

                  module load BUStools/0.43.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BUStools/0.43.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/BWA/", "title": "BWA", "text": ""}, {"location": "available_software/detail/BWA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BWA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BWA, load one of these modules using a module load command like:

                  module load BWA/0.7.17-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BWA/0.7.17-iccifort-2019.5.281 - x - - - - BWA/0.7.17-GCCcore-12.3.0 x x x x x x BWA/0.7.17-GCCcore-12.2.0 x x x x x x BWA/0.7.17-GCCcore-11.3.0 x x x x x x BWA/0.7.17-GCCcore-11.2.0 x x x x x x BWA/0.7.17-GCC-10.2.0 - x x x x x BWA/0.7.17-GCC-9.3.0 - x x - x x BWA/0.7.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BamTools/", "title": "BamTools", "text": ""}, {"location": "available_software/detail/BamTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BamTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BamTools, load one of these modules using a module load command like:

                  module load BamTools/2.5.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BamTools/2.5.2-GCC-12.3.0 x x x x x x BamTools/2.5.2-GCC-12.2.0 x x x x x x BamTools/2.5.2-GCC-11.3.0 x x x x x x BamTools/2.5.2-GCC-11.2.0 x x x x x x BamTools/2.5.1-iccifort-2019.5.281 - x x - x x BamTools/2.5.1-GCC-10.2.0 x x x x x x BamTools/2.5.1-GCC-9.3.0 - x x - x x BamTools/2.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bambi/", "title": "Bambi", "text": ""}, {"location": "available_software/detail/Bambi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bambi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bambi, load one of these modules using a module load command like:

                  module load Bambi/0.7.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bambi/0.7.1-intel-2021b x x x - x x"}, {"location": "available_software/detail/Bandage/", "title": "Bandage", "text": ""}, {"location": "available_software/detail/Bandage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bandage installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bandage, load one of these modules using a module load command like:

                  module load Bandage/0.9.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bandage/0.9.0-GCCcore-11.2.0 x x x - x x Bandage/0.8.1_Centos - x x x x x"}, {"location": "available_software/detail/BatMeth2/", "title": "BatMeth2", "text": ""}, {"location": "available_software/detail/BatMeth2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BatMeth2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BatMeth2, load one of these modules using a module load command like:

                  module load BatMeth2/2.1-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BatMeth2/2.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/BayeScEnv/", "title": "BayeScEnv", "text": ""}, {"location": "available_software/detail/BayeScEnv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayeScEnv installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BayeScEnv, load one of these modules using a module load command like:

                  module load BayeScEnv/1.1-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayeScEnv/1.1-iccifort-2019.5.281 - x - - - - BayeScEnv/1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/BayeScan/", "title": "BayeScan", "text": ""}, {"location": "available_software/detail/BayeScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayeScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BayeScan, load one of these modules using a module load command like:

                  module load BayeScan/2.1-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayeScan/2.1-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/BayesAss3-SNPs/", "title": "BayesAss3-SNPs", "text": ""}, {"location": "available_software/detail/BayesAss3-SNPs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayesAss3-SNPs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BayesAss3-SNPs, load one of these modules using a module load command like:

                  module load BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/BayesPrism/", "title": "BayesPrism", "text": ""}, {"location": "available_software/detail/BayesPrism/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayesPrism installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BayesPrism, load one of these modules using a module load command like:

                  module load BayesPrism/2.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayesPrism/2.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Bazel/", "title": "Bazel", "text": ""}, {"location": "available_software/detail/Bazel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bazel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bazel, load one of these modules using a module load command like:

                  module load Bazel/6.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bazel/6.3.1-GCCcore-12.3.0 x x x x x x Bazel/6.3.1-GCCcore-12.2.0 x x x x x x Bazel/5.1.1-GCCcore-11.3.0 x x x x x x Bazel/4.2.2-GCCcore-11.2.0 - - - x - - Bazel/3.7.2-GCCcore-11.2.0 x x x x x x Bazel/3.7.2-GCCcore-10.3.0 x x x x x x Bazel/3.7.2-GCCcore-10.2.0 x x x x x x Bazel/3.6.0-GCCcore-9.3.0 - x x - x x Bazel/3.4.1-GCCcore-8.3.0 - - x - x x Bazel/2.0.0-GCCcore-10.2.0 - x x x x x Bazel/2.0.0-GCCcore-8.3.0 - x x - x x Bazel/0.29.1-GCCcore-8.3.0 - x x - x x Bazel/0.26.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Beast/", "title": "Beast", "text": ""}, {"location": "available_software/detail/Beast/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Beast installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Beast, load one of these modules using a module load command like:

                  module load Beast/2.7.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Beast/2.7.3-GCC-11.3.0 x x x x x x Beast/2.6.4-GCC-10.2.0 - x x - x - Beast/1.10.5pre1-GCC-11.3.0 x x x - x x Beast/1.10.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/BeautifulSoup/", "title": "BeautifulSoup", "text": ""}, {"location": "available_software/detail/BeautifulSoup/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BeautifulSoup, load one of these modules using a module load command like:

                  module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x BeautifulSoup/4.11.1-GCCcore-12.2.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.3.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.2.0 x x x - x x BeautifulSoup/4.10.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/BerkeleyGW/", "title": "BerkeleyGW", "text": ""}, {"location": "available_software/detail/BerkeleyGW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BerkeleyGW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BerkeleyGW, load one of these modules using a module load command like:

                  module load BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4 - x x - x x BerkeleyGW/2.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BiG-SCAPE/", "title": "BiG-SCAPE", "text": ""}, {"location": "available_software/detail/BiG-SCAPE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BiG-SCAPE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BiG-SCAPE, load one of these modules using a module load command like:

                  module load BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BigDFT/", "title": "BigDFT", "text": ""}, {"location": "available_software/detail/BigDFT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BigDFT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BigDFT, load one of these modules using a module load command like:

                  module load BigDFT/1.9.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BigDFT/1.9.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/BinSanity/", "title": "BinSanity", "text": ""}, {"location": "available_software/detail/BinSanity/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BinSanity installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BinSanity, load one of these modules using a module load command like:

                  module load BinSanity/0.3.5-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BinSanity/0.3.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Bio-DB-HTS/", "title": "Bio-DB-HTS", "text": ""}, {"location": "available_software/detail/Bio-DB-HTS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-DB-HTS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bio-DB-HTS, load one of these modules using a module load command like:

                  module load Bio-DB-HTS/3.01-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-DB-HTS/3.01-GCC-11.3.0 x x x - x x Bio-DB-HTS/3.01-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Bio-EUtilities/", "title": "Bio-EUtilities", "text": ""}, {"location": "available_software/detail/Bio-EUtilities/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-EUtilities installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bio-EUtilities, load one of these modules using a module load command like:

                  module load Bio-EUtilities/1.76-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-EUtilities/1.76-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bio-SearchIO-hmmer/", "title": "Bio-SearchIO-hmmer", "text": ""}, {"location": "available_software/detail/Bio-SearchIO-hmmer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-SearchIO-hmmer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

                  module load Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/BioPerl/", "title": "BioPerl", "text": ""}, {"location": "available_software/detail/BioPerl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BioPerl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BioPerl, load one of these modules using a module load command like:

                  module load BioPerl/1.7.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BioPerl/1.7.8-GCCcore-11.3.0 x x x x x x BioPerl/1.7.8-GCCcore-11.2.0 x x x x x x BioPerl/1.7.8-GCCcore-10.2.0 - x x x x x BioPerl/1.7.7-GCCcore-9.3.0 - x x - x x BioPerl/1.7.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Biopython/", "title": "Biopython", "text": ""}, {"location": "available_software/detail/Biopython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Biopython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Biopython, load one of these modules using a module load command like:

                  module load Biopython/1.83-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Biopython/1.83-foss-2023a x x x x x x Biopython/1.81-foss-2022b x x x x x x Biopython/1.79-foss-2022a x x x x x x Biopython/1.79-foss-2021b x x x x x x Biopython/1.79-foss-2021a x x x x x x Biopython/1.78-intel-2020b - x x - x x Biopython/1.78-intel-2020a-Python-3.8.2 - x x - x x Biopython/1.78-fosscuda-2020b x - - - x - Biopython/1.78-foss-2020b x x x x x x Biopython/1.78-foss-2020a-Python-3.8.2 - x x - x x Biopython/1.76-foss-2021b-Python-2.7.18 x x x x x x Biopython/1.76-foss-2020b-Python-2.7.18 - x x x x x Biopython/1.75-intel-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Bismark/", "title": "Bismark", "text": ""}, {"location": "available_software/detail/Bismark/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bismark installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bismark, load one of these modules using a module load command like:

                  module load Bismark/0.23.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bismark/0.23.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/Bison/", "title": "Bison", "text": ""}, {"location": "available_software/detail/Bison/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bison installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bison, load one of these modules using a module load command like:

                  module load Bison/3.8.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bison/3.8.2-GCCcore-13.2.0 x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x Bison/3.8.2-GCCcore-11.3.0 x x x x x x Bison/3.8.2 x x x x x x Bison/3.7.6-GCCcore-11.2.0 x x x x x x Bison/3.7.6-GCCcore-10.3.0 x x x x x x Bison/3.7.6 x x x - x - Bison/3.7.1-GCCcore-10.2.0 x x x x x x Bison/3.7.1 x x x - x - Bison/3.5.3-GCCcore-9.3.0 x x x x x x Bison/3.5.3 x x x - x - Bison/3.3.2-GCCcore-8.3.0 x x x x x x Bison/3.3.2 x x x x x x Bison/3.0.5-GCCcore-8.2.0 - x - - - - Bison/3.0.5 - x - - - x Bison/3.0.4 x x x x x x"}, {"location": "available_software/detail/Blender/", "title": "Blender", "text": ""}, {"location": "available_software/detail/Blender/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blender installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Blender, load one of these modules using a module load command like:

                  module load Blender/3.5.0-linux-x86_64-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blender/3.5.0-linux-x86_64-CUDA-11.7.0 x x x x x x Blender/3.3.1-linux-x86_64-CUDA-11.7.0 x - - - x - Blender/3.3.1-linux-x86_64 x x x - x x Blender/2.81-intel-2019b-Python-3.7.4 - x x - x x Blender/2.81-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Block/", "title": "Block", "text": ""}, {"location": "available_software/detail/Block/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Block installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Block, load one of these modules using a module load command like:

                  module load Block/1.5.3-20200525-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Block/1.5.3-20200525-foss-2022b x x x x x x Block/1.5.3-20200525-foss-2022a - x x x x x"}, {"location": "available_software/detail/Blosc/", "title": "Blosc", "text": ""}, {"location": "available_software/detail/Blosc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blosc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Blosc, load one of these modules using a module load command like:

                  module load Blosc/1.21.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blosc/1.21.3-GCCcore-11.3.0 x x x x x x Blosc/1.21.1-GCCcore-11.2.0 x x x x x x Blosc/1.21.0-GCCcore-10.3.0 x x x x x x Blosc/1.21.0-GCCcore-10.2.0 - x x x x x Blosc/1.17.1-GCCcore-9.3.0 x x x x x x Blosc/1.17.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Blosc2/", "title": "Blosc2", "text": ""}, {"location": "available_software/detail/Blosc2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blosc2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Blosc2, load one of these modules using a module load command like:

                  module load Blosc2/2.6.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blosc2/2.6.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Bonito/", "title": "Bonito", "text": ""}, {"location": "available_software/detail/Bonito/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bonito installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bonito, load one of these modules using a module load command like:

                  module load Bonito/0.4.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bonito/0.4.0-fosscuda-2020b - - - - x - Bonito/0.3.8-fosscuda-2020b - - - - x - Bonito/0.1.0-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/Bonnie%2B%2B/", "title": "Bonnie++", "text": ""}, {"location": "available_software/detail/Bonnie%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bonnie++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bonnie++, load one of these modules using a module load command like:

                  module load Bonnie++/2.00a-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bonnie++/2.00a-GCC-10.3.0 - x - - - -"}, {"location": "available_software/detail/Boost.MPI/", "title": "Boost.MPI", "text": ""}, {"location": "available_software/detail/Boost.MPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Boost.MPI, load one of these modules using a module load command like:

                  module load Boost.MPI/1.81.0-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.MPI/1.81.0-gompi-2022b x x x x x x Boost.MPI/1.79.0-gompi-2022a - x x x x x Boost.MPI/1.77.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Boost.Python-NumPy/", "title": "Boost.Python-NumPy", "text": ""}, {"location": "available_software/detail/Boost.Python-NumPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.Python-NumPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Boost.Python-NumPy, load one of these modules using a module load command like:

                  module load Boost.Python-NumPy/1.79.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.Python-NumPy/1.79.0-foss-2022a - - x - x -"}, {"location": "available_software/detail/Boost.Python/", "title": "Boost.Python", "text": ""}, {"location": "available_software/detail/Boost.Python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Boost.Python, load one of these modules using a module load command like:

                  module load Boost.Python/1.79.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.Python/1.79.0-GCC-11.3.0 x x x x x x Boost.Python/1.77.0-GCC-11.2.0 x x x - x x Boost.Python/1.72.0-iimpi-2020a - x x - x x Boost.Python/1.71.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Boost/", "title": "Boost", "text": ""}, {"location": "available_software/detail/Boost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Boost, load one of these modules using a module load command like:

                  module load Boost/1.82.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost/1.82.0-GCC-12.3.0 x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x Boost/1.79.0-GCC-11.3.0 x x x x x x Boost/1.79.0-GCC-11.2.0 x x x x x x Boost/1.77.0-intel-compilers-2021.4.0 x x x x x x Boost/1.77.0-GCC-11.2.0 x x x x x x Boost/1.76.0-intel-compilers-2021.2.0 - x x - x x Boost/1.76.0-GCC-10.3.0 x x x x x x Boost/1.75.0-GCC-11.2.0 x x x x x x Boost/1.74.0-iccifort-2020.4.304 - x x x x x Boost/1.74.0-GCC-10.2.0 x x x x x x Boost/1.72.0-iompi-2020a - x - - - - Boost/1.72.0-iimpi-2020a x x x x x x Boost/1.72.0-gompi-2020a - x x - x x Boost/1.71.0-iimpi-2019b - x x - x x Boost/1.71.0-gompi-2019b x x x - x x"}, {"location": "available_software/detail/Bottleneck/", "title": "Bottleneck", "text": ""}, {"location": "available_software/detail/Bottleneck/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bottleneck installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bottleneck, load one of these modules using a module load command like:

                  module load Bottleneck/1.3.2-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bottleneck/1.3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Bowtie/", "title": "Bowtie", "text": ""}, {"location": "available_software/detail/Bowtie/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bowtie installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bowtie, load one of these modules using a module load command like:

                  module load Bowtie/1.3.1-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bowtie/1.3.1-GCC-11.3.0 x x x x x x Bowtie/1.3.1-GCC-11.2.0 x x x x x x Bowtie/1.3.0-GCC-10.2.0 - x x - x - Bowtie/1.2.3-iccifort-2019.5.281 - x - - - - Bowtie/1.2.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bowtie2/", "title": "Bowtie2", "text": ""}, {"location": "available_software/detail/Bowtie2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bowtie2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bowtie2, load one of these modules using a module load command like:

                  module load Bowtie2/2.4.5-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bowtie2/2.4.5-GCC-11.3.0 x x x x x x Bowtie2/2.4.4-GCC-11.2.0 x x x - x x Bowtie2/2.4.2-GCC-10.2.0 - x x x x x Bowtie2/2.4.1-GCC-9.3.0 - x x - x x Bowtie2/2.3.5.1-iccifort-2019.5.281 - x - - - - Bowtie2/2.3.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bracken/", "title": "Bracken", "text": ""}, {"location": "available_software/detail/Bracken/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bracken installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Bracken, load one of these modules using a module load command like:

                  module load Bracken/2.9-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bracken/2.9-GCCcore-10.3.0 x x x x x x Bracken/2.7-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Brotli-python/", "title": "Brotli-python", "text": ""}, {"location": "available_software/detail/Brotli-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brotli-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Brotli-python, load one of these modules using a module load command like:

                  module load Brotli-python/1.0.9-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brotli-python/1.0.9-GCCcore-11.3.0 x x x x x x Brotli-python/1.0.9-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Brotli/", "title": "Brotli", "text": ""}, {"location": "available_software/detail/Brotli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brotli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Brotli, load one of these modules using a module load command like:

                  module load Brotli/1.1.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brotli/1.1.0-GCCcore-13.2.0 x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x Brotli/1.0.9-GCCcore-11.3.0 x x x x x x Brotli/1.0.9-GCCcore-11.2.0 x x x x x x Brotli/1.0.9-GCCcore-10.3.0 x x x x x x Brotli/1.0.9-GCCcore-10.2.0 x - x x x x"}, {"location": "available_software/detail/Brunsli/", "title": "Brunsli", "text": ""}, {"location": "available_software/detail/Brunsli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brunsli installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Brunsli, load one of these modules using a module load command like:

                  module load Brunsli/0.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brunsli/0.1-GCCcore-12.3.0 x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x Brunsli/0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CASPR/", "title": "CASPR", "text": ""}, {"location": "available_software/detail/CASPR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CASPR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CASPR, load one of these modules using a module load command like:

                  module load CASPR/20200730-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CASPR/20200730-foss-2022a x x x x x x"}, {"location": "available_software/detail/CCL/", "title": "CCL", "text": ""}, {"location": "available_software/detail/CCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CCL, load one of these modules using a module load command like:

                  module load CCL/1.12.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CCL/1.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/CD-HIT/", "title": "CD-HIT", "text": ""}, {"location": "available_software/detail/CD-HIT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CD-HIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CD-HIT, load one of these modules using a module load command like:

                  module load CD-HIT/4.8.1-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CD-HIT/4.8.1-iccifort-2019.5.281 - x x - x x CD-HIT/4.8.1-GCC-12.2.0 x x x x x x CD-HIT/4.8.1-GCC-11.2.0 x x x - x x CD-HIT/4.8.1-GCC-10.2.0 - x x x x x CD-HIT/4.8.1-GCC-9.3.0 - x x - x x CD-HIT/4.8.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/CDAT/", "title": "CDAT", "text": ""}, {"location": "available_software/detail/CDAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CDAT, load one of these modules using a module load command like:

                  module load CDAT/8.2.1-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDAT/8.2.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/CDBtools/", "title": "CDBtools", "text": ""}, {"location": "available_software/detail/CDBtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDBtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CDBtools, load one of these modules using a module load command like:

                  module load CDBtools/0.99-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDBtools/0.99-GCC-10.2.0 x x x - x x"}, {"location": "available_software/detail/CDO/", "title": "CDO", "text": ""}, {"location": "available_software/detail/CDO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CDO, load one of these modules using a module load command like:

                  module load CDO/2.0.5-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDO/2.0.5-gompi-2021b x x x x x x CDO/1.9.10-gompi-2021a x x x - x x CDO/1.9.8-intel-2019b - x x - x x"}, {"location": "available_software/detail/CENSO/", "title": "CENSO", "text": ""}, {"location": "available_software/detail/CENSO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CENSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CENSO, load one of these modules using a module load command like:

                  module load CENSO/1.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CENSO/1.2.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/CESM-deps/", "title": "CESM-deps", "text": ""}, {"location": "available_software/detail/CESM-deps/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CESM-deps installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CESM-deps, load one of these modules using a module load command like:

                  module load CESM-deps/2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CESM-deps/2-foss-2021b x x x - x x"}, {"location": "available_software/detail/CFDEMcoupling/", "title": "CFDEMcoupling", "text": ""}, {"location": "available_software/detail/CFDEMcoupling/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CFDEMcoupling installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CFDEMcoupling, load one of these modules using a module load command like:

                  module load CFDEMcoupling/3.8.0-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CFDEMcoupling/3.8.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/CFITSIO/", "title": "CFITSIO", "text": ""}, {"location": "available_software/detail/CFITSIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CFITSIO, load one of these modules using a module load command like:

                  module load CFITSIO/4.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x CFITSIO/4.2.0-GCCcore-11.3.0 x x x x x x CFITSIO/4.1.0-GCCcore-11.3.0 x x x x x x CFITSIO/3.49-GCCcore-11.2.0 x x x x x x CFITSIO/3.47-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CGAL/", "title": "CGAL", "text": ""}, {"location": "available_software/detail/CGAL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CGAL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CGAL, load one of these modules using a module load command like:

                  module load CGAL/5.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CGAL/5.6-GCCcore-12.3.0 x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x CGAL/5.2-iimpi-2020b - x - - - - CGAL/5.2-gompi-2020b x x x x x x CGAL/4.14.3-iimpi-2021a - x x - x x CGAL/4.14.3-gompi-2022a x x x x x x CGAL/4.14.3-gompi-2021b x x x x x x CGAL/4.14.3-gompi-2021a x x x x x x CGAL/4.14.3-gompi-2020a-Python-3.8.2 - x x - x x CGAL/4.14.1-foss-2019b-Python-3.7.4 x x x - x x CGAL/4.14.1-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/CGmapTools/", "title": "CGmapTools", "text": ""}, {"location": "available_software/detail/CGmapTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CGmapTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CGmapTools, load one of these modules using a module load command like:

                  module load CGmapTools/0.1.2-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CGmapTools/0.1.2-intel-2019b - x x - x x"}, {"location": "available_software/detail/CIRCexplorer2/", "title": "CIRCexplorer2", "text": ""}, {"location": "available_software/detail/CIRCexplorer2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRCexplorer2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CIRCexplorer2, load one of these modules using a module load command like:

                  module load CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18 x x x x x x CIRCexplorer2/2.3.8-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CIRI-long/", "title": "CIRI-long", "text": ""}, {"location": "available_software/detail/CIRI-long/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRI-long installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CIRI-long, load one of these modules using a module load command like:

                  module load CIRI-long/1.0.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRI-long/1.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/CIRIquant/", "title": "CIRIquant", "text": ""}, {"location": "available_software/detail/CIRIquant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRIquant installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CIRIquant, load one of these modules using a module load command like:

                  module load CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CITE-seq-Count/", "title": "CITE-seq-Count", "text": ""}, {"location": "available_software/detail/CITE-seq-Count/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CITE-seq-Count installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CITE-seq-Count, load one of these modules using a module load command like:

                  module load CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/CLEAR/", "title": "CLEAR", "text": ""}, {"location": "available_software/detail/CLEAR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CLEAR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CLEAR, load one of these modules using a module load command like:

                  module load CLEAR/20210117-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CLEAR/20210117-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CLHEP/", "title": "CLHEP", "text": ""}, {"location": "available_software/detail/CLHEP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CLHEP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CLHEP, load one of these modules using a module load command like:

                  module load CLHEP/2.4.6.4-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CLHEP/2.4.6.4-GCC-12.2.0 x x x x x x CLHEP/2.4.5.3-GCC-11.3.0 x x x x x x CLHEP/2.4.5.1-GCC-11.2.0 x x x x x x CLHEP/2.4.4.0-GCC-11.2.0 x x x x x x CLHEP/2.4.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/CMAverse/", "title": "CMAverse", "text": ""}, {"location": "available_software/detail/CMAverse/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMAverse installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CMAverse, load one of these modules using a module load command like:

                  module load CMAverse/20220112-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMAverse/20220112-foss-2021b x x x - x x"}, {"location": "available_software/detail/CMSeq/", "title": "CMSeq", "text": ""}, {"location": "available_software/detail/CMSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMSeq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CMSeq, load one of these modules using a module load command like:

                  module load CMSeq/1.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMSeq/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CMake/", "title": "CMake", "text": ""}, {"location": "available_software/detail/CMake/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CMake, load one of these modules using a module load command like:

                  module load CMake/3.27.6-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMake/3.27.6-GCCcore-13.2.0 x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x CMake/3.24.3-GCCcore-11.3.0 x x x x x x CMake/3.23.1-GCCcore-11.3.0 x x x x x x CMake/3.22.1-GCCcore-11.2.0 x x x x x x CMake/3.21.1-GCCcore-11.2.0 x x x x x x CMake/3.20.1-GCCcore-10.3.0 x x x x x x CMake/3.20.1-GCCcore-10.2.0 x - - - - - CMake/3.18.4-GCCcore-10.2.0 x x x x x x CMake/3.16.4-GCCcore-9.3.0 x x x x x x CMake/3.15.3-GCCcore-8.3.0 x x x x x x CMake/3.13.3-GCCcore-8.2.0 - x - - - - CMake/3.12.1 x x x x x x CMake/3.11.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/COLMAP/", "title": "COLMAP", "text": ""}, {"location": "available_software/detail/COLMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which COLMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using COLMAP, load one of these modules using a module load command like:

                  module load COLMAP/3.8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty COLMAP/3.8-foss-2022b x x x x x x"}, {"location": "available_software/detail/CONCOCT/", "title": "CONCOCT", "text": ""}, {"location": "available_software/detail/CONCOCT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CONCOCT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CONCOCT, load one of these modules using a module load command like:

                  module load CONCOCT/1.1.0-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CONCOCT/1.1.0-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CP2K/", "title": "CP2K", "text": ""}, {"location": "available_software/detail/CP2K/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CP2K installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CP2K, load one of these modules using a module load command like:

                  module load CP2K/2023.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CP2K/2023.1-foss-2023a x x x x x x CP2K/2023.1-foss-2022b x x x x x x CP2K/2022.1-foss-2022a x x x x x x CP2K/9.1-foss-2022a x x x x x x CP2K/8.2-foss-2021a - x x x x - CP2K/8.1-foss-2020b - x x x x - CP2K/7.1-intel-2020a - x x - x x CP2K/7.1-foss-2020a - x x - x x CP2K/6.1-intel-2020a - x x - x x CP2K/5.1-iomkl-2020a - x - - - - CP2K/5.1-intel-2020a-O1 - x - - - - CP2K/5.1-intel-2020a - x x - x x CP2K/5.1-intel-2019b - x - - - - CP2K/5.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/CPC2/", "title": "CPC2", "text": ""}, {"location": "available_software/detail/CPC2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPC2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CPC2, load one of these modules using a module load command like:

                  module load CPC2/1.0.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPC2/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CPLEX/", "title": "CPLEX", "text": ""}, {"location": "available_software/detail/CPLEX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPLEX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CPLEX, load one of these modules using a module load command like:

                  module load CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4 x x x x x x"}, {"location": "available_software/detail/CPPE/", "title": "CPPE", "text": ""}, {"location": "available_software/detail/CPPE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CPPE, load one of these modules using a module load command like:

                  module load CPPE/0.3.1-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPPE/0.3.1-GCC-12.2.0 x x x x x x CPPE/0.3.1-GCC-11.3.0 - x x x x x"}, {"location": "available_software/detail/CREST/", "title": "CREST", "text": ""}, {"location": "available_software/detail/CREST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CREST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CREST, load one of these modules using a module load command like:

                  module load CREST/2.12-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CREST/2.12-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CRISPR-DAV/", "title": "CRISPR-DAV", "text": ""}, {"location": "available_software/detail/CRISPR-DAV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRISPR-DAV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CRISPR-DAV, load one of these modules using a module load command like:

                  module load CRISPR-DAV/2.3.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRISPR-DAV/2.3.4-foss-2020b - x x x x -"}, {"location": "available_software/detail/CRISPResso2/", "title": "CRISPResso2", "text": ""}, {"location": "available_software/detail/CRISPResso2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRISPResso2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CRISPResso2, load one of these modules using a module load command like:

                  module load CRISPResso2/2.2.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRISPResso2/2.2.1-foss-2020b - x x x x x CRISPResso2/2.1.2-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CRYSTAL17/", "title": "CRYSTAL17", "text": ""}, {"location": "available_software/detail/CRYSTAL17/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRYSTAL17 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CRYSTAL17, load one of these modules using a module load command like:

                  module load CRYSTAL17/1.0.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRYSTAL17/1.0.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/CSBDeep/", "title": "CSBDeep", "text": ""}, {"location": "available_software/detail/CSBDeep/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CSBDeep installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CSBDeep, load one of these modules using a module load command like:

                  module load CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0 x - - - x - CSBDeep/0.7.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CUDA/", "title": "CUDA", "text": ""}, {"location": "available_software/detail/CUDA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CUDA, load one of these modules using a module load command like:

                  module load CUDA/12.1.1\n
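
                  After loading a CUDA module you can verify which toolkit is active by querying the CUDA compiler that ships with it:

                  nvcc --version\n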

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUDA/12.1.1 x - x - x - CUDA/11.7.0 x x x x x x CUDA/11.4.1 x - - - x - CUDA/11.3.1 x x x - x x CUDA/11.1.1-iccifort-2020.4.304 - - - - x - CUDA/11.1.1-GCC-10.2.0 x x x x x x CUDA/11.0.2-iccifort-2020.1.217 - - - - x - CUDA/10.1.243-iccifort-2019.5.281 - - - - x - CUDA/10.1.243-GCC-8.3.0 x - - - x -"}, {"location": "available_software/detail/CUDAcore/", "title": "CUDAcore", "text": ""}, {"location": "available_software/detail/CUDAcore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUDAcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CUDAcore, load one of these modules using a module load command like:

                  module load CUDAcore/11.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUDAcore/11.2.1 x - x - x - CUDAcore/11.1.1 x x x x x x CUDAcore/11.0.2 - - - - x -"}, {"location": "available_software/detail/CUnit/", "title": "CUnit", "text": ""}, {"location": "available_software/detail/CUnit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CUnit, load one of these modules using a module load command like:

                  module load CUnit/2.1-3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUnit/2.1-3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/CVXOPT/", "title": "CVXOPT", "text": ""}, {"location": "available_software/detail/CVXOPT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CVXOPT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CVXOPT, load one of these modules using a module load command like:

                  module load CVXOPT/1.3.1-foss-2022a\n
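
                  CVXOPT is a Python package, so a simple way to confirm it is importable after loading the module (assuming the matching Python from the module's toolchain is also on your PATH) is:

                  python -c 'import cvxopt; print(cvxopt.__version__)'\n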

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CVXOPT/1.3.1-foss-2022a x x x x x x CVXOPT/1.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Calib/", "title": "Calib", "text": ""}, {"location": "available_software/detail/Calib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Calib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Calib, load one of these modules using a module load command like:

                  module load Calib/0.3.4-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Calib/0.3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/Cantera/", "title": "Cantera", "text": ""}, {"location": "available_software/detail/Cantera/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cantera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cantera, load one of these modules using a module load command like:

                  module load Cantera/3.0.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cantera/3.0.0-foss-2023a x x x x x x Cantera/2.6.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/CapnProto/", "title": "CapnProto", "text": ""}, {"location": "available_software/detail/CapnProto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CapnProto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CapnProto, load one of these modules using a module load command like:

                  module load CapnProto/1.0.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x CapnProto/0.9.1-GCCcore-11.2.0 x x x - x x CapnProto/0.8.0-GCCcore-9.3.0 - x x x - x"}, {"location": "available_software/detail/Cartopy/", "title": "Cartopy", "text": ""}, {"location": "available_software/detail/Cartopy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cartopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cartopy, load one of these modules using a module load command like:

                  module load Cartopy/0.22.0-foss-2023a\n
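
                  Cartopy is used from Python, so after loading the module a quick import test (assuming the matching Python is provided by the module's toolchain) looks like:

                  python -c 'import cartopy; print(cartopy.__version__)'\n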

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cartopy/0.22.0-foss-2023a x x x x x x Cartopy/0.20.3-foss-2022a x x x x x x Cartopy/0.20.3-foss-2021b x x x x x x Cartopy/0.19.0.post1-intel-2020b - x x - x x Cartopy/0.19.0.post1-foss-2020b - x x x x x Cartopy/0.18.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Casanovo/", "title": "Casanovo", "text": ""}, {"location": "available_software/detail/Casanovo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Casanovo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Casanovo, load one of these modules using a module load command like:

                  module load Casanovo/3.3.0-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Casanovo/3.3.0-foss-2022a-CUDA-11.7.0 x - - - x - Casanovo/3.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CatBoost/", "title": "CatBoost", "text": ""}, {"location": "available_software/detail/CatBoost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CatBoost, load one of these modules using a module load command like:

                  module load CatBoost/1.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatBoost/1.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CatLearn/", "title": "CatLearn", "text": ""}, {"location": "available_software/detail/CatLearn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatLearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CatLearn, load one of these modules using a module load command like:

                  module load CatLearn/0.6.2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatLearn/0.6.2-intel-2022a x x x x x x"}, {"location": "available_software/detail/CatMAP/", "title": "CatMAP", "text": ""}, {"location": "available_software/detail/CatMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CatMAP, load one of these modules using a module load command like:

                  module load CatMAP/20220519-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatMAP/20220519-foss-2022a x x x x x x"}, {"location": "available_software/detail/Catch2/", "title": "Catch2", "text": ""}, {"location": "available_software/detail/Catch2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Catch2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Catch2, load one of these modules using a module load command like:

                  module load Catch2/2.13.9-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Catch2/2.13.9-GCCcore-13.2.0 x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Cbc/", "title": "Cbc", "text": ""}, {"location": "available_software/detail/Cbc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cbc, load one of these modules using a module load command like:

                  module load Cbc/2.10.11-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cbc/2.10.11-foss-2023a x x x x x x Cbc/2.10.5-foss-2022b x x x x x x"}, {"location": "available_software/detail/CellBender/", "title": "CellBender", "text": ""}, {"location": "available_software/detail/CellBender/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellBender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CellBender, load one of these modules using a module load command like:

                  module load CellBender/0.3.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellBender/0.3.1-foss-2022a-CUDA-11.7.0 x - x - x - CellBender/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellOracle/", "title": "CellOracle", "text": ""}, {"location": "available_software/detail/CellOracle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellOracle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CellOracle, load one of these modules using a module load command like:

                  module load CellOracle/0.12.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellOracle/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellProfiler/", "title": "CellProfiler", "text": ""}, {"location": "available_software/detail/CellProfiler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellProfiler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CellProfiler, load one of these modules using a module load command like:

                  module load CellProfiler/4.2.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellProfiler/4.2.4-foss-2021a x x x - x x"}, {"location": "available_software/detail/CellRanger-ATAC/", "title": "CellRanger-ATAC", "text": ""}, {"location": "available_software/detail/CellRanger-ATAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRanger-ATAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CellRanger-ATAC, load one of these modules using a module load command like:

                  module load CellRanger-ATAC/2.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRanger-ATAC/2.1.0 x x x x x x CellRanger-ATAC/2.0.0 - x x - x -"}, {"location": "available_software/detail/CellRanger/", "title": "CellRanger", "text": ""}, {"location": "available_software/detail/CellRanger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRanger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CellRanger, load one of these modules using a module load command like:

                  module load CellRanger/7.0.0\n
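
                  CellRanger is a command-line pipeline; once the module is loaded you can confirm which release is active (assuming the cellranger wrapper is on your PATH, as in the upstream distribution):

                  cellranger --version\n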

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRanger/7.0.0 - x x x x x CellRanger/6.1.2 - x x - x x CellRanger/6.0.1 - x x - x - CellRanger/4.0.0 - - x - x - CellRanger/3.1.0 - - x - x -"}, {"location": "available_software/detail/CellRank/", "title": "CellRank", "text": ""}, {"location": "available_software/detail/CellRank/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRank installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CellRank, load one of these modules using a module load command like:

                  module load CellRank/2.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRank/2.0.2-foss-2022a x x x x x x CellRank/1.4.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/CellTypist/", "title": "CellTypist", "text": ""}, {"location": "available_software/detail/CellTypist/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellTypist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CellTypist, load one of these modules using a module load command like:

                  module load CellTypist/1.6.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellTypist/1.6.2-foss-2023a x x x x x x CellTypist/1.0.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Cellpose/", "title": "Cellpose", "text": ""}, {"location": "available_software/detail/Cellpose/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cellpose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cellpose, load one of these modules using a module load command like:

                  module load Cellpose/2.2.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cellpose/2.2.2-foss-2022a-CUDA-11.7.0 x - - - x - Cellpose/2.2.2-foss-2022a x - x x x x"}, {"location": "available_software/detail/Centrifuge/", "title": "Centrifuge", "text": ""}, {"location": "available_software/detail/Centrifuge/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Centrifuge installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Centrifuge, load one of these modules using a module load command like:

                  module load Centrifuge/1.0.4-beta-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Centrifuge/1.0.4-beta-gompi-2020a - x x - x x"}, {"location": "available_software/detail/Cereal/", "title": "Cereal", "text": ""}, {"location": "available_software/detail/Cereal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cereal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cereal, load one of these modules using a module load command like:

                  module load Cereal/1.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cereal/1.3.0 x x x x x x"}, {"location": "available_software/detail/Ceres-Solver/", "title": "Ceres-Solver", "text": ""}, {"location": "available_software/detail/Ceres-Solver/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Ceres-Solver installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ceres-Solver, load one of these modules using a module load command like:

                  module load Ceres-Solver/2.2.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ceres-Solver/2.2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Cgl/", "title": "Cgl", "text": ""}, {"location": "available_software/detail/Cgl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cgl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cgl, load one of these modules using a module load command like:

                  module load Cgl/0.60.8-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cgl/0.60.8-foss-2023a x x x x x x Cgl/0.60.7-foss-2022b x x x x x x"}, {"location": "available_software/detail/CharLS/", "title": "CharLS", "text": ""}, {"location": "available_software/detail/CharLS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CharLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CharLS, load one of these modules using a module load command like:

                  module load CharLS/2.4.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CharLS/2.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CheMPS2/", "title": "CheMPS2", "text": ""}, {"location": "available_software/detail/CheMPS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CheMPS2, load one of these modules using a module load command like:

                  module load CheMPS2/1.8.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CheMPS2/1.8.12-foss-2022b x x x x x x CheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/Check/", "title": "Check", "text": ""}, {"location": "available_software/detail/Check/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Check installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Check, load one of these modules using a module load command like:

                  module load Check/0.15.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Check/0.15.2-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/CheckM/", "title": "CheckM", "text": ""}, {"location": "available_software/detail/CheckM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CheckM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CheckM, load one of these modules using a module load command like:

                  module load CheckM/1.1.3-intel-2020a-Python-3.8.2\n
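
                  After loading the module, the CheckM command-line interface should be available; printing its help text is a simple way to verify the installation (the checkm executable name comes from the upstream package, not from this page):

                  checkm -h\n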

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CheckM/1.1.3-intel-2020a-Python-3.8.2 - x x - x x CheckM/1.1.3-foss-2021b x x x - x x CheckM/1.1.2-intel-2019b-Python-3.7.4 - x x - x x CheckM/1.1.2-foss-2019b-Python-3.7.4 - x x - x x CheckM/1.0.18-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/Chimera/", "title": "Chimera", "text": ""}, {"location": "available_software/detail/Chimera/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Chimera installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Chimera, load one of these modules using a module load command like:

                  module load Chimera/1.16-linux_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Chimera/1.16-linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Circlator/", "title": "Circlator", "text": ""}, {"location": "available_software/detail/Circlator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Circlator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Circlator, load one of these modules using a module load command like:

                  module load Circlator/1.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Circlator/1.5.5-foss-2023a x x x x x x"}, {"location": "available_software/detail/Circuitscape/", "title": "Circuitscape", "text": ""}, {"location": "available_software/detail/Circuitscape/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Circuitscape installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Circuitscape, load one of these modules using a module load command like:

                  module load Circuitscape/5.12.3-Julia-1.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Circuitscape/5.12.3-Julia-1.7.2 x x x x x x"}, {"location": "available_software/detail/Clair3/", "title": "Clair3", "text": ""}, {"location": "available_software/detail/Clair3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clair3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Clair3, load one of these modules using a module load command like:

                  module load Clair3/1.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clair3/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/Clang/", "title": "Clang", "text": ""}, {"location": "available_software/detail/Clang/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clang installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Clang, load one of these modules using a module load command like:

                  module load Clang/16.0.6-GCCcore-12.3.0\n
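
                  After loading a Clang module, you can check which compiler version is active with:

                  clang --version\n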

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clang/16.0.6-GCCcore-12.3.0 x x x x x x Clang/15.0.5-GCCcore-11.3.0 x x x x x x Clang/13.0.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - Clang/13.0.1-GCCcore-11.3.0 x x x x x x Clang/12.0.1-GCCcore-11.2.0 x x x x x x Clang/12.0.1-GCCcore-10.3.0 x x x x x x Clang/11.0.1-gcccuda-2020b - - - - x - Clang/11.0.1-GCCcore-10.2.0 - x x x x x Clang/10.0.0-GCCcore-9.3.0 - x x - x x Clang/9.0.1-GCCcore-8.3.0 - x x - x x Clang/9.0.1-GCC-8.3.0-CUDA-10.1.243 x - - - x -"}, {"location": "available_software/detail/Clp/", "title": "Clp", "text": ""}, {"location": "available_software/detail/Clp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Clp, load one of these modules using a module load command like:

                  module load Clp/1.17.9-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clp/1.17.9-foss-2023a x x x x x x Clp/1.17.8-foss-2022b x x x x x x Clp/1.17.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/Clustal-Omega/", "title": "Clustal-Omega", "text": ""}, {"location": "available_software/detail/Clustal-Omega/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clustal-Omega installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Clustal-Omega, load one of these modules using a module load command like:

                  module load Clustal-Omega/1.2.4-intel-compilers-2021.2.0\n
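
                  Clustal Omega ships a clustalo command-line tool; once the module is loaded, a quick check (executable name assumed from the upstream package) is:

                  clustalo --version\n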

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clustal-Omega/1.2.4-intel-compilers-2021.2.0 - x x - x x Clustal-Omega/1.2.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/ClustalW2/", "title": "ClustalW2", "text": ""}, {"location": "available_software/detail/ClustalW2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ClustalW2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ClustalW2, load one of these modules using a module load command like:

                  module load ClustalW2/2.1-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ClustalW2/2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/CmdStanR/", "title": "CmdStanR", "text": ""}, {"location": "available_software/detail/CmdStanR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CmdStanR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CmdStanR, load one of these modules using a module load command like:

                  module load CmdStanR/0.7.1-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CmdStanR/0.7.1-foss-2023a-R-4.3.2 x x x x x x CmdStanR/0.5.2-foss-2022a-R-4.2.1 x x x x x x CmdStanR/0.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/CodAn/", "title": "CodAn", "text": ""}, {"location": "available_software/detail/CodAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CodAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CodAn, load one of these modules using a module load command like:

                  module load CodAn/1.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CodAn/1.2-foss-2021b x x x x x x"}, {"location": "available_software/detail/CoinUtils/", "title": "CoinUtils", "text": ""}, {"location": "available_software/detail/CoinUtils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CoinUtils, load one of these modules using a module load command like:

                  module load CoinUtils/2.11.10-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CoinUtils/2.11.10-GCC-12.3.0 x x x x x x CoinUtils/2.11.9-GCC-12.2.0 x x x x x x CoinUtils/2.11.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/ColabFold/", "title": "ColabFold", "text": ""}, {"location": "available_software/detail/ColabFold/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ColabFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ColabFold, load one of these modules using a module load command like:

                  module load ColabFold/1.5.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ColabFold/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - ColabFold/1.5.2-foss-2022a - - x - x -"}, {"location": "available_software/detail/CompareM/", "title": "CompareM", "text": ""}, {"location": "available_software/detail/CompareM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CompareM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CompareM, load one of these modules using a module load command like:

                  module load CompareM/0.1.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CompareM/0.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Compress-Raw-Zlib/", "title": "Compress-Raw-Zlib", "text": ""}, {"location": "available_software/detail/Compress-Raw-Zlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Compress-Raw-Zlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Compress-Raw-Zlib, load one of these modules using a module load command like:

                  module load Compress-Raw-Zlib/2.202-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Compress-Raw-Zlib/2.202-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Concorde/", "title": "Concorde", "text": ""}, {"location": "available_software/detail/Concorde/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Concorde installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Concorde, load one of these modules using a module load command like:

                  module load Concorde/20031219-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Concorde/20031219-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/CoordgenLibs/", "title": "CoordgenLibs", "text": ""}, {"location": "available_software/detail/CoordgenLibs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CoordgenLibs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CoordgenLibs, load one of these modules using a module load command like:

                  module load CoordgenLibs/3.0.1-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CoordgenLibs/3.0.1-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/CopyKAT/", "title": "CopyKAT", "text": ""}, {"location": "available_software/detail/CopyKAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CopyKAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CopyKAT, load one of these modules using a module load command like:

                  module load CopyKAT/1.1.0-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CopyKAT/1.1.0-foss-2022b-R-4.2.2 x x x x x x CopyKAT/1.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Coreutils/", "title": "Coreutils", "text": ""}, {"location": "available_software/detail/Coreutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Coreutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Coreutils, load one of these modules using a module load command like:

                  module load Coreutils/8.32-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Coreutils/8.32-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CppUnit/", "title": "CppUnit", "text": ""}, {"location": "available_software/detail/CppUnit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CppUnit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CppUnit, load one of these modules using a module load command like:

                  module load CppUnit/1.15.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CppUnit/1.15.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/CuPy/", "title": "CuPy", "text": ""}, {"location": "available_software/detail/CuPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CuPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CuPy, load one of these modules using a module load command like:

                  module load CuPy/8.5.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CuPy/8.5.0-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Cufflinks/", "title": "Cufflinks", "text": ""}, {"location": "available_software/detail/Cufflinks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cufflinks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cufflinks, load one of these modules using a module load command like:

                  module load Cufflinks/20190706-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cufflinks/20190706-GCC-11.2.0 x x x x x x Cufflinks/20190706-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Cython/", "title": "Cython", "text": ""}, {"location": "available_software/detail/Cython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Cython, load one of these modules using a module load command like:

                  module load Cython/3.0.8-GCCcore-12.2.0\n
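
                  After loading a Cython module, the cython command-line compiler should be available; you can confirm the active version with:

                  cython --version\n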

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cython/3.0.8-GCCcore-12.2.0 x x x x x x Cython/3.0.7-GCCcore-12.3.0 x x x x x x Cython/0.29.33-GCCcore-11.3.0 x x x x x x Cython/0.29.22-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/DALI/", "title": "DALI", "text": ""}, {"location": "available_software/detail/DALI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DALI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DALI, load one of these modules using a module load command like:

                  module load DALI/2.1.2-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DALI/2.1.2-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/DAS_Tool/", "title": "DAS_Tool", "text": ""}, {"location": "available_software/detail/DAS_Tool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DAS_Tool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DAS_Tool, load one of these modules using a module load command like:

                  module load DAS_Tool/1.1.1-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DAS_Tool/1.1.1-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/DB/", "title": "DB", "text": ""}, {"location": "available_software/detail/DB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DB, load one of these modules using a module load command like:

                  module load DB/18.1.40-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DB/18.1.40-GCCcore-12.2.0 x x x x x x DB/18.1.40-GCCcore-11.3.0 x x x x x x DB/18.1.40-GCCcore-11.2.0 x x x x x x DB/18.1.40-GCCcore-10.3.0 x x x x x x DB/18.1.40-GCCcore-10.2.0 x x x x x x DB/18.1.32-GCCcore-9.3.0 x x x x x x DB/18.1.32-GCCcore-8.3.0 x x x x x x DB/18.1.32-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/DBD-mysql/", "title": "DBD-mysql", "text": ""}, {"location": "available_software/detail/DBD-mysql/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBD-mysql installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DBD-mysql, load one of these modules using a module load command like:

                  module load DBD-mysql/4.050-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBD-mysql/4.050-GCC-11.3.0 x x x x x x DBD-mysql/4.050-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/DBG2OLC/", "title": "DBG2OLC", "text": ""}, {"location": "available_software/detail/DBG2OLC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBG2OLC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DBG2OLC, load one of these modules using a module load command like:

                  module load DBG2OLC/20200724-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBG2OLC/20200724-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/DB_File/", "title": "DB_File", "text": ""}, {"location": "available_software/detail/DB_File/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DB_File installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DB_File, load one of these modules using a module load command like:

                  module load DB_File/1.858-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DB_File/1.858-GCCcore-11.3.0 x x x x x x DB_File/1.857-GCCcore-11.2.0 x x x x x x DB_File/1.855-GCCcore-10.2.0 - x x x x x DB_File/1.835-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/DBus/", "title": "DBus", "text": ""}, {"location": "available_software/detail/DBus/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DBus, load one of these modules using a module load command like:

                  module load DBus/1.15.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBus/1.15.4-GCCcore-12.3.0 x x x x x x DBus/1.15.2-GCCcore-12.2.0 x x x x x x DBus/1.14.0-GCCcore-11.3.0 x x x x x x DBus/1.13.18-GCCcore-11.2.0 x x x x x x DBus/1.13.18-GCCcore-10.3.0 x x x x x x DBus/1.13.18-GCCcore-10.2.0 x x x x x x DBus/1.13.12-GCCcore-9.3.0 - x x - x x DBus/1.13.12-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/DETONATE/", "title": "DETONATE", "text": ""}, {"location": "available_software/detail/DETONATE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DETONATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DETONATE, load one of these modules using a module load command like:

                  module load DETONATE/1.11-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DETONATE/1.11-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/DFT-D3/", "title": "DFT-D3", "text": ""}, {"location": "available_software/detail/DFT-D3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DFT-D3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DFT-D3, load one of these modules using a module load command like:

                  module load DFT-D3/3.2.0-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DFT-D3/3.2.0-intel-compilers-2021.2.0 - x x - x x DFT-D3/3.2.0-iccifort-2020.4.304 - x x x x x"}, {"location": "available_software/detail/DIA-NN/", "title": "DIA-NN", "text": ""}, {"location": "available_software/detail/DIA-NN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIA-NN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DIA-NN, load one of these modules using a module load command like:

                  module load DIA-NN/1.8.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIA-NN/1.8.1 x x x - x x"}, {"location": "available_software/detail/DIALOGUE/", "title": "DIALOGUE", "text": ""}, {"location": "available_software/detail/DIALOGUE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIALOGUE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DIALOGUE, load one of these modules using a module load command like:

                  module load DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0 x x x x x x"}, {"location": "available_software/detail/DIAMOND/", "title": "DIAMOND", "text": ""}, {"location": "available_software/detail/DIAMOND/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIAMOND installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DIAMOND, load one of these modules using a module load command like:

                  module load DIAMOND/2.1.8-GCC-12.3.0\n
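
                  Once the module is loaded, the diamond executable should be on your PATH; printing its version is a quick sanity check:

                  diamond version\n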

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIAMOND/2.1.8-GCC-12.3.0 x x x x x x DIAMOND/2.1.8-GCC-12.2.0 x x x x x x DIAMOND/2.1.0-GCC-11.3.0 x x x x x x DIAMOND/2.0.13-GCC-11.2.0 x x x x x x DIAMOND/2.0.11-GCC-10.3.0 - x x - x x DIAMOND/2.0.7-GCC-10.2.0 x x x x x x DIAMOND/2.0.6-GCC-10.2.0 - x - - - - DIAMOND/0.9.30-iccifort-2019.5.281 - x x - x x DIAMOND/0.9.30-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/DIANA/", "title": "DIANA", "text": ""}, {"location": "available_software/detail/DIANA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIANA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DIANA, load one of these modules using a module load command like:

                  module load DIANA/10.5\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIANA/10.5 - x x - x - DIANA/10.4 - - x - x -"}, {"location": "available_software/detail/DIRAC/", "title": "DIRAC", "text": ""}, {"location": "available_software/detail/DIRAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIRAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DIRAC, load one of these modules using a module load command like:

                  module load DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64 - x x - x - DIRAC/19.0-intel-2020a-Python-2.7.18-int64 - x x - x x"}, {"location": "available_software/detail/DL_POLY_Classic/", "title": "DL_POLY_Classic", "text": ""}, {"location": "available_software/detail/DL_POLY_Classic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DL_POLY_Classic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DL_POLY_Classic, load one of these modules using a module load command like:

                  module load DL_POLY_Classic/1.10-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DL_POLY_Classic/1.10-intel-2019b - x x - x x DL_POLY_Classic/1.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/DMCfun/", "title": "DMCfun", "text": ""}, {"location": "available_software/detail/DMCfun/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DMCfun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DMCfun, load one of these modules using a module load command like:

                  module load DMCfun/1.3.0-foss-2019b-R-3.6.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DMCfun/1.3.0-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/DOLFIN/", "title": "DOLFIN", "text": ""}, {"location": "available_software/detail/DOLFIN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DOLFIN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DOLFIN, load one of these modules using a module load command like:

                  module load DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/DRAGMAP/", "title": "DRAGMAP", "text": ""}, {"location": "available_software/detail/DRAGMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DRAGMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DRAGMAP, load one of these modules using a module load command like:

                  module load DRAGMAP/1.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DRAGMAP/1.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/DROP/", "title": "DROP", "text": ""}, {"location": "available_software/detail/DROP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DROP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DROP, load one of these modules using a module load command like:

                  module load DROP/1.1.0-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DROP/1.1.0-foss-2020b-R-4.0.3 - x x x x x DROP/1.0.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/DUBStepR/", "title": "DUBStepR", "text": ""}, {"location": "available_software/detail/DUBStepR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DUBStepR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DUBStepR, load one of these modules using a module load command like:

                  module load DUBStepR/1.2.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DUBStepR/1.2.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Dakota/", "title": "Dakota", "text": ""}, {"location": "available_software/detail/Dakota/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dakota installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Dakota, load one of these modules using a module load command like:

                  module load Dakota/6.16.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dakota/6.16.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Dalton/", "title": "Dalton", "text": ""}, {"location": "available_software/detail/Dalton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dalton installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Dalton, load one of these modules using a module load command like:

                  module load Dalton/2020.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dalton/2020.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/DeepLoc/", "title": "DeepLoc", "text": ""}, {"location": "available_software/detail/DeepLoc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DeepLoc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DeepLoc, load one of these modules using a module load command like:

                  module load DeepLoc/2.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DeepLoc/2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Delly/", "title": "Delly", "text": ""}, {"location": "available_software/detail/Delly/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Delly installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Delly, load one of these modules using a module load command like:

                  module load Delly/0.8.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Delly/0.8.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/DendroPy/", "title": "DendroPy", "text": ""}, {"location": "available_software/detail/DendroPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DendroPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DendroPy, load one of these modules using a module load command like:

                  module load DendroPy/4.6.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.2.0 x x x - x x DendroPy/4.5.2-GCCcore-10.2.0-Python-2.7.18 - x x x x x DendroPy/4.5.2-GCCcore-10.2.0 - x x x x x DendroPy/4.4.0-GCCcore-9.3.0 - x x - x x DendroPy/4.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/DensPart/", "title": "DensPart", "text": ""}, {"location": "available_software/detail/DensPart/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DensPart installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DensPart, load one of these modules using a module load command like:

                  module load DensPart/20220603-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DensPart/20220603-intel-2022a x x x x x x"}, {"location": "available_software/detail/Deprecated/", "title": "Deprecated", "text": ""}, {"location": "available_software/detail/Deprecated/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Deprecated installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Deprecated, load one of these modules using a module load command like:

                  module load Deprecated/1.2.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Deprecated/1.2.13-foss-2022a x x x x x x Deprecated/1.2.13-foss-2021a x x x x x x"}, {"location": "available_software/detail/DiCE-ML/", "title": "DiCE-ML", "text": ""}, {"location": "available_software/detail/DiCE-ML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DiCE-ML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DiCE-ML, load one of these modules using a module load command like:

                  module load DiCE-ML/0.9-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DiCE-ML/0.9-foss-2022a x x x x x x"}, {"location": "available_software/detail/Dice/", "title": "Dice", "text": ""}, {"location": "available_software/detail/Dice/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dice installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Dice, load one of these modules using a module load command like:

                  module load Dice/20240101-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dice/20240101-foss-2022b x x x x x x Dice/20221025-foss-2022a - x x x x x"}, {"location": "available_software/detail/DoubletFinder/", "title": "DoubletFinder", "text": ""}, {"location": "available_software/detail/DoubletFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DoubletFinder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DoubletFinder, load one of these modules using a module load command like:

                  module load DoubletFinder/2.0.3-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DoubletFinder/2.0.3-foss-2020a-R-4.0.0 - - x - x - DoubletFinder/2.0.3-20230819-foss-2022b-R-4.2.2 x x x x x x DoubletFinder/2.0.3-20230131-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Doxygen/", "title": "Doxygen", "text": ""}, {"location": "available_software/detail/Doxygen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Doxygen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Doxygen, load one of these modules using a module load command like:

                  module load Doxygen/1.9.7-GCCcore-12.3.0\n
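
                  A minimal sketch of generating documentation once the module is loaded (the Doxyfile name and project directory are placeholders):

                  # create a template configuration file in the current project directory
                  doxygen -g Doxyfile
                  # generate the documentation described by that configuration
                  doxygen Doxyfile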

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x Doxygen/1.9.4-GCCcore-11.3.0 x x x x x x Doxygen/1.9.1-GCCcore-11.2.0 x x x x x x Doxygen/1.9.1-GCCcore-10.3.0 x x x x x x Doxygen/1.8.20-GCCcore-10.2.0 x x x x x x Doxygen/1.8.17-GCCcore-9.3.0 x x x x x x Doxygen/1.8.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Dsuite/", "title": "Dsuite", "text": ""}, {"location": "available_software/detail/Dsuite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dsuite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Dsuite, load one of these modules using a module load command like:

                  module load Dsuite/20210718-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dsuite/20210718-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/DualSPHysics/", "title": "DualSPHysics", "text": ""}, {"location": "available_software/detail/DualSPHysics/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DualSPHysics installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DualSPHysics, load one of these modules using a module load command like:

                  module load DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1 x - - - x -"}, {"location": "available_software/detail/DyMat/", "title": "DyMat", "text": ""}, {"location": "available_software/detail/DyMat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DyMat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DyMat, load one of these modules using a module load command like:

                  module load DyMat/0.7-foss-2021b-2020-12-12\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DyMat/0.7-foss-2021b-2020-12-12 x x x - x x"}, {"location": "available_software/detail/EDirect/", "title": "EDirect", "text": ""}, {"location": "available_software/detail/EDirect/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EDirect installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using EDirect, load one of these modules using a module load command like:

                  module load EDirect/20.5.20231006-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EDirect/20.5.20231006-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ELPA/", "title": "ELPA", "text": ""}, {"location": "available_software/detail/ELPA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ELPA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ELPA, load one of these modules using a module load command like:

                  module load ELPA/2021.05.001-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ELPA/2021.05.001-intel-2021b x x x - x x ELPA/2021.05.001-intel-2021a - x x - x x ELPA/2021.05.001-foss-2021b x x x - x x ELPA/2020.11.001-intel-2020b - x x x x x ELPA/2019.11.001-intel-2019b - x x - x x ELPA/2019.11.001-foss-2019b - x x - x x"}, {"location": "available_software/detail/EMBOSS/", "title": "EMBOSS", "text": ""}, {"location": "available_software/detail/EMBOSS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EMBOSS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using EMBOSS, load one of these modules using a module load command like:

                  module load EMBOSS/6.6.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EMBOSS/6.6.0-foss-2021b x x x - x x EMBOSS/6.6.0-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ESM-2/", "title": "ESM-2", "text": ""}, {"location": "available_software/detail/ESM-2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESM-2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ESM-2, load one of these modules using a module load command like:

                  module load ESM-2/2.0.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESM-2/2.0.0-foss-2022b x x x x x x ESM-2/2.0.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/ESMF/", "title": "ESMF", "text": ""}, {"location": "available_software/detail/ESMF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESMF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ESMF, load one of these modules using a module load command like:

                  module load ESMF/8.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESMF/8.2.0-foss-2021b x x x - x x ESMF/8.1.1-foss-2021a - x x - x x ESMF/8.0.1-intel-2020b - x x x x x ESMF/8.0.1-foss-2020a - x x - x x ESMF/8.0.0-intel-2019b - x x - x x"}, {"location": "available_software/detail/ESMPy/", "title": "ESMPy", "text": ""}, {"location": "available_software/detail/ESMPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESMPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ESMPy, load one of these modules using a module load command like:

                  module load ESMPy/8.0.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESMPy/8.0.1-intel-2020b - x x - x x ESMPy/8.0.1-foss-2020a-Python-3.8.2 - x x - x x ESMPy/8.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ETE/", "title": "ETE", "text": ""}, {"location": "available_software/detail/ETE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ETE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ETE, load one of these modules using a module load command like:

                  module load ETE/3.1.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ETE/3.1.3-foss-2022b x x x x x x ETE/3.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/EUKulele/", "title": "EUKulele", "text": ""}, {"location": "available_software/detail/EUKulele/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EUKulele installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using EUKulele, load one of these modules using a module load command like:

                  module load EUKulele/2.0.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EUKulele/2.0.6-foss-2022a x x x x x x EUKulele/1.0.4-foss-2020b - x x - x x"}, {"location": "available_software/detail/EasyBuild/", "title": "EasyBuild", "text": ""}, {"location": "available_software/detail/EasyBuild/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using EasyBuild, load one of these modules using a module load command like:

                  module load EasyBuild/4.9.0\n
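
                  A minimal sketch of inspecting the loaded EasyBuild installation (the search term DIAMOND is only an example; where builds may be run depends on your local configuration):

                  # show the EasyBuild version provided by the module
                  eb --version
                  # search the known easyconfig files for a given software name
                  eb --search DIAMOND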

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EasyBuild/4.9.0 x x x x x x EasyBuild/4.8.2 x x x x x x EasyBuild/4.8.1 x x x x x x EasyBuild/4.8.0 x x x x x x EasyBuild/4.7.1 x x x x x x EasyBuild/4.7.0 x x x x x x EasyBuild/4.6.2 x x x x x x EasyBuild/4.6.1 x x x x x x EasyBuild/4.6.0 x x x x x x EasyBuild/4.5.5 x x x x x x EasyBuild/4.5.4 x x x x x x EasyBuild/4.5.3 x x x x x x EasyBuild/4.5.2 x x x x x x EasyBuild/4.5.1 x x x x x x EasyBuild/4.5.0 x x x x x x EasyBuild/4.4.2 x x x x x x EasyBuild/4.4.1 x x x x x x EasyBuild/4.4.0 x x x x x x EasyBuild/4.3.4 x x x x x x EasyBuild/4.3.3 x x x x x x EasyBuild/4.3.2 x x x x x x EasyBuild/4.3.1 x x x x x x EasyBuild/4.3.0 x x x x x x EasyBuild/4.2.2 x x x x x x EasyBuild/4.2.1 x x x x x x EasyBuild/4.2.0 x x x x x x"}, {"location": "available_software/detail/Eigen/", "title": "Eigen", "text": ""}, {"location": "available_software/detail/Eigen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Eigen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Eigen, load one of these modules using a module load command like:

                  module load Eigen/3.4.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Eigen/3.4.0-GCCcore-13.2.0 x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x Eigen/3.4.0-GCCcore-11.3.0 x x x x x x Eigen/3.4.0-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-10.3.0 x x x x x x Eigen/3.3.9-GCCcore-10.2.0 - - x x x x Eigen/3.3.8-GCCcore-10.2.0 x x x x x x Eigen/3.3.7-GCCcore-9.3.0 x x x x x x Eigen/3.3.7 x x x x x x"}, {"location": "available_software/detail/Elk/", "title": "Elk", "text": ""}, {"location": "available_software/detail/Elk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Elk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Elk, load one of these modules using a module load command like:

                  module load Elk/7.0.12-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Elk/7.0.12-foss-2020b - x x x x x"}, {"location": "available_software/detail/EpiSCORE/", "title": "EpiSCORE", "text": ""}, {"location": "available_software/detail/EpiSCORE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EpiSCORE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using EpiSCORE, load one of these modules using a module load command like:

                  module load EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Excel-Writer-XLSX/", "title": "Excel-Writer-XLSX", "text": ""}, {"location": "available_software/detail/Excel-Writer-XLSX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Excel-Writer-XLSX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Excel-Writer-XLSX, load one of these modules using a module load command like:

                  module load Excel-Writer-XLSX/1.09-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Excel-Writer-XLSX/1.09-foss-2020b - x x x x x"}, {"location": "available_software/detail/Exonerate/", "title": "Exonerate", "text": ""}, {"location": "available_software/detail/Exonerate/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Exonerate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Exonerate, load one of these modules using a module load command like:

                  module load Exonerate/2.4.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Exonerate/2.4.0-iccifort-2019.5.281 - x x - x x Exonerate/2.4.0-GCC-12.2.0 x x x x x x Exonerate/2.4.0-GCC-11.2.0 x x x x x x Exonerate/2.4.0-GCC-10.2.0 x x x - x x Exonerate/2.4.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ExtremeLy/", "title": "ExtremeLy", "text": ""}, {"location": "available_software/detail/ExtremeLy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ExtremeLy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ExtremeLy, load one of these modules using a module load command like:

                  module load ExtremeLy/2.3.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ExtremeLy/2.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/FALCON/", "title": "FALCON", "text": ""}, {"location": "available_software/detail/FALCON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FALCON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FALCON, load one of these modules using a module load command like:

                  module load FALCON/1.8.8-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FALCON/1.8.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/FASTA/", "title": "FASTA", "text": ""}, {"location": "available_software/detail/FASTA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FASTA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FASTA, load one of these modules using a module load command like:

                  module load FASTA/36.3.8i-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FASTA/36.3.8i-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/FASTX-Toolkit/", "title": "FASTX-Toolkit", "text": ""}, {"location": "available_software/detail/FASTX-Toolkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FASTX-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FASTX-Toolkit, load one of these modules using a module load command like:

                  module load FASTX-Toolkit/0.0.14-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FASTX-Toolkit/0.0.14-GCC-11.3.0 x x x x x x FASTX-Toolkit/0.0.14-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/FDS/", "title": "FDS", "text": ""}, {"location": "available_software/detail/FDS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FDS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FDS, load one of these modules using a module load command like:

                  module load FDS/6.8.0-intel-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FDS/6.8.0-intel-2022b x x x x x x FDS/6.7.9-intel-2022a x x x - x x FDS/6.7.7-intel-2021b x x x - x x FDS/6.7.6-intel-2020b - x x x x x FDS/6.7.5-intel-2020b - - x - x - FDS/6.7.5-intel-2020a - x x - x x FDS/6.7.4-intel-2020a - x x - x x"}, {"location": "available_software/detail/FEniCS/", "title": "FEniCS", "text": ""}, {"location": "available_software/detail/FEniCS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FEniCS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FEniCS, load one of these modules using a module load command like:

                  module load FEniCS/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FEniCS/2019.1.0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FFAVES/", "title": "FFAVES", "text": ""}, {"location": "available_software/detail/FFAVES/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFAVES installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FFAVES, load one of these modules using a module load command like:

                  module load FFAVES/2022.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFAVES/2022.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/FFC/", "title": "FFC", "text": ""}, {"location": "available_software/detail/FFC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FFC, load one of these modules using a module load command like:

                  module load FFC/2019.1.0.post0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFC/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FFTW.MPI/", "title": "FFTW.MPI", "text": ""}, {"location": "available_software/detail/FFTW.MPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FFTW.MPI, load one of these modules using a module load command like:

                  module load FFTW.MPI/3.3.10-gompi-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFTW.MPI/3.3.10-gompi-2023b x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x FFTW.MPI/3.3.10-gompi-2022a x x x x x x"}, {"location": "available_software/detail/FFTW/", "title": "FFTW", "text": ""}, {"location": "available_software/detail/FFTW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFTW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FFTW, load one of these modules using a module load command like:

                  module load FFTW/3.3.10-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFTW/3.3.10-gompi-2021b x x x x x x FFTW/3.3.10-GCC-13.2.0 x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x FFTW/3.3.10-GCC-11.3.0 x x x x x x FFTW/3.3.9-intel-2021a - x x - x x FFTW/3.3.9-gompi-2021a x x x x x x FFTW/3.3.8-iomkl-2020a - x - - - - FFTW/3.3.8-intelcuda-2020b - - - - x - FFTW/3.3.8-intel-2020b - x x x x x FFTW/3.3.8-intel-2020a - x x - x x FFTW/3.3.8-intel-2019b - x x - x x FFTW/3.3.8-iimpi-2020b - x - - - - FFTW/3.3.8-gompic-2020b x - - - x - FFTW/3.3.8-gompi-2020b x x x x x x FFTW/3.3.8-gompi-2020a - x x - x x FFTW/3.3.8-gompi-2019b x x x - x x"}, {"location": "available_software/detail/FFmpeg/", "title": "FFmpeg", "text": ""}, {"location": "available_software/detail/FFmpeg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FFmpeg, load one of these modules using a module load command like:

                  module load FFmpeg/6.0-GCCcore-12.3.0\n
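
                  A minimal sketch of a transcode once the module is loaded (input.avi and output.mp4 are placeholder file names):

                  # convert a video to another container/codec; FFmpeg picks suitable defaults from the output extension
                  ffmpeg -i input.avi output.mp4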

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFmpeg/6.0-GCCcore-12.3.0 x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x FFmpeg/4.4.2-GCCcore-11.3.0 x x x x x x FFmpeg/4.3.2-GCCcore-11.2.0 x x x x x x FFmpeg/4.3.2-GCCcore-10.3.0 x x x x x x FFmpeg/4.3.1-GCCcore-10.2.0 x x x x x x FFmpeg/4.2.2-GCCcore-9.3.0 - x x - x x FFmpeg/4.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FIAT/", "title": "FIAT", "text": ""}, {"location": "available_software/detail/FIAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FIAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FIAT, load one of these modules using a module load command like:

                  module load FIAT/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FIAT/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FIGARO/", "title": "FIGARO", "text": ""}, {"location": "available_software/detail/FIGARO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FIGARO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FIGARO, load one of these modules using a module load command like:

                  module load FIGARO/1.1.2-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FIGARO/1.1.2-intel-2020b - - x - x x"}, {"location": "available_software/detail/FLAC/", "title": "FLAC", "text": ""}, {"location": "available_software/detail/FLAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLAC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FLAC, load one of these modules using a module load command like:

                  module load FLAC/1.4.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLAC/1.4.2-GCCcore-12.3.0 x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x FLAC/1.3.4-GCCcore-11.3.0 x x x x x x FLAC/1.3.3-GCCcore-11.2.0 x x x x x x FLAC/1.3.3-GCCcore-10.3.0 x x x x x x FLAC/1.3.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/FLAIR/", "title": "FLAIR", "text": ""}, {"location": "available_software/detail/FLAIR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLAIR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FLAIR, load one of these modules using a module load command like:

                  module load FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4 - x x - x - FLAIR/1.5-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FLANN/", "title": "FLANN", "text": ""}, {"location": "available_software/detail/FLANN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLANN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FLANN, load one of these modules using a module load command like:

                  module load FLANN/1.9.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLANN/1.9.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/FLASH/", "title": "FLASH", "text": ""}, {"location": "available_software/detail/FLASH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLASH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FLASH, load one of these modules using a module load command like:

                  module load FLASH/2.2.00-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLASH/2.2.00-foss-2020b - x x x x x FLASH/2.2.00-GCC-11.2.0 x x x - x x FLASH/1.2.11-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/FLTK/", "title": "FLTK", "text": ""}, {"location": "available_software/detail/FLTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FLTK, load one of these modules using a module load command like:

                  module load FLTK/1.3.5-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLTK/1.3.5-GCCcore-10.2.0 - x x x x x FLTK/1.3.5-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/FLUENT/", "title": "FLUENT", "text": ""}, {"location": "available_software/detail/FLUENT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLUENT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FLUENT, load one of these modules using a module load command like:

                  module load FLUENT/2023R1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLUENT/2023R1 x x x x x x FLUENT/2022R1 - x x - x x FLUENT/2021R2 x x x x x x FLUENT/2019R3 - x x - x x"}, {"location": "available_software/detail/FMM3D/", "title": "FMM3D", "text": ""}, {"location": "available_software/detail/FMM3D/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FMM3D installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FMM3D, load one of these modules using a module load command like:

                  module load FMM3D/20211018-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FMM3D/20211018-foss-2020b - x x x x x"}, {"location": "available_software/detail/FMPy/", "title": "FMPy", "text": ""}, {"location": "available_software/detail/FMPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FMPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FMPy, load one of these modules using a module load command like:

                  module load FMPy/0.3.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FMPy/0.3.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/FSL/", "title": "FSL", "text": ""}, {"location": "available_software/detail/FSL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FSL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FSL, load one of these modules using a module load command like:

                  module load FSL/6.0.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FSL/6.0.7.2 x x x x x x FSL/6.0.5.1-foss-2021a - x x - x x FSL/6.0.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FabIO/", "title": "FabIO", "text": ""}, {"location": "available_software/detail/FabIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FabIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FabIO, load one of these modules using a module load command like:

                  module load FabIO/0.11.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FabIO/0.11.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Faiss/", "title": "Faiss", "text": ""}, {"location": "available_software/detail/Faiss/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Faiss installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Faiss, load one of these modules using a module load command like:

                  module load Faiss/1.7.2-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Faiss/1.7.2-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/FastANI/", "title": "FastANI", "text": ""}, {"location": "available_software/detail/FastANI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastANI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FastANI, load one of these modules using a module load command like:

                  module load FastANI/1.34-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastANI/1.34-GCC-12.3.0 x x x x x x FastANI/1.33-intel-compilers-2021.4.0 x x x - x x FastANI/1.33-iccifort-2020.4.304 - x x x x x FastANI/1.33-GCC-11.2.0 x x x - x x FastANI/1.33-GCC-10.2.0 - x x - x - FastANI/1.31-iccifort-2020.1.217 - x x - x x FastANI/1.3-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/FastME/", "title": "FastME", "text": ""}, {"location": "available_software/detail/FastME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastME installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FastME, load one of these modules using a module load command like:

                  module load FastME/2.1.6.3-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastME/2.1.6.3-GCC-12.3.0 x x x x x x FastME/2.1.6.1-iccifort-2019.5.281 - x x - x x FastME/2.1.6.1-GCC-10.2.0 - x x x x x FastME/2.1.6.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastQC/", "title": "FastQC", "text": ""}, {"location": "available_software/detail/FastQC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastQC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FastQC, load one of these modules using a module load command like:

                  module load FastQC/0.11.9-Java-11\n
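
                  A minimal sketch of a quality-control run once the module is loaded (the FASTQ files and output directory are placeholders):

                  # the output directory must exist before FastQC writes its reports into it
                  mkdir -p fastqc_out
                  fastqc -o fastqc_out sample_R1.fastq.gz sample_R2.fastq.gz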

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastQC/0.11.9-Java-11 x x x x x x"}, {"location": "available_software/detail/FastQ_Screen/", "title": "FastQ_Screen", "text": ""}, {"location": "available_software/detail/FastQ_Screen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastQ_Screen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FastQ_Screen, load one of these modules using a module load command like:

                  module load FastQ_Screen/0.14.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastQ_Screen/0.14.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/FastTree/", "title": "FastTree", "text": ""}, {"location": "available_software/detail/FastTree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastTree installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FastTree, load one of these modules using a module load command like:

                  module load FastTree/2.1.11-GCCcore-12.3.0\n
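
                  A minimal sketch of building a tree once the module is loaded (alignment.fasta and tree.nwk are placeholder file names):

                  # infer an approximate maximum-likelihood tree from a nucleotide alignment (-nt) with the GTR model
                  FastTree -nt -gtr alignment.fasta > tree.nwk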

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastTree/2.1.11-GCCcore-12.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.2.0 x x x - x x FastTree/2.1.11-GCCcore-10.2.0 - x x x x x FastTree/2.1.11-GCCcore-9.3.0 - x x - x x FastTree/2.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastViromeExplorer/", "title": "FastViromeExplorer", "text": ""}, {"location": "available_software/detail/FastViromeExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastViromeExplorer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FastViromeExplorer, load one of these modules using a module load command like:

                  module load FastViromeExplorer/20180422-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastViromeExplorer/20180422-foss-2019b - x x - x x"}, {"location": "available_software/detail/Fastaq/", "title": "Fastaq", "text": ""}, {"location": "available_software/detail/Fastaq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fastaq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Fastaq, load one of these modules using a module load command like:

                  module load Fastaq/3.17.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fastaq/3.17.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Fiji/", "title": "Fiji", "text": ""}, {"location": "available_software/detail/Fiji/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fiji installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Fiji, load one of these modules using a module load command like:

                  module load Fiji/2.9.0-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fiji/2.9.0-Java-1.8 x x x - x x"}, {"location": "available_software/detail/Filtlong/", "title": "Filtlong", "text": ""}, {"location": "available_software/detail/Filtlong/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Filtlong installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Filtlong, load one of these modules using a module load command like:

                  module load Filtlong/0.2.0-GCC-10.2.0\n
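
                  A minimal sketch of filtering long reads once the module is loaded (the input and output FASTQ file names are placeholders):

                  # keep reads of at least 1 kb and retain the best 90% of bases by quality
                  filtlong --min_length 1000 --keep_percent 90 input_reads.fastq.gz > filtered_reads.fastq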

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Filtlong/0.2.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Fiona/", "title": "Fiona", "text": ""}, {"location": "available_software/detail/Fiona/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fiona installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Fiona, load one of these modules using a module load command like:

                  module load Fiona/1.9.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fiona/1.9.5-foss-2023a x x x x x x Fiona/1.9.2-foss-2022b x x x x x x Fiona/1.8.21-foss-2022a x x x x x x Fiona/1.8.21-foss-2021b x x x x x x Fiona/1.8.20-intel-2020b - x x - x x Fiona/1.8.20-foss-2020b - x x x x x Fiona/1.8.16-foss-2020a-Python-3.8.2 - x x - x x Fiona/1.8.13-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Flask/", "title": "Flask", "text": ""}, {"location": "available_software/detail/Flask/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Flask installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Flask, load one of these modules using a module load command like:

                  module load Flask/2.2.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Flask/2.2.2-GCCcore-11.3.0 x x x x x x Flask/2.0.2-GCCcore-11.2.0 x x x - x x Flask/1.1.4-GCCcore-10.3.0 x x x x x x Flask/1.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FlexiBLAS/", "title": "FlexiBLAS", "text": ""}, {"location": "available_software/detail/FlexiBLAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FlexiBLAS, load one of these modules using a module load command like:

                  module load FlexiBLAS/3.3.1-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x FlexiBLAS/3.2.0-GCC-11.3.0 x x x x x x FlexiBLAS/3.0.4-GCC-11.2.0 x x x x x x FlexiBLAS/3.0.4-GCC-10.3.0 x x x x x x"}, {"location": "available_software/detail/Flye/", "title": "Flye", "text": ""}, {"location": "available_software/detail/Flye/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Flye installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Flye, load one of these modules using a module load command like:

                  module load Flye/2.9.2-GCC-11.3.0\n
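
                  A minimal sketch of an assembly run once the module is loaded (the read file, output directory and thread count are placeholders):

                  # assemble raw Oxford Nanopore reads; match --threads to the cores requested for the job
                  flye --nano-raw reads.fastq.gz --out-dir flye_assembly --threads 8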

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Flye/2.9.2-GCC-11.3.0 x x x x x x Flye/2.9-intel-compilers-2021.2.0 - x x - x x Flye/2.9-GCC-10.3.0 x x x x x - Flye/2.8.3-iccifort-2020.4.304 - x x - x - Flye/2.8.3-GCC-10.2.0 - x x - x - Flye/2.8.1-intel-2020a-Python-3.8.2 - x x - x x Flye/2.7-intel-2019b-Python-3.7.4 - x - - - - Flye/2.6-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FragGeneScan/", "title": "FragGeneScan", "text": ""}, {"location": "available_software/detail/FragGeneScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FragGeneScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FragGeneScan, load one of these modules using a module load command like:

                  module load FragGeneScan/1.31-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FragGeneScan/1.31-GCCcore-11.3.0 x x x x x x FragGeneScan/1.31-GCCcore-11.2.0 x x x - x x FragGeneScan/1.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FreeBarcodes/", "title": "FreeBarcodes", "text": ""}, {"location": "available_software/detail/FreeBarcodes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FreeBarcodes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using FreeBarcodes, load one of these modules using a module load command like:

                  module load FreeBarcodes/3.0.a5-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeBarcodes/3.0.a5-foss-2021b x x x - x x"}, {"location": "available_software/detail/FreeFEM/", "title": "FreeFEM", "text": ""}, {"location": "available_software/detail/FreeFEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeFEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FreeFEM, load one of these modules using a module load command like:

                  module load FreeFEM/4.5-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeFEM/4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FreeImage/", "title": "FreeImage", "text": ""}, {"location": "available_software/detail/FreeImage/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeImage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FreeImage, load one of these modules using a module load command like:

                  module load FreeImage/3.18.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeImage/3.18.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/FreeSurfer/", "title": "FreeSurfer", "text": ""}, {"location": "available_software/detail/FreeSurfer/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeSurfer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FreeSurfer, load one of these modules using a module load command like:

                  module load FreeSurfer/7.3.2-centos8_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeSurfer/7.3.2-centos8_x86_64 x x x - x x FreeSurfer/7.2.0-centos8_x86_64 - x x - x x"}, {"location": "available_software/detail/FreeXL/", "title": "FreeXL", "text": ""}, {"location": "available_software/detail/FreeXL/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeXL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FreeXL, load one of these modules using a module load command like:

                  module load FreeXL/1.0.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeXL/1.0.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/FriBidi/", "title": "FriBidi", "text": ""}, {"location": "available_software/detail/FriBidi/#available-modules", "title": "Available modules", "text": "

The overview below shows which FriBidi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FriBidi, load one of these modules using a module load command like:

                  module load FriBidi/1.0.12-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x FriBidi/1.0.12-GCCcore-11.3.0 x x x x x x FriBidi/1.0.10-GCCcore-11.2.0 x x x x x x FriBidi/1.0.10-GCCcore-10.3.0 x x x x x x FriBidi/1.0.10-GCCcore-10.2.0 x x x x x x FriBidi/1.0.9-GCCcore-9.3.0 - x x - x x FriBidi/1.0.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FuSeq/", "title": "FuSeq", "text": ""}, {"location": "available_software/detail/FuSeq/#available-modules", "title": "Available modules", "text": "

The overview below shows which FuSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FuSeq, load one of these modules using a module load command like:

                  module load FuSeq/1.1.2-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FuSeq/1.1.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/FusionCatcher/", "title": "FusionCatcher", "text": ""}, {"location": "available_software/detail/FusionCatcher/#available-modules", "title": "Available modules", "text": "

The overview below shows which FusionCatcher installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FusionCatcher, load one of these modules using a module load command like:

                  module load FusionCatcher/1.30-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FusionCatcher/1.30-foss-2019b-Python-2.7.16 - x x - x x FusionCatcher/1.20-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/GAPPadder/", "title": "GAPPadder", "text": ""}, {"location": "available_software/detail/GAPPadder/#available-modules", "title": "Available modules", "text": "

The overview below shows which GAPPadder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GAPPadder, load one of these modules using a module load command like:

                  module load GAPPadder/20170601-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GAPPadder/20170601-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/GATB-Core/", "title": "GATB-Core", "text": ""}, {"location": "available_software/detail/GATB-Core/#available-modules", "title": "Available modules", "text": "

The overview below shows which GATB-Core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GATB-Core, load one of these modules using a module load command like:

                  module load GATB-Core/1.4.2-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATB-Core/1.4.2-gompi-2022a x x x x x x"}, {"location": "available_software/detail/GATE/", "title": "GATE", "text": ""}, {"location": "available_software/detail/GATE/#available-modules", "title": "Available modules", "text": "

The overview below shows which GATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GATE, load one of these modules using a module load command like:

                  module load GATE/9.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATE/9.2-foss-2022a x x x x x x GATE/9.2-foss-2021b x x x x x x GATE/9.1-foss-2021b x x x x x x GATE/9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GATK/", "title": "GATK", "text": ""}, {"location": "available_software/detail/GATK/#available-modules", "title": "Available modules", "text": "

The overview below shows which GATK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GATK, load one of these modules using a module load command like:

                  module load GATK/4.4.0.0-GCCcore-12.3.0-Java-17\n
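
As a quick check after loading the module (a minimal sketch, assuming the gatk wrapper script that ships with GATK4), you can list the bundled tools:

module load GATK/4.4.0.0-GCCcore-12.3.0-Java-17\ngatk --list\n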

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATK/4.4.0.0-GCCcore-12.3.0-Java-17 x x x x x x GATK/4.3.0.0-GCCcore-11.3.0-Java-11 x x x x x x GATK/4.2.0.0-GCCcore-10.2.0-Java-11 - x x x x x GATK/4.1.8.1-GCCcore-9.3.0-Java-1.8 - x x - x x"}, {"location": "available_software/detail/GBprocesS/", "title": "GBprocesS", "text": ""}, {"location": "available_software/detail/GBprocesS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GBprocesS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GBprocesS, load one of these modules using a module load command like:

                  module load GBprocesS/4.0.0.post1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GBprocesS/4.0.0.post1-foss-2022a x x x x x x GBprocesS/2.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GCC/", "title": "GCC", "text": ""}, {"location": "available_software/detail/GCC/#available-modules", "title": "Available modules", "text": "

The overview below shows which GCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GCC, load one of these modules using a module load command like:

                  module load GCC/13.2.0\n
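
As a minimal sketch of what loading this module gives you (hello.c is a hypothetical source file you provide yourself), the compiler ends up on your PATH:

module load GCC/13.2.0\ngcc --version\ngcc -O2 -o hello hello.c\n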

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GCC/13.2.0 x x x x x x GCC/12.3.0 x x x x x x GCC/12.2.0 x x x x x x GCC/11.3.0 x x x x x x GCC/11.2.0 x x x x x x GCC/10.3.0 x x x x x x GCC/10.2.0 x x x x x x GCC/9.3.0 - x x x x x GCC/8.3.0 x x x x x x"}, {"location": "available_software/detail/GCCcore/", "title": "GCCcore", "text": ""}, {"location": "available_software/detail/GCCcore/#available-modules", "title": "Available modules", "text": "

The overview below shows which GCCcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GCCcore, load one of these modules using a module load command like:

                  module load GCCcore/13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GCCcore/13.2.0 x x x x x x GCCcore/12.3.0 x x x x x x GCCcore/12.2.0 x x x x x x GCCcore/11.3.0 x x x x x x GCCcore/11.2.0 x x x x x x GCCcore/10.3.0 x x x x x x GCCcore/10.2.0 x x x x x x GCCcore/9.3.0 x x x x x x GCCcore/8.3.0 x x x x x x GCCcore/8.2.0 - x - - - -"}, {"location": "available_software/detail/GConf/", "title": "GConf", "text": ""}, {"location": "available_software/detail/GConf/#available-modules", "title": "Available modules", "text": "

The overview below shows which GConf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GConf, load one of these modules using a module load command like:

                  module load GConf/3.2.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GConf/3.2.6-GCCcore-11.2.0 x x x x x x GConf/3.2.6-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GDAL/", "title": "GDAL", "text": ""}, {"location": "available_software/detail/GDAL/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDAL, load one of these modules using a module load command like:

                  module load GDAL/3.7.1-foss-2023a\n
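
A minimal sketch of using the loaded module (input.tif is a hypothetical raster file; any GDAL-supported format works):

module load GDAL/3.7.1-foss-2023a\ngdalinfo --version\ngdalinfo input.tif\n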

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDAL/3.7.1-foss-2023a x x x x x x GDAL/3.6.2-foss-2022b x x x x x x GDAL/3.5.0-foss-2022a x x x x x x GDAL/3.3.2-foss-2021b x x x x x x GDAL/3.3.0-foss-2021a x x x x x x GDAL/3.2.1-intel-2020b - x x - x x GDAL/3.2.1-fosscuda-2020b - - - - x - GDAL/3.2.1-foss-2020b - x x x x x GDAL/3.0.4-foss-2020a-Python-3.8.2 - x x - x x GDAL/3.0.2-intel-2019b-Python-3.7.4 - - x - x x GDAL/3.0.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDB/", "title": "GDB", "text": ""}, {"location": "available_software/detail/GDB/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDB, load one of these modules using a module load command like:

                  module load GDB/9.1-GCCcore-8.3.0-Python-3.7.4\n
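
A minimal sketch (./my_program is a hypothetical executable, ideally compiled with -g so GDB can show source-level information):

module load GDB/9.1-GCCcore-8.3.0-Python-3.7.4\ngdb --version\ngdb ./my_program\n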

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDB/9.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDCM/", "title": "GDCM", "text": ""}, {"location": "available_software/detail/GDCM/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDCM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDCM, load one of these modules using a module load command like:

                  module load GDCM/3.0.21-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDCM/3.0.21-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/GDGraph/", "title": "GDGraph", "text": ""}, {"location": "available_software/detail/GDGraph/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDGraph, load one of these modules using a module load command like:

                  module load GDGraph/1.56-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDGraph/1.56-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GDRCopy/", "title": "GDRCopy", "text": ""}, {"location": "available_software/detail/GDRCopy/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GDRCopy, load one of these modules using a module load command like:

                  module load GDRCopy/2.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDRCopy/2.3.1-GCCcore-12.3.0 x - x - x - GDRCopy/2.3-GCCcore-11.3.0 x x x - x x GDRCopy/2.3-GCCcore-11.2.0 x x x - x x GDRCopy/2.2-GCCcore-10.3.0 x - - - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x"}, {"location": "available_software/detail/GEGL/", "title": "GEGL", "text": ""}, {"location": "available_software/detail/GEGL/#available-modules", "title": "Available modules", "text": "

The overview below shows which GEGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GEGL, load one of these modules using a module load command like:

                  module load GEGL/0.4.30-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GEGL/0.4.30-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GEOS/", "title": "GEOS", "text": ""}, {"location": "available_software/detail/GEOS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GEOS, load one of these modules using a module load command like:

                  module load GEOS/3.12.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GEOS/3.12.0-GCC-12.3.0 x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x GEOS/3.10.3-GCC-11.3.0 x x x x x x GEOS/3.9.1-iccifort-2020.4.304 - x x x x x GEOS/3.9.1-GCC-11.2.0 x x x x x x GEOS/3.9.1-GCC-10.3.0 x x x x x x GEOS/3.9.1-GCC-10.2.0 - x x x x x GEOS/3.8.1-GCC-9.3.0-Python-3.8.2 - x x - x x GEOS/3.8.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x GEOS/3.8.0-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GFF3-toolkit/", "title": "GFF3-toolkit", "text": ""}, {"location": "available_software/detail/GFF3-toolkit/#available-modules", "title": "Available modules", "text": "

The overview below shows which GFF3-toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GFF3-toolkit, load one of these modules using a module load command like:

                  module load GFF3-toolkit/2.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GFF3-toolkit/2.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/GIMP/", "title": "GIMP", "text": ""}, {"location": "available_software/detail/GIMP/#available-modules", "title": "Available modules", "text": "

The overview below shows which GIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GIMP, load one of these modules using a module load command like:

                  module load GIMP/2.10.24-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GIMP/2.10.24-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/GL2PS/", "title": "GL2PS", "text": ""}, {"location": "available_software/detail/GL2PS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GL2PS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GL2PS, load one of these modules using a module load command like:

                  module load GL2PS/1.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GL2PS/1.4.2-GCCcore-11.3.0 x x x x x x GL2PS/1.4.2-GCCcore-11.2.0 x x x x x x GL2PS/1.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLFW/", "title": "GLFW", "text": ""}, {"location": "available_software/detail/GLFW/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLFW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLFW, load one of these modules using a module load command like:

                  module load GLFW/3.3.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLFW/3.3.8-GCCcore-12.3.0 x x x x x x GLFW/3.3.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/GLIMPSE/", "title": "GLIMPSE", "text": ""}, {"location": "available_software/detail/GLIMPSE/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLIMPSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLIMPSE, load one of these modules using a module load command like:

                  module load GLIMPSE/2.0.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLIMPSE/2.0.0-GCC-12.2.0 x x x x x x GLIMPSE/2.0.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GLM/", "title": "GLM", "text": ""}, {"location": "available_software/detail/GLM/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLM, load one of these modules using a module load command like:

                  module load GLM/0.9.9.8-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLM/0.9.9.8-GCCcore-10.2.0 x x x x x x GLM/0.9.9.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLPK/", "title": "GLPK", "text": ""}, {"location": "available_software/detail/GLPK/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLPK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLPK, load one of these modules using a module load command like:

                  module load GLPK/5.0-GCCcore-12.3.0\n
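
A minimal sketch of solving a model with the stand-alone glpsol solver that ships with GLPK (model.lp and solution.txt are hypothetical file names):

module load GLPK/5.0-GCCcore-12.3.0\nglpsol --version\nglpsol --lp model.lp -o solution.txt\n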

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLPK/5.0-GCCcore-12.3.0 x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x GLPK/5.0-GCCcore-11.3.0 x x x x x x GLPK/5.0-GCCcore-11.2.0 x x x x x x GLPK/5.0-GCCcore-10.3.0 x x x x x x GLPK/4.65-GCCcore-10.2.0 x x x x x x GLPK/4.65-GCCcore-9.3.0 - x x - x x GLPK/4.65-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLib/", "title": "GLib", "text": ""}, {"location": "available_software/detail/GLib/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLib, load one of these modules using a module load command like:

                  module load GLib/2.77.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLib/2.77.1-GCCcore-12.3.0 x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x GLib/2.72.1-GCCcore-11.3.0 x x x x x x GLib/2.69.1-GCCcore-11.2.0 x x x x x x GLib/2.68.2-GCCcore-10.3.0 x x x x x x GLib/2.66.1-GCCcore-10.2.0 x x x x x x GLib/2.64.1-GCCcore-9.3.0 x x x x x x GLib/2.62.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/GLibmm/", "title": "GLibmm", "text": ""}, {"location": "available_software/detail/GLibmm/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLibmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GLibmm, load one of these modules using a module load command like:

                  module load GLibmm/2.66.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLibmm/2.66.4-GCCcore-10.3.0 - x x - x x GLibmm/2.49.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GMAP-GSNAP/", "title": "GMAP-GSNAP", "text": ""}, {"location": "available_software/detail/GMAP-GSNAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which GMAP-GSNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GMAP-GSNAP, load one of these modules using a module load command like:

                  module load GMAP-GSNAP/2023-04-20-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GMAP-GSNAP/2023-04-20-GCC-12.2.0 x x x x x x GMAP-GSNAP/2023-02-17-GCC-11.3.0 x x x x x x GMAP-GSNAP/2019-09-12-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/GMP/", "title": "GMP", "text": ""}, {"location": "available_software/detail/GMP/#available-modules", "title": "Available modules", "text": "

The overview below shows which GMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GMP, load one of these modules using a module load command like:

                  module load GMP/6.2.1-GCCcore-12.3.0\n
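
A minimal sketch of compiling against the loaded GMP library (gmp_test.c is a hypothetical source file; one way to have a compiler available is to load the matching GCC module alongside):

module load GCC/12.3.0 GMP/6.2.1-GCCcore-12.3.0\ngcc -O2 -o gmp_test gmp_test.c -lgmp\n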

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GMP/6.2.1-GCCcore-12.3.0 x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x GMP/6.2.1-GCCcore-11.3.0 x x x x x x GMP/6.2.1-GCCcore-11.2.0 x x x x x x GMP/6.2.1-GCCcore-10.3.0 x x x x x x GMP/6.2.0-GCCcore-10.2.0 x x x x x x GMP/6.2.0-GCCcore-9.3.0 x x x x x x GMP/6.1.2-GCCcore-8.3.0 x x x x x x GMP/6.1.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/GOATOOLS/", "title": "GOATOOLS", "text": ""}, {"location": "available_software/detail/GOATOOLS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GOATOOLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GOATOOLS, load one of these modules using a module load command like:

                  module load GOATOOLS/1.3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GOATOOLS/1.3.1-foss-2022a x x x x x x GOATOOLS/1.3.1-foss-2021b x x x x x x GOATOOLS/1.1.6-foss-2020b - x x x x x"}, {"location": "available_software/detail/GObject-Introspection/", "title": "GObject-Introspection", "text": ""}, {"location": "available_software/detail/GObject-Introspection/#available-modules", "title": "Available modules", "text": "

The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GObject-Introspection, load one of these modules using a module load command like:

                  module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x GObject-Introspection/1.72.0-GCCcore-11.3.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-11.2.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-10.3.0 x x x x x x GObject-Introspection/1.66.1-GCCcore-10.2.0 x x x x x x GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x GObject-Introspection/1.63.1-GCCcore-8.3.0-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/GPAW-setups/", "title": "GPAW-setups", "text": ""}, {"location": "available_software/detail/GPAW-setups/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPAW-setups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPAW-setups, load one of these modules using a module load command like:

                  module load GPAW-setups/0.9.20000\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPAW-setups/0.9.20000 x x x x x x"}, {"location": "available_software/detail/GPAW/", "title": "GPAW", "text": ""}, {"location": "available_software/detail/GPAW/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPAW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPAW, load one of these modules using a module load command like:

                  module load GPAW/22.8.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPAW/22.8.0-intel-2022a x x x x x x GPAW/22.8.0-intel-2021b x x x - x x GPAW/22.8.0-foss-2021b x x x - x x GPAW/20.1.0-intel-2019b-Python-3.7.4 - x x - x x GPAW/20.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GPy/", "title": "GPy", "text": ""}, {"location": "available_software/detail/GPy/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPy, load one of these modules using a module load command like:

                  module load GPy/1.10.0-foss-2021b\n
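
As a quick sanity check after loading the module (a minimal sketch; the Python interpreter should be pulled in as a dependency of this GPy module):

module load GPy/1.10.0-foss-2021b\npython -c "import GPy; print(GPy.__version__)"\n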

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPy/1.10.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/GPyOpt/", "title": "GPyOpt", "text": ""}, {"location": "available_software/detail/GPyOpt/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPyOpt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPyOpt, load one of these modules using a module load command like:

                  module load GPyOpt/1.2.6-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPyOpt/1.2.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/GPyTorch/", "title": "GPyTorch", "text": ""}, {"location": "available_software/detail/GPyTorch/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GPyTorch, load one of these modules using a module load command like:

                  module load GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1 x - - - x - GPyTorch/1.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/GRASP-suite/", "title": "GRASP-suite", "text": ""}, {"location": "available_software/detail/GRASP-suite/#available-modules", "title": "Available modules", "text": "

The overview below shows which GRASP-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GRASP-suite, load one of these modules using a module load command like:

                  module load GRASP-suite/2023-05-09-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GRASP-suite/2023-05-09-Java-17 x x x x x x"}, {"location": "available_software/detail/GRASS/", "title": "GRASS", "text": ""}, {"location": "available_software/detail/GRASS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GRASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GRASS, load one of these modules using a module load command like:

                  module load GRASS/8.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GRASS/8.2.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/GROMACS/", "title": "GROMACS", "text": ""}, {"location": "available_software/detail/GROMACS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GROMACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GROMACS, load one of these modules using a module load command like:

                  module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2\n
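
As a quick check after loading the module (a minimal sketch; depending on how the build was configured, the main wrapper may be named gmx or, for MPI-enabled builds, gmx_mpi):

module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2\ngmx --version\n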

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2 x - - - x - GROMACS/2021.3-foss-2021a-CUDA-11.3.1 x - - - x - GROMACS/2021.2-fosscuda-2020b x - - - x - GROMACS/2021-foss-2020b - x x x x x GROMACS/2020-foss-2019b - x x - x - GROMACS/2019.4-foss-2019b - x x - x - GROMACS/2019.3-foss-2019b - x x - x -"}, {"location": "available_software/detail/GSL/", "title": "GSL", "text": ""}, {"location": "available_software/detail/GSL/#available-modules", "title": "Available modules", "text": "

The overview below shows which GSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GSL, load one of these modules using a module load command like:

                  module load GSL/2.7-intel-compilers-2021.4.0\n
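
A minimal sketch of compiling against one of the GCC-based GSL builds listed below (gsl_test.c is a hypothetical source file; the matching GCC module is loaded alongside so a compiler is available):

module load GCC/12.3.0 GSL/2.7-GCC-12.3.0\ngcc -O2 -o gsl_test gsl_test.c -lgsl -lgslcblas -lm\n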

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GSL/2.7-intel-compilers-2021.4.0 x x x - x x GSL/2.7-GCC-12.3.0 x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x GSL/2.7-GCC-11.3.0 x x x x x x GSL/2.7-GCC-11.2.0 x x x x x x GSL/2.7-GCC-10.3.0 x x x x x x GSL/2.6-iccifort-2020.4.304 - x x x x x GSL/2.6-iccifort-2020.1.217 - x x - x x GSL/2.6-iccifort-2019.5.281 - x x - x x GSL/2.6-GCC-10.2.0 x x x x x x GSL/2.6-GCC-9.3.0 - x x x x x GSL/2.6-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GST-plugins-bad/", "title": "GST-plugins-bad", "text": ""}, {"location": "available_software/detail/GST-plugins-bad/#available-modules", "title": "Available modules", "text": "

The overview below shows which GST-plugins-bad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GST-plugins-bad, load one of these modules using a module load command like:

                  module load GST-plugins-bad/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GST-plugins-bad/1.20.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GST-plugins-base/", "title": "GST-plugins-base", "text": ""}, {"location": "available_software/detail/GST-plugins-base/#available-modules", "title": "Available modules", "text": "

The overview below shows which GST-plugins-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GST-plugins-base, load one of these modules using a module load command like:

                  module load GST-plugins-base/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GST-plugins-base/1.20.2-GCC-11.3.0 x x x x x x GST-plugins-base/1.18.5-GCC-11.2.0 x x x x x x GST-plugins-base/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GStreamer/", "title": "GStreamer", "text": ""}, {"location": "available_software/detail/GStreamer/#available-modules", "title": "Available modules", "text": "

The overview below shows which GStreamer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GStreamer, load one of these modules using a module load command like:

                  module load GStreamer/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GStreamer/1.20.2-GCC-11.3.0 x x x x x x GStreamer/1.18.5-GCC-11.2.0 x x x x x x GStreamer/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTDB-Tk/", "title": "GTDB-Tk", "text": ""}, {"location": "available_software/detail/GTDB-Tk/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTDB-Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTDB-Tk, load one of these modules using a module load command like:

                  module load GTDB-Tk/2.3.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTDB-Tk/2.3.2-foss-2023a x x x x x x GTDB-Tk/2.0.0-intel-2021b x x x - x x GTDB-Tk/1.7.0-intel-2020b - x x - x x GTDB-Tk/1.5.0-intel-2020b - x x - x x GTDB-Tk/1.3.0-intel-2020a-Python-3.8.2 - x x - x x GTDB-Tk/1.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GTK%2B/", "title": "GTK+", "text": ""}, {"location": "available_software/detail/GTK%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK+, load one of these modules using a module load command like:

                  module load GTK+/3.24.23-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK+/3.24.23-GCCcore-10.2.0 x x x x x x GTK+/3.24.13-GCCcore-8.3.0 - x x - x x GTK+/2.24.33-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GTK2/", "title": "GTK2", "text": ""}, {"location": "available_software/detail/GTK2/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK2, load one of these modules using a module load command like:

                  module load GTK2/2.24.33-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK2/2.24.33-GCCcore-11.3.0 x x x x x x GTK2/2.24.33-GCCcore-10.3.0 - - x - x -"}, {"location": "available_software/detail/GTK3/", "title": "GTK3", "text": ""}, {"location": "available_software/detail/GTK3/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK3, load one of these modules using a module load command like:

                  module load GTK3/3.24.37-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK3/3.24.37-GCCcore-12.3.0 x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x GTK3/3.24.31-GCCcore-11.2.0 x x x x x x GTK3/3.24.29-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTK4/", "title": "GTK4", "text": ""}, {"location": "available_software/detail/GTK4/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTK4, load one of these modules using a module load command like:

                  module load GTK4/4.7.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK4/4.7.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GTS/", "title": "GTS", "text": ""}, {"location": "available_software/detail/GTS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GTS, load one of these modules using a module load command like:

                  module load GTS/0.7.6-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTS/0.7.6-foss-2019b - x x - x x GTS/0.7.6-GCCcore-12.3.0 x x x x x x GTS/0.7.6-GCCcore-11.3.0 x x x x x x GTS/0.7.6-GCCcore-11.2.0 x x x x x x GTS/0.7.6-GCCcore-10.3.0 x x x x x x GTS/0.7.6-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/GUSHR/", "title": "GUSHR", "text": ""}, {"location": "available_software/detail/GUSHR/#available-modules", "title": "Available modules", "text": "

The overview below shows which GUSHR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GUSHR, load one of these modules using a module load command like:

                  module load GUSHR/2020-09-28-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GUSHR/2020-09-28-foss-2021b x x x x x x"}, {"location": "available_software/detail/GapFiller/", "title": "GapFiller", "text": ""}, {"location": "available_software/detail/GapFiller/#available-modules", "title": "Available modules", "text": "

The overview below shows which GapFiller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GapFiller, load one of these modules using a module load command like:

                  module load GapFiller/2.1.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GapFiller/2.1.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Gaussian/", "title": "Gaussian", "text": ""}, {"location": "available_software/detail/Gaussian/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gaussian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gaussian, load one of these modules using a module load command like:

                  module load Gaussian/g16_C.01-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gaussian/g16_C.01-intel-2022a x x x x x x Gaussian/g16_C.01-intel-2019b - x x - x x Gaussian/g16_C.01-iimpi-2020b x x x x x x"}, {"location": "available_software/detail/Gblocks/", "title": "Gblocks", "text": ""}, {"location": "available_software/detail/Gblocks/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gblocks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gblocks, load one of these modules using a module load command like:

                  module load Gblocks/0.91b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gblocks/0.91b x x x x x x"}, {"location": "available_software/detail/Gdk-Pixbuf/", "title": "Gdk-Pixbuf", "text": ""}, {"location": "available_software/detail/Gdk-Pixbuf/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gdk-Pixbuf, load one of these modules using a module load command like:

                  module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x Gdk-Pixbuf/2.42.8-GCCcore-11.3.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-11.2.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-10.3.0 x x x x x x Gdk-Pixbuf/2.40.0-GCCcore-10.2.0 x x x x x x Gdk-Pixbuf/2.38.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Geant4/", "title": "Geant4", "text": ""}, {"location": "available_software/detail/Geant4/#available-modules", "title": "Available modules", "text": "

The overview below shows which Geant4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Geant4, load one of these modules using a module load command like:

                  module load Geant4/11.0.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Geant4/11.0.2-GCC-11.3.0 x x x x x x Geant4/11.0.2-GCC-11.2.0 x x x - x x Geant4/11.0.1-GCC-11.2.0 x x x x x x Geant4/10.7.1-GCC-11.2.0 x x x x x x Geant4/10.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/GeneMark-ET/", "title": "GeneMark-ET", "text": ""}, {"location": "available_software/detail/GeneMark-ET/#available-modules", "title": "Available modules", "text": "

The overview below shows which GeneMark-ET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GeneMark-ET, load one of these modules using a module load command like:

                  module load GeneMark-ET/4.71-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GeneMark-ET/4.71-GCCcore-11.3.0 x x x x x x GeneMark-ET/4.71-GCCcore-11.2.0 x x x x x x GeneMark-ET/4.65-GCCcore-10.2.0 x x x x x x GeneMark-ET/4.57-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GenomeThreader/", "title": "GenomeThreader", "text": ""}, {"location": "available_software/detail/GenomeThreader/#available-modules", "title": "Available modules", "text": "

The overview below shows which GenomeThreader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GenomeThreader, load one of these modules using a module load command like:

                  module load GenomeThreader/1.7.3-Linux_x86_64-64bit\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GenomeThreader/1.7.3-Linux_x86_64-64bit x x x x x x"}, {"location": "available_software/detail/GenomeWorks/", "title": "GenomeWorks", "text": ""}, {"location": "available_software/detail/GenomeWorks/#available-modules", "title": "Available modules", "text": "

The overview below shows which GenomeWorks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GenomeWorks, load one of these modules using a module load command like:

                  module load GenomeWorks/2021.02.2-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GenomeWorks/2021.02.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Gerris/", "title": "Gerris", "text": ""}, {"location": "available_software/detail/Gerris/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gerris installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gerris, load one of these modules using a module load command like:

                  module load Gerris/20131206-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gerris/20131206-gompi-2023a x x x x x x"}, {"location": "available_software/detail/GetOrganelle/", "title": "GetOrganelle", "text": ""}, {"location": "available_software/detail/GetOrganelle/#available-modules", "title": "Available modules", "text": "

The overview below shows which GetOrganelle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GetOrganelle, load one of these modules using a module load command like:

                  module load GetOrganelle/1.7.5.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GetOrganelle/1.7.5.3-foss-2021b x x x - x x GetOrganelle/1.7.4-pre2-foss-2020b - x x x x x GetOrganelle/1.7.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GffCompare/", "title": "GffCompare", "text": ""}, {"location": "available_software/detail/GffCompare/#available-modules", "title": "Available modules", "text": "

The overview below shows which GffCompare installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GffCompare, load one of these modules using a module load command like:

                  module load GffCompare/0.12.6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GffCompare/0.12.6-GCC-11.2.0 x x x x x x GffCompare/0.11.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Ghostscript/", "title": "Ghostscript", "text": ""}, {"location": "available_software/detail/Ghostscript/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ghostscript, load one of these modules using a module load command like:

                  module load Ghostscript/10.01.2-GCCcore-12.3.0\n
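
A minimal sketch of converting a PostScript file to PDF with the loaded module (input.ps and output.pdf are hypothetical file names):

module load Ghostscript/10.01.2-GCCcore-12.3.0\ngs --version\ngs -sDEVICE=pdfwrite -o output.pdf input.ps\n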

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x Ghostscript/9.56.1-GCCcore-11.3.0 x x x x x x Ghostscript/9.54.0-GCCcore-11.2.0 x x x x x x Ghostscript/9.54.0-GCCcore-10.3.0 x x x x x x Ghostscript/9.53.3-GCCcore-10.2.0 x x x x x x Ghostscript/9.52-GCCcore-9.3.0 - x x - x x Ghostscript/9.50-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GimmeMotifs/", "title": "GimmeMotifs", "text": ""}, {"location": "available_software/detail/GimmeMotifs/#available-modules", "title": "Available modules", "text": "

The overview below shows which GimmeMotifs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GimmeMotifs, load one of these modules using a module load command like:

                  module load GimmeMotifs/0.17.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GimmeMotifs/0.17.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Giotto-Suite/", "title": "Giotto-Suite", "text": ""}, {"location": "available_software/detail/Giotto-Suite/#available-modules", "title": "Available modules", "text": "

The overview below shows which Giotto-Suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Giotto-Suite, load one of these modules using a module load command like:

                  module load Giotto-Suite/3.0.1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Giotto-Suite/3.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/GitPython/", "title": "GitPython", "text": ""}, {"location": "available_software/detail/GitPython/#available-modules", "title": "Available modules", "text": "

The overview below shows which GitPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GitPython, load one of these modules using a module load command like:

                  module load GitPython/3.1.40-GCCcore-12.3.0\n
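
As a quick sanity check after loading the module (a minimal sketch; the Python interpreter should come along as a dependency of this GitPython module):

module load GitPython/3.1.40-GCCcore-12.3.0\npython -c "import git; print(git.__version__)"\n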

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GitPython/3.1.40-GCCcore-12.3.0 x x x x x x GitPython/3.1.31-GCCcore-12.2.0 x x x x x x GitPython/3.1.27-GCCcore-11.3.0 x x x x x x GitPython/3.1.24-GCCcore-11.2.0 x x x - x x GitPython/3.1.14-GCCcore-10.2.0 - x x x x x GitPython/3.1.9-GCCcore-9.3.0-Python-3.8.2 - x x - x x GitPython/3.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GlimmerHMM/", "title": "GlimmerHMM", "text": ""}, {"location": "available_software/detail/GlimmerHMM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GlimmerHMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using GlimmerHMM, load one of these modules using a module load command like:

                  module load GlimmerHMM/3.0.4c-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GlimmerHMM/3.0.4c-GCC-10.2.0 - x x x x x GlimmerHMM/3.0.4c-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GlobalArrays/", "title": "GlobalArrays", "text": ""}, {"location": "available_software/detail/GlobalArrays/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GlobalArrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using GlobalArrays, load one of these modules using a module load command like:

                  module load GlobalArrays/5.8-iomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GlobalArrays/5.8-iomkl-2021a x x x x x x GlobalArrays/5.8-intel-2021a - x x - x x"}, {"location": "available_software/detail/GnuTLS/", "title": "GnuTLS", "text": ""}, {"location": "available_software/detail/GnuTLS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GnuTLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using GnuTLS, load one of these modules using a module load command like:

                  module load GnuTLS/3.7.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GnuTLS/3.7.3-GCCcore-11.2.0 x x x x x x GnuTLS/3.7.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Go/", "title": "Go", "text": ""}, {"location": "available_software/detail/Go/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Go installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Go, load one of these modules using a module load command like:

                  module load Go/1.21.6\n
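
                  For example, after loading the module the go tool itself reports which release is active (a minimal sanity check):

                  module load Go/1.21.6\ngo version   # prints the active Go release\n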

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Go/1.21.6 x x x x x x Go/1.21.2 x x x x x x Go/1.17.6 x x x - x x Go/1.17.3 - x x - x - Go/1.14 - - x - x -"}, {"location": "available_software/detail/Gradle/", "title": "Gradle", "text": ""}, {"location": "available_software/detail/Gradle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gradle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Gradle, load one of these modules using a module load command like:

                  module load Gradle/8.6-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gradle/8.6-Java-17 x x x x x x"}, {"location": "available_software/detail/GraphMap/", "title": "GraphMap", "text": ""}, {"location": "available_software/detail/GraphMap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using GraphMap, load one of these modules using a module load command like:

                  module load GraphMap/0.5.2-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphMap/0.5.2-foss-2019b - - x - x x"}, {"location": "available_software/detail/GraphMap2/", "title": "GraphMap2", "text": ""}, {"location": "available_software/detail/GraphMap2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphMap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using GraphMap2, load one of these modules using a module load command like:

                  module load GraphMap2/0.6.4-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphMap2/0.6.4-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphene/", "title": "Graphene", "text": ""}, {"location": "available_software/detail/Graphene/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Graphene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Graphene, load one of these modules using a module load command like:

                  module load Graphene/1.10.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Graphene/1.10.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GraphicsMagick/", "title": "GraphicsMagick", "text": ""}, {"location": "available_software/detail/GraphicsMagick/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphicsMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using GraphicsMagick, load one of these modules using a module load command like:

                  module load GraphicsMagick/1.3.34-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphicsMagick/1.3.34-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphviz/", "title": "Graphviz", "text": ""}, {"location": "available_software/detail/Graphviz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Graphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Graphviz, load one of these modules using a module load command like:

                  module load Graphviz/8.1.0-GCCcore-12.3.0\n
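
                  As a small end-to-end check, you can render a one-line graph with the dot tool after loading the module (a minimal sketch; the output file name graph.svg is just an example):

                  module load Graphviz/8.1.0-GCCcore-12.3.0\necho 'digraph { a -> b }' | dot -Tsvg -o graph.svg   # writes a tiny SVG rendering\n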

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Graphviz/8.1.0-GCCcore-12.3.0 x x x x x x Graphviz/5.0.0-GCCcore-11.3.0 x x x x x x Graphviz/2.50.0-GCCcore-11.2.0 x x x x x x Graphviz/2.47.2-GCCcore-10.3.0 x x x x x x Graphviz/2.47.0-GCCcore-10.2.0-Java-11 - x x x x x Graphviz/2.42.2-foss-2019b-Python-3.7.4 - x x - x x Graphviz/2.42.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Greenlet/", "title": "Greenlet", "text": ""}, {"location": "available_software/detail/Greenlet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Greenlet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Greenlet, load one of these modules using a module load command like:

                  module load Greenlet/2.0.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Greenlet/2.0.2-foss-2022b x x x x x x Greenlet/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/GroIMP/", "title": "GroIMP", "text": ""}, {"location": "available_software/detail/GroIMP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GroIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using GroIMP, load one of these modules using a module load command like:

                  module load GroIMP/1.5-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GroIMP/1.5-Java-1.8 - x x - x x"}, {"location": "available_software/detail/Guile/", "title": "Guile", "text": ""}, {"location": "available_software/detail/Guile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Guile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Guile, load one of these modules using a module load command like:

                  module load Guile/3.0.7-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Guile/3.0.7-GCCcore-11.2.0 x x x x x x Guile/2.2.7-GCCcore-10.3.0 - x x - x x Guile/1.8.8-GCCcore-9.3.0 - x x - x x Guile/1.8.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Guppy/", "title": "Guppy", "text": ""}, {"location": "available_software/detail/Guppy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Guppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Guppy, load one of these modules using a module load command like:

                  module load Guppy/6.5.7-gpu\n
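
                  Guppy comes in separate -gpu and -cpu builds which, as the overview below shows, are available on different clusters, so pick the variant that matches the node type you request. A minimal check after loading (assuming the standard guppy_basecaller binary is on your PATH):

                  module load Guppy/6.5.7-gpu\nguppy_basecaller --version   # prints the Guppy basecaller version\n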

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Guppy/6.5.7-gpu x - x - x - Guppy/6.5.7-cpu x x - x - x Guppy/6.4.6-gpu x - x - x - Guppy/6.4.6-cpu - x x x x x Guppy/6.4.2-gpu x - - - x - Guppy/6.4.2-cpu - x x - x x Guppy/6.3.8-gpu x - - - x - Guppy/6.3.8-cpu - x x - x x Guppy/6.3.7-gpu x - - - x - Guppy/6.3.7-cpu - x x - x x Guppy/6.1.7-gpu x - - - x - Guppy/6.1.7-cpu - x x - x x Guppy/6.1.2-gpu x - - - x - Guppy/6.1.2-cpu - x x - x x Guppy/6.0.1-gpu x - - - x - Guppy/6.0.1-cpu - x x - x x Guppy/5.0.16-gpu x - - - x - Guppy/5.0.16-cpu - x x - x - Guppy/5.0.15-gpu x - - - x - Guppy/5.0.15-cpu - x x - x x Guppy/5.0.14-gpu - - - - x - Guppy/5.0.14-cpu - x x - x x Guppy/5.0.11-gpu - - - - x - Guppy/5.0.11-cpu - x x - x x Guppy/5.0.7-gpu - - - - x - Guppy/5.0.7-cpu - x x - x x Guppy/4.4.1-cpu - x x - x - Guppy/4.2.2-cpu - x x - x - Guppy/4.0.15-cpu - x x - x - Guppy/3.5.2-cpu - - x - x -"}, {"location": "available_software/detail/Gurobi/", "title": "Gurobi", "text": ""}, {"location": "available_software/detail/Gurobi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Gurobi, load one of these modules using a module load command like:

                  module load Gurobi/11.0.0-GCCcore-12.3.0\n
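
                  For example, loading the module and querying the Gurobi command-line tool is a quick check that the installation is usable (a minimal sketch, assuming gurobi_cl is on your PATH; actually solving models also requires a valid Gurobi licence):

                  module load Gurobi/11.0.0-GCCcore-12.3.0\ngurobi_cl --version   # prints the Gurobi solver version\n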

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gurobi/11.0.0-GCCcore-12.3.0 x x x x x x Gurobi/9.5.2-GCCcore-11.3.0 x x x x x x Gurobi/9.5.0-GCCcore-11.2.0 x x x x x x Gurobi/9.1.1-GCCcore-10.2.0 - x x x x x Gurobi/9.1.0 - x x - x -"}, {"location": "available_software/detail/HAL/", "title": "HAL", "text": ""}, {"location": "available_software/detail/HAL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HAL, load one of these modules using a module load command like:

                  module load HAL/2.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HAL/2.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/HDBSCAN/", "title": "HDBSCAN", "text": ""}, {"location": "available_software/detail/HDBSCAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDBSCAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HDBSCAN, load one of these modules using a module load command like:

                  module load HDBSCAN/0.8.29-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDBSCAN/0.8.29-foss-2022a x x x x x x"}, {"location": "available_software/detail/HDDM/", "title": "HDDM", "text": ""}, {"location": "available_software/detail/HDDM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDDM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HDDM, load one of these modules using a module load command like:

                  module load HDDM/0.7.5-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDDM/0.7.5-intel-2019b-Python-3.7.4 - x - - - x HDDM/0.7.5-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/HDF/", "title": "HDF", "text": ""}, {"location": "available_software/detail/HDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HDF, load one of these modules using a module load command like:

                  module load HDF/4.2.16-2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x HDF/4.2.15-GCCcore-11.3.0 x x x x x x HDF/4.2.15-GCCcore-11.2.0 x x x x x x HDF/4.2.15-GCCcore-10.3.0 x x x x x x HDF/4.2.15-GCCcore-10.2.0 - x x x x x HDF/4.2.15-GCCcore-9.3.0 - - x - x x HDF/4.2.14-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/HDF5/", "title": "HDF5", "text": ""}, {"location": "available_software/detail/HDF5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDF5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HDF5, load one of these modules using a module load command like:

                  module load HDF5/1.14.0-gompi-2023a\n
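
                  For example, a quick check after loading is to ask one of the bundled HDF5 command-line tools for its version (a minimal sketch, assuming h5dump is included, as it is in standard HDF5 builds):

                  module load HDF5/1.14.0-gompi-2023a\nh5dump --version   # prints the HDF5 tool/library version\n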

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDF5/1.14.0-gompi-2023a x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x HDF5/1.13.1-gompi-2022a x x x - x x HDF5/1.12.2-iimpi-2022a x x x x x x HDF5/1.12.2-gompi-2022a x x x x x x HDF5/1.12.1-iimpi-2021b x x x x x x HDF5/1.12.1-gompi-2021b x x x x x x HDF5/1.10.8-gompi-2021b x x x - x x HDF5/1.10.7-iompi-2021a x x x x x x HDF5/1.10.7-iimpi-2021a - x x - x x HDF5/1.10.7-iimpi-2020b - x x x x x HDF5/1.10.7-gompic-2020b x - - - x - HDF5/1.10.7-gompi-2021a x x x x x x HDF5/1.10.7-gompi-2020b x x x x x x HDF5/1.10.6-iimpi-2020a x x x x x x HDF5/1.10.6-gompi-2020a - x x - x x HDF5/1.10.5-iimpi-2019b - x x - x x HDF5/1.10.5-gompi-2019b x x x - x x"}, {"location": "available_software/detail/HH-suite/", "title": "HH-suite", "text": ""}, {"location": "available_software/detail/HH-suite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HH-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HH-suite, load one of these modules using a module load command like:

                  module load HH-suite/3.3.0-gompic-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HH-suite/3.3.0-gompic-2020b x - - - x - HH-suite/3.3.0-gompi-2022a x x x x x x HH-suite/3.3.0-gompi-2021b x - x - x - HH-suite/3.3.0-gompi-2021a x x x - x x HH-suite/3.3.0-gompi-2020b - x x x x x HH-suite/3.2.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/HISAT2/", "title": "HISAT2", "text": ""}, {"location": "available_software/detail/HISAT2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HISAT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HISAT2, load one of these modules using a module load command like:

                  module load HISAT2/2.2.1-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HISAT2/2.2.1-gompi-2022a x x x x x x HISAT2/2.2.1-gompi-2021b x x x x x x HISAT2/2.2.1-gompi-2020b - x x x x x"}, {"location": "available_software/detail/HMMER/", "title": "HMMER", "text": ""}, {"location": "available_software/detail/HMMER/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HMMER installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HMMER, load one of these modules using a module load command like:

                  module load HMMER/3.4-gompi-2023a\n
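
                  For example, after loading the module you can ask one of the HMMER programs for its built-in help, which also shows the version (a minimal check):

                  module load HMMER/3.4-gompi-2023a\nhmmsearch -h   # prints usage information and the HMMER version\n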

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HMMER/3.4-gompi-2023a x x x x x x HMMER/3.3.2-iimpi-2021b x x x - x x HMMER/3.3.2-iimpi-2020b - x x x x x HMMER/3.3.2-gompic-2020b x - - - x - HMMER/3.3.2-gompi-2022b x x x x x x HMMER/3.3.2-gompi-2022a x x x x x x HMMER/3.3.2-gompi-2021b x x x - x x HMMER/3.3.2-gompi-2021a x x x - x x HMMER/3.3.2-gompi-2020b x x x x x x HMMER/3.3.2-gompi-2020a - x x - x x HMMER/3.3.2-gompi-2019b - x x - x x HMMER/3.3.1-iimpi-2020a - x x - x x HMMER/3.3.1-gompi-2020a - x x - x x HMMER/3.2.1-iimpi-2019b - x x - x x HMMER/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/HMMER2/", "title": "HMMER2", "text": ""}, {"location": "available_software/detail/HMMER2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HMMER2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HMMER2, load one of these modules using a module load command like:

                  module load HMMER2/2.3.2-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HMMER2/2.3.2-GCC-10.3.0 - x x - x x HMMER2/2.3.2-GCC-10.2.0 - x x x x x HMMER2/2.3.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HPL/", "title": "HPL", "text": ""}, {"location": "available_software/detail/HPL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HPL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HPL, load one of these modules using a module load command like:

                  module load HPL/2.3-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HPL/2.3-intel-2019b - x x - x x HPL/2.3-iibff-2020b - x - - - - HPL/2.3-gobff-2020b - x - - - - HPL/2.3-foss-2023b x x x x x x HPL/2.3-foss-2019b - x x - x x HPL/2.0.15-intel-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/HTSeq/", "title": "HTSeq", "text": ""}, {"location": "available_software/detail/HTSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HTSeq, load one of these modules using a module load command like:

                  module load HTSeq/2.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSeq/2.0.2-foss-2022a x x x x x x HTSeq/0.11.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/HTSlib/", "title": "HTSlib", "text": ""}, {"location": "available_software/detail/HTSlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HTSlib, load one of these modules using a module load command like:

                  module load HTSlib/1.18-GCC-12.3.0\n
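
                  HTSlib ships command-line utilities such as bgzip and tabix; a minimal check after loading the module is to print their versions (a sketch, assuming these tools accept the --version flag, as recent HTSlib releases do):

                  module load HTSlib/1.18-GCC-12.3.0\nbgzip --version   # prints the bundled HTSlib version\ntabix --version\n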

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSlib/1.18-GCC-12.3.0 x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x HTSlib/1.15.1-GCC-11.3.0 x x x x x x HTSlib/1.14-GCC-11.2.0 x x x x x x HTSlib/1.12-GCC-10.3.0 x x x - x x HTSlib/1.12-GCC-10.2.0 - x x - x x HTSlib/1.11-GCC-10.2.0 x x x x x x HTSlib/1.10.2-iccifort-2019.5.281 - x x - x x HTSlib/1.10.2-GCC-9.3.0 - x x - x x HTSlib/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HTSplotter/", "title": "HTSplotter", "text": ""}, {"location": "available_software/detail/HTSplotter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSplotter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HTSplotter, load one of these modules using a module load command like:

                  module load HTSplotter/2.11-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSplotter/2.11-foss-2022b x x x x x x HTSplotter/0.15-foss-2022a x x x x x x"}, {"location": "available_software/detail/Hadoop/", "title": "Hadoop", "text": ""}, {"location": "available_software/detail/Hadoop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hadoop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Hadoop, load one of these modules using a module load command like:

                  module load Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8 - - x - x - Hadoop/2.10.0-GCCcore-10.2.0-native - x - - - - Hadoop/2.10.0-GCCcore-8.3.0-native - x x - x x"}, {"location": "available_software/detail/HarfBuzz/", "title": "HarfBuzz", "text": ""}, {"location": "available_software/detail/HarfBuzz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HarfBuzz, load one of these modules using a module load command like:

                  module load HarfBuzz/5.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x HarfBuzz/4.2.1-GCCcore-11.3.0 x x x x x x HarfBuzz/2.8.2-GCCcore-11.2.0 x x x x x x HarfBuzz/2.8.1-GCCcore-10.3.0 x x x x x x HarfBuzz/2.6.7-GCCcore-10.2.0 x x x x x x HarfBuzz/2.6.4-GCCcore-9.3.0 - x x - x x HarfBuzz/2.6.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/HiCExplorer/", "title": "HiCExplorer", "text": ""}, {"location": "available_software/detail/HiCExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HiCExplorer, load one of these modules using a module load command like:

                  module load HiCExplorer/3.7.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HiCExplorer/3.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/HiCMatrix/", "title": "HiCMatrix", "text": ""}, {"location": "available_software/detail/HiCMatrix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HiCMatrix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HiCMatrix, load one of these modules using a module load command like:

                  module load HiCMatrix/17-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HiCMatrix/17-foss-2022a x x x x x x"}, {"location": "available_software/detail/HighFive/", "title": "HighFive", "text": ""}, {"location": "available_software/detail/HighFive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HighFive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HighFive, load one of these modules using a module load command like:

                  module load HighFive/2.7.1-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HighFive/2.7.1-gompi-2023a x x x x x x"}, {"location": "available_software/detail/Highway/", "title": "Highway", "text": ""}, {"location": "available_software/detail/Highway/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Highway installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Highway, load one of these modules using a module load command like:

                  module load Highway/1.0.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Highway/1.0.4-GCCcore-12.3.0 x x x x x x Highway/1.0.4-GCCcore-11.3.0 x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x Highway/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Horovod/", "title": "Horovod", "text": ""}, {"location": "available_software/detail/Horovod/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Horovod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Horovod, load one of these modules using a module load command like:

                  module load Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Horovod/0.23.0-foss-2021a-CUDA-11.3.1-PyTorch-1.10.0 x - - - - - Horovod/0.22.0-fosscuda-2020b-PyTorch-1.8.1 x - - - - - Horovod/0.21.3-fosscuda-2020b-PyTorch-1.7.1 x - - - x - Horovod/0.21.1-fosscuda-2020b-TensorFlow-2.4.1 x - - - x -"}, {"location": "available_software/detail/HyPo/", "title": "HyPo", "text": ""}, {"location": "available_software/detail/HyPo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HyPo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using HyPo, load one of these modules using a module load command like:

                  module load HyPo/1.0.3-GCC-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HyPo/1.0.3-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/Hybpiper/", "title": "Hybpiper", "text": ""}, {"location": "available_software/detail/Hybpiper/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hybpiper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Hybpiper, load one of these modules using a module load command like:

                  module load Hybpiper/2.1.6-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hybpiper/2.1.6-foss-2022b x x x x x x"}, {"location": "available_software/detail/Hydra/", "title": "Hydra", "text": ""}, {"location": "available_software/detail/Hydra/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hydra installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Hydra, load one of these modules using a module load command like:

                  module load Hydra/1.1.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hydra/1.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Hyperopt/", "title": "Hyperopt", "text": ""}, {"location": "available_software/detail/Hyperopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Hyperopt, load one of these modules using a module load command like:

                  module load Hyperopt/0.2.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hyperopt/0.2.7-foss-2022a x x x x x x Hyperopt/0.2.7-foss-2021a x x x - x x"}, {"location": "available_software/detail/Hypre/", "title": "Hypre", "text": ""}, {"location": "available_software/detail/Hypre/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hypre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Hypre, load one of these modules using a module load command like:

                  module load Hypre/2.25.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hypre/2.25.0-foss-2022a x x x x x x Hypre/2.24.0-intel-2021b x x x x x x Hypre/2.21.0-foss-2021a - x x - x x Hypre/2.20.0-foss-2020b - x x x x x Hypre/2.18.2-intel-2019b - x x - x x Hypre/2.18.2-foss-2020a - x x - x x Hypre/2.18.2-foss-2019b x x x - x x"}, {"location": "available_software/detail/ICU/", "title": "ICU", "text": ""}, {"location": "available_software/detail/ICU/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ICU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using ICU, load one of these modules using a module load command like:

                  module load ICU/73.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ICU/73.2-GCCcore-12.3.0 x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x ICU/71.1-GCCcore-11.3.0 x x x x x x ICU/69.1-GCCcore-11.2.0 x x x x x x ICU/69.1-GCCcore-10.3.0 x x x x x x ICU/67.1-GCCcore-10.2.0 x x x x x x ICU/66.1-GCCcore-9.3.0 - x x - x x ICU/64.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/IDBA-UD/", "title": "IDBA-UD", "text": ""}, {"location": "available_software/detail/IDBA-UD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IDBA-UD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IDBA-UD, load one of these modules using a module load command like:

                  module load IDBA-UD/1.1.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IDBA-UD/1.1.3-GCC-11.2.0 x x x - x x IDBA-UD/1.1.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/IGMPlot/", "title": "IGMPlot", "text": ""}, {"location": "available_software/detail/IGMPlot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IGMPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IGMPlot, load one of these modules using a module load command like:

                  module load IGMPlot/2.4.2-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IGMPlot/2.4.2-iccifort-2019.5.281 - x - - - - IGMPlot/2.4.2-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/IGV/", "title": "IGV", "text": ""}, {"location": "available_software/detail/IGV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IGV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IGV, load one of these modules using a module load command like:

                  module load IGV/2.9.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IGV/2.9.4-Java-11 - x x - x x IGV/2.8.0-Java-11 - x x - x x"}, {"location": "available_software/detail/IOR/", "title": "IOR", "text": ""}, {"location": "available_software/detail/IOR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IOR, load one of these modules using a module load command like:

                  module load IOR/3.2.1-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IOR/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/IPython/", "title": "IPython", "text": ""}, {"location": "available_software/detail/IPython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IPython, load one of these modules using a module load command like:

                  module load IPython/8.14.0-GCCcore-12.3.0\n
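
                  For example, a quick check after loading is to print the interpreter's version before starting an interactive session (a minimal sketch):

                  module load IPython/8.14.0-GCCcore-12.3.0\nipython --version   # prints the IPython version\n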

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IPython/8.14.0-GCCcore-12.3.0 x x x x x x IPython/8.14.0-GCCcore-12.2.0 x x x x x x IPython/8.5.0-GCCcore-11.3.0 x x x x x x IPython/7.26.0-GCCcore-11.2.0 x x x x x x IPython/7.25.0-GCCcore-10.3.0 x x x x x x IPython/7.18.1-GCCcore-10.2.0 x x x x x x IPython/7.15.0-intel-2020a-Python-3.8.2 x x x x x x IPython/7.15.0-foss-2020a-Python-3.8.2 - x x - x x IPython/7.9.0-intel-2019b-Python-3.7.4 - x x - x x IPython/7.9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/IQ-TREE/", "title": "IQ-TREE", "text": ""}, {"location": "available_software/detail/IQ-TREE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IQ-TREE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IQ-TREE, load one of these modules using a module load command like:

                  module load IQ-TREE/2.2.2.6-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IQ-TREE/2.2.2.6-gompi-2022b x x x x x x IQ-TREE/2.2.2.6-gompi-2022a x x x x x x IQ-TREE/2.2.2.3-gompi-2022a x x x x x x IQ-TREE/2.2.1-gompi-2021b x x x - x x IQ-TREE/1.6.12-intel-2019b - x x - x x"}, {"location": "available_software/detail/IRkernel/", "title": "IRkernel", "text": ""}, {"location": "available_software/detail/IRkernel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IRkernel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IRkernel, load one of these modules using a module load command like:

                  module load IRkernel/1.2-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IRkernel/1.2-foss-2021a-R-4.1.0 - x x - x x IRkernel/1.1-foss-2019b-R-3.6.2-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ISA-L/", "title": "ISA-L", "text": ""}, {"location": "available_software/detail/ISA-L/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ISA-L installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using ISA-L, load one of these modules using a module load command like:

                  module load ISA-L/2.30.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ISA-L/2.30.0-GCCcore-11.3.0 x x x x x x ISA-L/2.30.0-GCCcore-11.2.0 x x x - x x ISA-L/2.30.0-GCCcore-10.3.0 x x x - x x ISA-L/2.30.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ITK/", "title": "ITK", "text": ""}, {"location": "available_software/detail/ITK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using ITK, load one of these modules using a module load command like:

                  module load ITK/5.2.1-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ITK/5.2.1-fosscuda-2020b x - - - x - ITK/5.2.1-foss-2022a x x x x x x ITK/5.2.1-foss-2020b - x x x x x ITK/5.1.2-fosscuda-2020b - - - - x - ITK/5.0.1-foss-2019b-Python-3.7.4 - x x - x x ITK/4.13.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ImageMagick/", "title": "ImageMagick", "text": ""}, {"location": "available_software/detail/ImageMagick/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using ImageMagick, load one of these modules using a module load command like:

                  module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n
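
                  For example, after loading the module the ImageMagick 7 front-end command can report its version and enabled features (a minimal check, assuming the magick binary is on your PATH, as in ImageMagick 7 installations):

                  module load ImageMagick/7.1.1-15-GCCcore-12.3.0\nmagick -version   # prints the ImageMagick version and feature list\n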

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x ImageMagick/7.1.0-37-GCCcore-11.3.0 x x x x x x ImageMagick/7.1.0-4-GCCcore-11.2.0 x x x x x x ImageMagick/7.0.11-14-GCCcore-10.3.0 x x x x x x ImageMagick/7.0.10-35-GCCcore-10.2.0 x x x x x x ImageMagick/7.0.10-1-GCCcore-9.3.0 - x x - x x ImageMagick/7.0.9-5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Imath/", "title": "Imath", "text": ""}, {"location": "available_software/detail/Imath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Imath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Imath, load one of these modules using a module load command like:

                  module load Imath/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Imath/3.1.7-GCCcore-12.3.0 x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x Imath/3.1.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Inferelator/", "title": "Inferelator", "text": ""}, {"location": "available_software/detail/Inferelator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Inferelator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Inferelator, load one of these modules using a module load command like:

                  module load Inferelator/0.6.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Inferelator/0.6.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/Infernal/", "title": "Infernal", "text": ""}, {"location": "available_software/detail/Infernal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Infernal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Infernal, load one of these modules using a module load command like:

                  module load Infernal/1.1.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Infernal/1.1.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/InterProScan/", "title": "InterProScan", "text": ""}, {"location": "available_software/detail/InterProScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which InterProScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using InterProScan, load one of these modules using a module load command like:

                  module load InterProScan/5.62-94.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty InterProScan/5.62-94.0-foss-2022b x x x x x x InterProScan/5.52-86.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/IonQuant/", "title": "IonQuant", "text": ""}, {"location": "available_software/detail/IonQuant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IonQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IonQuant, load one of these modules using a module load command like:

                  module load IonQuant/1.10.12-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IonQuant/1.10.12-Java-11 x x x x x x"}, {"location": "available_software/detail/IsoQuant/", "title": "IsoQuant", "text": ""}, {"location": "available_software/detail/IsoQuant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IsoQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IsoQuant, load one of these modules using a module load command like:

                  module load IsoQuant/3.3.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IsoQuant/3.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/IsoSeq/", "title": "IsoSeq", "text": ""}, {"location": "available_software/detail/IsoSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IsoSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using IsoSeq, load one of these modules using a module load command like:

                  module load IsoSeq/4.0.0-linux-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IsoSeq/4.0.0-linux-x86_64 x x x x x x IsoSeq/3.8.2-linux-x86_64 x x x x x x"}, {"location": "available_software/detail/JAGS/", "title": "JAGS", "text": ""}, {"location": "available_software/detail/JAGS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JAGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using JAGS, load one of these modules using a module load command like:

                  module load JAGS/4.3.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JAGS/4.3.2-foss-2022b x x x x x x JAGS/4.3.1-foss-2022a x x x x x x JAGS/4.3.0-foss-2021b x x x - x x JAGS/4.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/JSON-GLib/", "title": "JSON-GLib", "text": ""}, {"location": "available_software/detail/JSON-GLib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JSON-GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using JSON-GLib, load one of these modules using a module load command like:

                  module load JSON-GLib/1.6.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JSON-GLib/1.6.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Jansson/", "title": "Jansson", "text": ""}, {"location": "available_software/detail/Jansson/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Jansson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Jansson, load one of these modules using a module load command like:

                  module load Jansson/2.13.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Jansson/2.13.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/JasPer/", "title": "JasPer", "text": ""}, {"location": "available_software/detail/JasPer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JasPer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using JasPer, load one of these modules using a module load command like:

                  module load JasPer/4.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JasPer/4.0.0-GCCcore-12.3.0 x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x JasPer/2.0.33-GCCcore-11.3.0 x x x x x x JasPer/2.0.33-GCCcore-11.2.0 x x x x x x JasPer/2.0.28-GCCcore-10.3.0 x x x x x x JasPer/2.0.24-GCCcore-10.2.0 x x x x x x JasPer/2.0.14-GCCcore-9.3.0 - x x - x x JasPer/2.0.14-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Java/", "title": "Java", "text": ""}, {"location": "available_software/detail/Java/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Java, load one of these modules using a module load command like:

                  module load Java/17.0.6\n
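
                  The overview below also lists shorter names such as Java/17, which map to a specific release (Java/17.0.6); loading either works. A minimal check after loading:

                  module load Java/17\njava -version   # prints the active JDK version (note the single dash)\n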

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Java/17.0.6 x x x x x x Java/17(@Java/17.0.6) x x x x x x Java/13.0.2 - x x - x x Java/13(@Java/13.0.2) - x x - x x Java/11.0.20 x x x x x x Java/11.0.18 x - - x x - Java/11.0.16 x x x x x x Java/11.0.2 x x x - x x Java/11(@Java/11.0.20) x x x x x x Java/1.8.0_311 x - x x x x Java/1.8.0_241 - x - - - - Java/1.8.0_221 - x - - - - Java/1.8(@Java/1.8.0_311) x - x x x x Java/1.8(@Java/1.8.0_241) - x - - - -"}, {"location": "available_software/detail/Jellyfish/", "title": "Jellyfish", "text": ""}, {"location": "available_software/detail/Jellyfish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Jellyfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Jellyfish, load one of these modules using a module load command like:

                  module load Jellyfish/2.3.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Jellyfish/2.3.0-GCC-11.3.0 x x x x x x Jellyfish/2.3.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/JsonCpp/", "title": "JsonCpp", "text": ""}, {"location": "available_software/detail/JsonCpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using JsonCpp, load one of these modules using a module load command like:

                  module load JsonCpp/1.9.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x JsonCpp/1.9.5-GCCcore-12.2.0 x x x x x x JsonCpp/1.9.5-GCCcore-11.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-11.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-9.3.0 - x x - x x JsonCpp/1.9.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Judy/", "title": "Judy", "text": ""}, {"location": "available_software/detail/Judy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Judy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Judy, load one of these modules using a module load command like:

                  module load Judy/1.0.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Judy/1.0.5-GCCcore-11.3.0 x x x x x x Judy/1.0.5-GCCcore-11.2.0 x x x x x x Judy/1.0.5-GCCcore-10.3.0 x x x - x x Judy/1.0.5-GCCcore-10.2.0 - x x x x x Judy/1.0.5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Julia/", "title": "Julia", "text": ""}, {"location": "available_software/detail/Julia/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Julia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using Julia, load one of these modules using a module load command like:

                  module load Julia/1.9.3-linux-x86_64\n
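
                  For example, after loading the module you can print the Julia version either from the command line or from within Julia itself (a minimal check):

                  module load Julia/1.9.3-linux-x86_64\njulia --version\njulia -e 'println(VERSION)'   # same information, printed from inside Julia\n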

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Julia/1.9.3-linux-x86_64 x x x x x x Julia/1.7.2-linux-x86_64 x x x x x x Julia/1.6.2-linux-x86_64 - x x - x x"}, {"location": "available_software/detail/JupyterHub/", "title": "JupyterHub", "text": ""}, {"location": "available_software/detail/JupyterHub/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterHub installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using JupyterHub, load one of these modules using a module load command like:

                  module load JupyterHub/4.0.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterHub/4.0.1-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/JupyterLab/", "title": "JupyterLab", "text": ""}, {"location": "available_software/detail/JupyterLab/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JupyterLab, load one of these modules using a module load command like:

                  module load JupyterLab/4.0.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x JupyterLab/4.0.3-GCCcore-12.2.0 x x x x x x JupyterLab/3.5.0-GCCcore-11.3.0 x x x x x x JupyterLab/3.1.6-GCCcore-11.2.0 x x x - x x JupyterLab/3.0.16-GCCcore-10.3.0 x - x - x - JupyterLab/2.2.8-GCCcore-10.2.0 x x x x x x JupyterLab/1.2.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/JupyterNotebook/", "title": "JupyterNotebook", "text": ""}, {"location": "available_software/detail/JupyterNotebook/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using JupyterNotebook, load one of these modules using a module load command like:

                  module load JupyterNotebook/7.0.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterNotebook/7.0.3-GCCcore-12.2.0 x x x x x x JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x JupyterNotebook/6.4.12-SAGE-10.2 x x x x x x JupyterNotebook/6.4.12-SAGE-10.1 x x x x x x JupyterNotebook/6.4.12-SAGE-9.8 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.2.0-IPython-7.26.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-10.3.0-IPython-7.25.0 x x x x x x JupyterNotebook/6.1.4-GCCcore-10.2.0-IPython-7.18.1 x x x x x x JupyterNotebook/6.0.3-intel-2020a-Python-3.8.2-IPython-7.15.0 x x x x x x JupyterNotebook/6.0.3-foss-2020a-Python-3.8.2-IPython-7.15.0 - x x - x x JupyterNotebook/6.0.2-intel-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x JupyterNotebook/6.0.2-foss-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x"}, {"location": "available_software/detail/KMC/", "title": "KMC", "text": ""}, {"location": "available_software/detail/KMC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KMC, load one of these modules using a module load command like:

                  module load KMC/3.2.1-GCC-11.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KMC/3.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x KMC/3.2.1-GCC-11.2.0 x x x - x x KMC/3.1.2rc1-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/KaHIP/", "title": "KaHIP", "text": ""}, {"location": "available_software/detail/KaHIP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KaHIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KaHIP, load one of these modules using a module load command like:

                  module load KaHIP/3.14-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KaHIP/3.14-gompi-2022a - - - x - -"}, {"location": "available_software/detail/Kaleido/", "title": "Kaleido", "text": ""}, {"location": "available_software/detail/Kaleido/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kaleido installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kaleido, load one of these modules using a module load command like:

                  module load Kaleido/0.1.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kaleido/0.1.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Kalign/", "title": "Kalign", "text": ""}, {"location": "available_software/detail/Kalign/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kalign installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kalign, load one of these modules using a module load command like:

                  module load Kalign/3.3.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kalign/3.3.5-GCCcore-11.3.0 x x x x x x Kalign/3.3.2-GCCcore-11.2.0 x - x - x - Kalign/3.3.1-GCCcore-10.3.0 x x x - x x Kalign/3.3.1-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Kent_tools/", "title": "Kent_tools", "text": ""}, {"location": "available_software/detail/Kent_tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kent_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kent_tools, load one of these modules using a module load command like:

                  module load Kent_tools/20190326-linux.x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kent_tools/20190326-linux.x86_64 - - x - x - Kent_tools/422-GCC-11.2.0 x x x x x x Kent_tools/411-GCC-10.2.0 - x x x x x Kent_tools/401-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Keras/", "title": "Keras", "text": ""}, {"location": "available_software/detail/Keras/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Keras installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Keras, load one of these modules using a module load command like:

                  module load Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Keras/2.4.3-fosscuda-2020b - - - - x - Keras/2.4.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/KerasTuner/", "title": "KerasTuner", "text": ""}, {"location": "available_software/detail/KerasTuner/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KerasTuner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KerasTuner, load one of these modules using a module load command like:

                  module load KerasTuner/1.3.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KerasTuner/1.3.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/Kraken/", "title": "Kraken", "text": ""}, {"location": "available_software/detail/Kraken/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kraken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kraken, load one of these modules using a module load command like:

                  module load Kraken/1.1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kraken/1.1.1-GCCcore-10.2.0 - x x x x x Kraken/1.1.1-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/Kraken2/", "title": "Kraken2", "text": ""}, {"location": "available_software/detail/Kraken2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kraken2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Kraken2, load one of these modules using a module load command like:

                  module load Kraken2/2.1.2-gompi-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kraken2/2.1.2-gompi-2021a - x x x x x Kraken2/2.0.9-beta-gompi-2020a-Perl-5.30.2 - x x - x x"}, {"location": "available_software/detail/KrakenUniq/", "title": "KrakenUniq", "text": ""}, {"location": "available_software/detail/KrakenUniq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KrakenUniq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KrakenUniq, load one of these modules using a module load command like:

                  module load KrakenUniq/1.0.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KrakenUniq/1.0.3-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/KronaTools/", "title": "KronaTools", "text": ""}, {"location": "available_software/detail/KronaTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KronaTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using KronaTools, load one of these modules using a module load command like:

                  module load KronaTools/2.8.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x KronaTools/2.8.1-GCCcore-11.3.0 x x x x x x KronaTools/2.8-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/LAME/", "title": "LAME", "text": ""}, {"location": "available_software/detail/LAME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LAME, load one of these modules using a module load command like:

                  module load LAME/3.100-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAME/3.100-GCCcore-12.3.0 x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x LAME/3.100-GCCcore-11.3.0 x x x x x x LAME/3.100-GCCcore-11.2.0 x x x x x x LAME/3.100-GCCcore-10.3.0 x x x x x x LAME/3.100-GCCcore-10.2.0 x x x x x x LAME/3.100-GCCcore-9.3.0 - x x - x x LAME/3.100-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/LAMMPS/", "title": "LAMMPS", "text": ""}, {"location": "available_software/detail/LAMMPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LAMMPS, load one of these modules using a module load command like:

                  module load LAMMPS/patch_20Nov2019-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAMMPS/patch_20Nov2019-intel-2019b - x - - - - LAMMPS/23Jun2022-foss-2021b-kokkos-CUDA-11.4.1 x - - - x - LAMMPS/23Jun2022-foss-2021b-kokkos x x x - x x LAMMPS/23Jun2022-foss-2021a-kokkos - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos-OCTP - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos - - x - x x LAMMPS/7Aug2019-foss-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-intel-2020a-Python-3.8.2-kokkos - x x - x x LAMMPS/3Mar2020-intel-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-foss-2019b-Python-3.7.4-kokkos - x x - x x"}, {"location": "available_software/detail/LAST/", "title": "LAST", "text": ""}, {"location": "available_software/detail/LAST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LAST, load one of these modules using a module load command like:

                  module load LAST/1179-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAST/1179-GCC-10.2.0 - x x x x x LAST/1045-intel-2019b - x x - x x"}, {"location": "available_software/detail/LASTZ/", "title": "LASTZ", "text": ""}, {"location": "available_software/detail/LASTZ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LASTZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LASTZ, load one of these modules using a module load command like:

                  module load LASTZ/1.04.22-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LASTZ/1.04.22-GCC-12.3.0 x x x x x x LASTZ/1.04.03-foss-2019b - x x - x x"}, {"location": "available_software/detail/LDC/", "title": "LDC", "text": ""}, {"location": "available_software/detail/LDC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LDC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LDC, load one of these modules using a module load command like:

                  module load LDC/1.30.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LDC/1.30.0-GCCcore-11.3.0 x x x x x x LDC/1.25.1-GCCcore-10.2.0 - x x x x x LDC/1.24.0-x86_64 x x x x x x LDC/0.17.6-x86_64 - x x x x x"}, {"location": "available_software/detail/LERC/", "title": "LERC", "text": ""}, {"location": "available_software/detail/LERC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LERC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LERC, load one of these modules using a module load command like:

                  module load LERC/4.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LERC/4.0.0-GCCcore-12.3.0 x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x LERC/4.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LIANA%2B/", "title": "LIANA+", "text": ""}, {"location": "available_software/detail/LIANA%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LIANA+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LIANA+, load one of these modules using a module load command like:

                  module load LIANA+/1.0.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LIANA+/1.0.1-foss-2022a x x x x - x"}, {"location": "available_software/detail/LIBSVM/", "title": "LIBSVM", "text": ""}, {"location": "available_software/detail/LIBSVM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LIBSVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LIBSVM, load one of these modules using a module load command like:

                  module load LIBSVM/3.30-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LIBSVM/3.30-GCCcore-11.3.0 x x x x x x LIBSVM/3.25-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/LLVM/", "title": "LLVM", "text": ""}, {"location": "available_software/detail/LLVM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LLVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LLVM, load one of these modules using a module load command like:

                  module load LLVM/16.0.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LLVM/16.0.6-GCCcore-12.3.0 x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x LLVM/14.0.6-GCCcore-12.2.0-llvmlite x x x x x x LLVM/14.0.3-GCCcore-11.3.0 x x x x x x LLVM/12.0.1-GCCcore-11.2.0 x x x x x x LLVM/11.1.0-GCCcore-10.3.0 x x x x x x LLVM/11.0.0-GCCcore-10.2.0 x x x x x x LLVM/10.0.1-GCCcore-10.2.0 - x x x x x LLVM/9.0.1-GCCcore-9.3.0 - x x - x x LLVM/9.0.0-GCCcore-8.3.0 x x x - x x LLVM/8.0.1-GCCcore-8.3.0 x x x - x x LLVM/7.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/LMDB/", "title": "LMDB", "text": ""}, {"location": "available_software/detail/LMDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LMDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LMDB, load one of these modules using a module load command like:

                  module load LMDB/0.9.31-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LMDB/0.9.31-GCCcore-12.3.0 x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x LMDB/0.9.29-GCCcore-11.3.0 x x x x x x LMDB/0.9.29-GCCcore-11.2.0 x x x x x x LMDB/0.9.28-GCCcore-10.3.0 x x x x x x LMDB/0.9.24-GCCcore-10.2.0 x x x x x x LMDB/0.9.24-GCCcore-9.3.0 - x x - x x LMDB/0.9.24-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LMfit/", "title": "LMfit", "text": ""}, {"location": "available_software/detail/LMfit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LMfit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LMfit, load one of these modules using a module load command like:

                  module load LMfit/1.0.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LMfit/1.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LPJmL/", "title": "LPJmL", "text": ""}, {"location": "available_software/detail/LPJmL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LPJmL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LPJmL, load one of these modules using a module load command like:

                  module load LPJmL/4.0.003-iimpi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LPJmL/4.0.003-iimpi-2020b - x x x x x"}, {"location": "available_software/detail/LPeg/", "title": "LPeg", "text": ""}, {"location": "available_software/detail/LPeg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LPeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LPeg, load one of these modules using a module load command like:

                  module load LPeg/1.0.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LPeg/1.0.2-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/LSD2/", "title": "LSD2", "text": ""}, {"location": "available_software/detail/LSD2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LSD2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LSD2, load one of these modules using a module load command like:

                  module load LSD2/2.4.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LSD2/2.4.1-GCCcore-12.2.0 x x x x x x LSD2/2.3-GCCcore-11.3.0 x x x x x x LSD2/2.3-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/LUMPY/", "title": "LUMPY", "text": ""}, {"location": "available_software/detail/LUMPY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LUMPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LUMPY, load one of these modules using a module load command like:

                  module load LUMPY/0.3.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LUMPY/0.3.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/LZO/", "title": "LZO", "text": ""}, {"location": "available_software/detail/LZO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LZO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LZO, load one of these modules using a module load command like:

                  module load LZO/2.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LZO/2.10-GCCcore-12.3.0 x x x x x x LZO/2.10-GCCcore-11.3.0 x x x x x x LZO/2.10-GCCcore-11.2.0 x x x x x x LZO/2.10-GCCcore-10.3.0 x x x x x x LZO/2.10-GCCcore-10.2.0 - x x x x x LZO/2.10-GCCcore-9.3.0 x x x x x x LZO/2.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/L_RNA_scaffolder/", "title": "L_RNA_scaffolder", "text": ""}, {"location": "available_software/detail/L_RNA_scaffolder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which L_RNA_scaffolder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using L_RNA_scaffolder, load one of these modules using a module load command like:

                  module load L_RNA_scaffolder/20190530-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty L_RNA_scaffolder/20190530-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Lace/", "title": "Lace", "text": ""}, {"location": "available_software/detail/Lace/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Lace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Lace, load one of these modules using a module load command like:

                  module load Lace/1.14.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lace/1.14.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/LevelDB/", "title": "LevelDB", "text": ""}, {"location": "available_software/detail/LevelDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LevelDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LevelDB, load one of these modules using a module load command like:

                  module load LevelDB/1.22-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LevelDB/1.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Levenshtein/", "title": "Levenshtein", "text": ""}, {"location": "available_software/detail/Levenshtein/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Levenshtein, load one of these modules using a module load command like:

                  module load Levenshtein/0.24.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Levenshtein/0.24.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/LiBis/", "title": "LiBis", "text": ""}, {"location": "available_software/detail/LiBis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LiBis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LiBis, load one of these modules using a module load command like:

                  module load LiBis/20200428-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LiBis/20200428-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LibLZF/", "title": "LibLZF", "text": ""}, {"location": "available_software/detail/LibLZF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LibLZF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LibLZF, load one of these modules using a module load command like:

                  module load LibLZF/3.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibLZF/3.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LibSoup/", "title": "LibSoup", "text": ""}, {"location": "available_software/detail/LibSoup/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LibSoup installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LibSoup, load one of these modules using a module load command like:

                  module load LibSoup/3.0.7-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibSoup/3.0.7-GCC-11.2.0 x x x x x x LibSoup/2.74.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/LibTIFF/", "title": "LibTIFF", "text": ""}, {"location": "available_software/detail/LibTIFF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LibTIFF, load one of these modules using a module load command like:

                  module load LibTIFF/4.6.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.3.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.2.0 x x x x x x LibTIFF/4.2.0-GCCcore-10.3.0 x x x x x x LibTIFF/4.1.0-GCCcore-10.2.0 x x x x x x LibTIFF/4.1.0-GCCcore-9.3.0 - x x - x x LibTIFF/4.0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Libint/", "title": "Libint", "text": ""}, {"location": "available_software/detail/Libint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Libint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Libint, load one of these modules using a module load command like:

                  module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-12.2.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-11.3.0-lmax-6-cp2k x x x x x x Libint/2.6.0-iimpi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-iimpi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-iccifort-2020.4.304-lmax-6-cp2k - x x - x - Libint/2.6.0-gompi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-gompi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-GCC-10.3.0-lmax-6-cp2k - x x x x x Libint/2.6.0-GCC-10.2.0-lmax-6-cp2k - x x x x x Libint/1.1.6-iomkl-2020a - x - - - - Libint/1.1.6-intel-2020a - x x - x x Libint/1.1.6-intel-2019b - x - - - - Libint/1.1.6-foss-2020a - x - - - -"}, {"location": "available_software/detail/Lighter/", "title": "Lighter", "text": ""}, {"location": "available_software/detail/Lighter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Lighter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Lighter, load one of these modules using a module load command like:

                  module load Lighter/1.1.2-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lighter/1.1.2-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/LittleCMS/", "title": "LittleCMS", "text": ""}, {"location": "available_software/detail/LittleCMS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LittleCMS, load one of these modules using a module load command like:

                  module load LittleCMS/2.15-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LittleCMS/2.15-GCCcore-12.3.0 x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x LittleCMS/2.13.1-GCCcore-11.3.0 x x x x x x LittleCMS/2.12-GCCcore-11.2.0 x x x x x x LittleCMS/2.12-GCCcore-10.3.0 x x x x x x LittleCMS/2.11-GCCcore-10.2.0 x x x x x x LittleCMS/2.9-GCCcore-9.3.0 - x x - x x LittleCMS/2.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LncLOOM/", "title": "LncLOOM", "text": ""}, {"location": "available_software/detail/LncLOOM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LncLOOM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LncLOOM, load one of these modules using a module load command like:

                  module load LncLOOM/2.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LncLOOM/2.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/LoRDEC/", "title": "LoRDEC", "text": ""}, {"location": "available_software/detail/LoRDEC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LoRDEC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LoRDEC, load one of these modules using a module load command like:

                  module load LoRDEC/0.9-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LoRDEC/0.9-gompi-2022a x x x x x x"}, {"location": "available_software/detail/Longshot/", "title": "Longshot", "text": ""}, {"location": "available_software/detail/Longshot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Longshot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Longshot, load one of these modules using a module load command like:

                  module load Longshot/0.4.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Longshot/0.4.5-GCCcore-11.3.0 x x x x x x Longshot/0.4.3-GCCcore-10.2.0 - - x - x - Longshot/0.4.1-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/LtrDetector/", "title": "LtrDetector", "text": ""}, {"location": "available_software/detail/LtrDetector/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LtrDetector installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using LtrDetector, load one of these modules using a module load command like:

                  module load LtrDetector/1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LtrDetector/1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Lua/", "title": "Lua", "text": ""}, {"location": "available_software/detail/Lua/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Lua installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Lua, load one of these modules using a module load command like:

                  module load Lua/5.4.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lua/5.4.6-GCCcore-12.3.0 x x x x x x Lua/5.4.4-GCCcore-11.3.0 x x x x x x Lua/5.4.3-GCCcore-11.2.0 x x x x x x Lua/5.4.3-GCCcore-10.3.0 x x x x x x Lua/5.4.2-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-9.3.0 - x x - x x Lua/5.1.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/M1QN3/", "title": "M1QN3", "text": ""}, {"location": "available_software/detail/M1QN3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which M1QN3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using M1QN3, load one of these modules using a module load command like:

                  module load M1QN3/3.3-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty M1QN3/3.3-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/M4/", "title": "M4", "text": ""}, {"location": "available_software/detail/M4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which M4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using M4, load one of these modules using a module load command like:

                  module load M4/1.4.19-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty M4/1.4.19-GCCcore-13.2.0 x x x x x x M4/1.4.19-GCCcore-12.3.0 x x x x x x M4/1.4.19-GCCcore-12.2.0 x x x x x x M4/1.4.19-GCCcore-11.3.0 x x x x x x M4/1.4.19-GCCcore-11.2.0 x x x x x x M4/1.4.19 x x x x x x M4/1.4.18-GCCcore-10.3.0 x x x x x x M4/1.4.18-GCCcore-10.2.0 x x x x x x M4/1.4.18-GCCcore-9.3.0 x x x x x x M4/1.4.18-GCCcore-8.3.0 x x x x x x M4/1.4.18-GCCcore-8.2.0 - x - - - - M4/1.4.18 x x x x x x M4/1.4.17 x x x x x x"}, {"location": "available_software/detail/MACS2/", "title": "MACS2", "text": ""}, {"location": "available_software/detail/MACS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MACS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MACS2, load one of these modules using a module load command like:

                  module load MACS2/2.2.7.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MACS2/2.2.7.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/MACS3/", "title": "MACS3", "text": ""}, {"location": "available_software/detail/MACS3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MACS3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MACS3, load one of these modules using a module load command like:

                  module load MACS3/3.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MACS3/3.0.1-gfbf-2023a x x x x x x MACS3/3.0.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/MAFFT/", "title": "MAFFT", "text": ""}, {"location": "available_software/detail/MAFFT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MAFFT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MAFFT, load one of these modules using a module load command like:

                  module load MAFFT/7.520-GCC-12.3.0-with-extensions\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x MAFFT/7.505-GCC-11.3.0-with-extensions x x x x x x MAFFT/7.490-gompi-2021b-with-extensions x x x - x x MAFFT/7.475-gompi-2020b-with-extensions - x x x x x MAFFT/7.475-GCC-10.2.0-with-extensions - x x x x x MAFFT/7.453-iimpi-2020a-with-extensions - x x - x x MAFFT/7.453-iccifort-2019.5.281-with-extensions - x x - x x MAFFT/7.453-GCC-9.3.0-with-extensions - x x - x x MAFFT/7.453-GCC-8.3.0-with-extensions - x x - x x"}, {"location": "available_software/detail/MAGeCK/", "title": "MAGeCK", "text": ""}, {"location": "available_software/detail/MAGeCK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MAGeCK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MAGeCK, load one of these modules using a module load command like:

                  module load MAGeCK/0.5.9.5-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MAGeCK/0.5.9.5-gfbf-2022b x x x x x x MAGeCK/0.5.9.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/MARS/", "title": "MARS", "text": ""}, {"location": "available_software/detail/MARS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MARS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MARS, load one of these modules using a module load command like:

                  module load MARS/20191101-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MARS/20191101-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATIO/", "title": "MATIO", "text": ""}, {"location": "available_software/detail/MATIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MATIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MATIO, load one of these modules using a module load command like:

                  module load MATIO/1.5.17-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MATIO/1.5.17-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATLAB/", "title": "MATLAB", "text": ""}, {"location": "available_software/detail/MATLAB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MATLAB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MATLAB, load one of these modules using a module load command like:

                  module load MATLAB/2022b-r5\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MATLAB/2022b-r5 x x x x x x MATLAB/2021b x x x - x x MATLAB/2019b - x x - x x"}, {"location": "available_software/detail/MBROLA/", "title": "MBROLA", "text": ""}, {"location": "available_software/detail/MBROLA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MBROLA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MBROLA, load one of these modules using a module load command like:

                  module load MBROLA/3.3-GCCcore-9.3.0-voices-20200330\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MBROLA/3.3-GCCcore-9.3.0-voices-20200330 - x x - x x"}, {"location": "available_software/detail/MCL/", "title": "MCL", "text": ""}, {"location": "available_software/detail/MCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MCL, load one of these modules using a module load command like:

                  module load MCL/22.282-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MCL/22.282-GCCcore-12.3.0 x x x x x x MCL/14.137-GCCcore-10.2.0 - x x x x x MCL/14.137-GCCcore-9.3.0 - x x - x x MCL/14.137-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MDAnalysis/", "title": "MDAnalysis", "text": ""}, {"location": "available_software/detail/MDAnalysis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MDAnalysis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MDAnalysis, load one of these modules using a module load command like:

                  module load MDAnalysis/2.4.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MDAnalysis/2.4.2-foss-2022b x x x x x x MDAnalysis/2.4.2-foss-2021a x x x x x x"}, {"location": "available_software/detail/MDTraj/", "title": "MDTraj", "text": ""}, {"location": "available_software/detail/MDTraj/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MDTraj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MDTraj, load one of these modules using a module load command like:

                  module load MDTraj/1.9.7-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MDTraj/1.9.7-intel-2022a x x x - x x MDTraj/1.9.7-intel-2021b x x x - x x MDTraj/1.9.7-foss-2022a x x x - x x MDTraj/1.9.7-foss-2021a x x x - x x MDTraj/1.9.5-intel-2020b - x x - x x MDTraj/1.9.5-fosscuda-2020b x - - - x - MDTraj/1.9.5-foss-2020b - x x x x x MDTraj/1.9.4-intel-2020a-Python-3.8.2 - x x - x x MDTraj/1.9.3-intel-2019b-Python-3.7.4 - x x - x x MDTraj/1.9.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MEGA/", "title": "MEGA", "text": ""}, {"location": "available_software/detail/MEGA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEGA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEGA, load one of these modules using a module load command like:

                  module load MEGA/11.0.10\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGA/11.0.10 - x x - x -"}, {"location": "available_software/detail/MEGAHIT/", "title": "MEGAHIT", "text": ""}, {"location": "available_software/detail/MEGAHIT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEGAHIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEGAHIT, load one of these modules using a module load command like:

                  module load MEGAHIT/1.2.9-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGAHIT/1.2.9-GCCcore-12.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.2.0 x x x - x x MEGAHIT/1.2.9-GCCcore-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MEGAN/", "title": "MEGAN", "text": ""}, {"location": "available_software/detail/MEGAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEGAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEGAN, load one of these modules using a module load command like:

                  module load MEGAN/6.25.3-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGAN/6.25.3-Java-17 x x x x x x"}, {"location": "available_software/detail/MEM/", "title": "MEM", "text": ""}, {"location": "available_software/detail/MEM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEM, load one of these modules using a module load command like:

                  module load MEM/20191023-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEM/20191023-foss-2020a-R-4.0.0 - - x - x - MEM/20191023-foss-2019b - x x - x -"}, {"location": "available_software/detail/MEME/", "title": "MEME", "text": ""}, {"location": "available_software/detail/MEME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MEME, load one of these modules using a module load command like:

                  module load MEME/5.5.4-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEME/5.5.4-gompi-2022b x x x x x x MEME/5.4.1-gompi-2021b-Python-2.7.18 x x x - x x"}, {"location": "available_software/detail/MESS/", "title": "MESS", "text": ""}, {"location": "available_software/detail/MESS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MESS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MESS, load one of these modules using a module load command like:

                  module load MESS/0.1.6-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MESS/0.1.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/METIS/", "title": "METIS", "text": ""}, {"location": "available_software/detail/METIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which METIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using METIS, load one of these modules using a module load command like:

                  module load METIS/5.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty METIS/5.1.0-GCCcore-12.3.0 x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x METIS/5.1.0-GCCcore-11.3.0 x x x x x x METIS/5.1.0-GCCcore-11.2.0 x x x x x x METIS/5.1.0-GCCcore-10.3.0 x x x x x x METIS/5.1.0-GCCcore-10.2.0 x x x x x x METIS/5.1.0-GCCcore-9.3.0 - x x - x x METIS/5.1.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MIGRATE-N/", "title": "MIGRATE-N", "text": ""}, {"location": "available_software/detail/MIGRATE-N/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MIGRATE-N installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MIGRATE-N, load one of these modules using a module load command like:

                  module load MIGRATE-N/5.0.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MIGRATE-N/5.0.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/MMseqs2/", "title": "MMseqs2", "text": ""}, {"location": "available_software/detail/MMseqs2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MMseqs2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MMseqs2, load one of these modules using a module load command like:

                  module load MMseqs2/14-7e284-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MMseqs2/14-7e284-gompi-2023a x x x x x x MMseqs2/14-7e284-gompi-2022a x x x x x x MMseqs2/13-45111-gompi-2021b x x x - x x MMseqs2/13-45111-gompi-2021a x x x - x x MMseqs2/13-45111-gompi-2020b x x x x x x MMseqs2/13-45111-20211019-gompi-2020b - x x x x x MMseqs2/13-45111-20211006-gompi-2020b - x x x x - MMseqs2/12-113e3-gompi-2020b - x - - - - MMseqs2/11-e1a1c-iimpi-2019b - x - - - x MMseqs2/10-6d92c-iimpi-2019b - x x - x x MMseqs2/10-6d92c-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MOABS/", "title": "MOABS", "text": ""}, {"location": "available_software/detail/MOABS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MOABS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MOABS, load one of these modules using a module load command like:

                  module load MOABS/1.3.9.6-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MOABS/1.3.9.6-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MONAI/", "title": "MONAI", "text": ""}, {"location": "available_software/detail/MONAI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MONAI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MONAI, load one of these modules using a module load command like:

                  module load MONAI/1.0.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MONAI/1.0.1-foss-2022a-CUDA-11.7.0 x - - - x - MONAI/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MOOSE/", "title": "MOOSE", "text": ""}, {"location": "available_software/detail/MOOSE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MOOSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MOOSE, load one of these modules using a module load command like:

                  module load MOOSE/2022-06-10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MOOSE/2022-06-10-foss-2022a x x x - x x MOOSE/2021-05-18-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MPC/", "title": "MPC", "text": ""}, {"location": "available_software/detail/MPC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MPC, load one of these modules using a module load command like:

                  module load MPC/1.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MPC/1.3.1-GCCcore-12.3.0 x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x MPC/1.2.1-GCCcore-11.3.0 x x x x x x MPC/1.2.1-GCCcore-11.2.0 x x x x x x MPC/1.2.1-GCCcore-10.2.0 - x x x x x MPC/1.1.0-GCC-9.3.0 - x x - x x MPC/1.1.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/MPFR/", "title": "MPFR", "text": ""}, {"location": "available_software/detail/MPFR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MPFR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MPFR, load one of these modules using a module load command like:

                  module load MPFR/4.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MPFR/4.2.0-GCCcore-12.3.0 x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x MPFR/4.1.0-GCCcore-11.3.0 x x x x x x MPFR/4.1.0-GCCcore-11.2.0 x x x x x x MPFR/4.1.0-GCCcore-10.3.0 x x x x x x MPFR/4.1.0-GCCcore-10.2.0 x x x x x x MPFR/4.0.2-GCCcore-9.3.0 - x x - x x MPFR/4.0.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MRtrix/", "title": "MRtrix", "text": ""}, {"location": "available_software/detail/MRtrix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MRtrix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MRtrix, load one of these modules using a module load command like:

                  module load MRtrix/3.0.4-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MRtrix/3.0.4-foss-2022b x x x x x x MRtrix/3.0.3-foss-2021a - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-3.7.4 - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MSFragger/", "title": "MSFragger", "text": ""}, {"location": "available_software/detail/MSFragger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MSFragger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MSFragger, load one of these modules using a module load command like:

                  module load MSFragger/4.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MSFragger/4.0-Java-11 x x x x x x"}, {"location": "available_software/detail/MUMPS/", "title": "MUMPS", "text": ""}, {"location": "available_software/detail/MUMPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUMPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MUMPS, load one of these modules using a module load command like:

                  module load MUMPS/5.6.1-foss-2023a-metis\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUMPS/5.6.1-foss-2023a-metis x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x MUMPS/5.5.1-foss-2022a-metis x x x x x x MUMPS/5.4.1-intel-2021b-metis x x x x x x MUMPS/5.4.1-foss-2021b-metis x x x - x x MUMPS/5.4.0-foss-2021a-metis - x x - x x MUMPS/5.3.5-foss-2020b-metis - x x x x x MUMPS/5.2.1-intel-2020a-metis - x x - x x MUMPS/5.2.1-intel-2019b-metis - x x - x x MUMPS/5.2.1-foss-2020a-metis - x x - x x MUMPS/5.2.1-foss-2019b-metis x x x - x x"}, {"location": "available_software/detail/MUMmer/", "title": "MUMmer", "text": ""}, {"location": "available_software/detail/MUMmer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUMmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MUMmer, load one of these modules using a module load command like:

                  module load MUMmer/4.0.0rc1-GCCcore-12.3.0\n
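
                  For illustration, after loading one of the modules above the classic MUMmer tools such as nucmer and show-coords are available; a minimal whole-genome alignment sketch (file names and the output prefix are placeholders):

                  # align a query assembly against a reference; writes ref_qry.delta
                  nucmer --prefix=ref_qry reference.fasta query.fasta
                  # summarise the alignments in a human-readable coordinate table
                  show-coords -rcl ref_qry.delta > ref_qry.coords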

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUMmer/4.0.0rc1-GCCcore-12.3.0 x x x x x x MUMmer/4.0.0beta2-GCCcore-11.2.0 x x x - x x MUMmer/4.0.0beta2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MUSCLE/", "title": "MUSCLE", "text": ""}, {"location": "available_software/detail/MUSCLE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUSCLE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MUSCLE, load one of these modules using a module load command like:

                  module load MUSCLE/5.1.0-GCCcore-12.3.0\n
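
                  For illustration, with one of the MUSCLE 5.x modules above loaded, a basic multiple sequence alignment could be run as sketched below (file names are placeholders; the older 3.8.x versions listed use the -in/-out style options instead):

                  # align all sequences in seqs.fasta and write the alignment in FASTA format
                  muscle -align seqs.fasta -output seqs_aligned.afa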

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUSCLE/5.1.0-GCCcore-12.3.0 x x x x x x MUSCLE/5.1.0-GCCcore-11.3.0 x x x x x x MUSCLE/5.1-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.1551-GCC-10.2.0 - x x - x x MUSCLE/3.8.1551-GCC-8.3.0 - x x - x x MUSCLE/3.8.31-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MXNet/", "title": "MXNet", "text": ""}, {"location": "available_software/detail/MXNet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MXNet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MXNet, load one of these modules using a module load command like:

                  module load MXNet/1.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MXNet/1.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MaSuRCA/", "title": "MaSuRCA", "text": ""}, {"location": "available_software/detail/MaSuRCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MaSuRCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MaSuRCA, load one of these modules using a module load command like:

                  module load MaSuRCA/4.1.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MaSuRCA/4.1.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Mako/", "title": "Mako", "text": ""}, {"location": "available_software/detail/Mako/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mako installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Mako, load one of these modules using a module load command like:

                  module load Mako/1.2.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mako/1.2.4-GCCcore-12.3.0 x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x Mako/1.2.0-GCCcore-11.3.0 x x x x x x Mako/1.1.4-GCCcore-11.2.0 x x x x x x Mako/1.1.4-GCCcore-10.3.0 x x x x x x Mako/1.1.3-GCCcore-10.2.0 x x x x x x Mako/1.1.2-GCCcore-9.3.0 - x x - x x Mako/1.1.0-GCCcore-8.3.0 x x x - x x Mako/1.0.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/MariaDB-connector-c/", "title": "MariaDB-connector-c", "text": ""}, {"location": "available_software/detail/MariaDB-connector-c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MariaDB-connector-c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MariaDB-connector-c, load one of these modules using a module load command like:

                  module load MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MariaDB-connector-c/3.1.7-GCCcore-9.3.0 - x x - x x MariaDB-connector-c/2.3.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MariaDB/", "title": "MariaDB", "text": ""}, {"location": "available_software/detail/MariaDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MariaDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MariaDB, load one of these modules using a module load command like:

                  module load MariaDB/10.9.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MariaDB/10.9.3-GCC-11.3.0 x x x x x x MariaDB/10.6.4-GCC-11.2.0 x x x x x x MariaDB/10.6.4-GCC-10.3.0 x x x - x x MariaDB/10.5.8-GCC-10.2.0 - x x x x x MariaDB/10.4.13-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Mash/", "title": "Mash", "text": ""}, {"location": "available_software/detail/Mash/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Mash, load one of these modules using a module load command like:

                  module load Mash/2.3-intel-compilers-2021.4.0\n
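
                  For illustration, after loading one of the modules above, a quick genome-distance estimate with Mash could look like the sketch below (the FASTA file names and sketch prefix are placeholders):

                  # build a sketch of the reference genomes (writes reference.msh)
                  mash sketch -o reference genome1.fna genome2.fna
                  # estimate the distance of a query genome to the sketched references
                  mash dist reference.msh query.fna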

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mash/2.3-intel-compilers-2021.4.0 x x x - x x Mash/2.3-GCC-12.3.0 x x x x x x Mash/2.3-GCC-11.2.0 x x x - x x Mash/2.2-GCC-9.3.0 - x x x - x"}, {"location": "available_software/detail/Maven/", "title": "Maven", "text": ""}, {"location": "available_software/detail/Maven/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Maven installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Maven, load one of these modules using a module load command like:

                  module load Maven/3.6.3\n
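
                  For illustration, after loading the module above, Maven can be used to build a Java project as sketched below (run from a directory containing a pom.xml; these are standard Maven commands, not specific to this cluster):

                  # confirm which Maven and JDK are picked up
                  mvn -version
                  # build the project non-interactively, skipping tests
                  mvn -B package -DskipTests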

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Maven/3.6.3 x x x x x x Maven/3.6.0 - - x - x -"}, {"location": "available_software/detail/MaxBin/", "title": "MaxBin", "text": ""}, {"location": "available_software/detail/MaxBin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MaxBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MaxBin, load one of these modules using a module load command like:

                  module load MaxBin/2.2.7-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MaxBin/2.2.7-gompi-2021b x x x - x x MaxBin/2.2.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MedPy/", "title": "MedPy", "text": ""}, {"location": "available_software/detail/MedPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MedPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MedPy, load one of these modules using a module load command like:

                  module load MedPy/0.4.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MedPy/0.4.0-fosscuda-2020b x - - - x - MedPy/0.4.0-foss-2020b - x x x x x MedPy/0.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Megalodon/", "title": "Megalodon", "text": ""}, {"location": "available_software/detail/Megalodon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Megalodon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Megalodon, load one of these modules using a module load command like:

                  module load Megalodon/2.3.5-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Megalodon/2.3.5-fosscuda-2020b x - - - x - Megalodon/2.3.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/Mercurial/", "title": "Mercurial", "text": ""}, {"location": "available_software/detail/Mercurial/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mercurial installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Mercurial, load one of these modules using a module load command like:

                  module load Mercurial/6.2-GCCcore-11.3.0\n
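
                  For illustration, after loading the module above, the standard Mercurial commands become available; a minimal sketch (the repository URL and directory name are placeholders):

                  # check the Mercurial version provided by the module
                  hg version
                  # clone a repository and inspect its recent history
                  hg clone https://example.org/some/repo myrepo
                  hg log -R myrepo --limit 5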

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mercurial/6.2-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Mesa/", "title": "Mesa", "text": ""}, {"location": "available_software/detail/Mesa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mesa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Mesa, load one of these modules using a module load command like:

                  module load Mesa/23.1.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mesa/23.1.4-GCCcore-12.3.0 x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x Mesa/22.0.3-GCCcore-11.3.0 x x x x x x Mesa/21.1.7-GCCcore-11.2.0 x x x x x x Mesa/21.1.1-GCCcore-10.3.0 x x x x x x Mesa/20.2.1-GCCcore-10.2.0 x x x x x x Mesa/20.0.2-GCCcore-9.3.0 - x x - x x Mesa/19.2.1-GCCcore-8.3.0 - x x - x x Mesa/19.1.7-GCCcore-8.3.0 x x x - x x Mesa/19.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Meson/", "title": "Meson", "text": ""}, {"location": "available_software/detail/Meson/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Meson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Meson, load one of these modules using a module load command like:

                  module load Meson/1.2.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Meson/1.2.3-GCCcore-13.2.0 x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x Meson/0.62.1-GCCcore-11.3.0 x x x x x x Meson/0.59.1-GCCcore-8.3.0-Python-3.7.4 x - x - x x Meson/0.58.2-GCCcore-11.2.0 x x x x x x Meson/0.58.0-GCCcore-10.3.0 x x x x x x Meson/0.55.3-GCCcore-10.2.0 x x x x x x Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x Meson/0.53.2-GCCcore-9.3.0-Python-3.8.2 - x x - x x Meson/0.51.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x Meson/0.50.0-GCCcore-8.2.0-Python-3.7.2 - x - - - -"}, {"location": "available_software/detail/Mesquite/", "title": "Mesquite", "text": ""}, {"location": "available_software/detail/Mesquite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mesquite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Mesquite, load one of these modules using a module load command like:

                  module load Mesquite/2.3.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mesquite/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MetaBAT/", "title": "MetaBAT", "text": ""}, {"location": "available_software/detail/MetaBAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaBAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MetaBAT, load one of these modules using a module load command like:

                  module load MetaBAT/2.15-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaBAT/2.15-gompi-2021b x x x - x x MetaBAT/2.15-gompi-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MetaEuk/", "title": "MetaEuk", "text": ""}, {"location": "available_software/detail/MetaEuk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaEuk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MetaEuk, load one of these modules using a module load command like:

                  module load MetaEuk/6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaEuk/6-GCC-11.2.0 x x x - x x MetaEuk/4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/MetaPhlAn/", "title": "MetaPhlAn", "text": ""}, {"location": "available_software/detail/MetaPhlAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MetaPhlAn, load one of these modules using a module load command like:

                  module load MetaPhlAn/4.0.6-foss-2022a\n
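
                  For illustration, a basic taxonomic profiling run with the module above might look like the sketch below; the read file, output name and thread count are placeholders, and note that MetaPhlAn needs its marker database to be available (it is downloaded on first use or installed separately):

                  # profile a metagenomic sample directly from raw reads
                  metaphlan sample_reads.fastq.gz --input_type fastq --nproc 8 -o sample_profile.txt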

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaPhlAn/4.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/Metagenome-Atlas/", "title": "Metagenome-Atlas", "text": ""}, {"location": "available_software/detail/Metagenome-Atlas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Metagenome-Atlas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Metagenome-Atlas, load one of these modules using a module load command like:

                  module load Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/MethylDackel/", "title": "MethylDackel", "text": ""}, {"location": "available_software/detail/MethylDackel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MethylDackel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MethylDackel, load one of these modules using a module load command like:

                  module load MethylDackel/0.5.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MethylDackel/0.5.0-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/MiXCR/", "title": "MiXCR", "text": ""}, {"location": "available_software/detail/MiXCR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MiXCR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MiXCR, load one of these modules using a module load command like:

                  module load MiXCR/4.6.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MiXCR/4.6.0-Java-17 x x x x x x MiXCR/3.0.13-Java-11 - x x - x -"}, {"location": "available_software/detail/MicrobeAnnotator/", "title": "MicrobeAnnotator", "text": ""}, {"location": "available_software/detail/MicrobeAnnotator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MicrobeAnnotator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MicrobeAnnotator, load one of these modules using a module load command like:

                  module load MicrobeAnnotator/2.0.5-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MicrobeAnnotator/2.0.5-foss-2021a - x x - x x"}, {"location": "available_software/detail/Mikado/", "title": "Mikado", "text": ""}, {"location": "available_software/detail/Mikado/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mikado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Mikado, load one of these modules using a module load command like:

                  module load Mikado/2.3.4-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mikado/2.3.4-foss-2022b x x x x x x"}, {"location": "available_software/detail/MinCED/", "title": "MinCED", "text": ""}, {"location": "available_software/detail/MinCED/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MinCED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MinCED, load one of these modules using a module load command like:

                  module load MinCED/0.4.2-GCCcore-8.3.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MinCED/0.4.2-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/MinPath/", "title": "MinPath", "text": ""}, {"location": "available_software/detail/MinPath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MinPath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MinPath, load one of these modules using a module load command like:

                  module load MinPath/1.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MinPath/1.6-GCCcore-11.2.0 x x x - x x MinPath/1.4-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Miniconda3/", "title": "Miniconda3", "text": ""}, {"location": "available_software/detail/Miniconda3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Miniconda3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Miniconda3, load one of these modules using a module load command like:

                  module load Miniconda3/23.5.2-0\n
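
                  For illustration, after loading the module above, conda can be used to create project-specific environments; a minimal sketch (the environment path and package list are placeholders, and activating the environment typically requires sourcing conda's shell hook first):

                  # create an environment under an explicit path rather than in your home directory
                  conda create -y -p ./my-conda-env python=3.11 numpy
                  # list the environments this conda installation knows about
                  conda env list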

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Miniconda3/23.5.2-0 x x x x x x Miniconda3/22.11.1-1 x x x x x x Miniconda3/4.9.2 - x x - x x Miniconda3/4.8.3 - x x - x x Miniconda3/4.7.10 - - - - - x"}, {"location": "available_software/detail/Minipolish/", "title": "Minipolish", "text": ""}, {"location": "available_software/detail/Minipolish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Minipolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Minipolish, load one of these modules using a module load command like:

                  module load Minipolish/0.1.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Minipolish/0.1.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/MitoHiFi/", "title": "MitoHiFi", "text": ""}, {"location": "available_software/detail/MitoHiFi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MitoHiFi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MitoHiFi, load one of these modules using a module load command like:

                  module load MitoHiFi/3.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MitoHiFi/3.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/ModelTest-NG/", "title": "ModelTest-NG", "text": ""}, {"location": "available_software/detail/ModelTest-NG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ModelTest-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ModelTest-NG, load one of these modules using a module load command like:

                  module load ModelTest-NG/0.1.7-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ModelTest-NG/0.1.7-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Molden/", "title": "Molden", "text": ""}, {"location": "available_software/detail/Molden/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Molden installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Molden, load one of these modules using a module load command like:

                  module load Molden/6.8-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Molden/6.8-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Molekel/", "title": "Molekel", "text": ""}, {"location": "available_software/detail/Molekel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Molekel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Molekel, load one of these modules using a module load command like:

                  module load Molekel/5.4.0-Linux_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Molekel/5.4.0-Linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Mono/", "title": "Mono", "text": ""}, {"location": "available_software/detail/Mono/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mono installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Mono, load one of these modules using a module load command like:

                  module load Mono/6.8.0.105-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mono/6.8.0.105-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Monocle3/", "title": "Monocle3", "text": ""}, {"location": "available_software/detail/Monocle3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Monocle3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Monocle3, load one of these modules using a module load command like:

                  module load Monocle3/1.3.1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Monocle3/1.3.1-foss-2022a-R-4.2.1 x x x x x x Monocle3/0.2.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/MrBayes/", "title": "MrBayes", "text": ""}, {"location": "available_software/detail/MrBayes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MrBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MrBayes, load one of these modules using a module load command like:

                  module load MrBayes/3.2.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MrBayes/3.2.7-gompi-2020b - x x x x x MrBayes/3.2.6-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MuJoCo/", "title": "MuJoCo", "text": ""}, {"location": "available_software/detail/MuJoCo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MuJoCo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MuJoCo, load one of these modules using a module load command like:

                  module load MuJoCo/2.3.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MuJoCo/2.3.7-GCCcore-12.3.0 x x x x x x MuJoCo/2.1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/MultiQC/", "title": "MultiQC", "text": ""}, {"location": "available_software/detail/MultiQC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MultiQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MultiQC, load one of these modules using a module load command like:

                  module load MultiQC/1.14-foss-2022a\n
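
                  For illustration, after loading one of the modules above, MultiQC can aggregate existing QC reports into a single HTML report; a minimal sketch (the input directory and output directory names are placeholders):

                  # scan a results directory for supported QC outputs and write multiqc_report.html
                  multiqc path/to/qc_results -o multiqc_out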

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MultiQC/1.14-foss-2022a x x x x x x MultiQC/1.9-intel-2020a-Python-3.8.2 - x x - x x MultiQC/1.8-intel-2019b-Python-3.7.4 - x x - x x MultiQC/1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MultilevelEstimators/", "title": "MultilevelEstimators", "text": ""}, {"location": "available_software/detail/MultilevelEstimators/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MultilevelEstimators installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MultilevelEstimators, load one of these modules using a module load command like:

                  module load MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2 x x x - x x"}, {"location": "available_software/detail/Multiwfn/", "title": "Multiwfn", "text": ""}, {"location": "available_software/detail/Multiwfn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Multiwfn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Multiwfn, load one of these modules using a module load command like:

                  module load Multiwfn/3.6-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Multiwfn/3.6-intel-2019b - x x - x x"}, {"location": "available_software/detail/MyCC/", "title": "MyCC", "text": ""}, {"location": "available_software/detail/MyCC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MyCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MyCC, load one of these modules using a module load command like:

                  module load MyCC/2017-03-01-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MyCC/2017-03-01-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Myokit/", "title": "Myokit", "text": ""}, {"location": "available_software/detail/Myokit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Myokit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Myokit, load one of these modules using a module load command like:

                  module load Myokit/1.32.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Myokit/1.32.0-fosscuda-2020b - - - - x - Myokit/1.32.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/NAMD/", "title": "NAMD", "text": ""}, {"location": "available_software/detail/NAMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NAMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NAMD, load one of these modules using a module load command like:

                  module load NAMD/2.14-foss-2023a-mpi\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NAMD/2.14-foss-2023a-mpi x x x x x x NAMD/2.14-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/NASM/", "title": "NASM", "text": ""}, {"location": "available_software/detail/NASM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NASM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NASM, load one of these modules using a module load command like:

                  module load NASM/2.16.01-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NASM/2.16.01-GCCcore-13.2.0 x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x NASM/2.15.05-GCCcore-11.3.0 x x x x x x NASM/2.15.05-GCCcore-11.2.0 x x x x x x NASM/2.15.05-GCCcore-10.3.0 x x x x x x NASM/2.15.05-GCCcore-10.2.0 x x x x x x NASM/2.14.02-GCCcore-9.3.0 - x x - x x NASM/2.14.02-GCCcore-8.3.0 x x x - x x NASM/2.14.02-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/NCCL/", "title": "NCCL", "text": ""}, {"location": "available_software/detail/NCCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NCCL, load one of these modules using a module load command like:

                  module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - NCCL/2.10.3-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - NCCL/2.10.3-GCCcore-10.3.0-CUDA-11.3.1 x - - - x - NCCL/2.8.3-GCCcore-10.2.0-CUDA-11.1.1 x - - - x x NCCL/2.8.3-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/NCL/", "title": "NCL", "text": ""}, {"location": "available_software/detail/NCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NCL, load one of these modules using a module load command like:

                  module load NCL/6.6.2-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCL/6.6.2-intel-2019b - - x - x x"}, {"location": "available_software/detail/NCO/", "title": "NCO", "text": ""}, {"location": "available_software/detail/NCO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NCO, load one of these modules using a module load command like:

                  module load NCO/5.0.6-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCO/5.0.6-intel-2019b - x x - x x NCO/5.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/NECI/", "title": "NECI", "text": ""}, {"location": "available_software/detail/NECI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NECI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NECI, load one of these modules using a module load command like:

                  module load NECI/20230620-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NECI/20230620-foss-2022b x x x x x x NECI/20220711-foss-2022a - x x x x x"}, {"location": "available_software/detail/NEURON/", "title": "NEURON", "text": ""}, {"location": "available_software/detail/NEURON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NEURON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NEURON, load one of these modules using a module load command like:

                  module load NEURON/7.8.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NEURON/7.8.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/NGS/", "title": "NGS", "text": ""}, {"location": "available_software/detail/NGS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NGS, load one of these modules using a module load command like:

                  module load NGS/2.11.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NGS/2.11.2-GCCcore-11.2.0 x x x x x x NGS/2.10.9-GCCcore-10.2.0 - x x x x x NGS/2.10.5-GCCcore-9.3.0 - x x - x x NGS/2.10.4-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/NGSpeciesID/", "title": "NGSpeciesID", "text": ""}, {"location": "available_software/detail/NGSpeciesID/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NGSpeciesID installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NGSpeciesID, load one of these modules using a module load command like:

                  module load NGSpeciesID/0.1.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NGSpeciesID/0.1.2.1-foss-2021b x x x - x x NGSpeciesID/0.1.1.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NLMpy/", "title": "NLMpy", "text": ""}, {"location": "available_software/detail/NLMpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLMpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NLMpy, load one of these modules using a module load command like:

                  module load NLMpy/0.1.5-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLMpy/0.1.5-intel-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/NLTK/", "title": "NLTK", "text": ""}, {"location": "available_software/detail/NLTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NLTK, load one of these modules using a module load command like:

                  module load NLTK/3.8.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLTK/3.8.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/NLopt/", "title": "NLopt", "text": ""}, {"location": "available_software/detail/NLopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NLopt, load one of these modules using a module load command like:

                  module load NLopt/2.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLopt/2.7.1-GCCcore-12.3.0 x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x NLopt/2.7.1-GCCcore-11.3.0 x x x x x x NLopt/2.7.0-GCCcore-11.2.0 x x x x x x NLopt/2.7.0-GCCcore-10.3.0 x x x x x x NLopt/2.6.2-GCCcore-10.2.0 x x x x x x NLopt/2.6.1-GCCcore-9.3.0 - x x - x x NLopt/2.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/NOVOPlasty/", "title": "NOVOPlasty", "text": ""}, {"location": "available_software/detail/NOVOPlasty/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NOVOPlasty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NOVOPlasty, load one of these modules using a module load command like:

                  module load NOVOPlasty/3.7-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NOVOPlasty/3.7-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/NSPR/", "title": "NSPR", "text": ""}, {"location": "available_software/detail/NSPR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NSPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NSPR, load one of these modules using a module load command like:

                  module load NSPR/4.35-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NSPR/4.35-GCCcore-12.3.0 x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x NSPR/4.34-GCCcore-11.3.0 x x x x x x NSPR/4.32-GCCcore-11.2.0 x x x x x x NSPR/4.30-GCCcore-10.3.0 x x x x x x NSPR/4.29-GCCcore-10.2.0 x x x x x x NSPR/4.25-GCCcore-9.3.0 - x x - x x NSPR/4.21-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NSS/", "title": "NSS", "text": ""}, {"location": "available_software/detail/NSS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NSS, load one of these modules using a module load command like:

                  module load NSS/3.89.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NSS/3.89.1-GCCcore-12.3.0 x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x NSS/3.79-GCCcore-11.3.0 x x x x x x NSS/3.69-GCCcore-11.2.0 x x x x x x NSS/3.65-GCCcore-10.3.0 x x x x x x NSS/3.57-GCCcore-10.2.0 x x x x x x NSS/3.51-GCCcore-9.3.0 - x x - x x NSS/3.45-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NVHPC/", "title": "NVHPC", "text": ""}, {"location": "available_software/detail/NVHPC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NVHPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NVHPC, load one of these modules using a module load command like:

                  module load NVHPC/21.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NVHPC/21.2 x - x - x - NVHPC/20.9 - - - - x -"}, {"location": "available_software/detail/NanoCaller/", "title": "NanoCaller", "text": ""}, {"location": "available_software/detail/NanoCaller/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoCaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NanoCaller, load one of these modules using a module load command like:

                  module load NanoCaller/3.4.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoCaller/3.4.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/NanoComp/", "title": "NanoComp", "text": ""}, {"location": "available_software/detail/NanoComp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NanoComp, load one of these modules using a module load command like:

                  module load NanoComp/1.13.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoComp/1.13.1-intel-2020b - x x - x x NanoComp/1.10.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoFilt/", "title": "NanoFilt", "text": ""}, {"location": "available_software/detail/NanoFilt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoFilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NanoFilt, load one of these modules using a module load command like:

                  module load NanoFilt/2.6.0-intel-2019b-Python-3.7.4\n
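
                  For illustration, NanoFilt is typically used in a pipe to filter long reads on quality and length; a minimal sketch with the module above loaded (file names and thresholds are placeholders):

                  # keep reads with mean quality >= 10 and length >= 500 bp
                  gunzip -c reads.fastq.gz | NanoFilt -q 10 -l 500 | gzip > reads_filtered.fastq.gz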

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoFilt/2.6.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoPlot/", "title": "NanoPlot", "text": ""}, {"location": "available_software/detail/NanoPlot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NanoPlot, load one of these modules using a module load command like:

                  module load NanoPlot/1.33.0-intel-2020b\n
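
                  For illustration, a basic QC run on a long-read dataset with the module above loaded might look like the sketch below (the input file, output directory and thread count are placeholders):

                  # summarise read lengths and qualities; plots and a report are written to nanoplot_out/
                  NanoPlot --fastq reads.fastq.gz -o nanoplot_out --threads 4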

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoPlot/1.33.0-intel-2020b - x x - x x NanoPlot/1.28.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoStat/", "title": "NanoStat", "text": ""}, {"location": "available_software/detail/NanoStat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoStat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NanoStat, load one of these modules using a module load command like:

                  module load NanoStat/1.6.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoStat/1.6.0-foss-2022a x x x x x x NanoStat/1.6.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/NanopolishComp/", "title": "NanopolishComp", "text": ""}, {"location": "available_software/detail/NanopolishComp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanopolishComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NanopolishComp, load one of these modules using a module load command like:

                  module load NanopolishComp/0.6.11-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanopolishComp/0.6.11-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/NetPyNE/", "title": "NetPyNE", "text": ""}, {"location": "available_software/detail/NetPyNE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NetPyNE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NetPyNE, load one of these modules using a module load command like:

                  module load NetPyNE/1.0.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NetPyNE/1.0.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/NewHybrids/", "title": "NewHybrids", "text": ""}, {"location": "available_software/detail/NewHybrids/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NewHybrids installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NewHybrids, load one of these modules using a module load command like:

                  module load NewHybrids/1.1_Beta3-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NewHybrids/1.1_Beta3-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/NextGenMap/", "title": "NextGenMap", "text": ""}, {"location": "available_software/detail/NextGenMap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NextGenMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NextGenMap, load one of these modules using a module load command like:

                  module load NextGenMap/0.5.5-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NextGenMap/0.5.5-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Nextflow/", "title": "Nextflow", "text": ""}, {"location": "available_software/detail/Nextflow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Nextflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Nextflow, load one of these modules using a module load command like:

                  module load Nextflow/23.10.0\n
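
                  As a quick check after loading the module, you can print the Nextflow version and launch a pipeline script. This is only a sketch: main.nf is a placeholder for your own workflow file.

                  module load Nextflow/23.10.0\nnextflow -version\nnextflow run main.nf\n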

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nextflow/23.10.0 x x x x x x Nextflow/23.04.2 x x x x x x Nextflow/22.10.5 x x x x x x Nextflow/22.10.0 x x x - x x Nextflow/21.10.6 - x x - x x Nextflow/21.08.0 - - - - - x Nextflow/21.03.0 - x x - x x Nextflow/20.10.0 - x x - x x Nextflow/20.04.1 - - x - x x Nextflow/20.01.0 - - x - x x Nextflow/19.12.0 - - x - x x"}, {"location": "available_software/detail/NiBabel/", "title": "NiBabel", "text": ""}, {"location": "available_software/detail/NiBabel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NiBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using NiBabel, load one of these modules using a module load command like:

                  module load NiBabel/4.0.2-foss-2022a\n
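
                  A minimal sketch to verify the module is usable from Python after loading it; scan.nii.gz is only a placeholder for one of your own NIfTI files.

                  module load NiBabel/4.0.2-foss-2022a\npython -c "import nibabel; print(nibabel.__version__)"\npython -c "import nibabel; print(nibabel.load('scan.nii.gz').shape)"\n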

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NiBabel/4.0.2-foss-2022a x x x x x x NiBabel/3.2.1-fosscuda-2020b x - - - x - NiBabel/3.2.1-foss-2021a x x x - x x NiBabel/3.2.1-foss-2020b - x x x x x NiBabel/3.1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Nim/", "title": "Nim", "text": ""}, {"location": "available_software/detail/Nim/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Nim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Nim, load one of these modules using a module load command like:

                  module load Nim/1.6.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nim/1.6.6-GCCcore-11.2.0 x x x - x x Nim/1.4.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Ninja/", "title": "Ninja", "text": ""}, {"location": "available_software/detail/Ninja/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Ninja installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ninja, load one of these modules using a module load command like:

                  module load Ninja/1.11.1-GCCcore-13.2.0\n
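
                  Ninja is typically driven by a build generator such as CMake. A minimal sketch, assuming a CMake-based project in the current directory and a compatible CMake module loaded alongside Ninja:

                  module load Ninja/1.11.1-GCCcore-13.2.0\ncmake -G Ninja -S . -B build\nninja -C build\n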

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ninja/1.11.1-GCCcore-13.2.0 x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x Ninja/1.10.2-GCCcore-11.3.0 x x x x x x Ninja/1.10.2-GCCcore-11.2.0 x x x x x x Ninja/1.10.2-GCCcore-10.3.0 x x x x x x Ninja/1.10.1-GCCcore-10.2.0 x x x x x x Ninja/1.10.0-GCCcore-9.3.0 x x x x x x Ninja/1.9.0-GCCcore-8.3.0 x x x - x x Ninja/1.9.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Nipype/", "title": "Nipype", "text": ""}, {"location": "available_software/detail/Nipype/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Nipype installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Nipype, load one of these modules using a module load command like:

                  module load Nipype/1.8.5-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nipype/1.8.5-foss-2021a x x x - x x Nipype/1.4.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OBITools3/", "title": "OBITools3", "text": ""}, {"location": "available_software/detail/OBITools3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OBITools3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OBITools3, load one of these modules using a module load command like:

                  module load OBITools3/3.0.1b26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OBITools3/3.0.1b26-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ONNX-Runtime/", "title": "ONNX-Runtime", "text": ""}, {"location": "available_software/detail/ONNX-Runtime/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ONNX-Runtime installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ONNX-Runtime, load one of these modules using a module load command like:

                  module load ONNX-Runtime/1.16.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ONNX-Runtime/1.16.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/ONNX/", "title": "ONNX", "text": ""}, {"location": "available_software/detail/ONNX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ONNX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ONNX, load one of these modules using a module load command like:

                  module load ONNX/1.15.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ONNX/1.15.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/OPERA-MS/", "title": "OPERA-MS", "text": ""}, {"location": "available_software/detail/OPERA-MS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OPERA-MS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OPERA-MS, load one of these modules using a module load command like:

                  module load OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ORCA/", "title": "ORCA", "text": ""}, {"location": "available_software/detail/ORCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ORCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ORCA, load one of these modules using a module load command like:

                  module load ORCA/5.0.4-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ORCA/5.0.4-gompi-2022a x x x x x x ORCA/5.0.3-gompi-2021b x x x x x x ORCA/5.0.2-gompi-2021b x x x x x x ORCA/4.2.1-gompi-2019b - x x - x x ORCA/4.2.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OSU-Micro-Benchmarks/", "title": "OSU-Micro-Benchmarks", "text": ""}, {"location": "available_software/detail/OSU-Micro-Benchmarks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

                  module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x OSU-Micro-Benchmarks/7.1-1-iimpi-2023a x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a - x - - - - OSU-Micro-Benchmarks/5.8-iimpi-2021b x x x - x x OSU-Micro-Benchmarks/5.7.1-iompi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-iimpi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-gompi-2021b x x x - x x OSU-Micro-Benchmarks/5.7-iimpi-2020b - - x x x x OSU-Micro-Benchmarks/5.7-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020b - x x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-iimpi-2019b - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-gompi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Oases/", "title": "Oases", "text": ""}, {"location": "available_software/detail/Oases/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Oases installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Oases, load one of these modules using a module load command like:

                  module load Oases/20180312-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Oases/20180312-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Omnipose/", "title": "Omnipose", "text": ""}, {"location": "available_software/detail/Omnipose/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Omnipose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Omnipose, load one of these modules using a module load command like:

                  module load Omnipose/0.4.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Omnipose/0.4.4-foss-2022a-CUDA-11.7.0 x - - - x - Omnipose/0.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/OpenAI-Gym/", "title": "OpenAI-Gym", "text": ""}, {"location": "available_software/detail/OpenAI-Gym/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenAI-Gym installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenAI-Gym, load one of these modules using a module load command like:

                  module load OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenBLAS/", "title": "OpenBLAS", "text": ""}, {"location": "available_software/detail/OpenBLAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenBLAS, load one of these modules using a module load command like:

                  module load OpenBLAS/0.3.24-GCC-13.2.0\n
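
                  A minimal sketch of compiling a C program against OpenBLAS after loading the module. It assumes the module places the library on the compiler search paths (as EasyBuild-generated modules typically do); test_blas.c is a placeholder for your own source file calling, for example, cblas_dgemm.

                  module load OpenBLAS/0.3.24-GCC-13.2.0\ngcc -O2 -o test_blas test_blas.c -lopenblas\n./test_blas\n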

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x OpenBLAS/0.3.20-GCC-11.3.0 x x x x x x OpenBLAS/0.3.18-GCC-11.2.0 x x x x x x OpenBLAS/0.3.15-GCC-10.3.0 x x x x x x OpenBLAS/0.3.12-GCC-10.2.0 x x x x x x OpenBLAS/0.3.9-GCC-9.3.0 - x x - x x OpenBLAS/0.3.7-GCC-8.3.0 x x x - x x"}, {"location": "available_software/detail/OpenBabel/", "title": "OpenBabel", "text": ""}, {"location": "available_software/detail/OpenBabel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenBabel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenBabel, load one of these modules using a module load command like:

                  module load OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/OpenCV/", "title": "OpenCV", "text": ""}, {"location": "available_software/detail/OpenCV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenCV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenCV, load one of these modules using a module load command like:

                  module load OpenCV/4.6.0-foss-2022a-contrib\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenCV/4.6.0-foss-2022a-contrib x x x x x x OpenCV/4.6.0-foss-2022a-CUDA-11.7.0-contrib x - x - x - OpenCV/4.5.5-foss-2021b-contrib x x x - x x OpenCV/4.5.3-foss-2021a-contrib - x x - x x OpenCV/4.5.3-foss-2021a-CUDA-11.3.1-contrib x - - - x - OpenCV/4.5.1-fosscuda-2020b-contrib x - - - x - OpenCV/4.5.1-foss-2020b-contrib - x x - x x OpenCV/4.2.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenCoarrays/", "title": "OpenCoarrays", "text": ""}, {"location": "available_software/detail/OpenCoarrays/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenCoarrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenCoarrays, load one of these modules using a module load command like:

                  module load OpenCoarrays/2.8.0-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenCoarrays/2.8.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenEXR/", "title": "OpenEXR", "text": ""}, {"location": "available_software/detail/OpenEXR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenEXR, load one of these modules using a module load command like:

                  module load OpenEXR/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x OpenEXR/3.1.5-GCCcore-11.3.0 x x x x x x OpenEXR/3.1.1-GCCcore-11.2.0 x x x - x x OpenEXR/3.0.1-GCCcore-10.3.0 x x x - x x OpenEXR/2.5.5-GCCcore-10.2.0 x x x x x x OpenEXR/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenFOAM-Extend/", "title": "OpenFOAM-Extend", "text": ""}, {"location": "available_software/detail/OpenFOAM-Extend/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFOAM-Extend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenFOAM-Extend, load one of these modules using a module load command like:

                  module load OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16 - x x - x x OpenFOAM-Extend/4.1-20191120-intel-2019b-Python-2.7.16 - x x - x - OpenFOAM-Extend/4.0-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/OpenFOAM/", "title": "OpenFOAM", "text": ""}, {"location": "available_software/detail/OpenFOAM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenFOAM, load one of these modules using a module load command like:

                  module load OpenFOAM/v2206-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFOAM/v2206-foss-2022a x x x x x x OpenFOAM/v2112-foss-2021b x x x x x x OpenFOAM/v2106-foss-2021a x x x x x x OpenFOAM/v2012-foss-2020a - x x - x x OpenFOAM/v2006-foss-2020a - x x - x x OpenFOAM/v1912-foss-2019b - x x - x x OpenFOAM/v1906-foss-2019b - x x - x x OpenFOAM/10-foss-2023a x x x x x x OpenFOAM/10-foss-2022a x x x x x x OpenFOAM/9-intel-2021a - x x - x x OpenFOAM/9-foss-2021a x x x x x x OpenFOAM/8-intel-2020b - x - - - - OpenFOAM/8-foss-2020b x x x x x x OpenFOAM/8-foss-2020a - x x - x x OpenFOAM/7-foss-2019b-20200508 x x x - x x OpenFOAM/7-foss-2019b - x x - x x OpenFOAM/6-foss-2019b - x x - x x OpenFOAM/5.0-20180606-foss-2019b - x x - x x OpenFOAM/2.3.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/OpenFace/", "title": "OpenFace", "text": ""}, {"location": "available_software/detail/OpenFace/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenFace, load one of these modules using a module load command like:

                  module load OpenFace/2.2.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFace/2.2.0-foss-2021a-CUDA-11.3.1 - - - - x - OpenFace/2.2.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/OpenFold/", "title": "OpenFold", "text": ""}, {"location": "available_software/detail/OpenFold/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenFold installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenFold, load one of these modules using a module load command like:

                  module load OpenFold/1.0.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFold/1.0.1-foss-2022a-CUDA-11.7.0 - - x - - - OpenFold/1.0.1-foss-2021a-CUDA-11.3.1 x - - - x - OpenFold/1.0.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/OpenForceField/", "title": "OpenForceField", "text": ""}, {"location": "available_software/detail/OpenForceField/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenForceField installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenForceField, load one of these modules using a module load command like:

                  module load OpenForceField/0.7.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenForceField/0.7.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenImageIO/", "title": "OpenImageIO", "text": ""}, {"location": "available_software/detail/OpenImageIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenImageIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenImageIO, load one of these modules using a module load command like:

                  module load OpenImageIO/2.0.12-iimpi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenImageIO/2.0.12-iimpi-2019b - x x - x x OpenImageIO/2.0.12-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenJPEG/", "title": "OpenJPEG", "text": ""}, {"location": "available_software/detail/OpenJPEG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenJPEG, load one of these modules using a module load command like:

                  module load OpenJPEG/2.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x OpenJPEG/2.5.0-GCCcore-11.3.0 x x x x x x OpenJPEG/2.4.0-GCCcore-11.2.0 x x x x x x OpenJPEG/2.4.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/OpenMM-PLUMED/", "title": "OpenMM-PLUMED", "text": ""}, {"location": "available_software/detail/OpenMM-PLUMED/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMM-PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenMM-PLUMED, load one of these modules using a module load command like:

                  module load OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMM/", "title": "OpenMM", "text": ""}, {"location": "available_software/detail/OpenMM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenMM, load one of these modules using a module load command like:

                  module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\n
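
                  Recent OpenMM releases ship a small self-test that reports which compute platforms (CPU, CUDA, ...) are usable; as a sketch, assuming this also holds for the 8.0.0 module:

                  module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\npython -m openmm.testInstallation\n

                  Note that the CUDA platform will only show up when a GPU is actually available, e.g. inside a job on a GPU cluster such as accelgor or joltik.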

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMM/8.0.0-foss-2022a-CUDA-11.7.0 x - - - x - OpenMM/8.0.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2022a-CUDA-11.7.0 - - x - - - OpenMM/7.7.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2021a-CUDA-11.3.1 x - - - x - OpenMM/7.7.0-foss-2021a x x x - x x OpenMM/7.5.1-fosscuda-2020b x - - - x - OpenMM/7.5.1-foss-2021b-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021b-CUDA-11.4.1-DeepMind-patch x - - - x - OpenMM/7.5.1-foss-2021a-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021a-CUDA-11.3.1-DeepMind-patch x - - - x - OpenMM/7.5.0-intel-2020b - x x - x x OpenMM/7.5.0-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.5.0-fosscuda-2020b x - - - x - OpenMM/7.5.0-foss-2020b x x x x x x OpenMM/7.4.2-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.4.1-intel-2019b-Python-3.7.4 - x x - x x OpenMM/7.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenMMTools/", "title": "OpenMMTools", "text": ""}, {"location": "available_software/detail/OpenMMTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMMTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenMMTools, load one of these modules using a module load command like:

                  module load OpenMMTools/0.20.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMMTools/0.20.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMPI/", "title": "OpenMPI", "text": ""}, {"location": "available_software/detail/OpenMPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenMPI, load one of these modules using a module load command like:

                  module load OpenMPI/4.1.6-GCC-13.2.0\n
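
                  A minimal sketch of compiling and running an MPI program with the compiler wrapper and launcher that the module provides; hello_mpi.c is a placeholder for your own source file and the process count is only an example.

                  module load OpenMPI/4.1.6-GCC-13.2.0\nmpicc -O2 -o hello_mpi hello_mpi.c\nmpirun -np 4 ./hello_mpi\n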

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMPI/4.1.6-GCC-13.2.0 x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x OpenMPI/4.1.4-GCC-11.3.0 x x x x x x OpenMPI/4.1.1-intel-compilers-2021.2.0 x x x x x x OpenMPI/4.1.1-GCC-11.2.0 x x x x x x OpenMPI/4.1.1-GCC-10.3.0 x x x x x x OpenMPI/4.0.5-iccifort-2020.4.304 x x x x x x OpenMPI/4.0.5-gcccuda-2020b x x x x x x OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1 x - x - x - OpenMPI/4.0.5-GCC-10.2.0 x x x x x x OpenMPI/4.0.3-iccifort-2020.1.217 - x - - - - OpenMPI/4.0.3-GCC-9.3.0 - x x x x x OpenMPI/3.1.4-GCC-8.3.0-ucx - x - - - - OpenMPI/3.1.4-GCC-8.3.0 x x x x x x"}, {"location": "available_software/detail/OpenMolcas/", "title": "OpenMolcas", "text": ""}, {"location": "available_software/detail/OpenMolcas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenMolcas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenMolcas, load one of these modules using a module load command like:

                  module load OpenMolcas/21.06-iomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMolcas/21.06-iomkl-2021a x x x x x x OpenMolcas/21.06-intel-2021a - x x - x x"}, {"location": "available_software/detail/OpenPGM/", "title": "OpenPGM", "text": ""}, {"location": "available_software/detail/OpenPGM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenPGM, load one of these modules using a module load command like:

                  module load OpenPGM/5.2.122-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-12.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-9.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenPIV/", "title": "OpenPIV", "text": ""}, {"location": "available_software/detail/OpenPIV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenPIV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenPIV, load one of these modules using a module load command like:

                  module load OpenPIV/0.21.8-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenPIV/0.21.8-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenSSL/", "title": "OpenSSL", "text": ""}, {"location": "available_software/detail/OpenSSL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenSSL, load one of these modules using a module load command like:

                  module load OpenSSL/1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSSL/1.1 x x x x x x"}, {"location": "available_software/detail/OpenSees/", "title": "OpenSees", "text": ""}, {"location": "available_software/detail/OpenSees/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSees installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenSees, load one of these modules using a module load command like:

                  module load OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel - x x - x x OpenSees/3.2.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenSlide-Java/", "title": "OpenSlide-Java", "text": ""}, {"location": "available_software/detail/OpenSlide-Java/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSlide-Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenSlide-Java, load one of these modules using a module load command like:

                  module load OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/OpenSlide/", "title": "OpenSlide", "text": ""}, {"location": "available_software/detail/OpenSlide/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OpenSlide installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OpenSlide, load one of these modules using a module load command like:

                  module load OpenSlide/3.4.1-GCCcore-12.3.0-largefiles\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSlide/3.4.1-GCCcore-12.3.0-largefiles x x x x x x OpenSlide/3.4.1-GCCcore-11.3.0-largefiles x - x - x - OpenSlide/3.4.1-GCCcore-11.2.0 x x x - x x OpenSlide/3.4.1-GCCcore-10.3.0-largefiles x x x - x x"}, {"location": "available_software/detail/Optuna/", "title": "Optuna", "text": ""}, {"location": "available_software/detail/Optuna/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Optuna installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Optuna, load one of these modules using a module load command like:

                  module load Optuna/3.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Optuna/3.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/OrthoFinder/", "title": "OrthoFinder", "text": ""}, {"location": "available_software/detail/OrthoFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which OrthoFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using OrthoFinder, load one of these modules using a module load command like:

                  module load OrthoFinder/2.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OrthoFinder/2.5.5-foss-2023a x x x x x x OrthoFinder/2.5.4-foss-2020b - x x x x x OrthoFinder/2.5.2-foss-2020b - x x x x x OrthoFinder/2.3.11-intel-2019b-Python-3.7.4 - x x - x x OrthoFinder/2.3.8-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Osi/", "title": "Osi", "text": ""}, {"location": "available_software/detail/Osi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Osi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Osi, load one of these modules using a module load command like:

                  module load Osi/0.108.9-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Osi/0.108.9-GCC-12.3.0 x x x x x x Osi/0.108.8-GCC-12.2.0 x x x x x x Osi/0.108.7-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PASA/", "title": "PASA", "text": ""}, {"location": "available_software/detail/PASA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PASA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PASA, load one of these modules using a module load command like:

                  module load PASA/2.5.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PASA/2.5.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/PBGZIP/", "title": "PBGZIP", "text": ""}, {"location": "available_software/detail/PBGZIP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PBGZIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PBGZIP, load one of these modules using a module load command like:

                  module load PBGZIP/20160804-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PBGZIP/20160804-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PCRE/", "title": "PCRE", "text": ""}, {"location": "available_software/detail/PCRE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PCRE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PCRE, load one of these modules using a module load command like:

                  module load PCRE/8.45-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PCRE/8.45-GCCcore-12.3.0 x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x PCRE/8.45-GCCcore-11.3.0 x x x x x x PCRE/8.45-GCCcore-11.2.0 x x x x x x PCRE/8.44-GCCcore-10.3.0 x x x x x x PCRE/8.44-GCCcore-10.2.0 x x x x x x PCRE/8.44-GCCcore-9.3.0 x x x x x x PCRE/8.43-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/PCRE2/", "title": "PCRE2", "text": ""}, {"location": "available_software/detail/PCRE2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PCRE2, load one of these modules using a module load command like:

                  module load PCRE2/10.42-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PCRE2/10.42-GCCcore-12.3.0 x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x PCRE2/10.40-GCCcore-11.3.0 x x x x x x PCRE2/10.37-GCCcore-11.2.0 x x x x x x PCRE2/10.36-GCCcore-10.3.0 x x x x x x PCRE2/10.36 - x x - x - PCRE2/10.35-GCCcore-10.2.0 x x x x x x PCRE2/10.34-GCCcore-9.3.0 - x x - x x PCRE2/10.33-GCCcore-8.3.0 x x x - x x PCRE2/10.32 - - x - x -"}, {"location": "available_software/detail/PEAR/", "title": "PEAR", "text": ""}, {"location": "available_software/detail/PEAR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PEAR, load one of these modules using a module load command like:

                  module load PEAR/0.9.11-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PEAR/0.9.11-GCCcore-9.3.0 - x x - x x PEAR/0.9.11-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/PETSc/", "title": "PETSc", "text": ""}, {"location": "available_software/detail/PETSc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PETSc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PETSc, load one of these modules using a module load command like:

                  module load PETSc/3.18.4-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PETSc/3.18.4-intel-2021b x x x x x x PETSc/3.17.4-foss-2022a x x x x x x PETSc/3.15.1-foss-2021a - x x - x x PETSc/3.14.4-foss-2020b - x x x x x PETSc/3.12.4-intel-2019b-Python-3.7.4 - - x - x - PETSc/3.12.4-intel-2019b-Python-2.7.16 - x x - x x PETSc/3.12.4-foss-2020a-Python-3.8.2 - x x - x x PETSc/3.12.4-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/PHYLIP/", "title": "PHYLIP", "text": ""}, {"location": "available_software/detail/PHYLIP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PHYLIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PHYLIP, load one of these modules using a module load command like:

                  module load PHYLIP/3.697-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PHYLIP/3.697-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/PICRUSt2/", "title": "PICRUSt2", "text": ""}, {"location": "available_software/detail/PICRUSt2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PICRUSt2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PICRUSt2, load one of these modules using a module load command like:

                  module load PICRUSt2/2.5.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PICRUSt2/2.5.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/PLAMS/", "title": "PLAMS", "text": ""}, {"location": "available_software/detail/PLAMS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLAMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PLAMS, load one of these modules using a module load command like:

                  module load PLAMS/1.5.1-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLAMS/1.5.1-intel-2022a x x x x x x"}, {"location": "available_software/detail/PLINK/", "title": "PLINK", "text": ""}, {"location": "available_software/detail/PLINK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLINK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PLINK, load one of these modules using a module load command like:

                  module load PLINK/2.00a3.1-GCC-11.2.0\n
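
                  A short sketch of a first run, assuming this PLINK 2.00 module provides the usual plink2 binary and that mydata.bed/.bim/.fam are placeholders for your own PLINK binary fileset:

                  module load PLINK/2.00a3.1-GCC-11.2.0\nplink2 --version\nplink2 --bfile mydata --freq --out mydata_freq\n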

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLINK/2.00a3.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PLUMED/", "title": "PLUMED", "text": ""}, {"location": "available_software/detail/PLUMED/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PLUMED, load one of these modules using a module load command like:

                  module load PLUMED/2.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLUMED/2.9.0-foss-2023a x x x x x x PLUMED/2.9.0-foss-2022b x x x x x x PLUMED/2.8.1-foss-2022a x x x x x x PLUMED/2.7.3-foss-2021b x x x - x x PLUMED/2.7.2-foss-2021a x x x x x x PLUMED/2.6.2-intelcuda-2020b - - - - x - PLUMED/2.6.2-intel-2020b - x x - x - PLUMED/2.6.2-foss-2020b - x x x x x PLUMED/2.6.0-iomkl-2020a-Python-3.8.2 - x - - - - PLUMED/2.6.0-intel-2020a-Python-3.8.2 - x x - x x PLUMED/2.6.0-foss-2020a-Python-3.8.2 - x x - x x PLUMED/2.5.3-intel-2019b-Python-3.7.4 - x x - x x PLUMED/2.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PLY/", "title": "PLY", "text": ""}, {"location": "available_software/detail/PLY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PLY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PLY, load one of these modules using a module load command like:

                  module load PLY/3.11-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLY/3.11-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PMIx/", "title": "PMIx", "text": ""}, {"location": "available_software/detail/PMIx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PMIx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PMIx, load one of these modules using a module load command like:

                  module load PMIx/4.2.6-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PMIx/4.2.6-GCCcore-13.2.0 x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x PMIx/4.1.2-GCCcore-11.3.0 x x x x x x PMIx/4.1.0-GCCcore-11.2.0 x x x x x x PMIx/3.2.3-GCCcore-10.3.0 x x x x x x PMIx/3.1.5-GCCcore-10.2.0 x x x x x x PMIx/3.1.5-GCCcore-9.3.0 x x x x x x PMIx/3.1.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/POT/", "title": "POT", "text": ""}, {"location": "available_software/detail/POT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which POT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using POT, load one of these modules using a module load command like:

                  module load POT/0.9.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty POT/0.9.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/POV-Ray/", "title": "POV-Ray", "text": ""}, {"location": "available_software/detail/POV-Ray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which POV-Ray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using POV-Ray, load one of these modules using a module load command like:

                  module load POV-Ray/3.7.0.8-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty POV-Ray/3.7.0.8-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/PPanGGOLiN/", "title": "PPanGGOLiN", "text": ""}, {"location": "available_software/detail/PPanGGOLiN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PPanGGOLiN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PPanGGOLiN, load one of these modules using a module load command like:

                  module load PPanGGOLiN/1.1.136-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PPanGGOLiN/1.1.136-foss-2021b x x x - x x"}, {"location": "available_software/detail/PRANK/", "title": "PRANK", "text": ""}, {"location": "available_software/detail/PRANK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PRANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PRANK, load one of these modules using a module load command like:

                  module load PRANK/170427-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRANK/170427-GCC-10.2.0 - x x x x x PRANK/170427-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/PRINSEQ/", "title": "PRINSEQ", "text": ""}, {"location": "available_software/detail/PRINSEQ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PRINSEQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PRINSEQ, load one of these modules using a module load command like:

                  module load PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0 x x x - x x PRINSEQ/0.20.4-foss-2020b-Perl-5.32.0 - x x x x -"}, {"location": "available_software/detail/PRISMS-PF/", "title": "PRISMS-PF", "text": ""}, {"location": "available_software/detail/PRISMS-PF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PRISMS-PF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PRISMS-PF, load one of these modules using a module load command like:

                  module load PRISMS-PF/2.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRISMS-PF/2.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/PROJ/", "title": "PROJ", "text": ""}, {"location": "available_software/detail/PROJ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PROJ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using PROJ, load one of these modules using a module load command like:

                  module load PROJ/9.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PROJ/9.2.0-GCCcore-12.3.0 x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x PROJ/9.0.0-GCCcore-11.3.0 x x x x x x PROJ/8.1.0-GCCcore-11.2.0 x x x x x x PROJ/8.0.1-GCCcore-10.3.0 x x x x x x PROJ/7.2.1-GCCcore-10.2.0 - x x x x x PROJ/7.0.0-GCCcore-9.3.0 - x x - x x PROJ/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pandoc/", "title": "Pandoc", "text": ""}, {"location": "available_software/detail/Pandoc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pandoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pandoc, load one of these modules using a module load command like:

                  module load Pandoc/2.13\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pandoc/2.13 - x x x x x"}, {"location": "available_software/detail/Pango/", "title": "Pango", "text": ""}, {"location": "available_software/detail/Pango/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pango installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Pango, load one of these modules using a module load command like:

                  module load Pango/1.50.14-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pango/1.50.14-GCCcore-12.3.0 x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x Pango/1.50.7-GCCcore-11.3.0 x x x x x x Pango/1.48.8-GCCcore-11.2.0 x x x x x x Pango/1.48.5-GCCcore-10.3.0 x x x x x x Pango/1.47.0-GCCcore-10.2.0 x x x x x x Pango/1.44.7-GCCcore-9.3.0 - x x - x x Pango/1.44.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/ParMETIS/", "title": "ParMETIS", "text": ""}, {"location": "available_software/detail/ParMETIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParMETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ParMETIS, load one of these modules using a module load command like:

                  module load ParMETIS/4.0.3-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParMETIS/4.0.3-iimpi-2020a - x x - x x ParMETIS/4.0.3-iimpi-2019b - x x - x x ParMETIS/4.0.3-gompi-2022a x x x x x x ParMETIS/4.0.3-gompi-2021a - x x - x x ParMETIS/4.0.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParMGridGen/", "title": "ParMGridGen", "text": ""}, {"location": "available_software/detail/ParMGridGen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParMGridGen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ParMGridGen, load one of these modules using a module load command like:

                  module load ParMGridGen/1.0-iimpi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParMGridGen/1.0-iimpi-2019b - x x - x x ParMGridGen/1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParaView/", "title": "ParaView", "text": ""}, {"location": "available_software/detail/ParaView/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParaView installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ParaView, load one of these modules using a module load command like:

                  module load ParaView/5.11.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParaView/5.11.2-foss-2023a x x x x x x ParaView/5.10.1-foss-2022a-mpi x x x x x x ParaView/5.9.1-intel-2021a-mpi - x x - x x ParaView/5.9.1-foss-2021b-mpi x x x x x x ParaView/5.9.1-foss-2021a-mpi x x x x x x ParaView/5.8.1-intel-2020b-mpi - x - - - - ParaView/5.8.1-foss-2020b-mpi x x x x x x ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi - x x - x x ParaView/5.6.2-foss-2019b-Python-3.7.4-mpi x x x - x x ParaView/5.4.1-foss-2019b-Python-2.7.16-mpi - x x - x x"}, {"location": "available_software/detail/ParmEd/", "title": "ParmEd", "text": ""}, {"location": "available_software/detail/ParmEd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParmEd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ParmEd, load one of these modules using a module load command like:

                  module load ParmEd/3.2.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParmEd/3.2.0-intel-2020a-Python-3.8.2 - x x - x x ParmEd/3.2.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Parsl/", "title": "Parsl", "text": ""}, {"location": "available_software/detail/Parsl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Parsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Parsl, load one of these modules using a module load command like:

                  module load Parsl/2023.7.17-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Parsl/2023.7.17-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PartitionFinder/", "title": "PartitionFinder", "text": ""}, {"location": "available_software/detail/PartitionFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PartitionFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PartitionFinder, load one of these modules using a module load command like:

                  module load PartitionFinder/2.1.1-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PartitionFinder/2.1.1-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/Perl-bundle-CPAN/", "title": "Perl-bundle-CPAN", "text": ""}, {"location": "available_software/detail/Perl-bundle-CPAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Perl-bundle-CPAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

                  module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Perl/", "title": "Perl", "text": ""}, {"location": "available_software/detail/Perl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Perl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Perl, load one of these modules using a module load command like:

                  module load Perl/5.38.0-GCCcore-13.2.0\n
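
                  As a minimal, illustrative follow-up (not part of the automatically generated data), the interpreter version the module provides can be checked directly:

                  # load a Perl module, then confirm which interpreter version it puts on the PATH\n
                  module load Perl/5.38.0-GCCcore-13.2.0\n
                  perl -v\n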

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Perl/5.38.0-GCCcore-13.2.0 x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x Perl/5.34.1-GCCcore-11.3.0-minimal x x x x x x Perl/5.34.1-GCCcore-11.3.0 x x x x x x Perl/5.34.0-GCCcore-11.2.0-minimal x x x x x x Perl/5.34.0-GCCcore-11.2.0 x x x x x x Perl/5.32.1-GCCcore-10.3.0-minimal x x x x x x Perl/5.32.1-GCCcore-10.3.0 x x x x x x Perl/5.32.0-GCCcore-10.2.0-minimal x x x x x x Perl/5.32.0-GCCcore-10.2.0 x x x x x x Perl/5.30.2-GCCcore-9.3.0-minimal x x x x x x Perl/5.30.2-GCCcore-9.3.0 x x x x x x Perl/5.30.0-GCCcore-8.3.0-minimal x x x x x x Perl/5.30.0-GCCcore-8.3.0 x x x x x x Perl/5.28.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Phenoflow/", "title": "Phenoflow", "text": ""}, {"location": "available_software/detail/Phenoflow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Phenoflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Phenoflow, load one of these modules using a module load command like:

                  module load Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/PhyloPhlAn/", "title": "PhyloPhlAn", "text": ""}, {"location": "available_software/detail/PhyloPhlAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PhyloPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PhyloPhlAn, load one of these modules using a module load command like:

                  module load PhyloPhlAn/3.0.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PhyloPhlAn/3.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Pillow-SIMD/", "title": "Pillow-SIMD", "text": ""}, {"location": "available_software/detail/Pillow-SIMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pillow-SIMD, load one of these modules using a module load command like:

                  module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x Pillow-SIMD/9.5.0-GCCcore-12.2.0 x x x x x x Pillow-SIMD/9.2.0-GCCcore-11.3.0 x x x x x x Pillow-SIMD/8.2.0-GCCcore-10.3.0 x x x - x x Pillow-SIMD/7.1.2-GCCcore-10.2.0 x x x x x x Pillow-SIMD/6.0.x.post0-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/Pillow/", "title": "Pillow", "text": ""}, {"location": "available_software/detail/Pillow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pillow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pillow, load one of these modules using a module load command like:

                  module load Pillow/10.2.0-GCCcore-13.2.0\n
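
                  As an illustrative sketch (assuming the usual PIL import name, which the overview itself does not state), a quick import test could look like:

                  # load Pillow, then verify that the package imports under its PIL name\n
                  module load Pillow/10.2.0-GCCcore-13.2.0\n
                  python -c 'import PIL; print(PIL.__version__)'\n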

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pillow/10.2.0-GCCcore-13.2.0 x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x Pillow/9.1.1-GCCcore-11.3.0 x x x x x x Pillow/8.3.2-GCCcore-11.2.0 x x x x x x Pillow/8.3.1-GCCcore-11.2.0 x x x - x x Pillow/8.2.0-GCCcore-10.3.0 x x x x x x Pillow/8.0.1-GCCcore-10.2.0 x x x x x x Pillow/7.0.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x Pillow/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pilon/", "title": "Pilon", "text": ""}, {"location": "available_software/detail/Pilon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pilon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pilon, load one of these modules using a module load command like:

                  module load Pilon/1.23-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pilon/1.23-Java-11 x x x x x x Pilon/1.23-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Pint/", "title": "Pint", "text": ""}, {"location": "available_software/detail/Pint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pint, load one of these modules using a module load command like:

                  module load Pint/0.22-GCCcore-11.3.0\n
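
                  Pint is a Python units library; the one-liner below is a hedged usage sketch (not generated data) showing a simple unit conversion once the module is loaded:

                  # load Pint and convert a quantity between units\n
                  module load Pint/0.22-GCCcore-11.3.0\n
                  python -c 'import pint; ureg = pint.UnitRegistry(); print((3 * ureg.meter).to(ureg.centimeter))'\n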

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pint/0.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PnetCDF/", "title": "PnetCDF", "text": ""}, {"location": "available_software/detail/PnetCDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PnetCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PnetCDF, load one of these modules using a module load command like:

                  module load PnetCDF/1.12.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PnetCDF/1.12.3-gompi-2022a x - x - x - PnetCDF/1.12.3-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Porechop/", "title": "Porechop", "text": ""}, {"location": "available_software/detail/Porechop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Porechop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Porechop, load one of these modules using a module load command like:

                  module load Porechop/0.2.4-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Porechop/0.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PostgreSQL/", "title": "PostgreSQL", "text": ""}, {"location": "available_software/detail/PostgreSQL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PostgreSQL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PostgreSQL, load one of these modules using a module load command like:

                  module load PostgreSQL/16.1-GCCcore-12.3.0\n
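
                  A minimal check, assuming the module also puts the usual client tools on the PATH (an assumption, not something the overview states):

                  # load PostgreSQL and report the client version the module provides\n
                  module load PostgreSQL/16.1-GCCcore-12.3.0\n
                  psql --version\n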

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x PostgreSQL/14.4-GCCcore-11.3.0 x x x x x x PostgreSQL/13.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/Primer3/", "title": "Primer3", "text": ""}, {"location": "available_software/detail/Primer3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Primer3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Primer3, load one of these modules using a module load command like:

                  module load Primer3/2.5.0-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Primer3/2.5.0-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/ProBiS/", "title": "ProBiS", "text": ""}, {"location": "available_software/detail/ProBiS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ProBiS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ProBiS, load one of these modules using a module load command like:

                  module load ProBiS/20230403-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ProBiS/20230403-gompi-2022b x x x x x x"}, {"location": "available_software/detail/ProtHint/", "title": "ProtHint", "text": ""}, {"location": "available_software/detail/ProtHint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ProtHint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ProtHint, load one of these modules using a module load command like:

                  module load ProtHint/2.6.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ProtHint/2.6.0-GCC-11.3.0 x x x x x x ProtHint/2.6.0-GCC-11.2.0 x x x x x x ProtHint/2.6.0-GCC-10.2.0 x x x x x x ProtHint/2.4.0-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/PsiCLASS/", "title": "PsiCLASS", "text": ""}, {"location": "available_software/detail/PsiCLASS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PsiCLASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PsiCLASS, load one of these modules using a module load command like:

                  module load PsiCLASS/1.0.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PsiCLASS/1.0.3-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/PuLP/", "title": "PuLP", "text": ""}, {"location": "available_software/detail/PuLP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PuLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PuLP, load one of these modules using a module load command like:

                  module load PuLP/2.8.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PuLP/2.8.0-foss-2023a x x x x x x PuLP/2.7.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/PyBerny/", "title": "PyBerny", "text": ""}, {"location": "available_software/detail/PyBerny/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyBerny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyBerny, load one of these modules using a module load command like:

                  module load PyBerny/0.6.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyBerny/0.6.3-foss-2022b x x x x x x PyBerny/0.6.3-foss-2022a - x x x x x PyBerny/0.6.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyCairo/", "title": "PyCairo", "text": ""}, {"location": "available_software/detail/PyCairo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyCairo, load one of these modules using a module load command like:

                  module load PyCairo/1.21.0-GCCcore-11.3.0\n
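
                  An illustrative import test (the cairo import name and version attributes are the standard PyCairo ones; treat this as a sketch rather than generated documentation):

                  # load PyCairo and print the binding version plus the cairo library version\n
                  module load PyCairo/1.21.0-GCCcore-11.3.0\n
                  python -c 'import cairo; print(cairo.version, cairo.cairo_version_string())'\n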

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCairo/1.21.0-GCCcore-11.3.0 x x x x x x PyCairo/1.20.1-GCCcore-11.2.0 x x x x x x PyCairo/1.20.1-GCCcore-10.3.0 x x x x x x PyCairo/1.20.0-GCCcore-10.2.0 - x x x x x PyCairo/1.18.2-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/PyCalib/", "title": "PyCalib", "text": ""}, {"location": "available_software/detail/PyCalib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCalib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyCalib, load one of these modules using a module load command like:

                  module load PyCalib/20230531-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCalib/20230531-gfbf-2022b x x x x x x PyCalib/0.1.0.dev0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyCheMPS2/", "title": "PyCheMPS2", "text": ""}, {"location": "available_software/detail/PyCheMPS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyCheMPS2, load one of these modules using a module load command like:

                  module load PyCheMPS2/1.8.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCheMPS2/1.8.12-foss-2022b x x x x x x PyCheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/PyFoam/", "title": "PyFoam", "text": ""}, {"location": "available_software/detail/PyFoam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyFoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyFoam, load one of these modules using a module load command like:

                  module load PyFoam/2020.5-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyFoam/2020.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyGEOS/", "title": "PyGEOS", "text": ""}, {"location": "available_software/detail/PyGEOS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyGEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyGEOS, load one of these modules using a module load command like:

                  module load PyGEOS/0.8-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyGEOS/0.8-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyGObject/", "title": "PyGObject", "text": ""}, {"location": "available_software/detail/PyGObject/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyGObject installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyGObject, load one of these modules using a module load command like:

                  module load PyGObject/3.42.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyGObject/3.42.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyInstaller/", "title": "PyInstaller", "text": ""}, {"location": "available_software/detail/PyInstaller/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyInstaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyInstaller, load one of these modules using a module load command like:

                  module load PyInstaller/6.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyInstaller/6.3.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/PyKeOps/", "title": "PyKeOps", "text": ""}, {"location": "available_software/detail/PyKeOps/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyKeOps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyKeOps, load one of these modules using a module load command like:

                  module load PyKeOps/2.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyKeOps/2.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/PyMC/", "title": "PyMC", "text": ""}, {"location": "available_software/detail/PyMC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMC, load one of these modules using a module load command like:

                  module load PyMC/5.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMC/5.9.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/PyMC3/", "title": "PyMC3", "text": ""}, {"location": "available_software/detail/PyMC3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMC3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMC3, load one of these modules using a module load command like:

                  module load PyMC3/3.11.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMC3/3.11.1-intel-2021b x x x - x x PyMC3/3.11.1-intel-2020b - - x - x x PyMC3/3.11.1-fosscuda-2020b - - - - x - PyMC3/3.8-intel-2019b-Python-3.7.4 - - x - x x PyMC3/3.8-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyMDE/", "title": "PyMDE", "text": ""}, {"location": "available_software/detail/PyMDE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMDE, load one of these modules using a module load command like:

                  module load PyMDE/0.1.18-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMDE/0.1.18-foss-2022a-CUDA-11.7.0 x - x - x - PyMDE/0.1.18-foss-2022a x x x x x x"}, {"location": "available_software/detail/PyMOL/", "title": "PyMOL", "text": ""}, {"location": "available_software/detail/PyMOL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMOL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMOL, load one of these modules using a module load command like:

                  module load PyMOL/2.5.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMOL/2.5.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/PyOD/", "title": "PyOD", "text": ""}, {"location": "available_software/detail/PyOD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyOD, load one of these modules using a module load command like:

                  module load PyOD/0.8.7-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOD/0.8.7-intel-2020b - x x - x x PyOD/0.8.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenCL/", "title": "PyOpenCL", "text": ""}, {"location": "available_software/detail/PyOpenCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOpenCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyOpenCL, load one of these modules using a module load command like:

                  module load PyOpenCL/2023.1.4-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOpenCL/2023.1.4-foss-2023a x x x x x x PyOpenCL/2023.1.4-foss-2022a-CUDA-11.7.0 x - - - x - PyOpenCL/2023.1.4-foss-2022a x x x x x x PyOpenCL/2021.2.13-foss-2021b-CUDA-11.4.1 x - - - x - PyOpenCL/2021.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenGL/", "title": "PyOpenGL", "text": ""}, {"location": "available_software/detail/PyOpenGL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOpenGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyOpenGL, load one of these modules using a module load command like:

                  module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.2.0 x x x - x x PyOpenGL/3.1.5-GCCcore-10.3.0 - x x - x x PyOpenGL/3.1.5-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/PyPy/", "title": "PyPy", "text": ""}, {"location": "available_software/detail/PyPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyPy, load one of these modules using a module load command like:

                  module load PyPy/7.3.12-3.10\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyPy/7.3.12-3.10 x x x x x x"}, {"location": "available_software/detail/PyQt5/", "title": "PyQt5", "text": ""}, {"location": "available_software/detail/PyQt5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyQt5, load one of these modules using a module load command like:

                  module load PyQt5/5.15.7-GCCcore-12.2.0\n
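
                  A quick, hedged import check that needs no display (QT_VERSION_STR comes from PyQt5.QtCore; this line is illustrative only):

                  # load PyQt5 and print the Qt version the bindings were built against\n
                  module load PyQt5/5.15.7-GCCcore-12.2.0\n
                  python -c 'from PyQt5.QtCore import QT_VERSION_STR; print(QT_VERSION_STR)'\n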

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyQt5/5.15.7-GCCcore-12.2.0 x x x x x x PyQt5/5.15.5-GCCcore-11.3.0 x x x x x x PyQt5/5.15.4-GCCcore-11.2.0 x x x x x x PyQt5/5.15.4-GCCcore-10.3.0 - x x - x x PyQt5/5.15.1-GCCcore-10.2.0 x x x x x x PyQt5/5.15.1-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyQtGraph/", "title": "PyQtGraph", "text": ""}, {"location": "available_software/detail/PyQtGraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyQtGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyQtGraph, load one of these modules using a module load command like:

                  module load PyQtGraph/0.13.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyQtGraph/0.13.3-foss-2022a x x x x x x PyQtGraph/0.12.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/PyRETIS/", "title": "PyRETIS", "text": ""}, {"location": "available_software/detail/PyRETIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyRETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyRETIS, load one of these modules using a module load command like:

                  module load PyRETIS/2.5.0-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyRETIS/2.5.0-intel-2020b - x x - x x PyRETIS/2.5.0-intel-2020a-Python-3.8.2 - - x - x x PyRETIS/2.5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyRe/", "title": "PyRe", "text": ""}, {"location": "available_software/detail/PyRe/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyRe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyRe, load one of these modules using a module load command like:

                  module load PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4 - x - - - x PyRe/5.0.3-20190221-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PySCF/", "title": "PySCF", "text": ""}, {"location": "available_software/detail/PySCF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PySCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PySCF, load one of these modules using a module load command like:

                  module load PySCF/2.4.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PySCF/2.4.0-foss-2022b x x x x x x PySCF/2.1.1-foss-2022a - x x x x x PySCF/1.7.6-gomkl-2021a x x x - x x PySCF/1.7.6-foss-2021a x x x - x x"}, {"location": "available_software/detail/PyStan/", "title": "PyStan", "text": ""}, {"location": "available_software/detail/PyStan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyStan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyStan, load one of these modules using a module load command like:

                  module load PyStan/2.19.1.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyStan/2.19.1.1-intel-2020b - x x - x x"}, {"location": "available_software/detail/PyTables/", "title": "PyTables", "text": ""}, {"location": "available_software/detail/PyTables/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTables installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTables, load one of these modules using a module load command like:

                  module load PyTables/3.8.0-foss-2022a\n
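
                  Note that PyTables is imported as tables in Python; a first check could look like the sketch below (illustrative only):

                  # load PyTables and confirm the import name and version\n
                  module load PyTables/3.8.0-foss-2022a\n
                  python -c 'import tables; print(tables.__version__)'\n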

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTables/3.8.0-foss-2022a x x x x x x PyTables/3.6.1-intel-2020b - x x - x x PyTables/3.6.1-intel-2020a-Python-3.8.2 x x x x x x PyTables/3.6.1-fosscuda-2020b - - - - x - PyTables/3.6.1-foss-2021b x x x x x x PyTables/3.6.1-foss-2021a x x x x x x PyTables/3.6.1-foss-2020b - x x x x x PyTables/3.6.1-foss-2020a-Python-3.8.2 - x x - x x PyTables/3.6.1-foss-2019b-Python-3.7.4 - x x - x x PyTables/3.5.2-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/PyTensor/", "title": "PyTensor", "text": ""}, {"location": "available_software/detail/PyTensor/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTensor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTensor, load one of these modules using a module load command like:

                  module load PyTensor/2.17.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTensor/2.17.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/PyTorch-Geometric/", "title": "PyTorch-Geometric", "text": ""}, {"location": "available_software/detail/PyTorch-Geometric/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Geometric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch-Geometric, load one of these modules using a module load command like:

                  module load PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1 - - - - x - PyTorch-Geometric/1.7.0-foss-2020b-numba-0.53.1 - x x - x x PyTorch-Geometric/1.6.3-fosscuda-2020b - - - - x - PyTorch-Geometric/1.4.2-foss-2019b-Python-3.7.4-PyTorch-1.4.0 - x x - x x PyTorch-Geometric/1.3.2-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PyTorch-Ignite/", "title": "PyTorch-Ignite", "text": ""}, {"location": "available_software/detail/PyTorch-Ignite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Ignite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch-Ignite, load one of these modules using a module load command like:

                  module load PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/PyTorch-Lightning/", "title": "PyTorch-Lightning", "text": ""}, {"location": "available_software/detail/PyTorch-Lightning/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Lightning installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch-Lightning, load one of these modules using a module load command like:

                  module load PyTorch-Lightning/2.1.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Lightning/2.1.3-foss-2023a x x x x x x PyTorch-Lightning/2.1.2-foss-2022b x x x x x x PyTorch-Lightning/1.8.4-foss-2022a-CUDA-11.7.0 x - - - x - PyTorch-Lightning/1.8.4-foss-2022a x x x x x x PyTorch-Lightning/1.7.7-foss-2022a-CUDA-11.7.0 - - x - - - PyTorch-Lightning/1.5.9-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch-Lightning/1.5.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/PyTorch/", "title": "PyTorch", "text": ""}, {"location": "available_software/detail/PyTorch/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch, load one of these modules using a module load command like:

                  module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\n
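
                  On a GPU node, a short check along these lines (illustrative, not part of the generated overview) confirms both the PyTorch version and whether a CUDA device is visible:

                  # load the CUDA-enabled PyTorch build and check GPU visibility\n
                  module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\n
                  python -c 'import torch; print(torch.__version__, torch.cuda.is_available())'\n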

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch/2.1.2-foss-2023a-CUDA-12.1.1 x - x - x - PyTorch/2.1.2-foss-2023a x x x x x x PyTorch/1.13.1-foss-2022b x x x x x x PyTorch/1.13.1-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.1-foss-2022a-CUDA-11.7.0 - - x - x - PyTorch/1.12.1-foss-2022a x x x x - x PyTorch/1.12.1-foss-2021b - x x x x x PyTorch/1.12.0-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.0-foss-2022a x x x x x x PyTorch/1.11.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-fosscuda-2020b x - - - - - PyTorch/1.10.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-foss-2021a x x x x x x PyTorch/1.9.0-fosscuda-2020b x - - - - - PyTorch/1.8.1-fosscuda-2020b x - - - - - PyTorch/1.7.1-fosscuda-2020b x - - - x - PyTorch/1.7.1-foss-2020b - x x x x x PyTorch/1.6.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.4.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.3.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyVCF/", "title": "PyVCF", "text": ""}, {"location": "available_software/detail/PyVCF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyVCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyVCF, load one of these modules using a module load command like:

                  module load PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16 - - x - x - PyVCF/0.6.8-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/PyVCF3/", "title": "PyVCF3", "text": ""}, {"location": "available_software/detail/PyVCF3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyVCF3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyVCF3, load one of these modules using a module load command like:

                  module load PyVCF3/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyVCF3/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyWBGT/", "title": "PyWBGT", "text": ""}, {"location": "available_software/detail/PyWBGT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyWBGT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyWBGT, load one of these modules using a module load command like:

                  module load PyWBGT/1.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyWBGT/1.0.0-foss-2022a x x x x x x PyWBGT/1.0.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyWavelets/", "title": "PyWavelets", "text": ""}, {"location": "available_software/detail/PyWavelets/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyWavelets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyWavelets, load one of these modules using a module load command like:

                  module load PyWavelets/1.1.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyWavelets/1.1.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyYAML/", "title": "PyYAML", "text": ""}, {"location": "available_software/detail/PyYAML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyYAML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyYAML, load one of these modules using a module load command like:

                  module load PyYAML/6.0-GCCcore-12.3.0\n
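
                  A minimal usage sketch (PyYAML is imported as yaml; the overview itself only lists module names):

                  # load PyYAML and parse a small inline document\n
                  module load PyYAML/6.0-GCCcore-12.3.0\n
                  python -c 'import yaml; print(yaml.safe_load("key: [1, 2]"))'\n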

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyYAML/6.0-GCCcore-12.3.0 x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x PyYAML/6.0-GCCcore-11.3.0 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0 x x x x x x PyYAML/5.4.1-GCCcore-10.3.0 x x x x x x PyYAML/5.3.1-GCCcore-10.2.0 x x x x x x PyYAML/5.3-GCCcore-9.3.0 x x x x x x PyYAML/5.1.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/PyZMQ/", "title": "PyZMQ", "text": ""}, {"location": "available_software/detail/PyZMQ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyZMQ, load one of these modules using a module load command like:

                  module load PyZMQ/25.1.1-GCCcore-12.3.0\n
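
                  An illustrative check of both the binding and the underlying libzmq version (a sketch, not generated data):

                  # load PyZMQ and report the pyzmq and libzmq versions\n
                  module load PyZMQ/25.1.1-GCCcore-12.3.0\n
                  python -c 'import zmq; print(zmq.pyzmq_version(), zmq.zmq_version())'\n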

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x PyZMQ/24.0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PycURL/", "title": "PycURL", "text": ""}, {"location": "available_software/detail/PycURL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PycURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PycURL, load one of these modules using a module load command like:

                  module load PycURL/7.45.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PycURL/7.45.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Pychopper/", "title": "Pychopper", "text": ""}, {"location": "available_software/detail/Pychopper/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pychopper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pychopper, load one of these modules using a module load command like:

                  module load Pychopper/2.3.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pychopper/2.3.1-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Pyomo/", "title": "Pyomo", "text": ""}, {"location": "available_software/detail/Pyomo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pyomo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pyomo, load one of these modules using a module load command like:

                  module load Pyomo/6.4.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pyomo/6.4.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/Pysam/", "title": "Pysam", "text": ""}, {"location": "available_software/detail/Pysam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pysam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pysam, load one of these modules using a module load command like:

                  module load Pysam/0.22.0-GCC-12.3.0\n
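
                  A quick import test as a sketch (Pysam is imported as pysam; anything beyond a version check would need an actual BAM/CRAM file):

                  # load Pysam and confirm it imports with the expected version\n
                  module load Pysam/0.22.0-GCC-12.3.0\n
                  python -c 'import pysam; print(pysam.__version__)'\n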

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pysam/0.22.0-GCC-12.3.0 x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x Pysam/0.19.1-GCC-11.3.0 x x x x x x Pysam/0.18.0-GCC-11.2.0 x x x - x x Pysam/0.17.0-GCC-11.2.0-Python-2.7.18 x x x x x x Pysam/0.17.0-GCC-11.2.0 x x x - x x Pysam/0.16.0.1-iccifort-2020.4.304 - x x x x x Pysam/0.16.0.1-iccifort-2020.1.217 - x x - x x Pysam/0.16.0.1-GCC-10.3.0 x x x x x x Pysam/0.16.0.1-GCC-10.2.0-Python-2.7.18 - x x x x x Pysam/0.16.0.1-GCC-10.2.0 x x x x x x Pysam/0.16.0.1-GCC-9.3.0 - x x - x x Pysam/0.16.0.1-GCC-8.3.0 - x x - x x Pysam/0.15.3-iccifort-2019.5.281 - x x - x x Pysam/0.15.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Python-bundle-PyPI/", "title": "Python-bundle-PyPI", "text": ""}, {"location": "available_software/detail/Python-bundle-PyPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Python-bundle-PyPI, load one of these modules using a module load command like:

                  module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Python/", "title": "Python", "text": ""}, {"location": "available_software/detail/Python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Python, load one of these modules using a module load command like:

                  module load Python/3.11.5-GCCcore-13.2.0\n
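
                  A minimal sanity check after loading (illustrative; it simply confirms which interpreter the module put first on the PATH):

                  # load a Python module and verify the interpreter path and version\n
                  module load Python/3.11.5-GCCcore-13.2.0\n
                  python -V\n
                  python -c 'import sys; print(sys.executable)'\n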

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Python/3.11.5-GCCcore-13.2.0 x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x Python/3.10.4-GCCcore-11.3.0-bare x x x x x x Python/3.10.4-GCCcore-11.3.0 x x x x x x Python/3.9.6-GCCcore-11.2.0-bare x x x x x x Python/3.9.6-GCCcore-11.2.0 x x x x x x Python/3.9.5-GCCcore-10.3.0-bare x x x x x x Python/3.9.5-GCCcore-10.3.0 x x x x x x Python/3.8.6-GCCcore-10.2.0 x x x x x x Python/3.8.2-GCCcore-9.3.0 x x x x x x Python/3.7.4-GCCcore-8.3.0 x x x x x x Python/3.7.2-GCCcore-8.2.0 - x - - - - Python/2.7.18-GCCcore-12.3.0 x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.3.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0 x x x x x x Python/2.7.18-GCCcore-10.3.0-bare x x x x x x Python/2.7.18-GCCcore-10.2.0 x x x x x x Python/2.7.18-GCCcore-9.3.0 x x x x x x Python/2.7.16-GCCcore-8.3.0 x x x - x x Python/2.7.15-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/QCA/", "title": "QCA", "text": ""}, {"location": "available_software/detail/QCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QCA, load one of these modules using a module load command like:

                  module load QCA/2.3.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QCA/2.3.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QCxMS/", "title": "QCxMS", "text": ""}, {"location": "available_software/detail/QCxMS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QCxMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QCxMS, load one of these modules using a module load command like:

                  module load QCxMS/5.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QCxMS/5.0.3 x x x x x x"}, {"location": "available_software/detail/QD/", "title": "QD", "text": ""}, {"location": "available_software/detail/QD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QD, load one of these modules using a module load command like:

                  module load QD/2.3.17-NVHPC-21.2-20160110\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QD/2.3.17-NVHPC-21.2-20160110 x - x - x -"}, {"location": "available_software/detail/QGIS/", "title": "QGIS", "text": ""}, {"location": "available_software/detail/QGIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QGIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QGIS, load one of these modules using a module load command like:

                  module load QGIS/3.28.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QGIS/3.28.1-foss-2021b x x x x x x"}, {"location": "available_software/detail/QIIME2/", "title": "QIIME2", "text": ""}, {"location": "available_software/detail/QIIME2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QIIME2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QIIME2, load one of these modules using a module load command like:

                  module load QIIME2/2023.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QIIME2/2023.5.1-foss-2022a x x x x x x QIIME2/2022.11 x x x x x x QIIME2/2021.8 - - - - - x QIIME2/2020.11 - x x - x x QIIME2/2020.8 - x x - x x QIIME2/2019.7 - - - - - x"}, {"location": "available_software/detail/QScintilla/", "title": "QScintilla", "text": ""}, {"location": "available_software/detail/QScintilla/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QScintilla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QScintilla, load one of these modules using a module load command like:

                  module load QScintilla/2.11.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QScintilla/2.11.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QUAST/", "title": "QUAST", "text": ""}, {"location": "available_software/detail/QUAST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QUAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QUAST, load one of these modules using a module load command like:

                  module load QUAST/5.2.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QUAST/5.2.0-foss-2022a x x x x x x QUAST/5.0.2-foss-2020b-Python-2.7.18 - x x x x x QUAST/5.0.2-foss-2020b - x x x x x QUAST/5.0.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qhull/", "title": "Qhull", "text": ""}, {"location": "available_software/detail/Qhull/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Qhull installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Qhull, load one of these modules using a module load command like:

                  module load Qhull/2020.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qhull/2020.2-GCCcore-12.3.0 x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x Qhull/2020.2-GCCcore-11.3.0 x x x x x x Qhull/2020.2-GCCcore-11.2.0 x x x x x x Qhull/2020.2-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/Qt5/", "title": "Qt5", "text": ""}, {"location": "available_software/detail/Qt5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Qt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Qt5, load one of these modules using a module load command like:

                  module load Qt5/5.15.10-GCCcore-12.3.0\n
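A minimal sketch of verifying the loaded Qt5 toolchain; qmake is Qt's build tool and --version simply reports which Qt libraries it targets, so this is only a smoke test, not a build recipe.

# load Qt5 and check that its build tool is on the PATH
module load Qt5/5.15.10-GCCcore-12.3.0
qmake --version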

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qt5/5.15.10-GCCcore-12.3.0 x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x Qt5/5.15.5-GCCcore-11.3.0 x x x x x x Qt5/5.15.2-GCCcore-11.2.0 x x x x x x Qt5/5.15.2-GCCcore-10.3.0 x x x x x x Qt5/5.14.2-GCCcore-10.2.0 x x x x x x Qt5/5.14.1-GCCcore-9.3.0 - x x - x x Qt5/5.13.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Qt5Webkit/", "title": "Qt5Webkit", "text": ""}, {"location": "available_software/detail/Qt5Webkit/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qt5Webkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qt5Webkit, load one of these modules using a module load command like:

                  module load Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtKeychain/", "title": "QtKeychain", "text": ""}, {"location": "available_software/detail/QtKeychain/#available-modules", "title": "Available modules", "text": "

The overview below shows which QtKeychain installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QtKeychain, load one of these modules using a module load command like:

                  module load QtKeychain/0.13.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QtKeychain/0.13.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtPy/", "title": "QtPy", "text": ""}, {"location": "available_software/detail/QtPy/#available-modules", "title": "Available modules", "text": "

The overview below shows which QtPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QtPy, load one of these modules using a module load command like:

                  module load QtPy/2.3.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QtPy/2.3.0-GCCcore-11.3.0 x x x x x x QtPy/2.2.1-GCCcore-11.2.0 x x x - x x QtPy/1.9.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Qtconsole/", "title": "Qtconsole", "text": ""}, {"location": "available_software/detail/Qtconsole/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qtconsole installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qtconsole, load one of these modules using a module load command like:

                  module load Qtconsole/5.4.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qtconsole/5.4.0-GCCcore-11.3.0 x x x x x x Qtconsole/5.3.2-GCCcore-11.2.0 x x x - x x Qtconsole/5.0.2-foss-2020b - x - - - - Qtconsole/5.0.2-GCCcore-10.2.0 - - x x x x"}, {"location": "available_software/detail/QuPath/", "title": "QuPath", "text": ""}, {"location": "available_software/detail/QuPath/#available-modules", "title": "Available modules", "text": "

The overview below shows which QuPath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QuPath, load one of these modules using a module load command like:

                  module load QuPath/0.5.0-GCCcore-12.3.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuPath/0.5.0-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/Qualimap/", "title": "Qualimap", "text": ""}, {"location": "available_software/detail/Qualimap/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qualimap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qualimap, load one of these modules using a module load command like:

                  module load Qualimap/2.2.1-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qualimap/2.2.1-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/QuantumESPRESSO/", "title": "QuantumESPRESSO", "text": ""}, {"location": "available_software/detail/QuantumESPRESSO/#available-modules", "title": "Available modules", "text": "

The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QuantumESPRESSO, load one of these modules using a module load command like:

                  module load QuantumESPRESSO/7.0-intel-2021b\n
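A minimal sketch of running the PWscf executable from this module under MPI; the input/output file names (scf.in, scf.out) and the process count are placeholders, not prescribed by this documentation, and in a real batch job the rank count would normally match the resources you requested.

# load QuantumESPRESSO together with its intel toolchain MPI runtime
module load QuantumESPRESSO/7.0-intel-2021b
# run the plane-wave SCF code on 4 MPI ranks, reading a prepared input file
mpirun -np 4 pw.x < scf.in > scf.out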

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuantumESPRESSO/7.0-intel-2021b x x x - x x QuantumESPRESSO/6.5-intel-2019b - x x - x x"}, {"location": "available_software/detail/QuickFF/", "title": "QuickFF", "text": ""}, {"location": "available_software/detail/QuickFF/#available-modules", "title": "Available modules", "text": "

The overview below shows which QuickFF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using QuickFF, load one of these modules using a module load command like:

                  module load QuickFF/2.2.7-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuickFF/2.2.7-intel-2020a-Python-3.8.2 x x x x x x QuickFF/2.2.4-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qwt/", "title": "Qwt", "text": ""}, {"location": "available_software/detail/Qwt/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qwt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Qwt, load one of these modules using a module load command like:

                  module load Qwt/6.2.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qwt/6.2.0-GCCcore-11.2.0 x x x x x x Qwt/6.2.0-GCCcore-10.3.0 - x x - x x Qwt/6.1.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/R-INLA/", "title": "R-INLA", "text": ""}, {"location": "available_software/detail/R-INLA/#available-modules", "title": "Available modules", "text": "

The overview below shows which R-INLA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R-INLA, load one of these modules using a module load command like:

                  module load R-INLA/24.01.18-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-INLA/24.01.18-foss-2023a x x x x x x R-INLA/21.05.02-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/R-bundle-Bioconductor/", "title": "R-bundle-Bioconductor", "text": ""}, {"location": "available_software/detail/R-bundle-Bioconductor/#available-modules", "title": "Available modules", "text": "

The overview below shows which R-bundle-Bioconductor installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

                  module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n
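A small sketch, assuming the usual EasyBuild behaviour that loading this bundle also loads a matching R module: after the module load, the bundled Bioconductor packages can be attached from R directly. The package name Biobase is used here only as an example of a bundled package.

# load the Bioconductor bundle (this should pull in the matching R/4.3.2-gfbf-2023a)
module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2
# check that a bundled package can be attached and report how many packages R can see
Rscript -e 'library(Biobase); nrow(installed.packages())'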

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x R-bundle-Bioconductor/3.15-foss-2022a-R-4.2.1 x x x x x x R-bundle-Bioconductor/3.15-foss-2021b-R-4.2.0 x x x x x x R-bundle-Bioconductor/3.14-foss-2021b-R-4.1.2 x x x x x x R-bundle-Bioconductor/3.13-foss-2021a-R-4.1.0 - x x - x x R-bundle-Bioconductor/3.12-foss-2020b-R-4.0.3 x x x x x x R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0 - x x - x x R-bundle-Bioconductor/3.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/R-bundle-CRAN/", "title": "R-bundle-CRAN", "text": ""}, {"location": "available_software/detail/R-bundle-CRAN/#available-modules", "title": "Available modules", "text": "

The overview below shows which R-bundle-CRAN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R-bundle-CRAN, load one of these modules using a module load command like:

                  module load R-bundle-CRAN/2023.12-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-bundle-CRAN/2023.12-foss-2023a x x x x x x"}, {"location": "available_software/detail/R/", "title": "R", "text": ""}, {"location": "available_software/detail/R/#available-modules", "title": "Available modules", "text": "

The overview below shows which R installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R, load one of these modules using a module load command like:

                  module load R/4.3.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R/4.3.2-gfbf-2023a x x x x x x R/4.2.2-foss-2022b x x x x x x R/4.2.1-foss-2022a x x x x x x R/4.2.0-foss-2021b x x x x x x R/4.1.2-foss-2021b x x x x x x R/4.1.0-foss-2021a x x x x x x R/4.0.5-fosscuda-2020b - - - - x - R/4.0.5-foss-2020b - x x x x x R/4.0.4-fosscuda-2020b - - - - x - R/4.0.4-foss-2020b - x x x x x R/4.0.3-fosscuda-2020b - - - - x - R/4.0.3-foss-2020b x x x x x x R/4.0.0-foss-2020a - x x - x x R/3.6.3-foss-2020a - - x - x x R/3.6.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/R2jags/", "title": "R2jags", "text": ""}, {"location": "available_software/detail/R2jags/#available-modules", "title": "Available modules", "text": "

The overview below shows which R2jags installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using R2jags, load one of these modules using a module load command like:

                  module load R2jags/0.7-1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R2jags/0.7-1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/RASPA2/", "title": "RASPA2", "text": ""}, {"location": "available_software/detail/RASPA2/#available-modules", "title": "Available modules", "text": "

The overview below shows which RASPA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RASPA2, load one of these modules using a module load command like:

                  module load RASPA2/2.0.41-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RASPA2/2.0.41-foss-2020b - x x x x x"}, {"location": "available_software/detail/RAxML-NG/", "title": "RAxML-NG", "text": ""}, {"location": "available_software/detail/RAxML-NG/#available-modules", "title": "Available modules", "text": "

The overview below shows which RAxML-NG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RAxML-NG, load one of these modules using a module load command like:

                  module load RAxML-NG/1.2.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RAxML-NG/1.2.0-GCC-12.3.0 x x x x x x RAxML-NG/1.0.3-GCC-10.2.0 - x x - x - RAxML-NG/0.9.0-gompi-2019b - x x - x x RAxML-NG/0.9.0-GCC-8.3.0 - - x - x -"}, {"location": "available_software/detail/RAxML/", "title": "RAxML", "text": ""}, {"location": "available_software/detail/RAxML/#available-modules", "title": "Available modules", "text": "

The overview below shows which RAxML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RAxML, load one of these modules using a module load command like:

                  module load RAxML/8.2.12-iimpi-2021b-hybrid-avx2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RAxML/8.2.12-iimpi-2021b-hybrid-avx2 x x x - x x RAxML/8.2.12-iimpi-2019b-hybrid-avx2 - x x - x x"}, {"location": "available_software/detail/RDFlib/", "title": "RDFlib", "text": ""}, {"location": "available_software/detail/RDFlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which RDFlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RDFlib, load one of these modules using a module load command like:

                  module load RDFlib/6.2.0-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDFlib/6.2.0-GCCcore-10.3.0 x x x - x x RDFlib/5.0.0-GCCcore-10.2.0 - x x - x x RDFlib/4.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RDKit/", "title": "RDKit", "text": ""}, {"location": "available_software/detail/RDKit/#available-modules", "title": "Available modules", "text": "

The overview below shows which RDKit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RDKit, load one of these modules using a module load command like:

                  module load RDKit/2022.09.4-foss-2022a\n
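A brief sketch showing that the RDKit module provides the Python bindings; the SMILES string is arbitrary and the one-liner only verifies that rdkit can be imported and used after the module is loaded.

# load RDKit (brings in a Python in which the rdkit package is importable)
module load RDKit/2022.09.4-foss-2022a
# round-trip a SMILES string through RDKit as a quick smoke test
python -c "from rdkit import Chem; print(Chem.MolToSmiles(Chem.MolFromSmiles('c1ccccc1O')))"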

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDKit/2022.09.4-foss-2022a x x x x x x RDKit/2022.03.5-foss-2021b x x x - x x RDKit/2020.09.3-foss-2019b-Python-3.7.4 - x x - x x RDKit/2020.03.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/RDP-Classifier/", "title": "RDP-Classifier", "text": ""}, {"location": "available_software/detail/RDP-Classifier/#available-modules", "title": "Available modules", "text": "

The overview below shows which RDP-Classifier installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RDP-Classifier, load one of these modules using a module load command like:

                  module load RDP-Classifier/2.13-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDP-Classifier/2.13-Java-11 x x x - x x RDP-Classifier/2.12-Java-1.8 - - - - - x"}, {"location": "available_software/detail/RE2/", "title": "RE2", "text": ""}, {"location": "available_software/detail/RE2/#available-modules", "title": "Available modules", "text": "

The overview below shows which RE2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RE2, load one of these modules using a module load command like:

                  module load RE2/2023-08-01-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RE2/2023-08-01-GCCcore-12.3.0 x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x RE2/2022-06-01-GCCcore-11.3.0 x x x x x x RE2/2022-02-01-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/RLCard/", "title": "RLCard", "text": ""}, {"location": "available_software/detail/RLCard/#available-modules", "title": "Available modules", "text": "

The overview below shows which RLCard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RLCard, load one of these modules using a module load command like:

                  module load RLCard/1.0.9-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RLCard/1.0.9-foss-2022a x x x - x x"}, {"location": "available_software/detail/RMBlast/", "title": "RMBlast", "text": ""}, {"location": "available_software/detail/RMBlast/#available-modules", "title": "Available modules", "text": "

The overview below shows which RMBlast installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RMBlast, load one of these modules using a module load command like:

                  module load RMBlast/2.11.0-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RMBlast/2.11.0-gompi-2020b x x x x x x"}, {"location": "available_software/detail/RNA-Bloom/", "title": "RNA-Bloom", "text": ""}, {"location": "available_software/detail/RNA-Bloom/#available-modules", "title": "Available modules", "text": "

The overview below shows which RNA-Bloom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RNA-Bloom, load one of these modules using a module load command like:

                  module load RNA-Bloom/2.0.1-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RNA-Bloom/2.0.1-GCC-12.3.0 x x x x x x RNA-Bloom/1.2.3-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ROOT/", "title": "ROOT", "text": ""}, {"location": "available_software/detail/ROOT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ROOT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ROOT, load one of these modules using a module load command like:

                  module load ROOT/6.26.06-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ROOT/6.26.06-foss-2022a x x x x x x ROOT/6.24.06-foss-2021b x x x x x x ROOT/6.20.04-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/RSEM/", "title": "RSEM", "text": ""}, {"location": "available_software/detail/RSEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which RSEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RSEM, load one of these modules using a module load command like:

                  module load RSEM/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RSEM/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/RSeQC/", "title": "RSeQC", "text": ""}, {"location": "available_software/detail/RSeQC/#available-modules", "title": "Available modules", "text": "

The overview below shows which RSeQC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RSeQC, load one of these modules using a module load command like:

                  module load RSeQC/4.0.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RSeQC/4.0.0-foss-2021b x x x - x x RSeQC/4.0.0-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/RStudio-Server/", "title": "RStudio-Server", "text": ""}, {"location": "available_software/detail/RStudio-Server/#available-modules", "title": "Available modules", "text": "

The overview below shows which RStudio-Server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RStudio-Server, load one of these modules using a module load command like:

                  module load RStudio-Server/2022.02.0-443-rhel-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RStudio-Server/2022.02.0-443-rhel-x86_64 x x x x x - RStudio-Server/1.3.959-foss-2020a-Java-11-R-4.0.0 - - - - - x"}, {"location": "available_software/detail/RTG-Tools/", "title": "RTG-Tools", "text": ""}, {"location": "available_software/detail/RTG-Tools/#available-modules", "title": "Available modules", "text": "

The overview below shows which RTG-Tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RTG-Tools, load one of these modules using a module load command like:

                  module load RTG-Tools/3.12.1-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RTG-Tools/3.12.1-Java-11 x x x x x x"}, {"location": "available_software/detail/Racon/", "title": "Racon", "text": ""}, {"location": "available_software/detail/Racon/#available-modules", "title": "Available modules", "text": "

The overview below shows which Racon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Racon, load one of these modules using a module load command like:

                  module load Racon/1.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Racon/1.5.0-GCCcore-12.3.0 x x x x x x Racon/1.5.0-GCCcore-11.3.0 x x x x x x Racon/1.5.0-GCCcore-11.2.0 x x x - x x Racon/1.4.21-GCCcore-10.3.0 x x x - x x Racon/1.4.21-GCCcore-10.2.0 - x x x x x Racon/1.4.13-GCCcore-9.3.0 - x x - x x Racon/1.4.13-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RagTag/", "title": "RagTag", "text": ""}, {"location": "available_software/detail/RagTag/#available-modules", "title": "Available modules", "text": "

The overview below shows which RagTag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RagTag, load one of these modules using a module load command like:

                  module load RagTag/2.0.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RagTag/2.0.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/Ragout/", "title": "Ragout", "text": ""}, {"location": "available_software/detail/Ragout/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ragout installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ragout, load one of these modules using a module load command like:

                  module load Ragout/2.3-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ragout/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/RapidJSON/", "title": "RapidJSON", "text": ""}, {"location": "available_software/detail/RapidJSON/#available-modules", "title": "Available modules", "text": "

The overview below shows which RapidJSON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RapidJSON, load one of these modules using a module load command like:

                  module load RapidJSON/1.1.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.3.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-9.3.0 x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Raven/", "title": "Raven", "text": ""}, {"location": "available_software/detail/Raven/#available-modules", "title": "Available modules", "text": "

The overview below shows which Raven installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Raven, load one of these modules using a module load command like:

                  module load Raven/1.8.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Raven/1.8.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/Ray-project/", "title": "Ray-project", "text": ""}, {"location": "available_software/detail/Ray-project/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ray-project installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ray-project, load one of these modules using a module load command like:

                  module load Ray-project/1.13.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ray-project/1.13.0-foss-2021b x x x - x x Ray-project/1.13.0-foss-2021a x x x - x x Ray-project/0.8.4-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Ray/", "title": "Ray", "text": ""}, {"location": "available_software/detail/Ray/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ray, load one of these modules using a module load command like:

                  module load Ray/0.8.4-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ray/0.8.4-foss-2019b-Python-3.7.4 - x - - - -"}, {"location": "available_software/detail/ReFrame/", "title": "ReFrame", "text": ""}, {"location": "available_software/detail/ReFrame/#available-modules", "title": "Available modules", "text": "

The overview below shows which ReFrame installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ReFrame, load one of these modules using a module load command like:

                  module load ReFrame/4.2.0\n
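A minimal check that the ReFrame framework is usable after loading the module; the mychecks/ path passed to -c is a placeholder for wherever your own regression tests live, and listing them assumes ReFrame's built-in generic configuration is sufficient on the cluster.

# load ReFrame and confirm the version of its command-line front-end
module load ReFrame/4.2.0
reframe --version
# list (without running) the checks found under a directory of your own tests
reframe -c mychecks/ -l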

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ReFrame/4.2.0 x x x x x x ReFrame/3.11.2 - x x x x x ReFrame/3.11.1 - x x - x x ReFrame/3.9.1 - x x - x x ReFrame/3.5.2 - x x - x x"}, {"location": "available_software/detail/Redis/", "title": "Redis", "text": ""}, {"location": "available_software/detail/Redis/#available-modules", "title": "Available modules", "text": "

The overview below shows which Redis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Redis, load one of these modules using a module load command like:

                  module load Redis/7.0.8-GCC-11.3.0\n
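A small sketch that only verifies the Redis binaries provided by the module; actually running a Redis server on a compute node is only sensible inside a job where the client runs alongside it.

# load Redis and check the bundled server and client binaries
module load Redis/7.0.8-GCC-11.3.0
redis-server --version
redis-cli --version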

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Redis/7.0.8-GCC-11.3.0 x x x x x x Redis/6.2.6-GCC-11.2.0 x x x - x x Redis/6.2.6-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/RegTools/", "title": "RegTools", "text": ""}, {"location": "available_software/detail/RegTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which RegTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RegTools, load one of these modules using a module load command like:

                  module load RegTools/1.0.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RegTools/1.0.0-foss-2022b x x x x x x RegTools/0.5.2-foss-2021b x x x x x x RegTools/0.5.2-foss-2020b - x x x x x RegTools/0.4.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/RepeatMasker/", "title": "RepeatMasker", "text": ""}, {"location": "available_software/detail/RepeatMasker/#available-modules", "title": "Available modules", "text": "

The overview below shows which RepeatMasker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RepeatMasker, load one of these modules using a module load command like:

                  module load RepeatMasker/4.1.2-p1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RepeatMasker/4.1.2-p1-foss-2020b x x x x x x"}, {"location": "available_software/detail/ResistanceGA/", "title": "ResistanceGA", "text": ""}, {"location": "available_software/detail/ResistanceGA/#available-modules", "title": "Available modules", "text": "

The overview below shows which ResistanceGA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ResistanceGA, load one of these modules using a module load command like:

                  module load ResistanceGA/4.2-5-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ResistanceGA/4.2-5-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/RevBayes/", "title": "RevBayes", "text": ""}, {"location": "available_software/detail/RevBayes/#available-modules", "title": "Available modules", "text": "

The overview below shows which RevBayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RevBayes, load one of these modules using a module load command like:

                  module load RevBayes/1.2.1-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RevBayes/1.2.1-gompi-2022a x x x x x x RevBayes/1.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Rgurobi/", "title": "Rgurobi", "text": ""}, {"location": "available_software/detail/Rgurobi/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rgurobi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Rgurobi, load one of these modules using a module load command like:

                  module load Rgurobi/9.5.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rgurobi/9.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/RheoTool/", "title": "RheoTool", "text": ""}, {"location": "available_software/detail/RheoTool/#available-modules", "title": "Available modules", "text": "

The overview below shows which RheoTool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RheoTool, load one of these modules using a module load command like:

                  module load RheoTool/5.0-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RheoTool/5.0-foss-2019b x x x - x x"}, {"location": "available_software/detail/Rmath/", "title": "Rmath", "text": ""}, {"location": "available_software/detail/Rmath/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rmath installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Rmath, load one of these modules using a module load command like:

                  module load Rmath/4.3.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rmath/4.3.2-foss-2023a x x x x x x Rmath/4.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/RnBeads/", "title": "RnBeads", "text": ""}, {"location": "available_software/detail/RnBeads/#available-modules", "title": "Available modules", "text": "

The overview below shows which RnBeads installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using RnBeads, load one of these modules using a module load command like:

                  module load RnBeads/2.6.0-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RnBeads/2.6.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/Roary/", "title": "Roary", "text": ""}, {"location": "available_software/detail/Roary/#available-modules", "title": "Available modules", "text": "

The overview below shows which Roary installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Roary, load one of these modules using a module load command like:

                  module load Roary/3.13.0-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Roary/3.13.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/Ruby/", "title": "Ruby", "text": ""}, {"location": "available_software/detail/Ruby/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ruby installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ruby, load one of these modules using a module load command like:

                  module load Ruby/3.0.1-GCCcore-11.2.0\n
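A minimal check of the Ruby installation provided by this module; it only confirms that the interpreter and the gem tool are on the PATH.

# load Ruby and report the interpreter and gem versions
module load Ruby/3.0.1-GCCcore-11.2.0
ruby --version
gem --version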

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ruby/3.0.1-GCCcore-11.2.0 x x x x x x Ruby/3.0.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Rust/", "title": "Rust", "text": ""}, {"location": "available_software/detail/Rust/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rust installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Rust, load one of these modules using a module load command like:

                  module load Rust/1.75.0-GCCcore-12.3.0\n
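A short sketch confirming that the Rust toolchain from this module is active; the project name hello is just an example, and the optional build step works offline because a freshly scaffolded project has no external dependencies.

# load the Rust toolchain and check the compiler and package manager it provides
module load Rust/1.75.0-GCCcore-12.3.0
rustc --version
cargo --version
# optionally scaffold and build a throw-away project to verify the toolchain end to end
cargo new hello && cd hello && cargo build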

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rust/1.75.0-GCCcore-12.3.0 x x x x x x Rust/1.75.0-GCCcore-12.2.0 x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x Rust/1.65.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-10.3.0 x x x - x x Rust/1.56.0-GCCcore-11.2.0 x x x - x x Rust/1.54.0-GCCcore-11.2.0 x x x x x x Rust/1.52.1-GCCcore-10.3.0 x x x x x x Rust/1.52.1-GCCcore-10.2.0 - - x - x - Rust/1.42.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SAMtools/", "title": "SAMtools", "text": ""}, {"location": "available_software/detail/SAMtools/#available-modules", "title": "Available modules", "text": "

The overview below shows which SAMtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SAMtools, load one of these modules using a module load command like:

                  module load SAMtools/1.18-GCC-12.3.0\n
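A minimal usage sketch; sample.bam is a placeholder for a coordinate-sorted BAM file of your own, not a file provided by the module.

# load SAMtools and confirm which version is active
module load SAMtools/1.18-GCC-12.3.0
samtools --version
# index a coordinate-sorted BAM so downstream tools can access it by region
samtools index sample.bam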

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SAMtools/1.18-GCC-12.3.0 x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x SAMtools/1.16.1-GCC-11.3.0 x x x x x x SAMtools/1.15-GCC-11.2.0 x x x - x x SAMtools/1.14-GCC-11.2.0 x x x x x x SAMtools/1.13-GCC-11.3.0 x x x x x x SAMtools/1.13-GCC-10.3.0 x x x - x x SAMtools/1.11-GCC-10.2.0 x x x x x x SAMtools/1.10-iccifort-2019.5.281 - x x - x x SAMtools/1.10-GCC-9.3.0 - x x - x x SAMtools/1.10-GCC-8.3.0 - x x - x x SAMtools/0.1.20-intel-2019b - x x - x x SAMtools/0.1.20-GCC-12.3.0 x x x x x x SAMtools/0.1.20-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SBCL/", "title": "SBCL", "text": ""}, {"location": "available_software/detail/SBCL/#available-modules", "title": "Available modules", "text": "

The overview below shows which SBCL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SBCL, load one of these modules using a module load command like:

                  module load SBCL/2.2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SBCL/2.2.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/SCENIC/", "title": "SCENIC", "text": ""}, {"location": "available_software/detail/SCENIC/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCENIC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCENIC, load one of these modules using a module load command like:

                  module load SCENIC/1.2.4-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCENIC/1.2.4-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/SCGid/", "title": "SCGid", "text": ""}, {"location": "available_software/detail/SCGid/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCGid installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCGid, load one of these modules using a module load command like:

                  module load SCGid/0.9b0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCGid/0.9b0-foss-2021b x x x - x x"}, {"location": "available_software/detail/SCOTCH/", "title": "SCOTCH", "text": ""}, {"location": "available_software/detail/SCOTCH/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCOTCH, load one of these modules using a module load command like:

                  module load SCOTCH/7.0.3-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCOTCH/7.0.3-gompi-2023a x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x SCOTCH/7.0.1-gompi-2022a x x x x x x SCOTCH/6.1.2-iimpi-2021b x x x x x x SCOTCH/6.1.2-gompi-2021b x x x x x x SCOTCH/6.1.0-iimpi-2021a - x x - x x SCOTCH/6.1.0-iimpi-2020b - x - - - - SCOTCH/6.1.0-gompi-2021a x x x x x x SCOTCH/6.1.0-gompi-2020b x x x x x x SCOTCH/6.0.9-iimpi-2020a - x x - x x SCOTCH/6.0.9-iimpi-2019b - x x - x x SCOTCH/6.0.9-gompi-2020a - x x - x x SCOTCH/6.0.9-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SCons/", "title": "SCons", "text": ""}, {"location": "available_software/detail/SCons/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCons installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCons, load one of these modules using a module load command like:

                  module load SCons/4.5.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCons/4.5.2-GCCcore-12.3.0 x x x x x x SCons/4.4.0-GCCcore-11.3.0 - - x - x - SCons/4.2.0-GCCcore-11.2.0 x x x - x x SCons/4.1.0.post1-GCCcore-10.3.0 - x x - x x SCons/4.1.0.post1-GCCcore-10.2.0 - x x - x x SCons/3.1.2-GCCcore-9.3.0 - x x - x x SCons/3.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SCopeLoomR/", "title": "SCopeLoomR", "text": ""}, {"location": "available_software/detail/SCopeLoomR/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCopeLoomR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SCopeLoomR, load one of these modules using a module load command like:

                  module load SCopeLoomR/0.13.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCopeLoomR/0.13.0-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/SDL2/", "title": "SDL2", "text": ""}, {"location": "available_software/detail/SDL2/#available-modules", "title": "Available modules", "text": "

The overview below shows which SDL2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SDL2, load one of these modules using a module load command like:

                  module load SDL2/2.28.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SDL2/2.28.2-GCCcore-12.3.0 x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x SDL2/2.0.20-GCCcore-11.2.0 x x x x x x SDL2/2.0.14-GCCcore-10.3.0 - x x - x x SDL2/2.0.14-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SDSL/", "title": "SDSL", "text": ""}, {"location": "available_software/detail/SDSL/#available-modules", "title": "Available modules", "text": "

The overview below shows which SDSL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SDSL, load one of these modules using a module load command like:

                  module load SDSL/2.1.1-20191211-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SDSL/2.1.1-20191211-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SEACells/", "title": "SEACells", "text": ""}, {"location": "available_software/detail/SEACells/#available-modules", "title": "Available modules", "text": "

The overview below shows which SEACells installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SEACells, load one of these modules using a module load command like:

                  module load SEACells/20230731-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SEACells/20230731-foss-2021a x x x x x x"}, {"location": "available_software/detail/SECAPR/", "title": "SECAPR", "text": ""}, {"location": "available_software/detail/SECAPR/#available-modules", "title": "Available modules", "text": "

The overview below shows which SECAPR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SECAPR, load one of these modules using a module load command like:

                  module load SECAPR/1.1.15-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SECAPR/1.1.15-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/SELFIES/", "title": "SELFIES", "text": ""}, {"location": "available_software/detail/SELFIES/#available-modules", "title": "Available modules", "text": "

The overview below shows which SELFIES installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SELFIES, load one of these modules using a module load command like:

                  module load SELFIES/2.1.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SELFIES/2.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/SEPP/", "title": "SEPP", "text": ""}, {"location": "available_software/detail/SEPP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SEPP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SEPP, load one of these modules using a module load command like:

                  module load SEPP/4.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SEPP/4.5.1-foss-2022a x x x x x x SEPP/4.5.1-foss-2021b x x x - x x SEPP/4.4.0-foss-2020b - x x x x x SEPP/4.3.10-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SHAP/", "title": "SHAP", "text": ""}, {"location": "available_software/detail/SHAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SHAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SHAP, load one of these modules using a module load command like:

                  module load SHAP/0.42.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SHAP/0.42.1-foss-2019b-Python-3.7.4 x x x - x x SHAP/0.41.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SISSO%2B%2B/", "title": "SISSO++", "text": ""}, {"location": "available_software/detail/SISSO%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which SISSO++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SISSO++, load one of these modules using a module load command like:

                  module load SISSO++/1.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SISSO++/1.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/SISSO/", "title": "SISSO", "text": ""}, {"location": "available_software/detail/SISSO/#available-modules", "title": "Available modules", "text": "

The overview below shows which SISSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SISSO, load one of these modules using a module load command like:

                  module load SISSO/3.1-20220324-iimpi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SISSO/3.1-20220324-iimpi-2021b x x x - x x SISSO/3.0.2-iimpi-2021b x x x - x x"}, {"location": "available_software/detail/SKESA/", "title": "SKESA", "text": ""}, {"location": "available_software/detail/SKESA/#available-modules", "title": "Available modules", "text": "

The overview below shows which SKESA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SKESA, load one of these modules using a module load command like:

                  module load SKESA/2.4.0-gompi-2021b_saute.1.3.0_1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SKESA/2.4.0-gompi-2021b_saute.1.3.0_1 x x x - x x"}, {"location": "available_software/detail/SLATEC/", "title": "SLATEC", "text": ""}, {"location": "available_software/detail/SLATEC/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLATEC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SLATEC, load one of these modules using a module load command like:

                  module load SLATEC/4.1-GCC-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLATEC/4.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SLEPc/", "title": "SLEPc", "text": ""}, {"location": "available_software/detail/SLEPc/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLEPc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SLEPc, load one of these modules using a module load command like:

                  module load SLEPc/3.18.2-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLEPc/3.18.2-intel-2021b x x x x x x SLEPc/3.17.2-foss-2022a x x x x x x SLEPc/3.15.1-foss-2021a - x x - x x SLEPc/3.12.2-intel-2019b-Python-3.7.4 - - x - x - SLEPc/3.12.2-intel-2019b-Python-2.7.16 - x x - x x SLEPc/3.12.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SLiM/", "title": "SLiM", "text": ""}, {"location": "available_software/detail/SLiM/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLiM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SLiM, load one of these modules using a module load command like:

                  module load SLiM/3.4-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLiM/3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/SMAP/", "title": "SMAP", "text": ""}, {"location": "available_software/detail/SMAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SMAP, load one of these modules using a module load command like:

                  module load SMAP/4.6.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMAP/4.6.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/SMC%2B%2B/", "title": "SMC++", "text": ""}, {"location": "available_software/detail/SMC%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which SMC++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SMC++, load one of these modules using a module load command like:

                  module load SMC++/1.15.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMC++/1.15.4-foss-2022a x x x - x x"}, {"location": "available_software/detail/SMV/", "title": "SMV", "text": ""}, {"location": "available_software/detail/SMV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SMV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SMV, load one of these modules using a module load command like:

                  module load SMV/6.7.17-iccifort-2020.4.304\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMV/6.7.17-iccifort-2020.4.304 - x x - x x"}, {"location": "available_software/detail/SNAP-ESA-python/", "title": "SNAP-ESA-python", "text": ""}, {"location": "available_software/detail/SNAP-ESA-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SNAP-ESA-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SNAP-ESA-python, load one of these modules using a module load command like:

                  module load SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18 x x x x x - SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-1.8-Python-2.7.18 x x x x - x"}, {"location": "available_software/detail/SNAP-ESA/", "title": "SNAP-ESA", "text": ""}, {"location": "available_software/detail/SNAP-ESA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SNAP-ESA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SNAP-ESA, load one of these modules using a module load command like:

                  module load SNAP-ESA/9.0.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP-ESA/9.0.0-Java-11 x x x x x x SNAP-ESA/9.0.0-Java-1.8 x x x x - x"}, {"location": "available_software/detail/SNAP/", "title": "SNAP", "text": ""}, {"location": "available_software/detail/SNAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SNAP, load one of these modules using a module load command like:

                  module load SNAP/2.0.1-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP/2.0.1-GCC-12.2.0 x x x x x x SNAP/2.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/SOAPdenovo-Trans/", "title": "SOAPdenovo-Trans", "text": ""}, {"location": "available_software/detail/SOAPdenovo-Trans/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SOAPdenovo-Trans installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SOAPdenovo-Trans, load one of these modules using a module load command like:

                  module load SOAPdenovo-Trans/1.0.5-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SOAPdenovo-Trans/1.0.5-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/SPAdes/", "title": "SPAdes", "text": ""}, {"location": "available_software/detail/SPAdes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SPAdes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SPAdes, load one of these modules using a module load command like:

                  module load SPAdes/3.15.5-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPAdes/3.15.5-GCC-11.3.0 x x x x x x SPAdes/3.15.4-GCC-12.3.0 x x x x x x SPAdes/3.15.4-GCC-12.2.0 x x x x x x SPAdes/3.15.3-GCC-11.2.0 x x x - x x SPAdes/3.15.2-GCC-10.2.0-Python-2.7.18 - x x x x x SPAdes/3.15.2-GCC-10.2.0 - x x x x x SPAdes/3.14.1-GCC-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SPM/", "title": "SPM", "text": ""}, {"location": "available_software/detail/SPM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SPM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SPM, load one of these modules using a module load command like:

                  module load SPM/12.5_r7771-MATLAB-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPM/12.5_r7771-MATLAB-2021b x x x - x x"}, {"location": "available_software/detail/SPOTPY/", "title": "SPOTPY", "text": ""}, {"location": "available_software/detail/SPOTPY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SPOTPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SPOTPY, load one of these modules using a module load command like:

                  module load SPOTPY/1.5.14-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPOTPY/1.5.14-intel-2021b x x x - x x"}, {"location": "available_software/detail/SQLite/", "title": "SQLite", "text": ""}, {"location": "available_software/detail/SQLite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SQLite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SQLite, load one of these modules using a module load command like:

                  module load SQLite/3.43.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SQLite/3.43.1-GCCcore-13.2.0 x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x SQLite/3.38.3-GCCcore-11.3.0 x x x x x x SQLite/3.36-GCCcore-11.2.0 x x x x x x SQLite/3.35.4-GCCcore-10.3.0 x x x x x x SQLite/3.33.0-GCCcore-10.2.0 x x x x x x SQLite/3.31.1-GCCcore-9.3.0 x x x x x x SQLite/3.29.0-GCCcore-8.3.0 x x x x x x SQLite/3.27.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/SRA-Toolkit/", "title": "SRA-Toolkit", "text": ""}, {"location": "available_software/detail/SRA-Toolkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SRA-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SRA-Toolkit, load one of these modules using a module load command like:

                  module load SRA-Toolkit/3.0.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRA-Toolkit/3.0.3-gompi-2022a x x x x x x SRA-Toolkit/3.0.0-gompi-2021b x x x x x x SRA-Toolkit/3.0.0-centos_linux64 x x x - x x SRA-Toolkit/2.10.9-gompi-2020b - x x - x x SRA-Toolkit/2.10.8-gompi-2020a - x x - x x SRA-Toolkit/2.10.4-gompi-2019b - x x - x x SRA-Toolkit/2.9.6-1-centos_linux64 - x x - x x"}, {"location": "available_software/detail/SRPRISM/", "title": "SRPRISM", "text": ""}, {"location": "available_software/detail/SRPRISM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SRPRISM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SRPRISM, load one of these modules using a module load command like:

                  module load SRPRISM/3.1.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRPRISM/3.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SRST2/", "title": "SRST2", "text": ""}, {"location": "available_software/detail/SRST2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SRST2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SRST2, load one of these modules using a module load command like:

                  module load SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SSPACE_Basic/", "title": "SSPACE_Basic", "text": ""}, {"location": "available_software/detail/SSPACE_Basic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SSPACE_Basic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SSPACE_Basic, load one of these modules using a module load command like:

                  module load SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18 - x x - x -"}, {"location": "available_software/detail/SSW/", "title": "SSW", "text": ""}, {"location": "available_software/detail/SSW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SSW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SSW, load one of these modules using a module load command like:

                  module load SSW/1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SSW/1.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/STACEY/", "title": "STACEY", "text": ""}, {"location": "available_software/detail/STACEY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STACEY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STACEY, load one of these modules using a module load command like:

                  module load STACEY/1.2.5-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STACEY/1.2.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/STAR/", "title": "STAR", "text": ""}, {"location": "available_software/detail/STAR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STAR, load one of these modules using a module load command like:

                  module load STAR/2.7.11a-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STAR/2.7.11a-GCC-12.3.0 x x x x x x STAR/2.7.10b-GCC-11.3.0 x x x x x x STAR/2.7.9a-GCC-11.2.0 x x x x x x STAR/2.7.6a-GCC-10.2.0 - x x x x x STAR/2.7.4a-GCC-9.3.0 - x x - x - STAR/2.7.3a-GCC-8.3.0 - x x - x - STAR/2.7.2b-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/STREAM/", "title": "STREAM", "text": ""}, {"location": "available_software/detail/STREAM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STREAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STREAM, load one of these modules using a module load command like:

                  module load STREAM/5.10-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STREAM/5.10-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/STRique/", "title": "STRique", "text": ""}, {"location": "available_software/detail/STRique/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STRique installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using STRique, load one of these modules using a module load command like:

                  module load STRique/0.4.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STRique/0.4.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/SUNDIALS/", "title": "SUNDIALS", "text": ""}, {"location": "available_software/detail/SUNDIALS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SUNDIALS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SUNDIALS, load one of these modules using a module load command like:

                  module load SUNDIALS/6.6.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SUNDIALS/6.6.0-foss-2023a x x x x x x SUNDIALS/6.2.0-intel-2021b x x x - x x SUNDIALS/5.7.0-intel-2020b - x x x x x SUNDIALS/5.7.0-fosscuda-2020b - - - - x - SUNDIALS/5.7.0-foss-2020b - x x x x x SUNDIALS/5.1.0-intel-2019b - x x - x x SUNDIALS/5.1.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/SUPPA/", "title": "SUPPA", "text": ""}, {"location": "available_software/detail/SUPPA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SUPPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SUPPA, load one of these modules using a module load command like:

                  module load SUPPA/2.3-20231005-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SUPPA/2.3-20231005-foss-2022b x x x x x x"}, {"location": "available_software/detail/SVIM/", "title": "SVIM", "text": ""}, {"location": "available_software/detail/SVIM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SVIM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SVIM, load one of these modules using a module load command like:

                  module load SVIM/2.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SVIM/2.0.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SWIG/", "title": "SWIG", "text": ""}, {"location": "available_software/detail/SWIG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SWIG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SWIG, load one of these modules using a module load command like:

                  module load SWIG/4.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SWIG/4.1.1-GCCcore-12.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.2.0 x x x x x x SWIG/4.0.2-GCCcore-10.3.0 x x x x x x SWIG/4.0.2-GCCcore-10.2.0 x x x x x x SWIG/4.0.1-GCCcore-9.3.0 x x x x x x SWIG/4.0.1-GCCcore-8.3.0 - x x - x x SWIG/3.0.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Sabre/", "title": "Sabre", "text": ""}, {"location": "available_software/detail/Sabre/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sabre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sabre, load one of these modules using a module load command like:

                  module load Sabre/2013-09-28-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sabre/2013-09-28-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/Sailfish/", "title": "Sailfish", "text": ""}, {"location": "available_software/detail/Sailfish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sailfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sailfish, load one of these modules using a module load command like:

                  module load Sailfish/0.10.1-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sailfish/0.10.1-gompi-2019b - x - - - x"}, {"location": "available_software/detail/Salmon/", "title": "Salmon", "text": ""}, {"location": "available_software/detail/Salmon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Salmon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Salmon, load one of these modules using a module load command like:

                  module load Salmon/1.9.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Salmon/1.9.0-GCC-11.3.0 x x x x x x Salmon/1.4.0-gompi-2020b - x x x x x Salmon/1.1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Sambamba/", "title": "Sambamba", "text": ""}, {"location": "available_software/detail/Sambamba/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sambamba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sambamba, load one of these modules using a module load command like:

                  module load Sambamba/1.0.1-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sambamba/1.0.1-GCC-11.3.0 x x x x x x Sambamba/0.8.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Satsuma2/", "title": "Satsuma2", "text": ""}, {"location": "available_software/detail/Satsuma2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Satsuma2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Satsuma2, load one of these modules using a module load command like:

                  module load Satsuma2/20220304-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Satsuma2/20220304-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/ScaFaCoS/", "title": "ScaFaCoS", "text": ""}, {"location": "available_software/detail/ScaFaCoS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ScaFaCoS, load one of these modules using a module load command like:

                  module load ScaFaCoS/1.0.1-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ScaFaCoS/1.0.1-intel-2020a - x x - x x ScaFaCoS/1.0.1-foss-2021b x x x - x x ScaFaCoS/1.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/ScaLAPACK/", "title": "ScaLAPACK", "text": ""}, {"location": "available_software/detail/ScaLAPACK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ScaLAPACK, load one of these modules using a module load command like:

                  module load ScaLAPACK/2.2.0-gompi-2023b-fb\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022a-fb x x x x x x ScaLAPACK/2.1.0-iimpi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompic-2020b x - - - x - ScaLAPACK/2.1.0-gompi-2021b-fb x x x x x x ScaLAPACK/2.1.0-gompi-2021a-fb x x x x x x ScaLAPACK/2.1.0-gompi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompi-2020b x x x x x x ScaLAPACK/2.1.0-gompi-2020a - x x - x x ScaLAPACK/2.0.2-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SciPy-bundle/", "title": "SciPy-bundle", "text": ""}, {"location": "available_software/detail/SciPy-bundle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SciPy-bundle, load one of these modules using a module load command like:

                  module load SciPy-bundle/2023.11-gfbf-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SciPy-bundle/2023.11-gfbf-2023b x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x SciPy-bundle/2022.05-intel-2022a x x x x x x SciPy-bundle/2022.05-foss-2022a x x x x x x SciPy-bundle/2021.10-intel-2021b x x x x x x SciPy-bundle/2021.10-foss-2021b-Python-2.7.18 x x x x x x SciPy-bundle/2021.10-foss-2021b x x x x x x SciPy-bundle/2021.05-intel-2021a - x x - x x SciPy-bundle/2021.05-gomkl-2021a x x x x x x SciPy-bundle/2021.05-foss-2021a x x x x x x SciPy-bundle/2020.11-intelcuda-2020b - - - - x - SciPy-bundle/2020.11-intel-2020b - x x - x x SciPy-bundle/2020.11-fosscuda-2020b x - - - x - SciPy-bundle/2020.11-foss-2020b-Python-2.7.18 - x x x x x SciPy-bundle/2020.11-foss-2020b x x x x x x SciPy-bundle/2020.03-iomkl-2020a-Python-3.8.2 - x - - - - SciPy-bundle/2020.03-intel-2020a-Python-3.8.2 x x x x x x SciPy-bundle/2020.03-intel-2020a-Python-2.7.18 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-3.8.2 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-2.7.18 - - x - x x SciPy-bundle/2019.10-intel-2019b-Python-3.7.4 - x x - x x SciPy-bundle/2019.10-intel-2019b-Python-2.7.16 - x x - x x SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 x x x - x x SciPy-bundle/2019.10-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Seaborn/", "title": "Seaborn", "text": ""}, {"location": "available_software/detail/Seaborn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Seaborn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Seaborn, load one of these modules using a module load command like:

                  module load Seaborn/0.13.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Seaborn/0.13.2-gfbf-2023a x x x x x x Seaborn/0.12.2-foss-2022b x x x x x x Seaborn/0.12.1-foss-2022a x x x x x x Seaborn/0.11.2-foss-2021b x x x x x x Seaborn/0.11.2-foss-2021a x x x x x x Seaborn/0.11.1-intel-2020b - x x - x x Seaborn/0.11.1-fosscuda-2020b x - - - x - Seaborn/0.11.1-foss-2020b - x x x x x Seaborn/0.10.1-intel-2020b - x x - x x Seaborn/0.10.1-intel-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.1-foss-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.0-intel-2019b-Python-3.7.4 - x x - x x Seaborn/0.10.0-foss-2019b-Python-3.7.4 - x x - x x Seaborn/0.9.1-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SemiBin/", "title": "SemiBin", "text": ""}, {"location": "available_software/detail/SemiBin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SemiBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SemiBin, load one of these modules using a module load command like:

                  module load SemiBin/2.0.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SemiBin/2.0.2-foss-2022a-CUDA-11.7.0 x - x - x - SemiBin/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Sentence-Transformers/", "title": "Sentence-Transformers", "text": ""}, {"location": "available_software/detail/Sentence-Transformers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sentence-Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sentence-Transformers, load one of these modules using a module load command like:

                  module load Sentence-Transformers/2.2.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sentence-Transformers/2.2.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/SentencePiece/", "title": "SentencePiece", "text": ""}, {"location": "available_software/detail/SentencePiece/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SentencePiece installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SentencePiece, load one of these modules using a module load command like:

                  module load SentencePiece/0.1.99-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SentencePiece/0.1.99-GCC-12.2.0 x x x x x x SentencePiece/0.1.97-GCC-11.3.0 x x x x x x SentencePiece/0.1.96-GCC-10.3.0 x x x - x x SentencePiece/0.1.85-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/SeqAn/", "title": "SeqAn", "text": ""}, {"location": "available_software/detail/SeqAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeqAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeqAn, load one of these modules using a module load command like:

                  module load SeqAn/2.4.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqAn/2.4.0-GCCcore-11.2.0 x x x - x x SeqAn/2.4.0-GCCcore-10.2.0 - x x x x x SeqAn/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SeqKit/", "title": "SeqKit", "text": ""}, {"location": "available_software/detail/SeqKit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeqKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeqKit, load one of these modules using a module load command like:

                  module load SeqKit/2.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqKit/2.1.0 - x x - x x"}, {"location": "available_software/detail/SeqLib/", "title": "SeqLib", "text": ""}, {"location": "available_software/detail/SeqLib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeqLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeqLib, load one of these modules using a module load command like:

                  module load SeqLib/1.2.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqLib/1.2.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/Serf/", "title": "Serf", "text": ""}, {"location": "available_software/detail/Serf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Serf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Serf, load one of these modules using a module load command like:

                  module load Serf/1.3.9-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Serf/1.3.9-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Seurat/", "title": "Seurat", "text": ""}, {"location": "available_software/detail/Seurat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Seurat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Seurat, load one of these modules using a module load command like:

                  module load Seurat/4.3.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Seurat/4.3.0-foss-2022a-R-4.2.1 x x x x x x Seurat/4.3.0-foss-2021b-R-4.1.2 x x x - x x Seurat/4.2.0-foss-2022a-R-4.2.1 x x x - x x Seurat/4.0.1-foss-2020b-R-4.0.3 - x x x x x Seurat/3.1.5-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/SeuratData/", "title": "SeuratData", "text": ""}, {"location": "available_software/detail/SeuratData/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeuratData installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeuratData, load one of these modules using a module load command like:

                  module load SeuratData/20210514-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratData/20210514-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/SeuratDisk/", "title": "SeuratDisk", "text": ""}, {"location": "available_software/detail/SeuratDisk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeuratDisk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeuratDisk, load one of these modules using a module load command like:

                  module load SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/SeuratWrappers/", "title": "SeuratWrappers", "text": ""}, {"location": "available_software/detail/SeuratWrappers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeuratWrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SeuratWrappers, load one of these modules using a module load command like:

                  module load SeuratWrappers/20210528-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratWrappers/20210528-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/Shapely/", "title": "Shapely", "text": ""}, {"location": "available_software/detail/Shapely/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Shapely installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Shapely, load one of these modules using a module load command like:

                  module load Shapely/2.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Shapely/2.0.1-gfbf-2023a x x x x x x Shapely/2.0.1-foss-2022b x x x x x x Shapely/1.8a1-iccifort-2020.4.304 - x x x x x Shapely/1.8a1-GCC-10.3.0 x - - - x - Shapely/1.8a1-GCC-10.2.0 - x x x x x Shapely/1.8.2-foss-2022a x x x x x x Shapely/1.8.2-foss-2021b x x x x x x Shapely/1.8.1.post1-GCC-11.2.0 x x x - x x Shapely/1.7.1-GCC-9.3.0-Python-3.8.2 - x x - x x Shapely/1.7.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Shasta/", "title": "Shasta", "text": ""}, {"location": "available_software/detail/Shasta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Shasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Shasta, load one of these modules using a module load command like:

                  module load Shasta/0.8.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Shasta/0.8.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Short-Pair/", "title": "Short-Pair", "text": ""}, {"location": "available_software/detail/Short-Pair/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Short-Pair installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Short-Pair, load one of these modules using a module load command like:

                  module load Short-Pair/20170125-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Short-Pair/20170125-foss-2021b x x x - x x"}, {"location": "available_software/detail/SiNVICT/", "title": "SiNVICT", "text": ""}, {"location": "available_software/detail/SiNVICT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SiNVICT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SiNVICT, load one of these modules using a module load command like:

                  module load SiNVICT/1.0-20180817-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SiNVICT/1.0-20180817-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/Sibelia/", "title": "Sibelia", "text": ""}, {"location": "available_software/detail/Sibelia/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sibelia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sibelia, load one of these modules using a module load command like:

                  module load Sibelia/3.0.7-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sibelia/3.0.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimNIBS/", "title": "SimNIBS", "text": ""}, {"location": "available_software/detail/SimNIBS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimNIBS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimNIBS, load one of these modules using a module load command like:

                  module load SimNIBS/3.2.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimNIBS/3.2.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimPEG/", "title": "SimPEG", "text": ""}, {"location": "available_software/detail/SimPEG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimPEG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimPEG, load one of these modules using a module load command like:

                  module load SimPEG/0.18.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimPEG/0.18.1-intel-2021b x x x - x x SimPEG/0.18.1-foss-2021b x x x - x x SimPEG/0.14.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SimpleElastix/", "title": "SimpleElastix", "text": ""}, {"location": "available_software/detail/SimpleElastix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimpleElastix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimpleElastix, load one of these modules using a module load command like:

                  module load SimpleElastix/1.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimpleElastix/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SimpleITK/", "title": "SimpleITK", "text": ""}, {"location": "available_software/detail/SimpleITK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimpleITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SimpleITK, load one of these modules using a module load command like:

                  module load SimpleITK/2.1.1.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimpleITK/2.1.1.2-foss-2022a x x x x x x SimpleITK/2.1.0-fosscuda-2020b x - - - x - SimpleITK/2.1.0-foss-2020b - x x x x x SimpleITK/1.2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SlamDunk/", "title": "SlamDunk", "text": ""}, {"location": "available_software/detail/SlamDunk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SlamDunk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SlamDunk, load one of these modules using a module load command like:

                  module load SlamDunk/0.4.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SlamDunk/0.4.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/Sniffles/", "title": "Sniffles", "text": ""}, {"location": "available_software/detail/Sniffles/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sniffles installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Sniffles, load one of these modules using a module load command like:

                  module load Sniffles/2.0.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sniffles/2.0.7-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/SoX/", "title": "SoX", "text": ""}, {"location": "available_software/detail/SoX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SoX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SoX, load one of these modules using a module load command like:

                  module load SoX/14.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SoX/14.4.2-GCCcore-11.3.0 x x x x x x SoX/14.4.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Spark/", "title": "Spark", "text": ""}, {"location": "available_software/detail/Spark/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Spark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Spark, load one of these modules using a module load command like:

                  module load Spark/3.5.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Spark/3.5.0-foss-2023a x x x x x x Spark/3.2.1-foss-2021b x x x - x x Spark/3.1.1-fosscuda-2020b - - - - x - Spark/2.4.5-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/SpatialDE/", "title": "SpatialDE", "text": ""}, {"location": "available_software/detail/SpatialDE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SpatialDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SpatialDE, load one of these modules using a module load command like:

                  module load SpatialDE/1.1.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SpatialDE/1.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Spyder/", "title": "Spyder", "text": ""}, {"location": "available_software/detail/Spyder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Spyder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Spyder, load one of these modules using a module load command like:

                  module load Spyder/4.1.5-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Spyder/4.1.5-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SqueezeMeta/", "title": "SqueezeMeta", "text": ""}, {"location": "available_software/detail/SqueezeMeta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SqueezeMeta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SqueezeMeta, load one of these modules using a module load command like:

                  module load SqueezeMeta/1.5.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SqueezeMeta/1.5.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Squidpy/", "title": "Squidpy", "text": ""}, {"location": "available_software/detail/Squidpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Squidpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Squidpy, load one of these modules using a module load command like:

                  module load Squidpy/1.2.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Squidpy/1.2.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Stacks/", "title": "Stacks", "text": ""}, {"location": "available_software/detail/Stacks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Stacks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Stacks, load one of these modules using a module load command like:

                  module load Stacks/2.53-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Stacks/2.53-iccifort-2019.5.281 - x x - x - Stacks/2.5-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/Stata/", "title": "Stata", "text": ""}, {"location": "available_software/detail/Stata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Stata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Stata, load one of these modules using a module load command like:

                  module load Stata/15\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Stata/15 - x x x x x"}, {"location": "available_software/detail/Statistics-R/", "title": "Statistics-R", "text": ""}, {"location": "available_software/detail/Statistics-R/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Statistics-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Statistics-R, load one of these modules using a module load command like:

                  module load Statistics-R/0.34-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Statistics-R/0.34-foss-2020a - x x - x x"}, {"location": "available_software/detail/StringTie/", "title": "StringTie", "text": ""}, {"location": "available_software/detail/StringTie/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which StringTie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using StringTie, load one of these modules using a module load command like:

                  module load StringTie/2.2.1-GCC-11.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty StringTie/2.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x StringTie/2.2.1-GCC-11.2.0 x x x x x x StringTie/2.1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/Structure/", "title": "Structure", "text": ""}, {"location": "available_software/detail/Structure/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Structure installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Structure, load one of these modules using a module load command like:

                  module load Structure/2.3.4-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Structure/2.3.4-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/Structure_threader/", "title": "Structure_threader", "text": ""}, {"location": "available_software/detail/Structure_threader/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Structure_threader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Structure_threader, load one of these modules using a module load command like:

                  module load Structure_threader/1.3.10-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Structure_threader/1.3.10-foss-2022b x x x x x x"}, {"location": "available_software/detail/SuAVE-biomat/", "title": "SuAVE-biomat", "text": ""}, {"location": "available_software/detail/SuAVE-biomat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SuAVE-biomat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuAVE-biomat, load one of these modules using a module load command like:

                  module load SuAVE-biomat/2.0.0-20230815-intel-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuAVE-biomat/2.0.0-20230815-intel-2023a x x x x x x"}, {"location": "available_software/detail/Subread/", "title": "Subread", "text": ""}, {"location": "available_software/detail/Subread/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Subread installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Subread, load one of these modules using a module load command like:

                  module load Subread/2.0.3-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Subread/2.0.3-GCC-9.3.0 - x x - x - Subread/2.0.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Subversion/", "title": "Subversion", "text": ""}, {"location": "available_software/detail/Subversion/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Subversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Subversion, load one of these modules using a module load command like:

                  module load Subversion/1.14.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Subversion/1.14.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/SuiteSparse/", "title": "SuiteSparse", "text": ""}, {"location": "available_software/detail/SuiteSparse/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SuiteSparse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuiteSparse, load one of these modules using a module load command like:

                  module load SuiteSparse/7.1.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuiteSparse/7.1.0-foss-2023a x x x x x x SuiteSparse/5.13.0-foss-2022b-METIS-5.1.0 x x x x x x SuiteSparse/5.13.0-foss-2022a-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-intel-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021a-METIS-5.1.0 x x x x x x SuiteSparse/5.8.1-foss-2020b-METIS-5.1.0 x x x x x x SuiteSparse/5.7.1-intel-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.7.1-foss-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-intel-2019b-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-foss-2019b-METIS-5.1.0 x x x - x x"}, {"location": "available_software/detail/SuperLU/", "title": "SuperLU", "text": ""}, {"location": "available_software/detail/SuperLU/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SuperLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuperLU, load one of these modules using a module load command like:

                  module load SuperLU/5.2.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuperLU/5.2.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/SuperLU_DIST/", "title": "SuperLU_DIST", "text": ""}, {"location": "available_software/detail/SuperLU_DIST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SuperLU_DIST installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SuperLU_DIST, load one of these modules using a module load command like:

                  module load SuperLU_DIST/8.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuperLU_DIST/8.1.0-foss-2022a x - - x - - SuperLU_DIST/5.4.0-intel-2020a-trisolve-merge - x x - x x"}, {"location": "available_software/detail/Szip/", "title": "Szip", "text": ""}, {"location": "available_software/detail/Szip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Szip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Szip, load one of these modules using a module load command like:

                  module load Szip/2.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Szip/2.1.1-GCCcore-12.3.0 x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x Szip/2.1.1-GCCcore-11.3.0 x x x x x x Szip/2.1.1-GCCcore-11.2.0 x x x x x x Szip/2.1.1-GCCcore-10.3.0 x x x x x x Szip/2.1.1-GCCcore-10.2.0 x x x x x x Szip/2.1.1-GCCcore-9.3.0 x x x x x x Szip/2.1.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/TALON/", "title": "TALON", "text": ""}, {"location": "available_software/detail/TALON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TALON installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TALON, load one of these modules using a module load command like:

                  module load TALON/5.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TALON/5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/TAMkin/", "title": "TAMkin", "text": ""}, {"location": "available_software/detail/TAMkin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TAMkin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TAMkin, load one of these modules using a module load command like:

                  module load TAMkin/1.2.6-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TAMkin/1.2.6-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/TCLAP/", "title": "TCLAP", "text": ""}, {"location": "available_software/detail/TCLAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TCLAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TCLAP, load one of these modules using a module load command like:

                  module load TCLAP/1.2.4-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TCLAP/1.2.4-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/TELEMAC-MASCARET/", "title": "TELEMAC-MASCARET", "text": ""}, {"location": "available_software/detail/TELEMAC-MASCARET/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TELEMAC-MASCARET installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TELEMAC-MASCARET, load one of these modules using a module load command like:

                  module load TELEMAC-MASCARET/8p3r1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TELEMAC-MASCARET/8p3r1-foss-2021b x x x - x x"}, {"location": "available_software/detail/TEtranscripts/", "title": "TEtranscripts", "text": ""}, {"location": "available_software/detail/TEtranscripts/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TEtranscripts installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TEtranscripts, load one of these modules using a module load command like:

                  module load TEtranscripts/2.2.0-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TEtranscripts/2.2.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/TOBIAS/", "title": "TOBIAS", "text": ""}, {"location": "available_software/detail/TOBIAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TOBIAS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TOBIAS, load one of these modules using a module load command like:

                  module load TOBIAS/0.12.12-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TOBIAS/0.12.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/TOPAS/", "title": "TOPAS", "text": ""}, {"location": "available_software/detail/TOPAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TOPAS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TOPAS, load one of these modules using a module load command like:

                  module load TOPAS/3.9-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TOPAS/3.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/TRF/", "title": "TRF", "text": ""}, {"location": "available_software/detail/TRF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TRF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TRF, load one of these modules using a module load command like:

                  module load TRF/4.09.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TRF/4.09.1-GCCcore-11.3.0 x x x x x x TRF/4.09.1-GCCcore-11.2.0 x x x - x x TRF/4.09.1-GCCcore-10.2.0 x x x x x x TRF/4.09-linux64 - - - - - x"}, {"location": "available_software/detail/TRUST4/", "title": "TRUST4", "text": ""}, {"location": "available_software/detail/TRUST4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TRUST4 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TRUST4, load one of these modules using a module load command like:

                  module load TRUST4/1.0.6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TRUST4/1.0.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Tcl/", "title": "Tcl", "text": ""}, {"location": "available_software/detail/Tcl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tcl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Tcl, load one of these modules using a module load command like:

                  module load Tcl/8.6.13-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tcl/8.6.13-GCCcore-13.2.0 x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x Tcl/8.6.12-GCCcore-11.3.0 x x x x x x Tcl/8.6.11-GCCcore-11.2.0 x x x x x x Tcl/8.6.11-GCCcore-10.3.0 x x x x x x Tcl/8.6.10-GCCcore-10.2.0 x x x x x x Tcl/8.6.10-GCCcore-9.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/TensorFlow/", "title": "TensorFlow", "text": ""}, {"location": "available_software/detail/TensorFlow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TensorFlow, load one of these modules using a module load command like:

                  module load TensorFlow/2.13.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TensorFlow/2.13.0-foss-2023a x x x x x x TensorFlow/2.13.0-foss-2022b x x x x x x TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0 x - x - x - TensorFlow/2.11.0-foss-2022a x x x x x x TensorFlow/2.8.4-foss-2021b - - - x - - TensorFlow/2.7.1-foss-2021b-CUDA-11.4.1 x - - - x - TensorFlow/2.7.1-foss-2021b x x x x x x TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1 x - - - x - TensorFlow/2.6.0-foss-2021a x x x x x x TensorFlow/2.5.3-foss-2021a x x x - x x TensorFlow/2.5.0-fosscuda-2020b x - - - x - TensorFlow/2.5.0-foss-2020b - x x x x x TensorFlow/2.4.1-fosscuda-2020b x - - - x - TensorFlow/2.4.1-foss-2020b x x x x x x TensorFlow/2.3.1-foss-2020a-Python-3.8.2 - x x - x x TensorFlow/2.2.3-foss-2020b - x x x x x TensorFlow/2.2.2-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.2.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.1.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/1.15.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Theano/", "title": "Theano", "text": ""}, {"location": "available_software/detail/Theano/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Theano installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Theano, load one of these modules using a module load command like:

                  module load Theano/1.1.2-intel-2021b-PyMC\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Theano/1.1.2-intel-2021b-PyMC x x x - x x Theano/1.1.2-intel-2020b-PyMC - - x - x x Theano/1.1.2-fosscuda-2020b-PyMC x - - - x - Theano/1.1.2-foss-2020b-PyMC - x x x x x Theano/1.0.4-intel-2019b-Python-3.7.4 - - x - x x Theano/1.0.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Tk/", "title": "Tk", "text": ""}, {"location": "available_software/detail/Tk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Tk, load one of these modules using a module load command like:

                  module load Tk/8.6.13-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tk/8.6.13-GCCcore-12.3.0 x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x Tk/8.6.12-GCCcore-11.3.0 x x x x x x Tk/8.6.11-GCCcore-11.2.0 x x x x x x Tk/8.6.11-GCCcore-10.3.0 x x x x x x Tk/8.6.10-GCCcore-10.2.0 x x x x x x Tk/8.6.10-GCCcore-9.3.0 x x x x x x Tk/8.6.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Tkinter/", "title": "Tkinter", "text": ""}, {"location": "available_software/detail/Tkinter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tkinter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Tkinter, load one of these modules using a module load command like:

                  module load Tkinter/3.11.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x Tkinter/3.10.4-GCCcore-11.3.0 x x x x x x Tkinter/3.9.6-GCCcore-11.2.0 x x x x x x Tkinter/3.9.5-GCCcore-10.3.0 x x x x x x Tkinter/3.8.6-GCCcore-10.2.0 x x x x x x Tkinter/3.8.2-GCCcore-9.3.0 x x x x x x Tkinter/3.7.4-GCCcore-8.3.0 - x x - x x Tkinter/2.7.18-GCCcore-10.2.0 - x x x x x Tkinter/2.7.18-GCCcore-9.3.0 - x x - x x Tkinter/2.7.16-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Togl/", "title": "Togl", "text": ""}, {"location": "available_software/detail/Togl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Togl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Togl, load one of these modules using a module load command like:

                  module load Togl/2.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Togl/2.0-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Tombo/", "title": "Tombo", "text": ""}, {"location": "available_software/detail/Tombo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Tombo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Tombo, load one of these modules using a module load command like:

                  module load Tombo/1.5.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tombo/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/TopHat/", "title": "TopHat", "text": ""}, {"location": "available_software/detail/TopHat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TopHat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TopHat, load one of these modules using a module load command like:

                  module load TopHat/2.1.2-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TopHat/2.1.2-iimpi-2020a - x x - x x TopHat/2.1.2-gompi-2020a - x x - x x TopHat/2.1.2-GCC-11.3.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-11.2.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/TransDecoder/", "title": "TransDecoder", "text": ""}, {"location": "available_software/detail/TransDecoder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TransDecoder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TransDecoder, load one of these modules using a module load command like:

                  module load TransDecoder/5.5.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TransDecoder/5.5.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/TranscriptClean/", "title": "TranscriptClean", "text": ""}, {"location": "available_software/detail/TranscriptClean/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TranscriptClean installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TranscriptClean, load one of these modules using a module load command like:

                  module load TranscriptClean/2.0.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TranscriptClean/2.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/Transformers/", "title": "Transformers", "text": ""}, {"location": "available_software/detail/Transformers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Transformers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Transformers, load one of these modules using a module load command like:

                  module load Transformers/4.30.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Transformers/4.30.2-foss-2022b x x x x x x Transformers/4.24.0-foss-2022a x x x x x x Transformers/4.21.1-foss-2021b x x x - x x Transformers/4.20.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/TreeMix/", "title": "TreeMix", "text": ""}, {"location": "available_software/detail/TreeMix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TreeMix installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TreeMix, load one of these modules using a module load command like:

                  module load TreeMix/1.13-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TreeMix/1.13-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Trilinos/", "title": "Trilinos", "text": ""}, {"location": "available_software/detail/Trilinos/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trilinos installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Trilinos, load one of these modules using a module load command like:

                  module load Trilinos/12.12.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trilinos/12.12.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Trim_Galore/", "title": "Trim_Galore", "text": ""}, {"location": "available_software/detail/Trim_Galore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trim_Galore installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Trim_Galore, load one of these modules using a module load command like:

                  module load Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18 - x x x x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-3.7.4 - x x - x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Trimmomatic/", "title": "Trimmomatic", "text": ""}, {"location": "available_software/detail/Trimmomatic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trimmomatic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Trimmomatic, load one of these modules using a module load command like:

                  module load Trimmomatic/0.39-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trimmomatic/0.39-Java-11 x x x x x x Trimmomatic/0.38-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Trinity/", "title": "Trinity", "text": ""}, {"location": "available_software/detail/Trinity/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trinity installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Trinity, load one of these modules using a module load command like:

                  module load Trinity/2.15.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trinity/2.15.1-foss-2022a x x x x x x Trinity/2.10.0-foss-2019b-Python-3.7.4 - x x - x x Trinity/2.9.1-foss-2019b-Python-2.7.16 - x x - x x Trinity/2.8.5-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/Triton/", "title": "Triton", "text": ""}, {"location": "available_software/detail/Triton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Triton installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Triton, load one of these modules using a module load command like:

                  module load Triton/1.1.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Triton/1.1.1-foss-2022a-CUDA-11.7.0 - - x - - -"}, {"location": "available_software/detail/Trycycler/", "title": "Trycycler", "text": ""}, {"location": "available_software/detail/Trycycler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Trycycler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Trycycler, load one of these modules using a module load command like:

                  module load Trycycler/0.3.3-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trycycler/0.3.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/TurboVNC/", "title": "TurboVNC", "text": ""}, {"location": "available_software/detail/TurboVNC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which TurboVNC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using TurboVNC, load one of these modules using a module load command like:

                  module load TurboVNC/2.2.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TurboVNC/2.2.6-GCCcore-11.2.0 x x x x x x TurboVNC/2.2.3-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/UCC/", "title": "UCC", "text": ""}, {"location": "available_software/detail/UCC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UCC, load one of these modules using a module load command like:

                  module load UCC/1.2.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCC/1.2.0-GCCcore-13.2.0 x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x UCC/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/UCLUST/", "title": "UCLUST", "text": ""}, {"location": "available_software/detail/UCLUST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCLUST installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UCLUST, load one of these modules using a module load command like:

                  module load UCLUST/1.2.22q-i86linux64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCLUST/1.2.22q-i86linux64 - x x - x x"}, {"location": "available_software/detail/UCX-CUDA/", "title": "UCX-CUDA", "text": ""}, {"location": "available_software/detail/UCX-CUDA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UCX-CUDA, load one of these modules using a module load command like:

                  module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - UCX-CUDA/1.12.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - UCX-CUDA/1.11.2-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - UCX-CUDA/1.10.0-GCCcore-10.3.0-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/UCX/", "title": "UCX", "text": ""}, {"location": "available_software/detail/UCX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UCX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UCX, load one of these modules using a module load command like:

                  module load UCX/1.15.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCX/1.15.0-GCCcore-13.2.0 x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x UCX/1.12.1-GCCcore-11.3.0 x x x x x x UCX/1.11.2-GCCcore-11.2.0 x x x x x x UCX/1.10.0-GCCcore-10.3.0 x x x x x x UCX/1.9.0-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - UCX/1.9.0-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x UCX/1.9.0-GCCcore-10.2.0 x x x x x x UCX/1.8.0-GCCcore-9.3.0 x x x x x x UCX/1.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UDUNITS/", "title": "UDUNITS", "text": ""}, {"location": "available_software/detail/UDUNITS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UDUNITS, load one of these modules using a module load command like:

                  module load UDUNITS/2.2.28-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-10.3.0 x x x x x x UDUNITS/2.2.26-foss-2020a - x x - x x UDUNITS/2.2.26-GCCcore-10.2.0 x x x x x x UDUNITS/2.2.26-GCCcore-9.3.0 - x x - x x UDUNITS/2.2.26-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UFL/", "title": "UFL", "text": ""}, {"location": "available_software/detail/UFL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UFL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UFL, load one of these modules using a module load command like:

                  module load UFL/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UFL/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UMI-tools/", "title": "UMI-tools", "text": ""}, {"location": "available_software/detail/UMI-tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UMI-tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UMI-tools, load one of these modules using a module load command like:

                  module load UMI-tools/1.0.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UMI-tools/1.0.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UQTk/", "title": "UQTk", "text": ""}, {"location": "available_software/detail/UQTk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UQTk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UQTk, load one of these modules using a module load command like:

                  module load UQTk/3.1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UQTk/3.1.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/USEARCH/", "title": "USEARCH", "text": ""}, {"location": "available_software/detail/USEARCH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which USEARCH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using USEARCH, load one of these modules using a module load command like:

                  module load USEARCH/11.0.667-i86linux32\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty USEARCH/11.0.667-i86linux32 x x x x x x"}, {"location": "available_software/detail/UnZip/", "title": "UnZip", "text": ""}, {"location": "available_software/detail/UnZip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UnZip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UnZip, load one of these modules using a module load command like:

                  module load UnZip/6.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UnZip/6.0-GCCcore-13.2.0 x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x UnZip/6.0-GCCcore-11.3.0 x x x x x x UnZip/6.0-GCCcore-11.2.0 x x x x x x UnZip/6.0-GCCcore-10.3.0 x x x x x x UnZip/6.0-GCCcore-10.2.0 x x x x x x UnZip/6.0-GCCcore-9.3.0 x x x x x x"}, {"location": "available_software/detail/UniFrac/", "title": "UniFrac", "text": ""}, {"location": "available_software/detail/UniFrac/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which UniFrac installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using UniFrac, load one of these modules using a module load command like:

                  module load UniFrac/1.3.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UniFrac/1.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Unicycler/", "title": "Unicycler", "text": ""}, {"location": "available_software/detail/Unicycler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Unicycler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Unicycler, load one of these modules using a module load command like:

                  module load Unicycler/0.4.8-gompi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Unicycler/0.4.8-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Unidecode/", "title": "Unidecode", "text": ""}, {"location": "available_software/detail/Unidecode/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Unidecode installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Unidecode, load one of these modules using a module load command like:

                  module load Unidecode/1.3.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Unidecode/1.3.6-GCCcore-11.3.0 x x x x x x Unidecode/1.1.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VASP/", "title": "VASP", "text": ""}, {"location": "available_software/detail/VASP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VASP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VASP, load one of these modules using a module load command like:

                  module load VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-gomkl-2023a x x x x x x VASP/6.4.2-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.2-gomkl-2021a - x x x x x VASP/6.4.2-foss-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-foss-2023a x x x x x x VASP/6.4.2-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.4.1-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.1-gomkl-2021a - x x x x x VASP/6.4.1-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.3.1-gomkl-2021a-VASPsol-20210413-vtst-184-Wannier90-3.1.0 x x x x x x VASP/6.3.1-gomkl-2021a - x x x x x VASP/6.3.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.3.0-gomkl-2021a-VASPsol-20210413 - x x x x x VASP/6.2.1-gomkl-2021a - x x x x x VASP/6.2.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.2.0-intel-2020a - x x - x x VASP/6.2.0-gomkl-2020a - x x x x x VASP/6.2.0-foss-2020a - x x - x x VASP/6.1.2-intel-2020a - x x - x x VASP/6.1.2-gomkl-2020a - x x x x x VASP/6.1.2-foss-2020a - x x - x x VASP/5.4.4-iomkl-2020b-vtst-176-mt-20180516 x x x x x x VASP/5.4.4-intel-2019b-mt-20180516-ncl - x x - x x VASP/5.4.4-intel-2019b-mt-20180516 - x x - x x"}, {"location": "available_software/detail/VBZ-Compression/", "title": "VBZ-Compression", "text": ""}, {"location": "available_software/detail/VBZ-Compression/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VBZ-Compression installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VBZ-Compression, load one of these modules using a module load command like:

                  module load VBZ-Compression/1.0.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VBZ-Compression/1.0.3-gompi-2022a x x x x x x VBZ-Compression/1.0.1-gompi-2020b - - x x x x"}, {"location": "available_software/detail/VCFtools/", "title": "VCFtools", "text": ""}, {"location": "available_software/detail/VCFtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VCFtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VCFtools, load one of these modules using a module load command like:

                  module load VCFtools/0.1.16-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VCFtools/0.1.16-iccifort-2019.5.281 - x x - x x VCFtools/0.1.16-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/VEP/", "title": "VEP", "text": ""}, {"location": "available_software/detail/VEP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VEP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VEP, load one of these modules using a module load command like:

                  module load VEP/107-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VEP/107-GCC-11.3.0 x x x - x x VEP/105-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/VESTA/", "title": "VESTA", "text": ""}, {"location": "available_software/detail/VESTA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VESTA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VESTA, load one of these modules using a module load command like:

                  module load VESTA/3.5.8-gtk3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VESTA/3.5.8-gtk3 x x x - x x"}, {"location": "available_software/detail/VMD/", "title": "VMD", "text": ""}, {"location": "available_software/detail/VMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VMD installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VMD, load one of these modules using a module load command like:

                  module load VMD/1.9.4a51-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VMD/1.9.4a51-foss-2020b - x x x x x"}, {"location": "available_software/detail/VMTK/", "title": "VMTK", "text": ""}, {"location": "available_software/detail/VMTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VMTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VMTK, load one of these modules using a module load command like:

                  module load VMTK/1.4.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VMTK/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VSCode/", "title": "VSCode", "text": ""}, {"location": "available_software/detail/VSCode/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VSCode installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VSCode, load one of these modules using a module load command like:

                  module load VSCode/1.85.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VSCode/1.85.0 x x x x x x"}, {"location": "available_software/detail/VSEARCH/", "title": "VSEARCH", "text": ""}, {"location": "available_software/detail/VSEARCH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VSEARCH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VSEARCH, load one of these modules using a module load command like:

                  module load VSEARCH/2.22.1-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VSEARCH/2.22.1-GCC-11.3.0 x x x x x x VSEARCH/2.18.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/VTK/", "title": "VTK", "text": ""}, {"location": "available_software/detail/VTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VTK, load one of these modules using a module load command like:

                  module load VTK/9.2.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VTK/9.2.2-foss-2022a x x x x x x VTK/9.2.0.rc2-foss-2022a x x x - x x VTK/9.1.0-foss-2021b x x x - x x VTK/9.0.1-fosscuda-2020b x - - - x - VTK/9.0.1-foss-2021a - x x - x x VTK/9.0.1-foss-2020b - x x x x x VTK/8.2.0-foss-2020a-Python-3.8.2 - x x - x x VTK/8.2.0-foss-2019b-Python-3.7.4 - x x - x x VTK/8.2.0-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/VTune/", "title": "VTune", "text": ""}, {"location": "available_software/detail/VTune/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VTune installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VTune, load one of these modules using a module load command like:

                  module load VTune/2019_update2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VTune/2019_update2 - - - - - x"}, {"location": "available_software/detail/Vala/", "title": "Vala", "text": ""}, {"location": "available_software/detail/Vala/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Vala installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Vala, load one of these modules using a module load command like:

                  module load Vala/0.52.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Vala/0.52.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Valgrind/", "title": "Valgrind", "text": ""}, {"location": "available_software/detail/Valgrind/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Valgrind installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Valgrind, load one of these modules using a module load command like:

                  module load Valgrind/3.20.0-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Valgrind/3.20.0-gompi-2022a x x x - x x Valgrind/3.19.0-gompi-2022a x x x - x x Valgrind/3.18.1-iimpi-2021b x x x - x x Valgrind/3.18.1-gompi-2021b x x x - x x Valgrind/3.17.0-gompi-2021a x x x - x x"}, {"location": "available_software/detail/VarScan/", "title": "VarScan", "text": ""}, {"location": "available_software/detail/VarScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VarScan installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VarScan, load one of these modules using a module load command like:

                  module load VarScan/2.4.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VarScan/2.4.4-Java-11 x x x - x x"}, {"location": "available_software/detail/Velvet/", "title": "Velvet", "text": ""}, {"location": "available_software/detail/Velvet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Velvet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Velvet, load one of these modules using a module load command like:

                  module load Velvet/1.2.10-foss-2023a-mt-kmer_191\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Velvet/1.2.10-foss-2023a-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-11.2.0-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-8.3.0-mt-kmer_191 - x x - x x"}, {"location": "available_software/detail/VirSorter2/", "title": "VirSorter2", "text": ""}, {"location": "available_software/detail/VirSorter2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VirSorter2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VirSorter2, load one of these modules using a module load command like:

                  module load VirSorter2/2.2.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VirSorter2/2.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/VisPy/", "title": "VisPy", "text": ""}, {"location": "available_software/detail/VisPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which VisPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using VisPy, load one of these modules using a module load command like:

                  module load VisPy/0.12.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VisPy/0.12.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Voro%2B%2B/", "title": "Voro++", "text": ""}, {"location": "available_software/detail/Voro%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Voro++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Voro++, load one of these modules using a module load command like:

                  module load Voro++/0.4.6-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Voro++/0.4.6-intel-2019b - x x - x x Voro++/0.4.6-foss-2019b - x x - x x Voro++/0.4.6-GCCcore-11.2.0 x x x - x x Voro++/0.4.6-GCCcore-10.3.0 - x x - x x Voro++/0.4.6-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/WFA2/", "title": "WFA2", "text": ""}, {"location": "available_software/detail/WFA2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WFA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using WFA2, load one of these modules using a module load command like:

                  module load WFA2/2.3.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WFA2/2.3.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/WHAM/", "title": "WHAM", "text": ""}, {"location": "available_software/detail/WHAM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WHAM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using WHAM, load one of these modules using a module load command like:

                  module load WHAM/2.0.10.2-intel-2020a-kj_mol\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WHAM/2.0.10.2-intel-2020a-kj_mol - x x - x x WHAM/2.0.10.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/WIEN2k/", "title": "WIEN2k", "text": ""}, {"location": "available_software/detail/WIEN2k/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WIEN2k installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using WIEN2k, load one of these modules using a module load command like:

                  module load WIEN2k/21.1-intel-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WIEN2k/21.1-intel-2021a - x x - x x WIEN2k/19.2-intel-2020b - x x x x x"}, {"location": "available_software/detail/WPS/", "title": "WPS", "text": ""}, {"location": "available_software/detail/WPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using WPS, load one of these modules using a module load command like:

                  module load WPS/4.1-intel-2019b-dmpar\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WPS/4.1-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/WRF/", "title": "WRF", "text": ""}, {"location": "available_software/detail/WRF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WRF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using WRF, load one of these modules using a module load command like:

                  module load WRF/4.1.3-intel-2019b-dmpar\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WRF/4.1.3-intel-2019b-dmpar - x x - x x WRF/3.9.1.1-intel-2020b-dmpar - x x x x x WRF/3.8.0-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/Wannier90/", "title": "Wannier90", "text": ""}, {"location": "available_software/detail/Wannier90/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Wannier90 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Wannier90, load one of these modules using a module load command like:

                  module load Wannier90/3.1.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Wannier90/3.1.0-intel-2022a - - x - x x Wannier90/3.1.0-intel-2020b - x x x x x Wannier90/3.1.0-intel-2020a - x x - x x Wannier90/3.1.0-gomkl-2023a x x x x x x Wannier90/3.1.0-gomkl-2021a x x x x x x Wannier90/3.1.0-foss-2023a x x x x x x Wannier90/3.1.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Wayland/", "title": "Wayland", "text": ""}, {"location": "available_software/detail/Wayland/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Wayland installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Wayland, load one of these modules using a module load command like:

                  module load Wayland/1.22.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Wayland/1.22.0-GCCcore-12.3.0 x x x x x x Wayland/1.21.0-GCCcore-11.2.0 x x x x x x Wayland/1.20.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Waylandpp/", "title": "Waylandpp", "text": ""}, {"location": "available_software/detail/Waylandpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Waylandpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Waylandpp, load one of these modules using a module load command like:

                  module load Waylandpp/1.0.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Waylandpp/1.0.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/WebKitGTK%2B/", "title": "WebKitGTK+", "text": ""}, {"location": "available_software/detail/WebKitGTK%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WebKitGTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WebKitGTK+, load one of these modules using a module load command like:

                  module load WebKitGTK+/2.37.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WebKitGTK+/2.37.1-GCC-11.2.0 x x x x x x WebKitGTK+/2.27.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/WhatsHap/", "title": "WhatsHap", "text": ""}, {"location": "available_software/detail/WhatsHap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WhatsHap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WhatsHap, load one of these modules using a module load command like:

                  module load WhatsHap/1.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WhatsHap/1.7-foss-2022a x x x x x x WhatsHap/1.4-foss-2021b x x x - x x WhatsHap/1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/Winnowmap/", "title": "Winnowmap", "text": ""}, {"location": "available_software/detail/Winnowmap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Winnowmap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Winnowmap, load one of these modules using a module load command like:

                  module load Winnowmap/1.0-GCC-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Winnowmap/1.0-GCC-8.3.0 - x - - - x"}, {"location": "available_software/detail/WisecondorX/", "title": "WisecondorX", "text": ""}, {"location": "available_software/detail/WisecondorX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which WisecondorX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WisecondorX, load one of these modules using a module load command like:

                  module load WisecondorX/1.1.6-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WisecondorX/1.1.6-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/X11/", "title": "X11", "text": ""}, {"location": "available_software/detail/X11/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which X11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using X11, load one of these modules using a module load command like:

                  module load X11/20230603-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty X11/20230603-GCCcore-12.3.0 x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x X11/20220504-GCCcore-11.3.0 x x x x x x X11/20210802-GCCcore-11.2.0 x x x x x x X11/20210518-GCCcore-10.3.0 x x x x x x X11/20201008-GCCcore-10.2.0 x x x x x x X11/20200222-GCCcore-9.3.0 x x x x x x X11/20190717-GCCcore-8.3.0 x x x - x x X11/20190311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/XCFun/", "title": "XCFun", "text": ""}, {"location": "available_software/detail/XCFun/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XCFun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using XCFun, load one of these modules using a module load command like:

                  module load XCFun/2.1.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XCFun/2.1.1-GCCcore-12.2.0 x x x x x x XCFun/2.1.1-GCCcore-11.3.0 - x x x x x XCFun/2.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/XCrySDen/", "title": "XCrySDen", "text": ""}, {"location": "available_software/detail/XCrySDen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XCrySDen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using XCrySDen, load one of these modules using a module load command like:

                  module load XCrySDen/1.6.2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XCrySDen/1.6.2-intel-2022a x x x - x x XCrySDen/1.6.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/XGBoost/", "title": "XGBoost", "text": ""}, {"location": "available_software/detail/XGBoost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XGBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using XGBoost, load one of these modules using a module load command like:

                  module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\n
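                  According to the table below, the CUDA build of XGBoost 1.7.2 is only installed on accelgor, so which module to load depends on the cluster the job runs on. A minimal sketch, with the cluster choice as the only assumption:

                  # Minimal sketch: choose the XGBoost build that matches the cluster
                  # On accelgor (the only cluster with the CUDA build, per the table below):
                  module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0
                  # On the other clusters, load the CPU-only build instead:
                  # module load XGBoost/1.7.2-foss-2022a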

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XGBoost/1.7.2-foss-2022a-CUDA-11.7.0 x - - - - - XGBoost/1.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/XML-Compile/", "title": "XML-Compile", "text": ""}, {"location": "available_software/detail/XML-Compile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XML-Compile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using XML-Compile, load one of these modules using a module load command like:

                  module load XML-Compile/1.63-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XML-Compile/1.63-GCCcore-12.2.0 x x x x x x XML-Compile/1.63-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/XML-LibXML/", "title": "XML-LibXML", "text": ""}, {"location": "available_software/detail/XML-LibXML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XML-LibXML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using XML-LibXML, load one of these modules using a module load command like:

                  module load XML-LibXML/2.0208-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.3.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.2.0 x x x x x x XML-LibXML/2.0206-GCCcore-10.2.0 - x x x x x XML-LibXML/2.0205-GCCcore-9.3.0 - x x - x x XML-LibXML/2.0201-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/XZ/", "title": "XZ", "text": ""}, {"location": "available_software/detail/XZ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using XZ, load one of these modules using a module load command like:

                  module load XZ/5.4.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XZ/5.4.4-GCCcore-13.2.0 x x x x x x XZ/5.4.2-GCCcore-12.3.0 x x x x x x XZ/5.2.7-GCCcore-12.2.0 x x x x x x XZ/5.2.5-GCCcore-11.3.0 x x x x x x XZ/5.2.5-GCCcore-11.2.0 x x x x x x XZ/5.2.5-GCCcore-10.3.0 x x x x x x XZ/5.2.5-GCCcore-10.2.0 x x x x x x XZ/5.2.5-GCCcore-9.3.0 x x x x x x XZ/5.2.4-GCCcore-8.3.0 x x x x x x XZ/5.2.4-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Xerces-C%2B%2B/", "title": "Xerces-C++", "text": ""}, {"location": "available_software/detail/Xerces-C%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Xerces-C++, load one of these modules using a module load command like:

                  module load Xerces-C++/3.2.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/XlsxWriter/", "title": "XlsxWriter", "text": ""}, {"location": "available_software/detail/XlsxWriter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which XlsxWriter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using XlsxWriter, load one of these modules using a module load command like:

                  module load XlsxWriter/3.1.9-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XlsxWriter/3.1.9-GCCcore-13.2.0 x x x x x x XlsxWriter/3.1.3-GCCcore-12.3.0 x x x x x x XlsxWriter/3.1.2-GCCcore-12.2.0 x x x x x x XlsxWriter/3.0.8-GCCcore-11.3.0 x x x x x x XlsxWriter/3.0.2-GCCcore-11.2.0 x x x x x x XlsxWriter/1.4.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Xvfb/", "title": "Xvfb", "text": ""}, {"location": "available_software/detail/Xvfb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Xvfb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Xvfb, load one of these modules using a module load command like:

                  module load Xvfb/21.1.8-GCCcore-12.3.0\n
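                  Xvfb provides a virtual X display, which lets graphical tools run on nodes without a screen. The lines below are a minimal sketch, assuming display number :99 is free and using standard Xvfb options rather than anything specific to this module:

                  # Minimal sketch: run a graphical tool against a virtual display
                  module load Xvfb/21.1.8-GCCcore-12.3.0
                  Xvfb :99 -screen 0 1280x1024x24 &   # start a virtual X server on display :99
                  export DISPLAY=:99                  # point applications at the virtual display
                  # ... run the graphical application here ...
                  kill %1                             # stop the virtual X server afterwards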

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x Xvfb/21.1.3-GCCcore-11.3.0 x x x x x x Xvfb/1.20.13-GCCcore-11.2.0 x x x x x x Xvfb/1.20.11-GCCcore-10.3.0 x x x x x x Xvfb/1.20.9-GCCcore-10.2.0 x x x x x x Xvfb/1.20.9-GCCcore-9.3.0 - x x - x x Xvfb/1.20.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/YACS/", "title": "YACS", "text": ""}, {"location": "available_software/detail/YACS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which YACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using YACS, load one of these modules using a module load command like:

                  module load YACS/0.1.8-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YACS/0.1.8-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/YANK/", "title": "YANK", "text": ""}, {"location": "available_software/detail/YANK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which YANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using YANK, load one of these modules using a module load command like:

                  module load YANK/0.25.2-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YANK/0.25.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/YAXT/", "title": "YAXT", "text": ""}, {"location": "available_software/detail/YAXT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which YAXT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using YAXT, load one of these modules using a module load command like:

                  module load YAXT/0.9.1-gompi-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YAXT/0.9.1-gompi-2021a x x x - x x YAXT/0.6.2-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/Yambo/", "title": "Yambo", "text": ""}, {"location": "available_software/detail/Yambo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Yambo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Yambo, load one of these modules using a module load command like:

                  module load Yambo/5.1.2-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Yambo/5.1.2-intel-2021b x x x x x x"}, {"location": "available_software/detail/Yasm/", "title": "Yasm", "text": ""}, {"location": "available_software/detail/Yasm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Yasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Yasm, load one of these modules using a module load command like:

                  module load Yasm/1.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Yasm/1.3.0-GCCcore-12.3.0 x x x x x x Yasm/1.3.0-GCCcore-12.2.0 x x x x x x Yasm/1.3.0-GCCcore-11.3.0 x x x x x x Yasm/1.3.0-GCCcore-11.2.0 x x x x x x Yasm/1.3.0-GCCcore-10.3.0 x x x x x x Yasm/1.3.0-GCCcore-10.2.0 x x x x x x Yasm/1.3.0-GCCcore-9.3.0 - x x - x x Yasm/1.3.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Z3/", "title": "Z3", "text": ""}, {"location": "available_software/detail/Z3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Z3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Z3, load one of these modules using a module load command like:

                  module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x Z3/4.10.2-GCCcore-11.3.0 x x x x x x Z3/4.8.12-GCCcore-11.2.0 x x x x x x Z3/4.8.11-GCCcore-10.3.0 x x x x x x Z3/4.8.10-GCCcore-10.2.0 - x x x x x Z3/4.8.9-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/Zeo%2B%2B/", "title": "Zeo++", "text": ""}, {"location": "available_software/detail/Zeo%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Zeo++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Zeo++, load one of these modules using a module load command like:

                  module load Zeo++/0.3-intel-compilers-2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zeo++/0.3-intel-compilers-2023.1.0 x x x x x x"}, {"location": "available_software/detail/ZeroMQ/", "title": "ZeroMQ", "text": ""}, {"location": "available_software/detail/ZeroMQ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ZeroMQ, load one of these modules using a module load command like:

                  module load ZeroMQ/4.3.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-12.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-10.3.0 x x x x x x ZeroMQ/4.3.3-GCCcore-10.2.0 x x x x x x ZeroMQ/4.3.2-GCCcore-9.3.0 x x x x x x ZeroMQ/4.3.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zip/", "title": "Zip", "text": ""}, {"location": "available_software/detail/Zip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Zip, load one of these modules using a module load command like:

                  module load Zip/3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zip/3.0-GCCcore-12.3.0 x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x Zip/3.0-GCCcore-11.3.0 x x x x x x Zip/3.0-GCCcore-11.2.0 x x x x x x Zip/3.0-GCCcore-10.3.0 x x x x x x Zip/3.0-GCCcore-10.2.0 x x x x x x Zip/3.0-GCCcore-9.3.0 - x x - x x Zip/3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zopfli/", "title": "Zopfli", "text": ""}, {"location": "available_software/detail/Zopfli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Zopfli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Zopfli, load one of these modules using a module load command like:

                  module load Zopfli/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zopfli/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/adjustText/", "title": "adjustText", "text": ""}, {"location": "available_software/detail/adjustText/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which adjustText installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using adjustText, load one of these modules using a module load command like:

                  module load adjustText/0.7.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty adjustText/0.7.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/aiohttp/", "title": "aiohttp", "text": ""}, {"location": "available_software/detail/aiohttp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which aiohttp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using aiohttp, load one of these modules using a module load command like:

                  module load aiohttp/3.8.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty aiohttp/3.8.5-GCCcore-12.3.0 x x x x - x aiohttp/3.8.5-GCCcore-12.2.0 x x x x x x aiohttp/3.8.3-GCCcore-11.3.0 x x x x x x aiohttp/3.8.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/alevin-fry/", "title": "alevin-fry", "text": ""}, {"location": "available_software/detail/alevin-fry/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alevin-fry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using alevin-fry, load one of these modules using a module load command like:

                  module load alevin-fry/0.4.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alevin-fry/0.4.3-GCCcore-11.2.0 - x - - - -"}, {"location": "available_software/detail/alleleCount/", "title": "alleleCount", "text": ""}, {"location": "available_software/detail/alleleCount/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alleleCount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using alleleCount, load one of these modules using a module load command like:

                  module load alleleCount/4.3.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alleleCount/4.3.0-GCC-12.2.0 x x x x x x alleleCount/4.2.1-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/alleleIntegrator/", "title": "alleleIntegrator", "text": ""}, {"location": "available_software/detail/alleleIntegrator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alleleIntegrator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using alleleIntegrator, load one of these modules using a module load command like:

                  module load alleleIntegrator/0.8.8-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alleleIntegrator/0.8.8-foss-2022b-R-4.2.2 x x x x x x alleleIntegrator/0.8.8-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/alsa-lib/", "title": "alsa-lib", "text": ""}, {"location": "available_software/detail/alsa-lib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which alsa-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using alsa-lib, load one of these modules using a module load command like:

                  module load alsa-lib/1.2.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alsa-lib/1.2.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/anadama2/", "title": "anadama2", "text": ""}, {"location": "available_software/detail/anadama2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which anadama2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using anadama2, load one of these modules using a module load command like:

                  module load anadama2/0.10.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anadama2/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/angsd/", "title": "angsd", "text": ""}, {"location": "available_software/detail/angsd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which angsd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using angsd, load one of these modules using a module load command like:

                  module load angsd/0.940-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty angsd/0.940-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/anndata/", "title": "anndata", "text": ""}, {"location": "available_software/detail/anndata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which anndata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using anndata, load one of these modules using a module load command like:

                  module load anndata/0.10.5.post1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anndata/0.10.5.post1-foss-2023a x x x x x x anndata/0.9.2-foss-2021a x x x x x x anndata/0.8.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ant/", "title": "ant", "text": ""}, {"location": "available_software/detail/ant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ant, load one of these modules using a module load command like:

                  module load ant/1.10.12-Java-17\n
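                  The -Java-17 and -Java-11 suffixes indicate which Java runtime each ant installation was built against. The lines below are a minimal sketch of building a project after loading the module; build.xml and the dist target are placeholders for your own project, not files provided by the module:

                  # Minimal sketch: build a Java project with ant (placeholders: build.xml, dist)
                  module load ant/1.10.12-Java-17
                  ant -version            # check that ant (and its Java runtime) is available
                  ant -f build.xml dist   # run the 'dist' target of your own build file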

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ant/1.10.12-Java-17 x x x x x x ant/1.10.12-Java-11 x x x x x x ant/1.10.11-Java-11 x x x - x x ant/1.10.9-Java-11 x x x x x x ant/1.10.8-Java-11 - x x - x x ant/1.10.7-Java-11 - x x - x x ant/1.10.6-Java-1.8 - x x - x x"}, {"location": "available_software/detail/antiSMASH/", "title": "antiSMASH", "text": ""}, {"location": "available_software/detail/antiSMASH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which antiSMASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using antiSMASH, load one of these modules using a module load command like:

                  module load antiSMASH/6.0.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty antiSMASH/6.0.1-foss-2020b - x x x x x antiSMASH/5.2.0-foss-2020b - x x x x x antiSMASH/5.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/anvio/", "title": "anvio", "text": ""}, {"location": "available_software/detail/anvio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which anvio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using anvio, load one of these modules using a module load command like:

                  module load anvio/8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anvio/8-foss-2022b x x x x x x anvio/6.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/any2fasta/", "title": "any2fasta", "text": ""}, {"location": "available_software/detail/any2fasta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which any2fasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using any2fasta, load one of these modules using a module load command like:

                  module load any2fasta/0.4.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty any2fasta/0.4.2-GCCcore-10.2.0 - x x - x x any2fasta/0.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/apex/", "title": "apex", "text": ""}, {"location": "available_software/detail/apex/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which apex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using apex, load one of these modules using a module load command like:

                  module load apex/20210420-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty apex/20210420-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/archspec/", "title": "archspec", "text": ""}, {"location": "available_software/detail/archspec/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which archspec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using archspec, load one of these modules using a module load command like:

                  module load archspec/0.1.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty archspec/0.1.3-GCCcore-11.2.0 x x x - x x archspec/0.1.2-GCCcore-10.3.0 - x x - x x archspec/0.1.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x archspec/0.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/argtable/", "title": "argtable", "text": ""}, {"location": "available_software/detail/argtable/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which argtable installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using argtable, load one of these modules using a module load command like:

                  module load argtable/2.13-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty argtable/2.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/aria2/", "title": "aria2", "text": ""}, {"location": "available_software/detail/aria2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which aria2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using aria2, load one of these modules using a module load command like:

                  module load aria2/1.35.0-GCCcore-10.3.0\n
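                  aria2 is a command-line download utility; its aria2c tool can split a download over several connections. A minimal sketch with a placeholder URL:

                  # Minimal sketch: parallel download with aria2c (URL is a placeholder)
                  module load aria2/1.35.0-GCCcore-10.3.0
                  aria2c -x 4 -s 4 https://example.org/dataset.tar.gz   # 4 connections, 4 splits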

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty aria2/1.35.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/arpack-ng/", "title": "arpack-ng", "text": ""}, {"location": "available_software/detail/arpack-ng/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using arpack-ng, load one of these modules using a module load command like:

                  module load arpack-ng/3.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arpack-ng/3.9.0-foss-2023a x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x arpack-ng/3.8.0-foss-2022a x x x x x x arpack-ng/3.8.0-foss-2021b x x x x x x arpack-ng/3.8.0-foss-2021a x x x x x x arpack-ng/3.7.0-intel-2020a - x x - x x arpack-ng/3.7.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/arrow-R/", "title": "arrow-R", "text": ""}, {"location": "available_software/detail/arrow-R/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which arrow-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using arrow-R, load one of these modules using a module load command like:

                  module load arrow-R/14.0.0.2-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arrow-R/14.0.0.2-foss-2023a-R-4.3.2 x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x arrow-R/8.0.0-foss-2022a-R-4.2.1 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.2.0 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.1.2 x x x x x x arrow-R/6.0.0.2-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/arrow/", "title": "arrow", "text": ""}, {"location": "available_software/detail/arrow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which arrow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using arrow, load one of these modules using a module load command like:

                  module load arrow/0.17.1-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arrow/0.17.1-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-atk/", "title": "at-spi2-atk", "text": ""}, {"location": "available_software/detail/at-spi2-atk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using at-spi2-atk, load one of these modules using a module load command like:

                  module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-10.3.0 x x x - x x at-spi2-atk/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-atk/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-core/", "title": "at-spi2-core", "text": ""}, {"location": "available_software/detail/at-spi2-core/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using at-spi2-core, load one of these modules using a module load command like:

                  module load at-spi2-core/2.49.90-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty at-spi2-core/2.49.90-GCCcore-12.3.0 x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x at-spi2-core/2.44.1-GCCcore-11.3.0 x x x x x x at-spi2-core/2.40.3-GCCcore-11.2.0 x x x x x x at-spi2-core/2.40.2-GCCcore-10.3.0 x x x - x x at-spi2-core/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-core/2.34.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/atools/", "title": "atools", "text": ""}, {"location": "available_software/detail/atools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which atools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using atools, load one of these modules using a module load command like:

                  module load atools/1.5.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty atools/1.5.1-GCCcore-11.2.0 x x x - x x atools/1.4.6-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/attr/", "title": "attr", "text": ""}, {"location": "available_software/detail/attr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which attr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using attr, load one of these modules using a module load command like:

                  module load attr/2.5.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attr/2.5.1-GCCcore-11.3.0 x x x x x x attr/2.5.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/attrdict/", "title": "attrdict", "text": ""}, {"location": "available_software/detail/attrdict/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which attrdict installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using attrdict, load one of these modules using a module load command like:

                  module load attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/attrdict3/", "title": "attrdict3", "text": ""}, {"location": "available_software/detail/attrdict3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which attrdict3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using attrdict3, load one of these modules using a module load command like:

                  module load attrdict3/2.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attrdict3/2.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/augur/", "title": "augur", "text": ""}, {"location": "available_software/detail/augur/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which augur installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using augur, load one of these modules using a module load command like:

                  module load augur/7.0.2-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty augur/7.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/autopep8/", "title": "autopep8", "text": ""}, {"location": "available_software/detail/autopep8/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which autopep8 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using autopep8, load one of these modules using a module load command like:

                  module load autopep8/2.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty autopep8/2.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/awscli/", "title": "awscli", "text": ""}, {"location": "available_software/detail/awscli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which awscli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using awscli, load one of these modules using a module load command like:

                  module load awscli/2.11.21-GCCcore-11.3.0\n
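                  Loading the awscli module makes the aws command available. A minimal sketch; the bucket name is a placeholder and the listing assumes credentials have already been configured:

                  # Minimal sketch: basic AWS CLI usage (bucket name is a placeholder)
                  module load awscli/2.11.21-GCCcore-11.3.0
                  aws --version                 # confirm the aws command is on the PATH
                  aws s3 ls s3://my-bucket/     # list objects; requires configured credentials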

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty awscli/2.11.21-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/babl/", "title": "babl", "text": ""}, {"location": "available_software/detail/babl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which babl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using babl, load one of these modules using a module load command like:

                  module load babl/0.1.86-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty babl/0.1.86-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/bam-readcount/", "title": "bam-readcount", "text": ""}, {"location": "available_software/detail/bam-readcount/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bam-readcount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bam-readcount, load one of these modules using a module load command like:

                  module load bam-readcount/0.8.0-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bam-readcount/0.8.0-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/bamFilters/", "title": "bamFilters", "text": ""}, {"location": "available_software/detail/bamFilters/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bamFilters installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bamFilters, load one of these modules using a module load command like:

                  module load bamFilters/2022-06-30-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bamFilters/2022-06-30-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/barrnap/", "title": "barrnap", "text": ""}, {"location": "available_software/detail/barrnap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which barrnap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using barrnap, load one of these modules using a module load command like:

                  module load barrnap/0.9-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty barrnap/0.9-gompi-2021b x x x - x x barrnap/0.9-gompi-2020b - x x x x x"}, {"location": "available_software/detail/basemap/", "title": "basemap", "text": ""}, {"location": "available_software/detail/basemap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which basemap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using basemap, load one of these modules using a module load command like:

                  module load basemap/1.3.9-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty basemap/1.3.9-foss-2023a x x x x x x basemap/1.2.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/bcbio-gff/", "title": "bcbio-gff", "text": ""}, {"location": "available_software/detail/bcbio-gff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcbio-gff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bcbio-gff, load one of these modules using a module load command like:

                  module load bcbio-gff/0.7.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcbio-gff/0.7.0-foss-2022b x x x x x x bcbio-gff/0.7.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/bcgTree/", "title": "bcgTree", "text": ""}, {"location": "available_software/detail/bcgTree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcgTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bcgTree, load one of these modules using a module load command like:

                  module load bcgTree/1.2.0-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcgTree/1.2.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/bcl-convert/", "title": "bcl-convert", "text": ""}, {"location": "available_software/detail/bcl-convert/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcl-convert installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bcl-convert, load one of these modules using a module load command like:

                  module load bcl-convert/4.0.3-2el7.x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcl-convert/4.0.3-2el7.x86_64 x x x - x x"}, {"location": "available_software/detail/bcl2fastq2/", "title": "bcl2fastq2", "text": ""}, {"location": "available_software/detail/bcl2fastq2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bcl2fastq2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bcl2fastq2, load one of these modules using a module load command like:

                  module load bcl2fastq2/2.20.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcl2fastq2/2.20.0-GCC-11.2.0 x x x - x x bcl2fastq2/2.20.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/beagle-lib/", "title": "beagle-lib", "text": ""}, {"location": "available_software/detail/beagle-lib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which beagle-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using beagle-lib, load one of these modules using a module load command like:

                  module load beagle-lib/4.0.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty beagle-lib/4.0.0-GCC-11.3.0 x x x x x x beagle-lib/3.1.2-gcccuda-2019b x - - - x - beagle-lib/3.1.2-GCC-11.3.0 x x x - x x beagle-lib/3.1.2-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/binutils/", "title": "binutils", "text": ""}, {"location": "available_software/detail/binutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which binutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using binutils, load one of these modules using a module load command like:

                  module load binutils/2.40-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty binutils/2.40-GCCcore-13.2.0 x x x x x x binutils/2.40-GCCcore-12.3.0 x x x x x x binutils/2.40 x x x x x x binutils/2.39-GCCcore-12.2.0 x x x x x x binutils/2.39 x x x x x x binutils/2.38-GCCcore-11.3.0 x x x x x x binutils/2.38 x x x x x x binutils/2.37-GCCcore-11.2.0 x x x x x x binutils/2.37 x x x x x x binutils/2.36.1-GCCcore-10.3.0 x x x x x x binutils/2.36.1 x x x x x x binutils/2.35-GCCcore-10.2.0 x x x x x x binutils/2.35 x x x x x x binutils/2.34-GCCcore-9.3.0 x x x x x x binutils/2.34 x x x x x x binutils/2.32-GCCcore-8.3.0 x x x x x x binutils/2.32 x x x x x x binutils/2.31.1-GCCcore-8.2.0 - x - - - - binutils/2.31.1 - x - - - x binutils/2.30 - - - - - x binutils/2.28 x x x x x x"}, {"location": "available_software/detail/biobakery-workflows/", "title": "biobakery-workflows", "text": ""}, {"location": "available_software/detail/biobakery-workflows/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biobakery-workflows installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using biobakery-workflows, load one of these modules using a module load command like:

                  module load biobakery-workflows/3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biobakery-workflows/3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/biobambam2/", "title": "biobambam2", "text": ""}, {"location": "available_software/detail/biobambam2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biobambam2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using biobambam2, load one of these modules using a module load command like:

                  module load biobambam2/2.0.185-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biobambam2/2.0.185-GCC-12.3.0 x x x x x x biobambam2/2.0.87-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/biogeme/", "title": "biogeme", "text": ""}, {"location": "available_software/detail/biogeme/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biogeme installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using biogeme, load one of these modules using a module load command like:

                  module load biogeme/3.2.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biogeme/3.2.10-foss-2022a x x x - x x biogeme/3.2.6-foss-2022a x x x - x x"}, {"location": "available_software/detail/biom-format/", "title": "biom-format", "text": ""}, {"location": "available_software/detail/biom-format/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which biom-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using biom-format, load one of these modules using a module load command like:

                  module load biom-format/2.1.15-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biom-format/2.1.15-foss-2022b x x x x x x biom-format/2.1.14-foss-2022a x x x x x x biom-format/2.1.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/bmtagger/", "title": "bmtagger", "text": ""}, {"location": "available_software/detail/bmtagger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bmtagger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bmtagger, load one of these modules using a module load command like:

                  module load bmtagger/3.101-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bmtagger/3.101-gompi-2020b - x x x x x"}, {"location": "available_software/detail/bokeh/", "title": "bokeh", "text": ""}, {"location": "available_software/detail/bokeh/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bokeh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bokeh, load one of these modules using a module load command like:

                  module load bokeh/3.2.2-foss-2023a\n
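                  bokeh is a Python visualisation package, so the module is used through the Python it was built for. A minimal sketch to check the installation; the one-liner only prints the package version:

                  # Minimal sketch: verify that the bokeh package is importable
                  module load bokeh/3.2.2-foss-2023a
                  python -c "import bokeh; print(bokeh.__version__)"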

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bokeh/3.2.2-foss-2023a x x x x x x bokeh/2.4.3-foss-2022a x x x x x x bokeh/2.4.2-foss-2021b x x x x x x bokeh/2.4.1-foss-2021a x x x - x x bokeh/2.2.3-intel-2020b - x x - x x bokeh/2.2.3-fosscuda-2020b x - - - x - bokeh/2.2.3-foss-2020b - x x x x x bokeh/2.0.2-intel-2020a-Python-3.8.2 - x x - x x bokeh/2.0.2-foss-2020a-Python-3.8.2 - x x - x x bokeh/1.4.0-intel-2019b-Python-3.7.4 - x x - x x bokeh/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/boto3/", "title": "boto3", "text": ""}, {"location": "available_software/detail/boto3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which boto3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using boto3, load one of these modules using a module load command like:

                  module load boto3/1.34.10-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty boto3/1.34.10-GCCcore-12.2.0 x x x x x x boto3/1.26.163-GCCcore-12.2.0 x x x x x x boto3/1.20.13-GCCcore-11.2.0 x x x - x x boto3/1.20.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/bpp/", "title": "bpp", "text": ""}, {"location": "available_software/detail/bpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bpp, load one of these modules using a module load command like:

                  module load bpp/4.4.0-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bpp/4.4.0-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/btllib/", "title": "btllib", "text": ""}, {"location": "available_software/detail/btllib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which btllib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using btllib, load one of these modules using a module load command like:

                  module load btllib/1.7.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty btllib/1.7.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/build/", "title": "build", "text": ""}, {"location": "available_software/detail/build/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using build, load one of these modules using a module load command like:

                  module load build/0.10.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty build/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/buildenv/", "title": "buildenv", "text": ""}, {"location": "available_software/detail/buildenv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which buildenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using buildenv, load one of these modules using a module load command like:

                  module load buildenv/default-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty buildenv/default-intel-2019b - x x - x x buildenv/default-foss-2019b - x x - x x"}, {"location": "available_software/detail/buildingspy/", "title": "buildingspy", "text": ""}, {"location": "available_software/detail/buildingspy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which buildingspy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using buildingspy, load one of these modules using a module load command like:

                  module load buildingspy/4.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty buildingspy/4.0.0-foss-2022a x x x - x x"}, {"location": "available_software/detail/bwa-meth/", "title": "bwa-meth", "text": ""}, {"location": "available_software/detail/bwa-meth/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bwa-meth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bwa-meth, load one of these modules using a module load command like:

                  module load bwa-meth/0.2.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bwa-meth/0.2.6-GCC-11.3.0 x x x x x x bwa-meth/0.2.2-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/bwidget/", "title": "bwidget", "text": ""}, {"location": "available_software/detail/bwidget/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bwidget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bwidget, load one of these modules using a module load command like:

                  module load bwidget/1.9.15-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bwidget/1.9.15-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/bx-python/", "title": "bx-python", "text": ""}, {"location": "available_software/detail/bx-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bx-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bx-python, load one of these modules using a module load command like:

                  module load bx-python/0.10.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bx-python/0.10.0-foss-2023a x x x x x x bx-python/0.9.0-foss-2022a x x x x x x bx-python/0.8.13-foss-2021b x x x - x x bx-python/0.8.9-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/bzip2/", "title": "bzip2", "text": ""}, {"location": "available_software/detail/bzip2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which bzip2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bzip2, load one of these modules using a module load command like:

                  module load bzip2/1.0.8-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bzip2/1.0.8-GCCcore-13.2.0 x x x x x x bzip2/1.0.8-GCCcore-12.3.0 x x x x x x bzip2/1.0.8-GCCcore-12.2.0 x x x x x x bzip2/1.0.8-GCCcore-11.3.0 x x x x x x bzip2/1.0.8-GCCcore-11.2.0 x x x x x x bzip2/1.0.8-GCCcore-10.3.0 x x x x x x bzip2/1.0.8-GCCcore-10.2.0 x x x x x x bzip2/1.0.8-GCCcore-9.3.0 x x x x x x bzip2/1.0.8-GCCcore-8.3.0 x x x x x x bzip2/1.0.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/c-ares/", "title": "c-ares", "text": ""}, {"location": "available_software/detail/c-ares/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which c-ares installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using c-ares, load one of these modules using a module load command like:

                  module load c-ares/1.18.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty c-ares/1.18.1-GCCcore-11.2.0 x x x x x x c-ares/1.17.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/cURL/", "title": "cURL", "text": ""}, {"location": "available_software/detail/cURL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cURL, load one of these modules using a module load command like:

                  module load cURL/8.3.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cURL/8.3.0-GCCcore-13.2.0 x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x cURL/7.83.0-GCCcore-11.3.0 x x x x x x cURL/7.78.0-GCCcore-11.2.0 x x x x x x cURL/7.76.0-GCCcore-10.3.0 x x x x x x cURL/7.72.0-GCCcore-10.2.0 x x x x x x cURL/7.69.1-GCCcore-9.3.0 x x x x x x cURL/7.66.0-GCCcore-8.3.0 x x x x x x cURL/7.63.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/cairo/", "title": "cairo", "text": ""}, {"location": "available_software/detail/cairo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cairo, load one of these modules using a module load command like:

                  module load cairo/1.17.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cairo/1.17.8-GCCcore-12.3.0 x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x cairo/1.17.4-GCCcore-11.3.0 x x x x x x cairo/1.16.0-GCCcore-11.2.0 x x x x x x cairo/1.16.0-GCCcore-10.3.0 x x x x x x cairo/1.16.0-GCCcore-10.2.0 x x x x x x cairo/1.16.0-GCCcore-9.3.0 x x x x x x cairo/1.16.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/canu/", "title": "canu", "text": ""}, {"location": "available_software/detail/canu/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which canu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using canu, load one of these modules using a module load command like:

                  module load canu/2.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty canu/2.2-GCCcore-11.2.0 x x x - x x canu/2.2-GCCcore-10.3.0 - x x - x x canu/2.1.1-GCCcore-10.2.0 - x x - x x canu/1.9-GCCcore-8.3.0-Java-11 - - x - x -"}, {"location": "available_software/detail/carputils/", "title": "carputils", "text": ""}, {"location": "available_software/detail/carputils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which carputils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using carputils, load one of these modules using a module load command like:

                  module load carputils/20210513-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty carputils/20210513-foss-2020b - x x x x x carputils/20200915-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ccache/", "title": "ccache", "text": ""}, {"location": "available_software/detail/ccache/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ccache installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ccache, load one of these modules using a module load command like:

                  module load ccache/4.6.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ccache/4.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/cctbx-base/", "title": "cctbx-base", "text": ""}, {"location": "available_software/detail/cctbx-base/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cctbx-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cctbx-base, load one of these modules using a module load command like:

                  module load cctbx-base/2023.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cctbx-base/2023.5-foss-2022a - - x - x - cctbx-base/2020.8-fosscuda-2020b x - - - x - cctbx-base/2020.8-foss-2020b x x x x x x"}, {"location": "available_software/detail/cdbfasta/", "title": "cdbfasta", "text": ""}, {"location": "available_software/detail/cdbfasta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cdbfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdbfasta, load one of these modules using a module load command like:

                  module load cdbfasta/0.99-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdbfasta/0.99-iccifort-2019.5.281 - x x - x - cdbfasta/0.99-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/cdo-bindings/", "title": "cdo-bindings", "text": ""}, {"location": "available_software/detail/cdo-bindings/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cdo-bindings installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdo-bindings, load one of these modules using a module load command like:

                  module load cdo-bindings/1.5.7-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdo-bindings/1.5.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/cdsapi/", "title": "cdsapi", "text": ""}, {"location": "available_software/detail/cdsapi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cdsapi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdsapi, load one of these modules using a module load command like:

                  module load cdsapi/0.5.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdsapi/0.5.1-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/cell2location/", "title": "cell2location", "text": ""}, {"location": "available_software/detail/cell2location/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cell2location installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cell2location, load one of these modules using a module load command like:

                  module load cell2location/0.05-alpha-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cell2location/0.05-alpha-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/cffi/", "title": "cffi", "text": ""}, {"location": "available_software/detail/cffi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cffi, load one of these modules using a module load command like:

                  module load cffi/1.15.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cffi/1.15.1-GCCcore-13.2.0 x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x cffi/1.15.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/chemprop/", "title": "chemprop", "text": ""}, {"location": "available_software/detail/chemprop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which chemprop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using chemprop, load one of these modules using a module load command like:

                  module load chemprop/1.5.2-foss-2022a-CUDA-11.7.0\n
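
                  Both a CUDA build and a CPU-only build of this version are listed in the table below; which one to load depends on whether the job runs on a GPU node. A minimal sketch of the two alternatives (load one or the other, not both):

                  # on a GPU node\nmodule load chemprop/1.5.2-foss-2022a-CUDA-11.7.0\n# on a CPU-only node (same version, no CUDA suffix)\nmodule load chemprop/1.5.2-foss-2022a\n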

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty chemprop/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - chemprop/1.5.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/chewBBACA/", "title": "chewBBACA", "text": ""}, {"location": "available_software/detail/chewBBACA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which chewBBACA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using chewBBACA, load one of these modules using a module load command like:

                  module load chewBBACA/2.5.5-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty chewBBACA/2.5.5-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/cicero/", "title": "cicero", "text": ""}, {"location": "available_software/detail/cicero/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cicero installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cicero, load one of these modules using a module load command like:

                  module load cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3 x x x x x x cicero/1.3.4.11-foss-2020b-R-4.0.3-Monocle3 - x x x x x"}, {"location": "available_software/detail/cimfomfa/", "title": "cimfomfa", "text": ""}, {"location": "available_software/detail/cimfomfa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cimfomfa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cimfomfa, load one of these modules using a module load command like:

                  module load cimfomfa/22.273-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cimfomfa/22.273-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/code-cli/", "title": "code-cli", "text": ""}, {"location": "available_software/detail/code-cli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which code-cli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using code-cli, load one of these modules using a module load command like:

                  module load code-cli/1.85.1-x64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty code-cli/1.85.1-x64 x x x x x x"}, {"location": "available_software/detail/code-server/", "title": "code-server", "text": ""}, {"location": "available_software/detail/code-server/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which code-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using code-server, load one of these modules using a module load command like:

                  module load code-server/4.9.1\n
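
                  Note that this module carries no compiler toolchain suffix, so it can be loaded on its own. A minimal sketch (the --version call is only assumed here as a quick way to check that the tool is found on the PATH):

                  module load code-server/4.9.1\ncode-server --version\n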

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty code-server/4.9.1 x x x x x x"}, {"location": "available_software/detail/colossalai/", "title": "colossalai", "text": ""}, {"location": "available_software/detail/colossalai/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which colossalai installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using colossalai, load one of these modules using a module load command like:

                  module load colossalai/0.1.8-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty colossalai/0.1.8-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/conan/", "title": "conan", "text": ""}, {"location": "available_software/detail/conan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which conan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using conan, load one of these modules using a module load command like:

                  module load conan/1.60.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty conan/1.60.2-GCCcore-12.3.0 x x x x x x conan/1.58.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/configurable-http-proxy/", "title": "configurable-http-proxy", "text": ""}, {"location": "available_software/detail/configurable-http-proxy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which configurable-http-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using configurable-http-proxy, load one of these modules using a module load command like:

                  module load configurable-http-proxy/4.5.5-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty configurable-http-proxy/4.5.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/cooler/", "title": "cooler", "text": ""}, {"location": "available_software/detail/cooler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cooler, load one of these modules using a module load command like:

                  module load cooler/0.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cooler/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/coverage/", "title": "coverage", "text": ""}, {"location": "available_software/detail/coverage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which coverage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using coverage, load one of these modules using a module load command like:

                  module load coverage/7.2.7-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty coverage/7.2.7-GCCcore-11.3.0 x x x x x x coverage/5.5-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/cppy/", "title": "cppy", "text": ""}, {"location": "available_software/detail/cppy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cppy, load one of these modules using a module load command like:

                  module load cppy/1.2.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cppy/1.2.1-GCCcore-12.3.0 x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x cppy/1.2.1-GCCcore-11.3.0 x x x x x x cppy/1.1.0-GCCcore-11.2.0 x x x x x x cppy/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/cpu_features/", "title": "cpu_features", "text": ""}, {"location": "available_software/detail/cpu_features/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cpu_features installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cpu_features, load one of these modules using a module load command like:

                  module load cpu_features/0.6.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cpu_features/0.6.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/cryoDRGN/", "title": "cryoDRGN", "text": ""}, {"location": "available_software/detail/cryoDRGN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cryoDRGN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cryoDRGN, load one of these modules using a module load command like:

                  module load cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1 x - - - x - cryoDRGN/0.3.5-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/cryptography/", "title": "cryptography", "text": ""}, {"location": "available_software/detail/cryptography/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cryptography installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cryptography, load one of these modules using a module load command like:

                  module load cryptography/41.0.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cryptography/41.0.5-GCCcore-13.2.0 x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/cuDNN/", "title": "cuDNN", "text": ""}, {"location": "available_software/detail/cuDNN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cuDNN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuDNN, load one of these modules using a module load command like:

                  module load cuDNN/8.9.2.26-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuDNN/8.9.2.26-CUDA-12.1.1 x - x - x - cuDNN/8.4.1.50-CUDA-11.7.0 x - x - x - cuDNN/8.2.2.26-CUDA-11.4.1 x - - - x - cuDNN/8.2.1.32-CUDA-11.3.1 x x x - x x cuDNN/8.0.4.30-CUDA-11.1.1 x - - - x x"}, {"location": "available_software/detail/cuTENSOR/", "title": "cuTENSOR", "text": ""}, {"location": "available_software/detail/cuTENSOR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cuTENSOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuTENSOR, load one of these modules using a module load command like:

                  module load cuTENSOR/1.2.2.5-CUDA-11.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuTENSOR/1.2.2.5-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/cutadapt/", "title": "cutadapt", "text": ""}, {"location": "available_software/detail/cutadapt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cutadapt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cutadapt, load one of these modules using a module load command like:

                  module load cutadapt/4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cutadapt/4.2-GCCcore-11.3.0 x x x x x x cutadapt/3.5-GCCcore-11.2.0 x x x - x x cutadapt/3.4-GCCcore-10.2.0 - x x x x x cutadapt/2.10-GCCcore-9.3.0-Python-3.8.2 - x x - x x cutadapt/2.7-GCCcore-8.3.0-Python-3.7.4 - x x - x x cutadapt/1.18-GCCcore-8.3.0-Python-2.7.16 - x x - x x cutadapt/1.18-GCCcore-8.3.0 - x x - x x cutadapt/1.18-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/cuteSV/", "title": "cuteSV", "text": ""}, {"location": "available_software/detail/cuteSV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cuteSV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuteSV, load one of these modules using a module load command like:

                  module load cuteSV/2.0.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuteSV/2.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/cython-blis/", "title": "cython-blis", "text": ""}, {"location": "available_software/detail/cython-blis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which cython-blis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cython-blis, load one of these modules using a module load command like:

                  module load cython-blis/0.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cython-blis/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dask/", "title": "dask", "text": ""}, {"location": "available_software/detail/dask/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dask, load one of these modules using a module load command like:

                  module load dask/2023.12.1-foss-2023a\n
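
                  In a batch job, the same module load line goes into the job script before the command that uses dask; a minimal sketch, where my_dask_workflow.py is a hypothetical script name and the python interpreter is assumed to be provided by the dependencies the dask module loads:

                  #!/bin/bash\nmodule load dask/2023.12.1-foss-2023a\npython my_dask_workflow.py\n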

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dask/2023.12.1-foss-2023a x x x x x x dask/2022.10.0-foss-2022a x x x x x x dask/2022.1.0-foss-2021b x x x x x x dask/2021.9.1-foss-2021a x x x - x x dask/2021.2.0-intel-2020b - x x - x x dask/2021.2.0-fosscuda-2020b x - - - x - dask/2021.2.0-foss-2020b - x x x x x dask/2.18.1-intel-2020a-Python-3.8.2 - x x - x x dask/2.18.1-foss-2020a-Python-3.8.2 - x x - x x dask/2.8.0-intel-2019b-Python-3.7.4 - x x - x x dask/2.8.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dbus-glib/", "title": "dbus-glib", "text": ""}, {"location": "available_software/detail/dbus-glib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dbus-glib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dbus-glib, load one of these modules using a module load command like:

                  module load dbus-glib/0.112-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dbus-glib/0.112-GCCcore-11.2.0 x x x x x x dbus-glib/0.112-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/dclone/", "title": "dclone", "text": ""}, {"location": "available_software/detail/dclone/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dclone, load one of these modules using a module load command like:

                  module load dclone/2.3-0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dclone/2.3-0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/deal.II/", "title": "deal.II", "text": ""}, {"location": "available_software/detail/deal.II/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which deal.II installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deal.II, load one of these modules using a module load command like:

                  module load deal.II/9.3.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deal.II/9.3.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/decona/", "title": "decona", "text": ""}, {"location": "available_software/detail/decona/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which decona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using decona, load one of these modules using a module load command like:

                  module load decona/0.1.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty decona/0.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepTools/", "title": "deepTools", "text": ""}, {"location": "available_software/detail/deepTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which deepTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deepTools, load one of these modules using a module load command like:

                  module load deepTools/3.5.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deepTools/3.5.1-foss-2021b x x x - x x deepTools/3.3.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepdiff/", "title": "deepdiff", "text": ""}, {"location": "available_software/detail/deepdiff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which deepdiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deepdiff, load one of these modules using a module load command like:

                  module load deepdiff/6.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deepdiff/6.7.1-GCCcore-12.3.0 x x x x x x deepdiff/6.7.1-GCCcore-12.2.0 x x x x x x deepdiff/5.8.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/detectron2/", "title": "detectron2", "text": ""}, {"location": "available_software/detail/detectron2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which detectron2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using detectron2, load one of these modules using a module load command like:

                  module load detectron2/0.6-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty detectron2/0.6-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/devbio-napari/", "title": "devbio-napari", "text": ""}, {"location": "available_software/detail/devbio-napari/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which devbio-napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using devbio-napari, load one of these modules using a module load command like:

                  module load devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0 x - - - x - devbio-napari/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dicom2nifti/", "title": "dicom2nifti", "text": ""}, {"location": "available_software/detail/dicom2nifti/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dicom2nifti installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dicom2nifti, load one of these modules using a module load command like:

                  module load dicom2nifti/2.3.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dicom2nifti/2.3.0-fosscuda-2020b x - - - x - dicom2nifti/2.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/dijitso/", "title": "dijitso", "text": ""}, {"location": "available_software/detail/dijitso/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dijitso installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dijitso, load one of these modules using a module load command like:

                  module load dijitso/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dijitso/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dill/", "title": "dill", "text": ""}, {"location": "available_software/detail/dill/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dill, load one of these modules using a module load command like:

                  module load dill/0.3.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dill/0.3.7-GCCcore-12.3.0 x x x x x x dill/0.3.7-GCCcore-12.2.0 x x x x x x dill/0.3.6-GCCcore-11.3.0 x x x x x x dill/0.3.4-GCCcore-11.2.0 x x x x x x dill/0.3.4-GCCcore-10.3.0 x x x - x x dill/0.3.3-GCCcore-10.2.0 - x x x x x dill/0.3.3-GCCcore-9.3.0 - x x - - x"}, {"location": "available_software/detail/dlib/", "title": "dlib", "text": ""}, {"location": "available_software/detail/dlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dlib, load one of these modules using a module load command like:

                  module load dlib/19.22-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dlib/19.22-foss-2021a-CUDA-11.3.1 - - - - x - dlib/19.22-foss-2021a - x x - x x"}, {"location": "available_software/detail/dm-haiku/", "title": "dm-haiku", "text": ""}, {"location": "available_software/detail/dm-haiku/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dm-haiku installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dm-haiku, load one of these modules using a module load command like:

                  module load dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/dm-tree/", "title": "dm-tree", "text": ""}, {"location": "available_software/detail/dm-tree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dm-tree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dm-tree, load one of these modules using a module load command like:

                  module load dm-tree/0.1.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dm-tree/0.1.8-GCCcore-11.3.0 x x x x x x dm-tree/0.1.6-GCCcore-10.3.0 x x x x x x dm-tree/0.1.5-GCCcore-10.2.0 x x x x x x dm-tree/0.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/dorado/", "title": "dorado", "text": ""}, {"location": "available_software/detail/dorado/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dorado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dorado, load one of these modules using a module load command like:

                  module load dorado/0.5.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dorado/0.5.1-foss-2022a-CUDA-11.7.0 x - x - x - dorado/0.3.1-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.3.0-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.1.1-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/double-conversion/", "title": "double-conversion", "text": ""}, {"location": "available_software/detail/double-conversion/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which double-conversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using double-conversion, load one of these modules using a module load command like:

                  module load double-conversion/3.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x double-conversion/3.2.0-GCCcore-11.3.0 x x x x x x double-conversion/3.1.5-GCCcore-11.2.0 x x x x x x double-conversion/3.1.5-GCCcore-10.3.0 x x x x x x double-conversion/3.1.5-GCCcore-10.2.0 x x x x x x double-conversion/3.1.5-GCCcore-9.3.0 - x x - x x double-conversion/3.1.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/drmaa-python/", "title": "drmaa-python", "text": ""}, {"location": "available_software/detail/drmaa-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which drmaa-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using drmaa-python, load one of these modules using a module load command like:

                  module load drmaa-python/0.7.9-GCCcore-12.2.0-slurm\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty drmaa-python/0.7.9-GCCcore-12.2.0-slurm x x x x x x"}, {"location": "available_software/detail/dtcwt/", "title": "dtcwt", "text": ""}, {"location": "available_software/detail/dtcwt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dtcwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dtcwt, load one of these modules using a module load command like:

                  module load dtcwt/0.12.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dtcwt/0.12.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/duplex-tools/", "title": "duplex-tools", "text": ""}, {"location": "available_software/detail/duplex-tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which duplex-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using duplex-tools, load one of these modules using a module load command like:

                  module load duplex-tools/0.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty duplex-tools/0.3.3-foss-2022a x x x x x x duplex-tools/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dynesty/", "title": "dynesty", "text": ""}, {"location": "available_software/detail/dynesty/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which dynesty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dynesty, load one of these modules using a module load command like:

                  module load dynesty/2.1.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dynesty/2.1.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/eSpeak-NG/", "title": "eSpeak-NG", "text": ""}, {"location": "available_software/detail/eSpeak-NG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which eSpeak-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using eSpeak-NG, load one of these modules using a module load command like:

                  module load eSpeak-NG/1.50-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty eSpeak-NG/1.50-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ebGSEA/", "title": "ebGSEA", "text": ""}, {"location": "available_software/detail/ebGSEA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ebGSEA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ebGSEA, load one of these modules using a module load command like:

                  module load ebGSEA/0.1.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ebGSEA/0.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ecCodes/", "title": "ecCodes", "text": ""}, {"location": "available_software/detail/ecCodes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ecCodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ecCodes, load one of these modules using a module load command like:

                  module load ecCodes/2.24.2-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ecCodes/2.24.2-gompi-2021b x x x x x x ecCodes/2.22.1-gompi-2021a x x x - x x ecCodes/2.15.0-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/edlib/", "title": "edlib", "text": ""}, {"location": "available_software/detail/edlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which edlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using edlib, load one of these modules using a module load command like:

                  module load edlib/1.3.9-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty edlib/1.3.9-GCC-11.3.0 x x x x x x edlib/1.3.9-GCC-11.2.0 x x x - x x edlib/1.3.9-GCC-10.3.0 x x x - x x edlib/1.3.9-GCC-10.2.0 - x x x x x edlib/1.3.8.post2-iccifort-2020.1.217-Python-3.8.2 - x x - x - edlib/1.3.8.post1-iccifort-2019.5.281-Python-3.7.4 - x x - x - edlib/1.3.8.post1-GCC-9.3.0-Python-3.8.2 - x x - x x edlib/1.3.8.post1-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/eggnog-mapper/", "title": "eggnog-mapper", "text": ""}, {"location": "available_software/detail/eggnog-mapper/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which eggnog-mapper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using eggnog-mapper, load one of these modules using a module load command like:

                  module load eggnog-mapper/2.1.10-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty eggnog-mapper/2.1.10-foss-2020b x x x x x x eggnog-mapper/2.1.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/einops/", "title": "einops", "text": ""}, {"location": "available_software/detail/einops/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which einops installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using einops, load one of these modules using a module load command like:

                  module load einops/0.4.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty einops/0.4.1-GCCcore-11.3.0 x x x x x x einops/0.4.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/elfutils/", "title": "elfutils", "text": ""}, {"location": "available_software/detail/elfutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which elfutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using elfutils, load one of these modules using a module load command like:

                  module load elfutils/0.187-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty elfutils/0.187-GCCcore-11.3.0 x x x x x x elfutils/0.185-GCCcore-11.2.0 x x x x x x elfutils/0.185-GCCcore-10.3.0 x x x x x x elfutils/0.185-GCCcore-8.3.0 x - - - x - elfutils/0.183-GCCcore-10.2.0 - - x x x -"}, {"location": "available_software/detail/elprep/", "title": "elprep", "text": ""}, {"location": "available_software/detail/elprep/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which elprep installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using elprep, load one of these modules using a module load command like:

                  module load elprep/5.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty elprep/5.1.1 - x x - x -"}, {"location": "available_software/detail/enchant-2/", "title": "enchant-2", "text": ""}, {"location": "available_software/detail/enchant-2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which enchant-2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using enchant-2, load one of these modules using a module load command like:

                  module load enchant-2/2.3.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty enchant-2/2.3.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/epiScanpy/", "title": "epiScanpy", "text": ""}, {"location": "available_software/detail/epiScanpy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which epiScanpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using epiScanpy, load one of these modules using a module load command like:

                  module load epiScanpy/0.4.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty epiScanpy/0.4.0-foss-2022a x x x x x x epiScanpy/0.3.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/exiv2/", "title": "exiv2", "text": ""}, {"location": "available_software/detail/exiv2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which exiv2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using exiv2, load one of these modules using a module load command like:

                  module load exiv2/0.27.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty exiv2/0.27.5-GCCcore-11.2.0 x x x x x x exiv2/0.27.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/expat/", "title": "expat", "text": ""}, {"location": "available_software/detail/expat/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which expat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using expat, load one of these modules using a module load command like:

                  module load expat/2.5.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty expat/2.5.0-GCCcore-13.2.0 x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x expat/2.4.8-GCCcore-11.3.0 x x x x x x expat/2.4.1-GCCcore-11.2.0 x x x x x x expat/2.2.9-GCCcore-10.3.0 x x x x x x expat/2.2.9-GCCcore-10.2.0 x x x x x x expat/2.2.9-GCCcore-9.3.0 x x x x x x expat/2.2.7-GCCcore-8.3.0 x x x x x x expat/2.2.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/expecttest/", "title": "expecttest", "text": ""}, {"location": "available_software/detail/expecttest/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which expecttest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using expecttest, load one of these modules using a module load command like:

                  module load expecttest/0.1.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty expecttest/0.1.5-GCCcore-12.3.0 x x x x x x expecttest/0.1.3-GCCcore-12.2.0 x x x x x x expecttest/0.1.3-GCCcore-11.3.0 x x x x x x expecttest/0.1.3-GCCcore-11.2.0 x x x x x x expecttest/0.1.3-GCCcore-10.3.0 x x x x x x expecttest/0.1.3-GCCcore-10.2.0 x - - - - -"}, {"location": "available_software/detail/fasta-reader/", "title": "fasta-reader", "text": ""}, {"location": "available_software/detail/fasta-reader/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fasta-reader installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fasta-reader, load one of these modules using a module load command like:

                  module load fasta-reader/3.0.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fasta-reader/3.0.2-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/fastahack/", "title": "fastahack", "text": ""}, {"location": "available_software/detail/fastahack/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fastahack installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastahack, load one of these modules using a module load command like:

                  module load fastahack/1.0.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastahack/1.0.0-GCCcore-11.3.0 x x x x x x fastahack/1.0.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/fastai/", "title": "fastai", "text": ""}, {"location": "available_software/detail/fastai/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fastai installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastai, load one of these modules using a module load command like:

                  module load fastai/2.7.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastai/2.7.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/fastp/", "title": "fastp", "text": ""}, {"location": "available_software/detail/fastp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fastp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastp, load one of these modules using a module load command like:

                  module load fastp/0.23.2-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastp/0.23.2-GCC-11.2.0 x x x - x x fastp/0.20.1-iccifort-2020.1.217 - x x - x - fastp/0.20.0-iccifort-2019.5.281 - x - - - - fastp/0.20.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/fermi-lite/", "title": "fermi-lite", "text": ""}, {"location": "available_software/detail/fermi-lite/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fermi-lite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fermi-lite, load one of these modules using a module load command like:

                  module load fermi-lite/20190320-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fermi-lite/20190320-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/festival/", "title": "festival", "text": ""}, {"location": "available_software/detail/festival/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which festival installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using festival, load one of these modules using a module load command like:

                  module load festival/2.5.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty festival/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/fetchMG/", "title": "fetchMG", "text": ""}, {"location": "available_software/detail/fetchMG/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fetchMG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fetchMG, load one of these modules using a module load command like:

                  module load fetchMG/1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fetchMG/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ffnvcodec/", "title": "ffnvcodec", "text": ""}, {"location": "available_software/detail/ffnvcodec/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ffnvcodec, load one of these modules using a module load command like:

                  module load ffnvcodec/12.0.16.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ffnvcodec/12.0.16.0 x x x x x x ffnvcodec/11.1.5.2 x x x x x x"}, {"location": "available_software/detail/file/", "title": "file", "text": ""}, {"location": "available_software/detail/file/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which file installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using file, load one of these modules using a module load command like:

                  module load file/5.43-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty file/5.43-GCCcore-11.3.0 x x x x x x file/5.41-GCCcore-11.2.0 x x x x x x file/5.39-GCCcore-10.2.0 - x x x x x file/5.38-GCCcore-9.3.0 - x x - x x file/5.38-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/filevercmp/", "title": "filevercmp", "text": ""}, {"location": "available_software/detail/filevercmp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which filevercmp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using filevercmp, load one of these modules using a module load command like:

                  module load filevercmp/20191210-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty filevercmp/20191210-GCCcore-11.3.0 x x x x x x filevercmp/20191210-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/finder/", "title": "finder", "text": ""}, {"location": "available_software/detail/finder/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which finder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using finder, load one of these modules using a module load command like:

                  module load finder/1.1.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty finder/1.1.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/flair-NLP/", "title": "flair-NLP", "text": ""}, {"location": "available_software/detail/flair-NLP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flair-NLP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flair-NLP, load one of these modules using a module load command like:

                  module load flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1 x - - - x - flair-NLP/0.11.3-foss-2021a x x x - x x"}, {"location": "available_software/detail/flatbuffers-python/", "title": "flatbuffers-python", "text": ""}, {"location": "available_software/detail/flatbuffers-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flatbuffers-python, load one of these modules using a module load command like:

                  module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers-python/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.3.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-10.3.0 x x x x x x flatbuffers-python/1.12-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/flatbuffers/", "title": "flatbuffers", "text": ""}, {"location": "available_software/detail/flatbuffers/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flatbuffers, load one of these modules using a module load command like:

                  module load flatbuffers/23.5.26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers/2.0.7-GCCcore-11.3.0 x x x x x x flatbuffers/2.0.0-GCCcore-11.2.0 x x x x x x flatbuffers/2.0.0-GCCcore-10.3.0 x x x x x x flatbuffers/1.12.0-GCCcore-10.2.0 x x x x x x flatbuffers/1.12.0-GCCcore-9.3.0 - x x - x x flatbuffers/1.12.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flex/", "title": "flex", "text": ""}, {"location": "available_software/detail/flex/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flex installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flex, load one of these modules using a module load command like:

                  module load flex/2.6.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flex/2.6.4-GCCcore-13.2.0 x x x x x x flex/2.6.4-GCCcore-12.3.0 x x x x x x flex/2.6.4-GCCcore-12.2.0 x x x x x x flex/2.6.4-GCCcore-11.3.0 x x x x x x flex/2.6.4-GCCcore-11.2.0 x x x x x x flex/2.6.4-GCCcore-10.3.0 x x x x x x flex/2.6.4-GCCcore-10.2.0 x x x x x x flex/2.6.4-GCCcore-9.3.0 x x x x x x flex/2.6.4-GCCcore-8.3.0 x x x x x x flex/2.6.4-GCCcore-8.2.0 - x - - - - flex/2.6.4 x x x x x x flex/2.6.3 x x x x x x flex/2.5.39-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flit/", "title": "flit", "text": ""}, {"location": "available_software/detail/flit/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flit, load one of these modules using a module load command like:

                  module load flit/3.9.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flit/3.9.0-GCCcore-13.2.0 x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/flowFDA/", "title": "flowFDA", "text": ""}, {"location": "available_software/detail/flowFDA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which flowFDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flowFDA, load one of these modules using a module load command like:

                  module load flowFDA/0.99-20220602-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flowFDA/0.99-20220602-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/fmt/", "title": "fmt", "text": ""}, {"location": "available_software/detail/fmt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fmt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fmt, load one of these modules using a module load command like:

                  module load fmt/10.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fmt/10.1.0-GCCcore-12.3.0 x x x x x x fmt/8.1.1-GCCcore-11.2.0 x x x - x x fmt/7.1.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/fontconfig/", "title": "fontconfig", "text": ""}, {"location": "available_software/detail/fontconfig/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fontconfig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fontconfig, load one of these modules using a module load command like:

                  module load fontconfig/2.14.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x fontconfig/2.14.0-GCCcore-11.3.0 x x x x x x fontconfig/2.13.94-GCCcore-11.2.0 x x x x x x fontconfig/2.13.93-GCCcore-10.3.0 x x x x x x fontconfig/2.13.92-GCCcore-10.2.0 x x x x x x fontconfig/2.13.92-GCCcore-9.3.0 x x x x x x fontconfig/2.13.1-GCCcore-8.3.0 x x x - x x fontconfig/2.13.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/foss/", "title": "foss", "text": ""}, {"location": "available_software/detail/foss/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which foss installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using foss, load one of these modules using a module load command like:

                  module load foss/2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty foss/2023b x x x x x x foss/2023a x x x x x x foss/2022b x x x x x x foss/2022a x x x x x x foss/2021b x x x x x x foss/2021a x x x x x x foss/2020b x x x x x x foss/2020a - x x - x x foss/2019b x x x - x x"}, {"location": "available_software/detail/fosscuda/", "title": "fosscuda", "text": ""}, {"location": "available_software/detail/fosscuda/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fosscuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fosscuda, load one of these modules using a module load command like:

                  module load fosscuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fosscuda/2020b x - - - x -"}, {"location": "available_software/detail/freebayes/", "title": "freebayes", "text": ""}, {"location": "available_software/detail/freebayes/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freebayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freebayes, load one of these modules using a module load command like:

                  module load freebayes/1.3.5-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freebayes/1.3.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/freeglut/", "title": "freeglut", "text": ""}, {"location": "available_software/detail/freeglut/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freeglut installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freeglut, load one of these modules using a module load command like:

                  module load freeglut/3.2.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freeglut/3.2.2-GCCcore-11.3.0 x x x x x x freeglut/3.2.1-GCCcore-11.2.0 x x x x x x freeglut/3.2.1-GCCcore-10.3.0 - x x - x x freeglut/3.2.1-GCCcore-10.2.0 - x x x x x freeglut/3.2.1-GCCcore-9.3.0 - x x - x x freeglut/3.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/freetype-py/", "title": "freetype-py", "text": ""}, {"location": "available_software/detail/freetype-py/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freetype-py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freetype-py, load one of these modules using a module load command like:

                  module load freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/freetype/", "title": "freetype", "text": ""}, {"location": "available_software/detail/freetype/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which freetype installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freetype, load one of these modules using a module load command like:

                  module load freetype/2.13.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freetype/2.13.2-GCCcore-13.2.0 x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x freetype/2.12.1-GCCcore-11.3.0 x x x x x x freetype/2.11.0-GCCcore-11.2.0 x x x x x x freetype/2.10.4-GCCcore-10.3.0 x x x x x x freetype/2.10.3-GCCcore-10.2.0 x x x x x x freetype/2.10.1-GCCcore-9.3.0 x x x x x x freetype/2.10.1-GCCcore-8.3.0 x x x - x x freetype/2.9.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/fsom/", "title": "fsom", "text": ""}, {"location": "available_software/detail/fsom/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which fsom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fsom, load one of these modules using a module load command like:

                  module load fsom/20151117-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fsom/20151117-GCCcore-11.3.0 x x x x x x fsom/20141119-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/funannotate/", "title": "funannotate", "text": ""}, {"location": "available_software/detail/funannotate/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which funannotate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using funannotate, load one of these modules using a module load command like:

                  module load funannotate/1.8.13-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty funannotate/1.8.13-foss-2021b x x x x x x"}, {"location": "available_software/detail/g2clib/", "title": "g2clib", "text": ""}, {"location": "available_software/detail/g2clib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which g2clib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2clib, load one of these modules using a module load command like:

                  module load g2clib/1.6.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2clib/1.6.0-GCCcore-9.3.0 - x x - x x g2clib/1.6.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2lib/", "title": "g2lib", "text": ""}, {"location": "available_software/detail/g2lib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which g2lib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2lib, load one of these modules using a module load command like:

                  module load g2lib/3.1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2lib/3.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2log/", "title": "g2log", "text": ""}, {"location": "available_software/detail/g2log/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which g2log installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2log, load one of these modules using a module load command like:

                  module load g2log/1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2log/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/garnett/", "title": "garnett", "text": ""}, {"location": "available_software/detail/garnett/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which garnett installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using garnett, load one of these modules using a module load command like:

                  module load garnett/0.1.20-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty garnett/0.1.20-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/gawk/", "title": "gawk", "text": ""}, {"location": "available_software/detail/gawk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gawk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gawk, load one of these modules using a module load command like:

                  module load gawk/5.1.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gawk/5.1.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/gbasis/", "title": "gbasis", "text": ""}, {"location": "available_software/detail/gbasis/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gbasis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gbasis, load one of these modules using a module load command like:

                  module load gbasis/20210904-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gbasis/20210904-intel-2022a x x x x x x"}, {"location": "available_software/detail/gc/", "title": "gc", "text": ""}, {"location": "available_software/detail/gc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gc, load one of these modules using a module load command like:

                  module load gc/8.2.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gc/8.2.0-GCCcore-11.2.0 x x x x x x gc/8.0.4-GCCcore-10.3.0 - x x - x x gc/7.6.12-GCCcore-9.3.0 - x x - x x gc/7.6.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gcccuda/", "title": "gcccuda", "text": ""}, {"location": "available_software/detail/gcccuda/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gcccuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcccuda, load one of these modules using a module load command like:

                  module load gcccuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcccuda/2020b x x x x x x gcccuda/2019b x - - - x -"}, {"location": "available_software/detail/gcloud/", "title": "gcloud", "text": ""}, {"location": "available_software/detail/gcloud/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gcloud installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcloud, load one of these modules using a module load command like:

                  module load gcloud/382.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcloud/382.0.0 - x x - x x"}, {"location": "available_software/detail/gcsfs/", "title": "gcsfs", "text": ""}, {"location": "available_software/detail/gcsfs/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gcsfs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcsfs, load one of these modules using a module load command like:

                  module load gcsfs/2023.12.2.post1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcsfs/2023.12.2.post1-foss-2023a x x x x x x"}, {"location": "available_software/detail/gdbm/", "title": "gdbm", "text": ""}, {"location": "available_software/detail/gdbm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gdbm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gdbm, load one of these modules using a module load command like:

                  module load gdbm/1.18.1-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gdbm/1.18.1-foss-2020a - x x - x x"}, {"location": "available_software/detail/gdc-client/", "title": "gdc-client", "text": ""}, {"location": "available_software/detail/gdc-client/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gdc-client installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gdc-client, load one of these modules using a module load command like:

                  module load gdc-client/1.6.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gdc-client/1.6.0-GCCcore-10.2.0 x x x x - x"}, {"location": "available_software/detail/gengetopt/", "title": "gengetopt", "text": ""}, {"location": "available_software/detail/gengetopt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gengetopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gengetopt, load one of these modules using a module load command like:

                  module load gengetopt/2.23-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gengetopt/2.23-GCCcore-10.2.0 - x x x x x gengetopt/2.23-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/genomepy/", "title": "genomepy", "text": ""}, {"location": "available_software/detail/genomepy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which genomepy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using genomepy, load one of these modules using a module load command like:

                  module load genomepy/0.15.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty genomepy/0.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/genozip/", "title": "genozip", "text": ""}, {"location": "available_software/detail/genozip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which genozip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using genozip, load one of these modules using a module load command like:

                  module load genozip/13.0.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty genozip/13.0.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/gensim/", "title": "gensim", "text": ""}, {"location": "available_software/detail/gensim/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gensim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gensim, load one of these modules using a module load command like:

                  module load gensim/4.2.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gensim/4.2.0-foss-2021a x x x - x x gensim/3.8.3-intel-2020b - x x - x x gensim/3.8.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/geopandas/", "title": "geopandas", "text": ""}, {"location": "available_software/detail/geopandas/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which geopandas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using geopandas, load one of these modules using a module load command like:

                  module load geopandas/0.12.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty geopandas/0.12.2-foss-2022b x x x x x x geopandas/0.8.1-intel-2019b-Python-3.7.4 - - x - x x geopandas/0.8.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/gettext/", "title": "gettext", "text": ""}, {"location": "available_software/detail/gettext/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gettext installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gettext, load one of these modules using a module load command like:

                  module load gettext/0.22-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gettext/0.22-GCCcore-13.2.0 x x x x x x gettext/0.22 x x x x x x gettext/0.21.1-GCCcore-12.3.0 x x x x x x gettext/0.21.1-GCCcore-12.2.0 x x x x x x gettext/0.21.1 x x x x x x gettext/0.21-GCCcore-11.3.0 x x x x x x gettext/0.21-GCCcore-11.2.0 x x x x x x gettext/0.21-GCCcore-10.3.0 x x x x x x gettext/0.21-GCCcore-10.2.0 x x x x x x gettext/0.21 x x x x x x gettext/0.20.1-GCCcore-9.3.0 x x x x x x gettext/0.20.1-GCCcore-8.3.0 x x x - x x gettext/0.20.1 x x x x x x gettext/0.19.8.1-GCCcore-8.2.0 - x - - - - gettext/0.19.8.1 x x x x x x"}, {"location": "available_software/detail/gexiv2/", "title": "gexiv2", "text": ""}, {"location": "available_software/detail/gexiv2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gexiv2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gexiv2, load one of these modules using a module load command like:

                  module load gexiv2/0.12.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gexiv2/0.12.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/gfbf/", "title": "gfbf", "text": ""}, {"location": "available_software/detail/gfbf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gfbf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gfbf, load one of these modules using a module load command like:

                  module load gfbf/2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gfbf/2023b x x x x x x gfbf/2023a x x x x x x gfbf/2022b x x x x x x"}, {"location": "available_software/detail/gffread/", "title": "gffread", "text": ""}, {"location": "available_software/detail/gffread/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gffread installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gffread, load one of these modules using a module load command like:

                  module load gffread/0.12.7-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gffread/0.12.7-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/gffutils/", "title": "gffutils", "text": ""}, {"location": "available_software/detail/gffutils/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gffutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gffutils, load one of these modules using a module load command like:

                  module load gffutils/0.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gffutils/0.12-foss-2022b x x x x x x"}, {"location": "available_software/detail/gflags/", "title": "gflags", "text": ""}, {"location": "available_software/detail/gflags/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gflags installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gflags, load one of these modules using a module load command like:

                  module load gflags/2.2.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gflags/2.2.2-GCCcore-12.2.0 x x x x x x gflags/2.2.2-GCCcore-11.3.0 x x x x x x gflags/2.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/giflib/", "title": "giflib", "text": ""}, {"location": "available_software/detail/giflib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which giflib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using giflib, load one of these modules using a module load command like:

                  module load giflib/5.2.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty giflib/5.2.1-GCCcore-12.3.0 x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x giflib/5.2.1-GCCcore-11.3.0 x x x x x x giflib/5.2.1-GCCcore-11.2.0 x x x x x x giflib/5.2.1-GCCcore-10.3.0 x x x x x x giflib/5.2.1-GCCcore-10.2.0 x x x x x x giflib/5.2.1-GCCcore-9.3.0 - x x - x x giflib/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/git-lfs/", "title": "git-lfs", "text": ""}, {"location": "available_software/detail/git-lfs/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which git-lfs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using git-lfs, load one of these modules using a module load command like:

                  module load git-lfs/3.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty git-lfs/3.2.0 x x x - x x"}, {"location": "available_software/detail/git/", "title": "git", "text": ""}, {"location": "available_software/detail/git/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which git installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using git, load one of these modules using a module load command like:

                  module load git/2.42.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty git/2.42.0-GCCcore-13.2.0 x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x git/2.36.0-GCCcore-11.3.0-nodocs x x x x x x git/2.33.1-GCCcore-11.2.0-nodocs x x x x x x git/2.32.0-GCCcore-10.3.0-nodocs x x x x x x git/2.28.0-GCCcore-10.2.0-nodocs x x x x x x git/2.23.0-GCCcore-9.3.0-nodocs x x x x x x git/2.23.0-GCCcore-8.3.0-nodocs - x x - x x git/2.23.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glew/", "title": "glew", "text": ""}, {"location": "available_software/detail/glew/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glew installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glew, load one of these modules using a module load command like:

                  module load glew/2.2.0-GCCcore-12.3.0-osmesa\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glew/2.2.0-GCCcore-12.3.0-osmesa x x x x x x glew/2.2.0-GCCcore-12.2.0-egl x x x x x x glew/2.2.0-GCCcore-11.2.0-osmesa x x x x x x glew/2.2.0-GCCcore-11.2.0-egl x x x x x x glew/2.1.0-GCCcore-10.2.0 x x x x x x glew/2.1.0-GCCcore-9.3.0 - x x - x x glew/2.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glib-networking/", "title": "glib-networking", "text": ""}, {"location": "available_software/detail/glib-networking/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glib-networking installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glib-networking, load one of these modules using a module load command like:

                  module load glib-networking/2.72.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glib-networking/2.72.1-GCCcore-11.2.0 x x x x x x glib-networking/2.68.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/glibc/", "title": "glibc", "text": ""}, {"location": "available_software/detail/glibc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glibc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glibc, load one of these modules using a module load command like:

                  module load glibc/2.30-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glibc/2.30-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/glog/", "title": "glog", "text": ""}, {"location": "available_software/detail/glog/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which glog installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glog, load one of these modules using a module load command like:

                  module load glog/0.6.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glog/0.6.0-GCCcore-12.2.0 x x x x x x glog/0.6.0-GCCcore-11.3.0 x x x x x x glog/0.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmpy2/", "title": "gmpy2", "text": ""}, {"location": "available_software/detail/gmpy2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gmpy2, load one of these modules using a module load command like:

                  module load gmpy2/2.1.5-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gmpy2/2.1.5-GCC-12.3.0 x x x x x x gmpy2/2.1.5-GCC-12.2.0 x x x x x x gmpy2/2.1.2-intel-compilers-2022.1.0 x x x x x x gmpy2/2.1.2-intel-compilers-2021.4.0 x x x x x x gmpy2/2.1.2-GCC-11.3.0 x x x x x x gmpy2/2.1.2-GCC-11.2.0 x x x - x x gmpy2/2.1.0b5-GCC-10.2.0 - x x x x x gmpy2/2.1.0b5-GCC-9.3.0 - x x - x x gmpy2/2.1.0b4-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmsh/", "title": "gmsh", "text": ""}, {"location": "available_software/detail/gmsh/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gmsh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gmsh, load one of these modules using a module load command like:

                  module load gmsh/4.5.6-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gmsh/4.5.6-intel-2019b-Python-2.7.16 - x x - x x gmsh/4.5.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/gnuplot/", "title": "gnuplot", "text": ""}, {"location": "available_software/detail/gnuplot/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gnuplot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gnuplot, load one of these modules using a module load command like:

                  module load gnuplot/5.4.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x gnuplot/5.4.4-GCCcore-11.3.0 x x x x x x gnuplot/5.4.2-GCCcore-11.2.0 x x x x x x gnuplot/5.4.2-GCCcore-10.3.0 x x x x x x gnuplot/5.4.1-GCCcore-10.2.0 x x x x x x gnuplot/5.2.8-GCCcore-9.3.0 - x x - x x gnuplot/5.2.8-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/goalign/", "title": "goalign", "text": ""}, {"location": "available_software/detail/goalign/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which goalign installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using goalign, load one of these modules using a module load command like:

                  module load goalign/0.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty goalign/0.3.2 - - x - x -"}, {"location": "available_software/detail/gobff/", "title": "gobff", "text": ""}, {"location": "available_software/detail/gobff/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gobff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gobff, load one of these modules using a module load command like:

                  module load gobff/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gobff/2020b - x - - - -"}, {"location": "available_software/detail/gomkl/", "title": "gomkl", "text": ""}, {"location": "available_software/detail/gomkl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gomkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gomkl, load one of these modules using a module load command like:

                  module load gomkl/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gomkl/2023a x x x x x x gomkl/2021a x x x x x x gomkl/2020a - x x x x x"}, {"location": "available_software/detail/gompi/", "title": "gompi", "text": ""}, {"location": "available_software/detail/gompi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gompi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gompi, load one of these modules using a module load command like:

                  module load gompi/2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gompi/2023b x x x x x x gompi/2023a x x x x x x gompi/2022b x x x x x x gompi/2022a x x x x x x gompi/2021b x x x x x x gompi/2021a x x x x x x gompi/2020b x x x x x x gompi/2020a - x x x x x gompi/2019b x x x x x x"}, {"location": "available_software/detail/gompic/", "title": "gompic", "text": ""}, {"location": "available_software/detail/gompic/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which gompic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gompic, load one of these modules using a module load command like:

                  module load gompic/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gompic/2020b x x - - x x"}, {"location": "available_software/detail/googletest/", "title": "googletest", "text": ""}, {"location": "available_software/detail/googletest/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which googletest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using googletest, load one of these modules using a module load command like:

                  module load googletest/1.13.0-GCCcore-12.3.0\n
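
                  As an illustration only (not an official recipe): the sketch below assumes the matching GCCcore compiler module is loaded alongside googletest and that the module exports the usual header and library search paths; hello_test.cpp is a hypothetical source file containing a TEST(...) case and no main() (gtest_main supplies one).

                  module load GCCcore/12.3.0 googletest/1.13.0-GCCcore-12.3.0
                  g++ -std=c++17 hello_test.cpp -lgtest -lgtest_main -pthread -o hello_test
                  ./hello_test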

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty googletest/1.13.0-GCCcore-12.3.0 x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x googletest/1.11.0-GCCcore-11.3.0 x x x x x x googletest/1.11.0-GCCcore-11.2.0 x x x - x x googletest/1.10.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gotree/", "title": "gotree", "text": ""}, {"location": "available_software/detail/gotree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gotree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using gotree, load one of these modules using a module load command like:

                  module load gotree/0.4.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gotree/0.4.0 - - x - x -"}, {"location": "available_software/detail/gperf/", "title": "gperf", "text": ""}, {"location": "available_software/detail/gperf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gperf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using gperf, load one of these modules using a module load command like:

                  module load gperf/3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gperf/3.1-GCCcore-12.3.0 x x x x x x gperf/3.1-GCCcore-12.2.0 x x x x x x gperf/3.1-GCCcore-11.3.0 x x x x x x gperf/3.1-GCCcore-11.2.0 x x x x x x gperf/3.1-GCCcore-10.3.0 x x x x x x gperf/3.1-GCCcore-10.2.0 x x x x x x gperf/3.1-GCCcore-9.3.0 x x x x x x gperf/3.1-GCCcore-8.3.0 x x x - x x gperf/3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/gperftools/", "title": "gperftools", "text": ""}, {"location": "available_software/detail/gperftools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gperftools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using gperftools, load one of these modules using a module load command like:

                  module load gperftools/2.14-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gperftools/2.14-GCCcore-12.2.0 x x x x x x gperftools/2.10-GCCcore-11.3.0 x x x x x x gperftools/2.9.1-GCCcore-10.3.0 x x x - x x gperftools/2.7.90-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gpustat/", "title": "gpustat", "text": ""}, {"location": "available_software/detail/gpustat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gpustat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using gpustat, load one of these modules using a module load command like:

                  module load gpustat/0.6.0-gcccuda-2020b\n
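
                  gpustat is a small command-line monitor for NVIDIA GPUs; per the overview below this particular build is only installed on the joltik GPU cluster, so the sketch assumes it is run on a node that actually has GPUs:

                  module load gpustat/0.6.0-gcccuda-2020b
                  gpustat                # one-shot summary of GPU utilisation, memory and running processes
                  gpustat --interval 2   # keep refreshing every 2 seconds (stop with Ctrl-C)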

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gpustat/0.6.0-gcccuda-2020b - - - - x -"}, {"location": "available_software/detail/graphite2/", "title": "graphite2", "text": ""}, {"location": "available_software/detail/graphite2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which graphite2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using graphite2, load one of these modules using a module load command like:

                  module load graphite2/1.3.14-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty graphite2/1.3.14-GCCcore-12.3.0 x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x graphite2/1.3.14-GCCcore-11.3.0 x x x x x x graphite2/1.3.14-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/graphviz-python/", "title": "graphviz-python", "text": ""}, {"location": "available_software/detail/graphviz-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which graphviz-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using graphviz-python, load one of these modules using a module load command like:

                  module load graphviz-python/0.20.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty graphviz-python/0.20.1-GCCcore-12.3.0 x x x x x x graphviz-python/0.20.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/grid/", "title": "grid", "text": ""}, {"location": "available_software/detail/grid/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which grid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using grid, load one of these modules using a module load command like:

                  module load grid/20220610-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty grid/20220610-intel-2022a x x x x x x"}, {"location": "available_software/detail/groff/", "title": "groff", "text": ""}, {"location": "available_software/detail/groff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which groff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using groff, load one of these modules using a module load command like:

                  module load groff/1.22.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty groff/1.22.4-GCCcore-12.3.0 x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x groff/1.22.4-GCCcore-11.3.0 x x x x x x groff/1.22.4-GCCcore-11.2.0 x x x x x x groff/1.22.4-GCCcore-10.3.0 x x x x x x groff/1.22.4-GCCcore-10.2.0 x x x x x x groff/1.22.4-GCCcore-9.3.0 x x x x x x groff/1.22.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/gzip/", "title": "gzip", "text": ""}, {"location": "available_software/detail/gzip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using gzip, load one of these modules using a module load command like:

                  module load gzip/1.13-GCCcore-13.2.0\n
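
                  Loading the module simply puts this specific gzip build first on your PATH; a minimal sketch with a hypothetical input file data.txt:

                  module load gzip/1.13-GCCcore-13.2.0
                  gzip --version        # confirm which gzip is picked up
                  gzip -k data.txt      # compress, keeping the original file (writes data.txt.gz)
                  gzip -t data.txt.gz   # test the integrity of the compressed file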

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gzip/1.13-GCCcore-13.2.0 x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x gzip/1.12-GCCcore-11.3.0 x x x x x x gzip/1.10-GCCcore-11.2.0 x x x x x x gzip/1.10-GCCcore-10.3.0 x x x x x x gzip/1.10-GCCcore-10.2.0 x x x x x x gzip/1.10-GCCcore-9.3.0 - x x x x x gzip/1.10-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/h5netcdf/", "title": "h5netcdf", "text": ""}, {"location": "available_software/detail/h5netcdf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which h5netcdf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using h5netcdf, load one of these modules using a module load command like:

                  module load h5netcdf/1.2.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty h5netcdf/1.2.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/h5py/", "title": "h5py", "text": ""}, {"location": "available_software/detail/h5py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which h5py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using h5py, load one of these modules using a module load command like:

                  module load h5py/3.9.0-foss-2023a\n
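
                  Loading an h5py module also makes the matching Python available; as a quick sanity check (assuming NumPy is pulled in as a dependency of this module, and using a hypothetical output file demo.h5):

                  module load h5py/3.9.0-foss-2023a
                  python -c "import h5py; print(h5py.version.info)"
                  python - <<'EOF'
                  import h5py, numpy as np
                  # write and read back a small dataset in a hypothetical file
                  with h5py.File('demo.h5', 'w') as f:
                      f.create_dataset('x', data=np.arange(5))
                  with h5py.File('demo.h5', 'r') as f:
                      print(f['x'][:])
                  EOF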

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty h5py/3.9.0-foss-2023a x x x x x x h5py/3.8.0-foss-2022b x x x x x x h5py/3.7.0-intel-2022a x x x x x x h5py/3.7.0-foss-2022a x x x x x x h5py/3.6.0-intel-2021b x x x - x x h5py/3.6.0-foss-2021b x x x x x x h5py/3.2.1-gomkl-2021a x x x - x x h5py/3.2.1-foss-2021a x x x x x x h5py/3.1.0-intel-2020b - x x - x x h5py/3.1.0-fosscuda-2020b x - - - x - h5py/3.1.0-foss-2020b x x x x x x h5py/2.10.0-intel-2020a-Python-3.8.2 x x x x x x h5py/2.10.0-intel-2020a-Python-2.7.18 - x x - x x h5py/2.10.0-intel-2019b-Python-3.7.4 - x x - x x h5py/2.10.0-foss-2020a-Python-3.8.2 - x x - x x h5py/2.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/harmony/", "title": "harmony", "text": ""}, {"location": "available_software/detail/harmony/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which harmony installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using harmony, load one of these modules using a module load command like:

                  module load harmony/1.0.0-20200224-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty harmony/1.0.0-20200224-foss-2020a-R-4.0.0 - x x - x x harmony/0.1.0-20210528-foss-2020b-R-4.0.3 - x x - x x"}, {"location": "available_software/detail/hatchling/", "title": "hatchling", "text": ""}, {"location": "available_software/detail/hatchling/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hatchling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hatchling, load one of these modules using a module load command like:

                  module load hatchling/1.18.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hatchling/1.18.0-GCCcore-13.2.0 x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/help2man/", "title": "help2man", "text": ""}, {"location": "available_software/detail/help2man/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which help2man installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using help2man, load one of these modules using a module load command like:

                  module load help2man/1.49.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty help2man/1.49.3-GCCcore-13.2.0 x x x x x x help2man/1.49.3-GCCcore-12.3.0 x x x x x x help2man/1.49.2-GCCcore-12.2.0 x x x x x x help2man/1.49.2-GCCcore-11.3.0 x x x x x x help2man/1.48.3-GCCcore-11.2.0 x x x x x x help2man/1.48.3-GCCcore-10.3.0 x x x x x x help2man/1.47.16-GCCcore-10.2.0 x x x x x x help2man/1.47.12-GCCcore-9.3.0 x x x x x x help2man/1.47.8-GCCcore-8.3.0 x x x x x x help2man/1.47.7-GCCcore-8.2.0 - x - - - - help2man/1.47.4 - x - - - -"}, {"location": "available_software/detail/hierfstat/", "title": "hierfstat", "text": ""}, {"location": "available_software/detail/hierfstat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hierfstat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hierfstat, load one of these modules using a module load command like:

                  module load hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/hifiasm/", "title": "hifiasm", "text": ""}, {"location": "available_software/detail/hifiasm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hifiasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hifiasm, load one of these modules using a module load command like:

                  module load hifiasm/0.19.7-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hifiasm/0.19.7-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/hiredis/", "title": "hiredis", "text": ""}, {"location": "available_software/detail/hiredis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hiredis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hiredis, load one of these modules using a module load command like:

                  module load hiredis/1.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hiredis/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/histolab/", "title": "histolab", "text": ""}, {"location": "available_software/detail/histolab/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which histolab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using histolab, load one of these modules using a module load command like:

                  module load histolab/0.4.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty histolab/0.4.1-foss-2021b x x x - x x histolab/0.4.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/hmmlearn/", "title": "hmmlearn", "text": ""}, {"location": "available_software/detail/hmmlearn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hmmlearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hmmlearn, load one of these modules using a module load command like:

                  module load hmmlearn/0.3.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hmmlearn/0.3.0-gfbf-2023a x x x x x x hmmlearn/0.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/horton/", "title": "horton", "text": ""}, {"location": "available_software/detail/horton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which horton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using horton, load one of these modules using a module load command like:

                  module load horton/2.1.1-intel-2020a-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty horton/2.1.1-intel-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/how_are_we_stranded_here/", "title": "how_are_we_stranded_here", "text": ""}, {"location": "available_software/detail/how_are_we_stranded_here/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which how_are_we_stranded_here installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using how_are_we_stranded_here, load one of these modules using a module load command like:

                  module load how_are_we_stranded_here/1.0.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty how_are_we_stranded_here/1.0.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/humann/", "title": "humann", "text": ""}, {"location": "available_software/detail/humann/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which humann installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using humann, load one of these modules using a module load command like:

                  module load humann/3.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty humann/3.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/hunspell/", "title": "hunspell", "text": ""}, {"location": "available_software/detail/hunspell/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hunspell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hunspell, load one of these modules using a module load command like:

                  module load hunspell/1.7.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hunspell/1.7.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/hwloc/", "title": "hwloc", "text": ""}, {"location": "available_software/detail/hwloc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hwloc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hwloc, load one of these modules using a module load command like:

                  module load hwloc/2.9.2-GCCcore-13.2.0\n
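
                  hwloc is mostly used as a library, but it also ships command-line utilities; assuming those are included in this installation (typical, but not verified here), you can inspect the topology of the node you are on:

                  module load hwloc/2.9.2-GCCcore-13.2.0
                  hwloc-info   # summary of the detected topology (packages, cores, PUs, caches, NUMA nodes)
                  hwloc-ls     # detailed text rendering of the same topology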

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hwloc/2.9.2-GCCcore-13.2.0 x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x hwloc/2.7.1-GCCcore-11.3.0 x x x x x x hwloc/2.5.0-GCCcore-11.2.0 x x x x x x hwloc/2.4.1-GCCcore-10.3.0 x x x x x x hwloc/2.2.0-GCCcore-10.2.0 x x x x x x hwloc/2.2.0-GCCcore-9.3.0 x x x x x x hwloc/1.11.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/hyperopt/", "title": "hyperopt", "text": ""}, {"location": "available_software/detail/hyperopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hyperopt, load one of these modules using a module load command like:

                  module load hyperopt/0.2.5-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hyperopt/0.2.5-fosscuda-2020b - - - - x - hyperopt/0.2.4-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/hypothesis/", "title": "hypothesis", "text": ""}, {"location": "available_software/detail/hypothesis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hypothesis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using hypothesis, load one of these modules using a module load command like:

                  module load hypothesis/6.90.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x hypothesis/6.46.7-GCCcore-11.3.0 x x x x x x hypothesis/6.14.6-GCCcore-11.2.0 x x x x x x hypothesis/6.13.1-GCCcore-10.3.0 x x x x x x hypothesis/5.41.5-GCCcore-10.2.0 x x x x x x hypothesis/5.41.2-GCCcore-10.2.0 x x x x x x hypothesis/4.57.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x hypothesis/4.44.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/iccifort/", "title": "iccifort", "text": ""}, {"location": "available_software/detail/iccifort/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iccifort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iccifort, load one of these modules using a module load command like:

                  module load iccifort/2020.4.304\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iccifort/2020.4.304 x x x x x x iccifort/2020.1.217 x x x x x x iccifort/2019.5.281 - x x - x x"}, {"location": "available_software/detail/iccifortcuda/", "title": "iccifortcuda", "text": ""}, {"location": "available_software/detail/iccifortcuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iccifortcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iccifortcuda, load one of these modules using a module load command like:

                  module load iccifortcuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iccifortcuda/2020b - - - - x - iccifortcuda/2020a - - - - x - iccifortcuda/2019b - - - - x -"}, {"location": "available_software/detail/ichorCNA/", "title": "ichorCNA", "text": ""}, {"location": "available_software/detail/ichorCNA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ichorCNA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ichorCNA, load one of these modules using a module load command like:

                  module load ichorCNA/0.3.2-20191219-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ichorCNA/0.3.2-20191219-foss-2020a - x x - x x"}, {"location": "available_software/detail/idemux/", "title": "idemux", "text": ""}, {"location": "available_software/detail/idemux/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which idemux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using idemux, load one of these modules using a module load command like:

                  module load idemux/0.1.6-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty idemux/0.1.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/igraph/", "title": "igraph", "text": ""}, {"location": "available_software/detail/igraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using igraph, load one of these modules using a module load command like:

                  module load igraph/0.10.10-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty igraph/0.10.10-foss-2023a x x x x x x igraph/0.10.3-foss-2022a x x x x x x igraph/0.9.5-foss-2021b x x x x x x igraph/0.9.4-foss-2021a x x x x x x igraph/0.9.1-fosscuda-2020b - - - - x - igraph/0.9.1-foss-2020b - x x x x x igraph/0.8.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/igvShiny/", "title": "igvShiny", "text": ""}, {"location": "available_software/detail/igvShiny/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which igvShiny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using igvShiny, load one of these modules using a module load command like:

                  module load igvShiny/20240112-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty igvShiny/20240112-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/iibff/", "title": "iibff", "text": ""}, {"location": "available_software/detail/iibff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iibff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iibff, load one of these modules using a module load command like:

                  module load iibff/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iibff/2020b - x - - - -"}, {"location": "available_software/detail/iimpi/", "title": "iimpi", "text": ""}, {"location": "available_software/detail/iimpi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iimpi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iimpi, load one of these modules using a module load command like:

                  module load iimpi/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iimpi/2023a x x x x x x iimpi/2022b x x x x x x iimpi/2022a x x x x x x iimpi/2021b x x x x x x iimpi/2021a - x x - x x iimpi/2020b x x x x x x iimpi/2020a x x x x x x iimpi/2019b - x x - x x"}, {"location": "available_software/detail/iimpic/", "title": "iimpic", "text": ""}, {"location": "available_software/detail/iimpic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iimpic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iimpic, load one of these modules using a module load command like:

                  module load iimpic/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iimpic/2020b - - - - x - iimpic/2020a - - - - x - iimpic/2019b - - - - x -"}, {"location": "available_software/detail/imagecodecs/", "title": "imagecodecs", "text": ""}, {"location": "available_software/detail/imagecodecs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imagecodecs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using imagecodecs, load one of these modules using a module load command like:

                  module load imagecodecs/2022.9.26-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imagecodecs/2022.9.26-foss-2022a x x x x x x"}, {"location": "available_software/detail/imageio/", "title": "imageio", "text": ""}, {"location": "available_software/detail/imageio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imageio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using imageio, load one of these modules using a module load command like:

                  module load imageio/2.22.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imageio/2.22.2-foss-2022a x x x x x x imageio/2.13.5-foss-2021b x x x x x x imageio/2.10.5-foss-2021a x x x - x x imageio/2.9.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/imbalanced-learn/", "title": "imbalanced-learn", "text": ""}, {"location": "available_software/detail/imbalanced-learn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imbalanced-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using imbalanced-learn, load one of these modules using a module load command like:

                  module load imbalanced-learn/0.10.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imbalanced-learn/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/imgaug/", "title": "imgaug", "text": ""}, {"location": "available_software/detail/imgaug/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imgaug installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using imgaug, load one of these modules using a module load command like:

                  module load imgaug/0.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imgaug/0.4.0-foss-2021b x x x - x x imgaug/0.4.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/imkl-FFTW/", "title": "imkl-FFTW", "text": ""}, {"location": "available_software/detail/imkl-FFTW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imkl-FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using imkl-FFTW, load one of these modules using a module load command like:

                  module load imkl-FFTW/2023.1.0-iimpi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imkl-FFTW/2023.1.0-iimpi-2023a x x x x x x imkl-FFTW/2022.2.1-iimpi-2022b x x x x x x imkl-FFTW/2022.1.0-iimpi-2022a x x x x x x imkl-FFTW/2021.4.0-iimpi-2021b x x x x x x"}, {"location": "available_software/detail/imkl/", "title": "imkl", "text": ""}, {"location": "available_software/detail/imkl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using imkl, load one of these modules using a module load command like:

                  module load imkl/2023.1.0-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imkl/2023.1.0-gompi-2023a - - x - x x imkl/2023.1.0 x x x x x x imkl/2022.2.1 x x x x x x imkl/2022.1.0 x x x x x x imkl/2021.4.0 x x x x x x imkl/2021.2.0-iompi-2021a x x x x x x imkl/2021.2.0-iimpi-2021a - x x - x x imkl/2021.2.0-gompi-2021a x - x - x x imkl/2020.4.304-iompi-2020b x - x x x x imkl/2020.4.304-iimpic-2020b - - - - x - imkl/2020.4.304-iimpi-2020b - - x x x x imkl/2020.4.304-NVHPC-21.2 - - x - x - imkl/2020.1.217-iimpic-2020a - - - - x - imkl/2020.1.217-iimpi-2020a x - x - x x imkl/2020.1.217-gompi-2020a - - x - x x imkl/2020.0.166-iompi-2020a - x - - - - imkl/2020.0.166-iimpi-2020b x x - x - - imkl/2020.0.166-iimpi-2020a - x - - - - imkl/2020.0.166-gompi-2023a x x - x - - imkl/2020.0.166-gompi-2020a - x - - - - imkl/2019.5.281-iimpic-2019b - - - - x - imkl/2019.5.281-iimpi-2019b - x x - x x imkl/2018.4.274-iompi-2020b - x - x - - imkl/2018.4.274-iompi-2020a - x - - - - imkl/2018.4.274-iimpi-2020b - x - x - - imkl/2018.4.274-iimpi-2020a x x - x - - imkl/2018.4.274-iimpi-2019b - x - - - - imkl/2018.4.274-gompi-2021a - x - x - - imkl/2018.4.274-gompi-2020a - x - x - - imkl/2018.4.274-NVHPC-21.2 x - - - - -"}, {"location": "available_software/detail/impi/", "title": "impi", "text": ""}, {"location": "available_software/detail/impi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which impi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using impi, load one of these modules using a module load command like:

                  module load impi/2021.9.0-intel-compilers-2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty impi/2021.9.0-intel-compilers-2023.1.0 x x x x x x impi/2021.7.1-intel-compilers-2022.2.1 x x x x x x impi/2021.6.0-intel-compilers-2022.1.0 x x x x x x impi/2021.4.0-intel-compilers-2021.4.0 x x x x x x impi/2021.2.0-intel-compilers-2021.2.0 - x x - x x impi/2019.9.304-iccifortcuda-2020b - - - - x - impi/2019.9.304-iccifort-2020.4.304 x x x x x x impi/2019.9.304-iccifort-2020.1.217 x x x x x x impi/2019.9.304-iccifort-2019.5.281 - x x - x x impi/2019.7.217-iccifortcuda-2020a - - - - x - impi/2019.7.217-iccifort-2020.1.217 - x x - x x impi/2019.7.217-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/imutils/", "title": "imutils", "text": ""}, {"location": "available_software/detail/imutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using imutils, load one of these modules using a module load command like:

                  module load imutils/0.5.4-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imutils/0.5.4-fosscuda-2020b x - - - x - imutils/0.5.4-foss-2022a-CUDA-11.7.0 x - x - x -"}, {"location": "available_software/detail/inferCNV/", "title": "inferCNV", "text": ""}, {"location": "available_software/detail/inferCNV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which inferCNV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using inferCNV, load one of these modules using a module load command like:

                  module load inferCNV/1.12.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty inferCNV/1.12.0-foss-2022a-R-4.2.1 x x x x x x inferCNV/1.12.0-foss-2021b-R-4.2.0 x x x - x x inferCNV/1.3.3-foss-2020b x x x x x x"}, {"location": "available_software/detail/infercnvpy/", "title": "infercnvpy", "text": ""}, {"location": "available_software/detail/infercnvpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which infercnvpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using infercnvpy, load one of these modules using a module load command like:

                  module load infercnvpy/0.4.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty infercnvpy/0.4.2-foss-2022a x x x x x x infercnvpy/0.4.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/inflection/", "title": "inflection", "text": ""}, {"location": "available_software/detail/inflection/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which inflection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using inflection, load one of these modules using a module load command like:

                  module load inflection/1.3.5-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty inflection/1.3.5-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/intel-compilers/", "title": "intel-compilers", "text": ""}, {"location": "available_software/detail/intel-compilers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intel-compilers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using intel-compilers, load one of these modules using a module load command like:

                  module load intel-compilers/2023.1.0\n
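
                  Recent intel-compilers modules provide the oneAPI LLVM-based compilers (icx, icpx, ifx); a hedged sketch with a hypothetical source file hello.c:

                  module load intel-compilers/2023.1.0
                  icx --version              # oneAPI C compiler
                  ifx --version              # oneAPI Fortran compiler
                  icx -O2 -o hello hello.c   # hello.c is a hypothetical source file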

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intel-compilers/2023.1.0 x x x x x x intel-compilers/2022.2.1 x x x x x x intel-compilers/2022.1.0 x x x x x x intel-compilers/2021.4.0 x x x x x x intel-compilers/2021.2.0 x x x x x x"}, {"location": "available_software/detail/intel/", "title": "intel", "text": ""}, {"location": "available_software/detail/intel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using intel, load one of these modules using a module load command like:

                  module load intel/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intel/2023a x x x x x x intel/2022b x x x x x x intel/2022a x x x x x x intel/2021b x x x x x x intel/2021a - x x - x x intel/2020b - x x x x x intel/2020a x x x x x x intel/2019b - x x - x x"}, {"location": "available_software/detail/intelcuda/", "title": "intelcuda", "text": ""}, {"location": "available_software/detail/intelcuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intelcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using intelcuda, load one of these modules using a module load command like:

                  module load intelcuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intelcuda/2020b - - - - x - intelcuda/2020a - - - - x - intelcuda/2019b - - - - x -"}, {"location": "available_software/detail/intervaltree-python/", "title": "intervaltree-python", "text": ""}, {"location": "available_software/detail/intervaltree-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intervaltree-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using intervaltree-python, load one of these modules using a module load command like:

                  module load intervaltree-python/3.1.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intervaltree-python/3.1.0-GCCcore-11.3.0 x x x x x x intervaltree-python/3.1.0-GCCcore-11.2.0 x x x - x x intervaltree-python/3.1.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/intervaltree/", "title": "intervaltree", "text": ""}, {"location": "available_software/detail/intervaltree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intervaltree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using intervaltree, load one of these modules using a module load command like:

                  module load intervaltree/0.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intervaltree/0.1-GCCcore-11.3.0 x x x x x x intervaltree/0.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/intltool/", "title": "intltool", "text": ""}, {"location": "available_software/detail/intltool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intltool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using intltool, load one of these modules using a module load command like:

                  module load intltool/0.51.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intltool/0.51.0-GCCcore-12.3.0 x x x x x x intltool/0.51.0-GCCcore-12.2.0 x x x x x x intltool/0.51.0-GCCcore-11.3.0 x x x x x x intltool/0.51.0-GCCcore-11.2.0 x x x x x x intltool/0.51.0-GCCcore-10.3.0 x x x x x x intltool/0.51.0-GCCcore-10.2.0 x x x x x x intltool/0.51.0-GCCcore-9.3.0 x x x x x x intltool/0.51.0-GCCcore-8.3.0 x x x - x x intltool/0.51.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/iodata/", "title": "iodata", "text": ""}, {"location": "available_software/detail/iodata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iodata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iodata, load one of these modules using a module load command like:

                  module load iodata/1.0.0a2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iodata/1.0.0a2-intel-2022a x x x x x x"}, {"location": "available_software/detail/iomkl/", "title": "iomkl", "text": ""}, {"location": "available_software/detail/iomkl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iomkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iomkl, load one of these modules using a module load command like:

                  module load iomkl/2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iomkl/2021a x x x x x x iomkl/2020b x x x x x x iomkl/2020a - x - - - -"}, {"location": "available_software/detail/iompi/", "title": "iompi", "text": ""}, {"location": "available_software/detail/iompi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iompi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using iompi, load one of these modules using a module load command like:

                  module load iompi/2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iompi/2021a x x x x x x iompi/2020b x x x x x x iompi/2020a - x - - - -"}, {"location": "available_software/detail/isoCirc/", "title": "isoCirc", "text": ""}, {"location": "available_software/detail/isoCirc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which isoCirc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using isoCirc, load one of these modules using a module load command like:

                  module load isoCirc/1.0.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty isoCirc/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/jax/", "title": "jax", "text": ""}, {"location": "available_software/detail/jax/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jax installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jax, load one of these modules using a module load command like:

                  module load jax/0.3.25-foss-2022a-CUDA-11.7.0\n
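
                  The CUDA-suffixed jax builds are only installed on the GPU clusters (see the overview below), so this sketch assumes it is run on a GPU node; the CPU-only builds can be checked the same way:

                  module load jax/0.3.25-foss-2022a-CUDA-11.7.0
                  python -c "import jax; print(jax.devices())"                      # should list the node's GPUs
                  python -c "import jax.numpy as jnp; print(jnp.arange(4).sum())"   # tiny computation as a smoke test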

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jax/0.3.25-foss-2022a-CUDA-11.7.0 x - - - x - jax/0.3.25-foss-2022a x x x x x x jax/0.3.23-foss-2021b-CUDA-11.4.1 x - - - x - jax/0.3.9-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.3.9-foss-2021a x x x x x x jax/0.2.24-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.2.24-foss-2021a - x x - x x jax/0.2.19-fosscuda-2020b x - - - x - jax/0.2.19-foss-2020b x x x x x x"}, {"location": "available_software/detail/jbigkit/", "title": "jbigkit", "text": ""}, {"location": "available_software/detail/jbigkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jbigkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jbigkit, load one of these modules using a module load command like:

                  module load jbigkit/2.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jbigkit/2.1-GCCcore-13.2.0 x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x jbigkit/2.1-GCCcore-11.3.0 x x x x x x jbigkit/2.1-GCCcore-11.2.0 x x x x x x jbigkit/2.1-GCCcore-10.3.0 x x x x x x jbigkit/2.1-GCCcore-10.2.0 x - x x x x jbigkit/2.1-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/jemalloc/", "title": "jemalloc", "text": ""}, {"location": "available_software/detail/jemalloc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jemalloc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jemalloc, load one of these modules using a module load command like:

                  module load jemalloc/5.3.0-GCCcore-11.3.0\n
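
                  A common way to use jemalloc is to preload it under an unmodified binary; the sketch below assumes the usual EasyBuild $EBROOTJEMALLOC variable pointing at the installation prefix, and ./my_app is a hypothetical program:

                  module load jemalloc/5.3.0-GCCcore-11.3.0
                  # Run an existing binary with jemalloc as its memory allocator via LD_PRELOAD.
                  LD_PRELOAD="$EBROOTJEMALLOC/lib/libjemalloc.so" ./my_app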

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jemalloc/5.3.0-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.2.0 x x x x x x jemalloc/5.2.1-GCCcore-10.3.0 x x x - x x jemalloc/5.2.1-GCCcore-10.2.0 - x x x x x jemalloc/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/jobcli/", "title": "jobcli", "text": ""}, {"location": "available_software/detail/jobcli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jobcli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jobcli, load one of these modules using a module load command like:

                  module load jobcli/0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jobcli/0.0 - x - - - -"}, {"location": "available_software/detail/joypy/", "title": "joypy", "text": ""}, {"location": "available_software/detail/joypy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which joypy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using joypy, load one of these modules using a module load command like:

                  module load joypy/0.2.4-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty joypy/0.2.4-intel-2020b - x x - x x joypy/0.2.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/json-c/", "title": "json-c", "text": ""}, {"location": "available_software/detail/json-c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which json-c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using json-c, load one of these modules using a module load command like:

                  module load json-c/0.16-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty json-c/0.16-GCCcore-12.3.0 x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x json-c/0.15-GCCcore-10.3.0 - x x - x x json-c/0.15-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/jupyter-contrib-nbextensions/", "title": "jupyter-contrib-nbextensions", "text": ""}, {"location": "available_software/detail/jupyter-contrib-nbextensions/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-contrib-nbextensions installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jupyter-contrib-nbextensions, load one of these modules using a module load command like:

                  module load jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server-proxy/", "title": "jupyter-server-proxy", "text": ""}, {"location": "available_software/detail/jupyter-server-proxy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-server-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jupyter-server-proxy, load one of these modules using a module load command like:

                  module load jupyter-server-proxy/3.2.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-server-proxy/3.2.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server/", "title": "jupyter-server", "text": ""}, {"location": "available_software/detail/jupyter-server/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jupyter-server, load one of these modules using a module load command like:

                  module load jupyter-server/2.7.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x jupyter-server/2.7.0-GCCcore-12.2.0 x x x x x x jupyter-server/1.21.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jxrlib/", "title": "jxrlib", "text": ""}, {"location": "available_software/detail/jxrlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jxrlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using jxrlib, load one of these modules using a module load command like:

                  module load jxrlib/1.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jxrlib/1.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/kallisto/", "title": "kallisto", "text": ""}, {"location": "available_software/detail/kallisto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kallisto installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using kallisto, load one of these modules using a module load command like:

                  module load kallisto/0.48.0-gompi-2022a\n
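
                  A quick way to check that the module is picked up correctly is to query the tool itself after loading it; a minimal sketch, assuming kallisto's version subcommand (the version shown is one entry from the table below):

                  module load kallisto/0.48.0-gompi-2022a\nwhich kallisto\nkallisto version\n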

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kallisto/0.48.0-gompi-2022a x x x x x x kallisto/0.46.1-intel-2020a - x - - - - kallisto/0.46.1-iimpi-2020b - x x x x x kallisto/0.46.1-iimpi-2020a - x x - x x kallisto/0.46.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/kb-python/", "title": "kb-python", "text": ""}, {"location": "available_software/detail/kb-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kb-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using kb-python, load one of these modules using a module load command like:

                  module load kb-python/0.27.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kb-python/0.27.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/kim-api/", "title": "kim-api", "text": ""}, {"location": "available_software/detail/kim-api/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kim-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using kim-api, load one of these modules using a module load command like:

                  module load kim-api/2.3.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kim-api/2.3.0-GCCcore-11.2.0 x x x - x x kim-api/2.2.1-GCCcore-10.3.0 - x x - x x kim-api/2.1.3-intel-2020a - x x - x x kim-api/2.1.3-intel-2019b - x x - x x kim-api/2.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/kineto/", "title": "kineto", "text": ""}, {"location": "available_software/detail/kineto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kineto installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using kineto, load one of these modules using a module load command like:

                  module load kineto/0.4.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kineto/0.4.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/kma/", "title": "kma", "text": ""}, {"location": "available_software/detail/kma/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kma installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using kma, load one of these modules using a module load command like:

                  module load kma/1.2.22-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kma/1.2.22-intel-2019b - x x - x x"}, {"location": "available_software/detail/kneaddata/", "title": "kneaddata", "text": ""}, {"location": "available_software/detail/kneaddata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kneaddata installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using kneaddata, load one of these modules using a module load command like:

                  module load kneaddata/0.12.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kneaddata/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/krbalancing/", "title": "krbalancing", "text": ""}, {"location": "available_software/detail/krbalancing/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which krbalancing installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using krbalancing, load one of these modules using a module load command like:

                  module load krbalancing/0.5.0b0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty krbalancing/0.5.0b0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/lancet/", "title": "lancet", "text": ""}, {"location": "available_software/detail/lancet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lancet installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using lancet, load one of these modules using a module load command like:

                  module load lancet/1.1.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lancet/1.1.0-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/lavaan/", "title": "lavaan", "text": ""}, {"location": "available_software/detail/lavaan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lavaan installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using lavaan, load one of these modules using a module load command like:

                  module load lavaan/0.6-9-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lavaan/0.6-9-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/leafcutter/", "title": "leafcutter", "text": ""}, {"location": "available_software/detail/leafcutter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which leafcutter installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using leafcutter, load one of these modules using a module load command like:

                  module load leafcutter/0.2.9-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty leafcutter/0.2.9-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/legacy-job-wrappers/", "title": "legacy-job-wrappers", "text": ""}, {"location": "available_software/detail/legacy-job-wrappers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which legacy-job-wrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using legacy-job-wrappers, load one of these modules using a module load command like:

                  module load legacy-job-wrappers/0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty legacy-job-wrappers/0.0 - x x - x -"}, {"location": "available_software/detail/leidenalg/", "title": "leidenalg", "text": ""}, {"location": "available_software/detail/leidenalg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which leidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using leidenalg, load one of these modules using a module load command like:

                  module load leidenalg/0.10.2-foss-2023a\n
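
                  Since leidenalg is a Python package, you can verify the module by importing it with the Python interpreter that the same module tree provides; a minimal sketch, assuming the module pulls in the matching Python (as these bundles typically do):

                  module load leidenalg/0.10.2-foss-2023a\npython -c 'import leidenalg; print(leidenalg.__file__)'\n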

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty leidenalg/0.10.2-foss-2023a x x x x x x leidenalg/0.9.1-foss-2022a x x x x x x leidenalg/0.8.8-foss-2021b x x x x x x leidenalg/0.8.7-foss-2021a x x x x x x leidenalg/0.8.3-fosscuda-2020b - - - - x - leidenalg/0.8.3-foss-2020b - x x x x x leidenalg/0.8.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/lftp/", "title": "lftp", "text": ""}, {"location": "available_software/detail/lftp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lftp installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using lftp, load one of these modules using a module load command like:

                  module load lftp/4.9.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lftp/4.9.2-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/libBigWig/", "title": "libBigWig", "text": ""}, {"location": "available_software/detail/libBigWig/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libBigWig, load one of these modules using a module load command like:

                  module load libBigWig/0.4.4-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libBigWig/0.4.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libFLAME/", "title": "libFLAME", "text": ""}, {"location": "available_software/detail/libFLAME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libFLAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libFLAME, load one of these modules using a module load command like:

                  module load libFLAME/5.2.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libFLAME/5.2.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/libGLU/", "title": "libGLU", "text": ""}, {"location": "available_software/detail/libGLU/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libGLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libGLU, load one of these modules using a module load command like:

                  module load libGLU/9.0.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libGLU/9.0.3-GCCcore-12.3.0 x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x libGLU/9.0.2-GCCcore-11.3.0 x x x x x x libGLU/9.0.2-GCCcore-11.2.0 x x x x x x libGLU/9.0.1-GCCcore-10.3.0 x x x x x x libGLU/9.0.1-GCCcore-10.2.0 x x x x x x libGLU/9.0.1-GCCcore-9.3.0 - x x - x x libGLU/9.0.1-GCCcore-8.3.0 x x x - x x libGLU/9.0.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libRmath/", "title": "libRmath", "text": ""}, {"location": "available_software/detail/libRmath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libRmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libRmath, load one of these modules using a module load command like:

                  module load libRmath/4.1.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libRmath/4.1.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libaec/", "title": "libaec", "text": ""}, {"location": "available_software/detail/libaec/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libaec installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libaec, load one of these modules using a module load command like:

                  module load libaec/1.0.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libaec/1.0.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libaio/", "title": "libaio", "text": ""}, {"location": "available_software/detail/libaio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libaio installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libaio, load one of these modules using a module load command like:

                  module load libaio/0.3.113-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libaio/0.3.113-GCCcore-12.3.0 x x x x x x libaio/0.3.112-GCCcore-11.3.0 x x x x x x libaio/0.3.112-GCCcore-11.2.0 x x x x x x libaio/0.3.112-GCCcore-10.3.0 x x x - x x libaio/0.3.112-GCCcore-10.2.0 - x x x x x libaio/0.3.111-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libarchive/", "title": "libarchive", "text": ""}, {"location": "available_software/detail/libarchive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libarchive installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libarchive, load one of these modules using a module load command like:

                  module load libarchive/3.7.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libarchive/3.7.2-GCCcore-13.2.0 x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x libarchive/3.6.1-GCCcore-11.3.0 x x x x x x libarchive/3.5.1-GCCcore-11.2.0 x x x x x x libarchive/3.5.1-GCCcore-10.3.0 x x x x x x libarchive/3.5.1-GCCcore-8.3.0 x - - - x - libarchive/3.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libavif/", "title": "libavif", "text": ""}, {"location": "available_software/detail/libavif/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libavif installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libavif, load one of these modules using a module load command like:

                  module load libavif/0.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libavif/0.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libcdms/", "title": "libcdms", "text": ""}, {"location": "available_software/detail/libcdms/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libcdms installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libcdms, load one of these modules using a module load command like:

                  module load libcdms/3.1.2-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcdms/3.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/libcerf/", "title": "libcerf", "text": ""}, {"location": "available_software/detail/libcerf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libcerf installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libcerf, load one of these modules using a module load command like:

                  module load libcerf/2.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcerf/2.3-GCCcore-12.3.0 x x x x x x libcerf/2.1-GCCcore-11.3.0 x x x x x x libcerf/1.17-GCCcore-11.2.0 x x x x x x libcerf/1.17-GCCcore-10.3.0 x x x x x x libcerf/1.14-GCCcore-10.2.0 x x x x x x libcerf/1.13-GCCcore-9.3.0 - x x - x x libcerf/1.13-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libcint/", "title": "libcint", "text": ""}, {"location": "available_software/detail/libcint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libcint installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libcint, load one of these modules using a module load command like:

                  module load libcint/5.5.0-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcint/5.5.0-gfbf-2022b x x x x x x libcint/5.1.6-foss-2022a - x x x x x libcint/4.4.0-gomkl-2021a x x x - x x libcint/4.4.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/libdap/", "title": "libdap", "text": ""}, {"location": "available_software/detail/libdap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdap installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libdap, load one of these modules using a module load command like:

                  module load libdap/3.20.7-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdap/3.20.7-GCCcore-10.3.0 - x x - x x libdap/3.20.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libde265/", "title": "libde265", "text": ""}, {"location": "available_software/detail/libde265/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libde265 installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libde265, load one of these modules using a module load command like:

                  module load libde265/1.0.11-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libde265/1.0.11-GCC-11.3.0 x x x x x x libde265/1.0.8-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libdeflate/", "title": "libdeflate", "text": ""}, {"location": "available_software/detail/libdeflate/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdeflate installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libdeflate, load one of these modules using a module load command like:

                  module load libdeflate/1.19-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdeflate/1.19-GCCcore-13.2.0 x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x libdeflate/1.10-GCCcore-11.3.0 x x x x x x libdeflate/1.8-GCCcore-11.2.0 x x x x x x libdeflate/1.7-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libdrm/", "title": "libdrm", "text": ""}, {"location": "available_software/detail/libdrm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdrm installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libdrm, load one of these modules using a module load command like:

                  module load libdrm/2.4.115-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdrm/2.4.115-GCCcore-12.3.0 x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x libdrm/2.4.110-GCCcore-11.3.0 x x x x x x libdrm/2.4.107-GCCcore-11.2.0 x x x x x x libdrm/2.4.106-GCCcore-10.3.0 x x x x x x libdrm/2.4.102-GCCcore-10.2.0 x x x x x x libdrm/2.4.100-GCCcore-9.3.0 - x x - x x libdrm/2.4.99-GCCcore-8.3.0 x x x - x x libdrm/2.4.97-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libdrs/", "title": "libdrs", "text": ""}, {"location": "available_software/detail/libdrs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdrs installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libdrs, load one of these modules using a module load command like:

                  module load libdrs/3.1.2-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdrs/3.1.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/libepoxy/", "title": "libepoxy", "text": ""}, {"location": "available_software/detail/libepoxy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libepoxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libepoxy, load one of these modules using a module load command like:

                  module load libepoxy/1.5.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x libepoxy/1.5.10-GCCcore-11.3.0 x x x x x x libepoxy/1.5.8-GCCcore-11.2.0 x x x x x x libepoxy/1.5.8-GCCcore-10.3.0 x x x - x x libepoxy/1.5.4-GCCcore-10.2.0 x x x x x x libepoxy/1.5.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libev/", "title": "libev", "text": ""}, {"location": "available_software/detail/libev/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libev installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libev, load one of these modules using a module load command like:

                  module load libev/4.33-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libev/4.33-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libevent/", "title": "libevent", "text": ""}, {"location": "available_software/detail/libevent/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libevent installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libevent, load one of these modules using a module load command like:

                  module load libevent/2.1.12-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libevent/2.1.12-GCCcore-13.2.0 x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x libevent/2.1.12-GCCcore-11.3.0 x x x x x x libevent/2.1.12-GCCcore-11.2.0 x x x x x x libevent/2.1.12-GCCcore-10.3.0 x x x x x x libevent/2.1.12-GCCcore-10.2.0 x x x x x x libevent/2.1.12 - x x - x x libevent/2.1.11-GCCcore-9.3.0 x x x x x x libevent/2.1.11-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libfabric/", "title": "libfabric", "text": ""}, {"location": "available_software/detail/libfabric/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libfabric installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libfabric, load one of these modules using a module load command like:

                  module load libfabric/1.19.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libfabric/1.19.0-GCCcore-13.2.0 x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x libfabric/1.15.1-GCCcore-11.3.0 x x x x x x libfabric/1.13.2-GCCcore-11.2.0 x x x x x x libfabric/1.12.1-GCCcore-10.3.0 x x x x x x libfabric/1.11.0-GCCcore-10.2.0 x x x x x x libfabric/1.11.0-GCCcore-9.3.0 - x x x x x"}, {"location": "available_software/detail/libffi/", "title": "libffi", "text": ""}, {"location": "available_software/detail/libffi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libffi, load one of these modules using a module load command like:

                  module load libffi/3.4.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libffi/3.4.4-GCCcore-13.2.0 x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x libffi/3.4.2-GCCcore-11.3.0 x x x x x x libffi/3.4.2-GCCcore-11.2.0 x x x x x x libffi/3.3-GCCcore-10.3.0 x x x x x x libffi/3.3-GCCcore-10.2.0 x x x x x x libffi/3.3-GCCcore-9.3.0 x x x x x x libffi/3.2.1-GCCcore-8.3.0 x x x x x x libffi/3.2.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgcrypt/", "title": "libgcrypt", "text": ""}, {"location": "available_software/detail/libgcrypt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgcrypt installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libgcrypt, load one of these modules using a module load command like:

                  module load libgcrypt/1.9.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgcrypt/1.9.3-GCCcore-11.2.0 x x x x x x libgcrypt/1.9.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgd/", "title": "libgd", "text": ""}, {"location": "available_software/detail/libgd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgd installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libgd, load one of these modules using a module load command like:

                  module load libgd/2.3.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgd/2.3.3-GCCcore-12.3.0 x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x libgd/2.3.3-GCCcore-11.3.0 x x x x x x libgd/2.3.3-GCCcore-11.2.0 x x x x x x libgd/2.3.1-GCCcore-10.3.0 x x x x x x libgd/2.3.0-GCCcore-10.2.0 x x x x x x libgd/2.3.0-GCCcore-9.3.0 - x x - x x libgd/2.2.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libgeotiff/", "title": "libgeotiff", "text": ""}, {"location": "available_software/detail/libgeotiff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libgeotiff, load one of these modules using a module load command like:

                  module load libgeotiff/1.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x libgeotiff/1.7.1-GCCcore-11.3.0 x x x x x x libgeotiff/1.7.0-GCCcore-11.2.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.3.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.2.0 - x x x x x libgeotiff/1.5.1-GCCcore-9.3.0 - x x - x x libgeotiff/1.5.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libgit2/", "title": "libgit2", "text": ""}, {"location": "available_software/detail/libgit2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgit2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libgit2, load one of these modules using a module load command like:

                  module load libgit2/1.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgit2/1.7.1-GCCcore-12.3.0 x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x libgit2/1.4.3-GCCcore-11.3.0 x x x x x x libgit2/1.1.1-GCCcore-11.2.0 x x x x x x libgit2/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/libglvnd/", "title": "libglvnd", "text": ""}, {"location": "available_software/detail/libglvnd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libglvnd installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libglvnd, load one of these modules using a module load command like:

                  module load libglvnd/1.6.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x libglvnd/1.4.0-GCCcore-11.3.0 x x x x x x libglvnd/1.3.3-GCCcore-11.2.0 x x x x x x libglvnd/1.3.3-GCCcore-10.3.0 x x x x x x libglvnd/1.3.2-GCCcore-10.2.0 x x x x x x libglvnd/1.2.0-GCCcore-9.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgpg-error/", "title": "libgpg-error", "text": ""}, {"location": "available_software/detail/libgpg-error/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgpg-error installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libgpg-error, load one of these modules using a module load command like:

                  module load libgpg-error/1.42-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgpg-error/1.42-GCCcore-11.2.0 x x x x x x libgpg-error/1.42-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgpuarray/", "title": "libgpuarray", "text": ""}, {"location": "available_software/detail/libgpuarray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgpuarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libgpuarray, load one of these modules using a module load command like:

                  module load libgpuarray/0.7.6-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgpuarray/0.7.6-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/libharu/", "title": "libharu", "text": ""}, {"location": "available_software/detail/libharu/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libharu installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libharu, load one of these modules using a module load command like:

                  module load libharu/2.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libharu/2.3.0-foss-2021b x x x - x x libharu/2.3.0-GCCcore-10.3.0 - x x - x x libharu/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libheif/", "title": "libheif", "text": ""}, {"location": "available_software/detail/libheif/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libheif installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libheif, load one of these modules using a module load command like:

                  module load libheif/1.16.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libheif/1.16.2-GCC-11.3.0 x x x x x x libheif/1.12.0-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libiconv/", "title": "libiconv", "text": ""}, {"location": "available_software/detail/libiconv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libiconv installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libiconv, load one of these modules using a module load command like:

                  module load libiconv/1.17-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libiconv/1.17-GCCcore-13.2.0 x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x libiconv/1.17-GCCcore-11.3.0 x x x x x x libiconv/1.16-GCCcore-11.2.0 x x x x x x libiconv/1.16-GCCcore-10.3.0 x x x x x x libiconv/1.16-GCCcore-10.2.0 x x x x x x libiconv/1.16-GCCcore-9.3.0 x x x x x x libiconv/1.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libidn/", "title": "libidn", "text": ""}, {"location": "available_software/detail/libidn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libidn installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libidn, load one of these modules using a module load command like:

                  module load libidn/1.38-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libidn/1.38-GCCcore-11.2.0 x x x x x x libidn/1.36-GCCcore-10.3.0 - x x - x x libidn/1.35-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/libidn2/", "title": "libidn2", "text": ""}, {"location": "available_software/detail/libidn2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libidn2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libidn2, load one of these modules using a module load command like:

                  module load libidn2/2.3.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libidn2/2.3.2-GCCcore-11.2.0 x x x x x x libidn2/2.3.0-GCCcore-10.3.0 - x x x x x libidn2/2.3.0-GCCcore-10.2.0 x x x x x x libidn2/2.3.0-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/libjpeg-turbo/", "title": "libjpeg-turbo", "text": ""}, {"location": "available_software/detail/libjpeg-turbo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libjpeg-turbo, load one of these modules using a module load command like:

                  module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x libjpeg-turbo/2.1.3-GCCcore-11.3.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-11.2.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-10.3.0 x x x x x x libjpeg-turbo/2.0.5-GCCcore-10.2.0 x x x x x x libjpeg-turbo/2.0.4-GCCcore-9.3.0 - x x - x x libjpeg-turbo/2.0.3-GCCcore-8.3.0 x x x - x x libjpeg-turbo/2.0.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libjxl/", "title": "libjxl", "text": ""}, {"location": "available_software/detail/libjxl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libjxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libjxl, load one of these modules using a module load command like:

                  module load libjxl/0.8.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libjxl/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libleidenalg/", "title": "libleidenalg", "text": ""}, {"location": "available_software/detail/libleidenalg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libleidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libleidenalg, load one of these modules using a module load command like:

                  module load libleidenalg/0.11.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libleidenalg/0.11.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/libmad/", "title": "libmad", "text": ""}, {"location": "available_software/detail/libmad/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmad installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libmad, load one of these modules using a module load command like:

                  module load libmad/0.15.1b-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmad/0.15.1b-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmatheval/", "title": "libmatheval", "text": ""}, {"location": "available_software/detail/libmatheval/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmatheval installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libmatheval, load one of these modules using a module load command like:

                  module load libmatheval/1.1.11-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmatheval/1.1.11-GCCcore-9.3.0 - x x - x x libmatheval/1.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libmaus2/", "title": "libmaus2", "text": ""}, {"location": "available_software/detail/libmaus2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmaus2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libmaus2, load one of these modules using a module load command like:

                  module load libmaus2/2.0.813-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmaus2/2.0.813-GCC-12.3.0 x x x x x x libmaus2/2.0.499-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmypaint/", "title": "libmypaint", "text": ""}, {"location": "available_software/detail/libmypaint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmypaint installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libmypaint, load one of these modules using a module load command like:

                  module load libmypaint/1.6.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmypaint/1.6.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/libobjcryst/", "title": "libobjcryst", "text": ""}, {"location": "available_software/detail/libobjcryst/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libobjcryst, load one of these modules using a module load command like:

                  module load libobjcryst/2021.1.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libobjcryst/2021.1.2-intel-2020a - - - - - x libobjcryst/2021.1.2-foss-2021b x x x - x x libobjcryst/2017.2.3-intel-2020a - x x - x x"}, {"location": "available_software/detail/libogg/", "title": "libogg", "text": ""}, {"location": "available_software/detail/libogg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libogg installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libogg, load one of these modules using a module load command like:

                  module load libogg/1.3.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libogg/1.3.5-GCCcore-12.3.0 x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x libogg/1.3.5-GCCcore-11.3.0 x x x x x x libogg/1.3.5-GCCcore-11.2.0 x x x x x x libogg/1.3.4-GCCcore-10.3.0 x x x x x x libogg/1.3.4-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libopus/", "title": "libopus", "text": ""}, {"location": "available_software/detail/libopus/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libopus installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libopus, load one of these modules using a module load command like:

                  module load libopus/1.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libopus/1.4-GCCcore-12.3.0 x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x libopus/1.3.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libpciaccess/", "title": "libpciaccess", "text": ""}, {"location": "available_software/detail/libpciaccess/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libpciaccess, load one of these modules using a module load command like:

                  module load libpciaccess/0.17-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpciaccess/0.17-GCCcore-13.2.0 x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x libpciaccess/0.16-GCCcore-11.3.0 x x x x x x libpciaccess/0.16-GCCcore-11.2.0 x x x x x x libpciaccess/0.16-GCCcore-10.3.0 x x x x x x libpciaccess/0.16-GCCcore-10.2.0 x x x x x x libpciaccess/0.16-GCCcore-9.3.0 x x x x x x libpciaccess/0.14-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libpng/", "title": "libpng", "text": ""}, {"location": "available_software/detail/libpng/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libpng installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libpng, load one of these modules using a module load command like:

                  module load libpng/1.6.40-GCCcore-13.2.0\n
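
                  For a library module such as libpng, loading it is typically enough to compile and link against it, because the module is expected to bring the matching GCCcore compiler and the header/library search paths into your environment (check with module show if in doubt); a minimal sketch with a hypothetical source file:

                  module load libpng/1.6.40-GCCcore-13.2.0\nmodule show libpng/1.6.40-GCCcore-13.2.0\ngcc my_png_check.c -o my_png_check -lpng\n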

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpng/1.6.40-GCCcore-13.2.0 x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x libpng/1.6.37-GCCcore-11.3.0 x x x x x x libpng/1.6.37-GCCcore-11.2.0 x x x x x x libpng/1.6.37-GCCcore-10.3.0 x x x x x x libpng/1.6.37-GCCcore-10.2.0 x x x x x x libpng/1.6.37-GCCcore-9.3.0 x x x x x x libpng/1.6.37-GCCcore-8.3.0 x x x - x x libpng/1.6.36-GCCcore-8.2.0 - x - - - - libpng/1.2.58 - x x x x x"}, {"location": "available_software/detail/libpsl/", "title": "libpsl", "text": ""}, {"location": "available_software/detail/libpsl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libpsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libpsl, load one of these modules using a module load command like:

                  module load libpsl/0.21.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpsl/0.21.1-GCCcore-11.2.0 x x x x x x libpsl/0.21.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libreadline/", "title": "libreadline", "text": ""}, {"location": "available_software/detail/libreadline/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libreadline installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libreadline, load one of these modules using a module load command like:

                  module load libreadline/8.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libreadline/8.2-GCCcore-13.2.0 x x x x x x libreadline/8.2-GCCcore-12.3.0 x x x x x x libreadline/8.2-GCCcore-12.2.0 x x x x x x libreadline/8.1.2-GCCcore-11.3.0 x x x x x x libreadline/8.1-GCCcore-11.2.0 x x x x x x libreadline/8.1-GCCcore-10.3.0 x x x x x x libreadline/8.0-GCCcore-10.2.0 x x x x x x libreadline/8.0-GCCcore-9.3.0 x x x x x x libreadline/8.0-GCCcore-8.3.0 x x x x x x libreadline/8.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/librosa/", "title": "librosa", "text": ""}, {"location": "available_software/detail/librosa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which librosa installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using librosa, load one of these modules using a module load command like:

                  module load librosa/0.7.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librosa/0.7.2-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/librsvg/", "title": "librsvg", "text": ""}, {"location": "available_software/detail/librsvg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which librsvg installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using librsvg, load one of these modules using a module load command like:

                  module load librsvg/2.51.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librsvg/2.51.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/librttopo/", "title": "librttopo", "text": ""}, {"location": "available_software/detail/librttopo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which librttopo installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using librttopo, load one of these modules using a module load command like:

                  module load librttopo/1.1.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librttopo/1.1.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libsigc%2B%2B/", "title": "libsigc++", "text": ""}, {"location": "available_software/detail/libsigc%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libsigc++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libsigc++, load one of these modules using a module load command like:

                  module load libsigc++/2.10.8-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsigc++/2.10.8-GCCcore-10.3.0 - x x - x x libsigc++/2.10.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsndfile/", "title": "libsndfile", "text": ""}, {"location": "available_software/detail/libsndfile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libsndfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libsndfile, load one of these modules using a module load command like:

                  module load libsndfile/1.2.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x libsndfile/1.1.0-GCCcore-11.3.0 x x x x x x libsndfile/1.0.31-GCCcore-11.2.0 x x x x x x libsndfile/1.0.31-GCCcore-10.3.0 x x x x x x libsndfile/1.0.28-GCCcore-10.2.0 x x x x x x libsndfile/1.0.28-GCCcore-9.3.0 - x x - x x libsndfile/1.0.28-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsodium/", "title": "libsodium", "text": ""}, {"location": "available_software/detail/libsodium/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libsodium installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libsodium, load one of these modules using a module load command like:

                  module load libsodium/1.0.18-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsodium/1.0.18-GCCcore-12.3.0 x x x x x x libsodium/1.0.18-GCCcore-12.2.0 x x x x x x libsodium/1.0.18-GCCcore-11.3.0 x x x x x x libsodium/1.0.18-GCCcore-11.2.0 x x x x x x libsodium/1.0.18-GCCcore-10.3.0 x x x x x x libsodium/1.0.18-GCCcore-10.2.0 x x x x x x libsodium/1.0.18-GCCcore-9.3.0 x x x x x x libsodium/1.0.18-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libspatialindex/", "title": "libspatialindex", "text": ""}, {"location": "available_software/detail/libspatialindex/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libspatialindex installations are available per HPC-UGent Tier-2 cluster, ordered by software version, from newest to oldest.

                  To start using libspatialindex, load one of these modules using a module load command like:

                  module load libspatialindex/1.9.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libspatialindex/1.9.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libspatialite/", "title": "libspatialite", "text": ""}, {"location": "available_software/detail/libspatialite/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libspatialite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libspatialite, load one of these modules using a module load command like:

                  module load libspatialite/5.0.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libspatialite/5.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libtasn1/", "title": "libtasn1", "text": ""}, {"location": "available_software/detail/libtasn1/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libtasn1 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libtasn1, load one of these modules using a module load command like:

                  module load libtasn1/4.18.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtasn1/4.18.0-GCCcore-11.2.0 x x x x x x libtasn1/4.17.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libtirpc/", "title": "libtirpc", "text": ""}, {"location": "available_software/detail/libtirpc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libtirpc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libtirpc, load one of these modules using a module load command like:

                  module load libtirpc/1.3.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x libtirpc/1.3.2-GCCcore-11.3.0 x x x x x x libtirpc/1.3.2-GCCcore-11.2.0 x x x x x x libtirpc/1.3.2-GCCcore-10.3.0 x x x x x x libtirpc/1.3.1-GCCcore-10.2.0 - x x x x x libtirpc/1.2.6-GCCcore-9.3.0 - - x - x x libtirpc/1.2.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libtool/", "title": "libtool", "text": ""}, {"location": "available_software/detail/libtool/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libtool, load one of these modules using a module load command like:

                  module load libtool/2.4.7-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtool/2.4.7-GCCcore-13.2.0 x x x x x x libtool/2.4.7-GCCcore-12.3.0 x x x x x x libtool/2.4.7-GCCcore-12.2.0 x x x x x x libtool/2.4.7-GCCcore-11.3.0 x x x x x x libtool/2.4.7 x x x x x x libtool/2.4.6-GCCcore-11.2.0 x x x x x x libtool/2.4.6-GCCcore-10.3.0 x x x x x x libtool/2.4.6-GCCcore-10.2.0 x x x x x x libtool/2.4.6-GCCcore-9.3.0 x x x x x x libtool/2.4.6-GCCcore-8.3.0 x x x x x x libtool/2.4.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libunistring/", "title": "libunistring", "text": ""}, {"location": "available_software/detail/libunistring/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libunistring installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libunistring, load one of these modules using a module load command like:

                  module load libunistring/1.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libunistring/1.0-GCCcore-11.2.0 x x x x x x libunistring/0.9.10-GCCcore-10.3.0 x x x - x x libunistring/0.9.10-GCCcore-9.3.0 - x x - x x libunistring/0.9.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libunwind/", "title": "libunwind", "text": ""}, {"location": "available_software/detail/libunwind/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libunwind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libunwind, load one of these modules using a module load command like:

                  module load libunwind/1.6.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libunwind/1.6.2-GCCcore-12.3.0 x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x libunwind/1.6.2-GCCcore-11.3.0 x x x x x x libunwind/1.5.0-GCCcore-11.2.0 x x x x x x libunwind/1.4.0-GCCcore-10.3.0 x x x x x x libunwind/1.4.0-GCCcore-10.2.0 x x x x x x libunwind/1.3.1-GCCcore-9.3.0 - x x - x x libunwind/1.3.1-GCCcore-8.3.0 x x x - x x libunwind/1.3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libvdwxc/", "title": "libvdwxc", "text": ""}, {"location": "available_software/detail/libvdwxc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libvdwxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libvdwxc, load one of these modules using a module load command like:

                  module load libvdwxc/0.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvdwxc/0.4.0-foss-2021b x x x - x x libvdwxc/0.4.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/libvorbis/", "title": "libvorbis", "text": ""}, {"location": "available_software/detail/libvorbis/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libvorbis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libvorbis, load one of these modules using a module load command like:

                  module load libvorbis/1.3.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x libvorbis/1.3.7-GCCcore-11.3.0 x x x x x x libvorbis/1.3.7-GCCcore-11.2.0 x x x x x x libvorbis/1.3.7-GCCcore-10.3.0 x x x x x x libvorbis/1.3.7-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libvori/", "title": "libvori", "text": ""}, {"location": "available_software/detail/libvori/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libvori installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libvori, load one of these modules using a module load command like:

                  module load libvori/220621-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvori/220621-GCCcore-12.3.0 x x x x x x libvori/220621-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/libwebp/", "title": "libwebp", "text": ""}, {"location": "available_software/detail/libwebp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libwebp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libwebp, load one of these modules using a module load command like:

                  module load libwebp/1.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libwebp/1.3.1-GCCcore-12.3.0 x x x x x x libwebp/1.3.1-GCCcore-12.2.0 x x x x x x libwebp/1.2.4-GCCcore-11.3.0 x x x x x x libwebp/1.2.0-GCCcore-11.2.0 x x x x x x libwebp/1.2.0-GCCcore-10.3.0 x x x - x x libwebp/1.1.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libwpe/", "title": "libwpe", "text": ""}, {"location": "available_software/detail/libwpe/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libwpe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libwpe, load one of these modules using a module load command like:

                  module load libwpe/1.13.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libwpe/1.13.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libxc/", "title": "libxc", "text": ""}, {"location": "available_software/detail/libxc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxc, load one of these modules using a module load command like:

                  module load libxc/6.2.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxc/6.2.2-GCC-12.3.0 x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x libxc/5.2.3-intel-compilers-2022.1.0 x x x x x x libxc/5.2.3-GCC-11.3.0 x x x x x x libxc/5.1.6-intel-compilers-2021.4.0 x x x x x x libxc/5.1.6-GCC-11.2.0 x x x - x x libxc/5.1.5-intel-compilers-2021.2.0 - x x - x x libxc/5.1.5-GCC-10.3.0 x x x x x x libxc/5.1.2-GCC-10.2.0 - x x x x x libxc/4.3.4-iccifort-2020.4.304 - x x x x x libxc/4.3.4-iccifort-2020.1.217 - x x - x x libxc/4.3.4-iccifort-2019.5.281 - x x - x x libxc/4.3.4-GCC-10.2.0 - x x x x x libxc/4.3.4-GCC-9.3.0 - x x - x x libxc/4.3.4-GCC-8.3.0 - x x - x x libxc/3.0.1-iomkl-2020a - x - - - - libxc/3.0.1-intel-2020a - x x - x x libxc/3.0.1-intel-2019b - x - - - - libxc/3.0.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/libxml%2B%2B/", "title": "libxml++", "text": ""}, {"location": "available_software/detail/libxml%2B%2B/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libxml++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxml++, load one of these modules using a module load command like:

                  module load libxml++/2.42.1-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxml++/2.42.1-GCC-10.3.0 - x x - x x libxml++/2.40.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxml2/", "title": "libxml2", "text": ""}, {"location": "available_software/detail/libxml2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libxml2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxml2, load one of these modules using a module load command like:

                  module load libxml2/2.11.5-GCCcore-13.2.0\n
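
                   As a minimal sketch (assuming this libxml2 module also provides the bundled xmllint command-line tool, and using a hypothetical file example.xml), you can check that an XML file is well-formed after loading the module:

                   module load libxml2/2.11.5-GCCcore-13.2.0
                   # exits with a non-zero status and an error message if example.xml is not well-formed
                   xmllint --noout example.xml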

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxml2/2.11.5-GCCcore-13.2.0 x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x libxml2/2.9.13-GCCcore-11.3.0 x x x x x x libxml2/2.9.10-GCCcore-11.2.0 x x x x x x libxml2/2.9.10-GCCcore-10.3.0 x x x x x x libxml2/2.9.10-GCCcore-10.2.0 x x x x x x libxml2/2.9.10-GCCcore-9.3.0 x x x x x x libxml2/2.9.9-GCCcore-8.3.0 x x x x x x libxml2/2.9.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libxslt/", "title": "libxslt", "text": ""}, {"location": "available_software/detail/libxslt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libxslt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxslt, load one of these modules using a module load command like:

                  module load libxslt/1.1.38-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxslt/1.1.38-GCCcore-13.2.0 x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x libxslt/1.1.34-GCCcore-11.3.0 x x x x x x libxslt/1.1.34-GCCcore-11.2.0 x x x x x x libxslt/1.1.34-GCCcore-10.3.0 x x x x x x libxslt/1.1.34-GCCcore-10.2.0 x x x x x x libxslt/1.1.34-GCCcore-9.3.0 - x x - x x libxslt/1.1.34-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxsmm/", "title": "libxsmm", "text": ""}, {"location": "available_software/detail/libxsmm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libxsmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxsmm, load one of these modules using a module load command like:

                  module load libxsmm/1.17-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxsmm/1.17-GCC-12.3.0 x x x x x x libxsmm/1.17-GCC-12.2.0 x x x x x x libxsmm/1.17-GCC-11.3.0 x x x x x x libxsmm/1.16.2-GCC-10.3.0 - x x x x x libxsmm/1.16.1-iccifort-2020.4.304 - x x - x - libxsmm/1.16.1-iccifort-2020.1.217 - x x - x x libxsmm/1.16.1-iccifort-2019.5.281 - x - - - - libxsmm/1.16.1-GCC-10.2.0 - x x x x x libxsmm/1.16.1-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/libyaml/", "title": "libyaml", "text": ""}, {"location": "available_software/detail/libyaml/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libyaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libyaml, load one of these modules using a module load command like:

                  module load libyaml/0.2.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libyaml/0.2.5-GCCcore-12.3.0 x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x libyaml/0.2.5-GCCcore-11.3.0 x x x x x x libyaml/0.2.5-GCCcore-11.2.0 x x x x x x libyaml/0.2.5-GCCcore-10.3.0 x x x x x x libyaml/0.2.5-GCCcore-10.2.0 x x x x x x libyaml/0.2.2-GCCcore-9.3.0 x x x x x x libyaml/0.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libzip/", "title": "libzip", "text": ""}, {"location": "available_software/detail/libzip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which libzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libzip, load one of these modules using a module load command like:

                  module load libzip/1.7.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libzip/1.7.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/lifelines/", "title": "lifelines", "text": ""}, {"location": "available_software/detail/lifelines/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lifelines installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lifelines, load one of these modules using a module load command like:

                  module load lifelines/0.27.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lifelines/0.27.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/likwid/", "title": "likwid", "text": ""}, {"location": "available_software/detail/likwid/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which likwid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using likwid, load one of these modules using a module load command like:

                  module load likwid/5.0.1-GCCcore-8.3.0\n
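
                   As a simple sketch, the likwid command-line tools can be used directly once the module is loaded, for example to inspect the CPU topology of the node you are running on:

                   module load likwid/5.0.1-GCCcore-8.3.0
                   # print sockets, cores and cache hierarchy of the current node
                   likwid-topology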

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty likwid/5.0.1-GCCcore-8.3.0 - - x - x -"}, {"location": "available_software/detail/lmoments3/", "title": "lmoments3", "text": ""}, {"location": "available_software/detail/lmoments3/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lmoments3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lmoments3, load one of these modules using a module load command like:

                  module load lmoments3/1.0.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lmoments3/1.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/longread_umi/", "title": "longread_umi", "text": ""}, {"location": "available_software/detail/longread_umi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which longread_umi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using longread_umi, load one of these modules using a module load command like:

                  module load longread_umi/0.3.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty longread_umi/0.3.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/loomR/", "title": "loomR", "text": ""}, {"location": "available_software/detail/loomR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which loomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using loomR, load one of these modules using a module load command like:

                  module load loomR/0.2.0-20180425-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty loomR/0.2.0-20180425-foss-2023a-R-4.3.2 x x x x x x loomR/0.2.0-20180425-foss-2022b-R-4.2.2 x x x x x x loomR/0.2.0-20180425-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/loompy/", "title": "loompy", "text": ""}, {"location": "available_software/detail/loompy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which loompy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using loompy, load one of these modules using a module load command like:

                  module load loompy/3.0.7-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty loompy/3.0.7-intel-2021b x x x - x x loompy/3.0.7-foss-2022a x x x x x x loompy/3.0.7-foss-2021b x x x - x x loompy/3.0.7-foss-2021a x x x x x x loompy/3.0.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/louvain/", "title": "louvain", "text": ""}, {"location": "available_software/detail/louvain/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using louvain, load one of these modules using a module load command like:

                  module load louvain/0.8.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty louvain/0.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/lpsolve/", "title": "lpsolve", "text": ""}, {"location": "available_software/detail/lpsolve/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lpsolve installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lpsolve, load one of these modules using a module load command like:

                  module load lpsolve/5.5.2.11-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lpsolve/5.5.2.11-GCC-11.2.0 x x x x x x lpsolve/5.5.2.11-GCC-10.2.0 x x x x x x lpsolve/5.5.2.5-iccifort-2019.5.281 - x x - x x lpsolve/5.5.2.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/lxml/", "title": "lxml", "text": ""}, {"location": "available_software/detail/lxml/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lxml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lxml, load one of these modules using a module load command like:

                  module load lxml/4.9.3-GCCcore-13.2.0\n
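
                   For illustration only (a minimal sketch, assuming the Python interpreter pulled in with this module is on your path after loading), lxml can then be used straight from Python:

                   module load lxml/4.9.3-GCCcore-13.2.0
                   # parse a small XML fragment and print the root tag ("root")
                   python -c "from lxml import etree; print(etree.fromstring('<root><child/></root>').tag)"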

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lxml/4.9.3-GCCcore-13.2.0 x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x lxml/4.9.2-GCCcore-12.2.0 x x x x x x lxml/4.9.1-GCCcore-11.3.0 x x x x x x lxml/4.6.3-GCCcore-11.2.0 x x x x x x lxml/4.6.3-GCCcore-10.3.0 x x x x x x lxml/4.6.2-GCCcore-10.2.0 x x x x x x lxml/4.5.2-GCCcore-9.3.0 - x x - x x lxml/4.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/lz4/", "title": "lz4", "text": ""}, {"location": "available_software/detail/lz4/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which lz4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lz4, load one of these modules using a module load command like:

                  module load lz4/1.9.4-GCCcore-13.2.0\n
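
                   As a quick usage sketch (assuming the module also provides the lz4 command-line tool; data.txt is a hypothetical input file), you can compress and decompress a file:

                   module load lz4/1.9.4-GCCcore-13.2.0
                   lz4 data.txt data.txt.lz4          # compress data.txt into data.txt.lz4
                   lz4 -d data.txt.lz4 restored.txt   # decompress it again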

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lz4/1.9.4-GCCcore-13.2.0 x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x lz4/1.9.3-GCCcore-11.3.0 x x x x x x lz4/1.9.3-GCCcore-11.2.0 x x x x x x lz4/1.9.3-GCCcore-10.3.0 x x x x x x lz4/1.9.2-GCCcore-10.2.0 x x x x x x lz4/1.9.2-GCCcore-9.3.0 - x x x x x lz4/1.9.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/maeparser/", "title": "maeparser", "text": ""}, {"location": "available_software/detail/maeparser/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which maeparser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using maeparser, load one of these modules using a module load command like:

                  module load maeparser/1.3.0-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maeparser/1.3.0-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/magma/", "title": "magma", "text": ""}, {"location": "available_software/detail/magma/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which magma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using magma, load one of these modules using a module load command like:

                  module load magma/2.7.2-foss-2023a-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty magma/2.7.2-foss-2023a-CUDA-12.1.1 x - x - x - magma/2.6.2-foss-2022a-CUDA-11.7.0 x - x - x - magma/2.6.1-foss-2021a-CUDA-11.3.1 x - - - x - magma/2.5.4-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/mahotas/", "title": "mahotas", "text": ""}, {"location": "available_software/detail/mahotas/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mahotas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mahotas, load one of these modules using a module load command like:

                  module load mahotas/1.4.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mahotas/1.4.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/make/", "title": "make", "text": ""}, {"location": "available_software/detail/make/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which make installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using make, load one of these modules using a module load command like:

                  module load make/4.4.1-GCCcore-13.2.0\n
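
                   For example (a trivial sketch), after loading the module you can verify which make is picked up and run a parallel build of a Makefile in the current directory:

                   module load make/4.4.1-GCCcore-13.2.0
                   make --version   # should report GNU Make 4.4.1
                   make -j 4        # build the default target using 4 parallel jobs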

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty make/4.4.1-GCCcore-13.2.0 x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x make/4.3-GCCcore-12.2.0 - x x - x - make/4.3-GCCcore-11.3.0 x x x - x - make/4.3-GCCcore-11.2.0 x x - x - - make/4.3-GCCcore-10.3.0 x x x - x x make/4.3-GCCcore-10.2.0 x x - - - - make/4.3-GCCcore-9.3.0 - x x - x x make/4.2.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/makedepend/", "title": "makedepend", "text": ""}, {"location": "available_software/detail/makedepend/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which makedepend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using makedepend, load one of these modules using a module load command like:

                  module load makedepend/1.0.6-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty makedepend/1.0.6-GCCcore-10.3.0 - x x - x x makedepend/1.0.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/makeinfo/", "title": "makeinfo", "text": ""}, {"location": "available_software/detail/makeinfo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which makeinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using makeinfo, load one of these modules using a module load command like:

                  module load makeinfo/7.0.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty makeinfo/7.0.3-GCCcore-12.3.0 x x x x x x makeinfo/6.7-GCCcore-10.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.3.0 - x x - x x makeinfo/6.7-GCCcore-10.2.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.2.0 - x x x x x makeinfo/6.7-GCCcore-9.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-9.3.0 - x x - x x makeinfo/6.7-GCCcore-8.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/manta/", "title": "manta", "text": ""}, {"location": "available_software/detail/manta/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which manta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using manta, load one of these modules using a module load command like:

                  module load manta/1.6.0-gompi-2020a-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty manta/1.6.0-gompi-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/mapDamage/", "title": "mapDamage", "text": ""}, {"location": "available_software/detail/mapDamage/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mapDamage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mapDamage, load one of these modules using a module load command like:

                  module load mapDamage/2.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mapDamage/2.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/matplotlib/", "title": "matplotlib", "text": ""}, {"location": "available_software/detail/matplotlib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which matplotlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using matplotlib, load one of these modules using a module load command like:

                  module load matplotlib/3.7.2-gfbf-2023a\n
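
                   A minimal sketch (assuming the Python interpreter from the same toolchain is loaded along with the module) that renders a plot to a PNG file without needing a display:

                   module load matplotlib/3.7.2-gfbf-2023a
                   # use the non-interactive Agg backend and write test.png to the current directory
                   python -c "import matplotlib; matplotlib.use('Agg'); import matplotlib.pyplot as plt; plt.plot([1, 2, 3], [1, 4, 9]); plt.savefig('test.png')"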

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty matplotlib/3.7.2-gfbf-2023a x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x matplotlib/3.5.2-intel-2022a x x x x x x matplotlib/3.5.2-foss-2022a x x x x x x matplotlib/3.5.2-foss-2021b x - x - x - matplotlib/3.4.3-intel-2021b x x x - x x matplotlib/3.4.3-foss-2021b x x x x x x matplotlib/3.4.2-gomkl-2021a x x x x x x matplotlib/3.4.2-foss-2021a x x x x x x matplotlib/3.3.3-intel-2020b - x x - x x matplotlib/3.3.3-fosscuda-2020b x - - - x - matplotlib/3.3.3-foss-2020b x x x x x x matplotlib/3.2.1-intel-2020a-Python-3.8.2 x x x x x x matplotlib/3.2.1-foss-2020a-Python-3.8.2 - x x - x x matplotlib/3.1.1-intel-2019b-Python-3.7.4 - x x - x x matplotlib/3.1.1-foss-2019b-Python-3.7.4 - x x - x x matplotlib/2.2.5-intel-2020a-Python-2.7.18 - x x - x x matplotlib/2.2.5-foss-2020b-Python-2.7.18 - x x x x x matplotlib/2.2.4-intel-2019b-Python-2.7.16 - x x - x x matplotlib/2.2.4-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/maturin/", "title": "maturin", "text": ""}, {"location": "available_software/detail/maturin/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which maturin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using maturin, load one of these modules using a module load command like:

                  module load maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0\n
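
                   As an illustrative sketch only (it assumes a Rust-based Python project with a pyproject.toml in the current directory, which is not part of this overview), maturin can build a wheel once the module is loaded:

                   module load maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0
                   # compile the project in release mode and produce a wheel under target/wheels/
                   maturin build --release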

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x maturin/1.4.0-GCCcore-12.2.0-Rust-1.75.0 x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x maturin/1.1.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/mauveAligner/", "title": "mauveAligner", "text": ""}, {"location": "available_software/detail/mauveAligner/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mauveAligner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mauveAligner, load one of these modules using a module load command like:

                  module load mauveAligner/4736-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mauveAligner/4736-gompi-2020a - x x - x x"}, {"location": "available_software/detail/maze/", "title": "maze", "text": ""}, {"location": "available_software/detail/maze/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which maze installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using maze, load one of these modules using a module load command like:

                  module load maze/20170124-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maze/20170124-foss-2020b - x x x x x"}, {"location": "available_software/detail/mcu/", "title": "mcu", "text": ""}, {"location": "available_software/detail/mcu/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mcu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mcu, load one of these modules using a module load command like:

                  module load mcu/2021-04-06-gomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mcu/2021-04-06-gomkl-2021a x x x - x x"}, {"location": "available_software/detail/medImgProc/", "title": "medImgProc", "text": ""}, {"location": "available_software/detail/medImgProc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which medImgProc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using medImgProc, load one of these modules using a module load command like:

                  module load medImgProc/2.5.7-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty medImgProc/2.5.7-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/medaka/", "title": "medaka", "text": ""}, {"location": "available_software/detail/medaka/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which medaka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using medaka, load one of these modules using a module load command like:

                  module load medaka/1.11.3-foss-2022a\n
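
                   For illustration only (the input files, output directory and thread count below are placeholders, not taken from this documentation), polishing a draft assembly with medaka typically looks like:

                   module load medaka/1.11.3-foss-2022a
                   # -i: basecalled reads, -d: draft assembly to polish, -o: output directory, -t: threads
                   medaka_consensus -i basecalls.fastq -d draft_assembly.fasta -o medaka_out -t 8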

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty medaka/1.11.3-foss-2022a x x x x x x medaka/1.9.1-foss-2022a x x x x x x medaka/1.8.1-foss-2022a x x x x x x medaka/1.6.0-foss-2021b x x x - x x medaka/1.4.3-foss-2020b - x x x x x medaka/1.4.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.2.6-foss-2019b-Python-3.7.4 - x - - - - medaka/1.1.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.1.1-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/meshalyzer/", "title": "meshalyzer", "text": ""}, {"location": "available_software/detail/meshalyzer/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which meshalyzer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using meshalyzer, load one of these modules using a module load command like:

                  module load meshalyzer/20200308-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meshalyzer/20200308-foss-2020a-Python-3.8.2 - x x - x x meshalyzer/2.2-foss-2020b - x x x x x meshalyzer/2.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/meshtool/", "title": "meshtool", "text": ""}, {"location": "available_software/detail/meshtool/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which meshtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using meshtool, load one of these modules using a module load command like:

                  module load meshtool/16-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meshtool/16-GCC-10.2.0 - x x x x x meshtool/16-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/meson-python/", "title": "meson-python", "text": ""}, {"location": "available_software/detail/meson-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which meson-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using meson-python, load one of these modules using a module load command like:

                  module load meson-python/0.15.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meson-python/0.15.0-GCCcore-13.2.0 x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/metaWRAP/", "title": "metaWRAP", "text": ""}, {"location": "available_software/detail/metaWRAP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which metaWRAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using metaWRAP, load one of these modules using a module load command like:

                  module load metaWRAP/1.3-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty metaWRAP/1.3-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/metaerg/", "title": "metaerg", "text": ""}, {"location": "available_software/detail/metaerg/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which metaerg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using metaerg, load one of these modules using a module load command like:

                  module load metaerg/1.2.3-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty metaerg/1.2.3-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/methylpy/", "title": "methylpy", "text": ""}, {"location": "available_software/detail/methylpy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which methylpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using methylpy, load one of these modules using a module load command like:

                  module load methylpy/1.2.9-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty methylpy/1.2.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/mgen/", "title": "mgen", "text": ""}, {"location": "available_software/detail/mgen/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mgen, load one of these modules using a module load command like:

                  module load mgen/1.2.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mgen/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/mgltools/", "title": "mgltools", "text": ""}, {"location": "available_software/detail/mgltools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mgltools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mgltools, load one of these modules using a module load command like:

                  module load mgltools/1.5.7\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mgltools/1.5.7 x x x - x x"}, {"location": "available_software/detail/mhcnuggets/", "title": "mhcnuggets", "text": ""}, {"location": "available_software/detail/mhcnuggets/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mhcnuggets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mhcnuggets, load one of these modules using a module load command like:

                  module load mhcnuggets/2.3-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mhcnuggets/2.3-fosscuda-2020b - - - - x - mhcnuggets/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/microctools/", "title": "microctools", "text": ""}, {"location": "available_software/detail/microctools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which microctools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using microctools, load one of these modules using a module load command like:

                  module load microctools/0.1.0-20201209-foss-2020b-R-4.0.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty microctools/0.1.0-20201209-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/minibar/", "title": "minibar", "text": ""}, {"location": "available_software/detail/minibar/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which minibar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using minibar, load one of these modules using a module load command like:

                  module load minibar/20200326-iccifort-2020.1.217-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minibar/20200326-iccifort-2020.1.217-Python-3.8.2 - x x - x - minibar/20200326-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/minimap2/", "title": "minimap2", "text": ""}, {"location": "available_software/detail/minimap2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which minimap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using minimap2, load one of these modules using a module load command like:

                  module load minimap2/2.26-GCCcore-12.3.0\n
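
                   A minimal sketch with hypothetical input files (ref.fa and reads.fq are placeholders): map long reads against a reference and write the alignments as SAM:

                   module load minimap2/2.26-GCCcore-12.3.0
                   # -a: produce SAM output, -x map-ont: preset for Oxford Nanopore reads
                   minimap2 -ax map-ont ref.fa reads.fq > aln.sam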

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minimap2/2.26-GCCcore-12.3.0 x x x x x x minimap2/2.26-GCCcore-12.2.0 x x x x x x minimap2/2.24-GCCcore-11.3.0 x x x x x x minimap2/2.24-GCCcore-11.2.0 x x x - x x minimap2/2.22-GCCcore-11.2.0 x x x - x x minimap2/2.20-GCCcore-10.3.0 x x x - x x minimap2/2.20-GCCcore-10.2.0 - x x - x x minimap2/2.18-GCCcore-10.2.0 - x x x x x minimap2/2.17-GCCcore-9.3.0 - x x - x x minimap2/2.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/minizip/", "title": "minizip", "text": ""}, {"location": "available_software/detail/minizip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which minizip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using minizip, load one of these modules using a module load command like:

                  module load minizip/1.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minizip/1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/misha/", "title": "misha", "text": ""}, {"location": "available_software/detail/misha/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which misha installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using misha, load one of these modules using a module load command like:

                  module load misha/4.0.10-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty misha/4.0.10-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/mkl-service/", "title": "mkl-service", "text": ""}, {"location": "available_software/detail/mkl-service/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mkl-service installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mkl-service, load one of these modules using a module load command like:

                  module load mkl-service/2.3.0-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mkl-service/2.3.0-intel-2021b x x x - x x mkl-service/2.3.0-intel-2020b - - x - x x mkl-service/2.3.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/mm-common/", "title": "mm-common", "text": ""}, {"location": "available_software/detail/mm-common/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mm-common installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mm-common, load one of these modules using a module load command like:

                  module load mm-common/1.0.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mm-common/1.0.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/molmod/", "title": "molmod", "text": ""}, {"location": "available_software/detail/molmod/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which molmod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using molmod, load one of these modules using a module load command like:

                  module load molmod/1.4.5-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty molmod/1.4.5-intel-2020a-Python-3.8.2 x x x x x x molmod/1.4.5-intel-2019b-Python-3.7.4 - x x - x x molmod/1.4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/mongolite/", "title": "mongolite", "text": ""}, {"location": "available_software/detail/mongolite/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mongolite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mongolite, load one of these modules using a module load command like:

                  module load mongolite/2.3.0-foss-2020b-R-4.0.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mongolite/2.3.0-foss-2020b-R-4.0.4 - x x x x x mongolite/2.3.0-foss-2020b-R-4.0.3 - x x x x x mongolite/2.3.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/monitor/", "title": "monitor", "text": ""}, {"location": "available_software/detail/monitor/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which monitor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using monitor, load one of these modules using a module load command like:

                  module load monitor/1.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty monitor/1.1.2 - x x - x -"}, {"location": "available_software/detail/mosdepth/", "title": "mosdepth", "text": ""}, {"location": "available_software/detail/mosdepth/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mosdepth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mosdepth, load one of these modules using a module load command like:

                  module load mosdepth/0.3.3-GCC-11.2.0\n
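
                   A brief sketch (sample.bam is a hypothetical, indexed BAM file): compute depth of coverage, writing summary files that start with the given output prefix:

                   module load mosdepth/0.3.3-GCC-11.2.0
                   # writes sample_prefix.mosdepth.summary.txt and related outputs
                   mosdepth --threads 4 sample_prefix sample.bam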

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mosdepth/0.3.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/motionSegmentation/", "title": "motionSegmentation", "text": ""}, {"location": "available_software/detail/motionSegmentation/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which motionSegmentation installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using motionSegmentation, load one of these modules using a module load command like:

                  module load motionSegmentation/2.7.9-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty motionSegmentation/2.7.9-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/mpath/", "title": "mpath", "text": ""}, {"location": "available_software/detail/mpath/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mpath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mpath, load one of these modules using a module load command like:

                  module load mpath/1.1.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mpath/1.1.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/mpi4py/", "title": "mpi4py", "text": ""}, {"location": "available_software/detail/mpi4py/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mpi4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mpi4py, load one of these modules using a module load command like:

                  module load mpi4py/3.1.4-gompi-2023a\n
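
                   As a small sketch (assuming the gompi toolchain's mpirun and Python are available once the module is loaded; in a real job script you would request matching resources from the scheduler), each MPI rank prints its rank:

                   module load mpi4py/3.1.4-gompi-2023a
                   # launch 4 ranks, each printing its rank and the communicator size
                   mpirun -np 4 python -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print(f'rank {c.Get_rank()} of {c.Get_size()}')"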

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mpi4py/3.1.4-gompi-2023a x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x"}, {"location": "available_software/detail/mrcfile/", "title": "mrcfile", "text": ""}, {"location": "available_software/detail/mrcfile/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which mrcfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mrcfile, load one of these modules using a module load command like:

                  module load mrcfile/1.3.0-fosscuda-2020b\n
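
Once the module is loaded, a minimal sketch of inspecting an MRC map with the mrcfile package (example.mrc is a placeholder file name):

# print the dimensions and data type of an MRC map
import mrcfile

with mrcfile.open('example.mrc') as mrc:    # placeholder file name
    print(mrc.data.shape, mrc.data.dtype)   # the map is exposed as a NumPy array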

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mrcfile/1.3.0-fosscuda-2020b x - - - x - mrcfile/1.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/muParser/", "title": "muParser", "text": ""}, {"location": "available_software/detail/muParser/#available-modules", "title": "Available modules", "text": "

The overview below shows which muParser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using muParser, load one of these modules using a module load command like:

                  module load muParser/2.3.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty muParser/2.3.4-GCCcore-12.3.0 x x x x x x muParser/2.3.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/mujoco-py/", "title": "mujoco-py", "text": ""}, {"location": "available_software/detail/mujoco-py/#available-modules", "title": "Available modules", "text": "

The overview below shows which mujoco-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mujoco-py, load one of these modules using a module load command like:

                  module load mujoco-py/2.3.7-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mujoco-py/2.3.7-foss-2023a x x x x x x mujoco-py/2.1.2.14-foss-2021b x x x x x x"}, {"location": "available_software/detail/multichoose/", "title": "multichoose", "text": ""}, {"location": "available_software/detail/multichoose/#available-modules", "title": "Available modules", "text": "

The overview below shows which multichoose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using multichoose, load one of these modules using a module load command like:

                  module load multichoose/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty multichoose/1.0.3-GCCcore-11.3.0 x x x x x x multichoose/1.0.3-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/mygene/", "title": "mygene", "text": ""}, {"location": "available_software/detail/mygene/#available-modules", "title": "Available modules", "text": "

The overview below shows which mygene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mygene, load one of these modules using a module load command like:

                  module load mygene/3.2.2-foss-2022b\n
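
With the module loaded, a minimal sketch of querying gene annotation via the mygene package; note that this calls the external MyGene.info web service, so it assumes outbound network access, and the gene symbol is just an example:

# look up a gene symbol through the MyGene.info service
import mygene

mg = mygene.MyGeneInfo()
result = mg.query('symbol:TP53', species='human')   # example query
print(result['total'], 'matching records')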

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mygene/3.2.2-foss-2022b x x x x x x mygene/3.2.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/mysqlclient/", "title": "mysqlclient", "text": ""}, {"location": "available_software/detail/mysqlclient/#available-modules", "title": "Available modules", "text": "

The overview below shows which mysqlclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using mysqlclient, load one of these modules using a module load command like:

                  module load mysqlclient/2.1.1-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mysqlclient/2.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/n2v/", "title": "n2v", "text": ""}, {"location": "available_software/detail/n2v/#available-modules", "title": "Available modules", "text": "

The overview below shows which n2v installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using n2v, load one of these modules using a module load command like:

                  module load n2v/0.3.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty n2v/0.3.2-foss-2022a-CUDA-11.7.0 x - - - x - n2v/0.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/nanocompore/", "title": "nanocompore", "text": ""}, {"location": "available_software/detail/nanocompore/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanocompore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanocompore, load one of these modules using a module load command like:

                  module load nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/nanofilt/", "title": "nanofilt", "text": ""}, {"location": "available_software/detail/nanofilt/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanofilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanofilt, load one of these modules using a module load command like:

                  module load nanofilt/2.6.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanofilt/2.6.0-intel-2020a-Python-3.8.2 - x x - x x nanofilt/2.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanoget/", "title": "nanoget", "text": ""}, {"location": "available_software/detail/nanoget/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanoget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanoget, load one of these modules using a module load command like:

                  module load nanoget/1.18.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanoget/1.18.1-foss-2022a x x x x x x nanoget/1.18.1-foss-2021a x x x x x x nanoget/1.15.0-intel-2020b - x x - x x nanoget/1.12.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanomath/", "title": "nanomath", "text": ""}, {"location": "available_software/detail/nanomath/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanomath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanomath, load one of these modules using a module load command like:

                  module load nanomath/1.3.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanomath/1.3.0-foss-2022a x x x x x x nanomath/1.2.1-foss-2021a x x x x x x nanomath/1.2.0-intel-2020b - x x - x x nanomath/0.23.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanopolish/", "title": "nanopolish", "text": ""}, {"location": "available_software/detail/nanopolish/#available-modules", "title": "Available modules", "text": "

The overview below shows which nanopolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nanopolish, load one of these modules using a module load command like:

                  module load nanopolish/0.14.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanopolish/0.14.0-foss-2022a x x x x x x nanopolish/0.13.3-foss-2020b - x x x x x nanopolish/0.13.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/napari/", "title": "napari", "text": ""}, {"location": "available_software/detail/napari/#available-modules", "title": "Available modules", "text": "

The overview below shows which napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using napari, load one of these modules using a module load command like:

                  module load napari/0.4.18-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty napari/0.4.18-foss-2022a x x x x x x napari/0.4.15-foss-2021b x x x - x x"}, {"location": "available_software/detail/ncbi-vdb/", "title": "ncbi-vdb", "text": ""}, {"location": "available_software/detail/ncbi-vdb/#available-modules", "title": "Available modules", "text": "

The overview below shows which ncbi-vdb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncbi-vdb, load one of these modules using a module load command like:

                  module load ncbi-vdb/3.0.2-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncbi-vdb/3.0.2-gompi-2022a x x x x x x ncbi-vdb/3.0.0-gompi-2021b x x x x x x ncbi-vdb/2.11.2-gompi-2021b x x x x x x ncbi-vdb/2.10.9-gompi-2020b - x x x x x ncbi-vdb/2.10.7-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ncdf4/", "title": "ncdf4", "text": ""}, {"location": "available_software/detail/ncdf4/#available-modules", "title": "Available modules", "text": "

The overview below shows which ncdf4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncdf4, load one of these modules using a module load command like:

                  module load ncdf4/1.17-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncdf4/1.17-foss-2021a-R-4.1.0 - x x - x x ncdf4/1.17-foss-2020b-R-4.0.3 x x x x x x ncdf4/1.17-foss-2020a-R-4.0.0 - x x - x x ncdf4/1.17-foss-2019b - x x - x x"}, {"location": "available_software/detail/ncolor/", "title": "ncolor", "text": ""}, {"location": "available_software/detail/ncolor/#available-modules", "title": "Available modules", "text": "

The overview below shows which ncolor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncolor, load one of these modules using a module load command like:

                  module load ncolor/1.2.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncolor/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/ncurses/", "title": "ncurses", "text": ""}, {"location": "available_software/detail/ncurses/#available-modules", "title": "Available modules", "text": "

The overview below shows which ncurses installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncurses, load one of these modules using a module load command like:

                  module load ncurses/6.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncurses/6.4-GCCcore-13.2.0 x x x x x x ncurses/6.4-GCCcore-12.3.0 x x x x x x ncurses/6.4 x x x x x x ncurses/6.3-GCCcore-12.2.0 x x x x x x ncurses/6.3-GCCcore-11.3.0 x x x x x x ncurses/6.3 x x x x x x ncurses/6.2-GCCcore-11.2.0 x x x x x x ncurses/6.2-GCCcore-10.3.0 x x x x x x ncurses/6.2-GCCcore-10.2.0 x x x x x x ncurses/6.2-GCCcore-9.3.0 x x x x x x ncurses/6.2 x x x x x x ncurses/6.1-GCCcore-8.3.0 x x x x x x ncurses/6.1-GCCcore-8.2.0 - x - - - - ncurses/6.1 x x x x x x ncurses/6.0 x x x x x x"}, {"location": "available_software/detail/ncview/", "title": "ncview", "text": ""}, {"location": "available_software/detail/ncview/#available-modules", "title": "Available modules", "text": "

The overview below shows which ncview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ncview, load one of these modules using a module load command like:

                  module load ncview/2.1.7-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncview/2.1.7-intel-2019b - x x - x x"}, {"location": "available_software/detail/netCDF-C%2B%2B4/", "title": "netCDF-C++4", "text": ""}, {"location": "available_software/detail/netCDF-C%2B%2B4/#available-modules", "title": "Available modules", "text": "

The overview below shows which netCDF-C++4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netCDF-C++4, load one of these modules using a module load command like:

                  module load netCDF-C++4/4.3.1-iimpi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF-C++4/4.3.1-iimpi-2020b - x x x x x netCDF-C++4/4.3.1-iimpi-2019b - x x - x x netCDF-C++4/4.3.1-gompi-2021b x x x - x x netCDF-C++4/4.3.1-gompi-2021a - x x - x x netCDF-C++4/4.3.1-gompi-2020a - x x - x x"}, {"location": "available_software/detail/netCDF-Fortran/", "title": "netCDF-Fortran", "text": ""}, {"location": "available_software/detail/netCDF-Fortran/#available-modules", "title": "Available modules", "text": "

The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netCDF-Fortran, load one of these modules using a module load command like:

                  module load netCDF-Fortran/4.6.0-iimpi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF-Fortran/4.6.0-iimpi-2022a - - x - x x netCDF-Fortran/4.6.0-gompi-2022a x - x - x - netCDF-Fortran/4.5.3-iimpi-2021b x x x x x x netCDF-Fortran/4.5.3-iimpi-2020b - x x x x x netCDF-Fortran/4.5.3-gompi-2021b x x x x x x netCDF-Fortran/4.5.3-gompi-2021a - x x - x x netCDF-Fortran/4.5.2-iimpi-2020a - x x - x x netCDF-Fortran/4.5.2-iimpi-2019b - x x - x x netCDF-Fortran/4.5.2-gompi-2020a - x x - x x netCDF-Fortran/4.5.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/netCDF/", "title": "netCDF", "text": ""}, {"location": "available_software/detail/netCDF/#available-modules", "title": "Available modules", "text": "

The overview below shows which netCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netCDF, load one of these modules using a module load command like:

                  module load netCDF/4.9.2-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF/4.9.2-gompi-2023a x x x x x x netCDF/4.9.0-iimpi-2022a - - x - x x netCDF/4.9.0-gompi-2022b x x x x x x netCDF/4.9.0-gompi-2022a x x x x x x netCDF/4.8.1-iimpi-2021b x x x x x x netCDF/4.8.1-gompi-2021b x x x x x x netCDF/4.8.0-iimpi-2021a - x x - x x netCDF/4.8.0-gompi-2021a x x x x x x netCDF/4.7.4-iimpi-2020b - x x x x x netCDF/4.7.4-iimpi-2020a - x x - x x netCDF/4.7.4-gompic-2020b - - - - x - netCDF/4.7.4-gompi-2020b x x x x x x netCDF/4.7.4-gompi-2020a - x x - x x netCDF/4.7.1-iimpi-2019b - x x - x x netCDF/4.7.1-gompi-2019b x x x - x x"}, {"location": "available_software/detail/netcdf4-python/", "title": "netcdf4-python", "text": ""}, {"location": "available_software/detail/netcdf4-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which netcdf4-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using netcdf4-python, load one of these modules using a module load command like:

                  module load netcdf4-python/1.6.4-foss-2023a\n
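
After loading the module, a minimal sketch of opening a NetCDF file with the netCDF4 package (data.nc is a placeholder path):

# list the variables and dimensions of a NetCDF file
from netCDF4 import Dataset

with Dataset('data.nc', 'r') as ds:   # placeholder file name
    print(list(ds.variables))         # variable names
    print(list(ds.dimensions))        # dimension names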

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netcdf4-python/1.6.4-foss-2023a x x x x x x netcdf4-python/1.6.1-foss-2022a x x x x x x netcdf4-python/1.5.7-intel-2021b x x x - x x netcdf4-python/1.5.7-foss-2021b x x x x x x netcdf4-python/1.5.7-foss-2021a x x x x x x netcdf4-python/1.5.5.1-intel-2020b - x x - x x netcdf4-python/1.5.5.1-fosscuda-2020b - - - - x - netcdf4-python/1.5.3-intel-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-intel-2019b-Python-3.7.4 - x x - x x netcdf4-python/1.5.3-foss-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nettle/", "title": "nettle", "text": ""}, {"location": "available_software/detail/nettle/#available-modules", "title": "Available modules", "text": "

The overview below shows which nettle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nettle, load one of these modules using a module load command like:

                  module load nettle/3.9.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nettle/3.9.1-GCCcore-12.3.0 x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x nettle/3.8-GCCcore-11.3.0 x x x x x x nettle/3.7.3-GCCcore-11.2.0 x x x x x x nettle/3.7.2-GCCcore-10.3.0 x x x x x x nettle/3.6-GCCcore-10.2.0 x x x x x x nettle/3.6-GCCcore-9.3.0 - x x - x x nettle/3.5.1-GCCcore-8.3.0 x x x - x x nettle/3.4.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/networkx/", "title": "networkx", "text": ""}, {"location": "available_software/detail/networkx/#available-modules", "title": "Available modules", "text": "

The overview below shows which networkx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using networkx, load one of these modules using a module load command like:

                  module load networkx/3.1-gfbf-2023a\n
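
With the module loaded, a minimal networkx sketch (the toy graph is illustrative):

# build a small graph and compute a shortest path
import networkx as nx

G = nx.Graph()
G.add_edges_from([('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 'd')])
print(nx.shortest_path(G, 'a', 'd'))   # e.g. ['a', 'c', 'd']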

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty networkx/3.1-gfbf-2023a x x x x x x networkx/3.0-gfbf-2022b x x x x x x networkx/3.0-foss-2022b x x x x x x networkx/2.8.4-intel-2022a x x x x x x networkx/2.8.4-foss-2022a x x x x x x networkx/2.6.3-foss-2021b x x x x x x networkx/2.5.1-foss-2021a x x x x x x networkx/2.5-fosscuda-2020b x - - - x - networkx/2.5-foss-2020b - x x x x x networkx/2.4-intel-2020a-Python-3.8.2 - x x - x x networkx/2.4-intel-2019b-Python-3.7.4 - x x - x x networkx/2.4-foss-2020a-Python-3.8.2 - x x - x x networkx/2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nghttp2/", "title": "nghttp2", "text": ""}, {"location": "available_software/detail/nghttp2/#available-modules", "title": "Available modules", "text": "

The overview below shows which nghttp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nghttp2, load one of these modules using a module load command like:

                  module load nghttp2/1.48.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nghttp2/1.48.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nghttp3/", "title": "nghttp3", "text": ""}, {"location": "available_software/detail/nghttp3/#available-modules", "title": "Available modules", "text": "

The overview below shows which nghttp3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nghttp3, load one of these modules using a module load command like:

                  module load nghttp3/0.6.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nghttp3/0.6.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/nglview/", "title": "nglview", "text": ""}, {"location": "available_software/detail/nglview/#available-modules", "title": "Available modules", "text": "

The overview below shows which nglview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nglview, load one of these modules using a module load command like:

                  module load nglview/2.7.7-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nglview/2.7.7-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/ngtcp2/", "title": "ngtcp2", "text": ""}, {"location": "available_software/detail/ngtcp2/#available-modules", "title": "Available modules", "text": "

The overview below shows which ngtcp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ngtcp2, load one of these modules using a module load command like:

                  module load ngtcp2/0.7.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ngtcp2/0.7.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nichenetr/", "title": "nichenetr", "text": ""}, {"location": "available_software/detail/nichenetr/#available-modules", "title": "Available modules", "text": "

The overview below shows which nichenetr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nichenetr, load one of these modules using a module load command like:

                  module load nichenetr/2.0.4-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nichenetr/2.0.4-foss-2022b-R-4.2.2 x x x x x x nichenetr/1.1.1-20230223-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/nlohmann_json/", "title": "nlohmann_json", "text": ""}, {"location": "available_software/detail/nlohmann_json/#available-modules", "title": "Available modules", "text": "

The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nlohmann_json, load one of these modules using a module load command like:

                  module load nlohmann_json/3.11.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x nlohmann_json/3.10.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/nnU-Net/", "title": "nnU-Net", "text": ""}, {"location": "available_software/detail/nnU-Net/#available-modules", "title": "Available modules", "text": "

The overview below shows which nnU-Net installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nnU-Net, load one of these modules using a module load command like:

                  module load nnU-Net/1.7.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nnU-Net/1.7.0-fosscuda-2020b x - - - x - nnU-Net/1.7.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/nodejs/", "title": "nodejs", "text": ""}, {"location": "available_software/detail/nodejs/#available-modules", "title": "Available modules", "text": "

The overview below shows which nodejs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nodejs, load one of these modules using a module load command like:

                  module load nodejs/18.17.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nodejs/18.17.1-GCCcore-12.3.0 x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x nodejs/16.15.1-GCCcore-11.3.0 x x x x x x nodejs/14.17.6-GCCcore-11.2.0 x x x x x x nodejs/14.17.0-GCCcore-10.3.0 x x x x x x nodejs/12.19.0-GCCcore-10.2.0 x x x x x x nodejs/12.16.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/noise/", "title": "noise", "text": ""}, {"location": "available_software/detail/noise/#available-modules", "title": "Available modules", "text": "

The overview below shows which noise installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using noise, load one of these modules using a module load command like:

                  module load noise/1.2.2-gfbf-2023a\n
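
After loading the module, a minimal sketch of sampling 2D Perlin noise with the noise package (the grid size and scale are arbitrary choices):

# sample Perlin noise on a small 2D grid
import noise

scale = 0.1
grid = [[noise.pnoise2(x * scale, y * scale) for x in range(8)] for y in range(8)]
print(grid[0][:4])   # a few samples in the range [-1, 1]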

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty noise/1.2.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/nsync/", "title": "nsync", "text": ""}, {"location": "available_software/detail/nsync/#available-modules", "title": "Available modules", "text": "

The overview below shows which nsync installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nsync, load one of these modules using a module load command like:

                  module load nsync/1.26.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nsync/1.26.0-GCCcore-12.3.0 x x x x x x nsync/1.26.0-GCCcore-12.2.0 x x x x x x nsync/1.25.0-GCCcore-11.3.0 x x x x x x nsync/1.24.0-GCCcore-11.2.0 x x x x x x nsync/1.24.0-GCCcore-10.3.0 x x x x x x nsync/1.24.0-GCCcore-10.2.0 x x x x x x nsync/1.24.0-GCCcore-9.3.0 - x x - x x nsync/1.24.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ntCard/", "title": "ntCard", "text": ""}, {"location": "available_software/detail/ntCard/#available-modules", "title": "Available modules", "text": "

The overview below shows which ntCard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ntCard, load one of these modules using a module load command like:

                  module load ntCard/1.2.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ntCard/1.2.2-GCC-12.3.0 x x x x x x ntCard/1.2.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/num2words/", "title": "num2words", "text": ""}, {"location": "available_software/detail/num2words/#available-modules", "title": "Available modules", "text": "

The overview below shows which num2words installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using num2words, load one of these modules using a module load command like:

                  module load num2words/0.5.10-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty num2words/0.5.10-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/numactl/", "title": "numactl", "text": ""}, {"location": "available_software/detail/numactl/#available-modules", "title": "Available modules", "text": "

The overview below shows which numactl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using numactl, load one of these modules using a module load command like:

                  module load numactl/2.0.16-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numactl/2.0.16-GCCcore-13.2.0 x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x numactl/2.0.14-GCCcore-11.3.0 x x x x x x numactl/2.0.14-GCCcore-11.2.0 x x x x x x numactl/2.0.14-GCCcore-10.3.0 x x x x x x numactl/2.0.13-GCCcore-10.2.0 x x x x x x numactl/2.0.13-GCCcore-9.3.0 x x x x x x numactl/2.0.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/numba/", "title": "numba", "text": ""}, {"location": "available_software/detail/numba/#available-modules", "title": "Available modules", "text": "

The overview below shows which numba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using numba, load one of these modules using a module load command like:

                  module load numba/0.58.1-foss-2023a\n
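
Once the module is loaded, a minimal numba sketch using the njit decorator (the toy summation is only meant to show that compilation works):

# JIT-compile a simple reduction with numba
import numpy as np
from numba import njit

@njit
def total(values):
    s = 0.0
    for v in values:
        s += v
    return s

print(total(np.arange(1_000_000, dtype=np.float64)))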

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numba/0.58.1-foss-2023a x x x x x x numba/0.58.1-foss-2022b x x x x x x numba/0.56.4-foss-2022a-CUDA-11.7.0 x - x - x - numba/0.56.4-foss-2022a x x x x x x numba/0.54.1-intel-2021b x x x - x x numba/0.54.1-foss-2021b-CUDA-11.4.1 x - - - x - numba/0.54.1-foss-2021b x x x x x x numba/0.53.1-fosscuda-2020b - - - - x - numba/0.53.1-foss-2021a x x x x x x numba/0.53.1-foss-2020b - x x x x x numba/0.52.0-intel-2020b - x x - x x numba/0.52.0-fosscuda-2020b - - - - x - numba/0.52.0-foss-2020b - x x x x x numba/0.50.0-intel-2020a-Python-3.8.2 - x x - x x numba/0.50.0-foss-2020a-Python-3.8.2 - x x - x x numba/0.47.0-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/numexpr/", "title": "numexpr", "text": ""}, {"location": "available_software/detail/numexpr/#available-modules", "title": "Available modules", "text": "

The overview below shows which numexpr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using numexpr, load one of these modules using a module load command like:

                  module load numexpr/2.7.1-intel-2020a-Python-3.8.2\n
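
With the module loaded, a minimal numexpr sketch (the array sizes are arbitrary):

# evaluate an array expression with numexpr, avoiding large Python temporaries
import numpy as np
import numexpr as ne

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
result = ne.evaluate('2*a + 3*b')
print(result[:3])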

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numexpr/2.7.1-intel-2020a-Python-3.8.2 x x x x x x numexpr/2.7.1-intel-2019b-Python-2.7.16 - x - - - x numexpr/2.7.1-foss-2020a-Python-3.8.2 - x x - x x numexpr/2.7.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nvtop/", "title": "nvtop", "text": ""}, {"location": "available_software/detail/nvtop/#available-modules", "title": "Available modules", "text": "

The overview below shows which nvtop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using nvtop, load one of these modules using a module load command like:

                  module load nvtop/1.2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nvtop/1.2.1-GCCcore-10.3.0 x - - - - -"}, {"location": "available_software/detail/olaFlow/", "title": "olaFlow", "text": ""}, {"location": "available_software/detail/olaFlow/#available-modules", "title": "Available modules", "text": "

The overview below shows which olaFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using olaFlow, load one of these modules using a module load command like:

                  module load olaFlow/20210820-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty olaFlow/20210820-foss-2021b x x x - x x"}, {"location": "available_software/detail/olego/", "title": "olego", "text": ""}, {"location": "available_software/detail/olego/#available-modules", "title": "Available modules", "text": "

The overview below shows which olego installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using olego, load one of these modules using a module load command like:

                  module load olego/1.1.9-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty olego/1.1.9-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/onedrive/", "title": "onedrive", "text": ""}, {"location": "available_software/detail/onedrive/#available-modules", "title": "Available modules", "text": "

The overview below shows which onedrive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using onedrive, load one of these modules using a module load command like:

                  module load onedrive/2.4.21-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty onedrive/2.4.21-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/ont-fast5-api/", "title": "ont-fast5-api", "text": ""}, {"location": "available_software/detail/ont-fast5-api/#available-modules", "title": "Available modules", "text": "

The overview below shows which ont-fast5-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ont-fast5-api, load one of these modules using a module load command like:

                  module load ont-fast5-api/4.1.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ont-fast5-api/4.1.1-foss-2022b x x x x x x ont-fast5-api/4.1.1-foss-2022a x x x x x x ont-fast5-api/4.0.2-foss-2021b x x x - x x ont-fast5-api/4.0.0-foss-2021a x x x - x x ont-fast5-api/3.3.0-fosscuda-2020b - - - - x - ont-fast5-api/3.3.0-foss-2020b - x x x x x ont-fast5-api/3.3.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/openCARP/", "title": "openCARP", "text": ""}, {"location": "available_software/detail/openCARP/#available-modules", "title": "Available modules", "text": "

The overview below shows which openCARP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openCARP, load one of these modules using a module load command like:

                  module load openCARP/6.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openCARP/6.0-foss-2020b - x x x x x openCARP/3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/openkim-models/", "title": "openkim-models", "text": ""}, {"location": "available_software/detail/openkim-models/#available-modules", "title": "Available modules", "text": "

The overview below shows which openkim-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openkim-models, load one of these modules using a module load command like:

                  module load openkim-models/20190725-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openkim-models/20190725-intel-2019b - x x - x x openkim-models/20190725-foss-2019b - x x - x x"}, {"location": "available_software/detail/openpyxl/", "title": "openpyxl", "text": ""}, {"location": "available_software/detail/openpyxl/#available-modules", "title": "Available modules", "text": "

The overview below shows which openpyxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openpyxl, load one of these modules using a module load command like:

                  module load openpyxl/3.1.2-GCCcore-13.2.0\n
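
After loading the module, a minimal openpyxl sketch (report.xlsx is a placeholder output file):

# write a small spreadsheet with openpyxl
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.append(['sample', 'value'])   # header row
ws.append(['A', 42])
wb.save('report.xlsx')           # placeholder file name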

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openpyxl/3.1.2-GCCcore-13.2.0 x x x x x x openpyxl/3.1.2-GCCcore-12.3.0 x x x x x x openpyxl/3.1.2-GCCcore-12.2.0 x x x x x x openpyxl/3.0.10-GCCcore-11.3.0 x x x x x x openpyxl/3.0.9-GCCcore-11.2.0 x x x x x x openpyxl/3.0.7-GCCcore-10.3.0 x x x x x x openpyxl/2.6.4-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/openslide-python/", "title": "openslide-python", "text": ""}, {"location": "available_software/detail/openslide-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which openslide-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using openslide-python, load one of these modules using a module load command like:

                  module load openslide-python/1.2.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openslide-python/1.2.0-GCCcore-11.3.0 x - x - x - openslide-python/1.1.2-GCCcore-11.2.0 x x x - x x openslide-python/1.1.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/orca/", "title": "orca", "text": ""}, {"location": "available_software/detail/orca/#available-modules", "title": "Available modules", "text": "

The overview below shows which orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using orca, load one of these modules using a module load command like:

                  module load orca/1.3.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty orca/1.3.1-GCCcore-10.2.0 - x - - - - orca/1.3.0-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/p11-kit/", "title": "p11-kit", "text": ""}, {"location": "available_software/detail/p11-kit/#available-modules", "title": "Available modules", "text": "

The overview below shows which p11-kit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using p11-kit, load one of these modules using a module load command like:

                  module load p11-kit/0.24.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p11-kit/0.24.1-GCCcore-11.2.0 x x x x x x p11-kit/0.24.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/p4est/", "title": "p4est", "text": ""}, {"location": "available_software/detail/p4est/#available-modules", "title": "Available modules", "text": "

The overview below shows which p4est installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using p4est, load one of these modules using a module load command like:

                  module load p4est/2.8-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p4est/2.8-foss-2021a - x x - x x"}, {"location": "available_software/detail/p7zip/", "title": "p7zip", "text": ""}, {"location": "available_software/detail/p7zip/#available-modules", "title": "Available modules", "text": "

The overview below shows which p7zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using p7zip, load one of these modules using a module load command like:

                  module load p7zip/17.03-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p7zip/17.03-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/pIRS/", "title": "pIRS", "text": ""}, {"location": "available_software/detail/pIRS/#available-modules", "title": "Available modules", "text": "

The overview below shows which pIRS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pIRS, load one of these modules using a module load command like:

                  module load pIRS/2.0.2-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pIRS/2.0.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/packmol/", "title": "packmol", "text": ""}, {"location": "available_software/detail/packmol/#available-modules", "title": "Available modules", "text": "

The overview below shows which packmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using packmol, load one of these modules using a module load command like:

                  module load packmol/v20.2.2-iccifort-2020.1.217\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty packmol/v20.2.2-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/pagmo/", "title": "pagmo", "text": ""}, {"location": "available_software/detail/pagmo/#available-modules", "title": "Available modules", "text": "

The overview below shows which pagmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pagmo, load one of these modules using a module load command like:

                  module load pagmo/2.17.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pagmo/2.17.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/pairtools/", "title": "pairtools", "text": ""}, {"location": "available_software/detail/pairtools/#available-modules", "title": "Available modules", "text": "

The overview below shows which pairtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pairtools, load one of these modules using a module load command like:

                  module load pairtools/0.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pairtools/0.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/panaroo/", "title": "panaroo", "text": ""}, {"location": "available_software/detail/panaroo/#available-modules", "title": "Available modules", "text": "

The overview below shows which panaroo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using panaroo, load one of these modules using a module load command like:

                  module load panaroo/1.2.8-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty panaroo/1.2.8-foss-2020b - x x x x x"}, {"location": "available_software/detail/pandas/", "title": "pandas", "text": ""}, {"location": "available_software/detail/pandas/#available-modules", "title": "Available modules", "text": "

The overview below shows which pandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pandas, load one of these modules using a module load command like:

                  module load pandas/1.1.2-foss-2020a-Python-3.8.2\n
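
Once the module is loaded, a minimal pandas sketch (the data is made up):

# build a small DataFrame and summarise it
import pandas as pd

df = pd.DataFrame({'sample': ['A', 'B', 'C'], 'value': [1.0, 2.5, 4.0]})
print(df.describe())                # summary statistics for the numeric column
print(df.groupby('sample').sum())   # per-sample totals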

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pandas/1.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel-fastq-dump/", "title": "parallel-fastq-dump", "text": ""}, {"location": "available_software/detail/parallel-fastq-dump/#available-modules", "title": "Available modules", "text": "

The overview below shows which parallel-fastq-dump installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using parallel-fastq-dump, load one of these modules using a module load command like:

                  module load parallel-fastq-dump/0.6.7-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parallel-fastq-dump/0.6.7-gompi-2022a x x x x x x parallel-fastq-dump/0.6.7-gompi-2020b - x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-SRA-Toolkit-3.0.0-Python-3.8.2 x x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel/", "title": "parallel", "text": ""}, {"location": "available_software/detail/parallel/#available-modules", "title": "Available modules", "text": "

The overview below shows which parallel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using parallel, load one of these modules using a module load command like:

                  module load parallel/20230722-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parallel/20230722-GCCcore-12.2.0 x x x x x x parallel/20220722-GCCcore-11.3.0 x x x x x x parallel/20210722-GCCcore-11.2.0 - x x x x x parallel/20210622-GCCcore-10.3.0 - x x x x x parallel/20210322-GCCcore-10.2.0 - x x x x x parallel/20200522-GCCcore-9.3.0 - x x - x x parallel/20190922-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/parasail/", "title": "parasail", "text": ""}, {"location": "available_software/detail/parasail/#available-modules", "title": "Available modules", "text": "

The overview below shows which parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using parasail, load one of these modules using a module load command like:

                  module load parasail/2.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parasail/2.6-GCC-11.3.0 x x x x x x parasail/2.5-GCC-11.2.0 x x x - x x parasail/2.4.3-GCC-10.3.0 x x x - x x parasail/2.4.3-GCC-10.2.0 - - x - x - parasail/2.4.2-iccifort-2020.1.217 - x x - x x parasail/2.4.1-intel-2019b - x x - x x parasail/2.4.1-foss-2019b - x - - - - parasail/2.4.1-GCC-8.3.0 - - x - x x"}, {"location": "available_software/detail/patchelf/", "title": "patchelf", "text": ""}, {"location": "available_software/detail/patchelf/#available-modules", "title": "Available modules", "text": "

The overview below shows which patchelf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using patchelf, load one of these modules using a module load command like:

                  module load patchelf/0.18.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty patchelf/0.18.0-GCCcore-13.2.0 x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x patchelf/0.17.2-GCCcore-12.2.0 x x x x x x patchelf/0.15.0-GCCcore-11.3.0 x x x x x x patchelf/0.13-GCCcore-11.2.0 x x x x x x patchelf/0.12-GCCcore-10.3.0 - x x - x x patchelf/0.12-GCCcore-9.3.0 - x x - x x patchelf/0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pauvre/", "title": "pauvre", "text": ""}, {"location": "available_software/detail/pauvre/#available-modules", "title": "Available modules", "text": "

The overview below shows which pauvre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pauvre, load one of these modules using a module load command like:

                  module load pauvre/0.1924-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pauvre/0.1924-intel-2020b - x x - x x pauvre/0.1923-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pblat/", "title": "pblat", "text": ""}, {"location": "available_software/detail/pblat/#available-modules", "title": "Available modules", "text": "

The overview below shows which pblat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pblat, load one of these modules using a module load command like:

                  module load pblat/2.5.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pblat/2.5.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/pdsh/", "title": "pdsh", "text": ""}, {"location": "available_software/detail/pdsh/#available-modules", "title": "Available modules", "text": "

The overview below shows which pdsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pdsh, load one of these modules using a module load command like:

                  module load pdsh/2.34-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pdsh/2.34-GCCcore-12.3.0 x x x x x x pdsh/2.34-GCCcore-12.2.0 x x x x x x pdsh/2.34-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/peakdetect/", "title": "peakdetect", "text": ""}, {"location": "available_software/detail/peakdetect/#available-modules", "title": "Available modules", "text": "

The overview below shows which peakdetect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using peakdetect, load one of these modules using a module load command like:

                  module load peakdetect/1.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty peakdetect/1.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/petsc4py/", "title": "petsc4py", "text": ""}, {"location": "available_software/detail/petsc4py/#available-modules", "title": "Available modules", "text": "

The overview below shows which petsc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using petsc4py, load one of these modules using a module load command like:

                  module load petsc4py/3.17.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty petsc4py/3.17.4-foss-2022a x x x x x x petsc4py/3.15.0-foss-2021a - x x - x x petsc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pftoolsV3/", "title": "pftoolsV3", "text": ""}, {"location": "available_software/detail/pftoolsV3/#available-modules", "title": "Available modules", "text": "

The overview below shows which pftoolsV3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pftoolsV3, load one of these modules using a module load command like:

                  module load pftoolsV3/3.2.11-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pftoolsV3/3.2.11-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phonemizer/", "title": "phonemizer", "text": ""}, {"location": "available_software/detail/phonemizer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phonemizer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using phonemizer, load one of these modules using a module load command like:

                  module load phonemizer/2.2.1-gompi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phonemizer/2.2.1-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/phonopy/", "title": "phonopy", "text": ""}, {"location": "available_software/detail/phonopy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phonopy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using phonopy, load one of these modules using a module load command like:

                  module load phonopy/2.7.1-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phonopy/2.7.1-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/phototonic/", "title": "phototonic", "text": ""}, {"location": "available_software/detail/phototonic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phototonic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using phototonic, load one of these modules using a module load command like:

                  module load phototonic/2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phototonic/2.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phyluce/", "title": "phyluce", "text": ""}, {"location": "available_software/detail/phyluce/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which phyluce installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using phyluce, load one of these modules using a module load command like:

                  module load phyluce/1.7.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phyluce/1.7.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/picard/", "title": "picard", "text": ""}, {"location": "available_software/detail/picard/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which picard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using picard, load one of these modules using a module load command like:

                  module load picard/2.25.1-Java-11\n
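
                  Picard is shipped as a Java archive. Assuming the module sets the usual EasyBuild $EBROOTPICARD variable pointing at the installation directory and that picard.jar sits directly under it (the BAM and metrics file names are placeholders), a run could look like:

                  java -jar $EBROOTPICARD/picard.jar MarkDuplicates I=input.bam O=dedup.bam M=dup_metrics.txt\n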

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty picard/2.25.1-Java-11 x x x x x x picard/2.25.0-Java-11 - x x x x x picard/2.21.6-Java-11 - x x - x x picard/2.21.1-Java-11 - - x - x x picard/2.18.27-Java-1.8 - - - - - x"}, {"location": "available_software/detail/pigz/", "title": "pigz", "text": ""}, {"location": "available_software/detail/pigz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pigz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pigz, load one of these modules using a module load command like:

                  module load pigz/2.8-GCCcore-12.3.0\n
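
                  pigz is a drop-in, multi-threaded replacement for gzip; for example, to compress a file with 8 threads (the file name is just a placeholder), producing reads.fastq.gz:

                  pigz -p 8 reads.fastq\n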

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pigz/2.8-GCCcore-12.3.0 x x x x x x pigz/2.7-GCCcore-11.3.0 x x x x x x pigz/2.6-GCCcore-11.2.0 x x x - x x pigz/2.6-GCCcore-10.2.0 - x x x x x pigz/2.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pixman/", "title": "pixman", "text": ""}, {"location": "available_software/detail/pixman/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pixman installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pixman, load one of these modules using a module load command like:

                  module load pixman/0.42.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pixman/0.42.2-GCCcore-12.3.0 x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x pixman/0.40.0-GCCcore-11.3.0 x x x x x x pixman/0.40.0-GCCcore-11.2.0 x x x x x x pixman/0.40.0-GCCcore-10.3.0 x x x x x x pixman/0.40.0-GCCcore-10.2.0 x x x x x x pixman/0.38.4-GCCcore-9.3.0 x x x x x x pixman/0.38.4-GCCcore-8.3.0 x x x - x x pixman/0.38.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/pkg-config/", "title": "pkg-config", "text": ""}, {"location": "available_software/detail/pkg-config/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pkg-config installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pkg-config, load one of these modules using a module load command like:

                  module load pkg-config/0.29.2-GCCcore-12.2.0\n
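
                  After loading, pkg-config can report compiler and linker flags for any library that provides a .pc file visible through your loaded modules; zlib below is only an illustrative package name:

                  pkg-config --cflags --libs zlib\n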

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkg-config/0.29.2-GCCcore-12.2.0 x x x x x x pkg-config/0.29.2-GCCcore-11.3.0 x x x x x x pkg-config/0.29.2-GCCcore-11.2.0 x x x x x x pkg-config/0.29.2-GCCcore-10.3.0 x x x x x x pkg-config/0.29.2-GCCcore-10.2.0 x x x x x x pkg-config/0.29.2-GCCcore-9.3.0 x x x x x x pkg-config/0.29.2-GCCcore-8.3.0 x x x - x x pkg-config/0.29.2-GCCcore-8.2.0 - x - - - - pkg-config/0.29.2 x x x - x x"}, {"location": "available_software/detail/pkgconf/", "title": "pkgconf", "text": ""}, {"location": "available_software/detail/pkgconf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pkgconf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pkgconf, load one of these modules using a module load command like:

                  module load pkgconf/2.0.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x pkgconf/1.8.0-GCCcore-11.3.0 x x x x x x pkgconf/1.8.0-GCCcore-11.2.0 x x x x x x pkgconf/1.8.0 x x x x x x"}, {"location": "available_software/detail/pkgconfig/", "title": "pkgconfig", "text": ""}, {"location": "available_software/detail/pkgconfig/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pkgconfig, load one of these modules using a module load command like:

                  module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.2.0-python x x x x x x pkgconfig/1.5.4-GCCcore-10.3.0-python x x x x x x pkgconfig/1.5.1-GCCcore-10.2.0-python x x x x x x pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x pkgconfig/1.5.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/plot1cell/", "title": "plot1cell", "text": ""}, {"location": "available_software/detail/plot1cell/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which plot1cell installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using plot1cell, load one of these modules using a module load command like:

                  module load plot1cell/0.0.1-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plot1cell/0.0.1-foss-2022b-R-4.2.2 x x x x x x plot1cell/0.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/plotly-orca/", "title": "plotly-orca", "text": ""}, {"location": "available_software/detail/plotly-orca/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which plotly-orca installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using plotly-orca, load one of these modules using a module load command like:

                  module load plotly-orca/1.3.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plotly-orca/1.3.1-GCCcore-10.2.0 - x x x x x plotly-orca/1.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/plotly.py/", "title": "plotly.py", "text": ""}, {"location": "available_software/detail/plotly.py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which plotly.py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using plotly.py, load one of these modules using a module load command like:

                  module load plotly.py/5.16.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plotly.py/5.16.0-GCCcore-12.3.0 x x x x x x plotly.py/5.13.1-GCCcore-12.2.0 x x x x x x plotly.py/5.12.0-GCCcore-11.3.0 x x x x x x plotly.py/5.10.0-GCCcore-11.3.0 x x x - x x plotly.py/5.4.0-GCCcore-11.2.0 x x x - x x plotly.py/5.1.0-GCCcore-10.3.0 x x x - x x plotly.py/4.14.3-GCCcore-10.2.0 - x x x x x plotly.py/4.8.1-GCCcore-9.3.0 - x x - x x plotly.py/4.4.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/pocl/", "title": "pocl", "text": ""}, {"location": "available_software/detail/pocl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pocl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pocl, load one of these modules using a module load command like:

                  module load pocl/4.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pocl/4.0-GCC-12.3.0 x x x x x x pocl/3.0-GCC-11.3.0 x x x - x x pocl/1.8-GCC-11.3.0-CUDA-11.7.0 x - - - x - pocl/1.8-GCC-11.3.0 x x x x x x pocl/1.8-GCC-11.2.0 x x x - x x pocl/1.6-gcccuda-2020b - - - - x - pocl/1.6-GCC-10.2.0 - x x x x x pocl/1.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/pod5-file-format/", "title": "pod5-file-format", "text": ""}, {"location": "available_software/detail/pod5-file-format/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pod5-file-format installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pod5-file-format, load one of these modules using a module load command like:

                  module load pod5-file-format/0.1.8-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pod5-file-format/0.1.8-foss-2022a x x x x x x"}, {"location": "available_software/detail/poetry/", "title": "poetry", "text": ""}, {"location": "available_software/detail/poetry/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which poetry installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using poetry, load one of these modules using a module load command like:

                  module load poetry/1.7.1-GCCcore-12.3.0\n
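
                  Once the module is loaded, you can use poetry inside a project directory that contains a pyproject.toml, for instance to install the dependencies declared there:

                  poetry install\n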

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty poetry/1.7.1-GCCcore-12.3.0 x x x x x x poetry/1.6.1-GCCcore-13.2.0 x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/polars/", "title": "polars", "text": ""}, {"location": "available_software/detail/polars/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which polars installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using polars, load one of these modules using a module load command like:

                  module load polars/0.15.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty polars/0.15.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/poppler/", "title": "poppler", "text": ""}, {"location": "available_software/detail/poppler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which poppler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using poppler, load one of these modules using a module load command like:

                  module load poppler/23.09.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty poppler/23.09.0-GCC-12.3.0 x x x x x x poppler/22.01.0-GCC-11.2.0 x x x - x x poppler/21.06.1-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/popscle/", "title": "popscle", "text": ""}, {"location": "available_software/detail/popscle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which popscle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using popscle, load one of these modules using a module load command like:

                  module load popscle/0.1-beta-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty popscle/0.1-beta-foss-2019b - x x - x x popscle/0.1-beta-20210505-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/porefoam/", "title": "porefoam", "text": ""}, {"location": "available_software/detail/porefoam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which porefoam installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using porefoam, load one of these modules using a module load command like:

                  module load porefoam/2021-09-21-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty porefoam/2021-09-21-foss-2020a - x x - x x"}, {"location": "available_software/detail/powerlaw/", "title": "powerlaw", "text": ""}, {"location": "available_software/detail/powerlaw/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which powerlaw installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using powerlaw, load one of these modules using a module load command like:

                  module load powerlaw/1.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty powerlaw/1.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/pplacer/", "title": "pplacer", "text": ""}, {"location": "available_software/detail/pplacer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pplacer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pplacer, load one of these modules using a module load command like:

                  module load pplacer/1.1.alpha19\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pplacer/1.1.alpha19 x x x x x x"}, {"location": "available_software/detail/preseq/", "title": "preseq", "text": ""}, {"location": "available_software/detail/preseq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which preseq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using preseq, load one of these modules using a module load command like:

                  module load preseq/3.2.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty preseq/3.2.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/presto/", "title": "presto", "text": ""}, {"location": "available_software/detail/presto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which presto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using presto, load one of these modules using a module load command like:

                  module load presto/1.0.0-20230501-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty presto/1.0.0-20230501-foss-2023a-R-4.3.2 x x x x x x presto/1.0.0-20230113-foss-2022a-R-4.2.1 x x x x x x presto/1.0.0-20200718-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/pretty-yaml/", "title": "pretty-yaml", "text": ""}, {"location": "available_software/detail/pretty-yaml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pretty-yaml installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pretty-yaml, load one of these modules using a module load command like:

                  module load pretty-yaml/21.10.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pretty-yaml/21.10.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/prodigal/", "title": "prodigal", "text": ""}, {"location": "available_software/detail/prodigal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which prodigal installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using prodigal, load one of these modules using a module load command like:

                  module load prodigal/2.6.3-GCCcore-12.3.0\n
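
                  As an illustration (input and output file names are placeholders), prodigal can predict genes in a genome assembly and write the protein translations alongside the gene coordinates:

                  prodigal -i genome.fna -o genes.gbk -a proteins.faa\n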

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty prodigal/2.6.3-GCCcore-12.3.0 x x x x x x prodigal/2.6.3-GCCcore-12.2.0 x x x x x x prodigal/2.6.3-GCCcore-11.3.0 x x x x x x prodigal/2.6.3-GCCcore-11.2.0 x x x x x x prodigal/2.6.3-GCCcore-10.2.0 x x x x x x prodigal/2.6.3-GCCcore-9.3.0 - x x - x x prodigal/2.6.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/prokka/", "title": "prokka", "text": ""}, {"location": "available_software/detail/prokka/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which prokka installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using prokka, load one of these modules using a module load command like:

                  module load prokka/1.14.5-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty prokka/1.14.5-gompi-2020b - x x x x x prokka/1.14.5-gompi-2019b - x x - x x"}, {"location": "available_software/detail/protobuf-python/", "title": "protobuf-python", "text": ""}, {"location": "available_software/detail/protobuf-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using protobuf-python, load one of these modules using a module load command like:

                  module load protobuf-python/4.24.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x protobuf-python/4.23.0-GCCcore-12.2.0 x x x x x x protobuf-python/3.19.4-GCCcore-11.3.0 x x x x x x protobuf-python/3.17.3-GCCcore-11.2.0 x x x x x x protobuf-python/3.17.3-GCCcore-10.3.0 x x x x x x protobuf-python/3.14.0-GCCcore-10.2.0 x x x x x x protobuf-python/3.13.0-foss-2020a-Python-3.8.2 - x x - x x protobuf-python/3.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/protobuf/", "title": "protobuf", "text": ""}, {"location": "available_software/detail/protobuf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which protobuf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using protobuf, load one of these modules using a module load command like:

                  module load protobuf/24.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty protobuf/24.0-GCCcore-12.3.0 x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x protobuf/3.19.4-GCCcore-11.3.0 x x x x x x protobuf/3.17.3-GCCcore-11.2.0 x x x x x x protobuf/3.17.3-GCCcore-10.3.0 x x x x x x protobuf/3.14.0-GCCcore-10.2.0 x x x x x x protobuf/3.13.0-GCCcore-9.3.0 - x x - x x protobuf/3.10.0-GCCcore-8.3.0 - x x - x x protobuf/2.5.0-GCCcore-10.2.0 - x x - x x protobuf/2.5.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/psutil/", "title": "psutil", "text": ""}, {"location": "available_software/detail/psutil/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which psutil installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using psutil, load one of these modules using a module load command like:

                  module load psutil/5.9.5-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty psutil/5.9.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/psycopg2/", "title": "psycopg2", "text": ""}, {"location": "available_software/detail/psycopg2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which psycopg2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using psycopg2, load one of these modules using a module load command like:

                  module load psycopg2/2.9.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty psycopg2/2.9.6-GCCcore-11.3.0 x x x x x x psycopg2/2.9.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pugixml/", "title": "pugixml", "text": ""}, {"location": "available_software/detail/pugixml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pugixml installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pugixml, load one of these modules using a module load command like:

                  module load pugixml/1.12.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pugixml/1.12.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pullseq/", "title": "pullseq", "text": ""}, {"location": "available_software/detail/pullseq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pullseq installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pullseq, load one of these modules using a module load command like:

                  module load pullseq/1.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pullseq/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/purge_dups/", "title": "purge_dups", "text": ""}, {"location": "available_software/detail/purge_dups/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which purge_dups installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using purge_dups, load one of these modules using a module load command like:

                  module load purge_dups/1.2.5-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty purge_dups/1.2.5-foss-2021b x x x - x x"}, {"location": "available_software/detail/pv/", "title": "pv", "text": ""}, {"location": "available_software/detail/pv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pv installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pv, load one of these modules using a module load command like:

                  module load pv/1.7.24-GCCcore-12.3.0\n
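
                  pv shows a progress bar for data moving through a pipe; a typical pattern (the archive name is a placeholder) is to monitor an extraction:

                  pv large_archive.tar.gz | tar xzf -\n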

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pv/1.7.24-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/py-cpuinfo/", "title": "py-cpuinfo", "text": ""}, {"location": "available_software/detail/py-cpuinfo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which py-cpuinfo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using py-cpuinfo, load one of these modules using a module load command like:

                  module load py-cpuinfo/9.0.0-GCCcore-12.2.0\n
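
                  With the module (and the Python it was built against) loaded, you could inspect the CPU of the node you are on via a one-liner, for example:

                  python -c "import cpuinfo; print(cpuinfo.get_cpu_info())"\n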

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty py-cpuinfo/9.0.0-GCCcore-12.2.0 x x x x x x py-cpuinfo/9.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/py3Dmol/", "title": "py3Dmol", "text": ""}, {"location": "available_software/detail/py3Dmol/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which py3Dmol installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using py3Dmol, load one of these modules using a module load command like:

                  module load py3Dmol/2.0.1.post1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty py3Dmol/2.0.1.post1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pyBigWig/", "title": "pyBigWig", "text": ""}, {"location": "available_software/detail/pyBigWig/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyBigWig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyBigWig, load one of these modules using a module load command like:

                  module load pyBigWig/0.3.18-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyBigWig/0.3.18-foss-2022a x x x x x x pyBigWig/0.3.18-foss-2021b x x x - x x pyBigWig/0.3.18-GCCcore-10.2.0 - x x x x x pyBigWig/0.3.17-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/pyEGA3/", "title": "pyEGA3", "text": ""}, {"location": "available_software/detail/pyEGA3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyEGA3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyEGA3, load one of these modules using a module load command like:

                  module load pyEGA3/5.0.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyEGA3/5.0.2-GCCcore-12.3.0 x x x x x x pyEGA3/4.0.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/pyGenomeTracks/", "title": "pyGenomeTracks", "text": ""}, {"location": "available_software/detail/pyGenomeTracks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyGenomeTracks installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyGenomeTracks, load one of these modules using a module load command like:

                  module load pyGenomeTracks/3.8-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyGenomeTracks/3.8-foss-2022a x x x x x x pyGenomeTracks/3.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/pySCENIC/", "title": "pySCENIC", "text": ""}, {"location": "available_software/detail/pySCENIC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pySCENIC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pySCENIC, load one of these modules using a module load command like:

                  module load pySCENIC/0.10.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pySCENIC/0.10.3-intel-2020a-Python-3.8.2 - x x - x x pySCENIC/0.10.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyWannier90/", "title": "pyWannier90", "text": ""}, {"location": "available_software/detail/pyWannier90/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyWannier90 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyWannier90, load one of these modules using a module load command like:

                  module load pyWannier90/2021-12-07-gomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyWannier90/2021-12-07-gomkl-2021a x x x - x x pyWannier90/2021-12-07-foss-2021a x x x - x x"}, {"location": "available_software/detail/pybedtools/", "title": "pybedtools", "text": ""}, {"location": "available_software/detail/pybedtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pybedtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pybedtools, load one of these modules using a module load command like:

                  module load pybedtools/0.9.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pybedtools/0.9.0-GCC-12.2.0 x x x x x x pybedtools/0.9.0-GCC-11.3.0 x x x x x x pybedtools/0.8.2-GCC-11.2.0-Python-2.7.18 x x x x x x pybedtools/0.8.2-GCC-11.2.0 x x x - x x pybedtools/0.8.2-GCC-10.2.0-Python-2.7.18 - x x x x x pybedtools/0.8.2-GCC-10.2.0 - x x x x x pybedtools/0.8.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/pybind11/", "title": "pybind11", "text": ""}, {"location": "available_software/detail/pybind11/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pybind11 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pybind11, load one of these modules using a module load command like:

                  module load pybind11/2.11.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pybind11/2.11.1-GCCcore-13.2.0 x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x pybind11/2.9.2-GCCcore-11.3.0 x x x x x x pybind11/2.7.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x pybind11/2.7.1-GCCcore-11.2.0 x x x x x x pybind11/2.6.2-GCCcore-10.3.0 x x x x x x pybind11/2.6.0-GCCcore-10.2.0 x x x x x x pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2 x x x x x x pybind11/2.4.3-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycocotools/", "title": "pycocotools", "text": ""}, {"location": "available_software/detail/pycocotools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pycocotools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pycocotools, load one of these modules using a module load command like:

                  module load pycocotools/2.0.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pycocotools/2.0.4-foss-2021a x x x - x x pycocotools/2.0.1-foss-2019b-Python-3.7.4 - x x - x x pycocotools/2.0.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycodestyle/", "title": "pycodestyle", "text": ""}, {"location": "available_software/detail/pycodestyle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pycodestyle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pycodestyle, load one of these modules using a module load command like:

                  module load pycodestyle/2.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pycodestyle/2.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/pydantic/", "title": "pydantic", "text": ""}, {"location": "available_software/detail/pydantic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pydantic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pydantic, load one of these modules using a module load command like:

                  module load pydantic/2.5.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydantic/2.5.3-GCCcore-12.3.0 x x x x x x pydantic/2.5.3-GCCcore-12.2.0 x x x x x x pydantic/1.10.13-GCCcore-12.3.0 x x x x x x pydantic/1.10.4-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/pydicom/", "title": "pydicom", "text": ""}, {"location": "available_software/detail/pydicom/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pydicom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pydicom, load one of these modules using a module load command like:

                  module load pydicom/2.3.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydicom/2.3.0-GCCcore-11.3.0 x x x x x x pydicom/2.2.2-GCCcore-10.3.0 x x x - x x pydicom/2.1.2-GCCcore-10.2.0 x x x x x x pydicom/1.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pydot/", "title": "pydot", "text": ""}, {"location": "available_software/detail/pydot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pydot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pydot, load one of these modules using a module load command like:

                  module load pydot/1.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydot/1.4.2-GCCcore-11.3.0 x x x x x x pydot/1.4.2-GCCcore-11.2.0 x x x x x x pydot/1.4.2-GCCcore-10.3.0 x x x x x x pydot/1.4.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/pyfaidx/", "title": "pyfaidx", "text": ""}, {"location": "available_software/detail/pyfaidx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyfaidx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyfaidx, load one of these modules using a module load command like:

                  module load pyfaidx/0.7.2.1-GCCcore-12.2.0\n
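
                  Besides the Python API, pyfaidx provides a faidx command-line tool; assuming a FASTA file and region of your own (the names below are placeholders), you could extract a subsequence with:

                  faidx genome.fa chr1:1-100\n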

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x pyfaidx/0.7.1-GCCcore-11.3.0 x x x x x x pyfaidx/0.7.0-GCCcore-11.2.0 x x x - x x pyfaidx/0.6.3.1-GCCcore-10.3.0 x x x - x x pyfaidx/0.5.9.5-GCCcore-10.2.0 - x x x x x pyfaidx/0.5.9.5-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyfasta/", "title": "pyfasta", "text": ""}, {"location": "available_software/detail/pyfasta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyfasta installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyfasta, load one of these modules using a module load command like:

                  module load pyfasta/0.5.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyfasta/0.5.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygmo/", "title": "pygmo", "text": ""}, {"location": "available_software/detail/pygmo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pygmo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pygmo, load one of these modules using a module load command like:

                  module load pygmo/2.16.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pygmo/2.16.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygraphviz/", "title": "pygraphviz", "text": ""}, {"location": "available_software/detail/pygraphviz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pygraphviz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pygraphviz, load one of these modules using a module load command like:

                  module load pygraphviz/1.11-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pygraphviz/1.11-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pyiron/", "title": "pyiron", "text": ""}, {"location": "available_software/detail/pyiron/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyiron installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyiron, load one of these modules using a module load command like:

                  module load pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2 x x x x x x pyiron/0.2.6-hpcugent-2022c-intel-2020a-Python-3.8.2 - - - - - x pyiron/0.2.6-hpcugent-2022b-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2022-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2021-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2020-intel-2020a-Python-3.8.2 - x x - x -"}, {"location": "available_software/detail/pymatgen/", "title": "pymatgen", "text": ""}, {"location": "available_software/detail/pymatgen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pymatgen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pymatgen, load one of these modules using a module load command like:

                  module load pymatgen/2022.9.21-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymatgen/2022.9.21-foss-2022a x x x - x x pymatgen/2022.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/pymbar/", "title": "pymbar", "text": ""}, {"location": "available_software/detail/pymbar/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pymbar installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pymbar, load one of these modules using a module load command like:

                  module load pymbar/3.0.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymbar/3.0.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pymca/", "title": "pymca", "text": ""}, {"location": "available_software/detail/pymca/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pymca installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pymca, load one of these modules using a module load command like:

                  module load pymca/5.6.3-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymca/5.6.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/pyobjcryst/", "title": "pyobjcryst", "text": ""}, {"location": "available_software/detail/pyobjcryst/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyobjcryst, load one of these modules using a module load command like:

                  module load pyobjcryst/2.2.1-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyobjcryst/2.2.1-intel-2020a-Python-3.8.2 - - - - - x pyobjcryst/2.2.1-foss-2021b x x x - x x pyobjcryst/2.1.0.post2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyodbc/", "title": "pyodbc", "text": ""}, {"location": "available_software/detail/pyodbc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyodbc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyodbc, load one of these modules using a module load command like:

                  module load pyodbc/4.0.39-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyodbc/4.0.39-foss-2022b x x x x x x"}, {"location": "available_software/detail/pyparsing/", "title": "pyparsing", "text": ""}, {"location": "available_software/detail/pyparsing/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyparsing installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyparsing, load one of these modules using a module load command like:

                  module load pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/pyproj/", "title": "pyproj", "text": ""}, {"location": "available_software/detail/pyproj/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyproj installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyproj, load one of these modules using a module load command like:

                  module load pyproj/3.6.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyproj/3.6.0-GCCcore-12.3.0 x x x x x x pyproj/3.5.0-GCCcore-12.2.0 x x x x x x pyproj/3.4.0-GCCcore-11.3.0 x x x x x x pyproj/3.3.1-GCCcore-11.2.0 x x x - x x pyproj/3.0.1-GCCcore-10.2.0 - x x x x x pyproj/2.6.1.post1-GCCcore-9.3.0-Python-3.8.2 - x x - x x pyproj/2.4.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyro-api/", "title": "pyro-api", "text": ""}, {"location": "available_software/detail/pyro-api/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyro-api installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyro-api, load one of these modules using a module load command like:

                  module load pyro-api/0.1.2-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyro-api/0.1.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pyro-ppl/", "title": "pyro-ppl", "text": ""}, {"location": "available_software/detail/pyro-ppl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyro-ppl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyro-ppl, load one of these modules using a module load command like:

                  module load pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0 x - x - x - pyro-ppl/1.8.4-foss-2022a x x x x x x pyro-ppl/1.5.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pysamstats/", "title": "pysamstats", "text": ""}, {"location": "available_software/detail/pysamstats/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pysamstats installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pysamstats, load one of these modules using a module load command like:

                  module load pysamstats/1.1.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pysamstats/1.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pysndfx/", "title": "pysndfx", "text": ""}, {"location": "available_software/detail/pysndfx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pysndfx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pysndfx, load one of these modules using a module load command like:

                  module load pysndfx/0.3.6-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pysndfx/0.3.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyspoa/", "title": "pyspoa", "text": ""}, {"location": "available_software/detail/pyspoa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pyspoa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pyspoa, load one of these modules using a module load command like:

                  module load pyspoa/0.0.9-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyspoa/0.0.9-GCC-11.3.0 x x x x x x pyspoa/0.0.8-GCC-11.2.0 x x x - x x pyspoa/0.0.8-GCC-10.3.0 x x x - x x pyspoa/0.0.8-GCC-10.2.0 - x x x x x pyspoa/0.0.4-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pytest-flakefinder/", "title": "pytest-flakefinder", "text": ""}, {"location": "available_software/detail/pytest-flakefinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pytest-flakefinder, load one of these modules using a module load command like:

                  module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pytest-rerunfailures/", "title": "pytest-rerunfailures", "text": ""}, {"location": "available_software/detail/pytest-rerunfailures/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pytest-rerunfailures, load one of these modules using a module load command like:

                  module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x pytest-rerunfailures/12.0-GCCcore-12.2.0 x x x x x x pytest-rerunfailures/11.1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-shard/", "title": "pytest-shard", "text": ""}, {"location": "available_software/detail/pytest-shard/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pytest-shard, load one of these modules using a module load command like:

                  module load pytest-shard/0.1.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x pytest-shard/0.1.2-GCCcore-12.2.0 x x x x x x pytest-shard/0.1.2-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-xdist/", "title": "pytest-xdist", "text": ""}, {"location": "available_software/detail/pytest-xdist/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which pytest-xdist installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pytest-xdist, load one of these modules using a module load command like:

                  module load pytest-xdist/3.3.1-GCCcore-12.3.0\n
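
                   Once loaded, pytest-xdist adds the -n option to pytest, which spreads a test run over several worker processes. A minimal sketch (it assumes pytest itself is also available in your environment, and tests/ is a hypothetical directory):

                   # run the test suite on 4 worker processes in parallel\nmodule load pytest-xdist/3.3.1-GCCcore-12.3.0\npytest -n 4 tests/\n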

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-xdist/3.3.1-GCCcore-12.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.2.0 x - x - x - pytest-xdist/2.3.0-GCCcore-10.3.0 x x x x x x pytest-xdist/2.3.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/pytest/", "title": "pytest", "text": ""}, {"location": "available_software/detail/pytest/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which pytest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pytest, load one of these modules using a module load command like:

                  module load pytest/7.4.2-GCCcore-12.3.0\n
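
                   Once the module is loaded, the pytest command is available. A minimal sketch (tests/ is a hypothetical directory containing your test files):

                   # verify the pytest version and run a test directory quietly\nmodule load pytest/7.4.2-GCCcore-12.3.0\npytest --version\npytest -q tests/\n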

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest/7.4.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pythermalcomfort/", "title": "pythermalcomfort", "text": ""}, {"location": "available_software/detail/pythermalcomfort/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which pythermalcomfort installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pythermalcomfort, load one of these modules using a module load command like:

                  module load pythermalcomfort/2.8.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pythermalcomfort/2.8.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-Levenshtein/", "title": "python-Levenshtein", "text": ""}, {"location": "available_software/detail/python-Levenshtein/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-Levenshtein, load one of these modules using a module load command like:

                  module load python-Levenshtein/0.12.1-foss-2020b\n
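
                   Note that, despite the module name, the package is imported as Levenshtein. A minimal sketch to check that the import works:

                   # compute an edit distance to confirm the package is usable\nmodule load python-Levenshtein/0.12.1-foss-2020b\npython -c "import Levenshtein; print(Levenshtein.distance('kitten', 'sitting'))"\n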

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-Levenshtein/0.12.1-foss-2020b - x x x x x python-Levenshtein/0.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-igraph/", "title": "python-igraph", "text": ""}, {"location": "available_software/detail/python-igraph/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-igraph installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-igraph, load one of these modules using a module load command like:

                  module load python-igraph/0.11.4-foss-2023a\n
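
                   Despite the module name, the package is imported as igraph. A minimal sketch to confirm the module works:

                   # print the igraph version that the module provides\nmodule load python-igraph/0.11.4-foss-2023a\npython -c "import igraph; print(igraph.__version__)"\n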

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-igraph/0.11.4-foss-2023a x x x x x x python-igraph/0.10.3-foss-2022a x x x x x x python-igraph/0.9.8-foss-2021b x x x x x x python-igraph/0.9.6-foss-2021a x x x x x x python-igraph/0.9.0-fosscuda-2020b - - - - x - python-igraph/0.9.0-foss-2020b - x x x x x python-igraph/0.8.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/python-irodsclient/", "title": "python-irodsclient", "text": ""}, {"location": "available_software/detail/python-irodsclient/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-irodsclient installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-irodsclient, load one of these modules using a module load command like:

                  module load python-irodsclient/1.1.4-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-irodsclient/1.1.4-GCCcore-11.2.0 x x x - x x python-irodsclient/1.1.4-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-isal/", "title": "python-isal", "text": ""}, {"location": "available_software/detail/python-isal/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-isal installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-isal, load one of these modules using a module load command like:

                  module load python-isal/1.1.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-isal/1.1.0-GCCcore-11.3.0 x x x x x x python-isal/0.11.1-GCCcore-11.2.0 x x x - x x python-isal/0.11.1-GCCcore-10.2.0 - x x x x x python-isal/0.11.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-louvain/", "title": "python-louvain", "text": ""}, {"location": "available_software/detail/python-louvain/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-louvain installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-louvain, load one of these modules using a module load command like:

                  module load python-louvain/0.16-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-louvain/0.16-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-parasail/", "title": "python-parasail", "text": ""}, {"location": "available_software/detail/python-parasail/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-parasail installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-parasail, load one of these modules using a module load command like:

                  module load python-parasail/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-parasail/1.3.3-foss-2022a x x x x x x python-parasail/1.2.4-fosscuda-2020b - - - - x - python-parasail/1.2.4-foss-2021b x x x - x x python-parasail/1.2.4-foss-2021a x x x - x x python-parasail/1.2.2-intel-2020a-Python-3.8.2 - x x - x x python-parasail/1.2-intel-2019b-Python-3.7.4 - x x - x x python-parasail/1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-telegram-bot/", "title": "python-telegram-bot", "text": ""}, {"location": "available_software/detail/python-telegram-bot/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-telegram-bot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-telegram-bot, load one of these modules using a module load command like:

                  module load python-telegram-bot/20.0a0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-telegram-bot/20.0a0-GCCcore-10.2.0 x x x - x x"}, {"location": "available_software/detail/python-weka-wrapper3/", "title": "python-weka-wrapper3", "text": ""}, {"location": "available_software/detail/python-weka-wrapper3/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which python-weka-wrapper3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using python-weka-wrapper3, load one of these modules using a module load command like:

                  module load python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pythran/", "title": "pythran", "text": ""}, {"location": "available_software/detail/pythran/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which pythran installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using pythran, load one of these modules using a module load command like:

                  module load pythran/0.9.4.post1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pythran/0.9.4.post1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qcat/", "title": "qcat", "text": ""}, {"location": "available_software/detail/qcat/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which qcat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using qcat, load one of these modules using a module load command like:

                  module load qcat/1.1.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty qcat/1.1.0-intel-2020a-Python-3.8.2 - x x - x x qcat/1.1.0-intel-2019b-Python-3.7.4 - x x - x x qcat/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qnorm/", "title": "qnorm", "text": ""}, {"location": "available_software/detail/qnorm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which qnorm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using qnorm, load one of these modules using a module load command like:

                  module load qnorm/0.8.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty qnorm/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/rMATS-turbo/", "title": "rMATS-turbo", "text": ""}, {"location": "available_software/detail/rMATS-turbo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rMATS-turbo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rMATS-turbo, load one of these modules using a module load command like:

                  module load rMATS-turbo/4.1.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rMATS-turbo/4.1.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/radian/", "title": "radian", "text": ""}, {"location": "available_software/detail/radian/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which radian installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using radian, load one of these modules using a module load command like:

                  module load radian/0.6.9-foss-2022b\n
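
                   radian is an alternative, Python-based console for R; after loading the module it is started with the radian command (a minimal sketch, assuming the R installation the module was built against is picked up automatically):

                   # start an interactive radian session (quit with q())\nmodule load radian/0.6.9-foss-2022b\nradian\n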

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty radian/0.6.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/rasterio/", "title": "rasterio", "text": ""}, {"location": "available_software/detail/rasterio/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rasterio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rasterio, load one of these modules using a module load command like:

                  module load rasterio/1.3.8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rasterio/1.3.8-foss-2022b x x x x x x rasterio/1.2.10-foss-2021b x x x - x x rasterio/1.1.7-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rasterstats/", "title": "rasterstats", "text": ""}, {"location": "available_software/detail/rasterstats/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rasterstats installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rasterstats, load one of these modules using a module load command like:

                  module load rasterstats/0.15.0-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rasterstats/0.15.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rclone/", "title": "rclone", "text": ""}, {"location": "available_software/detail/rclone/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rclone installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rclone, load one of these modules using a module load command like:

                  module load rclone/1.65.2\n
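
                   rclone is a command-line client for cloud and object storage. A minimal sketch (myremote is a hypothetical remote that must first be configured with rclone config):

                   # check the rclone version and list the contents of a configured remote\nmodule load rclone/1.65.2\nrclone version\nrclone ls myremote:\n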

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rclone/1.65.2 x x x x x x"}, {"location": "available_software/detail/re2c/", "title": "re2c", "text": ""}, {"location": "available_software/detail/re2c/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which re2c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using re2c, load one of these modules using a module load command like:

                  module load re2c/3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty re2c/3.1-GCCcore-12.3.0 x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x re2c/2.2-GCCcore-11.3.0 x x x x x x re2c/2.2-GCCcore-11.2.0 x x x x x x re2c/2.1.1-GCCcore-10.3.0 x x x x x x re2c/2.0.3-GCCcore-10.2.0 x x x x x x re2c/1.3-GCCcore-9.3.0 - x x - x x re2c/1.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/redis-py/", "title": "redis-py", "text": ""}, {"location": "available_software/detail/redis-py/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which redis-py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using redis-py, load one of these modules using a module load command like:

                  module load redis-py/4.5.1-foss-2022a\n
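
                   redis-py is imported as redis in Python code; a connection to a running server is then created with redis.Redis(host=..., port=...). A minimal sketch that only checks the installed client version:

                   # print the version of the Redis client library\nmodule load redis-py/4.5.1-foss-2022a\npython -c "import redis; print(redis.__version__)"\n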

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty redis-py/4.5.1-foss-2022a x x x x x x redis-py/4.3.3-foss-2021b x x x - x x redis-py/4.3.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/regionmask/", "title": "regionmask", "text": ""}, {"location": "available_software/detail/regionmask/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which regionmask installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using regionmask, load one of these modules using a module load command like:

                  module load regionmask/0.10.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty regionmask/0.10.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/request/", "title": "request", "text": ""}, {"location": "available_software/detail/request/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which request installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using request, load one of these modules using a module load command like:

                  module load request/2.88.1-fosscuda-2020b-nodejs-12.19.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty request/2.88.1-fosscuda-2020b-nodejs-12.19.0 - - - - x -"}, {"location": "available_software/detail/rethinking/", "title": "rethinking", "text": ""}, {"location": "available_software/detail/rethinking/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rethinking installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rethinking, load one of these modules using a module load command like:

                  module load rethinking/2.40-20230914-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rethinking/2.40-20230914-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/rgdal/", "title": "rgdal", "text": ""}, {"location": "available_software/detail/rgdal/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rgdal installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rgdal, load one of these modules using a module load command like:

                  module load rgdal/1.5-23-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rgdal/1.5-23-foss-2021a-R-4.1.0 - x x - x x rgdal/1.5-23-foss-2020b-R-4.0.4 - x x x x x rgdal/1.5-16-foss-2020a-R-4.0.0 - x x - x x rgdal/1.4-8-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rgeos/", "title": "rgeos", "text": ""}, {"location": "available_software/detail/rgeos/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rgeos installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rgeos, load one of these modules using a module load command like:

                  module load rgeos/0.5-5-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rgeos/0.5-5-foss-2021a-R-4.1.0 - x x - x x rgeos/0.5-5-foss-2020a-R-4.0.0 - x x - x x rgeos/0.5-2-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rickflow/", "title": "rickflow", "text": ""}, {"location": "available_software/detail/rickflow/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rickflow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rickflow, load one of these modules using a module load command like:

                  module load rickflow/0.7.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rickflow/0.7.0-intel-2019b-Python-3.7.4 - x x - x x rickflow/0.7.0-20200529-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rioxarray/", "title": "rioxarray", "text": ""}, {"location": "available_software/detail/rioxarray/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rioxarray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rioxarray, load one of these modules using a module load command like:

                  module load rioxarray/0.11.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rioxarray/0.11.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/rjags/", "title": "rjags", "text": ""}, {"location": "available_software/detail/rjags/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rjags installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rjags, load one of these modules using a module load command like:

                  module load rjags/4-13-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rjags/4-13-foss-2022a-R-4.2.1 x x x x x x rjags/4-13-foss-2021b-R-4.2.0 x x x - x x rjags/4-10-foss-2020b-R-4.0.3 x x x x x x"}, {"location": "available_software/detail/rmarkdown/", "title": "rmarkdown", "text": ""}, {"location": "available_software/detail/rmarkdown/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rmarkdown installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rmarkdown, load one of these modules using a module load command like:

                  module load rmarkdown/2.20-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rmarkdown/2.20-foss-2021a-R-4.1.0 - x x x x x"}, {"location": "available_software/detail/rpy2/", "title": "rpy2", "text": ""}, {"location": "available_software/detail/rpy2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rpy2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rpy2, load one of these modules using a module load command like:

                  module load rpy2/3.5.10-foss-2022a\n
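
                   rpy2 embeds R inside a Python process; a quick way to check that the Python-to-R bridge works is to evaluate a small piece of R code from Python (a minimal sketch):

                   # ask the embedded R interpreter for its version string\nmodule load rpy2/3.5.10-foss-2022a\npython -c "import rpy2.robjects as ro; print(ro.r('R.version.string')[0])"\n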

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rpy2/3.5.10-foss-2022a x x x x x x rpy2/3.4.5-foss-2021b x x x x x x rpy2/3.4.5-foss-2021a x x x x x x rpy2/3.2.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rstanarm/", "title": "rstanarm", "text": ""}, {"location": "available_software/detail/rstanarm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rstanarm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rstanarm, load one of these modules using a module load command like:

                  module load rstanarm/2.19.3-foss-2019b-R-3.6.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rstanarm/2.19.3-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rstudio/", "title": "rstudio", "text": ""}, {"location": "available_software/detail/rstudio/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which rstudio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using rstudio, load one of these modules using a module load command like:

                  module load rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0 - x - - - -"}, {"location": "available_software/detail/ruamel.yaml/", "title": "ruamel.yaml", "text": ""}, {"location": "available_software/detail/ruamel.yaml/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ruamel.yaml installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ruamel.yaml, load one of these modules using a module load command like:

                  module load ruamel.yaml/0.17.32-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ruamel.yaml/0.17.32-GCCcore-12.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ruffus/", "title": "ruffus", "text": ""}, {"location": "available_software/detail/ruffus/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ruffus installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ruffus, load one of these modules using a module load command like:

                  module load ruffus/2.8.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ruffus/2.8.4-foss-2021b x x x x x x"}, {"location": "available_software/detail/s3fs/", "title": "s3fs", "text": ""}, {"location": "available_software/detail/s3fs/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which s3fs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using s3fs, load one of these modules using a module load command like:

                  module load s3fs/2023.12.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty s3fs/2023.12.2-foss-2023a x x x x x x"}, {"location": "available_software/detail/samblaster/", "title": "samblaster", "text": ""}, {"location": "available_software/detail/samblaster/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which samblaster installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using samblaster, load one of these modules using a module load command like:

                  module load samblaster/0.1.26-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty samblaster/0.1.26-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/samclip/", "title": "samclip", "text": ""}, {"location": "available_software/detail/samclip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which samclip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using samclip, load one of these modules using a module load command like:

                  module load samclip/0.4.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty samclip/0.4.0-GCCcore-11.2.0 x x x - x x samclip/0.4.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/sansa/", "title": "sansa", "text": ""}, {"location": "available_software/detail/sansa/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sansa installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using sansa, load one of these modules using a module load command like:

                  module load sansa/0.0.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sansa/0.0.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/sbt/", "title": "sbt", "text": ""}, {"location": "available_software/detail/sbt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sbt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using sbt, load one of these modules using a module load command like:

                  module load sbt/1.3.13-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sbt/1.3.13-Java-1.8 - - x - x -"}, {"location": "available_software/detail/scArches/", "title": "scArches", "text": ""}, {"location": "available_software/detail/scArches/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scArches installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scArches, load one of these modules using a module load command like:

                  module load scArches/0.5.6-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scArches/0.5.6-foss-2021a-CUDA-11.3.1 x - - - x - scArches/0.5.6-foss-2021a x x x x x x"}, {"location": "available_software/detail/scCODA/", "title": "scCODA", "text": ""}, {"location": "available_software/detail/scCODA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scCODA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scCODA, load one of these modules using a module load command like:

                  module load scCODA/0.1.9-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scCODA/0.1.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/scGeneFit/", "title": "scGeneFit", "text": ""}, {"location": "available_software/detail/scGeneFit/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scGeneFit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scGeneFit, load one of these modules using a module load command like:

                  module load scGeneFit/1.0.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scGeneFit/1.0.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/scHiCExplorer/", "title": "scHiCExplorer", "text": ""}, {"location": "available_software/detail/scHiCExplorer/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scHiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scHiCExplorer, load one of these modules using a module load command like:

                  module load scHiCExplorer/7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scHiCExplorer/7-foss-2022a x x x x x x"}, {"location": "available_software/detail/scPred/", "title": "scPred", "text": ""}, {"location": "available_software/detail/scPred/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scPred installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scPred, load one of these modules using a module load command like:

                  module load scPred/1.9.2-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scPred/1.9.2-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/scVelo/", "title": "scVelo", "text": ""}, {"location": "available_software/detail/scVelo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scVelo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scVelo, load one of these modules using a module load command like:

                  module load scVelo/0.2.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scVelo/0.2.5-foss-2022a x x x x x x scVelo/0.2.3-foss-2021a - x x - x x scVelo/0.1.24-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scanpy/", "title": "scanpy", "text": ""}, {"location": "available_software/detail/scanpy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scanpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scanpy, load one of these modules using a module load command like:

                  module load scanpy/1.9.8-foss-2023a\n
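
                   scanpy is conventionally imported as sc. A minimal sketch to confirm the module works:

                   # print the scanpy version provided by the module\nmodule load scanpy/1.9.8-foss-2023a\npython -c "import scanpy as sc; print(sc.__version__)"\n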

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scanpy/1.9.8-foss-2023a x x x x x x scanpy/1.9.1-foss-2022a x x x x x x scanpy/1.9.1-foss-2021b x x x x x x scanpy/1.8.2-foss-2021b x x x x x x scanpy/1.8.1-foss-2021a x x x x x x scanpy/1.8.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/sceasy/", "title": "sceasy", "text": ""}, {"location": "available_software/detail/sceasy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sceasy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using sceasy, load one of these modules using a module load command like:

                  module load sceasy/0.0.7-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sceasy/0.0.7-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/scib-metrics/", "title": "scib-metrics", "text": ""}, {"location": "available_software/detail/scib-metrics/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scib-metrics installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scib-metrics, load one of these modules using a module load command like:

                  module load scib-metrics/0.3.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scib-metrics/0.3.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scib/", "title": "scib", "text": ""}, {"location": "available_software/detail/scib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scib, load one of these modules using a module load command like:

                  module load scib/1.1.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scib/1.1.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-bio/", "title": "scikit-bio", "text": ""}, {"location": "available_software/detail/scikit-bio/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scikit-bio installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scikit-bio, load one of these modules using a module load command like:

                  module load scikit-bio/0.5.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-bio/0.5.7-foss-2022a x x x x x x scikit-bio/0.5.7-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-build/", "title": "scikit-build", "text": ""}, {"location": "available_software/detail/scikit-build/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scikit-build installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scikit-build, load one of these modules using a module load command like:

                  module load scikit-build/0.17.6-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x scikit-build/0.17.2-GCCcore-12.2.0 x x x x x x scikit-build/0.15.0-GCCcore-11.3.0 x x x x x x scikit-build/0.11.1-fosscuda-2020b x - - - x - scikit-build/0.11.1-foss-2020b - x x x x x scikit-build/0.11.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/scikit-extremes/", "title": "scikit-extremes", "text": ""}, {"location": "available_software/detail/scikit-extremes/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scikit-extremes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scikit-extremes, load one of these modules using a module load command like:

                  module load scikit-extremes/2022.4.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-extremes/2022.4.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/scikit-image/", "title": "scikit-image", "text": ""}, {"location": "available_software/detail/scikit-image/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scikit-image installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scikit-image, load one of these modules using a module load command like:

                  module load scikit-image/0.19.3-foss-2022a\n
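
                   Note that scikit-image is imported as skimage. A minimal sketch to confirm the module works:

                   # print the scikit-image version provided by the module\nmodule load scikit-image/0.19.3-foss-2022a\npython -c "import skimage; print(skimage.__version__)"\n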

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-image/0.19.3-foss-2022a x x x x x x scikit-image/0.19.1-foss-2021b x x x x x x scikit-image/0.18.3-foss-2021a x x x - x x scikit-image/0.18.1-fosscuda-2020b x - - - x - scikit-image/0.18.1-foss-2020b - x x x x x scikit-image/0.17.1-foss-2020a-Python-3.8.2 - x x - x x scikit-image/0.16.2-intel-2019b-Python-3.7.4 - x x - x x scikit-image/0.16.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scikit-learn/", "title": "scikit-learn", "text": ""}, {"location": "available_software/detail/scikit-learn/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scikit-learn, load one of these modules using a module load command like:

                  module load scikit-learn/1.4.0-gfbf-2023b\n
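
                   Note that scikit-learn is imported as sklearn. A minimal sketch that prints the installed version together with its main dependencies:

                   # show the scikit-learn version and its build/runtime dependencies\nmodule load scikit-learn/1.4.0-gfbf-2023b\npython -c "import sklearn; sklearn.show_versions()"\n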

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-learn/1.4.0-gfbf-2023b x x x x x x scikit-learn/1.3.2-gfbf-2023b x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x scikit-learn/1.2.1-gfbf-2022b x x x x x x scikit-learn/1.1.2-intel-2022a x x x x x x scikit-learn/1.1.2-foss-2022a x x x x x x scikit-learn/1.0.1-intel-2021b x x x - x x scikit-learn/1.0.1-foss-2021b x x x x x x scikit-learn/0.24.2-foss-2021a x x x x x x scikit-learn/0.23.2-intel-2020b - x x - x x scikit-learn/0.23.2-fosscuda-2020b x - - - x - scikit-learn/0.23.2-foss-2020b - x x x x x scikit-learn/0.23.1-intel-2020a-Python-3.8.2 x x x x x x scikit-learn/0.23.1-foss-2020a-Python-3.8.2 - x x - x x scikit-learn/0.21.3-intel-2019b-Python-3.7.4 - x x - x x scikit-learn/0.21.3-foss-2019b-Python-3.7.4 x x x - x x scikit-learn/0.20.4-intel-2019b-Python-2.7.16 - x x - x x scikit-learn/0.20.4-foss-2021b-Python-2.7.18 x x x x x x scikit-learn/0.20.4-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/scikit-misc/", "title": "scikit-misc", "text": ""}, {"location": "available_software/detail/scikit-misc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scikit-misc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scikit-misc, load one of these modules using a module load command like:

                  module load scikit-misc/0.1.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-misc/0.1.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-optimize/", "title": "scikit-optimize", "text": ""}, {"location": "available_software/detail/scikit-optimize/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scikit-optimize installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scikit-optimize, load one of these modules using a module load command like:

                  module load scikit-optimize/0.9.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-optimize/0.9.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/scipy/", "title": "scipy", "text": ""}, {"location": "available_software/detail/scipy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scipy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scipy, load one of these modules using a module load command like:

                  module load scipy/1.4.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scipy/1.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scrublet/", "title": "scrublet", "text": ""}, {"location": "available_software/detail/scrublet/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scrublet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scrublet, load one of these modules using a module load command like:

                  module load scrublet/0.2.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scrublet/0.2.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/scvi-tools/", "title": "scvi-tools", "text": ""}, {"location": "available_software/detail/scvi-tools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which scvi-tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using scvi-tools, load one of these modules using a module load command like:

                  module load scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1 x - - - x - scvi-tools/0.16.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/segemehl/", "title": "segemehl", "text": ""}, {"location": "available_software/detail/segemehl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which segemehl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using segemehl, load one of these modules using a module load command like:

                  module load segemehl/0.3.4-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty segemehl/0.3.4-GCC-11.2.0 x x x x x x segemehl/0.3.4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/segmentation-models/", "title": "segmentation-models", "text": ""}, {"location": "available_software/detail/segmentation-models/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which segmentation-models installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using segmentation-models, load one of these modules using a module load command like:

                  module load segmentation-models/1.0.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty segmentation-models/1.0.1-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/semla/", "title": "semla", "text": ""}, {"location": "available_software/detail/semla/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which semla installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using semla, load one of these modules using a module load command like:

                  module load semla/1.1.6-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty semla/1.1.6-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/seqtk/", "title": "seqtk", "text": ""}, {"location": "available_software/detail/seqtk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which seqtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using seqtk, load one of these modules using a module load command like:

                  module load seqtk/1.4-GCC-12.3.0\n
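
                   seqtk is a command-line toolkit for FASTA/FASTQ files; a common use is converting FASTQ to FASTA (a minimal sketch; reads.fq and reads.fa are hypothetical file names):

                   # convert a FASTQ file to FASTA\nmodule load seqtk/1.4-GCC-12.3.0\nseqtk seq -a reads.fq > reads.fa\n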

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty seqtk/1.4-GCC-12.3.0 x x x x x x seqtk/1.3-GCC-11.2.0 x x x - x x seqtk/1.3-GCC-10.2.0 - x x x x x seqtk/1.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/setuptools-rust/", "title": "setuptools-rust", "text": ""}, {"location": "available_software/detail/setuptools-rust/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using setuptools-rust, load one of these modules using a module load command like:

                  module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/setuptools/", "title": "setuptools", "text": ""}, {"location": "available_software/detail/setuptools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which setuptools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using setuptools, load one of these modules using a module load command like:

                  module load setuptools/64.0.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty setuptools/64.0.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/sf/", "title": "sf", "text": ""}, {"location": "available_software/detail/sf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using sf, load one of these modules using a module load command like:

                  module load sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/shovill/", "title": "shovill", "text": ""}, {"location": "available_software/detail/shovill/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which shovill installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using shovill, load one of these modules using a module load command like:

                  module load shovill/1.1.0-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty shovill/1.1.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/silhouetteRank/", "title": "silhouetteRank", "text": ""}, {"location": "available_software/detail/silhouetteRank/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which silhouetteRank installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using silhouetteRank, load one of these modules using a module load command like:

                  module load silhouetteRank/1.0.5.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty silhouetteRank/1.0.5.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/silx/", "title": "silx", "text": ""}, {"location": "available_software/detail/silx/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which silx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using silx, load one of these modules using a module load command like:

                  module load silx/0.14.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty silx/0.14.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/slepc4py/", "title": "slepc4py", "text": ""}, {"location": "available_software/detail/slepc4py/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which slepc4py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using slepc4py, load one of these modules using a module load command like:

                  module load slepc4py/3.17.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slepc4py/3.17.2-foss-2022a x x x x x x slepc4py/3.15.1-foss-2021a - x x - x x slepc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/slow5tools/", "title": "slow5tools", "text": ""}, {"location": "available_software/detail/slow5tools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which slow5tools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using slow5tools, load one of these modules using a module load command like:

                  module load slow5tools/0.4.0-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slow5tools/0.4.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/slurm-drmaa/", "title": "slurm-drmaa", "text": ""}, {"location": "available_software/detail/slurm-drmaa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which slurm-drmaa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using slurm-drmaa, load one of these modules using a module load command like:

                  module load slurm-drmaa/1.1.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slurm-drmaa/1.1.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/smfishHmrf/", "title": "smfishHmrf", "text": ""}, {"location": "available_software/detail/smfishHmrf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which smfishHmrf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smfishHmrf, load one of these modules using a module load command like:

                  module load smfishHmrf/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smfishHmrf/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/smithwaterman/", "title": "smithwaterman", "text": ""}, {"location": "available_software/detail/smithwaterman/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which smithwaterman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smithwaterman, load one of these modules using a module load command like:

                  module load smithwaterman/20160702-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smithwaterman/20160702-GCCcore-11.3.0 x x x x x x smithwaterman/20160702-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/smooth-topk/", "title": "smooth-topk", "text": ""}, {"location": "available_software/detail/smooth-topk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which smooth-topk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smooth-topk, load one of these modules using a module load command like:

                  module load smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1 x - - - x - smooth-topk/1.0-20210817-foss-2021a - x x - x x"}, {"location": "available_software/detail/snakemake/", "title": "snakemake", "text": ""}, {"location": "available_software/detail/snakemake/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which snakemake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snakemake, load one of these modules using a module load command like:

                  module load snakemake/8.4.2-foss-2023a\n
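
                  As a quick, hypothetical illustration (not part of the generated overview): once a snakemake module is loaded, you could dry-run a workflow from the directory containing your Snakefile with something along these lines, where the core count is a placeholder:

                  module load snakemake/8.4.2-foss-2023a\nsnakemake --cores 4 --dry-run\n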

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snakemake/8.4.2-foss-2023a x x x x x x snakemake/7.32.3-foss-2022b x x x x x x snakemake/7.22.0-foss-2022a x x x x x x snakemake/7.18.2-foss-2021b x x x - x x snakemake/6.10.0-foss-2021b x x x - x x snakemake/6.1.0-foss-2020b - x x x x x snakemake/5.26.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/snappy/", "title": "snappy", "text": ""}, {"location": "available_software/detail/snappy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which snappy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snappy, load one of these modules using a module load command like:

                  module load snappy/1.1.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snappy/1.1.10-GCCcore-12.3.0 x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x snappy/1.1.9-GCCcore-11.3.0 x x x x x x snappy/1.1.9-GCCcore-11.2.0 x x x x x x snappy/1.1.8-GCCcore-10.3.0 x x x x x x snappy/1.1.8-GCCcore-10.2.0 x x x x x x snappy/1.1.8-GCCcore-9.3.0 - x x - x x snappy/1.1.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/snippy/", "title": "snippy", "text": ""}, {"location": "available_software/detail/snippy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which snippy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snippy, load one of these modules using a module load command like:

                  module load snippy/4.6.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snippy/4.6.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/snp-sites/", "title": "snp-sites", "text": ""}, {"location": "available_software/detail/snp-sites/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which snp-sites installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snp-sites, load one of these modules using a module load command like:

                  module load snp-sites/2.5.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snp-sites/2.5.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/snpEff/", "title": "snpEff", "text": ""}, {"location": "available_software/detail/snpEff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which snpEff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snpEff, load one of these modules using a module load command like:

                  module load snpEff/5.0e-GCCcore-10.2.0-Java-13\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snpEff/5.0e-GCCcore-10.2.0-Java-13 - x x - x x"}, {"location": "available_software/detail/solo/", "title": "solo", "text": ""}, {"location": "available_software/detail/solo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which solo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using solo, load one of these modules using a module load command like:

                  module load solo/1.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty solo/1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/sonic/", "title": "sonic", "text": ""}, {"location": "available_software/detail/sonic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sonic, load one of these modules using a module load command like:

                  module load sonic/20180202-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sonic/20180202-gompi-2020a - x x - x x"}, {"location": "available_software/detail/spaCy/", "title": "spaCy", "text": ""}, {"location": "available_software/detail/spaCy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which spaCy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spaCy, load one of these modules using a module load command like:

                  module load spaCy/3.4.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spaCy/3.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/spaln/", "title": "spaln", "text": ""}, {"location": "available_software/detail/spaln/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which spaln installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spaln, load one of these modules using a module load command like:

                  module load spaln/2.4.13f-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spaln/2.4.13f-GCC-11.3.0 x x x x x x spaln/2.4.12-GCC-11.2.0 x x x x x x spaln/2.4.12-GCC-10.2.0 x x x x x x spaln/2.4.03-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/sparse-neighbors-search/", "title": "sparse-neighbors-search", "text": ""}, {"location": "available_software/detail/sparse-neighbors-search/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sparse-neighbors-search installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sparse-neighbors-search, load one of these modules using a module load command like:

                  module load sparse-neighbors-search/0.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sparse-neighbors-search/0.7-foss-2022a x x x x x x"}, {"location": "available_software/detail/sparsehash/", "title": "sparsehash", "text": ""}, {"location": "available_software/detail/sparsehash/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sparsehash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sparsehash, load one of these modules using a module load command like:

                  module load sparsehash/2.0.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sparsehash/2.0.4-GCCcore-12.3.0 x x x x x x sparsehash/2.0.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/spatialreg/", "title": "spatialreg", "text": ""}, {"location": "available_software/detail/spatialreg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which spatialreg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spatialreg, load one of these modules using a module load command like:

                  module load spatialreg/1.1-8-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spatialreg/1.1-8-foss-2021a-R-4.1.0 - x x - x x spatialreg/1.1-5-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/speech_tools/", "title": "speech_tools", "text": ""}, {"location": "available_software/detail/speech_tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which speech_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using speech_tools, load one of these modules using a module load command like:

                  module load speech_tools/2.5.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty speech_tools/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/spglib-python/", "title": "spglib-python", "text": ""}, {"location": "available_software/detail/spglib-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which spglib-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spglib-python, load one of these modules using a module load command like:

                  module load spglib-python/2.0.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spglib-python/2.0.0-intel-2022a x x x x x x spglib-python/2.0.0-foss-2022a x x x x x x spglib-python/1.16.3-intel-2021b x x x - x x spglib-python/1.16.3-foss-2021b x x x - x x spglib-python/1.16.1-gomkl-2021a x x x x x x spglib-python/1.16.0-intel-2020a-Python-3.8.2 x x x x x x spglib-python/1.16.0-fosscuda-2020b - - - - x - spglib-python/1.16.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/spoa/", "title": "spoa", "text": ""}, {"location": "available_software/detail/spoa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which spoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spoa, load one of these modules using a module load command like:

                  module load spoa/4.0.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spoa/4.0.7-GCC-11.3.0 x x x x x x spoa/4.0.7-GCC-11.2.0 x x x - x x spoa/4.0.7-GCC-10.3.0 x x x - x x spoa/4.0.7-GCC-10.2.0 - x x x x x spoa/4.0.0-GCC-8.3.0 - x x - x x spoa/3.4.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/stardist/", "title": "stardist", "text": ""}, {"location": "available_software/detail/stardist/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which stardist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using stardist, load one of these modules using a module load command like:

                  module load stardist/0.8.3-foss-2021b-CUDA-11.4.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty stardist/0.8.3-foss-2021b-CUDA-11.4.1 x - - - x - stardist/0.8.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/stars/", "title": "stars", "text": ""}, {"location": "available_software/detail/stars/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which stars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using stars, load one of these modules using a module load command like:

                  module load stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/statsmodels/", "title": "statsmodels", "text": ""}, {"location": "available_software/detail/statsmodels/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which statsmodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using statsmodels, load one of these modules using a module load command like:

                  module load statsmodels/0.14.1-gfbf-2023a\n
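
                  As an illustrative sanity check (not generated data), assuming the module's Python dependency ends up on your PATH, you could confirm which statsmodels version is picked up:

                  module load statsmodels/0.14.1-gfbf-2023a\npython -c 'import statsmodels; print(statsmodels.__version__)'\n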

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty statsmodels/0.14.1-gfbf-2023a x x x x x x statsmodels/0.14.0-gfbf-2022b x x x x x x statsmodels/0.13.1-intel-2021b x x x - x x statsmodels/0.13.1-foss-2022a x x x x x x statsmodels/0.13.1-foss-2021b x x x x x x statsmodels/0.12.2-foss-2021a x x x x x x statsmodels/0.12.1-intel-2020b - x x - x x statsmodels/0.12.1-fosscuda-2020b - - - - x - statsmodels/0.12.1-foss-2020b - x x x x x statsmodels/0.11.1-intel-2020a-Python-3.8.2 - x x - x x statsmodels/0.11.0-intel-2019b-Python-3.7.4 - x x - x x statsmodels/0.11.0-foss-2019b-Python-3.7.4 - x x - x x statsmodels/0.9.0-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/suave/", "title": "suave", "text": ""}, {"location": "available_software/detail/suave/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which suave installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using suave, load one of these modules using a module load command like:

                  module load suave/20160529-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty suave/20160529-foss-2020b - x x x x x"}, {"location": "available_software/detail/supernova/", "title": "supernova", "text": ""}, {"location": "available_software/detail/supernova/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which supernova installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using supernova, load one of these modules using a module load command like:

                  module load supernova/2.0.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty supernova/2.0.1 - - - - - x"}, {"location": "available_software/detail/swissknife/", "title": "swissknife", "text": ""}, {"location": "available_software/detail/swissknife/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which swissknife installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using swissknife, load one of these modules using a module load command like:

                  module load swissknife/1.80-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty swissknife/1.80-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/sympy/", "title": "sympy", "text": ""}, {"location": "available_software/detail/sympy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sympy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sympy, load one of these modules using a module load command like:

                  module load sympy/1.12-gfbf-2023a\n
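
                  A minimal, illustrative check (not part of the generated overview) that the loaded sympy module does symbolic arithmetic, assuming the module's Python dependency is on your PATH; this should print 2*sqrt(2):

                  module load sympy/1.12-gfbf-2023a\npython -c 'import sympy; print(sympy.sqrt(8))'\n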

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sympy/1.12-gfbf-2023a x x x x x x sympy/1.12-gfbf-2022b x x x x x x sympy/1.11.1-intel-2022a x x x x x x sympy/1.11.1-foss-2022a x x x - x x sympy/1.10.1-intel-2022a x x x x x x sympy/1.10.1-foss-2022a x x x - x x sympy/1.9-intel-2021b x x x x x x sympy/1.9-foss-2021b x x x - x x sympy/1.7.1-foss-2020b - x x x x x sympy/1.6.2-foss-2020a-Python-3.8.2 - x x - x x sympy/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/synapseclient/", "title": "synapseclient", "text": ""}, {"location": "available_software/detail/synapseclient/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which synapseclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using synapseclient, load one of these modules using a module load command like:

                  module load synapseclient/3.0.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty synapseclient/3.0.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/synthcity/", "title": "synthcity", "text": ""}, {"location": "available_software/detail/synthcity/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which synthcity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using synthcity, load one of these modules using a module load command like:

                  module load synthcity/0.2.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty synthcity/0.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/tMAE/", "title": "tMAE", "text": ""}, {"location": "available_software/detail/tMAE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tMAE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tMAE, load one of these modules using a module load command like:

                  module load tMAE/1.0.0-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tMAE/1.0.0-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/tabixpp/", "title": "tabixpp", "text": ""}, {"location": "available_software/detail/tabixpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tabixpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tabixpp, load one of these modules using a module load command like:

                  module load tabixpp/1.1.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tabixpp/1.1.2-GCC-11.3.0 x x x x x x tabixpp/1.1.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/task-spooler/", "title": "task-spooler", "text": ""}, {"location": "available_software/detail/task-spooler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which task-spooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using task-spooler, load one of these modules using a module load command like:

                  module load task-spooler/1.0.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty task-spooler/1.0.2-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/taxator-tk/", "title": "taxator-tk", "text": ""}, {"location": "available_software/detail/taxator-tk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which taxator-tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using taxator-tk, load one of these modules using a module load command like:

                  module load taxator-tk/1.3.3-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty taxator-tk/1.3.3-gompi-2020b - x - - - - taxator-tk/1.3.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/tbb/", "title": "tbb", "text": ""}, {"location": "available_software/detail/tbb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tbb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tbb, load one of these modules using a module load command like:

                  module load tbb/2021.5.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tbb/2021.5.0-GCCcore-11.3.0 x x x x x x tbb/2020.3-GCCcore-11.2.0 x x x x x x tbb/2020.3-GCCcore-10.3.0 - x x - x x tbb/2020.3-GCCcore-10.2.0 - x x x x x tbb/2020.1-GCCcore-9.3.0 - x x - x x tbb/2019_U9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tbl2asn/", "title": "tbl2asn", "text": ""}, {"location": "available_software/detail/tbl2asn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tbl2asn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tbl2asn, load one of these modules using a module load command like:

                  module load tbl2asn/20220427-linux64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tbl2asn/20220427-linux64 - x x x x x tbl2asn/25.8-linux64 - - - - - x"}, {"location": "available_software/detail/tcsh/", "title": "tcsh", "text": ""}, {"location": "available_software/detail/tcsh/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tcsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tcsh, load one of these modules using a module load command like:

                  module load tcsh/6.24.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tcsh/6.24.10-GCCcore-12.3.0 x x x x x x tcsh/6.22.04-GCCcore-10.3.0 x - - - x - tcsh/6.22.03-GCCcore-10.2.0 - x x x x x tcsh/6.22.02-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tensorboard/", "title": "tensorboard", "text": ""}, {"location": "available_software/detail/tensorboard/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tensorboard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorboard, load one of these modules using a module load command like:

                  module load tensorboard/2.10.0-foss-2022a\n
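
                  An illustrative sketch (the log directory and port are placeholders, not generated data): after loading the module you could point TensorBoard at a directory of event files with:

                  module load tensorboard/2.10.0-foss-2022a\ntensorboard --logdir ./runs --port 6006\n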

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorboard/2.10.0-foss-2022a x x x x x x tensorboard/2.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/tensorboardX/", "title": "tensorboardX", "text": ""}, {"location": "available_software/detail/tensorboardX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tensorboardX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorboardX, load one of these modules using a module load command like:

                  module load tensorboardX/2.6.2.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorboardX/2.6.2.2-foss-2023a x x x x x x tensorboardX/2.6.2.2-foss-2022b x x x x x x tensorboardX/2.5.1-foss-2022a x x x x x x tensorboardX/2.2-fosscuda-2020b-PyTorch-1.7.1 - - - - x - tensorboardX/2.2-foss-2020b-PyTorch-1.7.1 - x x x x x tensorboardX/2.1-fosscuda-2020b-PyTorch-1.7.1 - - - - x -"}, {"location": "available_software/detail/tensorflow-probability/", "title": "tensorflow-probability", "text": ""}, {"location": "available_software/detail/tensorflow-probability/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tensorflow-probability installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorflow-probability, load one of these modules using a module load command like:

                  module load tensorflow-probability/0.19.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorflow-probability/0.19.0-foss-2022a x x x x x x tensorflow-probability/0.14.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/texinfo/", "title": "texinfo", "text": ""}, {"location": "available_software/detail/texinfo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which texinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using texinfo, load one of these modules using a module load command like:

                  module load texinfo/6.7-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty texinfo/6.7-GCCcore-9.3.0 - x x - x x texinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/texlive/", "title": "texlive", "text": ""}, {"location": "available_software/detail/texlive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which texlive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using texlive, load one of these modules using a module load command like:

                  module load texlive/20230313-GCC-12.3.0\n
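
                  A hypothetical usage sketch (main.tex is a placeholder for your own document): after loading the texlive module you could compile a LaTeX document with:

                  module load texlive/20230313-GCC-12.3.0\npdflatex main.tex\n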

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty texlive/20230313-GCC-12.3.0 x x x x x x texlive/20210324-GCC-11.2.0 - x x - x x"}, {"location": "available_software/detail/tidymodels/", "title": "tidymodels", "text": ""}, {"location": "available_software/detail/tidymodels/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tidymodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tidymodels, load one of these modules using a module load command like:

                  module load tidymodels/1.1.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tidymodels/1.1.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/time/", "title": "time", "text": ""}, {"location": "available_software/detail/time/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which time installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using time, load one of these modules using a module load command like:

                  module load time/1.9-GCCcore-10.2.0\n
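
                  A hedged usage sketch (./my_program is a placeholder): GNU time can report detailed resource usage with -v; the command prefix is used so the external time binary is invoked rather than the shell's built-in time:

                  module load time/1.9-GCCcore-10.2.0\ncommand time -v ./my_program\n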

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty time/1.9-GCCcore-10.2.0 - x x x x x time/1.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/timm/", "title": "timm", "text": ""}, {"location": "available_software/detail/timm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which timm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using timm, load one of these modules using a module load command like:

                  module load timm/0.9.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty timm/0.9.2-foss-2022a-CUDA-11.7.0 x - - - x - timm/0.6.13-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/tmux/", "title": "tmux", "text": ""}, {"location": "available_software/detail/tmux/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tmux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tmux, load one of these modules using a module load command like:

                  module load tmux/3.2a\n
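
                  A brief illustrative example (the session name is arbitrary): after loading the module you could start a named tmux session, detach from it, and re-attach later:

                  module load tmux/3.2a\ntmux new -s mysession\n# detach with Ctrl-b d, then re-attach with:\ntmux attach -t mysession\n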

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tmux/3.2a - x x - x x"}, {"location": "available_software/detail/tokenizers/", "title": "tokenizers", "text": ""}, {"location": "available_software/detail/tokenizers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tokenizers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tokenizers, load one of these modules using a module load command like:

                  module load tokenizers/0.13.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tokenizers/0.13.3-GCCcore-12.2.0 x x x x x x tokenizers/0.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/torchaudio/", "title": "torchaudio", "text": ""}, {"location": "available_software/detail/torchaudio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which torchaudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchaudio, load one of these modules using a module load command like:

                  module load torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0 x - x - x - torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchtext/", "title": "torchtext", "text": ""}, {"location": "available_software/detail/torchtext/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which torchtext installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchtext, load one of these modules using a module load command like:

                  module load torchtext/0.14.1-foss-2022a-PyTorch-1.12.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchtext/0.14.1-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchvf/", "title": "torchvf", "text": ""}, {"location": "available_software/detail/torchvf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which torchvf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchvf, load one of these modules using a module load command like:

                  module load torchvf/0.1.3-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchvf/0.1.3-foss-2022a-CUDA-11.7.0 x - - - x - torchvf/0.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/torchvision/", "title": "torchvision", "text": ""}, {"location": "available_software/detail/torchvision/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which torchvision installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchvision, load one of these modules using a module load command like:

                  module load torchvision/0.14.1-foss-2022b\n
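
                  A hedged sanity check (illustrative only): picking one of the CUDA-enabled installations from the overview below, which is expected to pull in a matching PyTorch as a dependency, you could verify the version and GPU visibility on a GPU node with:

                  module load torchvision/0.13.1-foss-2022a-CUDA-11.7.0\npython -c 'import torch, torchvision; print(torchvision.__version__, torch.cuda.is_available())'\n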

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchvision/0.14.1-foss-2022b x x x x x x torchvision/0.13.1-foss-2022a-CUDA-11.7.0 x - x - x - torchvision/0.13.1-foss-2022a x x x x x x torchvision/0.11.3-foss-2021a - x x - x x torchvision/0.11.1-foss-2021a-CUDA-11.3.1 x - - - x - torchvision/0.11.1-foss-2021a - x x - x x torchvision/0.8.2-fosscuda-2020b-PyTorch-1.7.1 x - - - x - torchvision/0.8.2-foss-2020b-PyTorch-1.7.1 - x x x x x torchvision/0.7.0-foss-2019b-Python-3.7.4-PyTorch-1.6.0 - - x - x x"}, {"location": "available_software/detail/tornado/", "title": "tornado", "text": ""}, {"location": "available_software/detail/tornado/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tornado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tornado, load one of these modules using a module load command like:

                  module load tornado/6.3.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tornado/6.3.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/tqdm/", "title": "tqdm", "text": ""}, {"location": "available_software/detail/tqdm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tqdm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tqdm, load one of these modules using a module load command like:

                  module load tqdm/4.66.1-GCCcore-12.3.0\n
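
                  As an illustrative one-liner (not generated data), assuming the module's Python dependency is on your PATH, you could confirm that tqdm renders a progress bar:

                  module load tqdm/4.66.1-GCCcore-12.3.0\npython -c 'import time; from tqdm import tqdm; [time.sleep(0.01) for _ in tqdm(range(100))]'\n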

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tqdm/4.66.1-GCCcore-12.3.0 x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x tqdm/4.64.0-GCCcore-11.3.0 x x x x x x tqdm/4.62.3-GCCcore-11.2.0 x x x x x x tqdm/4.61.2-GCCcore-10.3.0 x x x x x x tqdm/4.60.0-GCCcore-10.2.0 - x x - x x tqdm/4.56.2-GCCcore-10.2.0 x x x x x x tqdm/4.47.0-GCCcore-9.3.0 x x x x x x tqdm/4.41.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/treatSens/", "title": "treatSens", "text": ""}, {"location": "available_software/detail/treatSens/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which treatSens installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using treatSens, load one of these modules using a module load command like:

                  module load treatSens/3.0-20201002-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty treatSens/3.0-20201002-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/trimAl/", "title": "trimAl", "text": ""}, {"location": "available_software/detail/trimAl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which trimAl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using trimAl, load one of these modules using a module load command like:

                  module load trimAl/1.4.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty trimAl/1.4.1-GCCcore-12.3.0 x x x x x x trimAl/1.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/tsne/", "title": "tsne", "text": ""}, {"location": "available_software/detail/tsne/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which tsne installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tsne, load one of these modules using a module load command like:

                  module load tsne/0.1.8-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tsne/0.1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/typing-extensions/", "title": "typing-extensions", "text": ""}, {"location": "available_software/detail/typing-extensions/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using typing-extensions, load one of these modules using a module load command like:

                  module load typing-extensions/4.9.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.9.0-GCCcore-12.2.0 x x x x x x typing-extensions/4.8.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.3.0-GCCcore-11.3.0 x x x x x x typing-extensions/3.10.0.2-GCCcore-11.2.0 x x x x x x typing-extensions/3.10.0.0-GCCcore-10.3.0 x x x x x x typing-extensions/3.7.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/umap-learn/", "title": "umap-learn", "text": ""}, {"location": "available_software/detail/umap-learn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which umap-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using umap-learn, load one of these modules using a module load command like:

                  module load umap-learn/0.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty umap-learn/0.5.5-foss-2023a x x x x x x umap-learn/0.5.3-foss-2022a x x x x x x umap-learn/0.5.3-foss-2021a x x x x x x umap-learn/0.4.6-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/umi4cPackage/", "title": "umi4cPackage", "text": ""}, {"location": "available_software/detail/umi4cPackage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which umi4cPackage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using umi4cPackage, load one of these modules using a module load command like:

                  module load umi4cPackage/20200116-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty umi4cPackage/20200116-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/uncertainties/", "title": "uncertainties", "text": ""}, {"location": "available_software/detail/uncertainties/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which uncertainties installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using uncertainties, load one of these modules using a module load command like:

                  module load uncertainties/3.1.7-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty uncertainties/3.1.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/uncertainty-calibration/", "title": "uncertainty-calibration", "text": ""}, {"location": "available_software/detail/uncertainty-calibration/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which uncertainty-calibration installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using uncertainty-calibration, load one of these modules using a module load command like:

                  module load uncertainty-calibration/0.0.9-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty uncertainty-calibration/0.0.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/unimap/", "title": "unimap", "text": ""}, {"location": "available_software/detail/unimap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which unimap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using unimap, load one of these modules using a module load command like:

                  module load unimap/0.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty unimap/0.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/unixODBC/", "title": "unixODBC", "text": ""}, {"location": "available_software/detail/unixODBC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which unixODBC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using unixODBC, load one of these modules using a module load command like:

                  module load unixODBC/2.3.11-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty unixODBC/2.3.11-foss-2022b x x x x x x"}, {"location": "available_software/detail/utf8proc/", "title": "utf8proc", "text": ""}, {"location": "available_software/detail/utf8proc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which utf8proc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using utf8proc, load one of these modules using a module load command like:

                  module load utf8proc/2.8.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x utf8proc/2.7.0-GCCcore-11.3.0 x x x x x x utf8proc/2.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/util-linux/", "title": "util-linux", "text": ""}, {"location": "available_software/detail/util-linux/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which util-linux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using util-linux, load one of these modules using a module load command like:

                  module load util-linux/2.39-GCCcore-12.3.0\n
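
                  A small, hedged illustration (the exact tool set depends on how the module was built): util-linux bundles common system utilities, so after loading the module a command such as lscpu may resolve to the module's newer version rather than the system one:

                  module load util-linux/2.39-GCCcore-12.3.0\nwhich lscpu\nlscpu --version\n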

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty util-linux/2.39-GCCcore-12.3.0 x x x x x x util-linux/2.38.1-GCCcore-12.2.0 x x x x x x util-linux/2.38-GCCcore-11.3.0 x x x x x x util-linux/2.37-GCCcore-11.2.0 x x x x x x util-linux/2.36-GCCcore-10.3.0 x x x x x x util-linux/2.36-GCCcore-10.2.0 x x x x x x util-linux/2.35-GCCcore-9.3.0 x x x x x x util-linux/2.34-GCCcore-8.3.0 x x x - x x util-linux/2.33-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/vConTACT2/", "title": "vConTACT2", "text": ""}, {"location": "available_software/detail/vConTACT2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vConTACT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vConTACT2, load one of these modules using a module load command like:

                  module load vConTACT2/0.11.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vConTACT2/0.11.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/vaeda/", "title": "vaeda", "text": ""}, {"location": "available_software/detail/vaeda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vaeda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vaeda, load one of these modules using a module load command like:

                  module load vaeda/0.0.30-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vaeda/0.0.30-foss-2022a x x x x x x"}, {"location": "available_software/detail/vbz_compression/", "title": "vbz_compression", "text": ""}, {"location": "available_software/detail/vbz_compression/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vbz_compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vbz_compression, load one of these modules using a module load command like:

                  module load vbz_compression/1.0.1-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vbz_compression/1.0.1-gompi-2020b - x - - - -"}, {"location": "available_software/detail/vcflib/", "title": "vcflib", "text": ""}, {"location": "available_software/detail/vcflib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vcflib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vcflib, load one of these modules using a module load command like:

                  module load vcflib/1.0.9-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vcflib/1.0.9-foss-2022a-R-4.2.1 x x x x x x vcflib/1.0.2-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/velocyto/", "title": "velocyto", "text": ""}, {"location": "available_software/detail/velocyto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which velocyto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using velocyto, load one of these modules using a module load command like:

                  module load velocyto/0.17.17-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty velocyto/0.17.17-intel-2020a-Python-3.8.2 - x x - x x velocyto/0.17.17-foss-2022a x x x x x x"}, {"location": "available_software/detail/virtualenv/", "title": "virtualenv", "text": ""}, {"location": "available_software/detail/virtualenv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which virtualenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using virtualenv, load one of these modules using a module load command like:

                  module load virtualenv/20.24.6-GCCcore-13.2.0\n
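
                  An illustrative sketch (the directory name is a placeholder): after loading the module you could create and activate an isolated Python environment, then install extra packages into it with pip:

                  module load virtualenv/20.24.6-GCCcore-13.2.0\nvirtualenv my_venv\nsource my_venv/bin/activate\npip install --upgrade pip\n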

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/vispr/", "title": "vispr", "text": ""}, {"location": "available_software/detail/vispr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vispr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vispr, load one of these modules using a module load command like:

                  module load vispr/0.4.14-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vispr/0.4.14-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessce-python/", "title": "vitessce-python", "text": ""}, {"location": "available_software/detail/vitessce-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which vitessce-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using vitessce-python, load one of these modules using a module load command like:

                  module load vitessce-python/20230222-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vitessce-python/20230222-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessceR/", "title": "vitessceR", "text": ""}, {"location": "available_software/detail/vitessceR/#available-modules", "title": "Available modules", "text": "

The overview below shows which vitessceR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using vitessceR, load one of these modules using a module load command like:

                  module load vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/vsc-mympirun/", "title": "vsc-mympirun", "text": ""}, {"location": "available_software/detail/vsc-mympirun/#available-modules", "title": "Available modules", "text": "

The overview below shows which vsc-mympirun installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using vsc-mympirun, load one of these modules using a module load command like:

                  module load vsc-mympirun/5.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vsc-mympirun/5.3.1 x x x x x x vsc-mympirun/5.3.0 x x x x x x vsc-mympirun/5.2.11 x x x x x x vsc-mympirun/5.2.10 x x x - x x vsc-mympirun/5.2.9 x x x - x x vsc-mympirun/5.2.7 x x x - x x vsc-mympirun/5.2.6 x x x - x x vsc-mympirun/5.2.5 - x - - - - vsc-mympirun/5.2.4 - x - - - - vsc-mympirun/5.2.3 - x - - - - vsc-mympirun/5.2.2 - x - - - - vsc-mympirun/5.2.0 - x - - - - vsc-mympirun/5.1.0 - x - - - -"}, {"location": "available_software/detail/vt/", "title": "vt", "text": ""}, {"location": "available_software/detail/vt/#available-modules", "title": "Available modules", "text": "

The overview below shows which vt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using vt, load one of these modules using a module load command like:

                  module load vt/0.57721-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vt/0.57721-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/wandb/", "title": "wandb", "text": ""}, {"location": "available_software/detail/wandb/#available-modules", "title": "Available modules", "text": "

The overview below shows which wandb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wandb, load one of these modules using a module load command like:

                  module load wandb/0.13.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wandb/0.13.6-GCC-11.3.0 x x x - x x wandb/0.13.4-GCCcore-11.3.0 - - x - x -"}, {"location": "available_software/detail/waves2Foam/", "title": "waves2Foam", "text": ""}, {"location": "available_software/detail/waves2Foam/#available-modules", "title": "Available modules", "text": "

The overview below shows which waves2Foam installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using waves2Foam, load one of these modules using a module load command like:

                  module load waves2Foam/20200703-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty waves2Foam/20200703-foss-2019b - x x - x x"}, {"location": "available_software/detail/wget/", "title": "wget", "text": ""}, {"location": "available_software/detail/wget/#available-modules", "title": "Available modules", "text": "

The overview below shows which wget installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wget, load one of these modules using a module load command like:

                  module load wget/1.21.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wget/1.21.1-GCCcore-10.3.0 - x x x x x wget/1.20.3-GCCcore-10.2.0 x x x x x x wget/1.20.3-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/wgsim/", "title": "wgsim", "text": ""}, {"location": "available_software/detail/wgsim/#available-modules", "title": "Available modules", "text": "

The overview below shows which wgsim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wgsim, load one of these modules using a module load command like:

                  module load wgsim/20111017-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wgsim/20111017-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/worker/", "title": "worker", "text": ""}, {"location": "available_software/detail/worker/#available-modules", "title": "Available modules", "text": "

The overview below shows which worker installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using worker, load one of these modules using a module load command like:

                  module load worker/1.6.13-iimpi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty worker/1.6.13-iimpi-2022b x x x x x x worker/1.6.13-iimpi-2021b x x x - x x worker/1.6.12-foss-2021b x x x - x x worker/1.6.11-intel-2019b - x x - x x"}, {"location": "available_software/detail/wpebackend-fdo/", "title": "wpebackend-fdo", "text": ""}, {"location": "available_software/detail/wpebackend-fdo/#available-modules", "title": "Available modules", "text": "

The overview below shows which wpebackend-fdo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wpebackend-fdo, load one of these modules using a module load command like:

                  module load wpebackend-fdo/1.13.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wpebackend-fdo/1.13.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/wrapt/", "title": "wrapt", "text": ""}, {"location": "available_software/detail/wrapt/#available-modules", "title": "Available modules", "text": "

The overview below shows which wrapt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wrapt, load one of these modules using a module load command like:

                  module load wrapt/1.15.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wrapt/1.15.0-gfbf-2023a x x x x x x wrapt/1.15.0-foss-2022b x x x x x x wrapt/1.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/wrf-python/", "title": "wrf-python", "text": ""}, {"location": "available_software/detail/wrf-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which wrf-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wrf-python, load one of these modules using a module load command like:

                  module load wrf-python/1.3.4.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wrf-python/1.3.4.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/wtdbg2/", "title": "wtdbg2", "text": ""}, {"location": "available_software/detail/wtdbg2/#available-modules", "title": "Available modules", "text": "

The overview below shows which wtdbg2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wtdbg2, load one of these modules using a module load command like:

                  module load wtdbg2/2.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wtdbg2/2.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/wxPython/", "title": "wxPython", "text": ""}, {"location": "available_software/detail/wxPython/#available-modules", "title": "Available modules", "text": "

The overview below shows which wxPython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wxPython, load one of these modules using a module load command like:

                  module load wxPython/4.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wxPython/4.2.0-foss-2021b x x x x x x wxPython/4.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/wxWidgets/", "title": "wxWidgets", "text": ""}, {"location": "available_software/detail/wxWidgets/#available-modules", "title": "Available modules", "text": "

The overview below shows which wxWidgets installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using wxWidgets, load one of these modules using a module load command like:

                  module load wxWidgets/3.2.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wxWidgets/3.2.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/x264/", "title": "x264", "text": ""}, {"location": "available_software/detail/x264/#available-modules", "title": "Available modules", "text": "

The overview below shows which x264 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using x264, load one of these modules using a module load command like:

                  module load x264/20230226-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty x264/20230226-GCCcore-12.3.0 x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x x264/20220620-GCCcore-11.3.0 x x x x x x x264/20210613-GCCcore-11.2.0 x x x x x x x264/20210414-GCCcore-10.3.0 x x x x x x x264/20201026-GCCcore-10.2.0 x x x x x x x264/20191217-GCCcore-9.3.0 - x x - x x x264/20190925-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/x265/", "title": "x265", "text": ""}, {"location": "available_software/detail/x265/#available-modules", "title": "Available modules", "text": "

The overview below shows which x265 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using x265, load one of these modules using a module load command like:

                  module load x265/3.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty x265/3.5-GCCcore-12.3.0 x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x x265/3.5-GCCcore-11.3.0 x x x x x x x265/3.5-GCCcore-11.2.0 x x x x x x x265/3.5-GCCcore-10.3.0 x x x x x x x265/3.3-GCCcore-10.2.0 x x x x x x x265/3.3-GCCcore-9.3.0 - x x - x x x265/3.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/xESMF/", "title": "xESMF", "text": ""}, {"location": "available_software/detail/xESMF/#available-modules", "title": "Available modules", "text": "

The overview below shows which xESMF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xESMF, load one of these modules using a module load command like:

                  module load xESMF/0.3.0-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xESMF/0.3.0-intel-2020b - x x - x x xESMF/0.3.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/xarray/", "title": "xarray", "text": ""}, {"location": "available_software/detail/xarray/#available-modules", "title": "Available modules", "text": "

The overview below shows which xarray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xarray, load one of these modules using a module load command like:

                  module load xarray/2023.9.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xarray/2023.9.0-gfbf-2023a x x x x x x xarray/2023.4.2-gfbf-2022b x x x x x x xarray/2022.6.0-foss-2022a x x x x x x xarray/0.20.1-intel-2021b x x x - x x xarray/0.20.1-foss-2021b x x x x x x xarray/0.19.0-foss-2021a x x x x x x xarray/0.16.2-intel-2020b - x x - x x xarray/0.16.2-fosscuda-2020b - - - - x - xarray/0.16.1-foss-2020a-Python-3.8.2 - x x - x x xarray/0.15.1-intel-2019b-Python-3.7.4 - x x - x x xarray/0.15.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/xorg-macros/", "title": "xorg-macros", "text": ""}, {"location": "available_software/detail/xorg-macros/#available-modules", "title": "Available modules", "text": "

The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xorg-macros, load one of these modules using a module load command like:

                  module load xorg-macros/1.20.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-10.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-10.2.0 x x x x x x xorg-macros/1.19.2-GCCcore-9.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/xprop/", "title": "xprop", "text": ""}, {"location": "available_software/detail/xprop/#available-modules", "title": "Available modules", "text": "

The overview below shows which xprop installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xprop, load one of these modules using a module load command like:

                  module load xprop/1.2.5-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xprop/1.2.5-GCCcore-10.2.0 - x x x x x xprop/1.2.4-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/xproto/", "title": "xproto", "text": ""}, {"location": "available_software/detail/xproto/#available-modules", "title": "Available modules", "text": "

The overview below shows which xproto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xproto, load one of these modules using a module load command like:

                  module load xproto/7.0.31-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xproto/7.0.31-GCCcore-10.3.0 - x x - x x xproto/7.0.31-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/xtb/", "title": "xtb", "text": ""}, {"location": "available_software/detail/xtb/#available-modules", "title": "Available modules", "text": "

The overview below shows which xtb installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xtb, load one of these modules using a module load command like:

                  module load xtb/6.6.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xtb/6.6.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/xxd/", "title": "xxd", "text": ""}, {"location": "available_software/detail/xxd/#available-modules", "title": "Available modules", "text": "

The overview below shows which xxd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using xxd, load one of these modules using a module load command like:

                  module load xxd/9.0.2112-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xxd/9.0.2112-GCCcore-12.3.0 x x x x x x xxd/9.0.1696-GCCcore-12.2.0 x x x x x x xxd/8.2.4220-GCCcore-11.3.0 x x x x x x xxd/8.2.4220-GCCcore-11.2.0 x x x - x x xxd/8.2.4220-GCCcore-10.3.0 - - - x - - xxd/8.2.4220-GCCcore-10.2.0 - - - x - -"}, {"location": "available_software/detail/yaff/", "title": "yaff", "text": ""}, {"location": "available_software/detail/yaff/#available-modules", "title": "Available modules", "text": "

The overview below shows which yaff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using yaff, load one of these modules using a module load command like:

                  module load yaff/1.6.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty yaff/1.6.0-intel-2020a-Python-3.8.2 x x x x x x yaff/1.6.0-intel-2019b-Python-3.7.4 - x x - x x yaff/1.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/yaml-cpp/", "title": "yaml-cpp", "text": ""}, {"location": "available_software/detail/yaml-cpp/#available-modules", "title": "Available modules", "text": "

The overview below shows which yaml-cpp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using yaml-cpp, load one of these modules using a module load command like:

                  module load yaml-cpp/0.7.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty yaml-cpp/0.7.0-GCCcore-12.3.0 x x x x x x yaml-cpp/0.7.0-GCCcore-11.2.0 x x x - x x yaml-cpp/0.6.3-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/zUMIs/", "title": "zUMIs", "text": ""}, {"location": "available_software/detail/zUMIs/#available-modules", "title": "Available modules", "text": "

The overview below shows which zUMIs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zUMIs, load one of these modules using a module load command like:

                  module load zUMIs/2.9.7-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zUMIs/2.9.7-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/zarr/", "title": "zarr", "text": ""}, {"location": "available_software/detail/zarr/#available-modules", "title": "Available modules", "text": "

The overview below shows which zarr installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zarr, load one of these modules using a module load command like:

                  module load zarr/2.16.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zarr/2.16.0-foss-2022b x x x x x x zarr/2.13.3-foss-2022a x x x x x x zarr/2.13.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/zfp/", "title": "zfp", "text": ""}, {"location": "available_software/detail/zfp/#available-modules", "title": "Available modules", "text": "

The overview below shows which zfp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zfp, load one of these modules using a module load command like:

                  module load zfp/1.0.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zfp/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib-ng/", "title": "zlib-ng", "text": ""}, {"location": "available_software/detail/zlib-ng/#available-modules", "title": "Available modules", "text": "

The overview below shows which zlib-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zlib-ng, load one of these modules using a module load command like:

                  module load zlib-ng/2.0.7-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zlib-ng/2.0.7-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib/", "title": "zlib", "text": ""}, {"location": "available_software/detail/zlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which zlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zlib, load one of these modules using a module load command like:

                  module load zlib/1.2.13-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zlib/1.2.13-GCCcore-13.2.0 x x x x x x zlib/1.2.13-GCCcore-12.3.0 x x x x x x zlib/1.2.13 x x x x x x zlib/1.2.12-GCCcore-12.2.0 x x x x x x zlib/1.2.12-GCCcore-11.3.0 x x x x x x zlib/1.2.12 x x x x x x zlib/1.2.11-GCCcore-11.2.0 x x x x x x zlib/1.2.11-GCCcore-10.3.0 x x x x x x zlib/1.2.11-GCCcore-10.2.0 x x x x x x zlib/1.2.11-GCCcore-9.3.0 x x x x x x zlib/1.2.11-GCCcore-8.3.0 x x x x x x zlib/1.2.11-GCCcore-8.2.0 - x - - - - zlib/1.2.11 x x x x x x"}, {"location": "available_software/detail/zstd/", "title": "zstd", "text": ""}, {"location": "available_software/detail/zstd/#available-modules", "title": "Available modules", "text": "

The overview below shows which zstd installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using zstd, load one of these modules using a module load command like:

                  module load zstd/1.5.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zstd/1.5.5-GCCcore-13.2.0 x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x zstd/1.5.2-GCCcore-11.3.0 x x x x x x zstd/1.5.0-GCCcore-11.2.0 x x x x x x zstd/1.4.9-GCCcore-10.3.0 x x x x x x zstd/1.4.5-GCCcore-10.2.0 x x x x x x zstd/1.4.4-GCCcore-9.3.0 - x x x x x zstd/1.4.4-GCCcore-8.3.0 x - - - x -"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

Or, when you want to check whether a specific software package, compiler or application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
                  "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": "

Everyone can get access to and use the HPC-UGent supercomputing infrastructure and services. The conditions that apply depend on your affiliation.

                  "}, {"location": "sites/hpc_policies/#access-for-staff-and-academics", "title": "Access for staff and academics", "text": ""}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-flemish-university-associations", "title": "Researchers and staff affiliated with Flemish university associations", "text": "
                  • Includes externally funded researchers registered in the personnel database (FWO, SBO, VIB, IMEC, etc.).

                  • Includes researchers from all VSC partners.

                  • Usage is free of charge.

                  • Use your account credentials at your affiliated university to request a VSC-id and connect.

                  • See Getting a HPC Account.

                  "}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-other-flemish-or-federal-research-institutes", "title": "Researchers and staff affiliated with other Flemish or federal research institutes", "text": "
                  • Includes researchers from e.g. INBO, ILVO, RBINS, etc.

• HPC-UGent promotes using the Tier-1 services of the VSC.

• HPC-UGent can act as a liaison.

                  "}, {"location": "sites/hpc_policies/#students", "title": "Students", "text": "
• Students (Bachelor or Master) enrolled at one of the institutions mentioned above can also use HPC-UGent.

                  • Same conditions apply, free of charge for all Flemish university associations.

                  • Use your university account credentials to request a VSC-id and connect.

                  "}, {"location": "sites/hpc_policies/#access-for-industry", "title": "Access for industry", "text": "

                  Researchers and developers from industry can use the services and infrastructure tailored to industry from VSC.

                  "}, {"location": "sites/hpc_policies/#our-offer", "title": "Our offer", "text": "
                  • VSC has a dedicated service geared towards industry.

• HPC-UGent can act as a liaison to the VSC services.

                  "}, {"location": "sites/hpc_policies/#research-partnership", "title": "Research partnership:", "text": "
                  • Interested in collaborating in supercomputing with a UGent research group?

                  • We can help you look for a collaborative partner. Contact hpc@ugent.be.

                  "}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
                  $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

Or, when you want to check whether a specific software package, compiler or application (e.g., LAMMPS) is installed on the HPC:

                  $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

As you may not be aware of the exact capitalization used in the module name, we performed a case-insensitive search with the \"-i\" option.

                  "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

Or, when you want to check whether a specific software package, compiler or application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
                  "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

                  (more info soon)

                  "}]} \ No newline at end of file +{"config": {"lang": ["en"], "separator": "[\\_\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to the HPC-UGent documentation", "text": "

                  Use the menu on the left to navigate, or use the search box on the top right.

                  You are viewing documentation intended for people using macOS.

                  Use the OS dropdown in the top bar to switch to a different operating system.

                  Quick links

                  • Getting Started | Getting Access
                  • Recording of HPC-UGent intro
                  • Linux Tutorial
                  • Hardware overview
                  • Migration of cluster and login nodes to RHEL9 (starting Sept'24)
                  • FAQ | Troubleshooting | Best practices | Known issues

                  If you find any problems in this documentation, please report them by mail to hpc@ugent.be or open a pull request.

                  If you still have any questions, you can contact the HPC-UGent team.

                  "}, {"location": "FAQ/", "title": "Frequently Asked Questions (FAQ)", "text": "

                  New users should consult the Introduction to HPC to get started, which is a great resource for learning the basics, troubleshooting, and looking up specifics.

                  If you want to use software that's not yet installed on the HPC, send us a software installation request.

                  Overview of HPC-UGent Tier-2 infrastructure

                  "}, {"location": "FAQ/#composing-a-job", "title": "Composing a job", "text": ""}, {"location": "FAQ/#how-many-coresnodes-should-i-request", "title": "How many cores/nodes should I request?", "text": "

                  An important factor in this question is how well your task is being parallelized: does it actually run faster with more resources? You can test this yourself: start with 4 cores, then 8, then 16... The execution time should each time be reduced to around half of what it was before. You can also try this with full nodes: 1 node, 2 nodes. A rule of thumb is that you're around the limit when you double the resources but the execution time is still ~60-70% of what it was before. That's a signal to stop increasing the core count.
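
A minimal sketch of such a scaling test is shown below; scaling_test.sh is a hypothetical job script that uses however many cores it is allocated, and you compare the reported walltimes of the resulting jobs afterwards:

for p in 4 8 16; do\n    qsub -l nodes=1:ppn=$p scaling_test.sh   # submit the same hypothetical job script with 4, 8 and 16 cores\ndone\n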

                  See also: Running batch jobs.

                  "}, {"location": "FAQ/#which-packages-are-available", "title": "Which packages are available?", "text": "

                  When connected to the HPC, use the commands module avail [search_text] and module spider [module] to find installed modules and get information on them.

                  Among others, many packages for both Python and R are readily available on the HPC. These aren't always easy to find, though, as we've bundled them together.

                  Specifically, the module SciPy-bundle includes numpy, pandas, scipy and a few others. For R, the normal R module has many libraries included. The bundle R-bundle-Bioconductor contains more libraries. Use the command module spider [module] to find the specifics on these bundles.
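
For example, to look up these bundles and see which versions exist (the commands below only use module names mentioned above):

module spider SciPy-bundle            # list all available versions of the bundle\nmodule spider R-bundle-Bioconductor   # idem for the Bioconductor bundle\nmodule avail SciPy-bundle             # show which versions can be loaded on the current cluster\n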

                  If the package or library you want is not available, send us a software installation request.

                  "}, {"location": "FAQ/#how-do-i-choose-the-job-modules", "title": "How do I choose the job modules?", "text": "

                  Modules each come with a suffix that describes the toolchain used to install them.

                  Examples:

                  • AlphaFold/2.2.2-foss-2021a

                  • tqdm/4.61.2-GCCcore-10.3.0

                  • Python/3.9.5-GCCcore-10.3.0

                  • matplotlib/3.4.2-foss-2021a

                  Modules from the same toolchain always work together, and modules from a *different version of the same toolchain* never work together.

                  The above set of modules works together: an overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.
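
As a minimal sketch, loading such a compatible set (using the example modules listed above, which belong to the foss-2021a / GCCcore-10.3.0 toolchain generation) could look like:

module load Python/3.9.5-GCCcore-10.3.0\nmodule load matplotlib/3.4.2-foss-2021a\nmodule load AlphaFold/2.2.2-foss-2021a\n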

                  You can use module avail [search_text] to see which versions on which toolchains are available to use.

                  If you need something that's not available yet, you can request it through a software installation request.

                  It is possible to use the modules without specifying a version or toolchain. However, this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules. Even if it works now, as more modules get installed on the HPC, your job can suddenly break.

                  "}, {"location": "FAQ/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "FAQ/#my-modules-dont-work-together", "title": "My modules don't work together", "text": "

                  When incompatible modules are loaded, you might encounter an error like this:

                  Lmod has detected the following error: A different version of the 'GCC' module\nis already loaded (see output of 'ml').\n

You should load another foss module that is compatible with the currently loaded version of GCC. Use ml spider foss to get an overview of the available versions.
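
A minimal sketch of resolving such a conflict (the version 2021a is only an example; pick one from the ml spider foss output that matches the GCC version reported in the error):

ml                      # check which GCC/toolchain modules are currently loaded\nml spider foss          # list the available foss versions\nmodule load foss/2021a  # example version; choose one compatible with the loaded GCC\n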

                  Modules from the same toolchain always work together, and modules from a different version of the same toolchain never work together.

                  An overview of compatible toolchains can be found here: https://docs.easybuild.io/en/latest/Common-toolchains.html#overview-of-common-toolchains.

                  See also: How do I choose the job modules?

                  "}, {"location": "FAQ/#my-job-takes-longer-than-72-hours", "title": "My job takes longer than 72 hours", "text": "

The 72-hour walltime limit will not be extended. However, you can work around this barrier:

                  • Check that all available resources are being used. See also:
                    • How many cores/nodes should I request?.
                    • My job is slow.
                    • My job isn't using any GPUs.
                  • Use a faster cluster.
                  • Divide the job into more parallel processes.
                  • Divide the job into shorter processes, which you can submit as separate jobs.
                  • Use the built-in checkpointing of your software.
                  "}, {"location": "FAQ/#job-failed-segv-segmentation-fault", "title": "Job failed: SEGV Segmentation fault", "text": "

                  Any error mentioning SEGV or Segmentation fault/violation has something to do with a memory error. If you weren't messing around with memory-unsafe applications or programming, your job probably hit its memory limit.

                  When there's no memory amount specified in a job script, your job will get access to a proportional share of the total memory on the node: If you request a full node, all memory will be available. If you request 8 cores on a cluster where nodes have 2x18 cores, you will get 8/36 = 2/9 of the total memory on the node.

                  Try requesting a bit more memory than your proportional share, and see if that solves the issue.
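
As a rough sketch, such a memory request in a job script could look like the lines below (16gb is only an illustrative amount, and the exact resource keyword to use is covered in Specifying memory requirements):

#PBS -l nodes=1:ppn=8\n#PBS -l mem=16gb\n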

                  See also: Specifying memory requirements.

                  "}, {"location": "FAQ/#my-compilationcommand-fails-on-login-node", "title": "My compilation/command fails on login node", "text": "

When logging in, you are using a connection to the login nodes. There are somewhat strict limitations on what you can do in those sessions: check out the output of ulimit -a. Specifically, the memory and the number of processes you can use may present an issue. This is common with MATLAB compilation and Nextflow. An error caused by the login session limitations can look like this: Aborted (core dumped).

                  It's easy to get around these limitations: start an interactive session on one of the clusters. Then, you are acting as a node on that cluster instead of a login node. Notably, the debug/interactive cluster will grant such a session immediately, while other clusters might make you wait a bit. Example command: ml swap cluster/donphan && qsub -I -l nodes=1:ppn=8

                  See also: Running interactive jobs.

                  "}, {"location": "FAQ/#my-job-isnt-using-any-gpus", "title": "My job isn't using any GPUs", "text": "

                  Only two clusters have GPUs. Check out the infrastructure overview, to see which one suits your needs. Make sure that you manually switch to the GPU cluster before you submit the job. Inside the job script, you need to explicitly request the GPUs: #PBS -l nodes=1:ppn=24:gpus=2
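
A minimal sketch of this workflow, where joltik is used as an example GPU cluster and gpu_job.sh is a hypothetical job script containing the #PBS line above:

module swap cluster/joltik   # switch to a GPU cluster before submitting\nqsub gpu_job.sh              # the job script itself requests the GPUs via #PBS -l nodes=1:ppn=24:gpus=2\n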

Some software modules don't have GPU support, even when running on the GPU cluster. For example, when running module avail alphafold on the joltik cluster, you will find versions on both the foss toolchain and the fosscuda toolchain. Of these, only the CUDA versions will use GPU power. When in doubt, CUDA means GPU support.

                  See also: HPC-UGent GPU clusters.

                  "}, {"location": "FAQ/#my-job-runs-slower-than-i-expected", "title": "My job runs slower than I expected", "text": "

                  There are a few possible causes why a job can perform worse than expected.

Is your job using all the available cores you've requested? You can test this by increasing and decreasing the number of cores: if the execution time stays the same, the job was not using all cores. Some workloads just don't scale well with more cores. If you expect the job to be very parallelizable and you encounter this problem, maybe you missed some settings that enable multicore execution. See also: How many cores/nodes should I request?

                  Does your job have access to the GPUs you requested? See also: My job isn't using any GPUs

Not all file locations perform the same. In particular, the $VSC_HOME and $VSC_DATA directories are relatively slow to access. Your jobs should instead use the $VSC_SCRATCH directory, or other fast locations (depending on your needs), described in Where to store your data on the HPC. As an example of how to do this: the job can copy the input to the scratch directory, then execute the computations, and lastly copy the output back to the data directory. Using the home and data directories is especially a problem when UGent isn't your home institution: your files may be stored, for example, in Leuven while you're running a job in Ghent.
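
A minimal sketch of this staging pattern inside a job script (input.dat, my_computation and results.out are hypothetical names):

cp $VSC_DATA/input.dat $VSC_SCRATCH/      # stage the input to the fast scratch filesystem\ncd $VSC_SCRATCH\nmy_computation input.dat > results.out   # hypothetical program, e.g. provided by a loaded module\ncp results.out $VSC_DATA/                # copy the output back to the data directory\n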

                  "}, {"location": "FAQ/#my-mpi-job-fails", "title": "My MPI job fails", "text": "

                  Use mympirun in your job script instead of mpirun. It is a tool that makes sure everything gets set up correctly for the HPC infrastructure. You need to load it as a module in your job script: module load vsc-mympirun.

                  To submit the job, use the qsub command rather than sbatch. Although both will submit a job, qsub will correctly interpret the #PBS parameters inside the job script. sbatch might not set the job environment up correctly for mympirun/OpenMPI.
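
Putting this together, a minimal sketch of an MPI job script that you would submit with qsub (my_mpi_program is a hypothetical executable and the resource request is only an example):

#!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=1:00:00\nmodule load vsc-mympirun\nmympirun ./my_mpi_program\n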

                  See also: Multi core jobs/Parallel Computing and Mympirun.

                  "}, {"location": "FAQ/#mympirun-seems-to-ignore-its-arguments", "title": "mympirun seems to ignore its arguments", "text": "

                  For example, we have a simple script (./hello.sh):

                  #!/bin/bash \necho \"hello world\"\n

                  And we run it like mympirun ./hello.sh --output output.txt.

                  To our surprise, this doesn't output to the file output.txt, but to standard out! This is because mympirun expects the program name and the arguments of the program to be its last arguments. Here, the --output output.txt arguments are passed to ./hello.sh instead of to mympirun. The correct way to run it is:

                  mympirun --output output.txt ./hello.sh\n
                  "}, {"location": "FAQ/#when-will-my-job-start", "title": "When will my job start?", "text": "

                  See the explanation about how jobs get prioritized in When will my job start.

                  "}, {"location": "FAQ/#why-do-i-get-a-no-space-left-on-device-error-while-i-still-have-storage-space-left", "title": "Why do I get a \"No space left on device\" error, while I still have storage space left?", "text": "

                  When trying to create files, errors like this can occur:

                  No space left on device\n

                  The error \"No space left on device\" can mean two different things:

                  • all available storage quota on the file system in question has been used;
                  • the inode limit has been reached on that file system.

An inode can be seen as a \"file slot\", meaning that when the limit is reached, no additional files can be created. There is a standard inode limit in place that will be increased if needed. The number of inodes used per file system can be checked on the VSC account page.

                  Possible solutions to this problem include cleaning up unused files and directories or compressing directories with a lot of files into zip- or tar-files.
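
For example, a directory containing many small files can be packed into a single compressed tar file, which releases the inodes those files were using (many_small_files is a hypothetical directory name):

tar -czf many_small_files.tar.gz many_small_files/   # pack the directory into one file\nrm -r many_small_files/                              # remove the original files to free their inodes\n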

                  If the problem persists, feel free to contact support.

                  "}, {"location": "FAQ/#other", "title": "Other", "text": ""}, {"location": "FAQ/#can-i-share-my-account-with-someone-else", "title": "Can I share my account with someone else?", "text": "

NO. You are not allowed to share your VSC account with anyone else; it is strictly personal.

                  See https://helpdesk.ugent.be/account/en/regels.php.

If you want to share data, there are alternatives (like a shared directory in VO space, see Virtual organisations).

                  "}, {"location": "FAQ/#can-i-share-my-data-with-other-hpc-users", "title": "Can I share my data with other HPC users?", "text": "

Yes, you can use the chmod or setfacl commands to change permissions of files so other users can access the data. For example, the following command will enable a user named \"otheruser\" to read the file named dataset.txt:

                  $ setfacl -m u:otheruser:r dataset.txt\n$ ls -l dataset.txt\n-rwxr-x---+ 2 vsc40000 mygroup      40 Apr 12 15:00 dataset.txt\n

                  For more information about chmod or setfacl, see Linux tutorial.

                  "}, {"location": "FAQ/#can-i-use-multiple-different-ssh-key-pairs-to-connect-to-my-vsc-account", "title": "Can I use multiple different SSH key pairs to connect to my VSC account?", "text": "

                  Yes, and this is recommended when working from different computers. Please see Adding multiple SSH public keys on how to do this.

                  "}, {"location": "FAQ/#i-want-to-use-software-that-is-not-available-on-the-clusters-yet", "title": "I want to use software that is not available on the clusters yet", "text": "

Please fill out the details about the software and why you need it in this form: https://www.ugent.be/hpc/en/support/software-installation-request. When submitting the form, a mail will be sent to hpc@ugent.be containing all the provided information. The HPC team will look into your request as soon as possible and contact you when the installation is done or if further information is required.

                  If the software is a Python package, you can manually install it in a virtual environment. More information can be found here. Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment. This can lead to dramatic performance improvements.
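
A minimal sketch of such a manual installation in a virtual environment (the Python module version, the environment location and the package name are just examples; the linked page has the full instructions):

module load Python/3.9.5-GCCcore-10.3.0       # example version, check module avail Python\npython -m venv $VSC_DATA/venvs/myproject      # hypothetical location for the virtual environment\nsource $VSC_DATA/venvs/myproject/bin/activate\npip install mypackage                         # hypothetical package name\n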

                  "}, {"location": "FAQ/#is-my-connection-compromised-remote-host-identification-has-changed", "title": "Is my connection compromised? Remote host identification has changed", "text": "

                  On Monday 25 April 2022, the login nodes received an update to RHEL8. This means that the host keys of those servers also changed. As a result, you could encounter the following warnings.

macOS & Linux (on Windows, only the second part is shown):

                  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\nIt is also possible that a host key has just been changed.\nThe fingerprint for the RSA key sent by the remote host is\nxx:xx:xx.\nPlease contact your system administrator.\nAdd correct host key in /home/hostname/.ssh/known_hosts to get rid of this message.\nOffending RSA key in /var/lib/sss/pubconf/known_hosts:1\nRSA host key for user has changed and you have requested strict checking.\nHost key verification failed.\n

                  Please follow the instructions at migration to RHEL8 to ensure it really is not a hacking attempt - you will find the correct host key to compare. You will also find how to hide the warning.

                  "}, {"location": "FAQ/#vo-how-does-it-work", "title": "VO: how does it work?", "text": "

                  A Virtual Organisation consists of a number of members and moderators. A moderator can:

                  • Manage the VO members (but can't access/remove their data on the system).

                  • See how much storage each member has used, and set limits per member.

                  • Request additional storage for the VO.

One person can only be part of one VO, be it as a member or moderator. It's possible to leave a VO and join another one. However, it's not recommended to keep switching between VOs (to supervise groups, for example).

                  See also: Virtual Organisations.

                  "}, {"location": "FAQ/#my-ugent-shared-drives-dont-show-up", "title": "My UGent shared drives don't show up", "text": "

After mounting the UGent shared drives with kinit your_email@ugent.be, you might not see an entry with your username when running ls /UGent. This is normal: try ls /UGent/your_username or cd /UGent/your_username, and you should be able to access the drives. Be sure to use your UGent username and not your VSC username here.

                  See also: Your UGent home drive and shares.

                  "}, {"location": "FAQ/#my-home-directory-is-almost-full-and-i-dont-know-why", "title": "My home directory is (almost) full, and I don't know why", "text": "

                  Your home directory might be full without looking like it due to hidden files. Hidden files and subdirectories have a name starting with a dot and do not show up when running ls. If you want to check where the storage in your home directory is used, you can make use of the du command to find out what the largest files and subdirectories are:

                  du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'\n

The du command returns the size of every file and subdirectory in the $VSC_HOME directory. This output is then piped into egrep to keep only the lines that matter the most.

The egrep command only lets through the entries that match the regular expression [0-9]{3}M|[0-9]G, i.e. files and subdirectories that consume 100 MB or more.

                  "}, {"location": "FAQ/#how-can-i-get-more-storage-space", "title": "How can I get more storage space?", "text": "

                  By default you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data ($VSC_DATA) and scratch ($VSC_SCRATCH) filesystems. It is not possible to expand the storage quota for these personal directories.

                  You can get more storage space through a Virtual Organisation (VO), which will give you access to the additional directories in a subdirectory specific to that VO ($VSC_DATA_VO and $VSC_SCRATCH_VO). The moderators of a VO can request more storage for their VO.

                  "}, {"location": "FAQ/#why-cant-i-use-the-sudo-command", "title": "Why can't I use the sudo command?", "text": "

                  When you attempt to use sudo, you will be prompted for a password. However, you cannot enter a valid password because this feature is reserved exclusively for HPC administrators.

                  sudo is used to execute a command with administrator rights, which would allow you to make system-wide changes. You are only able to run commands that make changes to the directories that your VSC account has access to, like your home directory, your personal directories like $VSC_DATA and $VSC_SCRATCH, or shared VO/group directories like $VSC_DATA_VO and $VSC_SCRATCH_VO.

                  A lot of tasks can be performed without sudo, including installing software in your own account.

                  Installing software

                  • If you know how to install the software without using sudo, you are welcome to proceed with the installation.
                  • If you are unsure how to install the software, you can submit a software installation request, and the HPC-UGent support team will handle the installation for you.
                  "}, {"location": "FAQ/#i-have-another-questionproblem", "title": "I have another question/problem", "text": "

                  Who can I contact?

                  • General questions regarding HPC-UGent and VSC: hpc@ugent.be

                  • HPC-UGent Tier-2: hpc@ugent.be

                  • VSC Tier-1 compute: compute@vscentrum.be

                  • VSC Tier-1 cloud: cloud@vscentrum.be

                  "}, {"location": "HOD/", "title": "Hanythingondemand (HOD)", "text": "

Hanythingondemand (or HOD for short) is a tool to run a Hadoop (YARN) cluster on a traditional HPC system.

                  "}, {"location": "HOD/#documentation", "title": "Documentation", "text": "

                  The official documentation for HOD version 3.0.0 and newer is available at https://hod.readthedocs.org/en/latest/. The slides of the 2016 HOD training session are available at http://users.ugent.be/~kehoste/hod_20161024.pdf.

                  "}, {"location": "HOD/#using-hod", "title": "Using HOD", "text": "

Before using HOD, you first need to load the hod module. We don't specify a version here (this is an exception; for most other modules you should, see Using explicit version numbers) because newer versions might include important bug fixes.

                  module load hod\n
                  "}, {"location": "HOD/#compatibility-with-login-nodes", "title": "Compatibility with login nodes", "text": "

                  The hod modules are constructed such that they can be used on the HPC-UGent infrastructure login nodes, regardless of which cluster module is loaded (this is not the case for software installed via modules in general, see Running software that is incompatible with host).

As such, you should experience no problems if you swap to a different cluster module before loading the hod module and subsequently running hod.

                  For example, this will work as expected:

                  $ module swap cluster/donphan\n$ module load hod\n$ hod\nhanythingondemand - Run services within an HPC cluster\nusage: hod <subcommand> [subcommand options]\nAvailable subcommands (one of these must be specified!):\n    batch           Submit a job to spawn a cluster on a PBS job controller, run a job script, and tear down the cluster when it's done\n    clean           Remove stale cluster info.\n...\n

Note that modules named hanythingondemand/* are also available. These should however not be used directly, since they may not be compatible with the login nodes (depending on which cluster they were installed for).

                  "}, {"location": "HOD/#standard-hod-configuration", "title": "Standard HOD configuration", "text": "

                  The hod module will also put a basic configuration in place for HOD, by defining a couple of $HOD_* environment variables:

                  $ module load hod\n$ env | grep HOD | sort\nHOD_BATCH_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_BATCH_WORKDIR=$VSC_SCRATCH/hod\nHOD_CREATE_HOD_MODULE=hanythingondemand/3.2.2-intel-2016b-Python-2.7.12\nHOD_CREATE_WORKDIR=$VSC_SCRATCH/hod\n

By defining these environment variables, you no longer have to specify --hod-module and --workdir when using hod batch or hod create, even though they are strictly required options.

                  If you want to use a different parent working directory for HOD, it suffices to either redefine $HOD_BATCH_WORKDIR and $HOD_CREATE_WORKDIR, or to specify --workdir (which will override the corresponding environment variable).
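
For example, pointing HOD at a different parent working directory could look like this (hod_experiments is a hypothetical directory name):

export HOD_BATCH_WORKDIR=$VSC_SCRATCH/hod_experiments\nexport HOD_CREATE_WORKDIR=$VSC_SCRATCH/hod_experiments\n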

                  Changing the HOD module that is used by the HOD backend (i.e., using --hod-module or redefining $HOD_*_HOD_MODULE) is strongly discouraged.

                  "}, {"location": "HOD/#cleaning-up", "title": "Cleaning up", "text": "

                  After HOD clusters terminate, their local working directory and cluster information are typically not cleaned up automatically (for example, because the job hosting an interactive HOD cluster submitted via hod create runs out of walltime).

                  These HOD clusters will still show up in the output of hod list, and will be marked as <job-not-found>.

                  You should occasionally clean this up using hod clean:

                  $ module list\nCurrently Loaded Modulefiles:\n  1) cluster/doduo(default)   2) pbs_python/4.6.0            3) vsc-base/2.4.2              4) hod/3.0.0-cli\n\n$ hod list\nCluster label   Job ID         State                Hosts\nexample1        123456         &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/123456 for cluster labeled example1\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example1\n\n$ module swap cluster/donphan\n$ hod list\nCluster label   Job ID                          State               Hosts\nexample2        98765.master19.donphan.gent.vsc &lt;job-not-found&gt;     &lt;none&gt;\n\n$ hod clean\nRemoved cluster localworkdir directory /user/scratch/gent/vsc400/vsc40000/hod/hod/98765.master19.donphan.gent.vsc for cluster labeled example2\nRemoved cluster info directory /user/home/gent/vsc400/vsc40000/.config/hod.d/wordcount for cluster labeled example2\n
                  Note that only HOD clusters that were submitted to the currently loaded cluster module will be cleaned up.

                  "}, {"location": "HOD/#getting-help", "title": "Getting help", "text": "

                  If you have any questions, or are experiencing problems using HOD, you have a couple of options:

                  • Subscribe to the HOD mailing list via https://lists.ugent.be/wws/info/hod, and contact the HOD users and developers at hod@lists.ugent.be.

                  • Contact the HPC-UGent team via hpc@ugent.be

                  • Open an issue in the hanythingondemand GitHub repository, via https://github.com/hpcugent/hanythingondemand/issues.

                  "}, {"location": "MATLAB/", "title": "MATLAB", "text": "

                  Note

                  To run a MATLAB program on the HPC-UGent infrastructure you must compile it first, because the MATLAB license server is not accessible from cluster workernodes (except for the interactive debug cluster).

                  Compiling MATLAB programs is only possible on the interactive debug cluster, not on the HPC-UGent login nodes where resource limits w.r.t. memory and max. number of processes are too strict.

                  "}, {"location": "MATLAB/#why-is-the-matlab-compiler-required", "title": "Why is the MATLAB compiler required?", "text": "

                  The main reason behind this alternative way of using MATLAB is licensing: only a limited number of MATLAB sessions can be active at the same time. However, once the MATLAB program is compiled using the MATLAB compiler, the resulting stand-alone executable can be run without needing to contact the license server.

                  Note that a license is required for the MATLAB Compiler, see https://nl.mathworks.com/help/compiler/index.html. If the mcc command is provided by the MATLAB installation you are using, the MATLAB compiler can be used as explained below.

                  Only a limited number of MATLAB sessions can be active at the same time because there are only a limited number of MATLAB research licenses available on the UGent MATLAB license server. If each job would need a license, licenses would quickly run out.

                  "}, {"location": "MATLAB/#how-to-compile-matlab-code", "title": "How to compile MATLAB code", "text": "

                  Compiling MATLAB code can only be done from the login nodes, because only the login nodes can access the MATLAB license server (workernodes on clusters cannot).

                  To access the MATLAB compiler, the MATLAB module should be loaded first. Make sure you are using the same MATLAB version to compile and to run the compiled MATLAB program.

                  $ module avail MATLAB/\n----------------------/apps/gent/RHEL8/zen2-ib/modules/all----------------------\n   MATLAB/2021b    MATLAB/2022b-r5 (D)\n$ module load MATLAB/2021b\n

                  After loading the MATLAB module, the mcc command can be used. To get help on mcc, you can run mcc -?.

                  To compile a standalone application, the -m flag is used (the -v flag means verbose output). To show how mcc can be used, we use the magicsquare example that comes with MATLAB.

                  First, we copy the magicsquare.m example that comes with MATLAB to example.m:

                  cp $EBROOTMATLAB/extern/examples/compiler/magicsquare.m example.m\n

                  To compile a MATLAB program, use mcc -mv:

                  mcc -mv example.m\nOpening log file:  /user/home/gent/vsc400/vsc40000/java.log.34090\nCompiler version: 8.3 (R2021b)\nDependency analysis by REQUIREMENTS.\nParsing file \"/user/home/gent/vsc400/vsc40000/example.m\"\n    (Referenced from: \"Compiler Command Line\").\nDeleting 0 temporary MEX authorization files.\nGenerating file \"/user/home/gent/vsc400/vsc40000/readme.txt\".\nGenerating file \"run_example.sh\".\n
                  "}, {"location": "MATLAB/#libraries", "title": "Libraries", "text": "

                  To compile a MATLAB program that needs a library, you can use the -I library_path flag. This will tell the compiler to also look for files in library_path.

                  It's also possible to use the -a path flag. That will result in all files under the path getting added to the final executable.

                  For example, the command mcc -mv example.m -I examplelib -a datafiles will compile example.m with the MATLAB files in examplelib, and will include all files in the datafiles directory in the binary it produces.

                  "}, {"location": "MATLAB/#memory-issues-during-compilation", "title": "Memory issues during compilation", "text": "

                  If you are seeing Java memory issues during the compilation of your MATLAB program on the login nodes, consider tweaking the default maximum heap size (128M) of Java using the _JAVA_OPTIONS environment variable with:

                  export _JAVA_OPTIONS=\"-Xmx64M\"\n

                  The MATLAB compiler spawns multiple Java processes. Because of the default memory limits that are in effect on the login nodes, this might lead to a crash of the compiler if it's trying to create too many Java processes. If we lower the heap size, more Java processes will be able to fit in memory.

                  Another possible issue is that the heap size is too small. This could result in errors like:

                  Error: Out of memory\n

                  A possible solution is to increase the maximum heap size:

                  export _JAVA_OPTIONS=\"-Xmx512M\"\n
                  "}, {"location": "MATLAB/#multithreading", "title": "Multithreading", "text": "

                  MATLAB can only use the cores in a single workernode (unless the Distributed Computing toolbox is used, see https://nl.mathworks.com/products/distriben.html).

                  The number of workers used by MATLAB for the parallel toolbox can be controlled via the parpool function: parpool(16) will use 16 workers. It's best to specify the number of workers, because otherwise you might not harness the full compute power available (if you have too few workers), or you might negatively impact performance (if you have too many workers). By default, MATLAB uses a fixed number of workers (12).

                  You should use a number of workers that is equal to the number of cores you requested when submitting your job script (the ppn value, see Generic resource requirements). You can determine the right number of workers to use via the following code snippet in your MATLAB program:

                  parpool.m
                  % specify the right number of workers (as many as there are cores available in the job) when creating the parpool\nc = parcluster('local')\npool = parpool(c.NumWorkers)\n

                  See also the parpool documentation.

                  "}, {"location": "MATLAB/#java-output-logs", "title": "Java output logs", "text": "

                  Each time MATLAB is executed, it generates a Java log file in the user's home directory. The output log directory can be changed using:

                  MATLAB_LOG_DIR=<OUTPUT_DIR>\n

                  where <OUTPUT_DIR> is the name of the desired output directory. To create and use a temporary directory for these logs:

                  # create unique temporary directory in $TMPDIR (or /tmp/$USER if $TMPDIR is not defined)\n# instruct MATLAB to use this directory for log files by setting $MATLAB_LOG_DIR\nexport MATLAB_LOG_DIR=$(mktemp -d -p ${TMPDIR:-/tmp/$USER})\n

                  You should remove the directory at the end of your job script:

                  rm -rf $MATLAB_LOG_DIR\n
                  "}, {"location": "MATLAB/#cache-location", "title": "Cache location", "text": "

                  When running, MATLAB will use a cache for performance reasons. The location and size of this cache can be changed through the MCR_CACHE_ROOT and MCR_CACHE_SIZE environment variables.

                  The snippet below would set the maximum cache size to 1024MB and the location to /tmp/testdirectory.

                  export MCR_CACHE_ROOT=/tmp/testdirectory\nexport MCR_CACHE_SIZE=1024M\n

                  So when MATLAB is running, it can fill up to 1024MB of cache in /tmp/testdirectory.

                  "}, {"location": "MATLAB/#matlab-job-script", "title": "MATLAB job script", "text": "

                  All of the tweaks needed to get MATLAB working have been implemented in an example job script. This job script is also available on the HPC.

                  jobscript.sh
                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=1:0:0\n#\n# Example (single-core) MATLAB job script\n#\n\n# make sure the MATLAB version matches with the one used to compile the MATLAB program!\nmodule load MATLAB/2021b\n\n# use temporary directory (not $HOME) for (mostly useless) MATLAB log files\n# subdir in $TMPDIR (if defined, or /tmp otherwise)\nexport MATLAB_LOG_DIR=$(mktemp -d -p  ${TMPDIR:-/tmp})\n\n# configure MATLAB Compiler Runtime cache location & size (1GB)\n# use a temporary directory in /dev/shm (i.e. in memory) for performance reasons\nexport MCR_CACHE_ROOT=$(mktemp -d -p /dev/shm)\nexport MCR_CACHE_SIZE=1024MB\n\n# change to directory where job script was submitted from\ncd $PBS_O_WORKDIR\n\n# run compiled example MATLAB program 'example', provide '5' as input argument to the program\n# $EBROOTMATLAB points to MATLAB installation directory\n./run_example.sh $EBROOTMATLAB 5\n
                  "}, {"location": "VNC/", "title": "Graphical applications with VNC", "text": "

                  VNC is still available at the UGent site, but we encourage our users to replace VNC with the X2Go client. Please see Graphical applications with X2Go for more information.

                  Virtual Network Computing is a graphical desktop sharing system that enables you to interact with graphical software running on the HPC infrastructure from your own computer.

                  Please carefully follow the instructions below, since the procedure to connect to a VNC server running on the HPC infrastructure is not trivial, due to security constraints.

                  "}, {"location": "VNC/#starting-a-vnc-server", "title": "Starting a VNC server", "text": "

                  First log in on the login node (see First time connection to the HPC infrastructure), then start vncserver with:

                  $ vncserver -geometry 1920x1080 -localhost\nYou will require a password to access your desktops.\n\nPassword: <enter a secure password>\nVerify: <enter the same password>\nWould you like to enter a view-only password (y/n)? n\nA view-only password is not used\n\nNew 'gligar07.gastly.os:6 (vsc40000)' desktop is gligar07.gastly.os:6\n\nCreating default startup script /user/home/gent/vsc400/vsc40000.vnc/xstartup\nCreating default config /user/home/gent/vsc400/vsc40000.vnc/config\nStarting applications specified in /user/home/gent/vsc400/vsc40000.vnc/xstartup\nLog file is /user/home/gent/vsc400/vsc40000.vnc/gligar07.gastly.os:6.log\n

                  When prompted for a password, make sure to enter a secure password: if someone can guess your password, they will be able to do anything with your account that you can!

                  Note down the details in bold: the hostname (in the example: gligar07.gastly.os) and the (partial) port number (in the example: 6).

                  It's important to remember that VNC sessions are persistent. They survive network problems and (unintended) connection loss. This means you can log out and go home without a problem (like the terminal equivalents screen or tmux). This also means you don't have to start vncserver each time you want to connect.

                  "}, {"location": "VNC/#list-running-vnc-servers", "title": "List running VNC servers", "text": "

                  You can get a list of running VNC servers on a node with

                  $ vncserver -list\nTigerVNC server sessions:\n\nX DISPLAY # PROCESS ID\n:6          30713\n

                  This only displays the running VNC servers on the login node you run the command on.

                  To see what login nodes you are running a VNC server on, you can run the ls .vnc/*.pid command in your home directory: the files shown have the hostname of the login node in the filename:

                  $ cd $HOME\n$ ls .vnc/*.pid\n.vnc/gligar07.gastly.os:6.pid\n.vnc/gligar08.gastly.os:8.pid\n

                  This shows that there is a VNC server running on gligar07.gastly.os on port 5906 and another one running on gligar08.gastly.os on port 5908 (see also Determining the source/destination port).

                  "}, {"location": "VNC/#connecting-to-a-vnc-server", "title": "Connecting to a VNC server", "text": "

                  The VNC server runs on a login node (in the example above, on gligar07.gastly.os).

                  In order to access your VNC server, you will need to set up an SSH tunnel from your workstation to this login node (see Setting up the SSH tunnel(s)).

                  Login nodes are rebooted from time to time. You can check that the VNC server is still running on the same node by executing vncserver -list (see also List running VNC servers). If you get an empty list, it means that there is no VNC server running on the login node.

                  To set up the SSH tunnel required to connect to your VNC server, you will need to port forward the VNC port to your workstation.

                  The host is localhost, which means \"your own computer\": we set up an SSH tunnel that connects the VNC port on the login node to the same port on your local computer.

                  "}, {"location": "VNC/#determining-the-sourcedestination-port", "title": "Determining the source/destination port", "text": "

                  The destination port is the port on which the VNC server is running (on the login node), which is the sum of 5900 and the partial port number we noted down earlier (6); in the running example, that is 5906.

                  The source port is the port you will be connecting to with your VNC client on your workstation. Although you can use any (free) port for this, we strongly recommend using the same value as the destination port.

                  So, in our running example, both the source and destination ports are 5906.

                  "}, {"location": "VNC/#picking-an-intermediate-port-to-connect-to-the-right-login-node", "title": "Picking an intermediate port to connect to the right login node", "text": "

                  In general, you have no control over which login node you will be on when setting up the SSH tunnel from your workstation to login.hpc.ugent.be (see Setting up the SSH tunnel(s)).

                  If the login node you end up on is a different one than the one where your VNC server is running (i.e., gligar08.gastly.os rather than gligar07.gastly.os in our running example), you need to create a second SSH tunnel on the login node you are connected to, in order to \"patch through\" to the correct port on the login node where your VNC server is running.

                  In the remainder of these instructions, we will assume that we are indeed connected to a different login node. Following these instructions should always work, even if you happen to be connected to the correct login node.

                  To set up the second SSH tunnel, you need to pick an (unused) port on the login node you are connected to, which will be used as an intermediate port.

                  Now we have a chicken-egg situation: you need to pick a port before setting up the SSH tunnel from your workstation to gligar07.gastly.os, but only after starting the SSH tunnel will you be able to determine whether the port you picked is actually free or not...

                  In practice, if you pick a random number between 10000 and 30000, you have a good chance that the port will not be used yet.

                  We will proceed with 12345 as intermediate port, but you should pick another value that other people are not likely to pick. If you need some inspiration, run the following command on a Linux server (for example on a login node): echo $RANDOM (but do not use a value lower than 1025).
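
                  As a minimal sketch (not part of the official instructions), you could also generate a random value directly in the suggested range using plain bash arithmetic:

                  # pick a random intermediate port between 10000 and 29999\necho $((10000 + RANDOM % 20000))\n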

                  "}, {"location": "VNC/#setting-up-the-ssh-tunnels", "title": "Setting up the SSH tunnel(s)", "text": ""}, {"location": "VNC/#setting-up-the-first-ssh-tunnel-from-your-workstation-to-loginhpcugentbe", "title": "Setting up the first SSH tunnel from your workstation to login.hpc.ugent.be", "text": "

                  First, we will set up the SSH tunnel from our workstation to login.hpc.ugent.be.

                  Use the settings specified in the sections above:

                  • source port: the port on which the VNC server is running (see Determining the source/destination port);

                  • destination host: localhost;

                  • destination port: use the intermediate port you picked (see Picking an intermediate port to connect to the right login node)

                  Execute the following command to set up the SSH tunnel.

                  ssh -L 5906:localhost:12345  vsc40000@login.hpc.ugent.be\n

                  Replace the source port 5906, destination port 12345 and user ID vsc40000 with your own!

                  With this, we have forwarded port 5906 on our workstation to port 12345 on the login node we are connected to.

                  Again, do not use 12345 as destination port, as this port will most likely be used by somebody else already; replace it with a port number you picked yourself, which is unlikely to be used already (see Picking an intermediate port to connect to the right login node).

                  "}, {"location": "VNC/#checking-whether-the-intermediate-port-is-available", "title": "Checking whether the intermediate port is available", "text": "

                  Before continuing, it's good to check whether the intermediate port that you have picked is actually still available (see Picking an intermediate port to connect to the right login node).

                  You can check using the following command (do not forget to replace 12345 with the value you picked for your intermediate port):

                  netstat -an | grep -i listen | grep tcp | grep 12345\n

                  If you see no matching lines, then the port you picked is still available, and you can continue.

                  If you see one or more matching lines as shown below, you must disconnect the first SSH tunnel, pick a different intermediate port, and set up the first SSH tunnel again using the new value.

                  $ netstat -an | grep -i listen | grep tcp | grep 12345\ntcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN\ntcp6       0      0 :::12345                :::*                    LISTEN\n$\n
                  "}, {"location": "VNC/#setting-up-the-second-ssh-tunnel-to-the-correct-login-node", "title": "Setting up the second SSH tunnel to the correct login node", "text": "

                  In the session on the login node you created by setting up an SSH tunnel from your workstation to login.hpc.ugent.be, you now need to set up the second SSH tunnel to \"patch through\" to the login node where your VNC server is running (gligar07.gastly.os in our running example, see Starting a VNC server).

                  To do this, run the following command:

                  $ ssh -L 12345:localhost:5906 gligar07.gastly.os\n$ hostname\ngligar07.gastly.os\n

                  With this, we are forwarding port 12345 on the login node we are connected to (which is referred to as localhost) through to port 5906 on our target login node (gligar07.gastly.os).

                  Combined with the first SSH tunnel, port 5906 on our workstation is now connected to port 5906 on the login node where our VNC server is running (via the intermediate port 12345 on the login node we ended up on with the first SSH tunnel).

                  Do not forget to change the intermediate port (12345), destination port (5906), and hostname of the login node (gligar07.gastly.os) in the command shown above!

                  As shown above, you can check again using the hostname command whether you are indeed connected to the right login node. If so, you can go ahead and connect to your VNC server (see Connecting using a VNC client).

                  "}, {"location": "VNC/#connecting-using-a-vnc-client", "title": "Connecting using a VNC client", "text": "

                  You can download a free VNC client from https://sourceforge.net/projects/turbovnc/files/. You can download the latest version by clicking the top-most folder that has a version number in it that doesn't also have beta in the version. Then download a file ending in TurboVNC64-2.1.2.dmg (the version number can be different) and execute it.

                  Now start your VNC client and connect to localhost:5906. Make sure you replace the port number 5906 with your own destination port (see Determining the source/destination port).

                  When prompted for a password, use the password you used to set up the VNC server.

                  When prompted for default or empty panel, choose default.

                  If you have an empty panel, you can reset your settings with the following commands:

                  xfce4-panel --quit ; pkill xfconfd\nmkdir ~/.oldxfcesettings\nmv ~/.config/xfce4 ~/.oldxfcesettings\nxfce4-panel\n
                  "}, {"location": "VNC/#stopping-the-vnc-server", "title": "Stopping the VNC server", "text": "

                  The VNC server can be killed by running

                  vncserver -kill :6\n

                  where 6 is the (partial) port number we noted down earlier. If you forgot, you can get it with vncserver -list (see List running VNC servers).

                  "}, {"location": "VNC/#i-forgot-the-password-what-now", "title": "I forgot the password, what now?", "text": "

                  You can reset the password by first stopping the VNC server (see Stopping the VNC server), then removing the .vnc/passwd file (with rm .vnc/passwd) and then starting the VNC server again (see Starting a VNC server).
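
                  Put together, a minimal sketch of these steps, assuming the display number :6 from the running example (replace it with your own):

                  # stop the running VNC server, remove the stored password, start a new server\nvncserver -kill :6\nrm ~/.vnc/passwd\nvncserver -geometry 1920x1080 -localhost\n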

                  "}, {"location": "account/", "title": "Getting an HPC Account", "text": ""}, {"location": "account/#getting-ready-to-request-an-account", "title": "Getting ready to request an account", "text": "

                  All users of AUGent can request an account on the HPC, which is part of the Flemish Supercomputer Centre (VSC).

                  See HPC policies for more information on who is entitled to an account.

                  The VSC, abbreviation of Flemish Supercomputer Centre, is a virtual supercomputer centre. It is a partnership between the five Flemish associations: the Association KU\u00a0Leuven, Ghent University Association, Brussels University Association, Antwerp University Association and the University Colleges-Limburg. The VSC is funded by the Flemish Government.

                  There are two methods for connecting to HPC-UGent infrastructure:

                  • Using a terminal to connect via SSH.
                  • Using the web portal

                  The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

                  If you would like to use a terminal with SSH, as this gives you more flexibility, continue reading. However, if you prefer to use the web portal, you can skip ahead to the following section: Applying for the account. Once you have successfully obtained an account, you can then delve into the details of using the HPC-UGent web portal by reading Using the HPC-UGent web portal.

                  The HPC-UGent infrastructure clusters use public/private key pairs for user authentication (rather than passwords). Technically, the private key is stored on your local computer and always stays there; the public key is stored on the HPC. Access to the HPC is granted to anyone who can prove to have access to the corresponding private key on their local computer.

                  "}, {"location": "account/#how-do-ssh-keys-work", "title": "How do SSH keys work?", "text": "
                  • an SSH public/private key pair can be seen as a lock and a key

                  • the SSH public key is equivalent with a lock: you give it to the VSC and they put it on the door that gives access to your account.

                  • the SSH private key is like a physical key: you don't hand it out to other people.

                  • anyone who has the key (and the optional password) can unlock the door and log in to the account.

                  • the door to your VSC account is special: it can have multiple locks (SSH public keys) attached to it, and you only need to open one lock with the corresponding key (SSH private key) to open the door (log in to the account).

                  Since all VSC clusters use Linux as their main operating system, you will need to get acquainted with using the command-line interface and using the terminal (see tutorial). To open a Terminal window in macOS, open the Finder and choose

                  >> Applications > Utilities > Terminal

                  Before requesting an account, you need to generate a pair of ssh keys. One popular way to do this on macOS is using the OpenSSH client included with macOS, which you can then also use to log on to the clusters.

                  "}, {"location": "account/#test-openssh", "title": "Test OpenSSH", "text": "

                  Secure Shell (ssh) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. In short, ssh provides a secure connection between 2 computers via insecure channels (Network, Internet, telephone lines, ...).

                  \"Secure\" means that:

                  1. the User is authenticated to the System; and

                  2. the System is authenticated to the User; and

                  3. all data is encrypted during transfer.

                  OpenSSH is a FREE implementation of the SSH connectivity protocol. macOS comes with its own implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

                  On all popular Linux distributions, the OpenSSH software is readily available, and most often installed by default. You can check whether the OpenSSH software is installed by opening a terminal and typing:

                  $ ssh -V\nOpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\n

                  To access the clusters and transfer your files, you will use the following commands:

                  1. ssh-keygen: to generate the SSH key pair (public + private key);

                  2. ssh: to open a shell on a remote machine;

                  3. sftp: a secure equivalent of ftp;

                  4. scp: a secure equivalent of the remote copy command rcp.

                  "}, {"location": "account/#generate-a-publicprivate-key-pair-with-openssh", "title": "Generate a public/private key pair with OpenSSH", "text": "

                  A key pair might already be present in the default location inside your home directory. Therefore, we first check if a key is available with the \"list short\" (\"ls\") command:

                  ls ~/.ssh\n

                  If a key-pair is already available, you would normally get:

                  authorized_keys     id_rsa      id_rsa.pub      known_hosts\n

                  Otherwise, the command will show:

                  ls: .ssh: No such file or directory\n

                  You can recognise a public/private key pair when a pair of files has the same name except for the extension \".pub\" added to one of them. In this particular case, the private key is \"id_rsa\" and public key is \"id_rsa.pub\". You may have multiple keys (not necessarily in the directory \"~/.ssh\") if you or your operating system requires this. Be aware that your existing key pair might be too short, or not the right type.

                  You will need to generate a new key pair, when:

                  1. you don't have a key pair yet

                  2. you forgot the passphrase protecting your private key

                  3. your private key was compromised

                  4. your key pair is too short or not the right type

                  For extra security, the private key itself can be encrypted using a \"passphrase\", to prevent anyone from using your private key even when they manage to copy it. You have to \"unlock\" the private key by typing the passphrase. Be sure to never give away your private key, it is private and should stay private. You should not even copy it to one of your other machines, instead, you should create a new public/private key pair for each machine.

                  ssh-keygen -t rsa -b 4096\n

                  This will ask you for a file name to store the private and public key, and a passphrase to protect your private key. It needs to be emphasised that you really should choose the passphrase wisely! The system will ask you for it every time you want to use the private key, that is, every time you want to access the cluster or transfer your files.

                  Without your key pair, you won't be able to apply for a personal VSC account.

                  "}, {"location": "account/#using-an-ssh-agent-optional", "title": "Using an SSH agent (optional)", "text": "

                  Most recent Unix derivatives include by default an SSH agent to keep and manage the user SSH keys. If you use one of these derivatives, you must include the new keys in the SSH agent keyring to be able to connect to the HPC cluster. If not, the SSH client will display an error message (see Connecting) similar to this:

                  Agent admitted failure to sign using the key. \nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

                  This could be fixed using the ssh-add command. You can include the new private keys' identities in your keyring with:

                  ssh-add\n

                  Tip

                  Without extra options, ssh-add adds any key located in the $HOME/.ssh directory, but you can specify the path to the private key as an argument, for example: ssh-add /path/to/my/id_rsa.

                  Check that your key is available from the keyring with:

                  ssh-add -l\n

                  After these changes, the key agent will keep your SSH key so you can connect to the clusters as usual.

                  Tip

                  You should execute the ssh-add command again if you generate a new SSH key.

                  "}, {"location": "account/#applying-for-the-account", "title": "Applying for the account", "text": "

                  Visit https://account.vscentrum.be/

                  You will be redirected to our WAYF (Where Are You From) service where you have to select your \"Home Organisation\".

                  Select \"UGent\" in the dropdown box and optionally select \"Save my preference\" and \"permanently\".

                  Click Confirm

                  You will now be taken to the authentication page of your institute.

                  You will now have to log in with CAS using your UGent account.

                  You either have a login name of maximum 8 characters, or a (non-UGent) email address if you are an external user. In case of problems with your UGent password, please visit: https://password.ugent.be/. After logging in, you may be requested to share your information. Click \"Yes, continue\".

                  After you log in using your UGent login and password, you will be asked to upload the file that contains your public key, i.e., the file \"id_rsa.pub\" which you have generated earlier. Make sure that your public key is actually accepted for upload, because if it is in a wrong format, wrong type or too short, then it will be refused.

                  This file has been stored in the directory \"~/.ssh/\".

                  Tip

                  As \".ssh\" is an invisible directory, the Finder will not show it by default. The easiest way to access the folder, is by pressing Cmd+Shift+G (or Cmd+Shift+.), which will allow you to enter the name of a directory, which you would like to open in Finder. Here, type \"~/.ssh\" and press enter.

                  After you have uploaded your public key, you will receive an e-mail with a link to confirm your e-mail address. After confirming your e-mail address, the VSC staff will review and, if applicable, approve your account.

                  "}, {"location": "account/#welcome-e-mail", "title": "Welcome e-mail", "text": "

                  Within one day, you should receive a Welcome e-mail with your VSC account details.

                  Dear (Username), \nYour VSC-account has been approved by an administrator.\nYour vsc-username is vsc40000\n\nYour account should be fully active within one hour.\n\nTo check or update your account information please visit\nhttps://account.vscentrum.be/\n\nFor further info please visit https://www.vscentrum.be/user-portal\n\nKind regards,\n-- The VSC administrators\n

                  Now, you can start using the HPC. You can always look up your VSC id later by visiting https://account.vscentrum.be.

                  "}, {"location": "account/#adding-multiple-ssh-public-keys-optional", "title": "Adding multiple SSH public keys (optional)", "text": "

                  In case you are connecting from different computers to the login nodes, it is advised to use separate SSH public keys to do so. You should follow these steps.

                  1. Create a new public/private SSH key pair from the new computer. Repeat the process described in section\u00a0Generate a public/private key pair with OpenSSH.

                  2. Go to https://account.vscentrum.be/django/account/edit

                  3. Upload the new SSH public key using the Add public key section. Make sure that your public key is actually saved, because a public key will be refused if it is too short, wrong type, or in a wrong format.

                  4. (optional) If you lost your key, you can delete the old key on the same page. You should keep at least one valid public SSH key in your account.

                  5. Take into account that it will take some time before the new SSH public key is active in your account on the system; waiting for 15-30 minutes should be sufficient.

                  "}, {"location": "account/#computation-workflow-on-the-hpc", "title": "Computation Workflow on the HPC", "text": "

                  A typical Computation workflow will be:

                  1. Connect to the HPC

                  2. Transfer your files to the HPC

                  3. Compile your code and test it

                  4. Create a job script

                  5. Submit your job

                  6. Wait while

                    1. your job gets into the queue

                    2. your job gets executed

                    3. your job finishes

                  7. Move your results

                  We'll take you through the different tasks one by one in the following chapters.

                  "}, {"location": "alphafold/", "title": "AlphaFold", "text": ""}, {"location": "alphafold/#what-is-alphafold", "title": "What is AlphaFold?", "text": "

                  AlphaFold is an AI system developed by DeepMind that predicts a protein\u2019s 3D structure from its amino acid sequence. It aims to achieve accuracy competitive with experimental methods.

                  See https://www.vscentrum.be/alphafold for more information; there you can also find a recording of a getting-started video if you prefer that.

                  "}, {"location": "alphafold/#documentation-extra-material", "title": "Documentation & extra material", "text": "

                  This chapter focuses specifically on the use of AlphaFold on the HPC-UGent infrastructure. It is intended to augment the existing AlphaFold documentation rather than replace it. It is therefore recommended to first familiarize yourself with AlphaFold. The following resources can be helpful:

                  • AlphaFold website: https://alphafold.com/
                  • AlphaFold repository: https://github.com/deepmind/alphafold/tree/main
                  • AlphaFold FAQ: https://alphafold.com/faq
                  • VSC webpage about AlphaFold: https://www.vscentrum.be/alphafold
                  • Introductory course on AlphaFold by VIB: https://elearning.vib.be/courses/alphafold
                  • \"Getting Started with AlphaFold\" presentation by Kenneth Hoste (HPC-UGent)
                    • recording available on YouTube
                    • slides available here (PDF)
                    • see also https://www.vscentrum.be/alphafold
                  "}, {"location": "alphafold/#using-alphafold-on-hpc-ugent-infrastructure", "title": "Using AlphaFold on HPC-UGent infrastructure", "text": "

                  Several different versions of AlphaFold are installed on both the CPU and GPU HPC-UGent Tier-2 clusters, see the output of module avail AlphaFold. If you run this command on a GPU cluster, additional CUDA modules will show up:

                  $ module avail AlphaFold\n\n------------ /apps/gent/RHEL8/cascadelake-volta-ib/modules/all -------------\n   AlphaFold/2.0.0-fosscuda-2020b\n   AlphaFold/2.1.1-fosscuda-2020b\n   AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1\n   AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1\n   AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\n--------------- /apps/gent/RHEL8/cascadelake-ib/modules/all ----------------\n   AlphaFold/2.0.0-foss-2020b    AlphaFold/2.3.1-foss-2022a\n   AlphaFold/2.1.2-foss-2021a    AlphaFold/2.3.4-foss-2022a-ColabFold (D)\nAlphaFold/2.2.2-foss-2021a\n

                  To use AlphaFold, you should load a particular module, for example:

                  module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n

                  We strongly advise loading a specific version of an AlphaFold module, so you know exactly which version is being used.

                  Warning

                  When using AlphaFold, you should submit jobs to a GPU cluster for better performance, see GPU clusters. Later in this chapter, you will find a comparison between running AlphaFold on CPUs or GPUs.

                  Multiple revisions of the large database (~2.5TB) that is also required to run AlphaFold have been made available on the HPC-UGent infrastructure in a central location (/arcanine/scratch/gent/apps/AlphaFold), so you do not have to download it yourself.

                  $ ls /arcanine/scratch/gent/apps/AlphaFold\n20210812  20211201  20220701  20230310\n

                  The directories located there indicate when the data was downloaded, which leaves room for providing updated datasets later.

                  As of writing this documentation, the latest version is 20230310.

                  Info

                  The arcanine scratch shared filesystem is powered by fast SSD disks, which is recommended for the AlphaFold data, because of random access I/O patterns. See Pre-defined user directories to get more info about the arcanine filesystem.

                  The AlphaFold installations we provide have been modified a bit to facilitate usage on the HPC-UGent infrastructure.

                  "}, {"location": "alphafold/#setting-up-the-environment", "title": "Setting up the environment", "text": "

                  The location to the AlphaFold data can be specified via the $ALPHAFOLD_DATA_DIR environment variable, so you should define this variable in your AlphaFold job script:

                  export ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n

                  Use newest version

                  Do not forget to replace 20230310 with a more up to date version if available.

                  "}, {"location": "alphafold/#running-alphafold", "title": "Running AlphaFold", "text": "

                  AlphaFold provides a script called run_alphafold.py.

                  A symbolic link named alphafold that points to this script is included, so you can just use alphafold instead of run_alphafold.py or python run_alphafold.py after loading the AlphaFold module.

                  The run_alphafold.py script has also been slightly modified such that defining the $ALPHAFOLD_DATA_DIR (see above) is sufficient to pick up all the data provided in that location, so you don't need to use options like --data_dir to specify the location of the data.

                  Similarly, the script was also tweaked such that the locations of commands like hhblits, hhsearch, jackhmmer and kalign are already correctly set, so options like --hhblits_binary_path are not required.

                  For more information about the script and options see this section in the official README.

                  READ README

                  It is strongly advised to read the official README provided by DeepMind before continuing.

                  "}, {"location": "alphafold/#controlling-core-count-for-hhblits-and-jackhmmer", "title": "Controlling core count for hhblits and jackhmmer", "text": "

                  The Python scripts that are used to run hhblits and jackhmmer have been tweaked so you can control how many cores are used for these tools, rather than hardcoding it to 4 and 8 cores, respectively.

                  Using the $ALPHAFOLD_HHBLITS_N_CPU environment variable, you can specify how many cores should be used for running hhblits; the default of 4 cores will be used if $ALPHAFOLD_HHBLITS_N_CPU is not defined.

                  Likewise for jackhmmer, the core count can be controlled via $ALPHAFOLD_JACKHMMER_N_CPU.
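
                  For example, a minimal sketch that sets both variables in a job script (the value 8 is purely illustrative; match it to the number of cores you requested for your job):

                  # let hhblits and jackhmmer use 8 cores each instead of the defaults (4 and 8)\nexport ALPHAFOLD_HHBLITS_N_CPU=8\nexport ALPHAFOLD_JACKHMMER_N_CPU=8\n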

                  Info

                  Tweaking this might not yield significant benefits, as we have noticed that these tools may exhibit slower performance when utilizing more than 4/8 cores (though this behavior could vary based on the workload).

                  "}, {"location": "alphafold/#cpugpu-comparison", "title": "CPU/GPU comparison", "text": "

                  The provided timings were obtained by executing the T1050.fasta example, as outlined in the AlphaFold README. The corresponding job scripts are available here.

                  Using --db_preset=full_dbs, the following runtime data was collected:

                  • CPU-only, on doduo, using 24 cores (1 node): 9h 9min
                  • CPU-only, on doduo, using 96 cores (1 full node): 12h 22min
                  • GPU on joltik, using 1 V100 GPU + 8 cores: 2h 20min
                  • GPU on joltik, using 2 V100 GPUs + 16 cores: 2h 16min

                  This highlights a couple of important attention points:

                  • Running AlphaFold on GPU is significantly faster than CPU-only (close to 4x faster for this particular example).
                  • Using more CPU cores may lead to longer runtimes, so be careful with using full nodes when running AlphaFold CPU-only.
                  • Using multiple GPUs results in barely any speedup (for this particular T1050.fasta example).

                  With --db_preset=casp14, it is clearly more demanding:

                  • On doduo, with 24 cores (1 node): still running after 48h...
                  • On joltik, 1 V100 GPU + 8 cores: 4h 48min

                  This highlights the difference between CPU and GPU performance even more.

                  "}, {"location": "alphafold/#example-scenario", "title": "Example scenario", "text": "

                  The following example comes from the official Examples section in the Alphafold README. The run command is slightly different (see above: Running AlphaFold).

                  Do not forget to set up the environment (see above: Setting up the environment).

                  "}, {"location": "alphafold/#folding-a-monomer", "title": "Folding a monomer", "text": "

                  Say we have a monomer with the sequence <SEQUENCE>. Create a file monomer.fasta with the following content:

                  >sequence_name\n<SEQUENCE>\n

                  Then run the following command in the same directory:

                  alphafold --fasta_paths=monomer.fasta \\\n--max_template_date=2021-11-01 \\\n--model_preset=monomer \\\n--output_dir=.\n

                  See AlphaFold output, for information about the outputs.

                  Info

                  For more scenarios see the example section in the official README.

                  "}, {"location": "alphafold/#example-jobscripts", "title": "Example jobscripts", "text": "

                  The following two example job scripts can be used as a starting point for running AlphaFold.

                  The main difference between using a GPU or CPU in a job script is what module to load. For running AlphaFold on GPU, use an AlphaFold module that mentions CUDA (or cuda), for example AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0.

                  To run the job scripts you need to create a file named T1050.fasta with the following content:

                  >T1050 A7LXT1, Bacteroides Ovatus, 779 residues|\nMASQSYLFKHLEVSDGLSNNSVNTIYKDRDGFMWFGTTTGLNRYDGYTFKIYQHAENEPGSLPDNYITDIVEMPDGRFWINTARGYVLFDKERDYFITDVTGFMKNLESWGVPEQVFVDREGNTWLSVAGEGCYRYKEGGKRLFFSYTEHSLPEYGVTQMAECSDGILLIYNTGLLVCLDRATLAIKWQSDEIKKYIPGGKTIELSLFVDRDNCIWAYSLMGIWAYDCGTKSWRTDLTGIWSSRPDVIIHAVAQDIEGRIWVGKDYDGIDVLEKETGKVTSLVAHDDNGRSLPHNTIYDLYADRDGVMWVGTYKKGVSYYSESIFKFNMYEWGDITCIEQADEDRLWLGTNDHGILLWNRSTGKAEPFWRDAEGQLPNPVVSMLKSKDGKLWVGTFNGGLYCMNGSQVRSYKEGTGNALASNNVWALVEDDKGRIWIASLGGGLQCLEPLSGTFETYTSNNSALLENNVTSLCWVDDNTLFFGTASQGVGTMDMRTREIKKIQGQSDSMKLSNDAVNHVYKDSRGLVWIATREGLNVYDTRRHMFLDLFPVVEAKGNFIAAITEDQERNMWVSTSRKVIRVTVASDGKGSYLFDSRAYNSEDGLQNCDFNQRSIKTLHNGIIAIGGLYGVNIFAPDHIRYNKMLPNVMFTGLSLFDEAVKVGQSYGGRVLIEKELNDVENVEFDYKQNIFSVSFASDNYNLPEKTQYMYKLEGFNNDWLTLPVGVHNVTFTNLAPGKYVLRVKAINSDGYVGIKEATLGIVVNPPFKLAAALQHHHHHH\n
                  source: https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence

                  "}, {"location": "alphafold/#job-script-for-running-alphafold-on-gpu", "title": "Job script for running AlphaFold on GPU", "text": "

                  Job script that runs AlphaFold on GPU using 1 V100 GPU + 8 cores.

                  Swap to the joltik GPU before submitting it:

                  module swap cluster/joltik\n
                  AlphaFold-gpu-joltik.sh
                  #!/bin/bash\n#PBS -N AlphaFold-gpu-joltik\n#PBS -l nodes=1:ppn=8,gpus=1\n#PBS -l walltime=10:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\n\necho \"Output available in $WORKDIR\"\n
                  "}, {"location": "alphafold/#job-script-for-running-alphafold-cpu-only", "title": "Job script for running AlphaFold CPU-only", "text": "

                  Jobscript that runs AlphaFold on CPU using 24 cores on one node.

                  AlphaFold-cpu-doduo.sh
                  #!/bin/bash\n#PBS -N AlphaFold-cpu-doduo\n#PBS -l nodes=1:ppn=24\n#PBS -l walltime=72:0:0\n\nmodule load AlphaFold/2.3.1-foss-2022a\n\nexport ALPHAFOLD_DATA_DIR=/arcanine/scratch/gent/apps/AlphaFold/20230310\n\nWORKDIR=$VSC_SCRATCH/$PBS_JOBNAME-$PBS_JOBID\nmkdir -p $WORKDIR\n\n# download T1050.fasta via https://www.predictioncenter.org/casp14/target.cgi?target=T1050&view=sequence\ncp -a $PBS_O_WORKDIR/T1050.fasta $WORKDIR/\n\ncd $WORKDIR\n\nalphafold --fasta_paths=T1050.fasta --max_template_date=2020-05-14 --db_preset=full_dbs --output_dir=$PWD\necho \"Output available in $WORKDIR\"\n

                  In case of problems or questions, don't hesitate to contact us at hpc@ugent.be.

                  "}, {"location": "apptainer/", "title": "Apptainer (formally known as Singularity)", "text": ""}, {"location": "apptainer/#what-is-apptainer", "title": "What is Apptainer?", "text": "

                  Apptainer is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

                  One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Apptainer/Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

                  For more general information about the use of Apptainer, please see the official documentation at https://apptainer.org/docs/.

                  This documentation only covers aspects of using Apptainer on the HPC-UGent infrastructure.

                  "}, {"location": "apptainer/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

                  Some restrictions have been put in place on the use of Apptainer. This is mainly done for performance reasons and to avoid that the use of Apptainer impacts other users on the system.

                  The Apptainer/Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided apptainer command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.

                  In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

                  If these limitations are a problem for you, please let us know via hpc@ugent.be.

                  "}, {"location": "apptainer/#available-filesystems", "title": "Available filesystems", "text": "

                  All HPC-UGent shared filesystems will be readily available in an Apptainer/Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.
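
                  As a minimal illustration, assuming you already have a container image named image.sif on your scratch filesystem (the image name is hypothetical), you can verify this from within the container:

                  # list your VSC data directory from inside the container\napptainer exec $VSC_SCRATCH/image.sif ls $VSC_DATA\n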

                  "}, {"location": "apptainer/#apptainersingularity-images", "title": "Apptainer/Singularity Images", "text": ""}, {"location": "apptainer/#creating-apptainersingularity-images", "title": "Creating Apptainer/Singularity images", "text": "

                  Creating new Apptainer/Singularity images or converting Docker images, by default, requires admin privileges, which is obviously not available on the HPC-UGent infrastructure. However, if you use the --fakeroot option, you can make new Apptainer/Singularity images or convert Docker images.

                  Due to the nature of the --fakeroot option, we recommend writing your Apptainer/Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination. An example to make an Apptainer/Singularity container image:

                  # avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# instruct Apptainer to use temp dir on local filessytem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\n# specified temp dir must exist, so create it\nmkdir -p $APPTAINER_TMPDIR\n# convert Docker container to Apptainer container image\napptainer build --fakeroot /tmp/$USER/tf.sif docker://nvcr.io/nvidia/tensorflow:21.10-tf1-py3\n# mv container image to $VSC_SCRATCH\nmv /tmp/$USER/tf.sif $VSC_SCRATCH/tf.sif\n
                  "}, {"location": "apptainer/#converting-docker-images", "title": "Converting Docker images", "text": "

                  For more information on converting existing Docker images to Apptainer/Singularity images, see https://apptainer.org/docs/user/main/docker_and_oci.html.

                  We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.

                  "}, {"location": "apptainer/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

                  Copy testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

                  cp /apps/gent/tutorials/Singularity/CentOS7_EasyBuild.img $VSC_SCRATCH/\n

                  Create a job script like:

                  #!/bin/sh\n\n#PBS -o apptainer.output\n#PBS -e apptainer.error\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=12:00:00\n\n\napptainer exec $VSC_SCRATCH/CentOS7_EasyBuild.img ~/my_script.sh\n

                  Create an example my_script.sh:

                  #!/bin/bash\n\n# prime factors\nfactor 1234567\n
                  "}, {"location": "apptainer/#tensorflow-example", "title": "Tensorflow example", "text": "

                  We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to an Apptainer/Singularity image yourself.
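
                  A minimal sketch of such a conversion, mirroring the build example shown earlier (the latest tag is an assumption; pick the tag you need from Docker Hub):

                  # avoid that Apptainer uses $HOME/.cache\nexport APPTAINER_CACHEDIR=/tmp/$USER/apptainer/cache\n# use a temporary directory on the local filesystem\nexport APPTAINER_TMPDIR=/tmp/$USER/apptainer/tmpdir\nmkdir -p $APPTAINER_TMPDIR\n# convert the Docker image to an Apptainer/Singularity image (tag 'latest' is an assumption)\napptainer build --fakeroot /tmp/$USER/tensorflow.sif docker://tensorflow/tensorflow:latest\n# move the image to $VSC_SCRATCH\nmv /tmp/$USER/tensorflow.sif $VSC_SCRATCH/tensorflow.sif\n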

                  Copy testing image from /apps/gent/tutorials to $VSC_SCRATCH:

                  cp /apps/gent/tutorials/Singularity/Ubuntu14.04_tensorflow.img $VSC_SCRATCH/\n
                  #!/bin/sh\n#\n#\n#PBS -o tensorflow.output\n#PBS -e tensorflow.error\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=12:00:00\n#\n\napptainer exec $VSC_SCRATCH/Ubuntu14.04_tensorflow.img python ~/linear_regression.py\n

                  You can download linear_regression.py from the official Tensorflow repository.

                  "}, {"location": "apptainer/#mpi-example", "title": "MPI example", "text": "

                  It is also possible to execute MPI jobs within a container, but the following requirements apply:

                  • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

                  • Use modules within the container (install the environment-modules or lmod package in your container)

                  • Load the required module(s) before apptainer execution.

                  • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

                  Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

                  cp /apps/gent/tutorials/Singularity/Debian8_UGentMPI.img $VSC_SCRATCH/\n

                   For example, to compile an MPI example:

                  module load intel\napptainer shell $VSC_SCRATCH/Debian8_UGentMPI.img\nexport LANG=C\nexport C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH\nmpiicc ompi/examples/ring_c.c -o ring_debian\nexit\n

                  Example MPI job script:

                  #!/bin/sh\n\n#PBS -N mpi\n#PBS -o apptainermpi.output\n#PBS -e apptainermpi.error\n#PBS -l nodes=2:ppn=15\n#PBS -l walltime=12:00:00\n\nmodule load intel vsc-mympirun\nmympirun --impi-fallback apptainer exec $VSC_SCRATCH/Debian8_UGentMPI.img ~/ring_debian\n
                  "}, {"location": "best_practices/", "title": "Best Practices", "text": ""}, {"location": "best_practices/#sec:general-best-practices", "title": "General Best Practices", "text": "
                  1. Before starting, you should always check:

                    • Are there any errors in the script?

                    • Are the required modules loaded?

                    • Is the correct executable used?

                   2. Check your compute requirements upfront, and request the correct resources in your batch job script.

                    • Number of requested cores

                    • Amount of requested memory

                    • Requested network type

                   3. Check your jobs at runtime. You can log in to the node and check the proper execution of your jobs with, e.g., top or vmstat. Alternatively, you can run an interactive job (qsub -I).

                  4. Try to benchmark the software for scaling issues when using MPI or for I/O issues.

                  5. Use the scratch file system ($VSC_SCRATCH_NODE, which is mapped to the local /tmp) whenever possible. Local disk I/O is always much faster as it does not have to use the network.

                   6. When your job starts, it will log on to the compute node(s) and start executing the commands in the job script. It will start in your home directory $VSC_HOME, so changing to the directory you submitted from with cd $PBS_O_WORKDIR is the first thing that needs to be done. You will have your default environment, so don't forget to load the software with module load. A minimal job script skeleton illustrating this is shown after this list.

                  7. Submit your job and wait (be patient) ...

                  8. Submit small jobs by grouping them together. See chapter Multi-job submission for how this is done.

                  9. The runtime is limited by the maximum walltime of the queues.

                  10. Requesting many processors could imply long queue times. It's advised to only request the resources you'll be able to use.

                  11. For all multi-node jobs, please use a cluster that has an \"InfiniBand\" interconnect network.

                  12. And above all, do not hesitate to contact the HPC staff at hpc@ugent.be. We're here to help you.
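
                   As a minimal sketch illustrating points 2 and 6 above (the job name, resource requests, module and program name are placeholders to adapt to your own application):

                   #!/bin/bash\n#PBS -N my_job\n#PBS -l nodes=1:ppn=4\n#PBS -l walltime=1:00:00\n#PBS -l mem=4gb\n\n# move to the directory from which the job was submitted\ncd $PBS_O_WORKDIR\n\n# load the required software\nmodule load foss\n\n# run the (placeholder) executable\n./my_program\n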

                  "}, {"location": "compiling_your_software/", "title": "Compiling and testing your software on the HPC", "text": "

                   All nodes in the HPC cluster run the \"RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty)\" operating system, which is a specific version of Red Hat Enterprise Linux. This means that all the software programs (executables) that the end-user wants to run on the HPC first must be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). It also means that you first have to install all the required external software packages on the HPC.

                   The most commonly used compilers are already pre-installed on the HPC and can be used straight away. Many popular external software packages, which are regularly used in the scientific community, are also pre-installed.

                  "}, {"location": "compiling_your_software/#check-the-pre-installed-software-on-the-hpc", "title": "Check the pre-installed software on the HPC", "text": "

                   In order to check all the available modules and their version numbers, which are pre-installed on the HPC, enter:

                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                   Or, when you want to check whether some specific software, a compiler or an application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

                   When your required application is not available on the HPC, please contact any HPC member. Be aware of potential \"License Costs\". \"Open Source\" software is often preferred.

                  "}, {"location": "compiling_your_software/#porting-your-code", "title": "Porting your code", "text": "

                   To port a software program is to translate it from the operating system in which it was developed (e.g., Windows 7) to another operating system (e.g., Red Hat Enterprise Linux on our HPC) so that it can be used there. Porting implies some degree of effort, but not nearly as much as redeveloping the program in the new environment. It all depends on how \"portable\" you wrote your code.

                  In the simplest case the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way, which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different directories.

                   In some cases, software usually described as \"portable software\" is specifically designed to run on different computers with compatible operating systems and processors without any machine-dependent installation; it is sufficient to transfer specified directories and their contents. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g., the registry on machines running MS Windows).

                  Software, which is not portable in this sense, will have to be transferred with modifications to support the environment on the destination machine.

                  Whilst programming, it would be wise to stick to certain standards (e.g., ISO/ANSI/POSIX). This will ease the porting of your code to other platforms.

                  Porting your code to the RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty) platform is the responsibility of the end-user.

                  "}, {"location": "compiling_your_software/#compiling-and-building-on-the-hpc", "title": "Compiling and building on the HPC", "text": "

                  Compiling refers to the process of translating code written in some programming language, e.g., Fortran, C, or C++, to machine code. Building is similar, but includes gluing together the machine code resulting from different source files into an executable (or library). The text below guides you through some basic problems typical for small software projects. For larger projects it is more appropriate to use makefiles or even an advanced build system like CMake.

                  All the HPC nodes run the same version of the Operating System, i.e. RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). So, it is sufficient to compile your program on any compute node. Once you have generated an executable with your compiler, this executable should be able to run on any other compute-node.

                  A typical process looks like:

                   1. Copy your software to the login node of the HPC;

                   2. Start an interactive session on a compute node;

                   3. Compile it;

                   4. Test it locally;

                   5. Generate your job scripts;

                   6. Test it on the HPC;

                   7. Run it (in parallel).

                  We assume you've copied your software to the HPC. The next step is to request your private compute node.

                  $ qsub -I\nqsub: waiting for job 123456 to start\n
                  "}, {"location": "compiling_your_software/#compiling-a-sequential-program-in-c", "title": "Compiling a sequential program in C", "text": "

                  Go to the examples for chapter Compiling and testing your software on the HPC and load the foss module:

                  cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\nmodule load foss\n

                  We now list the directory and explore the contents of the \"hello.c\" program:

                  $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

                  hello.c
                   /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Print 500 numbers, whilst waiting 1 second in between\n */\n#include <stdio.h>\n#include <unistd.h>\n\nint main( int argc, char *argv[] )\n{\n    int i;\n    for (i=0; i<500; i++)\n    {\n        printf(\"Hello #%d\\n\", i);\n        fflush(stdout);\n        sleep(1);\n    }\n    return 0;\n}\n

                  The \"hello.c\" program is a simple source file, written in C. It'll print 500 times \"Hello #<num>\", and waits one second between 2 printouts.

                  We first need to compile this C-file into an executable with the gcc-compiler.

                   First, check the command line options for \"gcc\" (the GNU C compiler), then compile. The -O2 option enables a moderate level of optimization: it instructs the compiler to optimize the code for better performance without significantly increasing compilation time. Finally, list the contents of the directory again:

                   $ gcc --help\n$ gcc -O2 -o hello hello.c\n$ ls -l\ntotal 512\n-rwxrwxr-x 1 vsc40000 7116 Sep 16 11:43 hello*\n-rw-r--r-- 1 vsc40000  214 Sep 16 09:42 hello.c\n-rwxr-xr-x 1 vsc40000  130 Sep 16 11:39 hello.pbs*\n

                  A new file \"hello\" has been created. Note that this file has \"execute\" rights, i.e., it is an executable. More often than not, calling gcc -- or any other compiler for that matter -- will provide you with a list of errors and warnings referring to mistakes the programmer made, such as typos, syntax errors. You will have to correct them first in order to make the code compile. Warnings pinpoint less crucial issues that may relate to performance problems, using unsafe or obsolete language features, etc. It is good practice to remove all warnings from a compilation process, even if they seem unimportant so that a code change that produces a warning does not go unnoticed.

                   Let's test this program on the local compute node, which is at your disposal after the qsub -I command:

                  $ ./hello\nHello #0\nHello #1\nHello #2\nHello #3\nHello #4\n...\n

                   It seems to work; now run it on the HPC:

                  qsub hello.pbs\n

                  "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-cmpi", "title": "Compiling a parallel program in C/MPI", "text": "
                  cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

                  List the directory and explore the contents of the \"mpihello.c\" program:

                   $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 214 Sep 16 09:42 hello.c\n-rw-r--r-- 1 vsc40000 130 Sep 16 11:39 hello.pbs*\n-rw-r--r-- 1 vsc40000 359 Sep 16 13:55 mpihello.c\n-rw-r--r-- 1 vsc40000 304 Sep 16 13:55 mpihello.pbs\n

                  mpihello.c
                   /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Example program, to compile with MPI\n */\n#include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char **argv)\n{\n    int node, i;\n    float f;\n\n    MPI_Init(&argc, &argv);\n    MPI_Comm_rank(MPI_COMM_WORLD, &node);\n\n    printf(\"Hello World from Node %d.\\n\", node);\n    /* some busy work to keep the process occupied for a while */\n    for (i=0; i<=100000; i++)\n        f = i*2.718281828*i + i + i*3.141592654;\n    (void)f;  /* suppress unused-variable warning */\n\n    MPI_Finalize();\n    return 0;\n}\n

                  The \"mpi_hello.c\" program is a simple source file, written in C with MPI library calls.

                   First, check the command line options for \"mpicc\" (the GNU C compiler with MPI extensions), then compile and list the contents of the directory again:

                  mpicc --help\nmpicc -o mpihello mpihello.c\nls -l\n

                  A new file \"hello\" has been created. Note that this program has \"execute\" rights.

                  Let's test this program on the \"login\" node first:

                  $ ./mpihello\nHello World from Node 0.\n

                   It seems to work; now run it on the HPC:

                  qsub mpihello.pbs\n
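
                   The mpihello.pbs job script is provided in the examples directory; as a sketch, such a script could look like the following (the provided file may differ in its exact resource requests and launcher):

                   #!/bin/bash\n#PBS -N mpihello\n#PBS -l nodes=2:ppn=4\n#PBS -l walltime=00:10:00\n\ncd $PBS_O_WORKDIR\nmodule load foss vsc-mympirun\nmympirun ./mpihello\n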
                  "}, {"location": "compiling_your_software/#compiling-a-parallel-program-in-intel-parallel-studio-cluster-edition", "title": "Compiling a parallel program in Intel Parallel Studio Cluster Edition", "text": "

                  We will now compile the same program, but using the Intel Parallel Studio Cluster Edition compilers. We stay in the examples directory for this chapter:

                  cd ~/examples/Compiling-and-testing-your-software-on-the-HPC\n

                   We will compile this C/MPI file into an executable with the Intel Parallel Studio Cluster Edition. First, clear the modules (purge) and then load the latest \"intel\" module:

                  module purge\nmodule load intel\n

                  Then, compile and list the contents of the directory again. The Intel equivalent of mpicc is mpiicc.

                  mpiicc -o mpihello mpihello.c\nls -l\n

                  Note that the old \"mpihello\" file has been overwritten. Let's test this program on the \"login\" node first:

                  $ ./mpihello\nHello World from Node 0.\n

                   It seems to work; now run it on the HPC:

                  qsub mpihello.pbs\n

                  Note: The AUGent only has a license for the Intel Parallel Studio Cluster Edition for a fixed number of users. As such, it might happen that you have to wait a few minutes before a floating license becomes available for your use.

                  Note: The Intel Parallel Studio Cluster Edition contains equivalent compilers for all GNU compilers. Hereafter the overview for C, C++ and Fortran compilers.

                   Language   Sequential (GNU)   Sequential (Intel)   MPI (GNU)   MPI (Intel)
                   C          gcc                icc                  mpicc       mpiicc
                   C++        g++                icpc                 mpicxx      mpiicpc
                   Fortran    gfortran           ifort                mpif90      mpiifort"}, {"location": "connecting/", "title": "Connecting to the HPC infrastructure", "text": "

                  Before you can really start using the HPC clusters, there are several things you need to do or know:

                   1. You need to log on to one of the login nodes of the cluster using an SSH client, or use the HPC web portal. This will give you command-line access. A standard web browser like Firefox or Chrome will suffice for the web portal.

                  2. Before you can do some work, you'll have to transfer the files that you need from your desktop computer to the cluster. At the end of a job, you might want to transfer some files back.

                  3. Optionally, if you wish to use programs with a graphical user interface, you will need an X-server on your client system and log in to the login nodes with X-forwarding enabled.

                  4. Often several versions of software packages and libraries are installed, so you need to select the ones you need. To manage different versions efficiently, the VSC clusters use so-called modules, so you will need to select and load the modules that you need.

                  "}, {"location": "connecting/#connection-restrictions", "title": "Connection restrictions", "text": "

                  Since March 20th 2020, restrictions are in place that limit from where you can connect to the VSC HPC infrastructure, in response to security incidents involving several European HPC centres.

                  VSC login nodes are only directly accessible from within university networks, and from (most) Belgian commercial internet providers.

                  All other IP domains are blocked by default. If you are connecting from an IP address that is not allowed direct access, you have the following options to get access to VSC login nodes:

                   • Use a VPN connection to connect to the UGent network (recommended). See https://helpdesk.ugent.be/vpn/en/ for more information.

                   • Whitelist your IP address automatically by accessing https://firewall.vscentrum.be and logging in with your UGent account.

                     • While this web connection is active, new SSH sessions can be started.

                    • Active SSH sessions will remain active even when this web page is closed.

                  • Contact your HPC support team (via hpc@ugent.be) and ask them to whitelist your IP range (e.g., for industry access, automated processes).

                  Trying to establish an SSH connection from an IP address that does not adhere to these restrictions will result in an immediate failure to connect, with an error message like:

                  ssh_exchange_identification: read: Connection reset by peer\n
                  "}, {"location": "connecting/#first-time-connection-to-the-hpc-infrastructure", "title": "First Time connection to the HPC infrastructure", "text": "

                   The remaining content in this chapter is primarily focused on people using a terminal with SSH. If you are instead using the web portal, the corresponding chapter might be more helpful: Using the HPC-UGent web portal.

                  If you have any issues connecting to the HPC after you've followed these steps, see Issues connecting to login node to troubleshoot.

                  "}, {"location": "connecting/#connect", "title": "Connect", "text": "

                   Open up a terminal and enter the following command to connect to the HPC. You can open a terminal by navigating to Applications and then Utilities in the Finder and opening Terminal.app, or by entering Terminal in Spotlight Search.

                  ssh vsc40000@login.hpc.ugent.be\n

                  Here, user vsc40000 wants to make a connection to the \"hpcugent\" cluster at UGent via the login node \"login.hpc.ugent.be\", so replace vsc40000 with your own VSC id in the above command.

                  The first time you make a connection to the login node, you will be asked to verify the authenticity of the login node. Please check Warning message when first connecting to new host on how to do this.

                   A possible error message you can get if you previously saved your private key somewhere other than the default location ($HOME/.ssh/id_rsa):

                  Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\n

                  In this case, use the -i option for the ssh command to specify the location of your private key. For example:

                   ssh -i /home/example/my_keys vsc40000@login.hpc.ugent.be\n
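
                   Rather than passing -i every time, you can also record the key location in the SSH configuration file on your local machine. A minimal sketch of ~/.ssh/config, where the host alias \"hpcugent\" is just an example name:

                   Host hpcugent\n    HostName login.hpc.ugent.be\n    User vsc40000\n    IdentityFile /home/example/my_keys\n

                   After adding this, ssh hpcugent will connect with the specified key and user.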

                  Congratulations, you're on the HPC infrastructure now! To find out where you have landed you can print the current working directory:

                  $ pwd\n/user/home/gent/vsc400/vsc40000\n

                  Your new private home directory is \"/user/home/gent/vsc400/vsc40000\". Here you can create your own subdirectory structure, copy and prepare your applications, compile and test them and submit your jobs on the HPC.

                  $ cd /apps/gent/tutorials\n$ ls\nIntro-HPC/\n

                  This directory currently contains all training material for the Introduction to the HPC. More relevant training material to work with the HPC can always be added later in this directory.

                   You can now explore the content of this directory with the \"ls -l\" (list long) and the \"cd\" (change directory) commands.

                  As we are interested in the use of the HPC, move further to Intro-HPC and explore the contents up to 2 levels deep:

                  $ cd Intro-HPC\n$ tree -L 2\n.\n'-- examples\n    |-- Compiling-and-testing-your-software-on-the-HPC\n    |-- Fine-tuning-Job-Specifications\n    |-- Multi-core-jobs-Parallel-Computing\n    |-- Multi-job-submission\n    |-- Program-examples\n    |-- Running-batch-jobs\n    |-- Running-jobs-with-input\n    |-- Running-jobs-with-input-output-data\n    |-- example.pbs\n    '-- example.sh\n9 directories, 5 files\n

                  This directory contains:

                  1. This HPC Tutorial (in either a Mac, Linux or Windows version).

                  2. An examples subdirectory, containing all the examples that you need in this Tutorial, as well as examples that might be useful for your specific applications.

                  cd examples\n

                  Tip

                  Typing cd ex followed by Tab (the Tab-key) will generate the cd examples command. Command-line completion (also tab completion) is a common feature of the bash command line interpreter, in which the program automatically fills in partially typed commands.

                  Tip

                   For more exhaustive tutorials about Linux usage, see Appendix Useful Linux Commands.

                  The first action is to copy the contents of the HPC examples directory to your home directory, so that you have your own personal copy and that you can start using the examples. The \"-r\" option of the copy command will also copy the contents of the sub-directories \"recursively\".

                  cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                   Go to your home directory, check your own private examples directory, ... and start working.

                  cd\nls -l\n

                  Upon connecting you will see a login message containing your last login time stamp and a basic overview of the current cluster utilisation.

                  Last login: Thu Mar 18 13:15:09 2021 from gligarha02.gastly.os\n\n STEVIN HPC-UGent infrastructure status on Mon, 19 Feb 2024 10:00:01\n      cluster         - full - free -  part - total - running - queued\n                        nodes  nodes   free   nodes   jobs      jobs\n -------------------------------------------------------------------------\n           skitty          39      0     26      68      1839     5588\n           joltik           6      0      1      10        29       18\n            doduo          22      0     75     128      1397    11933\n         accelgor           4      3      2       9        18        1\n          donphan           0      0     16      16        16       13\n          gallade           2      0      5      16        19      136\n\n\nFor a full view of the current loads and queues see:\nhttps://hpc.ugent.be/clusterstate/\nUpdates on current system status and planned maintenance can be found on https://www.ugent.be/hpc/en/infrastructure/status\n

                   You can exit the connection at any time by entering:

                  $ exit\nlogout\nConnection to login.hpc.ugent.be closed.\n

                  tip: Setting your Language right

                   You may encounter a warning message similar to the following one while connecting:

                  perl: warning: Setting locale failed.\nperl: warning: Please check that your locale settings:\nLANGUAGE = (unset),\nLC_ALL = (unset),\nLC_CTYPE = \"UTF-8\",\nLANG = (unset)\n    are supported and installed on your system.\nperl: warning: Falling back to the standard locale (\"C\").\n
                  or any other error message complaining about the locale.

                   This means that the correct \"locale\" has not yet been properly specified on your local machine. You can check your current settings by running the locale command:

                  LANG=\nLC_COLLATE=\"C\"\nLC_CTYPE=\"UTF-8\"\nLC_MESSAGES=\"C\"\nLC_MONETARY=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_ALL=\n

                  A locale is a set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language identifier and a region identifier.

                  Note

                   If you try to set a non-supported locale, then it will be automatically set to the default. Currently the default is en_US.UTF-8 or en_US, depending on whether your original (non-supported) locale was UTF-8 or not.

                  Open the .bashrc on your local machine with your favourite editor and add the following lines:

                  $ nano ~/.bashrc\n...\nexport LANGUAGE=\"en_US.UTF-8\"\nexport LC_ALL=\"en_US.UTF-8\"\nexport LC_CTYPE=\"en_US.UTF-8\"\nexport LANG=\"en_US.UTF-8\"\n...\n

                  tip: vi

                   To start entering text in vi: move to the place you want to start entering text with the arrow keys and type \"i\" to switch to insert mode. You can exit vi by pressing ESC and then typing \":wq\". To exit vi without saving your changes, press ESC and type \":q!\".

                  or alternatively (if you are not comfortable with the Linux editors), again on your local machine:

                  echo \"export LANGUAGE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_ALL=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LC_CTYPE=\\\"en_US.UTF-8\\\"\" >> ~/.profile\necho \"export LANG=\\\"en_US.UTF-8\\\"\" >> ~/.profile\n

                  You can now log out, open a new terminal/shell on your local machine and reconnect to the login node, and you should not get these warnings anymore.

                  "}, {"location": "connecting/#transfer-files-tofrom-the-hpc", "title": "Transfer Files to/from the HPC", "text": "

                   Before you can do some work, you'll have to transfer the files you need from your desktop or department to the cluster. At the end of a job, you might want to transfer some files back. The preferred way to transfer files is using scp or sftp via the secure OpenSSH protocol. macOS ships with an implementation of OpenSSH, so you don't need to install any third-party software to use it. Just open a terminal window and jump in!

                  "}, {"location": "connecting/#using-scp", "title": "Using scp", "text": "

                  Secure copy or SCP is a tool (command) for securely transferring files between a local host (= your computer) and a remote host (the HPC). It is based on the Secure Shell (SSH) protocol. The scp command is the equivalent of the cp (i.e., copy) command, but can copy files to or from remote machines.

                  It's easier to copy files directly to $VSC_DATA and $VSC_SCRATCH if you have symlinks to them in your home directory. See the chapter titled \"Uploading/downloading/editing files\", section \"Symlinks for data/scratch\" in the intro to Linux for how to do this.

                  Open an additional terminal window and check that you're working on your local machine.

                  $ hostname\n<local-machine-name>\n

                  If you're still using the terminal that is connected to the HPC, close the connection by typing \"exit\" in the terminal window.

                  For example, we will copy the (local) file \"localfile.txt\" to your home directory on the HPC cluster. We first generate a small dummy \"localfile.txt\", which contains the word \"Hello\". Use your own VSC account, which is something like \"vsc40000\". Don't forget the colon (:) at the end: if you forget it, it will just create a file named vsc40000@login.hpc.ugent.be on your local filesystem. You can even specify where to save the file on the remote filesystem by putting a path after the colon.

                  $ echo \"Hello\" > localfile.txt\n$ ls -l \n...\n-rw-r--r-- 1 user  staff   6 Sep 18 09:37 localfile.txt\n$ scp localfile.txt vsc40000@login.hpc.ugent.be:\nlocalfile.txt     100%   6     0.0KB/s     00:00\n

                  Connect to the HPC via another terminal, print the working directory (to make sure you're in the home directory) and check whether the file has arrived:

                  $ pwd\n/user/home/gent/vsc400/vsc40000\n$ ls -l \ntotal 1536\ndrwxrwxr-x 2\ndrwxrwxr-x 2\ndrwxrwxr-x 10\n-rw-r--r-- 1\n$ cat localfile.txt\nHello\n

                  The scp command can also be used to copy files from the cluster to your local machine. Let us copy the remote file \"intro-HPC-macOS-Gent.pdf\" from your \"docs\" subdirectory on the cluster to your local computer.

                  First, we will confirm that the file is indeed in the \"docs\" subdirectory. In the terminal on the login node, enter:

                  $ cd ~/docs\n$ ls -l\ntotal 1536\n-rw-r--r-- 1 vsc40000 Sep 11 09:53 intro-HPC-macOS-Gent.pdf\n

                  Now we will copy the file to the local machine. On the terminal on your own local computer, enter:

                  $ scp vsc40000@login.hpc.ugent.be:./docs/intro-HPC-macOS-Gent.pdf .\nintro-HPC-macOS-Gent.pdf 100% 725KB 724.6KB/s 00:01\n$ ls -l\ntotal 899\n-rw-r--r-- 1 user staff 741995 Sep 18 09:53\n-rw-r--r-- 1 user staff      6 Sep 18 09:37 localfile.txt\n

                  The file has been copied from the HPC to your local computer.

                  It's also possible to copy entire directories (and their contents) with the -r flag. For example, if we want to copy the local directory dataset to $VSC_SCRATCH, we can use the following command (assuming you've created the scratch symlink):

                  scp -r dataset vsc40000@login.hpc.ugent.be:scratch\n

                  If you don't use the -r option to copy a directory, you will run into the following error:

                  $ scp dataset vsc40000@login.hpc.ugent.be:scratch\ndataset: not a regular file\n
                  "}, {"location": "connecting/#using-sftp", "title": "Using sftp", "text": "

                  The SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer and file management functionalities over any reliable data stream. It was designed as an extension of the Secure Shell protocol (SSH) version 2.0. This protocol assumes that it is run over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.

                   The sftp command is the equivalent of the ftp command, with the difference that it uses the secure ssh protocol to connect to the clusters.

                   One easy way of starting an sftp session is:

                  sftp vsc40000@login.hpc.ugent.be\n

                  Typical and popular commands inside an sftp session are:

                   cd ~/examples/fibo : Move to the examples/fibo subdirectory on the remote machine (i.e., the HPC).
                   ls : Get a list of the files in the current directory on the HPC.
                   get fibo.py : Copy the file \"fibo.py\" from the HPC.
                   get tutorial/HPC.pdf : Copy the file \"HPC.pdf\" from the HPC, which is in the \"tutorial\" subdirectory.
                   lcd test : Move to the \"test\" subdirectory on your local machine.
                   lcd .. : Move up one level in the local directory.
                   lls : Get a local directory listing.
                   put test.py : Copy the local file test.py to the HPC.
                   put test1.py test2.py : Copy the local file test1.py to the HPC and rename it to test2.py.
                   bye : Quit the sftp session.
                   mget *.cc : Copy all the remote files with extension \".cc\" to the local directory.
                   mput *.h : Copy all the local files with extension \".h\" to the HPC."}, {"location": "connecting/#using-a-gui-cyberduck", "title": "Using a GUI (Cyberduck)", "text": "

                  Cyberduck is a graphical alternative to the scp command. It can be installed from https://cyberduck.io.

                  This is the one-time setup you will need to do before connecting:

                  1. After starting Cyberduck, the Bookmark tab will show up. To add a new bookmark, click on the \"+\" sign on the bottom left of the window. A new window will open.

                  2. In the drop-down menu on top, select \"SFTP (SSH File Transfer Protocol)\".

                  3. In the \"Server\" field, type in login.hpc.ugent.be. In the \"Username\" field, type in your VSC account id (this looks like vsc40000).

                  4. Select the location of your SSH private key in the \"SSH Private Key\" field.

                  5. Finally, type in a name for the bookmark in the \"Nickname\" field and close the window by pressing on the red circle in the top left corner of the window.

                  To open the connection, click on the \"Bookmarks\" icon (which resembles an open book) and double-click on the bookmark you just created.

                  "}, {"location": "connecting/#fast-file-transfer-for-large-datasets", "title": "Fast file transfer for large datasets", "text": "

                  See the section on rsync in chapter 5 of the Linux intro manual.

                  "}, {"location": "connecting/#changing-login-nodes", "title": "Changing login nodes", "text": "

                  It can be useful to have control over which login node you are on. However, when you connect to the HPC (High-Performance Computing) system, you are directed to a random login node, which might not be the one where you already have an active session. To address this, there is a way to manually switch your active login node.

                  For instance, if you want to switch to the login node named gligar07.gastly.os, you can use the following command while you are connected to the gligar08.gastly.os login node on the HPC:

                  ssh gligar07.gastly.os\n
                  This is also possible the other way around.

                  If you want to find out which login host you are connected to, you can use the hostname command.

                  $ hostname\ngligar07.gastly.os\n$ ssh gligar08.gastly.os\n\n$ hostname\ngligar08.gastly.os\n

                   Rather than always starting a new session on the HPC, you can also use a terminal multiplexer like screen or tmux. These can create sessions that 'survive' across disconnects. You can find more information on how to use these tools here (or in other online sources):

                  • screen
                  • tmux
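
                   As a quick sketch, a typical tmux workflow on a login node could look like this (the session name \"mysession\" is just an example; remember to reconnect to the same login node to find your session back):

                   # start a named tmux session on the login node\ntmux new -s mysession\n# ... work inside the session, then detach with Ctrl-b d ...\n# later, after reconnecting to the *same* login node, reattach with:\ntmux attach -t mysession\n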
                  "}, {"location": "crontab/", "title": "Cron scripts", "text": ""}, {"location": "crontab/#cron-scripts-configuration", "title": "Cron scripts configuration", "text": "

                   It is possible to run automated cron scripts as a regular user on the UGent login nodes. Due to the high availability setup, users should add their cron scripts on the same login node to avoid any cron job script duplication.

                   In order to create a new cron script, first log in to an HPC-UGent login node as usual with your VSC user account (see section Connecting).

                   Check if any cron script is already set on the current login node with:

                  crontab -l\n

                   At this point you can add/edit (with the vi editor) any cron script by running the command:

                  crontab -e\n
                  "}, {"location": "crontab/#example-cron-job-script", "title": "Example cron job script", "text": "
                   15 5 * * * ~/runscript.sh >& ~/job.out\n

                  where runscript.sh has these lines in this example:

                  runscript.sh
                  #!/bin/bash\n\nmodule swap cluster/donphan\nexport SLURM_CLUSTERS=\"donphan\"\n/usr/libexec/jobcli/qsub ~/job_scripts/test.sh >& ~/job.out\n

                   In the previous example, a cron script was set to be executed every day at 5:15 am. More information about crontab and the cron scheduling format can be found at https://www.redhat.com/sysadmin/automate-linux-tasks-cron.

                   Please note that you should log in to the same login node to edit your previously generated crontab tasks. If that is not the case, you can always jump from one login node to another with:

                  ssh gligar07    # or gligar08\n
                  "}, {"location": "easybuild/", "title": "Easybuild", "text": ""}, {"location": "easybuild/#what-is-easybuild", "title": "What is Easybuild?", "text": "

                  You can use EasyBuild to build and install supported software in your own VSC account, rather than requesting a central installation by the HPC support team.

                  EasyBuild (https://easybuilders.github.io/easybuild) is the software build and installation framework that was created by the HPC-UGent team, and has recently been picked up by HPC sites around the world. It allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.

                  "}, {"location": "easybuild/#when-should-i-use-easybuild", "title": "When should I use Easybuild?", "text": "

                  For general software installation requests, please see I want to use software that is not available on the clusters yet. However, there might be reasons to install the software yourself:

                  • applying custom patches to the software that only you or your group are using

                  • evaluating new software versions prior to requesting a central software installation

                  • installing (very) old software versions that are no longer eligible for central installation (on new clusters)

                  "}, {"location": "easybuild/#configuring-easybuild", "title": "Configuring EasyBuild", "text": "

                  Before you use EasyBuild, you need to configure it:

                  "}, {"location": "easybuild/#path-to-sources", "title": "Path to sources", "text": "

                  This is where EasyBuild can find software sources:

                   export EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\n
                  • the first directory $VSC_DATA/easybuild/sources is where EasyBuild will (try to) automatically download sources if they're not available yet

                  • /apps/gent/source is the central \"cache\" for already downloaded sources, and will be considered by EasyBuild before downloading anything

                  "}, {"location": "easybuild/#build-directory", "title": "Build directory", "text": "

                   This is the directory where EasyBuild will build software. To have good performance, this needs to be on a fast filesystem.

                  export EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\n

                  On cluster nodes, you can use the fast, in-memory /dev/shm/$USER location as a build directory.
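
                   For example, to build in memory on a compute node (a sketch; make sure the directory exists first):

                   mkdir -p /dev/shm/$USER\nexport EASYBUILD_BUILDPATH=/dev/shm/$USER\n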

                  "}, {"location": "easybuild/#software-install-location", "title": "Software install location", "text": "

                  This is where EasyBuild will install the software (and accompanying modules) to.

                  For example, to let it use $VSC_DATA/easybuild, use:

                  export EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n

                   Using the $VSC_OS_LOCAL, $VSC_ARCH_LOCAL and $VSC_ARCH_SUFFIX environment variables ensures that you install software to a location that is specific to the cluster you are building for.

                  Make sure you do not build software on the login nodes, since the loaded cluster module determines the location of the installed software. Software built on the login nodes may not work on the cluster you want to use the software on (see also Running software that is incompatible with host).

                  To share custom software installations with members of your VO, replace $VSC_DATA with $VSC_DATA_VO in the example above.

                  "}, {"location": "easybuild/#using-easybuild", "title": "Using EasyBuild", "text": "

                  Before using EasyBuild, you first need to load the EasyBuild module. We don't specify a version here (this is an exception, for most other modules you should see Using explicit version numbers) because newer versions might include important bug fixes.

                  module load EasyBuild\n
                  "}, {"location": "easybuild/#installing-supported-software", "title": "Installing supported software", "text": "

                  EasyBuild provides a large collection of readily available software versions, combined with a particular toolchain version. Use the --search (or -S) functionality to see which different 'easyconfigs' (build recipes, see http://easybuild.readthedocs.org/en/latest/Concepts_and_Terminology.html#easyconfig-files) are available:

                  $ eb -S example-1.2\nCFGS1=/apps/gent/CO7/sandybridge/software/EasyBuild/3.6.2/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.2-py2.7.egg/easybuild/easyconfigs\n * $CFGS1/e/example/example-1.2.1-foss-2024a.eb\n * $CFGS1/e/example/example-1.2.3-foss-2024b.eb\n * $CFGS1/e/example/example-1.2.5-intel-2024a.eb\n

                  For readily available easyconfigs, just specify the name of the easyconfig file to build and install the corresponding software package:

                  eb example-1.2.1-foss-2024a.eb --robot\n
                  "}, {"location": "easybuild/#installing-variants-on-supported-software", "title": "Installing variants on supported software", "text": "

                  To install small variants on supported software, e.g., a different software version, or using a different compiler toolchain, use the corresponding --try-X options:

                  To try to install example v1.2.6, based on the easyconfig file for example v1.2.5:

                  eb example-1.2.5-intel-2024a.eb --try-software-version=1.2.6\n

                  To try to install example v1.2.5 with a different compiler toolchain:

                  eb example-1.2.5-intel-2024a.eb --robot --try-toolchain=intel,2024b\n
                  "}, {"location": "easybuild/#install-other-software", "title": "Install other software", "text": "

                  To install other, not yet supported, software, you will need to provide the required easyconfig files yourself. See https://easybuild.readthedocs.org/en/latest/Writing_easyconfig_files.html for more information.

                  "}, {"location": "easybuild/#using-the-installed-modules", "title": "Using the installed modules", "text": "

                  To use the modules you installed with EasyBuild, extend $MODULEPATH to make them accessible for loading:

                  module use $EASYBUILD_INSTALLPATH/modules/all\n

                  It makes sense to put this module use command and all export commands in your .bashrc login script. That way, you don't have to type these commands every time you want to use EasyBuild or you want to load modules generated with EasyBuild. See also the section on .bashrc in the \"Beyond the basics\" chapter of the intro to Linux
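
                   As a sketch, the relevant lines in your .bashrc could look like this (using the same settings as in the sections above):

                   # EasyBuild configuration\nexport EASYBUILD_SOURCEPATH=$VSC_DATA/easybuild/sources:/apps/gent/source\nexport EASYBUILD_BUILDPATH=${TMPDIR:-/tmp/$USER}\nexport EASYBUILD_INSTALLPATH=$VSC_DATA/easybuild/$VSC_OS_LOCAL/$VSC_ARCH_LOCAL$VSC_ARCH_SUFFIX\n# make the modules generated by EasyBuild available\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n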

                  "}, {"location": "fine_tuning_job_specifications/", "title": "Fine-tuning Job Specifications", "text": "

                  As HPC system administrators, we often observe that the HPC resources are not optimally (or wisely) used. For example, we regularly notice that several cores on a computing node are not utilised, due to the fact that one sequential program uses only one core on the node. Or users run I/O intensive applications on nodes with \"slow\" network connections.

                  Users often tend to run their jobs without specifying specific PBS Job parameters. As such, their job will automatically use the default parameters, which are not necessarily (or rarely) the optimal ones. This can slow down the run time of your application, but also block HPC resources for other users.

                  Specifying the \"optimal\" Job Parameters requires some knowledge of your application (e.g., how many parallel threads does my application use, is there a lot of inter-process communication, how much memory does my application need) and also some knowledge about the HPC infrastructure (e.g., what kind of multi-core processors are available, which nodes have InfiniBand).

                  There are plenty of monitoring tools on Linux available to the user, which are useful to analyse your individual application. The HPC environment as a whole often requires different techniques, metrics and time goals, which are not discussed here. We will focus on tools that can help to optimise your Job Specifications.

                  Determining the optimal computer resource specifications can be broken down into different parts. The first is actually determining which metrics are needed and then collecting that data from the hosts. Some of the most commonly tracked metrics are CPU usage, memory consumption, network bandwidth, and disk I/O stats. These provide different indications of how well a system is performing, and may indicate where there are potential problems or performance bottlenecks. Once the data have actually been acquired, the second task is analysing the data and adapting your PBS Job Specifications.

                  Another different task is to monitor the behaviour of an application at run time and detect anomalies or unexpected behaviour. Linux provides a large number of utilities to monitor the performance of its components.

                  This chapter shows you how to measure:

                  1. Walltime
                  2. Memory usage
                  3. CPU usage
                  4. Disk (storage) needs
                  5. Network bottlenecks

                  First, we allocate a compute node and move to our relevant directory:

                  qsub -I\ncd ~/examples/Fine-tuning-Job-Specifications\n
                  "}, {"location": "fine_tuning_job_specifications/#specifying-walltime", "title": "Specifying Walltime", "text": "

                  One of the most important and also easiest parameters to measure is the duration of your program. This information is needed to specify the walltime.

                  The time utility executes and times your application. You can just add the time command in front of your normal command line, including your command line options. After your executable has finished, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute your executable to the standard error stream. The calculated times are reported in seconds.

                  Test the time command:

                  $ time sleep 75\nreal 1m15.005s\nuser 0m0.001s\nsys 0m0.002s\n

                  It is a good practice to correctly estimate and specify the run time (duration) of an application. Of course, a margin of 10% to 20% can be taken to be on the safe side.

                  It is also wise to check the walltime on different compute nodes or to select the \"slowest\" compute node for your walltime tests. Your estimate should be appropriate in case your application will run on the \"slowest\" (oldest) compute nodes.

                   The walltime can be specified in a job script as:

                  #PBS -l walltime=3:00:00:00\n

                  or on the command line

                  qsub -l walltime=3:00:00:00\n

                  It is recommended to always specify the walltime for a job.

                  "}, {"location": "fine_tuning_job_specifications/#specifying-memory-requirements", "title": "Specifying memory requirements", "text": "

                  In many situations, it is useful to monitor the amount of memory an application is using. You need this information to determine the characteristics of the required compute node, where that application should run on. Estimating the amount of memory an application will use during execution is often non-trivial, especially when one uses third-party software.

                  "}, {"location": "fine_tuning_job_specifications/#available-memory-on-the-machine", "title": "Available Memory on the machine", "text": "

                   The first point is to be aware of the available free memory on your computer. The \"free\" command displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel. We also use the \"-m\" option to see the results expressed in megabytes and the \"-t\" option to get totals.

                  $ free -m -t\n                total   used   free  shared  buffers  cached\nMem:            16049   4772  11277       0      107     161\n-/+ buffers/cache:      4503  11546\nSwap:           16002   4185  11816\nTotal:          32052   8957  23094\n

                   It is important to note the total amount of memory available on the machine (i.e., 16 GB in this example) and the amount of used and free memory (i.e., 4.7 GB is used and another 11.2 GB is free here).

                  It is not a good practice to use swap-space for your computational applications. A lot of \"swapping\" can increase the execution time of your application tremendously.

                  On the UGent clusters, there is no swap space available for jobs, you can only use physical memory, even though \"free\" will show swap.

                  "}, {"location": "fine_tuning_job_specifications/#checking-the-memory-consumption", "title": "Checking the memory consumption", "text": "

                  To monitor the memory consumption of a running application, you can use the \"top\" or the \"htop\" command.

                  top

                  provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes. It can sort the tasks by memory usage, CPU usage and run time.

                  htop

                   is similar to top, but shows the CPU utilisation for all the CPUs in the machine and allows you to scroll the list vertically and horizontally to see all processes and their full command lines.
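
                   If you prefer a one-off snapshot instead of an interactive view, a sketch using the standard Linux ps command lists your own processes sorted by resident memory usage:

                   # show PID, resident memory (KiB), virtual memory (KiB) and command,\n# sorted by resident memory usage (largest first)\nps -u $USER -o pid,rss,vsz,comm --sort=-rss | head\n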

                  "}, {"location": "fine_tuning_job_specifications/#pbs_mem", "title": "Setting the memory parameter", "text": "

                  Once you gathered a good idea of the overall memory consumption of your application, you can define it in your job script. It is wise to foresee a margin of about 10%.

                  The maximum amount of physical memory used by the job per node can be specified in a job script as:

                  #PBS -l mem=4gb\n

                  or on the command line

                  qsub -l mem=4gb\n
                  "}, {"location": "fine_tuning_job_specifications/#specifying-processors-requirements", "title": "Specifying processors requirements", "text": "

                  Users are encouraged to fully utilise all the available cores on a certain compute node. Once the required numbers of cores and nodes are decently specified, it is also good practice to monitor the CPU utilisation on these cores and to make sure that all the assigned nodes are working at full load.

                  "}, {"location": "fine_tuning_job_specifications/#number-of-processors", "title": "Number of processors", "text": "

                   The number of cores and nodes that a user should request fully depends on the architecture of the application. Developers design their applications with a strategy for parallelization in mind. The application can be designed for a certain fixed number or for a configurable number of nodes and cores. It is wise to target a specific set of compute nodes (e.g., Westmere, Harpertown) for your computing work and then to configure your software to nicely fill up all processors on these compute nodes.

                   The /proc/cpuinfo file stores info about your CPU architecture, like the number of CPUs, threads, cores, information about CPU caches, CPU family, model and much more. So, if you want to detect how many cores are available on a specific machine:

                  $ less /proc/cpuinfo\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU  E5420  @ 2.50GHz\nstepping        : 10\ncpu MHz         : 2500.088\ncache size      : 6144 KB\n...\n

                  Or if you want to see it in a more readable format, execute:

                  $ grep processor /proc/cpuinfo\nprocessor : 0\nprocessor : 1\nprocessor : 2\nprocessor : 3\nprocessor : 4\nprocessor : 5\nprocessor : 6\nprocessor : 7\n
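
                   Alternatively, the nproc command (part of GNU coreutils) prints the number of processing units available; the output below is only an example and depends on the node:

                   $ nproc\n8\n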

                  Note

                   Unless you want information about the login nodes, you'll have to issue these commands on one of the worker nodes. This is most easily achieved in an interactive job, see the chapter on Running interactive jobs.

                  In order to specify the number of nodes and the number of processors per node in your job script, use:

                  #PBS -l nodes=N:ppn=M\n

                  or with equivalent parameters on the command line

                  qsub -l nodes=N:ppn=M\n

                  This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request.

                  You can also use this statement in your job script:

                  #PBS -l nodes=N:ppn=all\n

                  to request all cores of a node, or

                  #PBS -l nodes=N:ppn=half\n

                  to request half of them.

                  Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

                  "}, {"location": "fine_tuning_job_specifications/#monitoring-the-cpu-utilisation", "title": "Monitoring the CPU-utilisation", "text": "

                  This could also be monitored with the htop command:

                  htop\n
                  Example output:
                    1  [|||   11.0%]   5  [||     3.0%]     9  [||     3.0%]   13 [       0.0%]\n  2  [|||||100.0%]   6  [       0.0%]     10 [       0.0%]   14 [       0.0%]\n  3  [||     4.9%]   7  [||     9.1%]     11 [       0.0%]   15 [       0.0%]\n  4  [||     1.8%]   8  [       0.0%]     12 [       0.0%]   16 [       0.0%]\n  Mem[|||||||||||||||||59211/64512MB]     Tasks: 323, 932 thr; 2 running\n  Swp[||||||||||||      7943/20479MB]     Load average: 1.48 1.46 1.27\n                                          Uptime: 211 days(!), 22:12:58\n\n  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command\n22350 vsc00000   20   0 1729M 1071M   704 R 98.0  1.7 27:15.59 bwa index\n 7703 root        0 -20 10.1G 1289M 70156 S 11.0  2.0 36h10:11 /usr/lpp/mmfs/bin\n27905 vsc00000   20   0  123M  2800  1556 R  7.0  0.0  0:17.51 htop\n

                   The advantage of htop is that it shows you the CPU utilisation for all processors as well as the details per application. A nice exercise is to start 4 instances of the \"cpu_eat\" program in 4 different terminals, and inspect the CPU utilisation per processor with top and htop.

                   If htop reports that your program is taking 75% CPU on a certain processor, it means that 75% of the samples taken by htop found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU. The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern.

                  "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script", "title": "Fine-tuning your executable and/or job script", "text": "

                   It is good practice to perform a number of run time stress tests, and to check the CPU utilisation of your nodes. We (and all other users of the HPC) would appreciate it if you use the maximum of the CPU resources assigned to you and make sure that no CPUs in your node are left unutilised without reason.

                  But how can you maximise?

                   1. Configure your software (e.g., to use exactly the number of processors available in a node).
                   2. Develop your parallel program in a smart way.
                   3. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
                   4. Correct your request for CPUs in your job script.
                  "}, {"location": "fine_tuning_job_specifications/#the-system-load", "title": "The system load", "text": "

                  On top of the CPU utilisation, it is also important to check the system load. The system load is a measure of the amount of computational work that a computer system performs.

                  The system load is the number of applications running or waiting to run on the compute node. In a system with for example four CPUs, a load average of 3.61 would indicate that there were, on average, 3.61 processes ready to run, and each one could be scheduled into a CPU.

                  The load averages differ from CPU percentage in two significant ways:

                  1. \"load averages\" measure the trend of processes waiting to be run (and not only an instantaneous snapshot, as does CPU percentage); and
                  2. \"load averages\" include all demand for all resources, e.g., CPU and also I/O and network (and not only how much was active at the time of measurement).
                  "}, {"location": "fine_tuning_job_specifications/#optimal-load", "title": "Optimal load", "text": "

                  What is the \"optimal load\" rule of thumb?

                   The load averages tell us whether our physical CPUs are over- or under-utilised. The point of perfect utilisation, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. Your load should not exceed the number of cores available. E.g., if there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilising its processors perfectly for the last 60 seconds. The \"100% utilisation\" mark is 1.0 on a single-core system, 2.0 on a dual-core, 4.0 on a quad-core, etc. As a rule of thumb, the optimal load is between 0.7 and 1.0 per processor.
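
                   As a quick check, you can compare the 1-minute load average to the number of cores yourself. This is a minimal sketch, assuming the bc calculator is available on the node:

                   load=$(cut -d ' ' -f 1 /proc/loadavg)   # 1-minute load average\ncores=$(nproc)                          # number of cores on this node\necho \"load per core: $(echo \"$load / $cores\" | bc -l)\"   # aim for roughly 0.7 to 1.0 per core\n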

                  In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more processes are waiting and doing nothing, and the lower they fall below the number of processors, the more untapped CPU capacity there is.

                   Load averages do include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. This means that the optimal number of applications running on a system at the same time might be more than one per processor.

                  The \"optimal number of applications\" running on one machine at the same time depends on the type of the applications that you are running.

                  1. When you are running computational intensive applications, one application per processor will generate the optimal load.
                  2. For I/O intensive applications (e.g., applications which perform a lot of disk-I/O), a higher number of applications can generate the optimal load. While some applications are reading or writing data on disks, the processors can serve other applications.

                   The optimal number of applications on a machine can be determined empirically by performing a number of stress tests and checking which configuration gives the highest throughput. There is, however, currently no way on the HPC to dynamically specify the maximum number of applications that should run per core. The HPC scheduler will not launch more than one process per core.
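
                   A minimal sketch of such a stress test, where ./my_app is a placeholder for your own (reasonably short-running) application:

                   for n in 1 2 4 8; do\n    start=$SECONDS\n    for ((i = 0; i < n; i++)); do ./my_app & done\n    wait\n    echo \"$n concurrent instances took $(( SECONDS - start )) seconds\"\ndone\n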

                   How the cores are spread out over CPUs does not matter as far as the load is concerned: two quad-core CPUs perform similarly to four dual-core CPUs, or to eight single-core CPUs. For these purposes, they are all eight cores.

                  "}, {"location": "fine_tuning_job_specifications/#monitoring-the-load", "title": "Monitoring the load", "text": "

                  The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers, which represent the system load during the last one-, five-, and fifteen-minute periods.

                   The uptime command will show us the average load:

                  $ uptime\n10:14:05 up 86 days, 12:01, 11 users, load average: 0.60, 0.41, 0.41\n

                  Now, compile and start a few instances of the \"eat_cpu\" program in the background, and check the effect on the load again:

                  $ gcc -O2 eat_cpu.c -o eat_cpu\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ ./eat_cpu&\n$ uptime\n10:14:42 up 86 days, 12:02, 11 users, load average: 2.60, 0.93, 0.58\n
                   You can also read the load average from the htop output.

                  "}, {"location": "fine_tuning_job_specifications/#fine-tuning-your-executable-andor-job-script_1", "title": "Fine-tuning your executable and/or job script", "text": "

                   It is good practice to perform a number of runtime stress tests, and to check the system load of your nodes. We (and all other users of the HPC) would appreciate it if you used the CPU resources that are assigned to you to the fullest, and made sure that no CPUs in your node sit idle without reason.

                  But how can you maximise?

                  1. Profile your software to improve its performance.
                   2. Configure your software (e.g., to use exactly the number of processors available in a node).
                  3. Develop your parallel program in a smart way, so that it fully utilises the available processors.
                   4. Demand a specific type of compute node (e.g., Harpertown, Westmere), which has a specific number of cores.
                  5. Correct your request for CPUs in your job script.

                  And then check again.

                  "}, {"location": "fine_tuning_job_specifications/#checking-file-sizes-disk-io", "title": "Checking File sizes & Disk I/O", "text": ""}, {"location": "fine_tuning_job_specifications/#monitoring-file-sizes-during-execution", "title": "Monitoring File sizes during execution", "text": "

                  Some programs generate intermediate or output files, the size of which may also be a useful metric.

                   Remember that your available disk space on the HPC online storage is limited, and that environment variables pointing to these directories are available (i.e., $VSC_HOME, $VSC_DATA and $VSC_SCRATCH). On top of those, you can also access some temporary storage (i.e., the /tmp directory) on the compute node, which is defined by the $VSC_SCRATCH_NODE environment variable.

                   It is important to be aware of the sizes of the files that will be generated, as the available disk space for each user is limited. We refer to the section How much disk space do I get? on Quotas to check your quota, and for tools to find out which files consume your quota.
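
                   A minimal sketch for keeping an eye on generated files during execution (the file and directory names are illustrative):

                   ls -lh output.txt                 # size of a specific output file\ndu -sh $VSC_SCRATCH/my_job_dir    # total size of a job directory\nwatch -n 60 'du -sh .'            # refresh the size of the current directory every minute\n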

                   Several actions can be taken to avoid storage problems:

                   1. Be aware of all the files that are generated by your program, including hidden files.
                  2. Check your quota consumption regularly.
                  3. Clean up your files regularly.
                   4. First work (i.e., read and write) with your big files in the local /tmp directory. Once finished, move your files once to the $VSC_DATA directories.
                  5. Make sure your programs clean up their temporary files after execution.
                  6. Move your output results to your own computer regularly.
                   7. Anyone can request more disk space from the HPC staff, but you will have to duly justify your request.
                  "}, {"location": "fine_tuning_job_specifications/#specifying-network-requirements", "title": "Specifying network requirements", "text": "

                   Users can examine their network activities with the htop command. When your processors are 100% busy, but you see a lot of red bars and only limited green bars in the htop screen, it is usually an indication that they lose a lot of time on inter-process communication.

                   Whenever your application utilises a lot of inter-process communication (as is the case in most parallel programs), we strongly recommend requesting nodes with an \"InfiniBand\" network. InfiniBand is a specialised high-bandwidth, low-latency network that enables large parallel jobs to run as efficiently as possible.

                  The parameter to add in your job script would be:

                  #PBS -l ib\n

                   If, for some reason, a user is fine with the gigabit Ethernet network, they can specify:

                  #PBS -l gbe\n
                  "}, {"location": "getting_started/", "title": "Getting Started", "text": "

                  Welcome to the \"Getting Started\" guide. This chapter will lead you through the initial steps of logging into the HPC-UGent infrastructure and submitting your very first job. We'll also walk you through the process step by step using a practical example.

                  In addition to this chapter, you might find the recording of the Introduction to HPC-UGent training session to be a useful resource.

                  Before proceeding, read the introduction to HPC to gain an understanding of the HPC-UGent infrastructure and related terminology.

                  "}, {"location": "getting_started/#getting-access", "title": "Getting Access", "text": "

                  To get access to the HPC-UGent infrastructure, visit Getting an HPC Account.

                  If you have not used Linux before, now would be a good time to follow our Linux Tutorial.

                  "}, {"location": "getting_started/#a-typical-workflow-looks-like-this", "title": "A typical workflow looks like this:", "text": "
                  1. Connect to the login nodes
                  2. Transfer your files to the HPC-UGent infrastructure
                  3. Optional: compile your code and test it
                  4. Create a job script and submit your job
                  5. Wait for job to be executed
                  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

                  We will walk through an illustrative workload to get you started. In this example, our objective is to train a deep learning model for recognizing hand-written digits (MNIST dataset) using TensorFlow; see the example scripts.

                  "}, {"location": "getting_started/#getting-connected", "title": "Getting Connected", "text": "

                   There are two options to connect:

                  • Using a terminal to connect via SSH (for power users) (see First Time connection to the HPC-UGent infrastructure)
                  • Using the web portal

                   Considering your operating system is Linux, it should be easy to make use of the ssh command in a terminal, but the web portal will work too.

                  The web portal offers a convenient way to upload files and gain shell access to the HPC-UGent infrastructure from a standard web browser (no software installation or configuration required).

                  See shell access when using the web portal, or connection to the HPC-UGent infrastructure when using a terminal.

                   Make sure you can get shell access to the HPC-UGent infrastructure before proceeding with the next steps.

                  Info

                   If you run into problems, see the connection issues section on the troubleshooting page.

                  "}, {"location": "getting_started/#transfer-your-files", "title": "Transfer your files", "text": "

                   Now that you can log in, it is time to transfer files from your local computer to your home directory on the HPC-UGent infrastructure.

                  Download tensorflow_mnist.py and run.sh example scripts to your computer (from here).

                  On your local machine you can run:

                  curl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/tensorflow_mnist.py\ncurl -OL https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist/run.sh\n

                  Using the scp command, the files can be copied from your local host to your home directory (~) on the remote host (HPC).

                   scp tensorflow_mnist.py run.sh vsc40000@login.hpc.ugent.be:~\n

                   Afterwards, you can log in to the HPC-UGent infrastructure with:

                   ssh vsc40000@login.hpc.ugent.be\n

                   Use your own VSC account id

                  Replace vsc40000 with your VSC account id (see https://account.vscentrum.be)

                  Info

                   For more information about transferring files or scp, see transfer files from/to hpc.

                  When running ls in your session on the HPC-UGent infrastructure, you should see the two files listed in your home directory (~):

                  $ ls ~\nrun.sh tensorflow_mnist.py\n

                  When you do not see these files, make sure you uploaded the files to your home directory.

                  "}, {"location": "getting_started/#submitting-a-job", "title": "Submitting a job", "text": "

                  Jobs are submitted and executed using job scripts. In our case run.sh can be used as a (very minimal) job script.

                  A job script is a shell script, a text file that specifies the resources, the software that is used (via module load statements), and the steps that should be executed to run the calculation.

                  Our job script looks like this:

                  run.sh

                  #!/bin/bash\n\nmodule load TensorFlow/2.11.0-foss-2022a\n\npython tensorflow_mnist.py\n
                  As you can see this job script will run the Python script named tensorflow_mnist.py.

                   The jobs you submit are by default executed on cluster/doduo; you can swap to another cluster by issuing the following command:

                  module swap cluster/donphan\n

                  Tip

                   When submitting jobs with a limited amount of resources, it is recommended to use the debug/interactive cluster: donphan.

                  To get a list of all clusters and their hardware, see https://www.ugent.be/hpc/en/infrastructure.
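
                   Since the clusters are exposed as cluster/* modules (as shown above), you can also list them from a shell with the module avail command mentioned earlier:

                   module avail cluster/\n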

                  This job script can now be submitted to the cluster's job system for execution, using the qsub (queue submit) command:

                  $ qsub run.sh\n123456\n

                  This command returns a job identifier (123456) on the HPC cluster. This is a unique identifier for the job which can be used to monitor and manage your job.
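
                   For example, using this illustrative job ID, you can check on the job or cancel it with:

                   qstat 123456    # show the status of this specific job\nqdel 123456     # cancel the job (only if you no longer want it to run)\n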

                  Make sure you understand what the module command does

                   Note that the module commands only modify environment variables. For instance, running module swap cluster/donphan will update your shell environment so that qsub submits a job to the donphan cluster, but your active shell session is still running on the login node.

                   It is important to understand that while module commands affect your session environment, they do not change where the commands you are running are executed: they will still be run on the login node you are on.

                  When you submit a job script however, the commands in the job script will be run on a workernode of the cluster the job was submitted to (like donphan).

                  For detailed information about module commands, read the running batch jobs chapter.

                  "}, {"location": "getting_started/#wait-for-job-to-be-executed", "title": "Wait for job to be executed", "text": "

                   Your job is put into a queue before being executed, so it may take a while before it actually starts (see when will my job start? for the scheduling policy).

                  You can get an overview of the active jobs using the qstat command:

                  $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:00  Q donphan\n

                  Eventually, after entering qstat again you should see that your job has started running:

                  $ qstat\nJob ID     Name             User            Time Use S Queue\n---------- ---------------- --------------- -------- - -------\n123456     run.sh           vsc40000        0:00:01  R donphan\n

                  If you don't see your job in the output of the qstat command anymore, your job has likely completed.

                  Read this section on how to interpret the output.

                  "}, {"location": "getting_started/#inspect-your-results", "title": "Inspect your results", "text": "

                  When your job finishes it generates 2 output files:

                  • One for normal output messages (stdout output channel).
                  • One for warning and error messages (stderr output channel).

                   By default, these files are located in the directory from which you issued qsub.

                  Info

                  For more information about the stdout and stderr output channels, see this section.

                  In our example when running ls in the current directory you should see 2 new files:

                  • run.sh.o123456, containing normal output messages produced by job 123456;
                  • run.sh.e123456, containing errors and warnings produced by job 123456.

                  Info

                  run.sh.e123456 should be empty (no errors or warnings).

                  Use your own job ID

                  Replace 123456 with the jobid you got from the qstat command (see above) or simply look for added files in your current directory by running ls.
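
                   For example, you can inspect both files directly from the shell (again with the illustrative job ID 123456):

                   cat run.sh.o123456    # normal output (stdout)\ncat run.sh.e123456    # errors and warnings (stderr); should be empty\n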

                  When examining the contents of run.sh.o123456 you will see something like this:

                  Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\nEpoch 1/5\n1875/1875 [==============================] - 2s 823us/step - loss: 0.2960 - accuracy: 0.9133\nEpoch 2/5\n1875/1875 [==============================] - 1s 771us/step - loss: 0.1427 - accuracy: 0.9571\nEpoch 3/5\n1875/1875 [==============================] - 1s 767us/step - loss: 0.1070 - accuracy: 0.9675\nEpoch 4/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0881 - accuracy: 0.9727\nEpoch 5/5\n1875/1875 [==============================] - 1s 764us/step - loss: 0.0741 - accuracy: 0.9768\n313/313 - 0s - loss: 0.0782 - accuracy: 0.9764\n

                   Hurray \ud83c\udf89, we trained a deep learning model and achieved 97.64 percent accuracy.

                  Warning

                  When using TensorFlow specifically, you should actually submit jobs to a GPU cluster for better performance, see GPU clusters.

                  For the purpose of this example, we are running a very small TensorFlow workload on a CPU-only cluster.

                  "}, {"location": "getting_started/#next-steps", "title": "Next steps", "text": "
                  • Running interactive jobs
                  • Running jobs with input/output data
                  • Multi core jobs/Parallel Computing
                  • Interactive and debug cluster

                  For more examples see Program examples and Job script examples

                  "}, {"location": "gpu/", "title": "GPU clusters", "text": ""}, {"location": "gpu/#submitting-jobs", "title": "Submitting jobs", "text": "

                  To submit jobs to the joltik GPU cluster, where each node provides 4 NVIDIA V100 GPUs (each with 32GB of GPU memory), use:

                  module swap cluster/joltik\n

                  To submit to the accelgor GPU cluster, where each node provides 4 NVIDIA A100 GPUs (each with 80GB GPU memory), use:

                  module swap cluster/accelgor\n

                  Then use the familiar qsub, qstat, etc.\u00a0commands, taking into account the guidelines outlined in section Requesting (GPU) resources.

                  "}, {"location": "gpu/#interactive-jobs", "title": "Interactive jobs", "text": "

                  To interactively experiment with GPUs, you can submit an interactive job using qsub -I (and request one or more GPUs, see section\u00a0Requesting (GPU) resources).

                   Note that, due to a bug in Slurm, you will currently not be able to interactively use MPI software that requires access to the GPUs. If you need this, please contact us via hpc@ugent.be.

                  "}, {"location": "gpu/#hardware", "title": "Hardware", "text": "

                  See https://www.ugent.be/hpc/en/infrastructure.

                  "}, {"location": "gpu/#requesting-gpu-resources", "title": "Requesting (GPU) resources", "text": "

                  There are 2 main ways to ask for GPUs as part of a job:

                  • Either as a node property (similar to the number of cores per node specified via ppn) using -l nodes=X:ppn=Y:gpus=Z (where the ppn=Y is optional), or as a separate resource request (similar to the amount of memory) via -l gpus=Z. Both notations give exactly the same result. The -l gpus=Z is convenient if you only need one node and you are fine with the default number of cores per GPU. The -l nodes=...:gpus=Z notation is required if you want to run with full control or in multinode cases like MPI jobs. If you do not specify the number of GPUs by just using -l gpus, you get by default 1 GPU.

                  • As a resource of its own, via --gpus X. In this case however, you are not guaranteed that the GPUs are on the same node, so your script or code must be able to deal with this.
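
                   For example (with an illustrative number of cores per node), the two equivalent notations from the first bullet above would look like this in a job script; use one or the other, not both:

                   #PBS -l nodes=1:ppn=8:gpus=1    ## node-property style: 1 node, 8 cores, 1 GPU\n#PBS -l gpus=1                  ## separate-resource style: 1 GPU, default number of cores\n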

                  Some background:

                  • The GPUs are constrained to the jobs (like the CPU cores), but do not run in so-called \"exclusive\" mode.

                   • The GPUs run with the so-called \"persistence daemon\", so the GPUs are not re-initialised between jobs.

                  "}, {"location": "gpu/#attention-points", "title": "Attention points", "text": "

                  Some important attention points:

                  • For MPI jobs, we recommend the (new) wrapper mypmirun from the vsc-mympirun module (pmi is the background mechanism to start the MPI tasks, and is different from the usual mpirun that is used by the mympirun wrapper). At some later point, we might promote the mypmirun tool or rename it, to avoid the confusion in the naming.

                   • Sharing GPUs requires MPS. The Slurm built-in MPS does not really do what you want, so we will provide integration with mypmirun and wurker.

                   • For parallel work, we are working on a wurker wrapper from the vsc-mympirun module that supports GPU placement and MPS, without any limitations with respect to the requested resources (i.e., it will also support the case where GPUs are spread heterogeneously over nodes when using the --gpus Z option).

                  • Both mypmirun and wurker will try to do the most optimised placement of cores and tasks, and will provide 1 (optimal) GPU per task/MPI rank, and set one so-called visible device (i.e. CUDA_VISIBLE_DEVICES only has 1 ID). The actual devices are not constrained to the ranks, so you can access all devices requested in the job. We know that at this moment, this is not working properly, but we are working on this. We advise against trying to fix this yourself.

                  "}, {"location": "gpu/#software-with-gpu-support", "title": "Software with GPU support", "text": "

                  Use module avail to check for centrally installed software.

                  The subsections below only cover a couple of installed software packages, more are available.

                  "}, {"location": "gpu/#gromacs", "title": "GROMACS", "text": "

                  Please consult module avail GROMACS for a list of installed versions.

                  "}, {"location": "gpu/#horovod", "title": "Horovod", "text": "

                  Horovod can be used for (multi-node) multi-GPU TensorFlow/PyTorch calculations.

                  Please consult module avail Horovod for a list of installed versions.

                   Horovod supports TensorFlow, Keras, PyTorch and MXNet (see https://github.com/horovod/horovod#id9), but should be run as an MPI application with mypmirun. (Horovod also provides its own wrapper horovodrun; it is not clear whether it handles placement and other aspects correctly.)

                   At least for simple TensorFlow benchmarks, Horovod appears to be a bit faster than TensorFlow's usual automatic multi-GPU support without Horovod, but it comes at the cost of the code modifications needed to use Horovod.

                  "}, {"location": "gpu/#pytorch", "title": "PyTorch", "text": "

                  Please consult module avail PyTorch for a list of installed versions.

                  "}, {"location": "gpu/#tensorflow", "title": "TensorFlow", "text": "

                  Please consult module avail TensorFlow for a list of installed versions.

                  Note: for running TensorFlow calculations on multiple GPUs and/or on more than one workernode, use Horovod, see section Horovod.

                  "}, {"location": "gpu/#example-tensorflow-job-script", "title": "Example TensorFlow job script", "text": "TensorFlow_GPU.sh
                  #!/bin/bash\n#PBS -l walltime=5:0:0\n#PBS -l nodes=1:ppn=quarter:gpus=1\n\nmodule load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1\n\ncd $PBS_O_WORKDIR\npython example.py\n
                  "}, {"location": "gpu/#alphafold", "title": "AlphaFold", "text": "

                  Please consult module avail AlphaFold for a list of installed versions.

                  For more information on using AlphaFold, we strongly recommend the VIB-UGent course available at https://elearning.bits.vib.be/courses/alphafold.

                  "}, {"location": "gpu/#getting-help", "title": "Getting help", "text": "

                  In case of questions or problems, please contact the HPC-UGent team via hpc@ugent.be, and clearly indicate that your question relates to the joltik cluster by adding [joltik] in the email subject.

                  "}, {"location": "interactive_debug/", "title": "Interactive and debug cluster", "text": ""}, {"location": "interactive_debug/#purpose", "title": "Purpose", "text": "

                  The purpose of this cluster is to give the user an environment where there should be no waiting in the queue to get access to a limited number of resources. This environment allows a user to immediately start working, and is the ideal place for interactive work such as development, debugging and light production workloads (typically sufficient for training and/or courses).

                  This environment should be seen as an extension or even replacement of the login nodes, instead of a dedicated compute resource. The interactive cluster is overcommitted, which means that more CPU cores can be requested for jobs than physically exist in the cluster. Obviously, the performance of this cluster heavily depends on the workloads and the actual overcommit usage. Be aware that jobs can slow down or speed up during their execution.

                   Due to the restrictions and sharing of the CPU resources (see section\u00a0Restrictions and overcommit factor), jobs on this cluster should normally start more or less immediately. The trade-off is that the submitted jobs should not be performance-critical. This means that typical workloads for this cluster should be limited to:

                  • Interactive jobs (see chapter\u00a0Running interactive jobs)

                  • Cluster desktop sessions (see chapter\u00a0Using the HPC-UGent web portal)

                  • Jobs requiring few resources

                  • Debugging programs

                  • Testing and debugging job scripts

                  "}, {"location": "interactive_debug/#submitting-jobs", "title": "Submitting jobs", "text": "

                  To submit jobs to the HPC-UGent interactive and debug cluster nicknamed donphan, first use:

                  module swap cluster/donphan\n

                  Then use the familiar qsub, qstat, etc. commands (see chapter\u00a0Running batch jobs).

                  "}, {"location": "interactive_debug/#restrictions-and-overcommit-factor", "title": "Restrictions and overcommit factor", "text": "

                  Some limits are in place for this cluster:

                  • each user may have at most 5 jobs in the queue (both running and waiting to run);

                  • at most 3 jobs per user can be running at the same time;

                  • running jobs may allocate no more than 8 CPU cores and no more than 27200 MiB of memory in total, per user;

                  In addition, the cluster has an overcommit factor of 6. This means that 6 times more cores can be allocated than physically exist. Simultaneously, the default memory per core is 6 times less than what would be available on a non-overcommitted cluster.

                  Please note that based on the (historical) workload of the interactive and debug cluster, the above restrictions and the overcommitment ratio might change without prior notice.

                  "}, {"location": "interactive_debug/#shared-gpus", "title": "Shared GPUs", "text": "

                  Each node in the donphan cluster has a relatively small GPU that is shared between all jobs. This means that you don't need to reserve it and thus possibly wait for it. But this also has a downside for performance and security: jobs might be competing for the same GPU resources (cores, memory or encoders) without any preset fairshare and there is no guarantee one job cannot access another job's memory (as opposed to having reserved GPUs in the GPU clusters).

                  All software should behave the same as on the dedicated GPU clusters (e.g. using CUDA or OpenGL acceleration from a cluster desktop via the webportal).

                  "}, {"location": "introduction/", "title": "Introduction to HPC", "text": ""}, {"location": "introduction/#what-is-hpc", "title": "What is HPC?", "text": "

                  \"High Performance Computing\" (HPC) is computing on a \"Supercomputer\", a computer with at the frontline of contemporary processing capacity -- particularly speed of calculation and available memory.

                  While the supercomputers in the early days (around 1970) used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of \"off-the-shelf\" processors were the norm. A large number of dedicated processors are placed in close proximity to each other in a computer cluster.

                  A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.

                  The components of a cluster are usually connected to each other through fast local area networks (\"LAN\") with each node (computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, high-speed networks, and software for high performance distributed computing.

                  Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.

                  Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 1

                  "}, {"location": "introduction/#what-is-the-hpc-ugent-infrastructure", "title": "What is the HPC-UGent infrastructure?", "text": "

                   The HPC is a collection of computers with AMD and/or Intel CPUs, running a Linux operating system, shaped like pizza boxes and stored above and next to each other in racks, interconnected with copper and fiber cables. Their number crunching power is (presently) measured in hundreds of billions of floating point operations per second (gigaflops) and even in teraflops.

                  The HPC-UGent infrastructure relies on parallel-processing technology to offer UGent researchers an extremely fast solution for all their data processing needs.

                  The HPC currently consists of:

                  a set of different compute clusters. For an up to date list of all clusters and their hardware, see https://vscdocumentation.readthedocs.io/en/latest/gent/tier2_hardware.html.

                  Job management and job scheduling are performed by Slurm with a Torque frontend. We advise users to adhere to Torque commands mentioned in this document.

                  "}, {"location": "introduction/#what-the-hpc-infrastucture-is-not", "title": "What the HPC infrastucture is not", "text": "

                  The HPC infrastructure is not a magic computer that automatically:

                  1. runs your PC-applications much faster for bigger problems;

                  2. develops your applications;

                  3. solves your bugs;

                  4. does your thinking;

                  5. ...

                  6. allows you to play games even faster.

                  The HPC does not replace your desktop computer.

                  "}, {"location": "introduction/#is-the-hpc-a-solution-for-my-computational-needs", "title": "Is the HPC a solution for my computational needs?", "text": ""}, {"location": "introduction/#batch-or-interactive-mode", "title": "Batch or interactive mode?", "text": "

                  Typically, the strength of a supercomputer comes from its ability to run a huge number of programs (i.e., executables) in parallel without any user interaction in real time. This is what is called \"running in batch mode\".

                   It is also possible to run programs on the HPC that require user interaction (pushing buttons, entering input data, etc.). Although technically possible, the use of the HPC might not always be the best and smartest option to run those interactive programs. Each time some user interaction is needed, the computer will wait for user input. The available computer resources (CPU, storage, network, etc.) might not be optimally used in those cases. A more in-depth analysis with the HPC staff can unveil whether the HPC is the desired solution to run interactive programs. Interactive mode is typically only useful for creating quick visualisations of your data without having to copy your data to your desktop and back.

                  "}, {"location": "introduction/#what-are-cores-processors-and-nodes", "title": "What are cores, processors and nodes?", "text": "

                  In this manual, the terms core, processor and node will be frequently used, so it's useful to understand what they are.

                  Modern servers, also referred to as (worker)nodes in the context of HPC, include one or more sockets, each housing a multi-core processor (next to memory, disk(s), network cards, ...). A modern processor consists of multiple CPUs or cores that are used to execute computations.

                  "}, {"location": "introduction/#parallel-or-sequential-programs", "title": "Parallel or sequential programs?", "text": ""}, {"location": "introduction/#parallel-programs", "title": "Parallel programs", "text": "

                  Parallel computing is a form of computation in which many calculations are carried out simultaneously. They are based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (\"in parallel\").

                  Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore computers having multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.

                  The two parallel programming paradigms most used in HPC are:

                  • OpenMP for shared memory systems (multithreading): on multiple cores of a single node

                  • MPI for distributed memory systems (multiprocessing): on multiple nodes

                  Parallel programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

                  "}, {"location": "introduction/#sequential-programs", "title": "Sequential programs", "text": "

                  Sequential software does not do calculations in parallel, i.e., it only uses one single core of a single workernode. It does not become faster by just throwing more cores at it: it can only use one core.

                  It is perfectly possible to also run purely sequential programs on the HPC.

                  Running your sequential programs on the most modern and fastest computers in the HPC can save you a lot of time. But it also might be possible to run multiple instances of your program (e.g., with different input parameters) on the HPC, in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.

                  "}, {"location": "introduction/#what-programming-languages-can-i-use", "title": "What programming languages can I use?", "text": "

                  You can use any programming language, any software package and any library provided it has a version that runs on Linux, specifically, on the version of Linux that is installed on the compute nodes, RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

                  For the most common programming languages, a compiler is available on RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty). Supported and common programming languages on the HPC are C/C++, FORTRAN, Java, Perl, Python, MATLAB, R, etc.

                  Supported and commonly used compilers are GCC and Intel.

                  Additional software can be installed \"on demand\". Please contact the HPC staff to see whether the HPC can handle your specific requirements.

                  "}, {"location": "introduction/#what-operating-systems-can-i-use", "title": "What operating systems can I use?", "text": "

                  All nodes in the HPC cluster run under RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty), which is a specific version of Red Hat Enterprise Linux. This means that all programs (executables) should be compiled for RHEL 8.8 (accelgor, doduo, donphan, gallade, joltik, skitty).

                  Users can connect from any computer in the UGent network to the HPC, regardless of the Operating System that they are using on their personal computer. Users can use any of the common Operating Systems (such as Windows, macOS or any version of Linux/Unix/BSD) and run and control their programs on the HPC.

                  A user does not need to have prior knowledge about Linux; all of the required knowledge is explained in this tutorial.

                  "}, {"location": "introduction/#what-does-a-typical-workflow-look-like", "title": "What does a typical workflow look like?", "text": "

                  A typical workflow looks like:

                  1. Connect to the login nodes with SSH (see First Time connection to the HPC infrastructure)

                  2. Transfer your files to the cluster (see Transfer Files to/from the HPC)

                  3. Optional: compile your code and test it (for compiling, see Compiling and testing your software on the HPC)

                  4. Create a job script and submit your job (see Running batch jobs)

                  5. Get some coffee and be patient:

                    1. Your job gets into the queue

                    2. Your job gets executed

                    3. Your job finishes

                  6. Study the results generated by your jobs, either on the cluster or after downloading them locally.

                  "}, {"location": "introduction/#what-is-the-next-step", "title": "What is the next step?", "text": "

                  When you think that the HPC is a useful tool to support your computational needs, we encourage you to acquire a VSC-account (as explained in Getting a HPC Account), read Connecting to the HPC infrastructure, \"Setting up the environment\", and explore chapters\u00a0Running interactive jobs to\u00a0Fine-tuning Job Specifications which will help you to transfer and run your programs on the HPC cluster.

                  Do not hesitate to contact the HPC staff for any help.

                  1. Wikipedia: http://en.wikipedia.org/wiki/Supercomputer \u21a9

                  "}, {"location": "jobscript_examples/", "title": "Job script examples", "text": ""}, {"location": "jobscript_examples/#simple-job-script-template", "title": "Simple job script template", "text": "

                  This is a template for a job script, with commonly used parameters. The basic parameters should always be used. Some notes on the situational parameters:

                  • -l mem: If no memory parameter is given, the job gets access to an amount of memory proportional to the amount of cores requested. See also: Job failed: SEGV Segmentation fault

                   • -m/-M: the -m option will send emails to your email address registered with the VSC. Only if you want emails sent to some other address should you use the -M option.

                  • Replace the \"-placeholder text-\" with real entries. This notation is used to ensure qsub rejects invalid options.

                  • To use a situational parameter, remove one '#' at the beginning of the line.

                  simple_jobscript.sh
                   #!/bin/bash\n\n# Basic parameters\n#PBS -N jobname           ## Job name\n#PBS -l nodes=1:ppn=2     ## 1 node, 2 processors per node (ppn=all to get a full node)\n#PBS -l walltime=01:00:00 ## Max time your job will run (no more than 72:00:00)\n\n# Situational parameters: remove one '#' at the front to use\n##PBS -l gpus=1            ## GPU amount (only on accelgor or joltik)\n##PBS -l mem=32gb          ## If not used, memory will be available proportional to the max amount\n##PBS -m abe               ## Email notifications (abe=aborted, begin and end)\n##PBS -M -email_address-   ## ONLY if you want to use a different email than your VSC address\n##PBS -A -project-         ## Project name when credits are required (only Tier 1)\n\n##PBS -o -filename-        ## Output log\n##PBS -e -filename-        ## Error log\n\n\nmodule load [module]\nmodule load [module]\n\ncd $PBS_O_WORKDIR         # Change working directory to the location where the job was submitted\n\n[commands]\n
                  "}, {"location": "jobscript_examples/#single-core-job", "title": "Single-core job", "text": "

                  Here's an example of a single-core job script:

                  single_core.sh
                  #!/bin/bash\n#PBS -N count_example         ## job name\n#PBS -l nodes=1:ppn=1         ## single-node job, single core\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load Python/3.6.4-intel-2018a\n# copy input data from location where job was submitted from\ncp $PBS_O_WORKDIR/input.txt $TMPDIR\n# go to temporary working directory (on local disk) & run\ncd $TMPDIR\npython -c \"print(len(open('input.txt').read()))\" > output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n
                   1. Using #PBS header lines, we specify the resource requirements for the job, see Appendix B for a list of these options.

                  2. A module for Python 3.6 is loaded, see also section Modules.

                  3. We stage the data in: the file input.txt is copied into the \"working\" directory, see chapter Running jobs with input/output data.

                  4. The main part of the script runs a small Python program that counts the number of characters in the provided input file input.txt.

                   5. We stage the results out: the output file output.txt is copied from the \"working directory\" ($TMPDIR) to $VSC_DATA with a unique filename. For a list of possible storage locations, see subsection Pre-defined user directories.

                  "}, {"location": "jobscript_examples/#multi-core-job", "title": "Multi-core job", "text": "

                  Here's an example of a multi-core job script that uses mympirun:

                  multi_core.sh
                  #!/bin/bash\n#PBS -N mpi_hello             ## job name\n#PBS -l nodes=2:ppn=all       ## 2 nodes, all cores per node\n#PBS -l walltime=2:00:00      ## max. 2h of wall time\nmodule load intel/2017b\nmodule load vsc-mympirun      ## We don't use a version here, this is on purpose\n# go to working directory, compile and run MPI hello world\ncd $PBS_O_WORKDIR\nmpicc mpi_hello.c -o mpi_hello\nmympirun ./mpi_hello\n

                  An example MPI hello world program can be downloaded from https://github.com/hpcugent/vsc-mympirun/blob/master/testscripts/mpi_helloworld.c.

                  "}, {"location": "jobscript_examples/#running-a-command-with-a-maximum-time-limit", "title": "Running a command with a maximum time limit", "text": "

                   If you want to run a job but are not sure it will finish before the walltime runs out, and you want to copy data back before that happens, you have to stop the main command before the walltime runs out and then copy the data back.

                   This can be done with the timeout command. This command sets a limit on the time a program can run; when this limit is exceeded, it kills the program. Here's an example job script using timeout:

                  timeout.sh
                   #!/bin/bash\n#PBS -N timeout_example\n#PBS -l nodes=1:ppn=1        ## single-node job, single core\n#PBS -l walltime=2:00:00     ## max. 2h of wall time\n\n# go to temporary working directory (on local disk)\ncd $TMPDIR\n# This command will take too long (1400 minutes is longer than our walltime)\n# $PBS_O_WORKDIR/example_program.sh 1400 output.txt\n\n# So we put it after a timeout command\n# We have a total of 120 minutes (2 x 60) and we instruct the script to run for\n# 100 minutes, but timeout after 90 minutes,\n# so we have 30 minutes left to copy files back. This should\n#  be more than enough.\ntimeout -s SIGKILL 90m $PBS_O_WORKDIR/example_program.sh 100 output.txt\n# copy back output data, ensure unique filename using $PBS_JOBID\ncp output.txt $VSC_DATA/output_${PBS_JOBID}.txt\n

                   The example program used in this script is a dummy script that simply sleeps for a specified number of minutes:

                  example_program.sh
                  #!/bin/bash\n# This is an example program\n# It takes two arguments: a number of times to loop and a file to write to\n# In total, it will run for (the number of times to loop) minutes\n\nif [ $# -ne 2 ]; then\necho \"Usage: ./example_program amount filename\" && exit 1\nfi\n\nfor ((i = 0; i < $1; i++ )); do\necho \"${i} => $(date)\" >> $2\nsleep 60\ndone\n
                  "}, {"location": "jupyter/", "title": "Jupyter notebook", "text": ""}, {"location": "jupyter/#what-is-a-jupyter-notebook", "title": "What is a Jupyter notebook", "text": "

                   A Jupyter notebook is an interactive, web-based environment that allows you to create documents that contain live code, equations, visualizations, and plain text. The code blocks in these documents can be used to write Python, Java, R and Julia code, among others. The combination of code executions with text and visual outputs makes it a useful tool for data analysis, machine learning and educational purposes.

                  "}, {"location": "jupyter/#using-jupyter-notebooks-on-the-hpc", "title": "Using Jupyter Notebooks on the HPC", "text": ""}, {"location": "jupyter/#launching-a-notebook-using-the-web-portal", "title": "Launching a notebook using the web portal", "text": "

                  Through the HPC-UGent web portal you can easily start a Jupyter notebook on a workernode, via the Jupyter Notebook button under the Interactive Apps menu item.

                  After starting the Jupyter notebook using the Launch button, you will see it being added in state Queued in the overview of interactive sessions (see My Interactive Sessions menu item):

                   When your job hosting the Jupyter notebook starts running, the status will first change to Starting:

                  and eventually the status will change to Running, and you will be able to connect to the Jupyter environment using the blue Connect to Jupyter button:

                  This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on File>New>Notebook:

                  "}, {"location": "jupyter/#using-extra-python-packages", "title": "Using extra Python packages", "text": "

                   A number of Python packages are readily available in modules on the HPC. To illustrate how to use them in a Jupyter notebook, we will make use of an example where we want to use numpy in our notebook. The first thing we need to do is find the modules that contain our package of choice. For numpy, this would be the SciPy-bundle modules.

                   To find the appropriate modules, it is recommended to use the shell within the web portal under Clusters > >_login Shell Access.

                  We can see all available versions of the SciPy module by using module avail SciPy-bundle:

                  $ module avail SciPy-bundle\n\n------------------ /apps/gent/RHEL8/zen2-ib/modules/all ------------------\n    SciPy-bundle/2022.05-foss-2022a    SciPy-bundle/2023.11-gfbf-2023b (D)\nSciPy-bundle/2023.07-gfbf-2023a\n\n  Where:\n   D:  Default Module\n...\n

                   Not all modules will work for every notebook: we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the JupyterNotebook version field when creating a notebook. In our example, 7.2.0 is the version of the notebook and GCCcore/13.2.0 is the toolchain used.

                   Module names include the toolchain that was used to install the module (for example gfbf-2023b in SciPy-bundle/2023.11-gfbf-2023b means that the module uses the toolchain gfbf/2023b). To see which modules are compatible with each other, you can check the table on the page about Module conflicts. Another way to find out which GCCcore subtoolchain goes with the particular toolchain of the module (such as gfbf/2023b) is to use module show. In particular, using module show <toolchain of the module> | grep GCC (before the module has been loaded) will return this GCCcore version.

                  $ module show gfbf/2023b | grep GCC\nGNU Compiler Collection (GCC) based compiler toolchain, including\nwhatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including\nload(\"GCC/13.2.0\")\nload(\"FlexiBLAS/3.3.1-GCC-13.2.0\")\nload(\"FFTW/3.3.10-GCC-13.2.0\")\n

                  The toolchain used can then for example be found within the line load(\"GCC/13.2.0\") and the included Python packages under the line Included extensions.

                   It is also recommended to double-check the compatibility of the Jupyter notebook version and the extra modules by loading them all in a shell environment. To do so, find the module containing the correct Jupyter notebook version (for our example case this is JupyterNotebook/7.2.0-GCCcore-13.2.0) and then use module load <module_name> for every module as follows:

                  $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.11-gfbf-2023b\n
                   This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook.

                  If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see here).

                  $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0\n$ module load SciPy-bundle/2023.07-gfbf-2023a\nLmod has detected the following error:  ...\n

                  Now that we found the right module for the notebook, add module load <module_name> in the Custom code field when creating a notebook and you can make use of the packages within that notebook.

                  "}, {"location": "known_issues/", "title": "Known issues", "text": "

                  This page provides details on a couple of known problems, and the workarounds that are available for them.

                  If you have any questions related to these issues, please contact the HPC-UGent team.

                  • Operation not permitted error for MPI applications
                  "}, {"location": "known_issues/#openmpi_libfabric_operation_not_permitted", "title": "Operation not permitted error for MPI applications", "text": "

                   When running an MPI application that was installed with a foss toolchain, you may run into a crash with an error message like:

                  Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\n

                  This error means that an internal problem has occurred in OpenMPI.

                  "}, {"location": "known_issues/#cause-of-the-problem", "title": "Cause of the problem", "text": "

                  This problem was introduced with the OS updates that were installed on the HPC-UGent and VSC Tier-1 Hortense clusters mid February 2024, most likely due to updating the Mellanox OFED kernel module.

                  It seems that having OpenMPI consider both UCX and libfabric as \"backends\" to use the high-speed interconnect (InfiniBand) is causing this problem: the error message is reported by UCX, but the problem only occurs when OpenMPI is configured to also consider libfabric.

                  "}, {"location": "known_issues/#affected-software", "title": "Affected software", "text": "

                  We have been notified that this error may occur with various applications, including (but not limited to) CP2K, LAMMPS, netcdf4-python, SKIRT, ...

                  "}, {"location": "known_issues/#workarounds", "title": "Workarounds", "text": ""}, {"location": "known_issues/#openmpi_libfabric_mympirun", "title": "Use latest vsc-mympirun", "text": "

                   A workaround has been implemented in mympirun (version 5.4.0).

                  Make sure you use the latest version of vsc-mympirun by using the following (version-less) module load statement in your job scripts:

                  module load vsc-mympirun\n

                  and launch your MPI application using the mympirun command.

                  For more information, see the mympirun documentation.

                  "}, {"location": "known_issues/#openmpi_libfabric_env_vars", "title": "Configure OpenMPI to not use libfabric via environment variables", "text": "

                  If using mympirun is not an option, you can configure OpenMPI to not consider libfabric (and only use UCX) by setting the following environment variables (in your job script or session environment):

                  export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
                  "}, {"location": "known_issues/#resolution", "title": "Resolution", "text": "

                  We will re-install the affected OpenMPI installations during the scheduled maintenance of 13-17 May 2024 (see also VSC status page).

                  "}, {"location": "multi_core_jobs/", "title": "Multi core jobs/Parallel Computing", "text": ""}, {"location": "multi_core_jobs/#why-parallel-programming", "title": "Why Parallel Programming?", "text": "

                  There are two important motivations to engage in parallel programming.

                  1. Firstly, the need to decrease the time to solution: distributing your code over C cores holds the promise of speeding up execution times by a factor C. All modern computers (and probably even your smartphone) are equipped with multi-core processors capable of parallel processing.

                  2. The second reason is problem size: distributing your code over N nodes increases the available memory by a factor N, and thus holds the promise of being able to tackle problems which are N times bigger.

                  On a desktop computer, this enables a user to run multiple programs and the operating system simultaneously. For scientific computing, this means that, in principle, you can split up your computations into groups and run each group on its own core.

                  There are multiple ways to achieve parallel programming. The table below gives a (non-exhaustive) overview of problem-independent approaches to parallel programming. In addition, there are many problem-specific libraries that incorporate parallel capabilities. The next three sections explore some common approaches: (raw) threads, OpenMP and MPI.

                  Tool Available languages binding Limitations Raw threads (pthreads, boost::threading, ...) Threading libraries are available for all common programming languages Threads are limited to shared memory systems. They are more often used on single node systems rather than for HPC. Thread management is hard. OpenMP Fortran/C/C++ Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelised by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelise the workload on each node and MPI (see below) for communication between nodes. Lightweight threads with clever scheduling, Intel TBB, Intel Cilk Plus C/C++ Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler enabling the programmer to focus on parallelisation itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes. MPI Fortran/C/C++, Python Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication. Global Arrays library C/C++, Python Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and one sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.

                  Tip

                  You can request more nodes/cores by adding the following line to your run script.

                  #PBS -l nodes=2:ppn=10\n
                  This queues a job that claims 2 nodes with 10 cores per node (20 cores in total).

                  Warning

                  Just requesting more nodes and/or cores does not mean that your job will automatically run faster. You can find more about this here.

                  "}, {"location": "multi_core_jobs/#parallel-computing-with-threads", "title": "Parallel Computing with threads", "text": "

                  Multi-threading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process. These threads share the process' resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multi-threading can also be applied to a single process to enable parallel execution on a multiprocessing system.

                  The advantage of a multithreaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to synchronise in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using mutexes or semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
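
                  To make the idea of mutual exclusion more concrete, here is a minimal sketch (it is not part of the example programs used below; the names shared_counter, counter_lock and counter_fun are purely illustrative) in which several threads increment a shared counter, with a pthread mutex ensuring that only one thread modifies it at a time:

                  #include <stdio.h>\n#include <pthread.h>\n\n#define NTHREADS 4\n#define NITERS 100000\n\n/* illustrative names: a counter shared by all threads, protected by a mutex */\nstatic long shared_counter = 0;\nstatic pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;\n\nvoid *counter_fun(void *arg)\n{\nint i;\nfor (i=0; i<NITERS; ++i)\n{\n/* only one thread at a time may update the shared counter */\npthread_mutex_lock(&counter_lock);\n++shared_counter;\npthread_mutex_unlock(&counter_lock);\n}\nreturn NULL;\n}\n\nint main(void)\n{\npthread_t threads[NTHREADS];\nint i;\n\nfor (i=0; i<NTHREADS; ++i)\n{\npthread_create(&threads[i], NULL, counter_fun, NULL);\n}\nfor (i=0; i<NTHREADS; ++i)\n{\npthread_join(threads[i], NULL);\n}\n\n/* without the mutex, the final value would likely be lower than expected */\nprintf(\"Final counter value: %ld\\n\", shared_counter);\nreturn 0;\n}\n

                  This sketch is compiled in the same way as the threading example below, i.e., with gcc and the -lpthread flag.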

                  Threads are a way that a program can spawn concurrent units of processing that can then be delegated by the operating system to multiple processing cores. Clearly the advantage of a multithreaded program (one that uses multiple threads that are assigned to multiple processing cores) is that you can achieve big speedups, as all cores of your CPU (and all CPUs if you have more than one) are used at the same time.

                  Here is a simple example program that spawns 5 threads, where each one runs a simple function that only prints \"Hello from thread\".

                  Go to the example directory:

                  cd ~/examples/Multi-core-jobs-Parallel-Computing\n

                  Note

                  If the example directory is not yet present, copy it to your home directory:

                  cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  Study the example first:

                  T_hello.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase of working with threads\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n#define NTHREADS 5\n\nvoid *myFun(void *x)\n{\nint tid;\ntid = *((int *) x);\nprintf(\"Hello from thread %d!\\n\", tid);\nreturn NULL;\n}\n\nint main(int argc, char *argv[])\n{\npthread_t threads[NTHREADS];\nint thread_args[NTHREADS];\nint rc, i;\n\n/* spawn the threads */\nfor (i=0; i<NTHREADS; ++i)\n{\nthread_args[i] = i;\nprintf(\"spawning thread %d\\n\", i);\nrc = pthread_create(&threads[i], NULL, myFun, (void *) &thread_args[i]);\n}\n\n/* wait for threads to finish */\nfor (i=0; i<NTHREADS; ++i) {\nrc = pthread_join(threads[i], NULL);\n}\n\nreturn 0;\n}\n

                  Compile it (linking in the thread library with -lpthread), then run and test it on the login-node:

                  $ module load GCC\n$ gcc -o T_hello T_hello.c -lpthread\n$ ./T_hello\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

                  Now, run it on the cluster and check the output:

                  $ qsub T_hello.pbs\n123456\n$ more T_hello.pbs.o123456\nspawning thread 0\nspawning thread 1\nspawning thread 2\nHello from thread 0!\nHello from thread 1!\nHello from thread 2!\nspawning thread 3\nspawning thread 4\nHello from thread 3!\nHello from thread 4!\n

                  Tip

                  If you plan on engaging in parallel programming using threads, this book may prove useful: Professional Multicore Programming: Design and Implementation for C++ Developers. Cameron Hughes and Tracey Hughes. Wrox 2008.

                  "}, {"location": "multi_core_jobs/#parallel-computing-with-openmp", "title": "Parallel Computing with OpenMP", "text": "

                  OpenMP is an API that implements a multi-threaded, shared memory form of parallelism. It uses a set of compiler directives (statements that you add to your code and that are recognised by your Fortran/C/C++ compiler if OpenMP is enabled or otherwise ignored) that are incorporated at compile-time to generate a multi-threaded version of your code. You can think of Pthreads (above) as doing multi-threaded programming \"by hand\", and OpenMP as a slightly more automated, higher-level API to make your program multithreaded. OpenMP takes care of many of the low-level details that you would normally have to implement yourself, if you were using Pthreads from the ground up.

                  An important advantage of OpenMP is that, because it uses compiler directives, the original serial version stays intact, and minimal changes (in the form of compiler directives) are necessary to turn a working serial code into a working parallel code.

                  Here is the general code structure of an OpenMP program:

                  #include <omp.h>\nint main ()  {\nint var1, var2, var3;\n// Serial code\n// Beginning of parallel section. Fork a team of threads.\n// Specify variable scoping\n\n#pragma omp parallel private(var1, var2) shared(var3)\n{\n// Parallel section executed by all threads\n// All threads join master thread and disband\n}\n// Resume serial code\n}\n

                  "}, {"location": "multi_core_jobs/#private-versus-shared-variables", "title": "Private versus Shared variables", "text": "

                  By using the private() and shared() clauses, you can specify variables within the parallel region as being shared, i.e., visible and accessible by all threads simultaneously, or private, i.e., private to each thread, meaning each thread will have its own local copy. In the code example below for parallelising a for loop, you can see that we specify the thread_id and nloops variables as private.

                  "}, {"location": "multi_core_jobs/#parallelising-for-loops-with-openmp", "title": "Parallelising for loops with OpenMP", "text": "

                  Parallelising for loops is really simple (see code below). By default, the loop iteration counter in an OpenMP loop construct (in this case the i variable in the for loop) is treated as a private variable.

                  omp1.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: Showcase program for OMP loops\n */\n/* OpenMP_loop.c  */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char **argv)\n{\nint i, thread_id, nloops;\n\n#pragma omp parallel private(thread_id, nloops)\n{\nnloops = 0;\n\n#pragma omp for\nfor (i=0; i<1000; ++i)\n{\n++nloops;\n}\nthread_id = omp_get_thread_num();\nprintf(\"Thread %d performed %d iterations of the loop.\\n\", thread_id, nloops );\n}\n\nreturn 0;\n}\n

                  And compile it (whilst including the \"openmp\" library) and run and test it on the login-node:

                  $ module load GCC\n$ gcc -fopenmp -o omp1 omp1.c\n$ ./omp1\nThread 6 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 1 performed 125 iterations of the loop.\n

                  Now run it on the cluster and check the result again.

                  $ qsub omp1.pbs\n$ cat omp1.pbs.o*\nThread 1 performed 125 iterations of the loop.\nThread 4 performed 125 iterations of the loop.\nThread 3 performed 125 iterations of the loop.\nThread 0 performed 125 iterations of the loop.\nThread 5 performed 125 iterations of the loop.\nThread 7 performed 125 iterations of the loop.\nThread 2 performed 125 iterations of the loop.\nThread 6 performed 125 iterations of the loop.\n
                  "}, {"location": "multi_core_jobs/#critical-code", "title": "Critical Code", "text": "

                  Using OpenMP you can specify something called a \"critical\" section of code. This is code that is performed by all threads, but is only performed one thread at a time (i.e., in serial). This provides a convenient way of letting you do things like updating a global variable with local results from each thread, and you don't have to worry about things like other threads writing to that global variable at the same time (a collision).

                  omp2.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\n\n// make this a \"critical\" code section\n#pragma omp critical\n{\nprintf(\"Thread %d is adding its iterations (%d) to sum (%d), \", thread_id, priv_nloops, glob_nloops);\nglob_nloops += priv_nloops;\nprintf(\"total is now %d.\\n\", glob_nloops);\n}\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

                  Compile it with OpenMP support enabled (the -fopenmp flag), then run and test it on the login-node:

                  $ module load GCC\n$ gcc -fopenmp -o omp2 omp2.c\n$ ./omp2\nThread 3 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 7 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 5 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 6 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 2 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 4 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 1 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 0 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n

                  Now run it on the cluster and check the result again.

                  $ qsub omp2.pbs\n$ cat omp2.pbs.o*\nThread 2 is adding its iterations (12500) to sum (0), total is now 12500.\nThread 0 is adding its iterations (12500) to sum (12500), total is now 25000.\nThread 1 is adding its iterations (12500) to sum (25000), total is now 37500.\nThread 4 is adding its iterations (12500) to sum (37500), total is now 50000.\nThread 7 is adding its iterations (12500) to sum (50000), total is now 62500.\nThread 3 is adding its iterations (12500) to sum (62500), total is now 75000.\nThread 5 is adding its iterations (12500) to sum (75000), total is now 87500.\nThread 6 is adding its iterations (12500) to sum (87500), total is now 100000.\nTotal # loop iterations is 100000\n
                  "}, {"location": "multi_core_jobs/#reduction", "title": "Reduction", "text": "

                  Reduction refers to the process of combining the results of several sub-calculations into a final result. This is a very common paradigm (and indeed the so-called \"map-reduce\" framework used by Google and others is very popular). We already used this paradigm in the code example above, where the \"critical code\" directive was used to accomplish it. The map-reduce paradigm is so common that OpenMP has a specific directive (reduction) that allows you to implement it more easily.

                  omp3.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: OpenMP Test Program\n */\n#include <stdio.h>\n#include <omp.h>\n\nint main(int argc, char *argv[])\n{\nint i, thread_id;\nint glob_nloops, priv_nloops;\nglob_nloops = 0;\n\n// parallelize this chunk of code\n#pragma omp parallel private(priv_nloops, thread_id) reduction(+:glob_nloops)\n{\npriv_nloops = 0;\nthread_id = omp_get_thread_num();\n\n// parallelize this for loop\n#pragma omp for\nfor (i=0; i<100000; ++i)\n{\n++priv_nloops;\n}\nglob_nloops += priv_nloops;\n}\nprintf(\"Total # loop iterations is %d\\n\", glob_nloops);\nreturn 0;\n}\n

                  Compile it with OpenMP support enabled (the -fopenmp flag), then run and test it on the login-node:

                  $ module load GCC\n$ gcc -fopenmp -o omp3 omp3.c\n$ ./omp3\nTotal # loop iterations is 100000\n

                  Now run it on the cluster and check the result again.

                  $ qsub omp3.pbs\n$ cat omp3.pbs.o*\nTotal # loop iterations is 100000\n
                  "}, {"location": "multi_core_jobs/#other-openmp-directives", "title": "Other OpenMP directives", "text": "

                  There are a host of other directives you can issue using OpenMP.

                  Some other clauses of interest are:

                  1. barrier: each thread will wait until all threads have reached this point in the code, before proceeding

                  2. nowait: threads will not wait until everybody is finished

                  3. schedule(type, chunk): allows you to specify how loop iterations are handed out to the threads in a for loop. There are three types of scheduling you can specify: static, dynamic and guided (see the sketch after this list)

                  4. if: allows you to parallelise only if a certain condition is met

                  5. ...\u00a0and a host of others
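
                  As a brief illustration of the schedule clause (a minimal sketch, not one of the tutorial programs; the chunk size of 100 and the loop bound of 1000 are arbitrary), the following loop hands out iterations to threads in chunks of 100 on a first-come, first-served basis:

                  #include <stdio.h>\n#include <omp.h>\n\nint main(void)\n{\nint i;\n\n// dynamic scheduling: chunks of 100 iterations are handed out as threads become free\n#pragma omp parallel for schedule(dynamic, 100)\nfor (i=0; i<1000; ++i)\n{\nif (i % 100 == 0)\n{\nprintf(\"Iteration %d handled by thread %d\\n\", i, omp_get_thread_num());\n}\n}\nreturn 0;\n}\n

                  With schedule(static, 100) each thread would instead get a fixed, predetermined set of chunks, while guided starts with large chunks that shrink as the loop progresses. This sketch compiles like the other OpenMP examples, with gcc -fopenmp.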

                  Tip

                  If you plan on engaging in parallel programming using OpenMP, this book may prove useful: Using OpenMP - Portable Shared Memory Parallel Programming. By Barbara Chapman, Gabriele Jost and Ruud van der Pas. Scientific and Engineering Computation. 2005.

                  "}, {"location": "multi_core_jobs/#parallel-computing-with-mpi", "title": "Parallel Computing with MPI", "text": "

                  The Message Passing Interface (MPI) is a standard defining core syntax and semantics of library routines that can be used to implement parallel programming in C (and in other languages as well). There are several implementations of MPI such as Open MPI, Intel MPI, M(VA)PICH and LAM/MPI.

                  In the context of this tutorial, you can think of MPI, in terms of its complexity, scope and control, as sitting in between programming with Pthreads, and using a high-level API such as OpenMP. For a Message Passing Interface (MPI) application, a parallel task usually consists of a single executable running concurrently on multiple processors, with communication between the processes. This is shown in the following diagram:

                  The process numbers 0, 1 and 2 represent the process ranks and have greater or lesser significance depending on the processing paradigm. At the minimum, process 0 handles the input/output and determines what other processes are running.

                  The MPI interface allows you to manage allocation, communication, and synchronisation of a set of processes that are mapped onto cores within a single CPU, onto CPUs within a single machine, or even across multiple machines (as long as they are networked together).

                  One context where MPI shines in particular is the ability to easily take advantage not just of multiple cores on a single machine, but to run programs on clusters of several machines. Even if you don't have a dedicated cluster, you could still write an MPI program that runs in parallel across any collection of computers, as long as they are networked together.

                  Here is a \"Hello World\" program in MPI written in C. In this example, we send a \"Hello\" message to each processor, manipulate it trivially, return the results to the main process, and print the messages.

                  Study the MPI-programme and the PBS-file:

                  mpi_hello.c
                  /*\n * VSC        : Flemish Supercomputing Centre\n * Tutorial   : Introduction to HPC\n * Description: \"Hello World\" MPI Test Program\n */\n#include <stdio.h>\n#include <string.h>\n#include <mpi.h>\n\n#define BUFSIZE 128\n#define TAG 0\n\nint main(int argc, char *argv[])\n{\nchar idstr[32];\nchar buff[BUFSIZE];\nint numprocs;\nint myid;\nint i;\nMPI_Status stat;\n/* MPI programs start with MPI_Init; all 'N' processes exist thereafter */\nMPI_Init(&argc,&argv);\n/* find out how big the SPMD world is */\nMPI_Comm_size(MPI_COMM_WORLD,&numprocs);\n/* and this process' rank is */\nMPI_Comm_rank(MPI_COMM_WORLD,&myid);\n\n/* At this point, all programs are running equivalently, the rank\n      distinguishes the roles of the programs in the SPMD model, with\n      rank 0 often used specially... */\nif(myid == 0)\n{\nprintf(\"%d: We have %d processors\\n\", myid, numprocs);\nfor(i=1;i<numprocs;i++)\n{\nsprintf(buff, \"Hello %d! \", i);\nMPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);\n}\nfor(i=1;i<numprocs;i++)\n{\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);\nprintf(\"%d: %s\\n\", myid, buff);\n}\n}\nelse\n{\n/* receive from rank 0: */\nMPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);\nsprintf(idstr, \"Processor %d \", myid);\nstrncat(buff, idstr, BUFSIZE-1);\nstrncat(buff, \"reporting for duty\", BUFSIZE-1);\n/* send to rank 0: */\nMPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);\n}\n\n/* MPI programs end with MPI_Finalize; this is a weak synchronization point */\nMPI_Finalize();\nreturn 0;\n}\n
                  mpi_hello.pbs
                  #!/bin/bash\n\n#PBS -N mpihello\n#PBS -l walltime=00:05:00\n\n# assume a 40 core job\n#PBS -l nodes=2:ppn=20\n\n# make sure we are in the right directory in case writing files\ncd $PBS_O_WORKDIR\n\n# load the environment\n\nmodule load intel\n\nmpirun ./mpi_hello\n

                  and compile it:

                  $ module load intel\n$ mpiicc -o mpi_hello mpi_hello.c\n

                  mpiicc is a wrapper around the Intel C compiler icc that is used to compile MPI programs (see the chapter on compilation for details).

                  Run the parallel program:

                  $ qsub mpi_hello.pbs\n$ ls -l\ntotal 1024\n-rwxrwxr-x 1 vsc40000 8746 Sep 16 14:19 mpi_hello*\n-rw-r--r-- 1 vsc40000 1626 Sep 16 14:18 mpi_hello.c\n-rw------- 1 vsc40000    0 Sep 16 14:22 mpi_hello.e123456\n-rw------- 1 vsc40000  697 Sep 16 14:22 mpi_hello.o123456\n-rw-r--r-- 1 vsc40000  304 Sep 16 14:22 mpi_hello.pbs\n$ cat mpi_hello.o123456\n0: We have 16 processors\n0: Hello 1! Processor 1 reporting for duty\n0: Hello 2! Processor 2 reporting for duty\n0: Hello 3! Processor 3 reporting for duty\n0: Hello 4! Processor 4 reporting for duty\n0: Hello 5! Processor 5 reporting for duty\n0: Hello 6! Processor 6 reporting for duty\n0: Hello 7! Processor 7 reporting for duty\n0: Hello 8! Processor 8 reporting for duty\n0: Hello 9! Processor 9 reporting for duty\n0: Hello 10! Processor 10 reporting for duty\n0: Hello 11! Processor 11 reporting for duty\n0: Hello 12! Processor 12 reporting for duty\n0: Hello 13! Processor 13 reporting for duty\n0: Hello 14! Processor 14 reporting for duty\n0: Hello 15! Processor 15 reporting for duty\n

                  The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication between the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different executables to be started in the same MPI job. Each process knows its own rank and the total number of processes in the world, and has the ability to communicate with the other processes, either with point-to-point (send/receive) communication, or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not guarantee how POSIX I/O would actually work on a given system, but it commonly does work, at least from rank 0.
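
                  To illustrate collective communication (as opposed to the point-to-point MPI_Send/MPI_Recv calls used in the example above), the following minimal sketch, which is not part of the tutorial examples, lets every rank contribute one integer and combines the values into a single sum on rank 0 using MPI_Reduce:

                  #include <stdio.h>\n#include <mpi.h>\n\nint main(int argc, char *argv[])\n{\nint myid, numprocs;\nint myvalue, total;\n\nMPI_Init(&argc, &argv);\nMPI_Comm_size(MPI_COMM_WORLD, &numprocs);\nMPI_Comm_rank(MPI_COMM_WORLD, &myid);\n\n/* each rank contributes its own value ... */\nmyvalue = myid + 1;\n/* ... and MPI_Reduce combines them into a single sum on rank 0 */\nMPI_Reduce(&myvalue, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);\n\nif (myid == 0)\n{\nprintf(\"Sum over %d processes: %d\\n\", numprocs, total);\n}\n\nMPI_Finalize();\nreturn 0;\n}\n

                  This sketch can be compiled and launched in the same way as mpi_hello.c above.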

                  MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to 1 physical processor, or N where N is the total number of processors available, or something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behaviour to the size of the world N, so it also seeks to scale to the runtime configuration without recompilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available.

                  Tip

                  mpirun does not always do the optimal core pinning and requires a few extra arguments to be the most efficient possible on a given system. At Ghent we have a wrapper around mpirun called mympirun. See the chapter on mympirun for more information.

                  You will generally just start an MPI program on the cluster by using mympirun instead of mpirun -n <nr of cores> <--other settings> <--other optimisations>.

                  Tip

                  If you plan on engaging in parallel programming using MPI, this book may prove useful: Parallel Programming with MPI. Peter Pacheco. Morgan Kaufmann. 1996.

                  "}, {"location": "multi_job_submission/", "title": "Multi-job submission", "text": "

                  A frequently occurring characteristic of scientific computations is their focus on data-intensive processing. A typical example is the iterative evaluation of a program over different input parameter values, often referred to as a \"parameter sweep\". A parameter sweep runs a job a specified number of times, as if we sweep the parameter values through a user-defined range.

                  Users then often want to submit a large number of jobs based on the same job script, but with (i) slightly different parameter settings or (ii) different input files.

                  These parameter values can take many forms: we can think of a range (e.g., from 1 to 100), or the parameters can be stored line by line in a comma-separated file. The users want to run their job once for each instance of the parameter values.

                  One option could be to launch a lot of separate individual small jobs (one for each parameter) on the cluster, but this is not a good idea. The cluster scheduler isn't meant to deal with tons of small jobs: such huge numbers of small jobs create a lot of overhead and can slow down the whole cluster. It is better to bundle those jobs in larger sets. In TORQUE, an experimental feature known as \"job arrays\" existed to allow the creation of multiple jobs with one qsub command, but it is not supported by Moab, the current scheduler.

                  The \"Worker framework\" has been developed to address this issue.

                  It can handle many small jobs determined by:

                  parameter variations

                  i.e., many small jobs determined by a specific parameter set which is stored in a .csv (comma separated value) input file.

                  job arrays

                  i.e., each individual job gets a unique numeric identifier.

                  Both use cases often have a common root: the user wants to run a program with a large number of parameter settings, and the program does not allow for aggregation, i.e., it has to be run once for each instance of the parameter values.

                  However, the Worker Framework's scope is wider: it can be used for any scenario that can be reduced to a MapReduce approach.1

                  "}, {"location": "multi_job_submission/#the-worker-framework-parameter-sweeps", "title": "The worker Framework: Parameter Sweeps", "text": "

                  First go to the right directory:

                  cd ~/examples/Multi-job-submission/par_sweep\n

                  Suppose the user wishes to run the \"weather\" program, which takes three parameters: a temperature, a pressure and a volume. A typical call of the program looks like:

                  $ ./weather -t 20 -p 1.05 -v 4.3\nT: 20  P: 1.05  V: 4.3\n

                  For the purpose of this exercise, the weather program is just a simple bash script, which prints the 3 variables to the standard output and waits a bit:

                  par_sweep/weather
                  #!/bin/bash\n# Here you could do your calculations\necho \"T: $2  P: $4  V: $6\"\nsleep 100\n

                  A job script that would run this as a job for the first parameters (p01) would then look like:

                  par_sweep/weather_p01.pbs
                  #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=01:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t 20 -p 1.05 -v 4.3\n

                  When submitting this job, the calculation is performed for this particular instance of the parameters, i.e., temperature = 20, pressure = 1.05, and volume = 4.3.

                  To submit the job, the user would use:

                   $ qsub weather_p01.pbs\n
                  However, the user wants to run this program for many parameter instances, e.g., on 100 instances of temperature, pressure and volume. The 100 parameter instances can be stored in a comma-separated value file (.csv), which can be generated using a spreadsheet program such as Microsoft Excel, an RDBMS, or just by hand using any text editor (do not use a word processor such as Microsoft Word). The first few lines of the file \"data.csv\" would look like:

                  $ more data.csv\ntemperature, pressure, volume\n293, 1.0e5, 107\n294, 1.0e5, 106\n295, 1.0e5, 105\n296, 1.0e5, 104\n297, 1.0e5, 103\n...\n

                  It has to contain the names of the variables on the first line, followed by 100 parameter instances in the current example.

                  In order to make our PBS generic, the PBS file can be modified as follows:

                  par_sweep/weather.pbs
                  #!/bin/bash\n\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\n\ncd $PBS_O_WORKDIR\n./weather -t $temperature -p $pressure -v $volume\n\n# # This script is submitted to the cluster with the following 2 commands:\n# module load worker/1.6.12-foss-2021b\n# wsub -data data.csv -batch weather.pbs\n

                  Note that:

                  1. the parameter values 20, 1.05, 4.3 have been replaced by the variables $temperature, $pressure and $volume respectively, which are specified on the first line of the \"data.csv\" file;

                  2. the number of processors per node has been increased to 8 (i.e., ppn=1 is replaced by ppn=8);

                  3. the walltime has been increased to 4 hours (i.e., walltime=00:15:00 is replaced by walltime=04:00:00).

                  The walltime is calculated as follows: one calculation takes 15 minutes, so 100 calculations take 1500 minutes on one CPU. However, this job will use 8 CPUs, so the 100 calculations will be done in 1500/8 = 187.5 minutes, which we round up to 4 hours to be on the safe side.

                  The job can now be submitted as follows (to check which worker module to use, see subsection Using explicit version numbers):

                  $ module load worker/1.6.12-foss-2021b\n$ wsub -batch weather.pbs -data data.csv\ntotal number of work items: 100\n123456\n

                  Note that the PBS file is the value of the -batch option. The weather program will now be run for all 100 parameter instances -- 8 concurrently -- until all computations are done. A computation for such a parameter instance is called a work item in Worker parlance.

                  Warning

                  When you attempt to submit a worker job on a non-default cluster, you might encounter an Illegal instruction error. In such cases, the solution is to use a different module swap command. For example, to submit a worker job to the donphan debug cluster from the login nodes, use:

                  module swap env/slurm/donphan\n

                  instead of

                  module swap cluster/donphan\n
                  We recommend using a module swap cluster command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed here.

                  "}, {"location": "multi_job_submission/#the-worker-framework-job-arrays", "title": "The Worker framework: Job arrays", "text": "

                  First go to the right directory:

                  cd ~/examples/Multi-job-submission/job_array\n

                  As a simple example, assume you have a serial program called myprog that you want to run on various input files input[1-100].

                  The following bash script would submit these jobs all one by one:

                  #!/bin/bash\nfor i in `seq 1 100`; do\nqsub -o output$i -i input$i myprog.pbs\ndone\n

                  This, as said before, would put an unnecessary burden on the job scheduler.

                  Alternatively, TORQUE provides a feature known as job arrays which allows the creation of multiple, similar jobs with only one qsub command. This feature introduced a new job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

                  Under TORQUE, the -t range option is used with qsub to specify a job array, where range is a range of numbers (e.g., 1-100 or 2,4-5,7).

                  The details are

                  1. a job is submitted for each number in the range;

                  2. individual jobs are referenced as jobid-number, and the entire array can be referenced as jobid, for easy killing etc.; and

                  3. each job has PBS_ARRAYID set to its number, which allows the script/program to specialise for that job.

                  The job could have been submitted using:

                  qsub -t 1-100 my_prog.pbs\n

                  The effect was that rather than 1 job, the user would actually submit 100 jobs to the queue system. This was a popular feature of TORQUE, but as this technique puts quite a burden on the scheduler, it is not supported by Moab (the current job scheduler).

                  To support those users who used the feature and since it offers a convenient workflow, the \"worker framework\" implements the idea of \"job arrays\" in its own way.

                  A typical job script for use with job arrays would look like this:

                  job_array/job_array.pbs
                  #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:15:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\nmy_prog -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

                  In our specific example, we have prefabricated 100 input files in the \"./input\" subdirectory. Each of those files contains a number of parameters for the \"test_set\" program, which will perform some tests with those parameters.

                  Input for the program is stored in files with names such as input_1.dat, input_2.dat, ..., input_100.dat in the ./input subdirectory.

                  $ ls ./input\n...\n$ more ./input/input_99.dat\nThis is input file #99\nParameter #1 = 99\nParameter #2 = 25.67\nParameter #3 = Batch\nParameter #4 = 0x562867\n

                  For the sole purpose of this exercise, we have provided a short \"test_set\" program, which reads the \"input\" files and just copies them into a corresponding output file. We even add a few lines to each output file. The corresponding output computed by our \"test_set\" program will be written to the \"./output\" directory, in the files output_1.dat, output_2.dat, ..., output_100.dat.

                  job_array/test_set
                  #!/bin/bash\n\n# Check if the output Directory exists\nif [ ! -d \"./output\" ] ; then\nmkdir ./output\nfi\n\n#   Here you could do your calculations...\necho \"This is Job_array #\" $1\necho \"Input File : \" $3\necho \"Output File: \" $5\ncat ./input/$3 | sed -e \"s/input/output/g\" | grep -v \"Parameter\" > ./output/$5\necho \"Calculations done, no results\" >> ./output/$5\n

                  Using the \"worker framework\", a feature akin to job arrays can be used with minimal modifications to the job script:

                  job_array/test_set.pbs
                  #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\ncd $PBS_O_WORKDIR\nINPUT_FILE=\"input_${PBS_ARRAYID}.dat\"\nOUTPUT_FILE=\"output_${PBS_ARRAYID}.dat\"\n./test_set ${PBS_ARRAYID} -input ${INPUT_FILE}  -output ${OUTPUT_FILE}\n

                  Note that

                  1. the number of CPUs is increased to 8 (ppn=1 is replaced by ppn=8); and

                  2. the walltime has been modified (walltime=00:15:00 is replaced by walltime=04:00:00).

                  The job is now submitted as follows:

                  $ module load worker/1.6.12-foss-2021b\n$ wsub -t 1-100 -batch test_set.pbs\ntotal number of work items: 100\n123456\n

                  The \"test_set\" program will now be run for all 100 input files -- 8 concurrently -- until all computations are done. Again, a computation for an individual input file, or, equivalently, an array id, is called a work item in Worker speak.

                  Note that in contrast to TORQUE job arrays, a worker job array only submits a single job.

                  $ qstat\nJob id          Name          User      Time   Use S Queue\n--------------- ------------- --------- ---- ----- - -----\n123456  test_set.pbs  vsc40000          0 Q\n

                  And you can now check the generated output files:

                  $ more ./output/output_99.dat\nThis is output file #99\nCalculations done, no results\n
                  "}, {"location": "multi_job_submission/#mapreduce-prologues-and-epilogue", "title": "MapReduce: prologues and epilogue", "text": "

                  Often, an embarrassingly parallel computation can be abstracted to three simple steps:

                  1. a preparation phase in which the data is split up into smaller, more manageable chunks;

                  2. on these chunks, the same algorithm is applied independently (these are the work items); and

                  3. the results of the computations on those chunks are aggregated into, e.g., a statistical description of some sort.

                  The Worker framework directly supports this scenario by using a prologue (pre-processing) and an epilogue (post-processing). The former is executed just once before work is started on the work items, the latter is executed just once after the work on all work items has finished. Technically, the master, i.e., the process that is responsible for dispatching work and logging progress, executes the prologue and epilogue.

                  cd ~/examples/Multi-job-submission/map_reduce\n

                  The script \"pre.sh\" prepares the data by creating 100 different input-files, and the script \"post.sh\" aggregates (concatenates) the data.

                  First study the scripts:

                  map_reduce/pre.sh
                  #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./input\" ] ; then\nmkdir ./input\nfi\n\n# Just generate all dummy input files\nfor i in {1..100}; do\necho \"This is input file #$i\" > ./input/input_$i.dat\necho \"Parameter #1 = $i\" >> ./input/input_$i.dat\necho \"Parameter #2 = 25.67\" >> ./input/input_$i.dat\necho \"Parameter #3 = Batch\" >> ./input/input_$i.dat\necho \"Parameter #4 = 0x562867\" >> ./input/input_$i.dat\ndone\n
                  map_reduce/post.sh
                  #!/bin/bash\n\n# Check if the input Directory exists\nif [ ! -d \"./output\" ] ; then\necho \"The output directory does not exist!\"\nexit\nfi\n\n# Just concatenate all output files\ntouch all_output.txt\nfor i in {1..100}; do\ncat ./output/output_$i.dat >> all_output.txt\ndone\n

                  Then one can submit a MapReduce style job as follows:

                  $ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100\ntotal number of work items: 100\n123456\n$ cat all_output.txt\n...\n$ rm -r -f ./output/\n

                  Note that the time taken for executing the prologue and the epilogue should be added to the job's total walltime.

                  "}, {"location": "multi_job_submission/#some-more-on-the-worker-framework", "title": "Some more on the Worker Framework", "text": ""}, {"location": "multi_job_submission/#using-worker-efficiently", "title": "Using Worker efficiently", "text": "

                  The \"Worker Framework\" is implemented using MPI, so it is not restricted to a single compute nodes, it scales well to multiple nodes. However, remember that jobs requesting a large number of nodes typically spend quite some time in the queue.

                  The \"Worker Framework\" will be effective when

                  1. work items, i.e., individual computations, are neither too short, nor too long (i.e., from a few minutes to a few hours); and,

                  2. when the number of work items is larger than the number of CPUs involved in the job (e.g., more than 30 for 8 CPUs).

                  "}, {"location": "multi_job_submission/#monitoring-a-worker-job", "title": "Monitoring a worker job", "text": "

                  Since a Worker job will typically run for several hours, it may be reassuring to monitor its progress. Worker keeps a log of its activity in the directory where the job was submitted. The log's name is derived from the job's name and the job's ID, i.e., it has the form <jobname>.log<jobid>. For the running example, this could be run.pbs.log123456, assuming the job's ID is 123456. To keep an eye on the progress, one can use:

                  tail -f run.pbs.log123456\n

                  Alternatively, wsummarize, a Worker command that summarises a log file, can be used:

                  watch -n 60 wsummarize run.pbs.log123456\n

                  This will summarise the log file every 60 seconds.

                  "}, {"location": "multi_job_submission/#time-limits-for-work-items", "title": "Time limits for work items", "text": "

                  Sometimes, the execution of a work item takes longer than expected, or worse, some work items get stuck in an infinite loop. This situation is unfortunate, since it implies that work items that could successfully execute are not even started. Again, the Worker framework offers a simple and yet versatile solution. If we want to limit the execution of each work item to at most 20 minutes, this can be accomplished by modifying the script of the running example.

                  #!/bin/bash -l\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=04:00:00\nmodule load timedrun/1.0\ncd $PBS_O_WORKDIR\ntimedrun -t 00:20:00 weather -t $temperature  -p $pressure  -v $volume\n

                  Note that it is trivial to set individual time constraints for work items by introducing a parameter, and including the values of the latter in the CSV file, along with those for the temperature, pressure and volume.

                  Also note that \"timedrun\" is in fact offered in a module of its own, so it can be used outside the Worker framework as well.

                  "}, {"location": "multi_job_submission/#resuming-a-worker-job", "title": "Resuming a Worker job", "text": "

                  Unfortunately, walltime is sometimes underestimated. When using the Worker framework, this implies that not all work items will have been processed. Worker makes it very easy to resume such a job without having to figure out which work items did complete successfully, and which remain to be computed. Suppose the job that did not complete all its work items had ID \"123456\".

                  wresume -jobid 123456\n

                  This will submit a new job that will start to work on the work items that were not done yet. Note that it is possible to change almost all job parameters when resuming, specifically the requested resources such as the number of cores and the walltime.

                  wresume -l walltime=1:30:00 -jobid 123456\n

                  Work items may fail to complete successfully for a variety of reasons, e.g., a data file that is missing, a (minor) programming error, etc. Upon resuming a job, the work items that failed are considered to be done, so resuming a job will only execute work items that did not terminate at all, either successfully or with a failure. It is also possible to retry work items that failed (preferably after the glitch that caused them to fail has been fixed).

                  wresume -jobid 123456 -retry\n

                  By default, a job's prologue is not executed when it is resumed, while its epilogue is. \"wresume\" has options to modify this default behaviour.

                  "}, {"location": "multi_job_submission/#further-information", "title": "Further information", "text": "

                  This how-to introduces only Worker's basic features. The wsub command has some usage information that is printed when the -help option is specified:

                  $ wsub -help\n### usage: wsub  -batch <batch-file>          \n#                [-data <data-files>]         \n#                [-prolog <prolog-file>]      \n#                [-epilog <epilog-file>]      \n#                [-log <log-file>]            \n#                [-mpiverbose]                \n#                [-dryrun] [-verbose]         \n#                [-quiet] [-help]             \n#                [-t <array-req>]             \n#                [<pbs-qsub-options>]\n#\n#   -batch <batch-file>   : batch file template, containing variables to be\n#                           replaced with data from the data file(s) or the\n#                           PBS array request option\n#   -data <data-files>    : comma-separated list of data files (default CSV\n#                           files) used to provide the data for the work\n#                           items\n#   -prolog <prolog-file> : prolog script to be executed before any of the\n#                           work items are executed\n#   -epilog <epilog-file> : epilog script to be executed after all the work\n#                           items are executed\n#   -mpiverbose           : pass verbose flag to the underlying MPI program\n#   -verbose              : feedback information is written to standard error\n#   -dryrun               : run without actually submitting the job, useful\n#   -quiet                : don't show information\n#   -help                 : print this help message\n#   -t <array-req>        : qsub's PBS array request options, e.g., 1-10\n#   <pbs-qsub-options>    : options passed on to the queue submission\n#                           command\n
                  "}, {"location": "multi_job_submission/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "multi_job_submission/#error-an-orte-daemon-has-unexpectedly-failed-after-launch-and-before-communicating-back-to-mpirun", "title": "Error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun", "text": "

                  When submitting a Worker job, you might encounter the following error: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun. This error can occur when the foss toolchain version of worker is loaded. Instead, try loading an iimpi toolchain version of worker.

                  To check for the available versions of worker, use the following command:

                  $ module avail worker\n
                  1. MapReduce: 'Map' refers to the map pattern in which every item in a collection is mapped onto a new value by applying a given function, while \"reduce\" refers to the reduction pattern which condenses or reduces a collection of previously computed results to a single value.\u00a0\u21a9

                  "}, {"location": "mympirun/", "title": "Mympirun", "text": "

                  mympirun is a tool to make it easier for users of HPC clusters to run MPI programs with good performance. We strongly recommend using mympirun instead of mpirun.

                  In this chapter, we give a high-level overview. For a more detailed description of all options, see the vsc-mympirun README.

                  "}, {"location": "mympirun/#basic-usage", "title": "Basic usage", "text": "

                  Before using mympirun, we first need to load its module:

                  module load vsc-mympirun\n

                  As an exception, we don't specify a version here. The reason is that we want to ensure that the latest version of the mympirun script is always used, since it may include important bug fixes or improvements.

                  The most basic form of using mympirun is mympirun [mympirun options] your_program [your_program options].

                  For example, to run a program named example and give it a single argument (5), we can run it with mympirun example 5.

                  "}, {"location": "mympirun/#controlling-number-of-processes", "title": "Controlling number of processes", "text": "

                  There are four options you can choose from to control the number of processes mympirun will start. In the following example, the program mpi_hello prints a single line: Hello world from processor <node> ... (the source code of mpi_hello is available in the vsc-mympirun repository).

                  By default, mympirun starts one process per core on every node you assigned. So if you assigned 2 nodes with 16 cores each, mympirun will start 2 x 16 = 32 processes in total.

                  "}, {"location": "mympirun/#-hybrid-h", "title": "--hybrid/-h", "text": "

                  This is the most commonly used option for controlling the number of processes.

                  The --hybrid option requires a positive number. This number specifies the number of processes started on each available physical node. It will ignore the number of available cores per node.

                  $ echo $PBS_NUM_NODES\n2\n$ mympirun --hybrid 2 ./mpi_hello\nHello world from processor node3400.doduo.os, rank 1 out of 4 processors \nHello world from processor node3401.doduo.os, rank 3 out of 4 processors \nHello world from processor node3401.doduo.os, rank 2 out of 4 processors \nHello world from processor node3400.doduo.os, rank 0 out of 4 processors\n
                  "}, {"location": "mympirun/#other-options", "title": "Other options", "text": "

                  There's also --universe, which sets the exact number of processes started by mympirun; --double, which uses twice the number of processes it normally would; and --multi, which does the same as --double but takes an arbitrary multiplier (instead of the implied factor 2 of --double).

                  See vsc-mympirun README for a detailed explanation of these options.

                  "}, {"location": "mympirun/#dry-run", "title": "Dry run", "text": "

                  You can do a so-called \"dry run\", which doesn't have any side-effects, but just prints the command that mympirun would execute. You enable this with the --dry-run flag:

                  $ mympirun --dry-run ./mpi_hello\nmpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello\n
                  "}, {"location": "openFOAM/", "title": "OpenFOAM", "text": "

                  In this chapter, we outline best practices for using the centrally provided OpenFOAM installations on the VSC HPC infrastructure.

                  "}, {"location": "openFOAM/#different-openfoam-releases", "title": "Different OpenFOAM releases", "text": "

                  There are currently three different sets of versions of OpenFOAM available, each with its own versioning scheme:

                  • OpenFOAM versions released via http://openfoam.com: v3.0+, v1706

                    • see also http://openfoam.com/history/
                  • OpenFOAM versions released via https://openfoam.org: v4.1, v5.0

                    • see also https://openfoam.org/download/history/
                  • OpenFOAM versions released via http://wikki.gridcore.se/foam-extend: v3.1

                  Make sure you know which flavor of OpenFOAM you want to use, since there are important differences between the different versions w.r.t. features. If the OpenFOAM version you need is not available yet, see I want to use software that is not available on the clusters yet.

                  "}, {"location": "openFOAM/#documentation-training-material", "title": "Documentation & training material", "text": "

                  The best practices outlined here focus specifically on the use of OpenFOAM on the VSC HPC infrastructure. As such, they are intended to augment the existing OpenFOAM documentation rather than replace it. For more general information on using OpenFOAM, please refer to:

                  • OpenFOAM websites:

                    • https://openfoam.com

                    • https://openfoam.org

                    • http://wikki.gridcore.se/foam-extend

                  • OpenFOAM user guides:

                    • https://www.openfoam.com/documentation/user-guide

                    • https://cfd.direct/openfoam/user-guide/

                  • OpenFOAM C++ source code guide: https://cpp.openfoam.org

                  • tutorials: https://wiki.openfoam.com/Tutorials

                  • recordings of \"Introduction to OpenFOAM\" training session at UGent (May 2016): https://www.youtube.com/playlist?list=PLqxhJj6bcnY9RoIgzeF6xDh5L9bbeK3BL

                  Other useful OpenFOAM documentation:

                  • https://github.com/ParticulateFlow/OSCCAR-doc/blob/master/openFoamUserManual_PFM.pdf

                  • http://www.dicat.unige.it/guerrero/openfoam.html

                  "}, {"location": "openFOAM/#preparing-the-environment", "title": "Preparing the environment", "text": "

                  To prepare the environment of your shell session or job for using OpenFOAM, there are a couple of things to take into account.

                  "}, {"location": "openFOAM/#picking-and-loading-an-openfoam-module", "title": "Picking and loading an OpenFOAM module", "text": "

                  First of all, you need to pick and load one of the available OpenFOAM modules. To get an overview of the available modules, run 'module avail OpenFOAM'. For example:

                  $ module avail OpenFOAM\n------------------ /apps/gent/CO7/sandybridge/modules/all ------------------\n   OpenFOAM/v1712-foss-2017b     OpenFOAM/4.1-intel-2017a\n   OpenFOAM/v1712-intel-2017b    OpenFOAM/5.0-intel-2017a\n   OpenFOAM/2.2.2-intel-2017a    OpenFOAM/5.0-intel-2017b\n   OpenFOAM/2.2.2-intel-2018a    OpenFOAM/5.0-20180108-foss-2018a\n   OpenFOAM/2.3.1-intel-2017a    OpenFOAM/5.0-20180108-intel-2017b\n   OpenFOAM/2.4.0-intel-2017a    OpenFOAM/5.0-20180108-intel-2018a\n   OpenFOAM/3.0.1-intel-2016b    OpenFOAM/6-intel-2018a            (D)\n   OpenFOAM/4.0-intel-2016b\n

                  To pick a module, take into account the differences between the different OpenFOAM versions w.r.t. features and API (see also Different OpenFOAM releases). If multiple modules are available that fulfill your requirements, give preference to those providing a more recent OpenFOAM version, and to the ones that were installed with a more recent compiler toolchain; for example, prefer a module that includes intel-2024b in its name over one that includes intel-2024a.

                  To prepare your environment for using OpenFOAM, load the OpenFOAM module you have picked; for example:

                  module load OpenFOAM/11-foss-2023a\n
                  "}, {"location": "openFOAM/#sourcing-the-foam_bash-script", "title": "Sourcing the $FOAM_BASH script", "text": "

                  OpenFOAM provides a script that you should source to further prepare the environment. This script will define some additional environment variables that are required to use OpenFOAM. The OpenFOAM modules define an environment variable named FOAM_BASH that specifies the location of this script. Assuming you are using bash in your shell session or job script, you should always run the following command after loading an OpenFOAM module:

                  source $FOAM_BASH\n
                  "}, {"location": "openFOAM/#defining-utility-functions-used-in-tutorial-cases", "title": "Defining utility functions used in tutorial cases", "text": "

                  If you would like to use the getApplication, runApplication, runParallel, cloneCase and/or compileApplication functions that are used in OpenFOAM tutorials, you also need to source the RunFunctions script:

                  source $WM_PROJECT_DIR/bin/tools/RunFunctions\n

                  Note that this needs to be done after sourcing $FOAM_BASH to make sure $WM_PROJECT_DIR is defined.
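
                   Putting these steps together, a typical environment-preparation sequence looks as follows (the module version shown is only an example; check module avail OpenFOAM for what is actually installed):

                   module load OpenFOAM/11-foss-2023a\nsource $FOAM_BASH\n# optional, only needed for the tutorial helper functions\nsource $WM_PROJECT_DIR/bin/tools/RunFunctions\n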

                  "}, {"location": "openFOAM/#dealing-with-floating-point-errors", "title": "Dealing with floating-point errors", "text": "

                  If you are seeing Floating Point Exception errors, you can undefine the $FOAM_SIGFPE environment variable that is defined by the $FOAM_BASH script as follows:

                   unset FOAM_SIGFPE\n

                   Note that this only prevents OpenFOAM from trapping floating-point exceptions, which would otherwise terminate the simulation. It does not prevent illegal operations (like a division by zero) from being executed; if NaN values appear in your results, floating-point errors are occurring.

                  As such, you should not use this in production runs. Instead, you should track down the root cause of the floating point errors, and try to prevent them from occurring at all.

                  "}, {"location": "openFOAM/#openfoam-workflow", "title": "OpenFOAM workflow", "text": "

                  The general workflow for OpenFOAM consists of multiple steps. Prior to running the actual simulation, some pre-processing needs to be done:

                  • generate the mesh;

                  • decompose the domain into subdomains using decomposePar (only for parallel OpenFOAM simulations);

                  After running the simulation, some post-processing steps are typically performed:

                  • reassemble the decomposed domain using reconstructPar (only for parallel OpenFOAM simulations, and optional since some postprocessing can also be done on decomposed cases);

                  • evaluate or further process the simulation results, either visually using ParaView (for example, via the paraFoam tool; use paraFoam -builtin for decomposed cases) or using command-line tools like postProcess; see also https://cfd.direct/openfoam/user-guide/postprocessing.

                   Depending on the size of the domain and the desired format of the results, these pre- and post-processing steps can be run either before/after the job that runs the actual simulation (on the HPC infrastructure or elsewhere), or as part of the job that runs the OpenFOAM simulation itself.

                  Do make sure you are using the same OpenFOAM version in each of the steps. Meshing can be done sequentially (i.e., on a single core) using for example blockMesh, or in parallel using more advanced meshing tools like snappyHexMesh, which is highly recommended for large cases. For more details, see https://cfd.direct/openfoam/user-guide/mesh/.

                  One important aspect to keep in mind for 'offline' pre-processing is that the domain decomposition needs to match the number of processor cores that are used for the actual simulation, see also Domain decomposition and number of processor cores.

                  For post-processing you can either download the simulation results to a local workstation, or do the post-processing (interactively) on the HPC infrastructure, for example on the login nodes or using an interactive session on a workernode. This may be interesting to avoid the overhead of downloading the results locally.
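
                   As a rough sketch of the full workflow for a parallel simulation (using interFoam purely as an example solver; mympirun is discussed in the next section):

                   # pre-processing\nblockMesh\ndecomposePar\n# parallel simulation (see 'Running OpenFOAM in parallel' below)\nmympirun interFoam -parallel\n# post-processing\nreconstructPar\n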

                  "}, {"location": "openFOAM/#running-openfoam-in-parallel", "title": "Running OpenFOAM in parallel", "text": "

                  For general information on running OpenFOAM in parallel, see https://cfd.direct/openfoam/user-guide/running-applications-parallel/.

                  "}, {"location": "openFOAM/#the-parallel-option", "title": "The -parallel option", "text": "

                  When running OpenFOAM in parallel, do not forget to specify the -parallel option, to avoid running the same OpenFOAM simulation $N$ times, rather than running it once using $N$ processor cores.

                   You can check whether OpenFOAM was run in parallel in the output of the main command: the OpenFOAM header text should only be included once in the output, and it should specify a value different from '1' in the nProcs field. Note that most pre- and post-processing utilities like blockMesh, decomposePar and reconstructPar cannot be run in parallel.
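
                   For example, assuming the solver output was written to a file named interFoam.out (the file name is just an assumption), you can verify that the run was indeed parallel by inspecting the nProcs value in the OpenFOAM header:

                   grep nProcs interFoam.out\n# expected output for a 16-core run: nProcs : 16\n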

                  "}, {"location": "openFOAM/#using-mympirun", "title": "Using mympirun", "text": "

                   It is highly recommended to use the mympirun command when running parallel OpenFOAM simulations rather than the standard mpirun command; see Mympirun for more information on mympirun.

                  See Basic usage for how to get started with mympirun.

                  To pass down the environment variables required to run OpenFOAM (which were defined by the $FOAM_BASH script, see Preparing the environment) to each of the MPI processes used in a parallel OpenFOAM execution, the $MYMPIRUN_VARIABLESPREFIX environment variable must be defined as follows, prior to running the OpenFOAM simulation with mympirun:

                  export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n

                  Whenever you are instructed to use a command like mpirun -np <N> ..., use mympirun ... instead; mympirun will automatically detect the number of processor cores that are available (see also Controlling number of processes).
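
                   For example, where instructions mention mpirun -np 16 interFoam -parallel, you would simply run the following instead (after exporting $MYMPIRUN_VARIABLESPREFIX as shown above; interFoam is just an example solver):

                   mympirun interFoam -parallel\n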

                  "}, {"location": "openFOAM/#domain-decomposition-and-number-of-processor-cores", "title": "Domain decomposition and number of processor cores", "text": "

                  To run OpenFOAM in parallel, you must decompose the domain into multiple subdomains. Each subdomain will be processed by OpenFOAM on one processor core.

                  Since mympirun will automatically use all available cores, you need to make sure that the number of subdomains matches the number of processor cores that will be used by mympirun. If not, you may run into an error message like:

                  number of processor directories = 4 is not equal to the number of processors = 16\n

                   In this case, the case was decomposed into 4 subdomains, while the OpenFOAM simulation was started with 16 processes through mympirun. To match the number of subdomains and the number of processor cores used by mympirun, you should either:

                  • adjust the value for numberOfSubdomains in system/decomposeParDict (and adjust the value for n accordingly in the domain decomposition coefficients), and run decomposePar again; or

                   • submit your job requesting exactly the same number of processor cores as there are subdomains (see the number of processor* directories that were created by decomposePar); a quick way to check this is shown below.
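
                   A quick way to compare the number of subdomains with the number of processor cores assigned to your job is sketched below (the $PBS_NODEFILE check assumes you are inside a TORQUE batch job):

                   # number of subdomains created by decomposePar\nls -d processor* | wc -l\n# value requested in the decomposition settings\ngrep numberOfSubdomains system/decomposeParDict\n# number of cores assigned to the current job\nwc -l < $PBS_NODEFILE\n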

                   See Controlling number of processes to control the number of processes mympirun will start.

                  This is interesting if you require more memory per core than is available by default. Note that the decomposition method being used (which is specified in system/decomposeParDict) has significant impact on the performance of a parallel OpenFOAM simulation. Good decomposition methods (like metis or scotch) try to limit communication overhead by minimising the number of processor boundaries.

                  To visualise the processor domains, use the following command:

                  mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(\".*.\")'\n

                  and then load the VTK files generated in the VTK folder into ParaView.

                  "}, {"location": "openFOAM/#running-openfoam-on-a-shared-filesystem", "title": "Running OpenFOAM on a shared filesystem", "text": "

                  OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files (see also http://www.prace-ri.eu/IMG/pdf/IO-profiling_with_Darshan-2.pdf).

                  Take into account the following guidelines for your OpenFOAM jobs, which all relate to input parameters for the OpenFOAM simulation that you can specify in system/controlDict (see also https://cfd.direct/openfoam/user-guide/controldict).

                   • instruct OpenFOAM to write out results at a reasonable frequency, certainly not for every single time step; you can control this using the writeControl, writeInterval, etc.\u00a0keywords;

                  • consider only retaining results for the last couple of time steps, see the purgeWrite keyword;

                   • consider writing results for only part of the domain (e.g., a line or plane) rather than the entire domain;

                   • if you do not plan to change the parameters of the OpenFOAM simulation while it is running, set runTimeModifiable to false to avoid OpenFOAM re-reading each of the system/*Dict files at every time step;

                   • if the results per individual time step are large, consider setting writeCompression to true. A sketch of how these settings can be adjusted is shown below.
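
                   As a sketch of how these settings could be adjusted from the command line (assuming your OpenFOAM version provides the foamDictionary utility; the values shown are only examples, and you can of course also edit system/controlDict directly):

                   foamDictionary -entry writeControl -set timeStep system/controlDict\nfoamDictionary -entry writeInterval -set 100 system/controlDict\nfoamDictionary -entry purgeWrite -set 2 system/controlDict\nfoamDictionary -entry runTimeModifiable -set false system/controlDict\nfoamDictionary -entry writeCompression -set true system/controlDict\n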

                   For modest OpenFOAM simulations where a single workernode suffices, consider using the local disk of the workernode as working directory (accessible via $VSC_SCRATCH_NODE), rather than the shared $VSC_SCRATCH filesystem. Certainly do not use a subdirectory in $VSC_HOME or $VSC_DATA, since these shared filesystems are too slow for these types of workloads.

                  For large parallel OpenFOAM simulations on the UGent Tier-2 clusters, consider using the alternative shared scratch filesystem $VSC_SCRATCH_ARCANINE (see Pre-defined user directories).

                   These guidelines are especially important for large-scale OpenFOAM simulations that involve more than a couple of dozen processor cores.

                  "}, {"location": "openFOAM/#using-own-solvers-with-openfoam", "title": "Using own solvers with OpenFOAM", "text": "

                  See https://cfd.direct/openfoam/user-guide/compiling-applications/.

                  "}, {"location": "openFOAM/#example-openfoam-job-script", "title": "Example OpenFOAM job script", "text": "

                  Example job script for damBreak OpenFOAM tutorial (see also https://cfd.direct/openfoam/user-guide/dambreak):

                  OpenFOAM_damBreak.sh
                  #!/bin/bash\n#PBS -l walltime=1:0:0\n#PBS -l nodes=1:ppn=4\n# check for more recent OpenFOAM modules with 'module avail OpenFOAM'\nmodule load OpenFOAM/6-intel-2018a\nsource $FOAM_BASH\n# purposely not specifying a particular version to use most recent mympirun\nmodule load vsc-mympirun\n# let mympirun pass down relevant environment variables to MPI processes\nexport MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI\n# set up working directory\n# (uncomment one line defining $WORKDIR below)\n#export WORKDIR=$VSC_SCRATCH/$PBS_JOBID  # for small multi-node jobs\n#export WORKDIR=$VSC_SCRATCH_ARCANINE/$PBS_JOBID  # for large multi-node jobs (not on available victini)\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID  # for single-node jobs\nmkdir -p $WORKDIR\n# damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak\ncp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR\ncd $WORKDIR/damBreak\necho \"working directory: $PWD\"\n# pre-processing: generate mesh\necho \"start blockMesh: $(date)\"\nblockMesh &> blockMesh.out\n# pre-processing: decompose domain for parallel processing\necho \"start decomposePar: $(date)\"\ndecomposePar &> decomposePar.out\n# run OpenFOAM simulation in parallel\n# note:\n#  * the -parallel option is strictly required to actually run in parallel!\n#    without it, the simulation is run N times on a single core...\n#  * mympirun will use all available cores in the job by default,\n#    you need to make sure this matches the number of subdomains!\necho \"start interFoam: $(date)\"\nmympirun --output=interFoam.out interFoam -parallel\n# post-processing: reassemble decomposed domain\necho \"start reconstructPar: $(date)\"\nreconstructPar &> reconstructPar.out\n# copy back results, i.e. all time step directories: 0, 0.05, ..., 1.0 and inputs\nexport RESULTS_DIR=$VSC_DATA/results/$PBS_JOBID\nmkdir -p $RESULTS_DIR\ncp -a *.out [0-9.]* constant system $RESULTS_DIR\necho \"results copied to $RESULTS_DIR at $(date)\"\n# clean up working directory\ncd $HOME\nrm -rf $WORKDIR\n
                  "}, {"location": "program_examples/", "title": "Program examples", "text": "

                   If you have not done so already, copy our examples to your home directory by running the following command:

                   cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                   ~ (tilde) refers to your home directory, the directory you arrive in by default when you log in.

                  Go to our examples:

                  cd ~/examples/Program-examples\n

                   Here, we have put together a number of examples for your convenience. We made an effort to include comments in the source files, so the source code files are (or should be) self-explanatory.

                  1. 01_Python

                  2. 02_C_C++

                  3. 03_Matlab

                  4. 04_MPI_C

                  5. 05a_OMP_C

                  6. 05b_OMP_FORTRAN

                  7. 06_NWChem

                  8. 07_Wien2k

                  9. 08_Gaussian

                  10. 09_Fortran

                  11. 10_PQS

                   The two OMP directories (05a_OMP_C and 05b_OMP_FORTRAN) contain the following examples:

                   Each example exists both as a C file and as a Fortran file:

                   • omp_hello.c / omp_hello.f: Hello world
                   • omp_workshare1.c / omp_workshare1.f: Loop work-sharing
                   • omp_workshare2.c / omp_workshare2.f: Sections work-sharing
                   • omp_reduction.c / omp_reduction.f: Combined parallel loop reduction
                   • omp_orphan.c / omp_orphan.f: Orphaned parallel loop reduction
                   • omp_mm.c / omp_mm.f: Matrix multiply
                   • omp_getEnvInfo.c / omp_getEnvInfo.f: Get and print environment information
                   • omp_bug* / omp_bug*: Programs with bugs and their solution

                  Compile by any of the following commands:

                   • C: icc -openmp omp_hello.c -o hello, pgcc -mp omp_hello.c -o hello, or gcc -fopenmp omp_hello.c -o hello
                   • Fortran: ifort -openmp omp_hello.f -o hello, pgf90 -mp omp_hello.f -o hello, or gfortran -fopenmp omp_hello.f -o hello
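
                   For example, to compile and run the C hello world example with the GNU compiler using 4 threads (assuming a GCC compiler is available in your environment; the thread count is only an example):

                   gcc -fopenmp omp_hello.c -o hello\nexport OMP_NUM_THREADS=4\n./hello\n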

                   Feel free to explore the examples.

                  "}, {"location": "quick_reference_guide/", "title": "HPC Quick Reference Guide", "text": "

                   Remember to substitute the usernames, login nodes, file names, ... with your own.

                   Login:

                   • Login: ssh vsc40000@login.hpc.ugent.be
                   • Where am I?: hostname
                   • Copy to HPC: scp foo.txt vsc40000@login.hpc.ugent.be:
                   • Copy from HPC: scp vsc40000@login.hpc.ugent.be:foo.txt
                   • Set up ftp session: sftp vsc40000@login.hpc.ugent.be

                   Modules:

                   • List all available modules: module avail
                   • List loaded modules: module list
                   • Load module: module load example
                   • Unload module: module unload example
                   • Unload all modules: module purge
                   • Help on use of module: module help

                   Jobs:

                   • qsub script.pbs: Submit job with job script script.pbs
                   • qstat 12345: Status of job with ID 12345
                   • qstat -n 12345: Show compute node of job with ID 12345
                   • qdel 12345: Delete job with ID 12345
                   • qstat: Status of all your jobs
                   • qstat -na: Detailed status of your jobs + a list of nodes they are running on
                   • qsub -I: Submit interactive job

                   Disk quota:

                   • Check your disk quota: see https://account.vscentrum.be
                   • Disk usage in current directory (.): du -h

                   Worker Framework:

                   • Load worker module: module load worker/1.6.12-foss-2021b (don't forget to specify a version; to list available versions, use module avail worker/)
                   • Submit parameter sweep: wsub -batch weather.pbs -data data.csv
                   • Submit job array: wsub -t 1-100 -batch test_set.pbs
                   • Submit job array with prolog and epilog: wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100
                   "}, {"location": "rhel9/", "title": "Migration to RHEL 9 operating system (Tier-2)", "text": "

                  Starting September 2024 we will gradually migrate the HPC-UGent Tier-2 clusters that are using RHEL 8 as operating system (OS) to RHEL 9 (Red Hat Enterprise Linux 9). This includes clusters skitty, joltik, doduo, accelgor, donphan and gallade (see also the infrastructure overview), as well as switching the Tier-2 login nodes to new ones running RHEL 9.

                  "}, {"location": "rhel9/#motivation", "title": "Motivation", "text": "

                   Migrating to RHEL 9 is done to bring all clusters in line with the most recent cluster, which is already running RHEL 9 (shinx).

                  This makes the maintenance of the HPC-UGent Tier-2 infrastructure significantly easier, since we only need to take into account a single operating system version going forward.

                  It will also bring you the latest versions in operating system software, with more features, performance improvements, and enhanced security.

                  "}, {"location": "rhel9/#login_nodes_impact", "title": "Impact on the HPC-UGent Tier-2 login nodes", "text": "

                   As a general rule, the OS of the login node should match the OS of the cluster you are running on. To make this more transparent, you will be warned when loading a cluster module for a cluster that is running an OS different from that of the login node you are on.

                  For example, on the current login nodes (gligar07 + gligar08) which are still using RHEL 8, you will see a warning like:

                  $ module swap cluster/shinx\n...\nWe advise you to log in to a RHEL 9 login node when using the shinx cluster.\nThe shinx cluster is using RHEL 9 as operating system,\nwhile the login node you are logged in to is using RHEL 8.\nTo avoid problems with testing installed software or submitting jobs,\nit is recommended to switch to a RHEL 9 login node by running 'ssh login9'.\n

                   Initially, there will be only one RHEL 9 login node. A second one will be added as needed.

                   When the default cluster (doduo) is migrated to RHEL 9, the corresponding login nodes will also become the default when you log in via login.hpc.ugent.be. When they are no longer needed, the RHEL 8 login nodes will be shut down.

                  "}, {"location": "rhel9/#login_nodes_limits", "title": "User limits (CPU time, memory, ...)", "text": "

                  To encourage only using the login nodes as an entry point to the HPC-UGent infrastructure, user limits will be enforced on the RHEL 9 login nodes. This was already the case for the RHEL 8 login nodes, but the limits are a bit stricter now.

                  This includes (per user):

                  • max. of 2 CPU cores in use
                  • max. 8 GB of memory in use

                  For more intensive tasks you can use the interactive and debug clusters through the web portal.

                  "}, {"location": "rhel9/#software_impact", "title": "Impact on central software stack", "text": "

                   The migration to RHEL 9 as operating system should not impact your workflow; everything will basically keep working as it did before (incl. job submission, etc.).

                  However, there will be impact on the availability of software that is made available via modules.

                  Software that was installed with an older compiler toolchain will no longer be available once the clusters have been updated to RHEL 9.

                  This includes all software installations on top of a compiler toolchain that is older than:

                  • GCC(core)/12.3.0
                  • foss/2023a
                  • intel/2023a
                  • gompi/2023a
                  • iimpi/2023a
                  • gfbf/2023a

                  (or another toolchain with a year-based version older than 2023a)

                  The module command will produce a clear warning when you are loading modules that are using a toolchain that will no longer be available after the cluster has been migrated to RHEL 9. For example:

                  foss/2022b:\n   ___________________________________\n  /  This module will soon no longer  \\\n  \\  be available on this cluster!    /\n   -----------------------------------\n         \\   ^__^\n          \\  (xx)\\_______\n             (__)\\       )\\/\\\n              U  ||----w |\n                 ||     ||\n\nOnly modules installed with a recent toolchain will still be available\nwhen this cluster has been migrated to the RHEL 9 operating system.\nRecent toolchains include GCC(core)/12.3.0, gompi/2023a, foss/2023a,\niimpi/2023a, intel/2023a, gfbf/2023a, and newer versions.\n\nYou should update your workflow or job script to use more recent software\ninstallations, or accept that the modules you currently rely on will soon\nno longer be available.\n\nTo request a more recent version of the software you are using,\nplease submit a software installation request via:\n\nhttps://www.ugent.be/hpc/en/support/software-installation-request\n\nThe HPC-UGent Tier-2 clusters running RHEL 8 will be migrated to RHEL 9.\n\nFor more information, see https://docs.hpc.ugent.be/rhel9/\n\nIf you have any questions, please contact hpc@ugent.be .\n

                  If you require software that is currently only available with an older toolchain on the HPC-UGent Tier-2 clusters that are still running RHEL 8, check via module avail if a more recent version is installed that you can switch to, or submit a software installation request so we can provide a more recent installation of that software which you can adopt.
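
                   For example, to check whether the software you rely on has a build based on a more recent toolchain, list the installed versions of that module (Python is used here purely as an illustration):

                   # look for versions built with GCC(core)/12.3.0, foss/2023a, intel/2023a or newer\nmodule avail Python/\n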

                   It is a good idea to test your software on the shinx cluster, which is already running RHEL 9 as operating system, to make sure it still works. We will provide more RHEL 9 nodes on other clusters to test on soon.

                  "}, {"location": "rhel9/#planning", "title": "Planning", "text": "

                  We plan to migrate the HPC-UGent Tier-2 clusters that are still using RHEL 8 to RHEL 9 one by one, following the schedule outlined below.

                   cluster / migration start / migration completed on:

                   • skitty: Monday 30 September 2024
                   • joltik: October 2024
                   • accelgor: November 2024
                   • gallade: December 2024
                   • donphan: February 2025
                   • doduo (default cluster): February 2025
                   • login nodes switch: February 2025

                   Migrating the donphan and doduo clusters to RHEL 9 and switching login.hpc.ugent.be to the RHEL 9 login nodes will be done at the same time.

                  We will keep this page up to date when more specific dates have been planned.

                  Warning

                   This planning is subject to change; some clusters may get migrated later than originally planned.

                  Please check back regularly.

                  "}, {"location": "rhel9/#questions", "title": "Questions", "text": "

                  If you have any questions related to the migration to the RHEL 9 operating system, please contact the HPC-UGent team.

                  "}, {"location": "running_batch_jobs/", "title": "Running batch jobs", "text": "

                  In order to have access to the compute nodes of a cluster, you have to use the job system. The system software that handles your batch jobs consists of two pieces: the queue- and resource manager TORQUE and the scheduler Moab. Together, TORQUE and Moab provide a suite of commands for submitting jobs, altering some of the properties of waiting jobs (such as reordering or deleting them), monitoring their progress and killing ones that are having problems or are no longer needed. Only the most commonly used commands are mentioned here.

                   When you connect to the HPC, you have access to (one of) the login nodes of the cluster. There you can prepare the work you want to get done on the cluster by, e.g., installing or compiling programs, setting up data sets, etc. The computations, however, should not be performed on this login node. The actual work is done on the cluster's compute nodes. Each compute node contains a number of CPU cores. The compute nodes are managed by the job scheduling software (Moab) and a Resource Manager (TORQUE), which decides when and on which compute nodes the jobs can run. It is usually not necessary to log on to the compute nodes directly, and this is only allowed on nodes where you have a job running. Users can (and should) monitor their jobs periodically as they run, but do not have to remain connected to the HPC the entire time.

                  The documentation in this \"Running batch jobs\" section includes a description of the general features of job scripts, how to submit them for execution and how to monitor their progress.

                  "}, {"location": "running_batch_jobs/#modules", "title": "Modules", "text": "

                   Software installation and maintenance on a HPC cluster such as the VSC clusters poses a number of challenges not encountered on a workstation or a departmental cluster. We therefore need a system on the HPC that can easily activate or deactivate the software packages that you require for your program execution.

                  "}, {"location": "running_batch_jobs/#environment-variables", "title": "Environment Variables", "text": "

                  The program environment on the HPC is controlled by pre-defined settings, which are stored in environment (or shell) variables. For more information about environment variables, see the chapter \"Getting started\", section \"Variables\" in the intro to Linux.

                  All the software packages that are installed on the HPC cluster require different settings. These packages include compilers, interpreters, mathematical software such as MATLAB and SAS, as well as other applications and libraries.

                  "}, {"location": "running_batch_jobs/#the-module-command", "title": "The module command", "text": "

                  In order to administer the active software and their environment variables, the module system has been developed, which:

                  1. Activates or deactivates software packages and their dependencies.

                  2. Allows setting and unsetting of environment variables, including adding and deleting entries from list-like environment variables.

                  3. Does this in a shell-independent fashion (necessary information is stored in the accompanying module file).

                   4. Takes care of versioning aspects: For many libraries, multiple versions are installed and maintained. The module system also takes care of the versioning of software packages. For instance, it does not allow multiple versions to be loaded at the same time.

                  5. Takes care of dependencies: Another issue arises when one considers library versions and the dependencies they require. Some software requires an older version of a particular library to run correctly (or at all). Hence a variety of version numbers is available for important libraries. Modules typically load the required dependencies automatically.

                  This is all managed with the module command, which is explained in the next sections.

                  There is also a shorter ml command that does exactly the same as the module command and is easier to type. Whenever you see a module command, you can replace module with ml.

                  "}, {"location": "running_batch_jobs/#available-modules", "title": "Available modules", "text": "

                  A large number of software packages are installed on the HPC clusters. A list of all currently available software can be obtained by typing:

                  module available\n

                   It's also possible to execute module av or module avail; these are shorter to type and will do the same thing.

                  This will give some output such as:

                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                   You can also check whether some specific software, compiler, or application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n

                  This gives a full list of software packages that can be loaded.

                  The casing of module names is important: lowercase and uppercase letters matter in module names.

                  "}, {"location": "running_batch_jobs/#organisation-of-modules-in-toolchains", "title": "Organisation of modules in toolchains", "text": "

                  The amount of modules on the VSC systems can be overwhelming, and it is not always immediately clear which modules can be loaded safely together if you need to combine multiple programs in a single job to get your work done.

                  Therefore the VSC has defined so-called toolchains. A toolchain contains a C/C++ and Fortran compiler, a MPI library and some basic math libraries for (dense matrix) linear algebra and FFT. Two toolchains are defined on most VSC systems. One, the intel toolchain, consists of the Intel compilers, MPI library and math libraries. The other one, the foss toolchain, consists of Open Source components: the GNU compilers, OpenMPI, OpenBLAS and the standard LAPACK and ScaLAPACK libraries for the linear algebra operations and the FFTW library for FFT. The toolchains are refreshed twice a year, which is reflected in their name.

                  E.g., foss/2024a is the first version of the foss toolchain in 2024.

                  The toolchains are then used to compile a lot of the software installed on the VSC clusters. You can recognise those packages easily as they all contain the name of the toolchain after the version number in their name (e.g., Python/2.7.12-intel-2016b). Only packages compiled with the same toolchain name and version can work together without conflicts.

                  "}, {"location": "running_batch_jobs/#loading-and-unloading-modules", "title": "Loading and unloading modules", "text": ""}, {"location": "running_batch_jobs/#module-load", "title": "module load", "text": "

                  To \"activate\" a software package, you load the corresponding module file using the module load command:

                  module load example\n

                  This will load the most recent version of example.

                  For some packages, multiple versions are installed; the load command will automatically choose the default version (if it was set by the system administrators) or the most recent version otherwise (i.e., the lexicographical last after the /).

                   However, you should specify a particular version to avoid surprises when newer versions are installed:

                  module load secondexample/2.7-intel-2016b\n

                  The ml command is a shorthand for module load: ml example/1.2.3 is equivalent to module load example/1.2.3.

                  Modules need not be loaded one by one; the two module load commands can be combined as follows:

                  module load example/1.2.3 secondexample/2.7-intel-2016b\n

                  This will load the two modules as well as their dependencies (unless there are conflicts between both modules).

                  "}, {"location": "running_batch_jobs/#module-list", "title": "module list", "text": "

                  Obviously, you need to be able to keep track of the modules that are currently loaded. Assuming you have run the module load commands stated above, you will get the following:

                  $ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

                  You can also just use the ml command without arguments to list loaded modules.

                  It is important to note at this point that other modules (e.g., intel/2016b) are also listed, although the user did not explicitly load them. This is because secondexample/2.7-intel-2016b depends on it (as indicated in its name), and the system administrator specified that the intel/2016b module should be loaded whenever this secondexample module is loaded. There are advantages and disadvantages to this, so be aware of automatically loaded modules whenever things go wrong: they may have something to do with it!

                  "}, {"location": "running_batch_jobs/#module-unload", "title": "module unload", "text": "

                  To unload a module, one can use the module unload command. It works consistently with the load command, and reverses the latter's effect. However, the dependencies of the package are NOT automatically unloaded; you will have to unload the packages one by one. When the secondexample module is unloaded, only the following modules remain:

                   $ module unload secondexample\n$ module list\nCurrently Loaded Modulefiles: \n1) example/1.2.3                        5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26 \n2) GCCcore/5.4.0                        6) imkl/11.3.3.210-iimpi-2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26        7) intel/2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26      8) examplelib/1.2-intel-2016b\n

                  To unload the secondexample module, you can also use ml -secondexample.

                  Notice that the version was not specified: there can only be one version of a module loaded at a time, so unloading modules by name is not ambiguous. However, checking the list of currently loaded modules is always a good idea, since unloading a module that is currently not loaded will not result in an error.

                  "}, {"location": "running_batch_jobs/#purging-all-modules", "title": "Purging all modules", "text": "

                  In order to unload all modules at once, and hence be sure to start in a clean state, you can use:

                  module purge\n
                  This is always safe: the cluster module (the module that specifies which cluster jobs will get submitted to) will not be unloaded (because it's a so-called \"sticky\" module).

                  "}, {"location": "running_batch_jobs/#using-explicit-version-numbers", "title": "Using explicit version numbers", "text": "

                  Once a module has been installed on the cluster, the executables or libraries it comprises are never modified. This policy ensures that the user's programs will run consistently, at least if the user specifies a specific version. Failing to specify a version may result in unexpected behaviour.

                  Consider the following example: the user decides to use the example module and at that point in time, just a single version 1.2.3 is installed on the cluster. The user loads the module using:

                  module load example\n

                  rather than

                  module load example/1.2.3\n

                  Everything works fine, up to the point where a new version of example is installed, 4.5.6. From then on, the user's load command will load the latter version, rather than the intended one, which may lead to unexpected problems. See for example the following section on Module Conflicts.

                  Consider the following example modules:

                  $ module avail example/\nexample/1.2.3 \nexample/4.5.6\n

                  Let's now generate a version conflict with the example module, and see what happens.

                  $ module av example/\nexample/1.2.3       example/4.5.6\n$ module load example/1.2.3  example/4.5.6\nLmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').\n$ module swap example/4.5.6\n

                  Note: A module swap command combines the appropriate module unload and module load commands.

                  "}, {"location": "running_batch_jobs/#search-for-modules", "title": "Search for modules", "text": "

                  With the module spider command, you can search for modules:

                  $ module spider example\n--------------------------------------------------------------------------------\n  example:\n--------------------------------------------------------------------------------\n    Description: \n        This is just an example\n\n    Versions: \n        example/1.2.3 \n        example/4.5.6\n--------------------------------------------------------------------------------\n  For detailed information about a specific \"example\" module (including how to load the modules) use the module's full name. \n  For example:\n\n    module spider example/1.2.3\n--------------------------------------------------------------------------------\n

                  It's also possible to get detailed information about a specific module:

                  $ module spider example/1.2.3\n------------------------------------------------------------------------------------------\n  example: example/1.2.3\n------------------------------------------------------------------------------------------\n  Description: \n    This is just an example \n\n    You will need to load all module(s) on any one of the lines below before the \"example/1.2.3\" module is available to load.\n\n        cluster/accelgor\n        cluster/doduo \n        cluster/donphan\n        cluster/gallade\n        cluster/joltik \n        cluster/skitty\nHelp:\n\n        Description \n        =========== \n        This is just an example\n\n        More information \n        ================ \n         - Homepage: https://example.com\n
                  "}, {"location": "running_batch_jobs/#get-detailed-info", "title": "Get detailed info", "text": "

                  To get a list of all possible commands, type:

                  module help\n

                  Or to get more information about one specific module package:

                  $ module help example/1.2.3\n----------- Module Specific Help for 'example/1.2.3' --------------------------- \n  This is just an example - Homepage: https://example.com/\n
                  "}, {"location": "running_batch_jobs/#save-and-load-collections-of-modules", "title": "Save and load collections of modules", "text": "

                  If you have a set of modules that you need to load often, you can save these in a collection. This will enable you to load all the modules you need with a single command.

                  In each module command shown below, you can replace module with ml.

                  First, load all modules you want to include in the collections:

                  module load example/1.2.3 secondexample/2.7-intel-2016b\n

                  Now store it in a collection using module save. In this example, the collection is named my-collection.

                  module save my-collection\n

                  Later, for example in a jobscript or a new session, you can load all these modules with module restore:

                  module restore my-collection\n

                  You can get a list of all your saved collections with the module savelist command:

                  $ module savelist\nNamed collection list (For LMOD_SYSTEM_NAME = \"CO7-sandybridge\"):\n  1) my-collection\n

                  To get a list of all modules a collection will load, you can use the module describe command:

                  $ module describe my-collection\n1) example/1.2.3                                        6) imkl/11.3.3.210-iimpi-2016b \n2) GCCcore/5.4.0                                        7) intel/2016b \n3) icc/2016.3.210-GCC-5.4.0-2.26                        8) examplelib/1.2-intel-2016b \n4) ifort/2016.3.210-GCC-5.4.0-2.26                      9) secondexample/2.7-intel-2016b \n5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26\n

                  To remove a collection, remove the corresponding file in $HOME/.lmod.d:

                  rm $HOME/.lmod.d/my-collection\n
                  "}, {"location": "running_batch_jobs/#getting-module-details", "title": "Getting module details", "text": "

                  To see how a module would change the environment, you can use the module show command:

                  $ module show Python/2.7.12-intel-2016b\nwhatis(\"Description: Python is a programming language that lets youwork more quickly and integrate your systems more effectively. - Homepage: http://python.org/ \") \nconflict(\"Python\")\nload(\"intel/2016b\") \nload(\"bzip2/1.0.6-intel-2016b\") \n...\nprepend_path(...)\nsetenv(\"EBEXTSLISTPYTHON\",\"setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4\", ...)\n

                  It's also possible to use the ml show command instead: they are equivalent.

                  Here you can see that the Python/2.7.12-intel-2016b comes with a whole bunch of extensions: numpy, scipy, ...

                  You can also see the modules the Python/2.7.12-intel-2016b module loads: intel/2016b, bzip2/1.0.6-intel-2016b, ...

                  If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.

                  "}, {"location": "running_batch_jobs/#getting-system-information-about-the-hpc-infrastructure", "title": "Getting system information about the HPC infrastructure", "text": ""}, {"location": "running_batch_jobs/#checking-the-general-status-of-the-hpc-infrastructure", "title": "Checking the general status of the HPC infrastructure", "text": "

                  To check the general system state, check https://www.ugent.be/hpc/en/infrastructure/status. This has information about scheduled downtime, status of the system, ...

                  "}, {"location": "running_batch_jobs/#getting-cluster-state", "title": "Getting cluster state", "text": "

                  You can check http://hpc.ugent.be/clusterstate to see information about the clusters: you can see the nodes that are down, free, partially filled with jobs, completely filled with jobs, ....

                  You can also get this information in text form (per cluster separately) with the pbsmon command:

                  $ module swap cluster/donphan\n$ pbsmon\n 4001 4002 4003 4004 4005 4006 4007\n    _    j    j    j    _    _    .\n\n 4008 4009 4010 4011 4012 4013 4014\n    _    _    .    _    _    _    _\n\n 4015 4016\n    _    _\n\n   _ free                 : 11  |   X down                 : 0   |\n   j partial              : 3   |   x down_on_error        : 0   |\n   J full                 : 0   |   m maintenance          : 0   |\n                                |   . offline              : 2   |\n                                |   o other (R, *, ...)    : 0   |\n\nNode type:\n ppn=36, mem=751GB\n

                   pbsmon only outputs details of the cluster corresponding to the currently loaded cluster module (see the section on Specifying the cluster on which to run). It also shows details about the nodes in a cluster. In the example, all nodes have 36 cores and 751 GB of memory.

                  "}, {"location": "running_batch_jobs/#defining-and-submitting-your-job", "title": "Defining and submitting your job", "text": "

                   Usually, you will want to have your program running in batch mode, as opposed to interactively as you may be accustomed to. The point is that the program must be able to start and run without user intervention, i.e., without you having to enter any information or to press any buttons during program execution. All the necessary input or required options have to be specified on the command line, or need to be put in input or configuration files.

                   As an example, we will run a Perl script, which you will find in the examples subdirectory on the HPC. When you received an account on the HPC, a subdirectory with examples was automatically generated for you.

                  Remember that you have copied the contents of the HPC examples directory to your home directory, so that you have your own personal copy (editable and over-writable) and that you can start using the examples. If you haven't done so already, run these commands now:

                  cd\ncp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  First go to the directory with the first examples by entering the command:

                  cd ~/examples/Running-batch-jobs\n

                  Each time you want to execute a program on the HPC you'll need 2 things:

                  The executable The program to execute from the end-user, together with its peripheral input files, databases and/or command options.

                   A batch job script, which will define the computer resource requirements of the program and the required additional software packages, and which will start the actual executable. The HPC needs to know:

                  1.  the type of compute nodes;\n\n2.  the number of CPUs;\n\n3.  the amount of memory;\n\n4.  the expected duration of the execution time (wall time: Time as\n    measured by a clock on the wall);\n\n5.  the name of the files which will contain the output (i.e.,\n    stdout) and error (i.e., stderr) messages;\n\n6.  what executable to start, and its arguments.\n
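
                   A minimal job script header covering these items could look as follows (the resource values, file names and executable are only illustrative):

                   #!/bin/bash\n# job name\n#PBS -N my_job\n# type/number of compute nodes and cores per node\n#PBS -l nodes=1:ppn=4\n# amount of memory\n#PBS -l mem=8gb\n# expected duration of the execution (wall time)\n#PBS -l walltime=01:00:00\n# files for stdout and stderr messages\n#PBS -o my_job.out\n#PBS -e my_job.err\n# the executable to start, and its arguments\ncd $PBS_O_WORKDIR\n./my_executable\n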

                   Later on, you will have to define (or adapt) your own job scripts. For now, all required job scripts for the exercises are provided for you in the examples subdirectories.

                  List and check the contents with:

                  $ ls -l\ntotal 512\n-rw-r--r-- 1 vsc40000 193 Sep 11 10:34 fibo.pbs\n-rw-r--r-- 1 vsc40000 609 Sep 11 10:25 fibo.pl\n

                  In this directory you find a Perl script (named \"fibo.pl\") and a job script (named \"fibo.pbs\").

                  1. The Perl script calculates the first 30 Fibonacci numbers.

                  2. The job script is actually a standard Unix/Linux shell script that contains a few extra comments at the beginning that specify directives to PBS. These comments all begin with #PBS.

                  We will first execute the program locally (i.e., on your current login-node), so that you can see what the program does.

                  On the command line, you would run this using:

                  $ ./fibo.pl\n[0] -> 0\n[1] -> 1\n[2] -> 1\n[3] -> 2\n[4] -> 3\n[5] -> 5\n[6] -> 8\n[7] -> 13\n[8] -> 21\n[9] -> 34\n[10] -> 55\n[11] -> 89\n[12] -> 144\n[13] -> 233\n[14] -> 377\n[15] -> 610\n[16] -> 987\n[17] -> 1597\n[18] -> 2584\n[19] -> 4181\n[20] -> 6765\n[21] -> 10946\n[22] -> 17711\n[23] -> 28657\n[24] -> 46368\n[25] -> 75025\n[26] -> 121393\n[27] -> 196418\n[28] -> 317811\n[29] -> 514229\n

                   Remark: Recall that you have now executed the Perl script locally on one of the login-nodes of the HPC cluster. Of course, this is not our final intention; we want to run the script on any of the compute nodes. Also, it is not considered good practice to \"abuse\" the login-nodes for testing your scripts and executables. It will be explained later on how you can reserve your own compute-node (by opening an interactive session) to test your software. But for the sake of acquiring a good understanding of what is happening, you are pardoned for this example since these jobs require very little computing power.

                   The job script contains a description of the job by specifying the command that needs to be executed on the compute node:

                  fibo.pbs
                  #!/bin/bash -l\ncd $PBS_O_WORKDIR\n./fibo.pl\n

                  So, jobs are submitted as scripts (bash, Perl, Python, etc.), which specify the parameters related to the jobs such as expected runtime (walltime), e-mail notification, etc. These parameters can also be specified on the command line.

                  This job script can now be submitted to the cluster's job system for execution, using the qsub (Queue SUBmit) command:

                  $ qsub fibo.pbs\n123456\n

                  The qsub command returns a job identifier on the HPC cluster. The important part is the number (e.g., \"123456 \"); this is a unique identifier for the job and can be used to monitor and manage your job.
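
                   As mentioned earlier, job parameters can also be passed on the qsub command line instead of (or in addition to) #PBS directives in the script; for example (the resource values are only illustrative):

                   qsub -l walltime=01:00:00 -l nodes=1:ppn=1 fibo.pbs\n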

                  Remark: the modules that were loaded when you submitted the job will not be loaded when the job is started. You should always specify the module load statements that are required for your job in the job script itself.

                   To facilitate this, you can use a pre-defined module collection, which you can restore using module restore; see the section on Save and load collections of modules for more information.
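
                   For example, a job script could restore a previously saved module collection (here the my-collection example from that section) before running the actual program:

                   #!/bin/bash -l\ncd $PBS_O_WORKDIR\n# load all modules saved in the 'my-collection' collection\nmodule restore my-collection\n./fibo.pl\n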

                  Your job is now waiting in the queue for a free workernode to start on.

                  Go and drink some coffee ...\u00a0but not too long. If you get impatient you can start reading the next section for more information on how to monitor jobs in the queue.

                  After your job was started, and ended, check the contents of the directory:

                  $ ls -l\ntotal 768\n-rw-r--r-- 1 vsc40000 vsc40000   44 Feb 28 13:33 fibo.pbs\n-rw------- 1 vsc40000 vsc40000    0 Feb 28 13:33 fibo.pbs.e123456\n-rw------- 1 vsc40000 vsc40000 1010 Feb 28 13:33 fibo.pbs.o123456\n-rwxrwxr-x 1 vsc40000 vsc40000  302 Feb 28 13:32 fibo.pl\n

                  Explore the contents of the 2 new files:

                  $ more fibo.pbs.o123456\n$ more fibo.pbs.e123456\n

                  These files are used to store the standard output and error that would otherwise be shown in the terminal window. By default, they have the same name as that of the PBS script, i.e., \"fibo.pbs\" as base name, followed by the extension \".o\" (output) and \".e\" (error), respectively, and the job number ('123456' for this example). The error file will be empty, at least if all went well. If not, it may contain valuable information to determine and remedy the problem that prevented a successful run. The standard output file will contain the results of your calculation (here, the output of the Perl script)

                  "}, {"location": "running_batch_jobs/#when-will-my-job-start", "title": "When will my job start?", "text": "

                   In practice it's impossible to predict when your job(s) will start, since most currently running jobs will finish before their requested walltime expires, and new jobs may be submitted by other users that are assigned a higher priority than your job(s).

                  The HPC-UGent infrastructure clusters use a fair-share scheduling policy (see HPC Policies). There is no guarantee on when a job will start, since it depends on a number of factors. One of these factors is the priority of the job, which is determined by:

                  • Historical use: the aim is to balance usage over users, so infrequent (in terms of total compute time used) users get a higher priority

                  • Requested resources (amount of cores, walltime, memory, ...). The more resources you request, the more likely it is the job(s) will have to wait for a while until those resources become available.

                  • Time waiting in queue: queued jobs get a higher priority over time.

                  • User limits: this avoids having a single user use the entire cluster. This means that each user can only use a part of the cluster.

                  • Whether or not you are a member of a Virtual Organisation (VO).

                    Each VO gets assigned a fair share target, which has a big impact on the job priority. This is done to let the job scheduler balance usage across different research groups.

                    If you are not a member of a specific VO, you are sharing a fair share target with all other users who are not in a specific VO (which implies being in the (hidden) default VO). This can have a (strong) negative impact on the priority of your jobs compared to the jobs of users who are in a specific VO.

                    See Virtual Organisations for more information on how to join a VO, or request the creation of a new VO if there is none yet for your research group.

                  Some other factors are how busy the cluster is, how many workernodes are active, the resources (e.g., number of cores, memory) provided by each workernode, ...

                  It might be beneficial to request less resources (e.g., not requesting all cores in a workernode), since the scheduler often finds a \"gap\" to fit the job into more easily.

                   Sometimes it happens that a couple of nodes are free but a job still does not start. Empty nodes are not necessarily available for your job(s). Just imagine that an N-node job (with a higher priority than your waiting job(s)) is due to run. It is quite unlikely that N nodes become empty at exactly the same moment to accommodate this job, so while fewer than N nodes are empty, they are kept free for it, even though you see them as empty. The moment the Nth node becomes free, the waiting N-node job will consume these N free nodes.

                  "}, {"location": "running_batch_jobs/#specifying-the-cluster-on-which-to-run", "title": "Specifying the cluster on which to run", "text": "

                   To use other clusters, you can swap the cluster module. This is a special module that changes what modules are available to you, and what cluster your jobs will be queued in.

                  By default you are working on doduo. To switch to, e.g., donphan you need to redefine the environment so you get access to all modules installed on the donphan cluster, and to be able to submit jobs to the donphan scheduler so your jobs will start on donphan instead of the default doduo cluster.

                  module swap cluster/donphan\n

                   Note: the donphan modules may not work directly on the login nodes, because the login nodes do not have the same architecture as the donphan cluster; they do have the same architecture as the doduo cluster, which is why software works on the login nodes by default. See the section on Running software that is incompatible with host for why this is and how to fix this.

                  To list the available cluster modules, you can use the module avail cluster/ command:

                  $ module avail cluster/\n--------------------------------------- /etc/modulefiles/vsc ----------------------------------------\n   cluster/accelgor (S)    cluster/doduo   (S,L)    cluster/gallade (S)    cluster/skitty  (S)\n   cluster/default         cluster/donphan (S)      cluster/joltik  (S)\n\n  Where:\n   S:  Module is Sticky, requires --force to unload or purge\n   L:  Module is loaded\n   D:  Default Module\n\nIf you need software that is not listed, \nrequest it via https://www.ugent.be/hpc/en/support/software-installation-request\n

                  As indicated in the output above, each cluster module is a so-called sticky module, i.e., it will not be unloaded when module purge (see the section on purging modules) is used.

                   The output of the various commands that interact with jobs (qsub, qstat, ...) depends on which cluster module is loaded.

                  "}, {"location": "running_batch_jobs/#submitting-jobs-from-one-cluster-to-another", "title": "Submitting jobs from one cluster to another", "text": "

                   It is possible to submit jobs from within a job to a cluster different from the one that job is running on. This can come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), while the jobs themselves can be run on several clusters. An example of this is the wsub command of worker, see also here.

                   To submit jobs to the donphan cluster, you can change only what is needed in your session environment by using module swap env/slurm/donphan instead of module swap cluster/donphan. The latter command also activates the software modules that are installed specifically for donphan, which may not be compatible with the system you are working on. By only swapping to env/slurm/donphan, jobs that are submitted will be sent to the donphan cluster. The same approach can be used to submit jobs to another cluster, of course.

                  Each cluster module not only loads the corresponding env/slurm/... module to control where jobs are sent to, but also two other env/... modules which control other parts of the environment. For example, for the doduo cluster, loading the cluster/doduo module corresponds to loading 3 different env/ modules:

                  env/ module for doduo Purpose env/slurm/doduo Changes $SLURM_CLUSTERS which specifies the cluster where jobs are sent to. env/software/doduo Changes $MODULEPATH, which controls what software modules are available for loading. env/vsc/doduo Changes the set of $VSC_ environment variables that are specific to the doduo cluster

                  We recommend that you do not use these separate env/ modules directly unless you really need to, and only if you understand exactly what they are doing, since mixing cluster/ and env/ modules of different clusters can result in surprises if you are not careful.

                  We also recommend running a module swap cluster command after submitting the jobs, to reset your environment to a sane state.
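
                  A minimal sketch of this workflow, assuming a job script named job.sh (a placeholder) that should run on donphan while your current software environment stays on doduo:

                  module swap env/slurm/donphan\nqsub job.sh\nmodule swap cluster/doduo   # reset the environment to a sane state\n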

                  "}, {"location": "running_batch_jobs/#monitoring-and-managing-your-jobs", "title": "Monitoring and managing your job(s)", "text": "

                  Using the job ID that qsub returned, there are various ways to monitor the status of your job. In the following commands, replace 12345 with the job ID qsub returned.

                  qstat 12345\n

                  To show on which compute nodes your job is running (once it is actually running):

                  qstat -n 12345\n

                  To remove a job from the queue so that it will not run, or to stop a job that is already running, use:

                  qdel 12345\n

                  When you have submitted several jobs (or you just forgot about the job ID), you can retrieve the status of all your jobs that are submitted and are not yet finished using:

                  $ qstat\n:\nJob ID      Name    User      Time Use S Queue\n----------- ------- --------- -------- - -----\n123456 ....     mpi  vsc40000     0    Q short\n

                  Here:

                  Job ID the job's unique identifier

                  Name the name of the job

                  User the user that owns the job

                  Time Use the elapsed walltime for the job

                  Queue the queue the job is in

                  The state S can be any of the following:

                  State Meaning Q The job is queued and is waiting to start. R The job is currently running. E The job is currently exiting after having run. C The job is completed after having run. H The job has a user or system hold on it and will not be eligible to run until the hold is removed.

                  User hold means that the user can remove the hold. System hold means that the system or an administrator has put the job on hold, very likely because something is wrong with it. Check with your helpdesk to see why this is the case.
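
                  If it is a user hold that you placed yourself (for example with qhold), you may be able to release it with the standard Torque qrls command; this is only a sketch, and whether these commands are available depends on the cluster setup:

                  qrls 12345\n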

                  "}, {"location": "running_batch_jobs/#examining-the-queue", "title": "Examining the queue", "text": "

                  There is currently (since May 2019) no way to get an overall view of the state of the cluster queues for the HPC-UGent infrastructure, due to changes in the cluster resource management software (and also because a general overview is mostly meaningless, since it doesn't give any indication of the resources requested by the queued jobs).

                  "}, {"location": "running_batch_jobs/#specifying-job-requirements", "title": "Specifying job requirements", "text": "

                  Without giving more information about your job upon submitting it with qsub, default values will be assumed that are almost never appropriate for real jobs.

                  It is important to estimate the resources you need to successfully run your program, such as the amount of time the job will require, the amount of memory it needs, the number of CPUs it will run on, etc. This may take some work, but it is necessary to ensure your jobs will run properly.

                  "}, {"location": "running_batch_jobs/#generic-resource-requirements", "title": "Generic resource requirements", "text": "

                  The qsub command takes several options to specify the requirements, of which we list the most commonly used ones below.

                  qsub -l walltime=2:30:00 ...\n

                  For the simplest cases, only the maximum estimated execution time (called \"walltime\") is really important. Here, the job requests 2 hours, 30 minutes. As soon as the job exceeds the requested walltime, it will be \"killed\" (terminated) by the job scheduler. There is no harm in slightly overestimating the maximum execution time. If you omit this option, the queue manager will not complain but will use a default value (one hour on most clusters).

                  The maximum walltime for HPC-UGent clusters is 72 hours.

                  If you want to run some final steps (for example to copy files back) before the walltime kills your main process, you have to stop the main command yourself before the walltime runs out and then copy the files back. See the section on Running a command with a maximum time limit for how to do this.
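
                  A minimal sketch of this approach, using the standard timeout command (my_program and output.txt are placeholders; the 2-hour limit is chosen to leave a margin before the 2:30:00 walltime requested above):

                  timeout -s SIGTERM 2h ./my_program   # stop the main program after 2 hours\ncp output.txt $VSC_DATA/             # copy results back before the walltime expires\n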

                  qsub -l mem=4gb ...\n

                  The job requests 4 GB of RAM. As soon as the job tries to use more memory, it will be \"killed\" (terminated) by the job scheduler. There is no harm in slightly overestimating the requested memory.

                  The default memory reserved for a job on any given HPC-UGent cluster is the \"usable memory per node\" divided by the \"number of cores in a node\", multiplied by the number of processor cores requested (ppn). Jobs that do not specify memory (either as a command line option or as a memory directive in the job script) will get this default amount. Please note that using the default memory is recommended. For the \"usable memory per node\" and \"number of cores in a node\" values, please consult https://www.ugent.be/hpc/en/infrastructure.
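
                  As a purely hypothetical worked example (the real values per cluster are listed on the infrastructure page above): on a cluster with 250 GiB of usable memory per node and 96 cores per node, a job requesting ppn=4 would by default be assigned roughly (250 GiB / 96) x 4, i.e., about 10.4 GiB of memory.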

                  qsub -l nodes=5:ppn=2 ...\n

                  The job requests 5 compute nodes with two cores on each node (ppn stands for \"processors per node\", where \"processors\" here actually means \"CPU cores\").

                  qsub -l nodes=1:westmere\n

                  The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal (\"VSC hardware\" section)1 of the VSC website.

                  These options can either be specified on the command line, e.g.

                  qsub -l nodes=1:ppn=1,mem=2gb fibo.pbs\n

                  or in the job script itself using the #PBS-directive, so \"fibo.pbs\" could be modified to:

                  #!/bin/bash -l\n#PBS -l nodes=1:ppn=1\n#PBS -l mem=2gb\ncd $PBS_O_WORKDIR\n./fibo.pl\n

                  Note that the resources requested on the command line will override those specified in the PBS file.
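
                  For example (reusing the fibo.pbs script shown above, which requests mem=2gb in its #PBS directives), the following submission would run the job with 4 GB of memory instead:

                  qsub -l mem=4gb fibo.pbs\n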

                  "}, {"location": "running_batch_jobs/#job-output-and-error-files", "title": "Job output and error files", "text": "

                  At some point your job finishes, so you may no longer see the job ID in the list of jobs when you run qstat (since it will only be listed for a few minutes after completion with state \"C\"). After your job finishes, you should see the standard output and error of your job in two files, located by default in the directory where you issued the qsub command.

                  When you navigate to that directory and list its contents, you should see them:

                  $ ls -l\ntotal 1024\n-rw-r--r-- 1 vsc40000  609 Sep 11 10:54 fibo.pl\n-rw-r--r-- 1 vsc40000   68 Sep 11 10:53 fibo.pbs\n-rw------- 1 vsc40000   52 Sep 11 11:03 fibo.pbs.e123456\n-rw------- 1 vsc40000 1307 Sep 11 11:03 fibo.pbs.o123456\n

                  In our case, our job has created both an output file (fibo.pbs.o123456) and an error file (fibo.pbs.e123456), containing the info written to stdout and stderr respectively.

                  Inspect the generated output and error files:

                  $ cat fibo.pbs.o123456\n...\n$ cat fibo.pbs.e123456\n...\n
                  "}, {"location": "running_batch_jobs/#e-mail-notifications", "title": "E-mail notifications", "text": ""}, {"location": "running_batch_jobs/#generate-your-own-e-mail-notifications", "title": "Generate your own e-mail notifications", "text": "

                  You can instruct the HPC to send an e-mail to your e-mail address whenever a job begins, ends and/or aborts, by adding the following lines to the job script fibo.pbs:

                  #PBS -m b \n#PBS -m e \n#PBS -m a\n

                  or

                  #PBS -m abe\n

                  These options can also be specified on the command line. Try it and see what happens:

                  qsub -m abe fibo.pbs\n

                  The system will use the e-mail address that is connected to your VSC account. You can also specify an alternate e-mail address with the -M option:

                  qsub -m b -M john.smith@example.com fibo.pbs\n

                  will send an e-mail to john.smith@example.com when the job begins.

                  "}, {"location": "running_batch_jobs/#running-a-job-after-another-job", "title": "Running a job after another job", "text": "

                  If you submit two jobs expecting them to run one after the other (for example because the first generates a file the second needs), there might be a problem, as they might both be run at the same time.

                  So the following example might go wrong:

                  $ qsub job1.sh\n$ qsub job2.sh\n

                  You can make jobs that depend on other jobs. This can be useful for breaking up large jobs into smaller jobs that can be run in a pipeline. The following example will submit 2 jobs, but the second job (job2.sh) will be held (H status in qstat) until the first job successfully completes. If the first job fails, the second will be cancelled.

                  $ FIRST_ID=$(qsub job1.sh)\n$ qsub -W depend=afterok:$FIRST_ID job2.sh\n

                  afterok means \"After OK\", or in other words, after the first job successfully completed.

                  It's also possible to use afternotok (\"After not OK\") to run the second job only if the first job exited with errors. A third option is to use afterany (\"After any\"), to run the second job after the first job (regardless of success or failure).
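
                  For example, to run job2.sh after job1.sh has finished regardless of whether it succeeded or failed (reusing the job scripts from the example above):

                  FIRST_ID=$(qsub job1.sh)\nqsub -W depend=afterany:$FIRST_ID job2.sh\n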

                  1. URL: https://vscdocumentation.readthedocs.io/en/latest/hardware.html \u21a9

                  "}, {"location": "running_interactive_jobs/", "title": "Running interactive jobs", "text": ""}, {"location": "running_interactive_jobs/#introduction", "title": "Introduction", "text": "

                  Interactive jobs are jobs which give you an interactive session on one of the compute nodes. Importantly, accessing the compute nodes this way means that the job control system guarantees the resources that you have asked for.

                  Interactive PBS jobs are similar to non-interactive PBS jobs in that they are submitted to PBS via the command qsub. Where an interactive job differs is that it does not require a job script; the required PBS directives can be specified on the command line.

                  Interactive jobs can be useful to debug certain job scripts or programs, but should not be the main use of the HPC-UGent infrastructure. Waiting for user input takes a very long time in the life of a CPU and does not make efficient use of the computing resources.

                  The syntax for qsub for submitting an interactive PBS job is:

                  $ qsub -I <... pbs directives ...>\n
                  "}, {"location": "running_interactive_jobs/#interactive-jobs-without-x-support", "title": "Interactive jobs, without X support", "text": "

                  Tip

                  Find the code in \"~/examples/Running_interactive_jobs\"

                  First of all, in order to know on which computer you're working, enter:

                  $ hostname -f\ngligar07.gastly.os\n

                  This means that you're now working on the login node gligar07.gastly.os of the cluster.

                  The most basic way to start an interactive job is the following:

                  $ qsub -I\nqsub: waiting for job 123456 to start\nqsub: job 123456 ready\n

                  There are two things of note here.

                  1. The \"qsub\" command (with the interactive -I flag) waits until a node is assigned to your interactive session, connects to the compute node and shows you the terminal prompt on that node.

                  2. You'll see that the directory structure of your home directory has remained the same. Your home directory is actually located on a shared storage system. This means that the exact same directory is available on all login nodes and all compute nodes on all clusters.

                  In order to know on which compute node you're working, enter again:

                  $ hostname -f\nnode3501.doduo.gent.vsc\n

                  Note that we are now working on the compute node called \"node3501.doduo.gent.vsc\". This is the compute node that was assigned to us by the scheduler after we issued the \"qsub -I\" command.

                  Now, go to the directory of our second interactive example and run the program \"primes.py\". This program will ask you for an upper limit ($> 1$) and will print all the primes between 1 and your upper limit:

                  $ cd ~/examples/Running_interactive_jobs\n$ ./primes.py\nThis program calculates all primes between 1 and your upper limit.\nEnter your upper limit (>1): 50\nStart Time:  2013-09-11 15:49:06\n[Prime#1] = 1\n[Prime#2] = 2\n[Prime#3] = 3\n[Prime#4] = 5\n[Prime#5] = 7\n[Prime#6] = 11\n[Prime#7] = 13\n[Prime#8] = 17\n[Prime#9] = 19\n[Prime#10] = 23\n[Prime#11] = 29\n[Prime#12] = 31\n[Prime#13] = 37\n[Prime#14] = 41\n[Prime#15] = 43\n[Prime#16] = 47\nEnd Time:  2013-09-11 15:49:06\nDuration:  0 seconds.\n

                  You can exit the interactive session with:

                  $ exit\n

                  Note that you can now use this allocated node for 1 hour. After this hour you will be automatically disconnected. You can change this \"usage time\" by explicitly specifying a \"walltime\", i.e., the time that you want to work on this node. (Think of walltime as the time elapsed when watching the clock on the wall.)

                  You can work for 3 hours by:

                  qsub -I -l walltime=03:00:00\n

                  If the walltime of the job is exceeded, the (interactive) job will be killed and your connection to the compute node will be closed. So do make sure to provide adequate walltime and that you save your data before your (wall)time is up (exceeded)! When you do not specify a walltime, you get a default walltime of 1 hour.
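
                  You can combine this with other resource requests; for example, the following (a sketch) asks for an interactive session of 3 hours on a single node with 4 cores:

                  qsub -I -l walltime=03:00:00 -l nodes=1:ppn=4\n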

                  "}, {"location": "running_interactive_jobs/#interactive-jobs-with-graphical-support", "title": "Interactive jobs, with graphical support", "text": ""}, {"location": "running_interactive_jobs/#software-installation", "title": "Software Installation", "text": "

                  To display graphical applications from a Linux computer (such as the VSC clusters) on your machine, you need to install an X Window server on your local computer.

                  The X Window system (commonly known as X11, based on its current major version being 11, or shortened to simply X) is the system-level software infrastructure for the windowing GUI on Linux, BSD and other UNIX-like operating systems. It was designed to handle both local displays, as well as displays sent across a network. More formally, it is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.

                  Download the latest version of the XQuartz package from http://xquartz.macosforge.org/landing/ and install the XQuartz.pkg package.

                  The installer will take you through the installation procedure; just keep clicking Continue on the various screens that pop up until the installation has completed successfully.

                  A reboot is required before XQuartz will correctly open graphical applications.

                  "}, {"location": "running_interactive_jobs/#run-simple-example", "title": "Run simple example", "text": "

                  We have developed a little interactive program that demonstrates two-way communication. It will send information to your local screen, but will also ask you to click a button.

                  Now run the message program:

                  cd ~/examples/Running_interactive_jobs\n./message.py\n

                  You should see the following message appear.

                  Click any button and see what happens.

                  -----------------------\n< Enjoy the day! Mooh >\n-----------------------\n     ^__^\n     (oo)\\_______\n     (__)\\       )\\/\\\n         ||----w |\n         ||     ||\n
                  "}, {"location": "running_jobs_with_input_output_data/", "title": "Running jobs with input/output data", "text": "

                  You have now learned how to start a batch job and how to start an interactive session. The next question is how to deal with input and output files: where your standard output and error messages will go, and where you can collect your results.

                  "}, {"location": "running_jobs_with_input_output_data/#the-current-directory-and-output-and-error-files", "title": "The current directory and output and error files", "text": ""}, {"location": "running_jobs_with_input_output_data/#default-file-names", "title": "Default file names", "text": "

                  First go to the directory:

                  cd ~/examples/Running_jobs_with_input_output_data\n

                  Note

                  If the example directory is not yet present, copy it to your home directory:

                  cp -r /apps/gent/tutorials/Intro-HPC/examples ~/\n

                  List and check the contents with:

                  $ ls -l\ntotal 2304\n-rwxrwxr-x 1 vsc40000   682 Sep 13 11:34 file1.py\n-rw-rw-r-- 1 vsc40000   212 Sep 13 11:54 file1a.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1b.pbs\n-rw-rw-r-- 1 vsc40000   994 Sep 13 11:53 file1c.pbs\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file2.py\n-rw-r--r-- 1 vsc40000  1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000  2393 Sep 13 10:40 file3.py\n

                  Now, let us inspect the contents of the first executable (which is just a Python script with execute permission).

                  file1.py
                  #!/usr/bin/env python\n#\n# VSC        : Flemish Supercomputing Centre\n# Tutorial   : Introduction to HPC\n# Description: Writing to the current directory, stdout and stderr\n#\nimport sys\n\n# Step #1: write to a local file in your current directory\nlocal_f = open(\"Hello.txt\", 'w+')\nlocal_f.write(\"Hello World!\\n\")\nlocal_f.write(\"I am writing in the file:<Hello.txt>.\\n\")\nlocal_f.write(\"in the current directory.\\n\")\nlocal_f.write(\"Cheers!\\n\")\nlocal_f.close()\n\n# Step #2: Write to stdout\nsys.stdout.write(\"Hello World!\\n\")\nsys.stdout.write(\"I am writing to <stdout>.\\n\")\nsys.stdout.write(\"Cheers!\\n\")\n\n# Step #3: Write to stderr\nsys.stderr.write(\"Hello World!\\n\")\nsys.stderr.write(\"This is NO ERROR or WARNING.\\n\")\nsys.stderr.write(\"I am just writing to <stderr>.\\n\")\nsys.stderr.write(\"Cheers!\\n\")\n

                  The code of the Python script is self-explanatory:

                  1. In step 1, we write something to the file Hello.txt in the current directory.

                  2. In step 2, we write some text to stdout.

                  3. In step 3, we write to stderr.

                  Check the contents of the first job script:

                  file1a.pbs
                  #!/bin/bash\n\n#PBS -l walltime=00:05:00\n\n# go to the (current) working directory (optional, if this is the\n# directory where you submitted the job)\ncd $PBS_O_WORKDIR\n\n# the program itself\necho Start Job\ndate\n./file1.py\necho End Job\n

                  You'll see that there are NO specific PBS directives for the placement of the output files. All output files are just written to the standard paths.

                  Submit it:

                  qsub file1a.pbs\n

                  After the job has finished, inspect the local directory again, i.e., the directory where you executed the qsub command:

                  $ ls -l\ntotal 3072\n-rw-rw-r-- 1 vsc40000   90 Sep 13 13:13 Hello.txt\n-rwxrwxr-x 1 vsc40000  693 Sep 13 13:03 file1.py*\n-rw-rw-r-- 1 vsc40000  229 Sep 13 13:01 file1a.pbs\n-rw------- 1 vsc40000   91 Sep 13 13:13 file1a.pbs.e123456\n-rw------- 1 vsc40000  105 Sep 13 13:13 file1a.pbs.o123456\n-rw-rw-r-- 1 vsc40000  143 Sep 13 13:07 file1b.pbs\n-rw-rw-r-- 1 vsc40000  177 Sep 13 13:06 file1c.pbs\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file2.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file2.py*\n-rw-r--r-- 1 vsc40000 1393 Sep 13 10:41 file3.pbs\n-rwxrwxr-x 1 vsc40000 2393 Sep 13 10:40 file3.py*\n

                  Some observations:

                  1. The file Hello.txt was created in the current directory.

                  2. The file file1a.pbs.o123456 contains all the text that was written to the standard output stream (\"stdout\").

                  3. The file file1a.pbs.e123456 contains all the text that was written to the standard error stream (\"stderr\").

                  Inspect their contents ...\u00a0and remove the files

                  $ cat Hello.txt\n$ cat file1a.pbs.o123456\n$ cat file1a.pbs.e123456\n$ rm Hello.txt file1a.pbs.o123456 file1a.pbs.e123456\n

                  Tip

                  Type cat H and press the Tab key, and it will expand into cat Hello.txt.

                  "}, {"location": "running_jobs_with_input_output_data/#filenames-using-the-name-of-the-job", "title": "Filenames using the name of the job", "text": "

                  Check the contents of the job script and execute it.

                  file1b.pbs
                  #!/bin/bash\n\n#   Specify the \"name\" of the job\n#PBS -N my_serial_job\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n

                  Inspect the contents again ...\u00a0and remove the generated files:

                  $ ls\nHello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e123456\nfile1.py* file1b.pbs file2.py* file3.py* my_serial_job.o123456\n$ rm Hello.txt my_serial_job.*\n

                  Here, the option \"-N\" was used to explicitly assign a name to the job. This overwrites the JOBNAME variable and results in a different name for the stdout and stderr files. This name is also shown in the second column of the output of the \"qstat\" command. If no name is provided, it defaults to the name of the job script.
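
                  Like the other directives, the job name can also be specified on the command line instead of in the job script, e.g. (a sketch, reusing the first example script):

                  qsub -N my_serial_job file1a.pbs\n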

                  "}, {"location": "running_jobs_with_input_output_data/#user-defined-file-names", "title": "User-defined file names", "text": "

                  You can also specify the name of stdout and stderr files explicitly by adding two lines in the job script, as in our third example:

                  file1c.pbs
                  #!/bin/bash\n\n# redirect standard output (-o) and error (-e)\n#PBS -o stdout.$PBS_JOBID\n#PBS -e stderr.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n\necho Start Job\ndate\n./file1.py\necho End Job\n
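
                  If you prefer a single file combining both streams, the standard PBS directive -j oe merges stderr into the stdout file; a minimal sketch (an alternative to the separate -o/-e redirection above, not part of the provided examples):

                  #!/bin/bash\n\n# merge standard error into the standard output file\n#PBS -j oe\n#PBS -o combined.$PBS_JOBID\n\ncd $PBS_O_WORKDIR\n./file1.py\n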
                  "}, {"location": "running_jobs_with_input_output_data/#where-to-store-your-data-on-the-hpc", "title": "Where to store your data on the HPC", "text": "

                  The HPC cluster offers its users several locations to store their data. Most of the data will reside on the shared storage system, but all compute nodes also have their own (small) local disk.

                  "}, {"location": "running_jobs_with_input_output_data/#pre-defined-user-directories", "title": "Pre-defined user directories", "text": "

                  Three different pre-defined user directories are available, where each directory has been created for different purposes. The best place to store your data depends on the purpose, but also the size and type of usage of the data.

                  The following locations are available:

                  Variable Description Long-term storage slow filesystem, intended for smaller files $VSC_HOME For your configuration files and other small files, see the section on your home directory. The default directory is user/Gent/xxx/vsc40000. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites. $VSC_DATA A bigger \"workspace\", for datasets, results, logfiles, etc. see the section on your data directory. The default directory is data/Gent/xxx/vsc40000. The same file system is accessible from all sites. Fast temporary storage $VSC_SCRATCH_NODE For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space. This space is available per node. The default directory is /tmp. On different nodes, you'll see different content. $VSC_SCRATCH For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is scratch/Gent/xxx/vsc40000. This directory is cluster- or site-specific: On different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content. $VSC_SCRATCH_SITE Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See the section on your scratch space. $VSC_SCRATCH_GLOBAL Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See the section on your scratch space. $VSC_SCRATCH_CLUSTER The scratch filesystem closest to the cluster. $VSC_SCRATCH_ARCANINE A separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads.

                  Since these directories are not necessarily mounted at the same locations across all sites, you should always (try to) use the environment variables that have been created for them.

                  We elaborate more on the specific function of these locations in the following sections.

                  Note: $VSC_SCRATCH_KYUKON and $VSC_SCRATCH are the same directories (\"kyukon\" is the name of the storage cluster where the default shared scratch filesystem is hosted).

                  For documentation about VO directories, see the section on VO directories.

                  "}, {"location": "running_jobs_with_input_output_data/#your-home-directory-vsc_home", "title": "Your home directory ($VSC_HOME)", "text": "

                  Your home directory is where you arrive by default when you login to the cluster. Your shell refers to it as \"~\" (tilde), and its absolute path is also stored in the environment variable $VSC_HOME. Your home directory is shared across all clusters of the VSC.

                  The data stored here should be relatively small (e.g., no files or directories larger than a few megabytes), and preferably should only contain configuration files. Note that various kinds of configuration files are also stored here, e.g., by MATLAB, Eclipse, ...

                  The operating system also creates a few files and folders here to manage your account. Examples are:

                  File or Directory Description .ssh/ This directory contains some files necessary for you to login to the cluster and to submit jobs on the cluster. Do not remove them, and do not alter anything if you don't know what you are doing! .bash_profile When you login (type username and password) remotely via ssh, .bash_profile is executed to configure your shell before the initial command prompt. .bashrc This script is executed every time you start a session on the cluster: when you login to the cluster and when a job starts. .bash_history This file contains the commands you typed at your shell prompt, in case you need them again."}, {"location": "running_jobs_with_input_output_data/#your-data-directory-vsc_data", "title": "Your data directory ($VSC_DATA)", "text": "

                  In this directory you can store all other data that you need for longer terms (such as the results of previous jobs, ...). It is a good place for, e.g., storing big files like genome data.

                  The environment variable pointing to this directory is $VSC_DATA. This volume is shared across all clusters of the VSC. There are however no guarantees about the speed you will achieve on this volume. For guaranteed fast performance and very heavy I/O, you should use the scratch space instead.

                  If you are running out of quota on your $VSC_DATA filesystem, you can join an existing VO, or request a new VO. See the section about virtual organisations on how to do this.

                  "}, {"location": "running_jobs_with_input_output_data/#your-scratch-space-vsc_scratch", "title": "Your scratch space ($VSC_SCRATCH)", "text": "

                  To enable quick writing from your job, a few extra file systems are available on the compute nodes. These extra file systems are called scratch folders, and can be used for storage of temporary and/or transient data (temporary results, anything you just need during your job, or your batch of jobs).

                  You should remove any data from these systems after your processing has finished. There are no guarantees about how long your data will be stored on these systems, and we plan to clean them automatically on a regular basis. The maximum allowed age of files on these scratch file systems depends on the type of scratch, and can be anywhere between a day and a few weeks. We don't guarantee that these policies will remain in place forever, and may change them if this seems necessary for the healthy operation of the cluster.

                  Each type of scratch has its own use:

                  Node scratch ($VSC_SCRATCH_NODE). Every node has its own scratch space, which is completely separated from the other nodes. On some clusters, it will be on a local disk in the node, while on other clusters it will be emulated through another file server. Some drawbacks are that the storage can only be accessed on that particular node and that the capacity is often very limited (e.g., 100 GB). The performance will depend a lot on the particular implementation in the cluster. In many cases, it will be significantly slower than the cluster scratch as it typically consists of just a single disk. However, if that disk is local to the node (as on most clusters), the performance will not depend on what others are doing on the cluster.

                  Cluster scratch ($VSC_SCRATCH). To allow a job running on multiple nodes (or multiple jobs running on separate nodes) to share data as files, every node of the cluster (including the login nodes) has access to this shared scratch directory. Just like the home and data directories, every user has their own scratch directory. Because this scratch is also available from the login nodes, you could manually copy results to your data directory after your job has ended. Also, this type of scratch is usually implemented by running tens or hundreds of disks in parallel on a powerful file server with fast connection to all the cluster nodes and therefore is often the fastest file system available on a cluster. You may not get the same file system on different clusters, i.e., you may see different content on different clusters at the same institute.

                  Site scratch ($VSC_SCRATCH_SITE). At the time of writing, the site scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a different scratch file system that is available across all clusters at a particular site, which is in fact the case for the cluster scratch on some sites.

                  Global scratch ($VSC_SCRATCH_GLOBAL). At the time of writing, the global scratch is just the same volume as the cluster scratch, and thus contains the same data. In the future it may point to a scratch file system that is available across all clusters of the VSC, but at the moment of writing there are no plans to provide this.
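
                  A minimal job script fragment illustrating a common pattern (all file and program names are placeholders): copy the input to the node scratch, run there for fast I/O, and copy the results back to $VSC_DATA before the job ends:

                  cp $VSC_DATA/input.dat $VSC_SCRATCH_NODE/\ncd $VSC_SCRATCH_NODE\n$VSC_HOME/my_program input.dat > output.dat\ncp output.dat $VSC_DATA/\n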

                  "}, {"location": "running_jobs_with_input_output_data/#your-ugent-home-drive-and-shares", "title": "Your UGent home drive and shares", "text": "

                  In order to access data on your UGent share(s), you need to stage in the data and stage it out afterwards. On the login nodes, it is possible to access your UGent home drive and shares. To allow this, you need a Kerberos ticket. This requires that you first authenticate yourself with your UGent username and password by running:

                  $ kinit yourugentusername@UGENT.BE\nPassword for yourugentusername@UGENT.BE:\n

                  Now you should be able to access your files by running:

                  $ ls /UGent/yourugentusername\nhome shares www\n

                  Please note the shares will only be mounted when you access this folder. You should specify your complete username - tab completion will not work.

                  If you want to use the UGent shares for longer than 24 hours, you should request a ticket valid for up to a week by running:

                  kinit yourugentusername@UGENT.BE -r 7\n

                  You can verify your authentication ticket and expiry dates yourself by running klist

                  $ klist\n...\nValid starting     Expires            Service principal\n14/07/20 15:19:13  15/07/20 01:19:13  krbtgt/UGENT.BE@UGENT.BE\n    renew until 21/07/20 15:19:13\n

                  Your ticket is valid for 10 hours, but you can renew it before it expires.

                  To renew your tickets, simply run

                  kinit -R\n

                  If you want your ticket to be renewed automatically up to the maximum expiry date, you can run

                  krenew -b -K 60\n

                  Each hour the process will check if your ticket should be renewed.

                  We strongly advise disabling access to your shares once they are no longer needed:

                  kdestroy\n

                  If you get an error \"Unknown credential cache type while getting default ccache\" (or similar) and you use conda, then please deactivate conda before you use the commands in this chapter.

                  conda deactivate\n
                  "}, {"location": "running_jobs_with_input_output_data/#ugent-shares-with-globus", "title": "UGent shares with globus", "text": "

                  In order to access your UGent home and shares inside the globus endpoint, you first have to generate authentication credentials on the endpoint. To do that, you have to ssh to the globus endpoint from a login node. You will be prompted for your UGent username and password to authenticate:

                  $ ssh globus\nUGent username:ugentusername\nPassword for ugentusername@UGENT.BE:\nShares are available in globus endpoint at /UGent/ugentusername/\nOverview of valid tickets:\nTicket cache: KEYRING:persistent:xxxxxxx:xxxxxxx\nDefault principal: ugentusername@UGENT.BE\n\nValid starting     Expires            Service principal\n29/07/20 15:56:43  30/07/20 01:56:43  krbtgt/UGENT.BE@UGENT.BE\n    renew until 05/08/20 15:56:40\nTickets will be automatically renewed for 1 week\nConnection to globus01 closed.\n

                  Your shares will then be available at /UGent/ugentusername/ under the globus VSC tier2 endpoint. Tickets will be renewed automatically for 1 week, after which you'll need to run this again. We advise disabling access to your shares within globus once access is no longer needed:

                  $ ssh globus01 destroy\nSuccesfully destroyed session\n
                  "}, {"location": "running_jobs_with_input_output_data/#pre-defined-quotas", "title": "Pre-defined quotas", "text": "

                  Quota is enabled on these directories, which means that the amount of data you can store there is limited. This holds for both the total size of all files and the total number of files that can be stored. The system works with a soft quota and a hard quota. You can temporarily exceed the soft quota, but you can never exceed the hard quota. You will get warnings as soon as you exceed the soft quota.

                  To see a list of your current quota, visit the VSC accountpage: https://account.vscentrum.be. VO moderators can see a list of VO quota usage per member of their VO via https://account.vscentrum.be/django/vo/.

                  The rules are:

                  1. You will only receive a warning when you have reached the soft limit of either quota.

                  2. You will start losing data and get I/O errors when you reach the hard limit. In this case, data loss will occur since nothing can be written anymore (this holds both for new files as well as for existing files), until you free up some space by removing some files. Also note that you will not be warned when data loss occurs, so keep an eye open for the general quota warnings!

                  3. The same holds for running jobs that need to write files: when you reach your hard quota, jobs will crash.

                  We do realise that quota are often perceived as a nuisance by users, especially if you're running low on them. However, they are an essential feature of a shared infrastructure. Quota ensure that a single user cannot accidentally take a cluster down (and break other users' jobs) by filling up the available disk space. They also help to guarantee a fair use of all available resources for all users, and to ensure that each folder is used for its intended purpose.

                  "}, {"location": "running_jobs_with_input_output_data/#writing-output-files", "title": "Writing Output files", "text": "

                  Tip

                  Find the code of the exercises in \"~/examples/Running_jobs_with_input_output_data\"

                  In the next exercise, you will generate a file in the $VSC_SCRATCH directory. In order to generate some CPU- and disk-I/O load, we will

                  1. take a random integer between 1 and 2000 and calculate all primes up to that limit;

                  2. repeat this action 30,000 times;

                  3. write the output to the \"primes_1.txt\" output file in the $VSC_SCRATCH-directory.

                  Check the Python and the PBS file, and submit the job. Remember that this is already a more serious (disk-I/O and computationally intensive) job, which takes approximately 3 minutes on the HPC.

                  $ cat file2.py\n$ cat file2.pbs\n$ qsub file2.pbs\n$ qstat\n$ ls -l\n$ echo $VSC_SCRATCH\n$ ls -l $VSC_SCRATCH\n$ more $VSC_SCRATCH/primes_1.txt\n
                  "}, {"location": "running_jobs_with_input_output_data/#reading-input-files", "title": "Reading Input files", "text": "

                  Tip

                  Find the code of the exercise \"file3.py\" in \"~/examples/Running_jobs_with_input_output_data\".

                  In this exercise, you will

                  1. Generate the file \"primes_1.txt\" again as in the previous exercise;

                  2. open the file;

                  3. read it line by line;

                  4. calculate the average of primes in the line;

                  5. count the number of primes found per line;

                  6. write it to the \"primes_2.txt\" output file in the $VSC_SCRATCH-directory.

                  Check the Python and the PBS file, and submit the job:

                  $ cat file3.py\n$ cat file3.pbs\n$ qsub file3.pbs\n$ qstat\n$ ls -l\n$ more $VSC_SCRATCH/primes_2.txt\n
                  "}, {"location": "running_jobs_with_input_output_data/#how-much-disk-space-do-i-get", "title": "How much disk space do I get?", "text": ""}, {"location": "running_jobs_with_input_output_data/#quota", "title": "Quota", "text": "

                  The available disk space on the HPC is limited. The actual disk capacity, shared by all users, can be found on the \"Available hardware\" page on the website. (https://vscdocumentation.readthedocs.io/en/latest/hardware.html) As explained in the section on predefined quota, this implies that there are also limits to:

                  • the amount of disk space; and

                  • the number of files

                  that can be made available to each individual HPC user.

                  The quota of disk space and number of files for each HPC user is:

                  Volume Max. disk space Max. # Files HOME 3 GB 20000 DATA 25 GB 100000 SCRATCH 25 GB 100000

                  Tip

                  The first action to take when you have exceeded your quota is to clean up your directories. You could start by removing intermediate, temporary or log files. Keeping your environment clean will never do any harm.

                  Tip

                  If you obtained your VSC account via UGent, you can get (significantly) more storage quota in the DATA and SCRATCH volumes by joining a Virtual Organisation (VO), see the section on virtual organisations for more information. In case of questions, contact hpc@ugent.be.

                  "}, {"location": "running_jobs_with_input_output_data/#check-your-quota", "title": "Check your quota", "text": "

                  You can consult your current storage quota usage on the HPC-UGent infrastructure shared filesystems via the VSC accountpage, see the \"Usage\" section at https://account.vscentrum.be .

                  VO moderators can inspect storage quota for all VO members via https://account.vscentrum.be/django/vo/.

                  To check your storage usage on the local scratch filesystems on VSC sites other than UGent, you can use the \"show_quota\" command (when logged into the login nodes of that VSC site).

                  Once your quota is (nearly) exhausted, you will want to know which directories are responsible for the consumption of your disk space. You can check the size of all subdirectories in the current directory with the \"du\" (Disk Usage) command:

                  $ du\n256 ./ex01-matlab/log\n1536 ./ex01-matlab\n768 ./ex04-python\n512 ./ex02-python\n768 ./ex03-python\n5632\n

                  This shows you first the aggregated size of all subdirectories, and finally the total size of the current directory \".\" (this includes files stored in the current directory).

                  If you also want this size to be \"human-readable\" (and not always the total number of kilobytes), you add the parameter \"-h\":

                  $ du -h\n256K ./ex01-matlab/log\n1.5M ./ex01-matlab\n768K ./ex04-python\n512K ./ex02-python\n768K ./ex03-python\n5.5M .\n

                  If the number of lower level subdirectories starts to grow too big, you may not want to see the information at that depth; you could just ask for a summary of the current directory:

                  $ du -s\n5632 .\n$ du -s -h\n

                  If you want to see the size of any file or top-level subdirectory in the current directory, you could use the following command:

                  $ du -h --max-depth 1\n1.5M ./ex01-matlab\n512K ./ex02-python\n768K ./ex03-python\n768K ./ex04-python\n256K ./example.sh\n1.5M ./intro-HPC.pdf\n700M ./.cache\n

                  Finally, if you don't want to know the size of the data in your current directory, but in some other directory (e.g., your data directory), you just pass this directory as a parameter. The command below will show the disk use in your home directory, even if you are currently in a different directory:

                  $ du -h --max-depth 1 $VSC_HOME\n22M /user/home/gent/vsc400/vsc40000/dataset01\n36M /user/home/gent/vsc400/vsc40000/dataset02\n22M /user/home/gent/vsc400/vsc40000/dataset03\n3.5M /user/home/gent/vsc400/vsc40000/primes.txt\n24M /user/home/gent/vsc400/vsc40000/.cache\n
                  "}, {"location": "running_jobs_with_input_output_data/#groups", "title": "Groups", "text": "

                  Groups are a way to manage who can access what data. A user can belong to multiple groups at a time. Groups can be created and managed without any interaction from the system administrators.
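
                  To check which groups your VSC account currently belongs to, you can use the standard groups command (a minimal sketch; it simply lists the group names for your account):

                  groups\n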

                  Please note that changes are not instantaneous: it may take about an hour for the changes to propagate throughout the entire HPC infrastructure.

                  To change the group of a directory and its underlying directories and files, you can use:

                  chgrp -R groupname directory\n
                  "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-group", "title": "Joining an existing group", "text": "
                  1. Get the group name you want to belong to.

                  2. Go to https://account.vscentrum.be/django/group/new and fill in the section named \"Join group\". You will be asked to fill in the group name and a message for the moderator of the group, where you identify yourself. This should look something like in the image below.

                  3. After clicking the submit button, a message will be sent to the moderator of the group, who will either approve or deny the request. You will be a member of the group shortly after the group moderator approves your request.

                  "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-group", "title": "Creating a new group", "text": "
                  1. Go to https://account.vscentrum.be/django/group/new and scroll down to the section \"Request new group\". This should look something like in the image below.

                  2. Fill out the group name. This cannot contain spaces.

                  3. Put a description of your group in the \"Info\" field.

                  4. You will now be a member and moderator of your newly created group.

                  "}, {"location": "running_jobs_with_input_output_data/#managing-a-group", "title": "Managing a group", "text": "

                  Group moderators can go to https://account.vscentrum.be/django/group/edit to manage their group (see the image below). Moderators can invite and remove members. They can also promote other members to moderator and remove other moderators.

                  "}, {"location": "running_jobs_with_input_output_data/#inspecting-groups", "title": "Inspecting groups", "text": "

                  You can get details about the current state of groups on the HPC infrastructure with the following command (here, example is the name of the group we want to inspect):

                  $ getent group example\nexample:*:1234567:vsc40001,vsc40002,vsc40003\n

                  We can see that the group's VSC ID number is 1234567 and that there are three members in the group: vsc40001, vsc40002 and vsc40003.

                  "}, {"location": "running_jobs_with_input_output_data/#virtual-organisations", "title": "Virtual Organisations", "text": "

                  A Virtual Organisation (VO) is a special type of group. You can only be a member of one single VO at a time (or not be in a VO at all). Being in a VO allows for larger storage quota to be obtained (but these requests should be well-motivated).

                  "}, {"location": "running_jobs_with_input_output_data/#joining-an-existing-vo", "title": "Joining an existing VO", "text": "
                  1. Get the VO id of the research group you belong to (this id is formed by the letters gvo, followed by 5 digits).

                  2. Go to https://account.vscentrum.be/django/vo/join and fill in the section named \"Join VO\". You will be asked to fill in the VO id and a message for the moderator of the VO, where you identify yourself. This should look something like in the image below.

                  3. After clicking the submit button, a message will be sent to the moderator of the VO, who will either approve or deny the request.

                  "}, {"location": "running_jobs_with_input_output_data/#creating-a-new-vo", "title": "Creating a new VO", "text": "
                  1. Go to https://account.vscentrum.be/django/vo/new and scroll down to the section \"Request new VO\". This should look something like in the image below.

                  2. Fill in why you want to request a VO.

                  3. Fill out both the internal and public VO name. These cannot contain spaces, and should be 8-10 characters long. For example, genome25 is a valid VO name.

                  4. Fill out the rest of the form and press submit. This will send a message to the HPC administrators, who will then either approve or deny the request.

                  5. If the request is approved, you will now be a member and moderator of your newly created VO.

                  "}, {"location": "running_jobs_with_input_output_data/#requesting-more-storage-space", "title": "Requesting more storage space", "text": "

                  If you're a moderator of a VO, you can request additional quota for the VO and its members.

                  1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Request additional quota\". See the image below to see how this looks.

                  2. Fill out how much additional storage you want. In the screenshot below, we're asking for 500 GiB extra space for VSC_DATA, and for 1 TiB extra space on VSC_SCRATCH_KYUKON.

                  3. Add a comment explaining why you need additional storage space and submit the form.

                  4. An HPC administrator will review your request and approve or deny it.

                  "}, {"location": "running_jobs_with_input_output_data/#setting-per-member-vo-quota", "title": "Setting per-member VO quota", "text": "

                  VO moderators can tweak how much of the VO quota each member can use. By default, this is set to 50% for each user, but the moderator can change this: it is possible to give a particular user more than half of the VO quota (for example 80%), or significantly less (for example 10%).

                  Note that the total percentage can be above 100%: the percentages the moderator allocates per user are the maximum percentages of storage users can use.

                  1. Go to https://account.vscentrum.be/django/vo/edit and scroll down to \"Manage per-member quota share\". See the image below to see how this looks.

                  2. Fill out how much percent of the space you want each user to be able to use. Note that the total can be above 100%. In the screenshot below, there are four users. Alice and Bob can use up to 50% of the space, Carl can use up to 75% of the space, and Dave can only use 10% of the space. So in total, 185% of the space has been assigned, but of course only 100% can actually be used.

                  "}, {"location": "running_jobs_with_input_output_data/#vo-directories", "title": "VO directories", "text": "

                  When you're a member of a VO, there will be some additional directories on each of the shared filesystems available:

                  VO scratch ($VSC_SCRATCH_VO): A directory on the shared scratch filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_SCRATCH directory (see the section on your scratch space).

                  VO data ($VSC_DATA_VO): A directory on the shared data filesystem shared by the members of your VO, where additional storage quota can be provided (see the section on requesting more storage space). You can use this as an alternative to your personal $VSC_DATA directory (see the section on your data directory).

                  If you put _USER after each of these variable names, you can see your personal folder in these filesystems. For example: $VSC_DATA_VO_USER is your personal folder in your VO data filesystem (this is equivalent to $VSC_DATA_VO/$USER), and analogous for $VSC_SCRATCH_VO_USER.
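
                  For example (a sketch, with results.tar.gz as a placeholder file name), copying job results into your personal folder inside the VO data filesystem:

                  cp results.tar.gz $VSC_DATA_VO_USER/\necho $VSC_DATA_VO_USER   # equivalent to $VSC_DATA_VO/$USER\n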

                  "}, {"location": "setting_up_python_virtual_environments/", "title": "Python Virtual Environments (venv's)", "text": ""}, {"location": "setting_up_python_virtual_environments/#introduction", "title": "Introduction", "text": "

                  A Python virtual environment (\"venv\" for short) is a tool to create an isolated Python workspace. Within this isolated environment, you can install additional Python packages without affecting the system-wide Python installation. Because a normal user cannot install packages globally, using a virtual environment allows you to install packages locally without needing administrator privileges. This is especially useful when you need to use a package that is not available as a module on the HPC cluster.

                  "}, {"location": "setting_up_python_virtual_environments/#managing-python-environments", "title": "Managing Python Environments", "text": "

                  This section will explain how to create, activate, use and deactivate Python virtual environments.

                  "}, {"location": "setting_up_python_virtual_environments/#creating-a-python-virtual-environment", "title": "Creating a Python virtual environment", "text": "

                  A Python virtual environment can be created with the following command:

                  python -m venv myenv      # Create a new virtual environment named 'myenv'\n

                  This command creates a new subdirectory named myenv in the current working directory. This directory will contain the packages, scripts, and binaries that are needed to manage the virtual environment.

                  Warning

                  When you create a virtual environment on top of a loaded Python module, the environment becomes specific to the cluster you're working on. This is because modules are built and optimized for the operating system and CPUs of the cluster. This means that you should create a new virtual environment for each cluster you work on. See Creating a virtual environment for a specific cluster for more information.

                  "}, {"location": "setting_up_python_virtual_environments/#activating-a-virtual-environment", "title": "Activating a virtual environment", "text": "

                  To use the virtual environment, you need to activate it. This will modify the shell environment to use the Python interpreter and packages from the virtual environment.

                  source myenv/bin/activate                    # Activate the virtual environment\n
                  "}, {"location": "setting_up_python_virtual_environments/#installing-packages-in-a-virtual-environment", "title": "Installing packages in a virtual environment", "text": "

                  After activating the virtual environment, you can install additional Python packages with pip install:

                  pip install example_package1\npip install example_package2\n

                  These packages will be scoped to the virtual environment: they will not affect the system-wide Python installation and are only available while the virtual environment is activated. No administrator privileges are required to install packages in a virtual environment.

                  It is now possible to run Python scripts that use the installed packages in the virtual environment.

                  Tip

                  When creating a virtual environment, it's best to install only pure Python packages. Pure Python packages consist solely of Python code and don't require compilation. The installation method of these packages doesn't impact performance since they're not compiled.

                  Compiled libraries with a Python wrapper (non-pure Python packages) are better loaded as modules rather than installed in the virtual environment. This is because modules are optimized for the HPC cluster\u2019s specific hardware and operating system. If a non-pure Python package isn't available as a module, you can submit a software installation request.

                  To check if a package is available as a module, use:

                  module av package_name\n

                  Some Python packages are installed as extensions of modules. For example, numpy, scipy and pandas are part of the SciPy-bundle module. You can use

                  module show module_name\n

                  to check which extensions are included in a module (if any).

                  "}, {"location": "setting_up_python_virtual_environments/#using-a-virtual-environment", "title": "Using a virtual environment", "text": "

                  Once the environment is activated and packages are installed, you can run Python scripts that use the installed packages:

                  example.py
                  import example_package1\nimport example_package2\n...\n
                  python example.py\n
                  "}, {"location": "setting_up_python_virtual_environments/#deactivating-a-virtual-environment", "title": "Deactivating a virtual environment", "text": "

                  When you are done using the virtual environment, you can deactivate it. To do that, run:

                  deactivate\n
                  "}, {"location": "setting_up_python_virtual_environments/#combining-virtual-environments-with-centrally-installed-modules", "title": "Combining virtual environments with centrally installed modules", "text": "

                  You can combine Python packages installed in a virtual environment with environment modules. The following script uses PyTorch (which is available as a module) and Poutyne (which we assume is not centrally installed):

                  pytorch_poutyne.py
                  import torch\nimport poutyne\n\n...\n

                  We load a PyTorch package as a module and install Poutyne in a virtual environment:

                  module load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\npip install Poutyne\n

                  While the virtual environment is activated, we can run the script without any issues:

                  python pytorch_poutyne.py\n

                  Deactivate the virtual environment when you are done:

                  deactivate\n
                  "}, {"location": "setting_up_python_virtual_environments/#creating-a-virtual-environment-for-a-specific-cluster", "title": "Creating a virtual environment for a specific cluster", "text": "

                  To create a virtual environment for a specific cluster, you need to start an interactive shell on that cluster. Let's say you want to create a virtual environment on the donphan cluster.

                  module swap cluster/donphan\nqsub -I\n

                  After some time, a shell will be started on the donphan cluster. You can now create a virtual environment as described in the first section. This virtual environment can be used by jobs running on the donphan cluster.

                  Naming a virtual environment

                  When naming a virtual environment, it is recommended to include the name of the cluster it was created for. We can use the $VSC_INSTITUTE_CLUSTER variable to get the name of the current cluster.

                  python -m venv myenv_${VSC_INSTITUTE_CLUSTER}\n
                  "}, {"location": "setting_up_python_virtual_environments/#example-python-job", "title": "Example Python job", "text": "

                  This section will combine the concepts discussed in the previous sections to:

                  1. Create a virtual environment on a specific cluster.
                  2. Combine packages installed in the virtual environment with modules.
                  3. Submit a job script that uses the virtual environment.

                  The example script that we will run is the following:

                  pytorch_poutyne.py
                  import torch\nimport poutyne\n\nprint(f\"The version of PyTorch is: {torch.__version__}\")\nprint(f\"The version of Poutyne is: {poutyne.__version__}\")\n

                  First, we create a virtual environment on the donphan cluster:

                  module swap cluster/donphan\nqsub -I\n# Load module dependencies\nmodule load PyTorch/2.1.2-foss-2023a\npython -m venv myenv\nsource myenv/bin/activate\n# install virtual environment dependencies\npip install Poutyne\ndeactivate\n

                  Type exit to exit the interactive shell. We now create a job script that loads the PyTorch module, enters the virtual environment and executes the script:

                  jobscript.pbs
                  #!/bin/bash\n\n# Basic parameters\n#PBS -N python_job_example            ## Job name\n#PBS -l nodes=1:ppn=1                 ## 1 node, 1 processors per node\n#PBS -l walltime=01:00:00             ## Max time your job will run (no more than 72:00:00)\n\nmodule load PyTorch/2.1.2-foss-2023a  # Load the PyTorch module\ncd $PBS_O_WORKDIR                     # Change working directory to the location where the job was submitted\nsource myenv/bin/activate             # Activate the virtual environment\n\npython pytorch_poutyne.py             # Run your Python script, or any other command within the virtual environment\n\ndeactivate                            # Deactivate the virtual environment\n

                  Next, we submit the job script:

                  qsub jobscript.pbs\n

                  Two files will be created in the directory where the job was submitted: python_job_example.o123456 and python_job_example.e123456, where 123456 is the id of your job. The .o file contains the output of the job.
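
                  Once the job has finished, you can inspect the results by viewing the output file (the job id will differ for your job):

                  cat python_job_example.o123456\n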

                  "}, {"location": "setting_up_python_virtual_environments/#troubleshooting", "title": "Troubleshooting", "text": ""}, {"location": "setting_up_python_virtual_environments/#illegal-instruction-error", "title": "Illegal instruction error", "text": "

                  Activating a virtual environment created on a different cluster can cause issues. This happens because the binaries in the virtual environments from cluster A might not work with the CPU architecture of cluster B.

                  For example, if we create a virtual environment on the skitty cluster,

                  $ module swap cluster/skitty\n$ qsub -I\n$ python -m venv myenv\n

                  return to the login node by pressing CTRL+D and try to use the virtual environment:

                  $ source myenv/bin/activate\n$ python\nIllegal instruction (core dumped)\n

                  we are presented with an illegal instruction error. More information on this can be found here.

                  "}, {"location": "setting_up_python_virtual_environments/#error-glibc-not-found", "title": "Error: GLIBC not found", "text": "

                  When running a virtual environment across clusters with different major OS versions, you might encounter a variation of the following error:

                  python: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by python)\n

                  Make sure you do not activate a virtual environment created on a different cluster. For more information on how to create a virtual environment for a specific cluster, see Creating a virtual environment for a specific cluster. When following these steps, make sure you do not have any modules loaded when starting the interactive job.
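
                  As a hedged sketch, assuming you want the virtual environment to live on the donphan cluster as in the earlier examples, you could start from a clean environment like this (module purge unloads software modules; the cluster module is normally sticky and survives the purge):

                  module purge                 # make sure no software modules are loaded\nmodule swap cluster/donphan\nqsub -I\n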

                  "}, {"location": "setting_up_python_virtual_environments/#error-cannot-open-shared-object-file-no-such-file-or-directory", "title": "Error: cannot open shared object file: No such file or directory", "text": "

                  There are two main reasons why this error could occur.

                  1. You have not loaded the Python module that was used to create the virtual environment.
                  2. You loaded or unloaded modules while the virtual environment was activated.
                  "}, {"location": "setting_up_python_virtual_environments/#entering-a-virtual-environment-while-the-python-module-used-to-create-it-is-not-active", "title": "Entering a virtual environment while the Python module used to create it is not active", "text": "

                  If you loaded a Python module when creating a virtual environment, you need to make sure that the same module is loaded when you enter the environment. This is because the virtual environment keeps a reference to the base python used to create it.

                  The following commands illustrate this issue:

                  $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment with loaded python module\n$ module purge                              # unload all modules (WRONG!)\n$ source myenv/bin/activate                 # Activate the virtual environment\n$ python                                    # Start python\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

                  Here, the virtual environment tries to use the python module that was loaded when the environment was created. Since we used module purge, that module is no longer available. The solution is to load the same python module before activating the virtual environment:

                  module load Python/3.10.8-GCCcore-12.2.0  # Load the same python module\nsource myenv/bin/activate                 # Activate the virtual environment\n
                  "}, {"location": "setting_up_python_virtual_environments/#modifying-modules-while-in-a-virtual-environment", "title": "Modifying modules while in a virtual environment", "text": "

                  You must not load or unload modules while a virtual environment is active. Loading and unloading modules modifies the $PATH variable of the current shell. When you activate a virtual environment, it stores the $PATH variable of the shell at that moment. If you then load or unload modules while the virtual environment is active, and afterwards deactivate it, $PATH will be reset to the stored value, which may still reference modules that are no longer loaded. Trying to use those modules will lead to errors:

                  $ module load Python/3.10.8-GCCcore-12.2.0  # Load a python module\n$ python -m venv myenv                      # Create a virtual environment\n$ source myenv/bin/activate                 # Activate the virtual environment (saves state of $PATH)\n$ module purge                              # Unload all modules (modifies the $PATH)\n$ deactivate                                # Deactivate the virtual environment (resets $PATH to saved state)\n$ python                                    # PATH contains a reference to the unloaded module\npython: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory\n

                  The solution is to only modify modules when not in a virtual environment.
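
                  A minimal sketch of the safe order, reusing the Python module from the example above:

                  deactivate                                # leave the virtual environment first\nmodule load Python/3.10.8-GCCcore-12.2.0  # now it is safe to load or unload modules\nsource myenv/bin/activate                 # re-activate the virtual environment afterwards\n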

                  "}, {"location": "singularity/", "title": "Singularity", "text": ""}, {"location": "singularity/#what-is-singularity", "title": "What is Singularity?", "text": "

                  Singularity is an open-source computer program that performs operating-system-level virtualization (also known as containerisation).

                  One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.

                  For more general information about the use of Singularity, please see the official documentation at https://www.sylabs.io/docs/.

                  This documentation only covers aspects of using Singularity on the infrastructure.

                  "}, {"location": "singularity/#restrictions-on-image-location", "title": "Restrictions on image location", "text": "

                  Some restrictions have been put in place on the use of Singularity. This is mainly done for performance reasons and to avoid that the use of Singularity impacts other users on the system.

                  The Singularity image file must be located on either one of the scratch filesystems, the local disk of the workernode you are using or /dev/shm. The centrally provided singularity command will refuse to run using images that are located elsewhere, in particular on the $VSC_HOME, /apps or $VSC_DATA filesystems.
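
                  For example, assuming a (hypothetical) image file named example.sif, you can copy it to your scratch directory and run it from there:

                  cp example.sif $VSC_SCRATCH/\nsingularity exec $VSC_SCRATCH/example.sif cat /etc/os-release\n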

                  In addition, this implies that running container images provided via a URL (e.g., shub://... or docker://...) will not work.

                  If these limitations are a problem for you, please let us know by contacting the HPC team.

                  "}, {"location": "singularity/#available-filesystems", "title": "Available filesystems", "text": "

                  All HPC-UGent shared filesystems will be readily available in a Singularity container, including the home, data and scratch filesystems, and they will be accessible via the familiar $VSC_HOME, $VSC_DATA* and $VSC_SCRATCH* environment variables.

                  "}, {"location": "singularity/#singularity-images", "title": "Singularity Images", "text": ""}, {"location": "singularity/#creating-singularity-images", "title": "Creating Singularity images", "text": "

                  Creating new Singularity images or converting Docker images, by default, requires admin privileges, which are obviously not available on the infrastructure. However, if you use the --fakeroot option, you can create new Singularity images or convert Docker images.

                  When you create Singularity images or convert Docker images, the following restrictions apply:

                  • Due to the nature of the --fakeroot option, we recommend writing your Singularity image to a globally writable location, like the /tmp or /local directories. Once the image is created, you should move it to your desired destination (see the sketch below).
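
                  A minimal sketch, assuming a (hypothetical) definition file example.def:

                  singularity build --fakeroot /tmp/example.sif example.def   # build in a globally writable location\nmv /tmp/example.sif $VSC_SCRATCH/                            # then move the image to its destination\n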
                  "}, {"location": "singularity/#converting-docker-images", "title": "Converting Docker images", "text": "

                  For more information on converting existing Docker images to Singularity images, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html.

                  We strongly recommend the use of Docker Hub, see https://hub.docker.com/ for more information.
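
                  As a hedged sketch, an image from Docker Hub can be pulled and converted to a Singularity image on your scratch filesystem (the image name is just an example):

                  singularity pull $VSC_SCRATCH/ubuntu.sif docker://ubuntu:22.04\n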

                  "}, {"location": "singularity/#execute-our-own-script-within-our-container", "title": "Execute our own script within our container", "text": "

                  Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH:

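                  The exact command is not preserved on this page; as a sketch, with a hypothetical image name:

                  cp /apps/gent/tutorials/Singularity/<image_name>.img $VSC_SCRATCH/\n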

                  Create a job script like:
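
                  The original job script is not reproduced here; a minimal sketch (the image name is a placeholder) that runs myscript.sh inside the container could look like:

                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -l walltime=00:10:00\n\ncd $VSC_SCRATCH\nsingularity exec ./<image_name>.img ./myscript.sh\n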

                  Create an example myscript.sh:

                  #!/bin/bash\n\n# prime factors\nfactor 1234567\n

                  "}, {"location": "singularity/#tensorflow-example", "title": "Tensorflow example", "text": "

                  We already have a Tensorflow example image, but you can also convert the Docker image (see https://hub.docker.com/r/tensorflow/tensorflow) to a Singularity image yourself.

                  Copy the testing image from /apps/gent/tutorials to $VSC_SCRATCH:

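                  The exact command is not preserved on this page; as a sketch, with a hypothetical image name:

                  cp /apps/gent/tutorials/<tensorflow_image>.img $VSC_SCRATCH/\n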

                  You can download linear_regression.py from the official Tensorflow repository.
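
                  Once both are in place, a hedged sketch of running the script inside the container (the image name is a placeholder):

                  cd $VSC_SCRATCH\nsingularity exec ./<tensorflow_image>.img python linear_regression.py\n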

                  "}, {"location": "singularity/#mpi-example", "title": "MPI example", "text": "

                  It is also possible to execute MPI jobs within a container, but the following requirements apply:

                  • Mellanox IB libraries must be available from the container (install the infiniband-diags, libmlx5-1 and libmlx4-1 OS packages)

                  • Use modules within the container (install the environment-modules or lmod package in your container)

                  • Load the required module(s) before singularity execution.

                  • Set C_INCLUDE_PATH variable in your container if it is required during compilation time (export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu/:$C_INCLUDE_PATH for Debian flavours)

                  Copy the testing image from /apps/gent/tutorials/Singularity to $VSC_SCRATCH

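                  The exact command is not preserved on this page; as a sketch, with a hypothetical image name:

                  cp /apps/gent/tutorials/Singularity/<mpi_image>.img $VSC_SCRATCH/\n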

                  For example, to compile an MPI example:

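                  The original commands are not preserved here; a rough sketch, assuming the container provides an MPI compiler and using hypothetical file and module names:

                  module load <MPI_module>   # load the required module(s) before singularity execution\nsingularity exec $VSC_SCRATCH/<mpi_image>.img mpicc mpi_example.c -o mpi_example\n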

                  Example MPI job script:
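
                  The original job script is not reproduced here; a rough sketch under the same hypothetical names, using the mympirun tool recommended elsewhere in this documentation:

                  #!/bin/bash\n#PBS -l nodes=2:ppn=8\n#PBS -l walltime=01:00:00\n\nmodule load <MPI_module>   # the same (hypothetical) MPI module used at compile time\ncd $PBS_O_WORKDIR\nmympirun singularity exec $VSC_SCRATCH/<mpi_image>.img ./mpi_example\n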

                  "}, {"location": "teaching_training/", "title": "Teaching and training", "text": "

                  The HPC infrastructure can be used for teaching and training purposes, and HPC-UGent provides support for getting you organized.

                  As a reminder, both Bachelor and Master students are allowed to use the HPC infrastructure, and it is also possible to organize trainings (or workshops). But in either case we do recommend preparing a fallback plan in case the HPC infrastructure becomes unavailable, e.g. because of an unexpected power failure.

                  In general, we advise the use of the HPC webportal in combination with the interactive cluster for teaching and training, but deviations are possible upon request.

                  In order to prepare things, make a teaching request by contacting the HPC-UGent team with the following information (explained further below):

                  • Title and nickname
                  • Start and end date for your course or training
                  • VSC-ids of all teachers/trainers
                  • Participants based on UGent Course Code and/or list of VSC-ids
                  • Optional information
                    • Additional storage requirements
                      • Shared folder
                      • Groups folder for collaboration
                      • Quota
                    • Reservation for resource requirements beyond the interactive cluster
                    • Ticket number for specific software needed for your course/training
                    • Details for a custom Interactive Application in the webportal

                  In addition, it could be beneficial to set up a short Teams call with HPC-UGent team members, especially if you are using a complex workflow for your course/workshop.

                  Please make these requests well in advance, several weeks before the start of your course/workshop.

                  "}, {"location": "teaching_training/#title-and-nickname", "title": "Title and nickname", "text": "

                  The title of the course or training can be used in e.g. reporting.

                  The nickname is a single (short) word or acronym that the students or participants can easily recognise, e.g. in the directory structure. In case of UGent courses, this is used next to the course code to help identify the course directory in the list of all courses one might follow.

                  When choosing the nickname, try to make it unique; this is, however, neither enforced nor checked.

                  "}, {"location": "teaching_training/#start-and-end-date", "title": "Start and end date", "text": "

                  The start date (and time) is used as a target for the HPC-UGent team to set up your course requirements. But note that this target is best-effort, depending on the load of the support team and the complexity of your requirements. Requests should be made well in advance, at least several weeks before the actual start of your course. The sooner you make the request, the better.

                  The end date is used to automatically perform a cleanup when your course/workshop has finished, as described in the course data policy:

                  • Course group and subgroups will be deactivated
                  • Residual data in the course directories will be archived or deleted
                  • Custom Interactive Applications will be disabled
                  "}, {"location": "teaching_training/#teachers-and-trainers", "title": "Teachers and trainers", "text": "

                  A course group is created with all students or participants, and the teachers or trainers are the group moderators (and also members of this group).

                  This course group and the moderators group are used to manage the different privileges: moderators have additional privileges over non-moderator members e.g. they have read/write access in specific folders, can manage subgroups, ....

                  Provide us with a list of all the VSC-ids of the teachers or trainers to identify the moderators.

                  "}, {"location": "teaching_training/#participants", "title": "Participants", "text": "

                  How the list of students or participants is managed depends on whether this is a UGent course or a training/workshop.

                  "}, {"location": "teaching_training/#ugent-courses", "title": "UGent Courses", "text": "

                  Based on the Course Code, we can create VSC accounts for all UGent students that have officially enrolled in your UGent course (if they do not have an account already). Students will then no longer have to take steps themselves to request a VSC account. The students do need to be officially enrolled, so that they are linked to your UGent Course Code.

                  The created VSC accounts will be accounts without an ssh-key. This allows the students to use e.g. the portal, but if they require ssh access to the infrastructure, they will have to add an SSH key themselves.

                  Additionally, for external, non-UGent students the teaching request must contain the list of their VSC-ids, so they can be added to the course group.

                  A course group will be automatically created for your course, with the VSC accounts of all registered students as members. The typical format is gcourse_<coursecode>_<year>, e.g. gcourse_e071400_2023. Teachers are moderators of this course group, but will not be able to add unregistered students or moderators. VSC accounts that are not linked to the Course Code will be automatically removed from the course group. To get a student added to the course group, make sure that the student becomes officially enrolled in your course.

                  "}, {"location": "teaching_training/#trainings-and-workshops", "title": "Trainings and workshops", "text": "

                  (Currently under construction:) For trainings, workshops or courses that do not have a Course Code, you need to provide us with the list of all VSC-ids. A group will be made, based on the name of the workshop, with all VSC-ids as member. Teachers/trainers will be able to add/remove VSC accounts from this course group. But students will have to follow the procedure to request a VSC account themselves. There will be no automation.

                  "}, {"location": "teaching_training/#dedicated-storage", "title": "Dedicated storage", "text": "

                  For every course, a dedicated course directory will be created on the DATA filesystem under /data/gent/courses/<year>/<nickname>_<coursecode> (e.g. /data/gent/courses/2023/cae_e071400).

                  This directory will be accessible by all members of your course group. (Hence, it is no longer necessary to set up dangerous workarounds e.g. invite course members to your virtual organization.)
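
                  For example, members of the course group in the example above can go straight to the course directory:

                  cd /data/gent/courses/2023/cae_e071400\n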

                  Every course directory will always contain the folders:

                  • input
                    • ideally suited to distribute input data such as common datasets
                    • moderators have read/write access
                    • group members (students) only have read access
                  • members
                    • this directory contains a personal folder for every student in your course (members/vsc<01234>)
                    • only this specific VSC-id will have read/write access to this folder
                    • moderators have read access to this folder
                  "}, {"location": "teaching_training/#shared-and-groups", "title": "Shared and groups", "text": "

                  Optionally, we can also create these folders:

                  • shared
                    • this is a folder for sharing files between any and all group members
                    • all group members and moderators have read/write access
                    • beware that group members will be able to alter/delete each other's files in this folder if they set permissions in specific/non-default ways
                  • groups
                    • a number of groups/group_<01> folders are created under the groups folder
                    • these folders are suitable if you want to let your students collaborate closely in smaller groups
                    • each of these group_<01> folders is owned by a dedicated group
                    • teachers are automatically made moderators of these dedicated groups
                    • moderators can populate these groups with the VSC-ids of group members in the VSC accountpage, or ask the students to invite themselves via group edit. When students invite themselves, moderators still need to approve the group invites.
                    • only these VSC-ids will then be able to access a group_<01> folder, and will have read/write access.

                  If you need any of these additional folders, do indicate under Optional storage requirements of your teaching request:

                  • shared: yes
                  • subgroups: <number of (sub)groups>
                  "}, {"location": "teaching_training/#course-quota", "title": "Course Quota", "text": "

                  There are 4 quota settings that you can choose in your teaching request, in case the defaults are not sufficient:

                  • overall quota (default: 10 GB volume and 20k files) is for the moderators and can be used for e.g. the input folder.
                  • member quota (default: 5 GB volume and 10k files) applies per student/participant

                  The course data usage is not counted towards any other quota (like VO quota); it depends solely on these settings.

                  "}, {"location": "teaching_training/#course-data-policy", "title": "Course data policy", "text": "

                  The data policy for the dedicated course storage is the following: on the indicated end date of your course, the course directory will be made read-only to the moderators (possibly in the form of an archive zipfile). One year after the end date it will be permanently removed. We assume that teachers/trainers always keep their own copy of the course data as a starting point for a next course.

                  "}, {"location": "teaching_training/#resource-requirements-beyond-the-interactive-cluster", "title": "Resource requirements beyond the interactive cluster", "text": "

                  We assume that your course requirements are such that the interactive cluster can be used. If these resources are insufficient, you will need to request and motivate a reservation.

                  Indicate which cluster you would need and the number of nodes, cores and/or GPUs. Also, clearly indicate when you would need these resources, i.e. the dates and times of each course session.

                  Be aware that students will have no access to the reservation outside the course sessions. This might be relevant when requesting a custom application.

                  Reservations take away precious resources for all HPC users, so only request this when it is really needed for your course. In our experience, the interactive cluster is more than sufficient for the majority of cases.

                  "}, {"location": "teaching_training/#specific-software", "title": "Specific software", "text": "

                  In case you need software for your course/workshop that is unavailable or that needs to be updated, make a separate software installation request. Add the OTRS ticket number in your teaching request.

                  We will try to make the software available before the start of your course/workshop. But this is always best effort, depending on the load of the support team and the complexity of your software request. Typically, software installation requests must be made at least one month before the course/workshop starts.

                  Ideally, courses/workshops rely on software that is already in use (and thus also well tested).

                  "}, {"location": "teaching_training/#custom-interactive-application-in-the-webportal", "title": "Custom Interactive Application in the webportal", "text": "

                  HPC-UGent can create a custom interactive application in the web portal for your course/workshop. Typically, this is a generic interactive application such as cluster desktop, Jupyter notebook, ... in which a number of options are preset or locked down: e.g. the number of cores, software version, cluster selection, autostart code, etc. This could make it easier for teachers and students, since students are less prone to making mistakes and do not have to spend time copy-pasting specific settings.

                  A custom interactive application will only be available to the members of your course group. It will appear in the Interactive Apps menu in the webportal, under the section Courses. After the indicated end date of your course, this application will be removed.

                  If you would like this for your course, provide more details in your teaching request, including:

                  • which interactive application you would like to have launched (cluster desktop, Jupyter Notebook, ...)

                  • which cluster you want to use

                  • how many nodes/cores/GPUs are needed

                  • which software modules you are loading

                  • custom code you are launching (e.g. autostart a GUI)

                  • required environment variables that you are setting

                  • ...

                  We will try to make the custom interactive application available before the start of your course/workshop, but this is always best effort, depending on the load of the support team and the complexity of your request.

                  A caveat for the teacher and students is that students do not learn to work with the generic application, and do not see the actual commands or customization code. Therefore, per custom interactive application, HPC-UGent will make a dedicated section in the web portal chapter of the HPC user documentation. This section will briefly explain what happens under the hood of the interactive application. We would recommend that you as a teacher take some time to show and explain this to the students. Note that the custom interactive application will disappear for students after the indicated end of your course, but the section in the web portal will remain there for several years, for reference.

                  "}, {"location": "torque_frontend_via_jobcli/", "title": "Torque frontend via jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#what-is-torque", "title": "What is Torque", "text": "

                  Torque is a resource manager for submitting and managing jobs on an HPC cluster. It is an implementation of PBS (Portable Batch System). Torque is not widely used anymore: since 2021, the HPC-UGent infrastructure no longer uses Torque in the backend, in favor of Slurm. The Torque user interface, which consists of commands like qsub and qstat, was kept, however, so that researchers did not have to learn other commands to submit and manage jobs.

                  "}, {"location": "torque_frontend_via_jobcli/#slurm-backend", "title": "Slurm backend", "text": "

                  Slurm is a resource manager for submitting and managing jobs on an HPC cluster, similar to Torque (but more advanced/modern in some ways). Currently, Slurm is the most popular workload manager on HPC systems worldwide, but it has a user interface that is different and in some sense less user friendly than Torque/PBS.

                  "}, {"location": "torque_frontend_via_jobcli/#jobcli", "title": "jobcli", "text": "

                  Jobcli is a Python library that was developed by the HPC-UGent team to make it possible for the HPC-UGent infrastructure to use a Torque frontend and a Slurm backend. In addition, it adds some extra options to the Torque commands. Put simply, jobcli can be thought of as a Python script that \"translates\" Torque commands into equivalent Slurm commands, and in the case of qsub also makes some changes to the provided job script to make it compatible with Slurm.

                  "}, {"location": "torque_frontend_via_jobcli/#additional-options-for-torque-commands-supported-by-jobcli", "title": "Additional options for Torque commands supported by jobcli", "text": ""}, {"location": "torque_frontend_via_jobcli/#help-option", "title": "help option", "text": "

                  Adding --help to a Torque command when using it on the HPC-UGent infrastructure will output an extensive overview of all options supported for that command (both the original Torque options and the ones added by jobcli), with a short description of each one.

                  For example:

                  $ qsub --help\nusage: qsub [--version] [--debug] [--dryrun] [--pass OPTIONS] [--dump PATH]...\n\nSubmit job script\n\npositional arguments:\n  script_file_path      Path to job script to be submitted (default: read job\n                        script from stdin)\n\noptional arguments:\n  -A ACCOUNT            Charge resources used by this job to specified account\n  ...\n

                  "}, {"location": "torque_frontend_via_jobcli/#dryrun-option", "title": "dryrun option", "text": "

                  Adding --dryrun to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. Using --dryrun will not actually execute the Slurm backend command.

                  See also the examples below.

                  "}, {"location": "torque_frontend_via_jobcli/#debug-option", "title": "debug option", "text": "

                  Similarly to --dryrun, adding --debug to a Torque command when using it on the HPC-UGent infrastructure will show the user what Slurm commands are generated by that Torque command by jobcli. However in contrast to --dryrun, using --debug will actually run the Slurm backend command.

                  See also the examples below.

                  "}, {"location": "torque_frontend_via_jobcli/#examples", "title": "Examples", "text": "

                  The following examples illustrate the working of the --dryrun and --debug options with an example jobscript.

                  example.sh:

                  #/bin/bash\n#PBS -l nodes=1:ppn=8\n#PBS -l walltime=2:30:00\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
                  "}, {"location": "torque_frontend_via_jobcli/#example-of-the-dryrun-option", "title": "Example of the dryrun option", "text": "

                  Running the following command:

                  $ qsub --dryrun example.sh -N example\n

                  will generate this output:

                  Command that would have been run:\n---------------------------------\n\n/usr/bin/sbatch\n\nJob script that would have been submitted:\n------------------------------------------\n\n#!/bin/bash\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\n\n### (start of lines that were added automatically by jobcli)\n#\n# original submission command:\n# qsub --dryrun example.sh -N example\n#\n# directory where submission command was executed:\n# /kyukon/home/gent/400/vsc40000/examples\n#\n# original script header:\n# #PBS -l nodes=1:ppn=8\n# #PBS -l walltime=2:30:00\n#\n### (end of lines that were added automatically by jobcli)\n\n#/bin/bash\n\nmodule load SciPy-bundle/2023.11-gfbf-2023b\n\npython script.py > script.out.${PBS_JOBID}\n
                  This output consists of a few components. For our example, the most important lines are the ones that start with #SBATCH, since these contain the translation of the Torque commands into Slurm commands. For example, the job name is the one we specified with the -N option in the command.

                  With this dryrun, you can see that changes were only made to the header; the job script itself is not changed at all. If the job script uses any PBS-related constructs, like $PBS_JOBID, they are retained. Slurm is configured on the HPC-UGent infrastructure such that common PBS_* environment variables are defined in the job environment, next to the Slurm equivalents.
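
                  You can verify this from within a running job, for example:

                  env | grep '^PBS_'    # lists the PBS_* compatibility variables defined in the job environment\n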

                  "}, {"location": "torque_frontend_via_jobcli/#example-of-the-debug-option", "title": "Example of the debug option", "text": "

                  Similarly to the --dryrun example, we start by running the following command:

                  $ qsub --debug example.sh -N example\n

                  which generates this output:

                  DEBUG: Submitting job script location at example.sh\nDEBUG: Generated script header\n#SBATCH --chdir=\"/user/gent/400/vsc40000\"\n#SBATCH --error=\"/kyukon/home/gent/400/vsc40000/examples/%x.e%A\"\n#SBATCH --export=\"NONE\"\n#SBATCH --get-user-env=\"60L\"\n#SBATCH --job-name=\"example\"\n#SBATCH --mail-type=\"NONE\"\n#SBATCH --nodes=\"1\"\n#SBATCH --ntasks-per-node=\"8\"\n#SBATCH --ntasks=\"8\"\n#SBATCH --output=\"/kyukon/home/gent/400/vsc40000/examples/%x.o%A\"\n#SBATCH --time=\"02:30:00\"\nDEBUG: HOOKS: Looking for hooks in directory '/etc/jobcli/hooks'\nDEBUG: HOOKS: Directory '/etc/jobcli/hooks' does not exist, so no hooks there\nDEBUG: Running command '/usr/bin/sbatch'\n64842138\n
                  The output once again consists of the translated Slurm commands with some additional debug information and a job id for the job that was submitted.

                  "}, {"location": "torque_options/", "title": "TORQUE options", "text": ""}, {"location": "torque_options/#torque-submission-flags-common-and-useful-directives", "title": "TORQUE Submission Flags: common and useful directives", "text": "

                  Below is a list of the most common and useful directives.

                  For each option, the system type it applies to is given in parentheses, followed by a description and an example directive.

                  • -k (All): Send \"stdout\" and/or \"stderr\" to your home directory when the job runs. #PBS -k o or #PBS -k e or #PBS -koe
                  • -l (All): Precedes a resource request, e.g., processors, wallclock.
                  • -M (All): Send e-mail messages to an alternative e-mail address. #PBS -M me@mymail.be
                  • -m (All): Send an e-mail message when a job begins execution and/or ends or aborts. #PBS -m b or #PBS -m be or #PBS -m ba
                  • mem (Shared Memory): Specifies the amount of memory you need for a job. #PBS -l mem=90gb
                  • mpiprocs (Clusters): Number of processes per node on a cluster. This should equal the number of processors on a node in most cases. #PBS -l mpiprocs=4
                  • -N (All): Give your job a unique name. #PBS -N galaxies1234
                  • -ncpus (Shared Memory): The number of processors to use for a shared memory job. #PBS -l ncpus=4
                  • -r (All): Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with checkpoints might not wish this to happen. #PBS -r n or #PBS -r y
                  • select (Clusters): Number of compute nodes to use. Usually combined with the mpiprocs directive. #PBS -l select=2
                  • -V (All): Make sure that the environment in which the job runs is the same as the environment in which it was submitted. #PBS -V
                  • walltime (All): The maximum time a job can run before being stopped. If not used, a default of a few minutes applies. Use this flag to prevent jobs that go bad from running for hundreds of hours. Format is HH:MM:SS. #PBS -l walltime=12:00:00"}, {"location": "torque_options/#environment-variables-in-batch-job-scripts", "title": "Environment Variables in Batch Job Scripts", "text": "

                  TORQUE-related environment variables in batch job scripts.

                  # Using PBS - Environment Variables:\n# When a batch job starts execution, a number of environment variables are\n# predefined, which include:\n#\n#      Variables defined on the execution host.\n#      Variables exported from the submission host with\n#                -v (selected variables) and -V (all variables).\n#      Variables defined by PBS.\n#\n# The following reflect the environment where the user ran qsub:\n# PBS_O_HOST    The host where you ran the qsub command.\n# PBS_O_LOGNAME Your user ID where you ran qsub.\n# PBS_O_HOME    Your home directory where you ran qsub.\n# PBS_O_WORKDIR The working directory where you ran qsub.\n#\n# These reflect the environment where the job is executing:\n# PBS_ENVIRONMENT       Set to PBS_BATCH to indicate the job is a batch job,\n#         or to PBS_INTERACTIVE to indicate the job is a PBS interactive job.\n# PBS_O_QUEUE   The original queue you submitted to.\n# PBS_QUEUE     The queue the job is executing from.\n# PBS_JOBID     The job's PBS identifier.\n# PBS_JOBNAME   The job's name.\n

                  IMPORTANT!! All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

                  When a batch job is started, a number of environment variables are created that can be used in the batch job script. A few of the most commonly used variables are described here.

                  • PBS_ENVIRONMENT: set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job.
                  • PBS_JOBID: the job identifier assigned to the job by the batch system. This is the same number you see when you do qstat.
                  • PBS_JOBNAME: the job name supplied by the user.
                  • PBS_NODEFILE: the name of the file that contains the list of the nodes assigned to the job. Useful for parallel jobs if you want to refer to the nodes, count them, etc.
                  • PBS_QUEUE: the name of the queue from which the job is executed.
                  • PBS_O_HOME: value of the HOME variable in the environment in which qsub was executed.
                  • PBS_O_LANG: value of the LANG variable in the environment in which qsub was executed.
                  • PBS_O_LOGNAME: value of the LOGNAME variable in the environment in which qsub was executed.
                  • PBS_O_PATH: value of the PATH variable in the environment in which qsub was executed.
                  • PBS_O_MAIL: value of the MAIL variable in the environment in which qsub was executed.
                  • PBS_O_SHELL: value of the SHELL variable in the environment in which qsub was executed.
                  • PBS_O_TZ: value of the TZ variable in the environment in which qsub was executed.
                  • PBS_O_HOST: the name of the host upon which the qsub command is running.
                  • PBS_O_QUEUE: the name of the original queue to which the job was submitted.
                  • PBS_O_WORKDIR: the absolute path of the current working directory of the qsub command. This is the most useful one: use it in every job script. The first thing you should do is cd $PBS_O_WORKDIR after defining the resource list, because PBS starts your job in your $HOME directory.
                  • PBS_VERSION: version number of TORQUE, e.g., TORQUE-2.5.1.
                  • PBS_MOMPORT: active port for the MOM daemon.
                  • PBS_TASKNUM: number of tasks requested.
                  • PBS_JOBCOOKIE: job cookie.
                  • PBS_SERVER: server running TORQUE"}, {"location": "troubleshooting/", "title": "Troubleshooting", "text": ""}, {"location": "troubleshooting/#job_does_not_run_faster", "title": "Why does my job not run faster when using more nodes and/or cores?", "text": "

                  Requesting more resources for your job, more specifically using multiple cores and/or nodes, does not automatically imply that your job will run faster. There are various factors that determine to what extent these extra resources can be used and how efficiently they can be used. More information on this in the subsections below.

                  "}, {"location": "troubleshooting/#using-multiple-cores", "title": "Using multiple cores", "text": "

                  When you want to speed up your jobs by requesting multiple cores, you also need to use software that is actually capable of using them (and use them efficiently, ideally). Unless a particular parallel programming paradigm like OpenMP threading (shared memory) or MPI (distributed memory) is used, software will run sequentially (on a single core).

                  To use multiple cores, the software needs to be able to create, manage, and synchronize multiple threads or processes. More on how to implement parallelization for your exact programming language can be found online. Note that when using software that only uses threads to use multiple cores, there is no point in asking for multiple nodes, since with a multi-threading (shared memory) approach you can only use the resources (cores, memory) of a single node.

                  Even if your software is able to use multiple cores, there may be no point in going beyond a single core or a handful of cores, for example because the workload you are running is too small or does not parallelize well. You can test this by increasing the number of cores step-wise and looking at the speedup you gain. For example, test with 2, 4, 16, a quarter of, half of, and all available cores.
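
                  Concretely, assuming a single-node, multi-threaded workload and a job script jobscript.pbs, a hedged sketch of such a test could be:

                  qsub -l nodes=1:ppn=2 jobscript.pbs\nqsub -l nodes=1:ppn=4 jobscript.pbs\nqsub -l nodes=1:ppn=16 jobscript.pbs\n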

                  Other reasons why using more cores may not lead to a (significant) speedup include:

                  • Overhead: When you use multi-threading (OpenMP) or multi-processing (MPI), you should not expect that doubling the amount of cores will result in a 2x speedup. This is due to the fact that time is needed to create, manage and synchronize the threads/processes. When this \"bookkeeping\" overhead exceeds the time gained by parallelization, you will not observe any speedup (or even see slower runs). For example, this can happen when you split your program in too many (tiny) tasks to run in parallel - creating a thread/process for each task may even take longer than actually running the task itself.

                  • Amdahl's Law is often used in parallel computing to predict the maximum achievable (theoretical) speedup when using multiple cores. It states that \"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used\". For example, if a program needs 20 hours to complete using a single core, but a one-hour portion of the program can not be parallelized, only the remaining 19 hours of execution time can be sped up using parallelization. Regardless of how many cores are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. So when you reach this theoretical limit, using more cores will not help at all to speed up the computational workload.

                  • Resource contention: When two or more threads/processes want to access the same resource, they need to wait on each other - this is called resource contention. As a result, 1 thread/process will need to wait until the other one is finished using that resource. When each thread uses the same resource, it will definitely run slower than if it doesn't need to wait for other threads to finish.

                  • Software limitations: It is possible that the software you are using is simply not optimized for parallelization. An example of software that is not really optimized for multi-threading is Python (although this has improved over the years). This is due to the fact that in Python, threads are implemented in a way that prevents multiple threads from running at the same time, because of the global interpreter lock (GIL). Instead of using multi-threading in Python to speed up a CPU-bound program, you should use multi-processing, which uses multiple processes (multiple instances of the same program) instead of multiple threads in a single program instance. Using multiple processes can speed up your CPU-bound programs a lot more in Python than threads can, even though processes are much less efficient to create. In other programming languages (which don't have a GIL), you would probably still want to use threads.

                  • Affinity and core pinning: Even when the software you are using is able to efficiently use multiple cores, you may not see any speedup (or even a significant slowdown). This could be due to threads or processes that are not pinned to specific cores and keep hopping around between cores, or because the pinning is done incorrectly and several threads/processes are being pinned to the same core(s), and thus keep \"fighting\" each other.

                  • Lack of sufficient memory: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).

                  More info on running multi-core workloads on the HPC-UGent infrastructure can be found here.

                  "}, {"location": "troubleshooting/#using-multiple-nodes", "title": "Using multiple nodes", "text": "

                  When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.

                  Parallelizing code across nodes is fundamentally different from leveraging multiple cores via multi-threading within a single node. The scalability achieved through multi-threading does not extend seamlessly to distributing computations across multiple nodes. This means that just changing #PBS -l nodes=1:ppn=10 to #PBS -l nodes=2:ppn=10 may only increase the waiting time to get your job running (because twice as many resources are requested), and will not improve the execution time.

                  Actually using additional nodes is not as straightforward as merely asking for multiple nodes when submitting your job. The resources on these additional nodes often need to be discovered, managed, and synchronized. This introduces complexities in distributing work effectively across the nodes. Luckily, there exist some libraries that do this for you.

                  Using the resources of multiple nodes is often done using a Message Passing Interface (MPI) library. MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

                  An example of how you can make beneficial use of multiple nodes can be found here.

                  You can also use MPI in Python, some useful packages that are also available on the HPC are:

                  • mpi4py
                  • Boost.MPI

                  We advise maximizing core utilization before considering the use of multiple nodes. Our infrastructure has clusters with a lot of cores per node, so we suggest that you first try to use all the cores on 1 node before you expand to more nodes. In addition, when running MPI software we strongly advise using our mympirun tool.
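
                  A minimal sketch, assuming your MPI program is called mpi_program and that the mympirun tool is made available via a vsc-mympirun module (an assumption; check with module avail):

                  module load vsc-mympirun   # assumed module name providing the mympirun tool\nmympirun ./mpi_program\n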

                  "}, {"location": "troubleshooting/#how-do-i-know-if-my-software-can-run-in-parallel", "title": "How do I know if my software can run in parallel?", "text": "

                  If you are not sure if the software you are using can efficiently use multiple cores or run across multiple nodes, you should check its documentation for instructions on how to run in parallel, or check for options that control how many threads/cores/nodes can be used.

                  If you can not find any information along those lines, the software you are using can probably only use a single core and thus requesting multiple cores and/or nodes will only result in wasted resources.

                  "}, {"location": "troubleshooting/#walltime-issues", "title": "Walltime issues", "text": "

                  If you get an error message in your job output similar to this:

                  =>> PBS: job killed: walltime <value in seconds> exceeded limit  <value in seconds>\n

                  This occurs when your job did not complete within the requested walltime. See section\u00a0on Specifying Walltime for more information about how to request the walltime.

                  "}, {"location": "troubleshooting/#out-of-quota-issues", "title": "Out of quota issues", "text": "

                  Sometimes a job hangs at some point or stops writing to the disk. These errors are usually related to quota usage: you may have reached your quota limit at some storage endpoint. You should move (or remove) the data to a different storage endpoint (or request more quota) so that you can write to the disk again, and then resubmit the jobs.

                  Another option is to request extra quota for your VO from the VO moderator(s). See the section on Pre-defined user directories and Pre-defined quotas for more information about quotas and how to use the storage endpoints in an efficient way.

                  "}, {"location": "troubleshooting/#sec:connecting-issues", "title": "Issues connecting to login node", "text": "

                  If you are confused about the SSH public/private key pair concept, maybe the key/lock analogy in How do SSH keys work? can help.

                  If you have errors that look like:

                  vsc40000@login.hpc.ugent.be: Permission denied\n

                  or you are experiencing problems with connecting, here is a list of things to do that should help:

                  1. Keep in mind that it can take up to an hour for your VSC account to become active after it has been approved; until then, logging in to your VSC account will not work.

                  2. Make sure you are connecting from an IP address that is allowed to access the VSC login nodes, see section Connection restrictions for more information.

                  3. Please double/triple check your VSC login ID. It should look something like vsc40000: the letters vsc, followed by exactly 5 digits. Make sure it's the same one as the one on https://account.vscentrum.be/.

                  4. You previously connected to the HPC from another machine, but are now using a different machine? Please follow the procedure for adding additional keys in section Adding multiple SSH public keys. You may need to wait for 15-20 minutes until the SSH public key(s) you added become active.

                  5. When using an SSH key in a non-default location, make sure you supply the path of the private key (and not the path of the public key) to ssh. id_rsa.pub is the usual filename of the public key, id_rsa is the usual filename of the private key. (See also section\u00a0Connect)

                  6. If you have multiple private keys on your machine, please make sure you are using the one that corresponds to (one of) the public key(s) you added on https://account.vscentrum.be/.

                  7. Please do not use someone else's private keys. You must never share your private key, they're called private for a good reason.

                  If you've tried all applicable items above and it doesn't solve your problem, please contact hpc@ugent.be and include the following information:

                  Please add -vvv as a flag to ssh like:

                  ssh -vvv vsc40000@login.hpc.ugent.be\n

                  and include the output of that command in the message.

                  "}, {"location": "troubleshooting/#security-warning-about-invalid-host-key", "title": "Security warning about invalid host key", "text": "

                  If you get a warning that looks like the one below, it is possible that someone is trying to intercept the connection between you and the system you are connecting to. Another possibility is that the host key of the system you are connecting to has changed.

                  You will need to verify that the fingerprint shown in the dialog matches one of the following fingerprints:

                  - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

                  Do not click \"Yes\" until you have verified the fingerprint. Do not press \"No\" in any case.

                  If the fingerprint matches, click \"Yes\".

                  If it doesn't (like in the example) or you are in doubt, take a screenshot, press \"Cancel\" and contact hpc@ugent.be.

                  Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after should be identical.

If you use the X2Go client, you might get one of the following fingerprints:

                  • ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
                  • ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
                  • ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c

If you get a message \"Host key for server changed\", do not click \"No\" until you have verified the fingerprint.

                  If the fingerprint matches, click \"No\", and in the next pop-up screen (\"if you accept the new host key...\"), press \"Yes\".

                  If it doesn't, or you are in doubt, take a screenshot, press \"Yes\" and contact hpc@ugent.be.

                  "}, {"location": "troubleshooting/#doswindows-text-format", "title": "DOS/Windows text format", "text": "

                  If you get errors like:

                  $ qsub fibo.pbs\nqsub: script is written in DOS/Windows text format\n

                  or

                  sbatch: error: Batch script contains DOS line breaks (\\r\\n)\n

It's probably because you transferred the files from a Windows computer. See the section about dos2unix in the Linux tutorial to fix this error.
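
For example, you can convert the offending job script in place with the dos2unix command (reusing the script name from the error message above):

$ dos2unix fibo.pbs\n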

                  "}, {"location": "troubleshooting/#warning-message-when-first-connecting-to-new-host", "title": "Warning message when first connecting to new host", "text": "

                  The first time you make a connection to the login node, a Security Alert will appear and you will be asked to verify the authenticity of the login node.

                  Make sure the fingerprint in the alert matches one of the following:

                  - ssh-rsa 2048 10:2f:31:21:04:75:cb:ed:67:e0:d5:0c:a1:5a:f4:78\n- ssh-rsa 2048 SHA256:W8Wz0/FkkCR2ulN7+w8tNI9M0viRgFr2YlHrhKD2Dd0\n- ssh-ed25519 255 19:28:76:94:52:9d:ff:7d:fb:8b:27:b6:d7:69:42:eb\n- ssh-ed25519 256 SHA256:8AJg3lPN27y6i+um7rFx3xoy42U8ZgqNe4LsEycHILA\n- ssh-ecdsa 256 e6:d2:9c:d8:e7:59:45:03:4a:1f:dc:96:62:29:9c:5f\n- ssh-ecdsa 256 SHA256:C8TVx0w8UjGgCQfCmEUaOPxJGNMqv2PXLyBNODe5eOQ\n

If it does, press Yes; if it doesn't, please contact hpc@ugent.be.

Note: it is possible that the ssh-ed25519 fingerprint starts with ssh-ed25519 255 rather than ssh-ed25519 256 (or vice versa), depending on the PuTTY version you are using. It is safe to ignore this 255 versus 256 difference, but the part after it should be identical.

If you use X2Go, you might get a different fingerprint; in that case, make sure that the fingerprint displayed is one of the following:

• ssh-rsa 2048 53:25:8c:1e:72:8b:ce:87:3e:54:12:44:a7:13:1a:89:e4:15:b6:8e
• ssh-ed25519 255 e3:cc:07:64:78:80:28:ec:b8:a8:8f:49:44:d1:1e:dc:cc:0b:c5:6b
• ssh-ecdsa 256 67:6c:af:23:cc:a1:72:09:f5:45:f1:60:08:e8:98:ca:31:87:58:6c

If it does, type yes. If it doesn't, please contact support: hpc@ugent.be.
                  "}, {"location": "troubleshooting/#memory-limits", "title": "Memory limits", "text": "

                  To avoid jobs allocating too much memory, there are memory limits in place by default. It is possible to specify higher memory limits if your jobs require this.

                  Note

                  Memory is not the same as storage. Memory or RAM is used for temporary, fast access to data when the program is running, while storage is used for long-term data retention. If you are running into problems because you reached your storage quota, see Out of quota issues.

                  "}, {"location": "troubleshooting/#how-will-i-know-if-memory-limits-are-the-cause-of-my-problem", "title": "How will I know if memory limits are the cause of my problem?", "text": "

                  If your program fails with a memory-related issue, there is a good chance it failed because of the memory limits and you should increase the memory limits for your job.

                  Examples of these error messages are: malloc failed, Out of memory, Could not allocate memory or in Java: Could not reserve enough space for object heap. Your program can also run into a Segmentation fault (or segfault) or crash due to bus errors.

                  You can check the amount of virtual memory (in Kb) that is available to you via the ulimit -v command in your job script.
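
For example, you could add a line like the following near the top of your job script to log the current limit (a minimal sketch; the output is a number of kilobytes, or \"unlimited\"):

echo \"Virtual memory limit: $(ulimit -v)\"\n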

                  "}, {"location": "troubleshooting/#how-do-i-specify-the-amount-of-memory-i-need", "title": "How do I specify the amount of memory I need?", "text": "

See Generic resource requirements to learn how to set memory and other requirements; see Specifying memory requirements to fine-tune the amount of memory you request.
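
As a minimal sketch of what such a request looks like (assuming the PBS-style directives described in those sections; the amount shown is only an example), you could ask for 16 GB of memory in your job script like this:

# request 16 GB of memory for the job (example value; adjust to what your job needs)\n#PBS -l mem=16gb\n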

                  "}, {"location": "troubleshooting/#module-conflicts", "title": "Module conflicts", "text": "

                  Modules that are loaded together must use the same toolchain version or common dependencies. In the following example, we try to load a module that uses the intel-2018a toolchain together with one that uses the intel-2017a toolchain:

                  $ module load Python/2.7.14-intel-2018a\n$ module load  HMMER/3.1b2-intel-2017a\nLmod has detected the following error: A different version of the 'intel' module is already loaded (see output of 'ml'). \nYou should load another 'HMMER' module for that is compatible with the currently loaded version of 'intel'. \nUse 'ml avail HMMER' to get an overview of the available versions.\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be \nWhile processing the following module(s):\n\n    Module fullname          Module Filename\n    ---------------          ---------------\n    HMMER/3.1b2-intel-2017a  /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua\n

                  This resulted in an error because we tried to load two modules with different versions of the intel toolchain.

                  To fix this, check if there are other versions of the modules you want to load that have the same version of common dependencies. You can list all versions of a module with module avail: for HMMER, this command is module avail HMMER.
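
For instance, to look for a HMMER version that matches the intel-2018a toolchain of the already loaded Python module (a sketch; the exact versions that are available may differ):

$ module avail HMMER\n$ module load HMMER/3.1b2-intel-2018a  # hypothetical version; pick one built with intel-2018a\n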

                  As a rule of thumb, toolchains in the same row are compatible with each other:

GCCcore-13.2.0  GCC-13.2.0  gfbf-2023b/gompi-2023b  foss-2023b
GCCcore-13.2.0  intel-compilers-2023.2.1  iimkl-2023b/iimpi-2023b  intel-2023b
GCCcore-12.3.0  GCC-12.3.0  gfbf-2023a/gompi-2023a  foss-2023a
GCCcore-12.3.0  intel-compilers-2023.1.0  iimkl-2023a/iimpi-2023a  intel-2023a
GCCcore-12.2.0  GCC-12.2.0  gfbf-2022b/gompi-2022b  foss-2022b
GCCcore-12.2.0  intel-compilers-2022.2.1  iimkl-2022b/iimpi-2022b  intel-2022b
GCCcore-11.3.0  GCC-11.3.0  gfbf-2022a/gompi-2022a  foss-2022a
GCCcore-11.3.0  intel-compilers-2022.1.0  iimkl-2022a/iimpi-2022a  intel-2022a
GCCcore-11.2.0  GCC-11.2.0  gfbf-2021b/gompi-2021b  foss-2021b
GCCcore-11.2.0  intel-compilers-2021.4.0  iimkl-2021b/iimpi-2021b  intel-2021b
GCCcore-10.3.0  GCC-10.3.0  gfbf-2021a/gompi-2021a  foss-2021a
GCCcore-10.3.0  intel-compilers-2021.2.0  iimkl-2021a/iimpi-2021a  intel-2021a
GCCcore-10.2.0  GCC-10.2.0  gfbf-2020b/gompi-2020b  foss-2020b
GCCcore-10.2.0  iccifort-2020.4.304  iimkl-2020b/iimpi-2020b  intel-2020b

                  Example

                  we could load the following modules together:

                  ml XGBoost/1.7.2-foss-2022a\nml scikit-learn/1.1.2-foss-2022a\nml cURL/7.83.0-GCCcore-11.3.0\nml JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0\n

                  Another common error is:

                  $ module load cluster/donphan\nLmod has detected the following error: A different version of the 'cluster' module is already loaded (see output of 'ml').\n\nIf you don't understand the warning or error, contact the helpdesk at hpc@ugent.be\n

                  This is because there can only be one cluster module active at a time. The correct command is module swap cluster/donphan. See also Specifying the cluster on which to run.

                  "}, {"location": "troubleshooting/#illegal-instruction-error", "title": "Illegal instruction error", "text": ""}, {"location": "troubleshooting/#running-software-that-is-incompatible-with-host", "title": "Running software that is incompatible with host", "text": "

                  When running software provided through modules (see Modules), you may run into errors like:

                  $ module swap cluster/donphan\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n\n$ module load Python/3.10.8-GCCcore-12.2.0\n$ python\nPlease verify that both the operating system and the processor support\nIntel(R) MOVBE, F16C, FMA, BMI, LZCNT and AVX2 instructions.\n

                  or errors like:

                  $ python\nIllegal instruction\n

                  When we swap to a different cluster, the available modules change so they work for that cluster. That means that if the cluster and the login nodes have a different CPU architecture, software loaded using modules might not work.

If you want to test software on the login nodes, make sure the cluster/doduo module is loaded (with module swap cluster/doduo, see Specifying the cluster on which to run), since the login nodes and the doduo cluster have the same CPU architecture.

If modules are already loaded, and then we swap to a different cluster, all our modules will get reloaded. This means that all current modules will be unloaded and then loaded again, so they'll work on the newly loaded cluster. Here's an example of what that looks like:

                  $ module load Python/3.10.8-GCCcore-12.2.0\n$ module swap cluster/donphan\n\nDue to MODULEPATH changes, the following have been reloaded:\n  1) GCCcore/12.2.0                   8) binutils/2.39-GCCcore-12.2.0\n  2) GMP/6.2.1-GCCcore-12.2.0         9) bzip2/1.0.8-GCCcore-12.2.0\n  3) OpenSSL/1.1                     10) libffi/3.4.4-GCCcore-12.2.0\n  4) Python/3.10.8-GCCcore-12.2.0    11) libreadline/8.2-GCCcore-12.2.0\n  5) SQLite/3.39.4-GCCcore-12.2.0    12) ncurses/6.3-GCCcore-12.2.0\n  6) Tcl/8.6.12-GCCcore-12.2.0       13) zlib/1.2.12-GCCcore-12.2.0\n  7) XZ/5.2.7-GCCcore-12.2.0\n\nThe following have been reloaded with a version change:\n  1) cluster/doduo => cluster/donphan         3) env/software/doduo => env/software/donphan\n  2) env/slurm/doduo => env/slurm/donphan     4) env/vsc/doduo => env/vsc/donphan\n

                  This might result in the same problems as mentioned above. When swapping to a different cluster, you can run module purge to unload all modules to avoid problems (see Purging all modules).
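
A minimal sketch of that workflow, reusing the modules from the example above:

$ module purge                                # unload all modules\n$ module swap cluster/donphan                 # switch to the target cluster\n$ module load Python/3.10.8-GCCcore-12.2.0    # load what you need for that cluster\n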

                  "}, {"location": "troubleshooting/#multi-job-submissions-on-a-non-default-cluster", "title": "Multi-job submissions on a non-default cluster", "text": "

                  When using a tool that is made available via modules to submit jobs, for example Worker, you may run into the following error when targeting a non-default cluster:

                  $  wsub\n/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction     (core dumped) ${PERL} ${DIR}/../lib/wsub.pl \"$@\"\n

                  When executing the module swap cluster command, you are not only changing your session environment to submit to that specific cluster, but also to use the part of the central software stack that is specific to that cluster. In the case of the Worker example above, the latter implies that you are running the wsub command on top of a Perl installation that is optimized specifically for the CPUs of the workernodes of that cluster, which may not be compatible with the CPUs of the login nodes, triggering the Illegal instruction error.

                  The cluster modules are split up into several env/* \"submodules\" to help deal with this problem. For example, by using module swap env/slurm/donphan instead of module swap cluster/donphan (starting from the default environment, the doduo cluster), you can update your environment to submit jobs to donphan, while still using the software installations that are specific to the doduo cluster (which are compatible with the login nodes since the doduo cluster workernodes have the same CPUs). The same goes for the other clusters as well of course.

                  Tip

                  To submit a Worker job to a specific cluster, like the donphan interactive cluster for instance, use:

                  $ module swap env/slurm/donphan \n
                  instead of
                  $ module swap cluster/donphan \n

                  We recommend using a module swap cluster command after submitting the jobs.

                  This to \"reset\" your environment to a sane state, since only having a different env/slurm module loaded can also lead to some surprises if you're not paying close attention.

                  "}, {"location": "useful_linux_commands/", "title": "Useful Linux Commands", "text": ""}, {"location": "useful_linux_commands/#basic-linux-usage", "title": "Basic Linux Usage", "text": "

                  All the HPC clusters run some variant of the \"Red Hat Enterprise Linux\" operating system. This means that, when you connect to one of them, you get a command line interface, which looks something like this:

                  vsc40000@ln01[203] $\n

                  When you see this, we also say you are inside a \"shell\". The shell will accept your commands, and execute them.

                  Command Description ls Shows you a list of files in the current directory cd Change current working directory rm Remove file or directory echo Prints its parameters to the screen nano Text editor

                  Most commands will accept or even need parameters, which are placed after the command, separated by spaces. A simple example with the \"echo\" command:

                  $ echo This is a test\nThis is a test\n

Note the \"$\" sign in front of the first line: it should not be typed; it is a convention meaning \"the rest of this line should be typed at your shell prompt\". The lines not starting with the \"$\" sign are usually the feedback or output from the command.

More commands will be used in the rest of this text, and will be explained there if necessary. If not, you can usually get more information about a command, say \"ls\", by trying any of the following:

                  $ ls --help \n$ man ls\n$ info ls\n

                  (You can exit the last two \"manuals\" by using the \"q\" key.) For more exhaustive tutorials about Linux usage, please refer to the following sites: http://www.linux.org/lessons/ http://linux.about.com/od/nwb_guide/a/gdenwb06.htm

                  "}, {"location": "useful_linux_commands/#how-to-get-started-with-shell-scripts", "title": "How to get started with shell scripts", "text": "

                  In a shell script, you will put the commands you would normally type at your shell prompt in the same order. This will enable you to execute all those commands at any time by only issuing one command: starting the script.

Scripts are basically non-compiled pieces of code: they are just text files. Since they don't contain machine code, they are executed by what is called a \"parser\" or an \"interpreter\": another program that understands the commands in the script and carries them out. There are many kinds of scripting languages, including Perl and Python.

Another very common scripting language is shell scripting, which is what we will use in the rest of this section.

In the following examples, each line contains one command to execute, although it is also possible to put multiple commands on one line. A very simple example of a script may be:

                  echo \"Hello! This is my hostname:\" \nhostname\n

                  You can type both lines at your shell prompt, and the result will be the following:

                  $ echo \"Hello! This is my hostname:\"\nHello! This is my hostname:\n$ hostname\ngligar07.gastly.os\n

Suppose we want to call this script \"foo\". Open a new file named \"foo\" and edit it with your favourite editor:

                  nano foo\n

                  or use the following commands:

                  echo \"echo 'Hello! This is my hostname:'\" > foo\necho hostname >> foo\n

The easiest way to run a script is to start the interpreter and pass the script as a parameter. In the case of our script, the interpreter is either \"sh\" or \"bash\" (which are the same on the cluster). So start the script:

                  $ bash foo\nHello! This is my hostname:\ngligar07.gastly.os\n

                  Congratulations, you just created and started your first shell script!

A more advanced way of executing your shell scripts is by making them executable on their own, so without invoking the interpreter manually. The system cannot automatically detect which interpreter you want, so you need to tell it somehow. The easiest way is by using the so-called \"shebang\" notation, created explicitly for this purpose: you put the following line on top of your shell script: \"#!/path/to/your/interpreter\".

                  You can find this path with the \"which\" command. In our case, since we use bash as an interpreter, we get the following path:

                  $ which bash\n/bin/bash\n

                  We edit our script and change it with this information:

                  #!/bin/bash\necho \"Hello! This is my hostname:\"\nhostname\n

                  Note that the \"shebang\" must be the first line of your script! Now the operating system knows which program should be started to run the script.

                  Finally, we tell the operating system that this script is now executable. For this we change its file attributes:

                  chmod +x foo\n

                  Now you can start your script by simply executing it:

                  $ ./foo\nHello! This is my hostname:\ngligar07.gastly.os\n

                  The same technique can be used for all other scripting languages, like Perl and Python.

                  Most scripting languages understand that lines beginning with \"#\" are comments, and should be ignored. If the language you want to use does not ignore these lines, you may get strange results ...

                  "}, {"location": "useful_linux_commands/#linux-quick-reference-guide", "title": "Linux Quick reference Guide", "text": ""}, {"location": "useful_linux_commands/#archive-commands", "title": "Archive Commands", "text": "Command Description tar An archiving program designed to store and extract files from an archive known as a tar file. tar -cvf foo.tar foo/ Compress the contents of foo folder to foo.tar tar -xvf foo.tar Extract foo.tar tar -xvzf foo.tar.gz Extract gzipped foo.tar.gz"}, {"location": "useful_linux_commands/#basic-commands", "title": "Basic Commands", "text": "Command Description ls Shows you a list of files in the current directory cd Change the current directory rm Remove file or directory mv Move file or directory echo Display a line or text pwd Print working directory mkdir Create directories rmdir Remove directories"}, {"location": "useful_linux_commands/#editor", "title": "Editor", "text": "Command Description emacs nano Nano's ANOther editor, an enhanced free Pico clone vi A programmer's text editor"}, {"location": "useful_linux_commands/#file-commands", "title": "File Commands", "text": "Command Description cat Read one or more files and print them to standard output cmp Compare two files byte by byte cp Copy files from a source to the same or different target(s) du Estimate disk usage of each file and recursively for directories find Search for files in directory hierarchy grep Print lines matching a pattern ls List directory contents mv Move file to different targets rm Remove files sort Sort lines of text files wc Print the number of new lines, words, and bytes in files"}, {"location": "useful_linux_commands/#help-commands", "title": "Help Commands", "text": "Command Description man Displays the manual page of a command with its name, synopsis, description, author, copyright, etc."}, {"location": "useful_linux_commands/#network-commands", "title": "Network Commands", "text": "Command Description hostname Show or set the system's host name ifconfig Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. ping Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not."}, {"location": "useful_linux_commands/#other-commands", "title": "Other Commands", "text": "Command Description logname Print user's login name quota Display disk usage and limits which Returns the pathnames of the files that would be executed in the current environment whoami Displays the login name of the current effective user"}, {"location": "useful_linux_commands/#process-commands", "title": "Process Commands", "text": "Command Description & In order to execute a command in the background, place an ampersand (&) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. 
at Executes commands at a specified time bg Places a suspended job in the background crontab A file which contains the schedule of entries to run at specified times fg A process running in the background will be processed in the foreground jobs Lists the jobs being run in the background kill Cancels a job running in the background; it takes either the user job number or the system process number as an argument ps Reports a snapshot of the current processes top Displays Linux tasks"}, {"location": "useful_linux_commands/#user-account-commands", "title": "User Account Commands", "text": "Command Description chmod Modify properties for users"}, {"location": "web_portal/", "title": "Using the HPC-UGent web portal", "text": "

The HPC-UGent web portal provides a \"one stop shop\" for the HPC-UGent infrastructure. It is based on Open OnDemand (or OoD for short).

Via this web portal you can upload and download files, create, edit, submit, and monitor jobs, run GUI applications, and connect via SSH, all via a standard web browser like Firefox, Chrome or Safari. You do not need to install or configure any client software, and no SSH key is required to connect to your VSC account via this web portal. Please note that we do recommend using our interactive and debug cluster (see chapter interactive and debug cluster) with OoD.

                  To connect to the HPC-UGent infrastructure via the web portal, visit https://login.hpc.ugent.be

Note that a \"Submitting...\" message may appear for a couple of seconds, which is perfectly normal.

                  Through this web portal, you can:

                  • browse through the files & directories in your VSC account, and inspect, manage or change them;

                  • consult active jobs (across all HPC-UGent Tier-2 clusters);

                  • submit new jobs to the HPC-UGent Tier-2 clusters, either from existing job scripts or from job templates;

                  • start an interactive graphical user interface (a desktop environment), either on the login nodes or on a cluster workernode;

                  • open a terminal session directly in your web browser;

                  More detailed information is available below, as well as in the Open OnDemand documentation. A walkthrough video is available on YouTube here.

                  "}, {"location": "web_portal/#pilot-access", "title": "Pilot access", "text": ""}, {"location": "web_portal/#known-issues-limitations", "title": "Known issues & limitations", "text": ""}, {"location": "web_portal/#limited-resources", "title": "Limited resources", "text": "

                  All web portal sessions are currently served through a single separate login node, so the available resources are relatively limited. We will monitor the resources used by the active web portal sessions throughout the pilot phase to evaluate whether more resources are required.

                  "}, {"location": "web_portal/#login", "title": "Login", "text": "

                  When visiting the HPC-UGent web portal you will be automatically logged in via the VSC accountpage (see also Section\u00a0Applying for the account).

                  "}, {"location": "web_portal/#first-login", "title": "First login", "text": "

                  The first time you visit https://login.hpc.ugent.be permission will be requested to let the web portal access some of your personal information (VSC login ID, account status, login shell and institute name), as shown in this screenshot below:

                  Please click \"Authorize\" here.

This request will only be made once; you should not see it again afterwards.

                  "}, {"location": "web_portal/#start-page", "title": "Start page", "text": "

                  Once logged in, you should see this start page:

                  This page includes a menu bar at the top, with buttons on the left providing access to the different features supported by the web portal, as well as a Help menu, your VSC account name, and a Log Out button on the top right, and the familiar HPC-UGent welcome message with a high-level overview of the HPC-UGent Tier-2 clusters.

                  If your browser window is too narrow, the menu is available at the top right through the \"hamburger\" icon:

                  "}, {"location": "web_portal/#features", "title": "Features", "text": "

                  We briefly cover the different features provided by the web portal, going from left to right in the menu bar at the top.

                  "}, {"location": "web_portal/#file-browser", "title": "File browser", "text": "

                  Via the Files drop-down menu at the top left, you can browse through the files and directories in your VSC account using an intuitive interface that is similar to a local file browser, and manage, inspect or change them.

                  The drop-down menu provides short-cuts to the different $VSC_* directories and filesystems you have access to. Selecting one of the directories will open a new browser tab with the File Explorer:

                  Here you can:

                  • Click a directory in the tree view on the left to open it;

                  • Use the buttons on the top to:

                    • go to a specific subdirectory by typing in the path (via Go To...);

                    • open the current directory in a terminal (shell) session (via Open in Terminal);

                    • create a new file (via New File) or subdirectory (via New Dir) in the current directory;

                    • upload files or directories from your local workstation into your VSC account, in the correct directory (via Upload);

                    • show hidden files and directories, of which the name starts with a dot (.) (via Show Dotfiles);

                    • show the owner and permissions in the file listing (via Show Owner/Mode);

                  • Double-click a directory in the file listing to open that directory;

                  • Select one or more files and/or directories in the file listing, and:

                    • use the View button to see the contents (use the button at the top right to close the resulting popup window);

                    • use the Edit button to open a simple file editor in a new browser tab which you can use to make changes to the selected file and save them;

                    • use the Rename/Move button to rename or move the selected files and/or directories to a different location in your VSC account;

                    • use the Download button to download the selected files and directories from your VSC account to your local workstation;

                    • use the Copy button to copy the selected files and/or directories, and then use the Paste button to paste them in a different location;

                    • use the (Un)Select All button to select (or unselect) all files and directories in the current directory;

                    • use the Delete button to (permanently!) remove the selected files and directories;

                  For more information, see also https://www.osc.edu/resources/online_portals/ondemand/file_transfer_and_management.

                  "}, {"location": "web_portal/#job-management", "title": "Job management", "text": "

                  Via the Jobs menu item, you can consult your active jobs or submit new jobs using the Job Composer.

                  For more information, see the sections below as well as https://www.osc.edu/resources/online_portals/ondemand/job_management.

                  "}, {"location": "web_portal/#active-jobs", "title": "Active jobs", "text": "

                  To get an overview of all your currently active jobs, use the Active Jobs menu item under Jobs.

                  A new browser tab will be opened that shows all your current queued and/or running jobs:

                  You can control which jobs are shown using the Filter input area, or select a particular cluster from the drop-down menu All Clusters, both at the top right.

                  Jobs that are still queued or running can be deleted using the red button on the right.

                  Completed jobs will also be visible in this interface, but only for a short amount of time after they have stopped running.

For each listed job, you can click on the arrow (\">\") symbol to get a detailed overview of that job, and get quick access to the corresponding output directory (via the Open in File Manager and Open in Terminal buttons at the bottom of the detailed overview).

                  "}, {"location": "web_portal/#job-composer", "title": "Job composer", "text": "

                  To submit new jobs, you can use the Job Composer menu item under Jobs. This will open a new browser tab providing an interface to create new jobs:

                  This extensive interface allows you to create jobs from one of the available templates, or by copying an existing job.

                  You can carefully prepare your job and the corresponding job script via the Job Options button and by editing the job script (see lower right).

                  Don't forget to actually submit your job to the system via the green Submit button!

                  "}, {"location": "web_portal/#job-templates", "title": "Job templates", "text": "

                  In addition, you can inspect provided job templates, copy them or even create your own templates via the Templates button on the top:

                  "}, {"location": "web_portal/#shell-access", "title": "Shell access", "text": "

                  Through the Shell Access button that is available under the Clusters menu item, you can easily open a terminal (shell) session into your VSC account, straight from your browser!

                  Using this interface requires being familiar with a Linux shell environment (see Appendix\u00a0Useful Linux Commands).

                  To exit the shell session, type exit followed by Enter and then close the browser tab.

Note that you cannot access a shell session after you have closed its browser tab, even if you didn't exit the shell session first (unless you use a terminal multiplexer tool like screen or tmux).

                  "}, {"location": "web_portal/#interactive-applications", "title": "Interactive applications", "text": ""}, {"location": "web_portal/#graphical-desktop-environment", "title": "Graphical desktop environment", "text": "

To create a graphical desktop environment, use one of the desktop on ... node buttons under the Interactive Apps menu item. For example:

You can either start a desktop environment on a login node for some lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2 clusters if more resources are required. Keep in mind that for desktop sessions on a workernode the regular queueing times apply, depending on the requested resources.

                  Do keep in mind that desktop environments on a cluster workernode are limited to a maximum of 72 hours, just like regular jobs are.

                  To access the desktop environment, click the My Interactive Sessions menu item at the top, and then use the Launch desktop on ... node button if the desktop session is Running:

                  "}, {"location": "web_portal/#jupyter-notebook", "title": "Jupyter notebook", "text": "

See the dedicated page on Jupyter notebooks.

                  "}, {"location": "web_portal/#restarting-your-web-server-in-case-of-problems", "title": "Restarting your web server in case of problems", "text": "

                  In case of problems with the web portal, it could help to restart the web server running in your VSC account.

                  You can do this via the Restart Web Server button under the Help menu item:

                  Of course, this only affects your own web portal session (not those of others).

                  "}, {"location": "web_portal/#custom-apps", "title": "Custom apps", "text": "
                  • ABAQUS for CAE course
                  "}, {"location": "x2go/", "title": "Graphical applications with X2Go", "text": "

X2Go is graphical desktop software for Linux, similar to VNC but with extra advantages. It does not require running a server on the login node, and it is possible to set up an SSH proxy to connect to a specific login node. It can also be used to access Windows, Linux and macOS desktops. X2Go provides several advantages, such as:

                  1. A graphical remote desktop that works well over low bandwidth connections.

                  2. Copy/paste support from client to server and vice-versa.

                  3. File sharing from client to server.

                  4. Support for sound.

                  5. Printer sharing from client to server.

                  6. The ability to access single applications by specifying the name of the desired executable like a terminal or an internet browser.

                  "}, {"location": "x2go/#install-x2go-client", "title": "Install X2Go client", "text": "

                  X2Go is available for several operating systems. You can download the latest client from https://wiki.x2go.org/doku.php/doc:installation:x2goclient.

X2Go requires a valid private SSH key to connect to the login node; this is described in How do SSH keys work?. That section also describes how to use the X2Go client with an SSH agent. The SSH agent setup is optional, but it is the easiest way to connect to the login nodes using several SSH keys and applications. Please see Using an SSH agent (optional) if you want to know how to set up an SSH agent on your system.

                  "}, {"location": "x2go/#create-a-new-x2go-session", "title": "Create a new X2Go session", "text": "

                  After the X2Go client installation just start the client. When you launch the client for the first time, it will start the new session dialogue automatically.

                  There are two ways to connect to the login node:

• Option A: A direct connection to \"login.hpc.ugent.be\". This is the simpler option; the system will decide which login node to use based on a load-balancing algorithm.

                  • Option B: You can use the node \"login.hpc.ugent.be\" as SSH proxy to connect to a specific login node. Use this option if you want to resume an old X2Go session.

                  "}, {"location": "x2go/#option-a-direct-connection", "title": "Option A: direct connection", "text": "

This is the easier way to set up X2Go: a direct connection to the login node.

1. Include a session name. This will help you identify the session if you have more than one; you can choose any name (in our example \"HPC login node\").

                  2. Set the login hostname (In our case: \"login.hpc.ugent.be\")

3. Set the Login name. In the example this is \"vsc40000\", but you must change it to your own VSC account.

                  4. Set the SSH port (22 by default).

5. Skip this step if you are using an SSH agent (see Install X2Go). If not, add your SSH private key in the \"Use RSA/DSA key..\" field. In this case:

                    1. Click on the \"Use RSA/DSA..\" folder icon. This will open a file browser.

2. You should look for your private SSH key generated in Generating a public/private key pair. This file has been stored in the directory \"~/.ssh/\" (by default \"id_rsa\"). \".ssh\" is a hidden directory; the Finder will not show it by default. The easiest way to access the folder is by pressing cmd+shift+g, which will allow you to enter the name of a directory you would like to open in Finder. Here, type \"~/.ssh\" and press enter. Choose that file and click on open.

                  6. Check \"Try autologin\" option.

7. Set the Session type to XFCE. Only the XFCE desktop is available for the moment. It is also possible to choose single applications instead of a full desktop, such as a Terminal or an Internet browser (you can change this option later directly from the X2Go session tab if you want).

                    1. [optional]: Set a single application like Terminal instead of XFCE desktop.

                  8. [optional]: Change the session icon.

                  9. Click the OK button after these changes.

                  "}, {"location": "x2go/#option-b-use-the-login-node-as-ssh-proxy", "title": "Option B: use the login node as SSH proxy", "text": "

This option is useful if you want to resume a previous session or if you want to explicitly set which login node to use. In this case you should include a few more options. Use the same setup as Option A, but with these changes:

                  1. Include a session name. This will help you to identify the session if you have more than one (in our example \"HPC UGent proxy login\").

2. Set the login hostname. This is the login node that you want to use in the end (in our case: \"gligar07.gastly.os\").

                  3. Set \"Use Proxy server..\" to enable the proxy. Within \"Proxy section\" set also these options:

                    1. Set Type \"SSH\", \"Same login\", \"Same Password\" and \"SSH agent\" options.

                    2. Set Host to \"login.hpc.ugent.be\" within \"Proxy Server\" section as well.

3. Skip this step if you are using an SSH agent (see Install X2Go). Add your private SSH key in the \"RSA/DSA key\" field within \"Proxy Server\", as you did for the server configuration (the \"RSA/DSA key\" field must be set in both sections).

                    4. Click the OK button after these changes.

                  "}, {"location": "x2go/#connect-to-your-x2go-session", "title": "Connect to your X2Go session", "text": "

Just click on any session that you already have to start or resume it. It will take a few seconds to open the session the first time. A session is terminated if you log out from the currently open session or if you click on the \"shutdown\" button in X2Go. If you want to suspend your session to continue working with it later, just click on the \"pause\" icon.

                  X2Go will keep the session open for you (but only if the login node is not rebooted).

                  "}, {"location": "x2go/#resume-a-previous-session", "title": "Resume a previous session", "text": "

If you want to re-connect to the same login node, or resume a previous session, you should know which login node was used in the first place. You can get this information before logging out from your X2Go session. Just open a terminal and execute:

                  hostname\n

This will give you the full hostname (like \"gligar07.gastly.os\", but the hostname in your situation may be slightly different). You should set the same name to resume the session next time. Just add this full hostname in the \"login hostname\" field of your X2Go session (see Option B: use the login node as SSH proxy).

                  "}, {"location": "x2go/#connection-failed-with-x2go", "title": "Connection failed with X2Go", "text": "

If you get the error \"Connection failed session vscXXYYY-123-4567890123_xyzXFCE_dp32 terminated\" (or similar), it is possible that an old X2Go session remained on the login node. First, choose a different session type (for example TERMINAL), then start the X2Go session. A window will pop up, and you should see that a session is running. Select that session and terminate it. Then finish the session, choose the XFCE session type (or whatever you use) again, and you should have your X2Go session. Since we have multiple login nodes, you might have to repeat these steps multiple times.

                  "}, {"location": "xdmod/", "title": "XDMoD portal", "text": "

                  The XDMoD web portal provides information about completed jobs, storage usage and the HPC UGent cloud infrastructure usage.

                  To connect to the XDMoD portal, turn on your VPN connection to UGent and visit https://shieldon.ugent.be/xdmod.

                  Note that you may need to authorise XDMoD to obtain information from your VSC account through the VSC accountpage.

                  After you log in for the first time, you can take the tour, where the web application shows you several features through a series of tips.

Located in the upper right corner of the web page is the help button, which takes you to the XDMoD User Manual. As things may change, we recommend checking the provided documentation for information on using XDMoD: https://shieldon.ugent.be/xdmod/user_manual/index.php.

                  "}, {"location": "examples/Getting_Started/tensorflow_mnist/", "title": "Index", "text": "

                  TensorFlow example copied from https://github.com/EESSI/eessi-demo/tree/main/TensorFlow

                  Loads MNIST datasets and trains a neural network to recognize hand-written digits.

                  Runtime: ~1 min. on 8 cores (Intel Skylake)

                  See https://www.tensorflow.org/tutorials/quickstart/beginner

                  "}, {"location": "linux-tutorial/", "title": "Introduction", "text": "

                  Welcome to the Linux tutorial, a comprehensive guide designed to give you essential skills for smooth interaction within a Linux environment.

These skills are essential for working with the HPC-UGent infrastructure, which runs on Red Hat Enterprise Linux. For more information see the introduction to HPC.

                  The guide aims to make you familiar with the Linux command line environment quickly.

                  The tutorial goes through the following steps:

                  1. Getting Started
                  2. Navigating
                  3. Manipulating files and directories
                  4. Uploading files
                  5. Beyond the basics

                  Do not forget Common pitfalls, as this can save you some troubleshooting.

                  "}, {"location": "linux-tutorial/#useful-topics", "title": "Useful topics", "text": "
                  • More on the HPC infrastructure.
                  • Cron Scripts: run scripts automatically at periodically fixed times, dates, or intervals.
                  "}, {"location": "linux-tutorial/beyond_the_basics/", "title": "Beyond the basics", "text": "

                  Now that you've seen some of the more basic commands, let's take a look at some of the deeper concepts and commands.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#inputoutput", "title": "Input/output", "text": "

                  To redirect output to files, you can use the redirection operators: >, >>, &>, and <.

                  First, it's important to make a distinction between two different output channels:

                  1. stdout: standard output channel, for regular output

                  2. stderr: standard error channel, for errors and warnings

                  "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stdout", "title": "Redirecting stdout", "text": "

                  > writes the (stdout) output of a command to a file and overwrites whatever was in the file before.

                  $ echo hello > somefile\n$ cat somefile\nhello\n$ echo hello2 > somefile\n$ cat somefile\nhello2\n

                  >> appends the (stdout) output of a command to a file; it does not clobber whatever was in the file before:

                  $ echo hello > somefile\n$ cat somefile \nhello\n$ echo hello2 >> somefile\n$ cat somefile\nhello\nhello2\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#reading-from-stdin", "title": "Reading from stdin", "text": "

< feeds the contents of a file to a command's standard input (as if it were piped or typed input). So you would use this to simulate typing into a terminal. < somefile.txt is largely equivalent to cat somefile.txt |.

One common use might be to take the results of a long-running command and store them in a file, so you don't have to repeat the command while you refine your command line. For example, if you have a large directory structure you might save a list of all the files you're interested in and then read in the file list when you are done:

$ find . -name '*.txt' > files\n$ xargs grep banana < files\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#redirecting-stderr", "title": "Redirecting stderr", "text": "

                  To redirect the stderr output (warnings, messages), you can use 2>, just like >

                  $ ls one.txt nosuchfile.txt 2> errors.txt\none.txt\n$ cat errors.txt\nls: nosuchfile.txt: No such file or directory\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#combining-stdout-and-stderr", "title": "Combining stdout and stderr", "text": "

                  To combine both output channels (stdout and stderr) and redirect them to a single file, you can use &>

                  $ ls one.txt nosuchfile.txt &> ls.out\n$ cat ls.out\nls: nosuchfile.txt: No such file or directory\none.txt\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#command-piping", "title": "Command piping", "text": "

                  Part of the power of the command line is to string multiple commands together to create useful results. The core of these is the pipe: |. For example, to see the number of files in a directory, we can pipe the (stdout) output of ls to wc (word count, but can also be used to count the number of lines with the -l flag).

                  $ ls | wc -l\n    42\n

                  A common pattern is to pipe the output of a command to less so you can examine or search the output:

                  $ find . | less\n

                  Or to look through your command history:

                  $ history | less\n

                  You can put multiple pipes in the same line. For example, which cp commands have we run?

                  $ history | grep cp | less\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#shell-expansion", "title": "Shell expansion", "text": "

                  The shell will expand certain things, including:

                  1. * wildcard: for example ls t*txt will list all files starting with 't' and ending in 'txt'

                  2. tab completion: hit the <tab> key to make the shell complete your command line; works for completing file names, command names, etc.

                  3. $... or ${...}: environment variables will be replaced with their value; example: echo \"I am $USER\" or echo \"I am ${USER}\"

                  4. square brackets can be used to list a number of options for a particular characters; example: ls *.[oe][0-9]. This will list all files starting with whatever characters (*), then a dot (.), then either an 'o' or an 'e' ([oe]), then a character from '0' to '9' (so any digit) ([0-9]). So this filename will match: anything.o5, but this one won't: anything.o52.
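
A few quick illustrations of these expansions (the filenames shown are hypothetical):

$ ls t*txt            # example files\ntest.txt  tmp.txt\n$ echo \"I am $USER\"\nI am vsc40000\n$ ls *.[oe][0-9]      # example job output/error files\njob.e5  job.o5\n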

                  "}, {"location": "linux-tutorial/beyond_the_basics/#process-information", "title": "Process information", "text": ""}, {"location": "linux-tutorial/beyond_the_basics/#ps-and-pstree", "title": "ps and pstree", "text": "

                  ps lists processes running. By default, it will only show you the processes running in the local shell. To see all of your processes running on the system, use:

                  $ ps -fu $USER\n

                  To see all the processes:

                  $ ps -elf\n

                  To see all the processes in a forest view, use:

                  $ ps auxf\n

                  The last two will spit out a lot of data, so get in the habit of piping it to less.

                  pstree is another way to dump a tree/forest view. It looks better than ps auxf but it has much less information so its value is limited.

                  pgrep will find all the processes where the name matches the pattern and print the process IDs (PID). This is used in piping the processes together as we will see in the next section.
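
For example, to print the PIDs of all your processes whose name matches \"python\" (the process name is just an example):

$ pgrep -u $USER python\n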

                  "}, {"location": "linux-tutorial/beyond_the_basics/#kill", "title": "kill", "text": "

ps isn't very useful unless you can manipulate the processes. We do this using the kill command. kill will send a signal (SIGTERM by default) to the process to ask it to stop.

                  $ kill 1234\n$ kill $(pgrep misbehaving_process)\n

                  Usually, this ends the process, giving it the opportunity to flush data to files, etc. However, if the process ignored your signal, you can send it a different message (SIGKILL) which the OS will use to unceremoniously terminate the process:

                  $ kill -9 1234\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#top", "title": "top", "text": "

                  top is a tool to see the current status of the system. You've probably used something similar in Task Manager on Windows or Activity Monitor in macOS. top will update every second and has a few interesting commands.

                  To see only your processes, type u and your username after starting top, (you can also do this with top -u $USER ). The default is to sort the display by %CPU. To change the sort order, use < and > like arrow keys.

                  There are a lot of configuration options in top, but if you're interested in seeing a nicer view, you can run htop instead. Be aware that it's not installed everywhere, while top is.

                  To exit top, use q (for 'quit').

                  For more information, see Brendan Gregg's excellent site dedicated to performance analysis.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#ulimit", "title": "ulimit", "text": "

                  ulimit is a utility to get or set user limits on the machine. For example, you may be limited to a certain number of processes. To see all the limits that have been set, use:

                  $ ulimit -a\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#counting-wc", "title": "Counting: wc", "text": "

                  To count the number of lines, words, and characters (or bytes) in a file, use wc (word count):

                  $ wc example.txt\n      90     468     3189   example.txt\n

                  The output indicates that the file named example.txt contains 90 lines, 468 words, and 3189 characters/bytes.

                  To only count the number of lines, use wc -l:

                  $ wc -l example.txt\n      90    example.txt\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#searching-file-contents-grep", "title": "Searching file contents: grep", "text": "

                  grep is an important command. It was originally an abbreviation for \"globally search a regular expression and print\" but it's entered the common computing lexicon and people use 'grep' to mean searching for anything. To use grep, you give a pattern and a list of files.

                  $ grep banana fruit.txt\n$ grep banana fruit_bowl1.txt fruit_bowl2.txt\n$ grep banana fruit*txt\n

                  grep also lets you search for Regular Expressions, but these are not in scope for this introductory text.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#cut", "title": "cut", "text": "

cut is used to pull fields out of files or piped streams. It's useful glue when you mix it with grep, because grep can find the lines where a string occurs and cut can pull out a particular field. For example, to pull the first column (-f 1, the first field) from (an unquoted) CSV file (comma-separated values, so -d ',': delimited by ,), you can use the following:

                  $ cut -f 1 -d ',' mydata.csv\n
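
For example, to show only the first column of the lines in mydata.csv that contain \"banana\" (a small sketch combining the two commands):

$ grep banana mydata.csv | cut -f 1 -d ','\n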

                  "}, {"location": "linux-tutorial/beyond_the_basics/#sed", "title": "sed", "text": "

                  sed is the stream editor. It is used to replace text in a file or piped stream. In this way, it works like grep, but instead of just searching, it can also edit files. This is like \"Search and Replace\" in a text editor. sed has a lot of features, but almost everyone uses the extremely basic version of string replacement:

                  $ sed 's/oldtext/newtext/g' myfile.txt\n

By default, sed will just print the results. If you want to edit the file in place, use -i, but be very careful that the results will be what you want before you go around destroying your data!
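
For example, to apply the same replacement directly to the file (make sure you are certain of the pattern, or have a backup, first):

$ sed -i 's/oldtext/newtext/g' myfile.txt\n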

                  "}, {"location": "linux-tutorial/beyond_the_basics/#awk", "title": "awk", "text": "

                  awk is a basic language that builds on sed to do much more advanced stream editing. Going in depth is far out of scope of this tutorial, but there are two examples that are worth knowing.

                  First, cut is very limited in pulling fields apart based on whitespace. For example, if you have padded fields then cut -f 4 -d ' ' will almost certainly give you a headache as there might be an uncertain number of spaces between each field. awk does better whitespace splitting. So, pulling out the fourth field in a whitespace delimited file is as follows:

                  $ awk '{print $4}' mydata.dat\n

                  You can use -F ':' to change the delimiter (F for field separator).
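
For example, to print the first field of the colon-separated /etc/passwd file:

$ awk -F ':' '{print $1}' /etc/passwd\n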

                  The next example is used to sum numbers from a field:

                  $ awk -F ',' '{sum += $1} END {print sum}' mydata.csv\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#basic-shell-scripting", "title": "Basic Shell Scripting", "text": "

The basic premise of a script is to automate the execution of multiple commands. If you find yourself repeating the same commands over and over again, you should consider writing a script to do it for you. A script is nothing special; it is just a text file like any other. Any commands you put in there will be executed from top to bottom.

                  However, there are some rules you need to abide by.

                  Here is a very detailed guide should you need more information.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#shebang", "title": "Shebang", "text": "

                  The first line of the script is the so-called shebang (# is sometimes called hash and ! is sometimes called bang). This line tells the shell which program should execute the script. In most cases, this will simply be the shell itself. The line itself looks a bit odd, but you can just copy-paste it and not worry about it further. It is, however, very important that this is the very first line of the script! These are all valid shebangs, but you should only use one of them:

                  #!/bin/sh\n
                  #!/bin/bash\n
                  #!/usr/bin/env bash\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#conditionals", "title": "Conditionals", "text": "

                  Sometimes you only want certain commands to be executed when a certain condition is met. For example, only move files to a directory if that directory exists. The syntax:

                  if [ -d directory ] && [ -f file ]\nthen\nmv file directory\nfi\n
                  Or you only want to do something if a file exists:
                  if [ -f filename ]\nthen\necho \"it exists\"\nfi\n
                  Or only if a certain variable is bigger than one:
                  if [ $AMOUNT -gt 1 ]\nthen\necho \"More than one\"\n# more commands\nfi\n
                  Several pitfalls exist with this syntax. You need spaces around the brackets, and the then needs to be at the beginning of a line (or be preceded by a semicolon). It is best to just copy these examples and modify them.

                  In the initial example, we used -d to test if a directory existed. There are several more checks.
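
                  For example (a minimal sketch; data.csv is a hypothetical file), -f tests whether a regular file exists and -s whether it exists and is not empty:

                  if [ -f data.csv ] && [ -s data.csv ]\nthen\necho \"data.csv exists and is not empty\"\nfi\n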

                  Another useful example is to test whether a variable contains a value (so it's not empty):

                  if [ -z \"$PBS_ARRAYID\" ]\nthen\necho \"Not an array job, quitting.\"\nexit 1\nfi\n

                  The -z test checks whether the length of the variable's value is zero, i.e., whether the variable is empty.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#loops", "title": "Loops", "text": "

                  Are you copy-pasting commands? Are you doing the same thing with just different options? You most likely can simplify your script by using a loop.

                  Let's look at a simple example:

                  for i in 1 2 3\ndo\necho $i\ndone\n
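
                  Loops can also iterate over files. A minimal sketch (the *.txt files are hypothetical) that counts the lines of each text file in the current directory:

                  for f in *.txt\ndo\necho \"Processing $f\"\nwc -l \"$f\"\ndone\n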

                  "}, {"location": "linux-tutorial/beyond_the_basics/#subcommands", "title": "Subcommands", "text": "

                  Subcommands are used all the time in shell scripts. They store the output of a command in a variable, which can then be used later, for example in a conditional or a loop.

                  CURRENTDIR=`pwd`  # using backticks\nCURRENTDIR=$(pwd)  # recommended (easier to type)\n

                  In the above example you can see the 2 different methods of using a subcommand. pwd will output the current working directory, and its output will be stored in the CURRENTDIR variable. The recommended way to use subcommands is with the $() syntax.
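
                  The stored output can then be used like any other variable, for example in a message or a conditional (a minimal sketch):

                  CURRENTDIR=$(pwd)\necho \"Running in $CURRENTDIR on host $(hostname)\"\n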

                  "}, {"location": "linux-tutorial/beyond_the_basics/#errors", "title": "Errors", "text": "

                  Sometimes some things go wrong and a command or script you ran causes an error. How do you properly deal with these situations?

                  Firstly a useful thing to know for debugging and testing is that you can run any command like this:

                  command > output.log 2>&1   # one single output file, both output and errors\n

                  If you add > output.log 2>&1 at the end of any command, its regular output and error output will be combined and written to a single file named output.log.

                  If you want regular and error output separated you can use:

                  command > output.log 2> output.err  # errors in a separate file\n

                  This will write regular output to output.log and error output to output.err.

                  You can then look for the errors with less or search for specific text with grep.

                  In scripts, you can use:

                  set -e\n

                  This tells the shell to stop executing any subsequent commands as soon as a single command in the script fails. This is usually what you want, since a failed command most likely causes the rest of the script to fail as well.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#advanced-error-checking", "title": "Advanced error checking", "text": "

                  Sometimes you want to control all the error checking yourself; this is also possible. Every time you run a command, the special variable $? holds the exit code of that command. A value other than zero signifies that something went wrong. An example use case:

                  command_with_possible_error\nexit_code=$?  # capture exit code of last command\nif [ $exit_code -ne 0 ]\nthen\necho \"something went wrong\"\nfi\n

                  "}, {"location": "linux-tutorial/beyond_the_basics/#bashrc-login-script", "title": ".bashrc login script", "text": "

                  If you want certain commands to be executed every time you log in (which includes every time a job starts), you can add them to your $HOME/.bashrc file. This file is a shell script that gets executed every time you log in.

                  Examples include (a small sketch follows this list):

                  • modifying your $PS1 (to tweak your shell prompt)

                  • printing information about the current environment or job (echoing environment variables, etc.)

                  • selecting a specific cluster to run on with module swap cluster/...
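
                  A minimal sketch of such additions (adapt the prompt and the echoed variables to your own needs):

                  export PS1='\\w $ '\necho \"Scratch directory: $VSC_SCRATCH\"\n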

                  Some recommendations:

                  • Avoid using module load statements in your $HOME/.bashrc file

                  • Don't directly edit your .bashrc file: if there's an error in it, you might not be able to log in again. To prevent that, use another file to test your changes, and only copy them over once you have tested them.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#scripting-for-the-cluster", "title": "Scripting for the cluster", "text": "

                  When writing scripts to be submitted to the cluster, there are some tricks you need to keep in mind.

                  "}, {"location": "linux-tutorial/beyond_the_basics/#example-job-script", "title": "Example job script", "text": "
                  #!/bin/bash\n#PBS -l nodes=1:ppn=1\n#PBS -N FreeSurfer_per_subject-time-longitudinal\n#PBS -l walltime=48:00:00\n#PBS -q long\n#PBS -m abe\n#PBS -j oe\nexport DATADIR=$VSC_DATA/example\n# $PBS_JOBID is unique for each job, so this creates a unique directory\nexport WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID\nmkdir -p $WORKDIR\n# copy files to local storage\ncp -a $DATADIR/workfiles $WORKDIR/\n\n# load software we need\nmodule load FreeSurfer\ncd $WORKDIR\n# recon-all ... &> output.log  # this command takes too long, let's show a more practical example\necho $PBS_ARRAYID > $WORKDIR/$PBS_ARRAYID.txt\n# create results directory if necessary\nmkdir -p $DATADIR/results\n# copy work files back\ncp $WORKDIR/$PBS_ARRAYID.txt $DATADIR/results/\n
                  "}, {"location": "linux-tutorial/beyond_the_basics/#pbs-pragmas", "title": "PBS pragmas", "text": "

                  The scheduler needs to know about the requirements of the script, for example: how much memory will it use, and how long will it run. These things can be specified inside a script with what we call PBS pragmas.

                  This pragma (a pragma is a special comment) tells PBS to use 1 node and 1 core:

                  #PBS -l nodes=1:ppn=1 # single-core\n

                  For parallel software, you can request multiple cores (OpenMP) and/or multiple nodes (MPI). Only use this when the software you use is capable of working in parallel. Here is an example:

                  #PBS -l nodes=1:ppn=16  # single-node, multi-core\n#PBS -l nodes=5:ppn=16  # multi-node\n

                  We intend to submit it on the long queue:

                  #PBS -q long\n

                  We request a total running time of 48 hours (2 days).

                  #PBS -l walltime=48:00:00\n

                  We specify a desired name of our job:

                  #PBS -N FreeSurfer_per_subject-time-longitudinal\n
                  This specifies mail options:
                  #PBS -m abe\n

                  1. a means mail is sent when the job is aborted.

                  2. b means mail is sent when the job begins.

                  3. e means mail is sent when the job ends.

                  Joins error output with regular output:

                  #PBS -j oe\n

                  All of these options can also be specified on the command line, in which case they override any pragmas present in the script.
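
                  For example, assuming a PBS/Torque-style qsub command is used for submission (the exact submission procedure is described elsewhere in this manual), this sketch overrides the walltime requested in the script:

                  $ qsub -l walltime=72:00:00 jobscript.sh\n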

                  "}, {"location": "linux-tutorial/beyond_the_basics/#exercises", "title": "Exercises", "text": "
                  1. Create a file that contains this message: \"Hello, I am <user>\", where <user> is replaced by your username. Don't cheat by using an editor, use a command to create the file.

                  2. Use another command to add this line to the same file: \"I am on system <hostname> in directory <current\u00a0directory>\". Words between <> should be replaced with their value (hint: use environment variables).

                  3. How many files and directories are in /tmp?

                  4. What's the name of the 5th file/directory in alphabetical order in /tmp?

                  5. List all files that start with t in /tmp.

                  6. Create a file containing \"My home directory <home> is available using $HOME\". <home> should be replaced with your home directory, but $HOME should remain as-is.

                  7. How many processes are you currently running? How many are you allowed to run? Where are they coming from?

                  "}, {"location": "linux-tutorial/common_pitfalls/", "title": "Common Pitfalls", "text": "

                  This page highlights common pitfalls in Linux usage, offering insights into potential challenges users might face. By understanding these pitfalls, you can avoid unnecessary hurdles.

                  "}, {"location": "linux-tutorial/common_pitfalls/#location", "title": "Location", "text": "

                  If you receive an error message which contains something like the following:

                  No such file or directory\n

                  It probably means that you haven't placed your files in the correct directory, or you have mistyped the file name or path.

                  Try to figure out the correct location using ls, cd and the different $VSC_* variables.

                  "}, {"location": "linux-tutorial/common_pitfalls/#spaces", "title": "Spaces", "text": "

                  Filenames should not contain any spaces! If you have a long filename you should use underscores or dashes (e.g., very_long_filename).

                  $ cat some file\nNo such file or directory 'some'\n

                  Spaces are technically permitted, but they result in surprising behaviour. To cat the file 'some file' as above, you can escape the space with a backslash (\"\\\") or you can put the filename in quotes:

                  $ cat some\\ file\n...\n$ cat \"some file\"\n...\n

                  This is especially error-prone if you are piping the results of find:

                  $ find . -type f | xargs cat\nNo such file or directory name \u2019some\u2019\nNo such file or directory name \u2019file\u2019\n

                  This can be worked around using the -print0 flag:

                  $ find . -type f -print0 | xargs -0 cat\n...\n

                  But, this is tedious, and you can prevent errors by simply colouring within the lines and not using spaces in filenames.

                  "}, {"location": "linux-tutorial/common_pitfalls/#missingmistyped-environment-variables", "title": "Missing/mistyped environment variables", "text": "

                  If you use a command like rm -r with environment variables you need to be careful to make sure that the environment variable exists. If you mistype an environment variable then it will resolve into a blank string. This means the following resolves to rm -r ~/* which will remove every file in your home directory!

                  $ rm -r ~/$PROJETC/*\n

                  "}, {"location": "linux-tutorial/common_pitfalls/#typing-dangerous-commands", "title": "Typing dangerous commands", "text": "

                  A good habit when typing dangerous commands is to precede the line with #, the comment character. This will let you type out the command without fear of accidentally hitting enter and running something unintended.

                  $ #rm -r ~/$POROJETC/*\n
                  Then you can go back to the beginning of the line (Ctrl-A) and remove the first character (Ctrl-D) to run the command. You can also just press enter to put the command in your history so you can come back to it later (e.g., while you go check the spelling of your environment variables).

                  "}, {"location": "linux-tutorial/common_pitfalls/#permissions", "title": "Permissions", "text": "
                  $ ls -l script.sh # File with correct permissions\n-rwxr-xr-x 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n$ ls -l script.sh # File with incorrect permissions\n-rw-r--r-- 1 vsc40000 vsc40000 2983 Jan 30 09:13 script.sh\n

                  Before submitting the script, you'll need to add execute permissions to make sure it can be executed:

                  $ chmod +x script_name.sh\n

                  "}, {"location": "linux-tutorial/common_pitfalls/#help", "title": "Help", "text": "

                  If you stumble upon an error, don't panic! Read the error output, it might contain a clue as to what went wrong. You can copy the error message into Google (selecting a small part of the error without filenames). It can help if you surround your search terms in double quotes (for example \"No such file or directory\"), that way Google will consider the error as one thing, and won't show results just containing these words in random order.

                  If you need help about a certain command, you should consult its so-called \"man page\":

                  $ man command\n

                  This will open the manual of this command, which contains a detailed explanation of all the options the command has. Exit the manual by pressing 'q'.

                  Don't be afraid to contact hpc@ugent.be. They are here to help and will do so for even the smallest of problems!

                  "}, {"location": "linux-tutorial/common_pitfalls/#more-information", "title": "More information", "text": "
                  1. Unix Power Tools - A fantastic book about most of these tools (see also The Second Edition)

                  2. http://linuxcommand.org/: A great place to start with many examples. There is an associated book which gets a lot of good reviews

                  3. The Linux Documentation Project: More guides on various topics relating to the Linux command line

                  4. basic shell usage

                  5. Bash for beginners

                  6. MOOC

                  Please don't hesitate to contact us in case of questions or problems.

                  "}, {"location": "linux-tutorial/getting_started/", "title": "Getting Started", "text": ""}, {"location": "linux-tutorial/getting_started/#logging-in", "title": "Logging in", "text": "

                  To get started with the HPC-UGent infrastructure, you need to obtain a VSC account, see HPC manual. Keep in mind that you must keep your private key to yourself!

                  You can look at your public/private key pair as a lock and a key: you give us the lock (your public key), we put it on the door, and then you can use your key to open the door and get access to the HPC infrastructure. Anyone who has your key can use your VSC account!

                  Details on connecting to the HPC infrastructure are available in the connecting section of the HPC manual.

                  "}, {"location": "linux-tutorial/getting_started/#getting-help", "title": "Getting help", "text": "

                  To get help:

                  1. use the documentation available on the system, through the help, info and man commands (use q to exit).
                    help cd \ninfo ls \nman cp \n
                  2. use Google

                  3. contact hpc@ugent.be in case of problems or questions (even for basic things!)

                  "}, {"location": "linux-tutorial/getting_started/#errors", "title": "Errors", "text": "

                  Sometimes when executing a command, an error occurs. Most likely there will be error output or a message explaining this to you. Read it carefully and try to act on it. Try googling the error first to find a possible solution, but if you can't come up with something within 15 minutes, don't hesitate to mail hpc@ugent.be

                  "}, {"location": "linux-tutorial/getting_started/#basic-terminal-usage", "title": "Basic terminal usage", "text": "

                  The basic interface is the so-called shell prompt, typically ending with $ (for bash shells).

                  You use the shell by executing commands, and hitting <enter>. For example:

                  $ echo hello \nhello \n

                  You can go to the start or end of the command line using Ctrl-A or Ctrl-E.

                  To go through previous commands, use <up> and <down>, rather than retyping them.

                  "}, {"location": "linux-tutorial/getting_started/#command-history", "title": "Command history", "text": "

                  A powerful feature is that you can \"search\" through your command history, either using the history command, or using Ctrl-R:

                  $ history\n    1 echo hello\n\n# hit Ctrl-R, type 'echo' \n(reverse-i-search)`echo': echo hello\n

                  "}, {"location": "linux-tutorial/getting_started/#stopping-commands", "title": "Stopping commands", "text": "

                  If for any reason you want to stop a command from executing, press Ctrl-C. For example, if a command is taking too long, or you want to rerun it with different arguments.

                  "}, {"location": "linux-tutorial/getting_started/#variables", "title": "Variables", "text": "

                  At the prompt we also have access to shell variables, which have both a name and a value.

                  They can be thought of as placeholders for things we need to remember.

                  For example, to print the path to your home directory, we can use the shell variable named HOME:

                  $ echo $HOME \n/user/home/gent/vsc400/vsc40000\n

                  This prints the value of this variable.

                  "}, {"location": "linux-tutorial/getting_started/#defining-variables", "title": "Defining variables", "text": "

                  There are several variables already defined for you when you start your session, such as $HOME which contains the path to your home directory.

                  For a full overview of defined environment variables in your current session, you can use the env command. You can sort this output with sort to make it easier to search in:

                  $ env | sort \n...\nHOME=/user/home/gent/vsc400/vsc40000 \n... \n

                  You can also use the grep command to search for a piece of text. The following command will output all VSC-specific variable names and their values:

                  $ env | sort | grep VSC\n

                  But we can also define our own. This is done with the export command (note: variables are always all-caps as a convention):

                  $ export MYVARIABLE=\"value\"\n

                  It is important you don't include spaces around the = sign. Also note the lack of $ sign in front of the variable name.

                  If we then do

                  $ echo $MYVARIABLE\n

                  this will output value. Note that the quotes are not included, they were only used when defining the variable to escape potential spaces in the value.

                  "}, {"location": "linux-tutorial/getting_started/#changing-your-prompt-using-ps1", "title": "Changing your prompt using $PS1", "text": "

                  You can change what your prompt looks like by redefining the special-purpose variable $PS1.

                  For example: to include the current location in your prompt:

                  $ export PS1='\\w $'\n~ $ cd test \n~/test $ \n

                  Note that ~ is a short representation of your home directory.

                  To make this persistent across sessions, you can define this custom value for $PS1 in your .profile startup script:

                  $ echo 'export PS1=\"\\w $ \" ' >> ~/.profile\n

                  "}, {"location": "linux-tutorial/getting_started/#using-non-defined-variables", "title": "Using non-defined variables", "text": "

                  One common pitfall is the (accidental) use of non-defined variables. Contrary to what you may expect, this does not result in error messages, but the variable is considered to be empty instead.

                  This may lead to surprising results, for example:

                  $ export WORKDIR=/tmp/test \n$ cd $WROKDIR   # note the typo: WROKDIR is not defined, so this is just 'cd'\n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo $HOME\n/user/home/gent/vsc400/vsc40000 \n

                  To understand what's going on here, see the section on cd below.

                  The moral here is: be very careful to not use empty variables unintentionally.

                  Tip for job scripts: use set -e -u to avoid using empty variables accidentally.

                  The -e option will result in the script getting stopped if any command fails.

                  The -u option will result in the script getting stopped if empty variables are used. (see https://ss64.com/bash/set.html for a more detailed explanation and more options)

                  More information can be found at http://www.tldp.org/LDP/abs/html/variables.html.

                  "}, {"location": "linux-tutorial/getting_started/#restoring-your-default-environment", "title": "Restoring your default environment", "text": "

                  If you've made a mess of your environment, you shouldn't waste too much time trying to fix it. Just log out and log in again and you will be given a pristine environment.

                  "}, {"location": "linux-tutorial/getting_started/#basic-system-information", "title": "Basic system information", "text": "

                  Basic information about the system you are logged into can be obtained in a variety of ways.

                  We limit ourselves to determining the hostname:

                  $ hostname \ngligar01.gligar.os\n\n$ echo $HOSTNAME \ngligar01.gligar.os \n

                  And querying some basic information about the Linux kernel:

                  $ uname -a \nLinux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09\n    CET 2015 x86_64 x86_64 x86_64 GNU/Linux \n

                  "}, {"location": "linux-tutorial/getting_started/#exercises", "title": "Exercises", "text": "
                  • Print the full path to your home directory
                  • Determine the name of the environment variable to your personal scratch directory
                  • What's the name of the system you're logged into? Is it the same for everyone?
                  • Figure out how to print the value of a variable without including a newline
                  • How do you get help on using the man command?

                  The next chapter teaches you how to navigate.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/", "title": "More on the HPC infrastructure", "text": ""}, {"location": "linux-tutorial/hpc_infrastructure/#filesystems", "title": "Filesystems", "text": "

                  Multiple different shared filesystems are available on the HPC infrastructure, each with their own purpose. See section Where to store your data on the HPC for a list of available locations.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#vo-storage", "title": "VO storage", "text": "

                  If you are a member of a (non-default) virtual organisation (VO), see section Virtual Organisations, you have access to additional directories (with more quota) on the data and scratch filesystems, which you can share with other members in the VO.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#quota", "title": "Quota", "text": "

                  Space is limited on the cluster's storage. To check your quota, see section Pre-defined quota.

                  To figure out where your quota is being spent, the du (disk usage) command can come in useful:

                  $ du -sh test\n59M test\n

                  Do not (frequently) run du on directories where large amounts of data are stored, since that will:

                  1. take a long time

                  2. result in increased load on the shared storage since (the metadata of) every file in those directories will have to be inspected.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#modules", "title": "Modules", "text": "

                  Software is provided through so-called environment modules.

                  The most commonly used commands are listed below (a short example session follows the list):

                  1. module avail: show all available modules

                  2. module avail <software name>: show available modules for a specific software name

                  3. module list: show list of loaded modules

                  4. module load <module name>: load a particular module
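
                  A short example session combining these commands (a sketch; the Python/3.6.4-intel-2018a module is the one referenced in the exercises below and may not be available on every cluster):

                  $ module avail Python\n$ module load Python/3.6.4-intel-2018a\n$ module list\n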

                  More information is available in section Modules.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#using-the-clusters", "title": "Using the clusters", "text": "

                  To use the clusters beyond the login node(s), which have limited resources, you should create job scripts and submit them to the clusters.

                  Detailed information is available in section submitting your job.

                  "}, {"location": "linux-tutorial/hpc_infrastructure/#exercises", "title": "Exercises", "text": "

                  Create and submit a job script that computes the sum of 1-100 using Python, and prints the numbers to a unique output file in $VSC_SCRATCH.

                  Hint: python -c \"print(sum(range(1, 101)))\"

                  • How many modules are available for Python version 3.6.4?
                  • How many modules get loaded when you load the Python/3.6.4-intel-2018a module?
                  • Which cluster modules are available?

                  • What's the full path to your personal home/data/scratch directories?

                  • Determine how large your personal directories are.
                  • What's the difference between the size reported by du -sh $HOME and by ls -ld $HOME?
                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/", "title": "Manipulating files and directories", "text": "

                  Being able to manage your data is an important part of using the HPC infrastructure. The bread and butter commands for doing this are mentioned here. It might seem annoyingly terse at first, but with practice you will realise that it's very practical to have such common commands short to type.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#file-contents-cat-head-tail-less-more", "title": "File contents: \"cat\", \"head\", \"tail\", \"less\", \"more\"", "text": "

                  To print the contents of an entire file, you can use cat; to only see the first or last N lines, you can use head or tail:

                  $ cat one.txt\n1\n2\n3\n4\n5\n\n$ head -2 one.txt\n1\n2\n\n$ tail -2 one.txt\n4\n5\n

                  To check the contents of long text files, you can use the less or more commands which support scrolling with \"<up>\", \"<down>\", \"<space>\", etc.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#copying-files-cp", "title": "Copying files: \"cp\"", "text": "
                  $ cp source target\n

                  This is the cp command, which copies a file from source to target. To copy a directory, we use the -r option:

                  $ cp -r sourceDirectory target\n

                  A last more complicated example:

                  $ cp -a sourceDirectory target\n

                  Here we used the same cp command, but with the -a option, which tells cp to copy in archive mode: all files are copied recursively, while timestamps and permissions are preserved.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#creating-directories-mkdir", "title": "Creating directories: \"mkdir\"", "text": "
                  $ mkdir directory\n

                  which will create a directory with the given name inside the current directory.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#renamingmoving-files-mv", "title": "Renaming/moving files: \"mv\"", "text": "
                  $ mv source target\n

                  mv will move the source path to the destination path. This works for both directories and files.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-files-rm", "title": "Removing files: \"rm\"", "text": "

                  Note: there are NO backups, there is no 'trash bin'. If you remove files/directories, they are gone.

                  $ rm filename\n
                  rm will remove a file or directory. (rm -rf directory will remove every file inside a given directory). WARNING: files removed will be lost forever, there are no backups, so beware when using this command!

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#removing-a-directory-rmdir", "title": "Removing a directory: \"rmdir\"", "text": "

                  You can remove directories using rm -r directory, however, this is error-prone and can ruin your day if you make a mistake in typing. To prevent this type of error, you can remove the contents of a directory using rm and then finally removing the directory with:

                  $ rmdir directory\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#changing-permissions-chmod", "title": "Changing permissions: \"chmod\"", "text": "

                  Every file, directory, and link has a set of permissions. These permissions consist of permission groups and permission types. The permission groups are:

                  1. User - a particular user (account)

                  2. Group - a particular group of users (may be user-specific group with only one member)

                  3. Other - other users in the system

                  The permission types are:

                  1. Read - For files, this gives permission to read the contents of a file

                  2. Write - For files, this gives permission to write data to the file. For directories, it allows users to add files to or remove files from the directory.

                  3. Execute - For files, this gives permission to execute the file as though it were a program or script. For directories, it allows users to open the directory and look at its contents.

                  Any time you run ls -l you'll see a familiar line of -rwx------ or similar combination of the letters r, w, x and - (dashes). These are the permissions for the file or directory. (See also the previous section on permissions)

                  $ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

                  Here, we see that articleTable.csv is a file (the line begins with -) that has read and write permission for the user vsc40000 (rw-), and read permission for the group mygroup as well as for all other users (r-- and r--).

                  The next entry is Project_GoldenDragon. We see it is a directory because the line begins with a d. It also has read, write, and execute permission for the vsc40000 user (rwx), so that user can look into the directory and add or remove files. Users in the group mygroup can also look into the directory and read the files, but they can't add or remove files (r-x). Finally, other users have no permissions on the directory at all (---), so they cannot even look at its contents.

                  Maybe we have a colleague who wants to be able to add files to the directory. We use chmod to change the modifiers to the directory to let people in the group write to the directory:

                  $ chmod g+w Project_GoldenDragon\n$ ls -l\ntotal 1\n-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv\ndrwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

                  The syntax used here is g+w, which means the group was given write permission. To revoke it again, we use g-w. The other roles are u for user and o for other.

                  You can put multiple changes on the same line: chmod o-rwx,g-rxw,u+rx,u-w somefile will take everyone's permission away except the user's ability to read or execute the file.

                  You can also use the -R flag to affect all the files within a directory, but this is dangerous. It's best to refine your selection using find and then pass the resulting list to chmod, since it's unusual for all files in a directory structure to need the same permissions.
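
                  For example (a minimal sketch reusing the directory from above), this grants group write permission only to the subdirectories, leaving the files untouched:

                  $ find Project_GoldenDragon -type d -exec chmod g+w {} \\;\n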

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#access-control-lists-acls", "title": "Access control lists (ACLs)", "text": "

                  However, this means that all users in mygroup can add or remove files. This could be problematic if you only wanted one person to be allowed to help you administer the files in the project. We need a new group. To do this in the HPC environment, we need to use access control lists (ACLs):

                  $ setfacl -m u:otheruser:w Project_GoldenDragon\n$ ls -l Project_GoldenDragon\ndrwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon\n

                  This will give the user otheruser permission to write to Project_GoldenDragon.

                  Now there is a + at the end of the line. This means there is an ACL attached to the directory. getfacl Project_GoldenDragon will print the ACLs for the directory.

                  Note: most people don't use ACLs, but it's sometimes the right thing and you should be aware it exists.

                  See https://linux.die.net/man/1/setfacl for more information.

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zipping-gzipgunzip-zipunzip", "title": "Zipping: \"gzip\"/\"gunzip\", \"zip\"/\"unzip\"", "text": "

                  Files should usually be stored in a compressed file if they're not being used frequently. This means they will use less space and thus you get more out of your quota. Some types of files (e.g., CSV files with a lot of numbers) compress as much as 9:1. The most commonly used compression format on Linux is gzip. To compress a file using gzip, we use:

                  $ ls -lh myfile\n-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile\n$ gzip myfile\n$ ls -lh myfile.gz\n-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz\n

                  Note: if you gzip a file, the original file will be removed; if you gunzip a file, the compressed file will be removed. To keep both, we send the data to stdout and redirect it to the target file:

                  $ gzip -c myfile > myfile.gz\n$ gunzip -c myfile.gz > myfile\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#zip-and-unzip", "title": "\"zip\" and \"unzip\"", "text": "

                  Windows and macOS seem to favour the zip file format, so it's also important to know how to unpack those. We do this using unzip:

                  $ unzip myfile.zip\n

                  If we would like to make our own zip archive, we use zip:

                  $ zip myfiles.zip myfile1 myfile2 myfile3\n

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#working-with-tarballs-tar", "title": "Working with tarballs: \"tar\"", "text": "

                  Tar stands for \"tape archive\" and is a way to bundle files together in a bigger file.

                  You will normally want to unpack these files more often than you make them. To unpack a .tar file you use:

                  $ tar -xf tarfile.tar\n

                  Often, you will find gzip compressed .tar files on the web. These are called tarballs. You can recognize them by the filename ending in .tar.gz. You can uncompress these using gunzip and then unpacking them using tar. But tar knows how to open them using the -z option:

                  $ tar -zxf tarfile.tar.gz\n$ tar -zxf tarfile.tgz\n
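
                  Conversely, to create a gzip-compressed tarball yourself, combine -c (create) with -z (a small sketch; results is a hypothetical directory):

                  $ tar -czf results.tar.gz results/\n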

                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#order-of-arguments", "title": "Order of arguments", "text": "

                  Note: Archive programs like zip, tar, and jar use arguments in the \"opposite direction\" of copy commands.

                  # cp, ln: <source(s)> <target>\n$ cp source1 source2 source3 target\n$ ln -s source target\n\n# zip, tar: <target> <source(s)>\n$ zip zipfile.zip source1 source2 source3\n$ tar -cf tarfile.tar source1 source2 source3\n

                  If you use tar with the source files first then the first file will be overwritten. You can control the order of arguments of tar if it helps you remember:

                  $ tar -c source1 source2 source3 -f tarfile.tar\n
                  "}, {"location": "linux-tutorial/manipulating_files_and_directories/#exercises", "title": "Exercises", "text": "
                  1. Create a subdirectory in your home directory named test containing a single, empty file named one.txt.

                  2. Copy /etc/hostname into the test directory and then check what's in it. Rename the file to hostname.txt.

                  3. Make a new directory named another and copy the entire test directory to it. another/test/one.txt should then be an empty file.

                  4. Remove the another/test directory with a single command.

                  5. Rename test to test2. Move test2/hostname.txt to your home directory.

                  6. Change the permission of test2 so only you can access it.

                  7. Create an empty job script named job.sh, and make it executable.

                  8. gzip hostname.txt, see how much smaller it becomes, then unzip it again.

                  The next chapter is on uploading files, which is especially important when using the HPC infrastructure.

                  "}, {"location": "linux-tutorial/navigating/", "title": "Navigating", "text": "

                  This chapter serves as a guide to navigating within a Linux shell, giving users essential techniques to traverse directories. A very important skill.

                  "}, {"location": "linux-tutorial/navigating/#current-directory-pwd-and-pwd", "title": "Current directory: \"pwd\" and \"$PWD\"", "text": "

                  To print the current directory, use pwd or $PWD:

                  $ cd $HOME \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n$ echo \"The current directory is: $PWD\" \nThe current directory is: /user/home/gent/vsc400/vsc40000\n

                  "}, {"location": "linux-tutorial/navigating/#listing-files-and-directories-ls", "title": "Listing files and directories: \"ls\"", "text": "

                  A very basic and commonly used command is ls, which can be used to list files and directories.

                  In its basic usage, it just prints the names of files and directories in the current directory. For example:

                  $ ls\nafile.txt some_directory \n

                  When provided an argument, it can be used to list the contents of a directory:

                  $ ls some_directory \none.txt two.txt\n

                  A couple of commonly used options include:

                  • detailed listing using ls -l:

                    $ ls -l\n    total 4224 \n    -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • To print the size information in human-readable form, use the -h flag:

                    $ ls -lh\n    total 4.1M \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt\n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • also listing hidden files using the -a flag:

                    $ ls -lah\n    total 3.9M \n    drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .\n    drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 .. \n    -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt \n    -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory\n
                  • ordering files by the most recent change using -rt:

                    $ ls -lrth\n    total 4.0M \n    drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory \n    -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt\n

                  If you try to use ls on a file that doesn't exist, you will get a clear error message:

                  $ ls nosuchfile \nls: cannot access nosuchfile: No such file or directory\n
                  "}, {"location": "linux-tutorial/navigating/#changing-directory-cd", "title": "Changing directory: \"cd\"", "text": "

                  To change to a different directory, you can use the cd command:

                  $ cd some_directory\n

                  To change back to the previous directory you were in, there's a shortcut: cd -

                  Using cd without an argument results in returning back to your home directory:

                  $ cd \n$ pwd\n/user/home/gent/vsc400/vsc40000 \n

                  "}, {"location": "linux-tutorial/navigating/#inspecting-file-type-file", "title": "Inspecting file type: \"file\"", "text": "

                  The file command can be used to inspect what type of file you're dealing with:

                  $ file afile.txt\nafile.txt: ASCII text\n\n$ file some_directory \nsome_directory: directory\n
                  "}, {"location": "linux-tutorial/navigating/#absolute-vs-relative-file-paths", "title": "Absolute vs relative file paths", "text": "

                  An absolute filepath starts with / (or with a variable whose value starts with /), which is also called the root of the filesystem.

                  Example: absolute path to your home directory: /user/home/gent/vsc400/vsc40000.

                  A relative path starts from the current directory, and points to another location up or down the filesystem hierarchy.

                  Example: some_directory/one.txt points to the file one.txt that is located in the subdirectory named some_directory of the current directory.

                  There are two special relative paths worth mentioning:

                  • . is a shorthand for the current directory
                  • .. is a shorthand for the parent of the current directory

                  You can also use .. when constructing relative paths, for example:

                  $ cd $HOME/some_directory \n$ ls ../afile.txt \n../afile.txt \n
                  "}, {"location": "linux-tutorial/navigating/#permissions", "title": "Permissions", "text": "

                  Each file and directory has particular permissions set on it, which can be queried using ls -l.

                  For example:

                  $ ls -l afile.txt \n-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt \n

                  The -rw-rw-r-- string specifies both the type of file (- for files, d for directories; see the first character), and the permissions for user/group/others:

                  1. each triple of characters indicates whether the read (r), write (w), execute (x) permission bits are set or not
                  2. the 1st part rw- indicates that the owner \"vsc40000\" of the file has read/write permissions (but not execute)
                  3. the 2nd part rw- indicates the members of the group \"agroup\" only have read/write permissions (not execute)
                  4. the 3rd part r-- indicates that other users only have read permissions

                  The default permission settings for new files/directories are determined by the so-called umask setting, and are by default:

                  1. read-write permission on files for user/group (no execute), read-only for others (no write/execute)
                  2. read-write-execute permission for directories on user/group, read/execute-only for others (no write)

                  See also the chmod command later in this manual.

                  "}, {"location": "linux-tutorial/navigating/#finding-filesdirectories-find", "title": "Finding files/directories: \"find\"", "text": "

                  find will crawl a series of directories and list the files matching given criteria.

                  For example, to look for the file named one.txt:

                  $ cd $HOME \n$ find . -name one.txt\n./some_directory/one.txt \n

                  To look for files using incomplete names, you can use a wildcard *; note that you need to quote or escape the * (for example by adding double quotes) to prevent Bash from expanding it to afile.txt before find runs:

                  $ find . -name \"*.txt\"\n./.hidden_file.txt \n./afile.txt \n./some_directory/one.txt\n./some_directory/two.txt \n

                  A more advanced use of the find command is to use the -exec flag to perform actions on the found file(s), rather than just printing their paths (see man find).
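
                  A minimal sketch of -exec: count the lines of every .txt file that find locates ({} is replaced by each found path, and the trailing \\; terminates the command):

                  $ find . -name \"*.txt\" -exec wc -l {} \\;\n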

                  "}, {"location": "linux-tutorial/navigating/#exercises", "title": "Exercises", "text": "
                  • Go to /tmp, then back to your home directory. How many different ways to do this can you come up with?
                  • When was your home directory created or last changed?
                  • Determine the name of the last changed file in /tmp.
                  • See how home directories are organised. Can you access the home directory of other users?

                  The next chapter will teach you how to interact with files and directories.

                  "}, {"location": "linux-tutorial/uploading_files/", "title": "Uploading/downloading/editing files", "text": ""}, {"location": "linux-tutorial/uploading_files/#uploadingdownloading-files", "title": "Uploading/downloading files", "text": "

                  To transfer files to and from the HPC, see the section about transferring files in the HPC manual.

                  "}, {"location": "linux-tutorial/uploading_files/#dos2unix", "title": "dos2unix", "text": "

                  After uploading files from Windows, you may experience some problems due to the difference in line endings between Windows (carriage return + line feed) and Linux (line feed only), see also https://kuantingchen04.github.io/line-endings/.

                  For example, you may see an error when submitting a job script that was edited on Windows:

                  sbatch: error: Batch script contains DOS line breaks (\\r\\n)\nsbatch: error: instead of expected UNIX line breaks (\\n).\n

                  To fix this problem, you should run the dos2unix command on the file:

                  $ dos2unix filename\n
                  "}, {"location": "linux-tutorial/uploading_files/#symlinks-for-datascratch", "title": "Symlinks for data/scratch", "text": "

                  As we end up in the home directory when connecting, it would be convenient if we could easily access our data and scratch storage from there. To facilitate this, we will create symbolic links (they're like \"shortcuts\" on your desktop) in our home directory, pointing to the respective storage locations:

                  $ cd $HOME\n$ ln -s $VSC_SCRATCH scratch\n$ ln -s $VSC_DATA data\n$ ls -l scratch data\nlrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->\n    /user/data/gent/vsc400/vsc40000\nlrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->\n    /user/scratch/gent/vsc400/vsc40000\n
                  "}, {"location": "linux-tutorial/uploading_files/#editing-with-nano", "title": "Editing with nano", "text": "

                  Nano is the simplest editor available on Linux. To open Nano, just type nano. To edit a file, use nano the_file_to_edit.txt. You will be presented with the contents of the file and a menu at the bottom with commands like ^O Write Out. The ^ stands for the Control key, so ^O means Ctrl-O. The main commands are:

                  1. Open (\"Read\"): ^R

                  2. Save (\"Write Out\"): ^O

                  3. Exit: ^X

                  More advanced editors (beyond the scope of this page) are vim and emacs. A simple tutorial on how to get started with vim can be found at https://www.openvim.com/.

                  "}, {"location": "linux-tutorial/uploading_files/#copying-faster-with-rsync", "title": "Copying faster with rsync", "text": "

                  rsync is a fast and versatile copying tool. It can be much faster than scp when copying large datasets. It's famous for its \"delta-transfer algorithm\", which reduces the amount of data sent over the network by only sending the differences between files.

                  You will need to run rsync from a computer where it is installed. Installing rsync is the easiest on Linux: it comes pre-installed with a lot of distributions.

                  For example, to copy a folder with lots of CSV files:

                  $ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/\n

                  will copy the folder testfolder and its contents to $VSC_DATA on the HPC, assuming the data symlink is present in your home directory (see the symlinks section).

                  The -r flag means \"recursively\", the -z flag means that compression is enabled (this is especially handy when dealing with CSV files because they compress well) and the -v enables more verbosity (more details about what's going on).

                  To copy large files using rsync, you can use the -P flag: it enables both showing of progress and resuming partially downloaded files.
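
                  For example, reusing the earlier command with -P added (a sketch):

                  $ rsync -rzvP testfolder vsc40000@login.hpc.ugent.be:data/\n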

                  To copy files to your local computer, you can also use rsync:

                  $ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder\n
                  This will copy the folder bioset and its contents on $VSC_DATA to a local folder named local_folder.

                  See man rsync or https://linux.die.net/man/1/rsync for more information about rsync.

                  "}, {"location": "linux-tutorial/uploading_files/#exercises", "title": "Exercises", "text": "
                  1. Download the file /etc/hostname to your local computer.

                  2. Upload a file to a subdirectory of your personal $VSC_DATA space.

                  3. Create a file named hello.txt and edit it using nano.

                  Now you have a basic understanding; see the next chapter for some more in-depth concepts.

                  "}, {"location": "2023/donphan-gallade/", "title": "New Tier-2 clusters: donphan and gallade", "text": "

                  In April 2023, two new clusters were added to the HPC-UGent Tier-2 infrastructure: donphan and gallade.

                  This page provides some important information regarding these clusters, and how they differ from the clusters they are replacing (slaking and kirlia, respectively).

                  If you have any questions on using donphan or gallade, you can contact the HPC-UGent team.

                  For software installation requests, please use the request form.

                  "}, {"location": "2023/donphan-gallade/#donphan-debuginteractive-cluster", "title": "donphan: debug/interactive cluster", "text": "

                  donphan is the new debug/interactive cluster.

                  It replaces slaking, which will be retired on Monday 22 May 2023.

                  It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the HPC-UGent web portal, etc.

                  This cluster consists of 12 workernodes, each with:

                  • 2x 18-core Intel Xeon Gold 6240 (Cascade Lake @ 2.6 GHz) processor;
                  • one shared NVIDIA Ampere A2 GPU (16GB GPU memory)
                  • ~738 GiB of RAM memory;
                  • 1.6TB NVME local disk;
                  • HDR-100 InfiniBand interconnect;
                  • RHEL8 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/donphan\n

                  You can also start (interactive) sessions on donphan using the HPC-UGent web portal.

                  "}, {"location": "2023/donphan-gallade/#differences-compared-to-slaking", "title": "Differences compared to slaking", "text": ""}, {"location": "2023/donphan-gallade/#cpus", "title": "CPUs", "text": "

                  The most important difference between donphan and slaking workernodes is in the CPUs: while slaking workernodes featured Intel Haswell CPUs, which support SSE*, AVX, and AVX2 vector instructions, donphan features Intel Cascade Lake CPUs, which also support AVX-512 instructions, on top of SSE*, AVX, and AVX2.

                  Although software that was built on a slaking workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) should still run on a donphan workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions.

                  "}, {"location": "2023/donphan-gallade/#cluster-size", "title": "Cluster size", "text": "

                  The donphan cluster is significantly bigger than slaking, both in terms of number of workernodes and number of cores per workernode, and hence the potential performance impact of oversubscribed cores (see below) is less likely to occur in practice.

                  "}, {"location": "2023/donphan-gallade/#user-limits-and-oversubscription-on-donphan", "title": "User limits and oversubscription on donphan", "text": "

                  By imposing strict user limits and using oversubscription on this cluster, we ensure that anyone can get a job running without having to wait in the queue, albeit with limited resources.

                  The user limits for donphan include:

                  • max. 5 jobs in queue;
                  • max. 3 jobs running;
                  • max. of 8 cores in total for running jobs;
                  • max. 27GB of memory in total for running jobs;

                  The job scheduler is configured to allow oversubscription of the available cores, which means that jobs will continue to start even if all cores are already occupied by running jobs. While this prevents waiting time in the queue, it does imply that performance will degrade when all cores are occupied and additional jobs continue to start running.

                  "}, {"location": "2023/donphan-gallade/#shared-gpu-on-donphan-workernodes", "title": "Shared GPU on donphan workernodes", "text": "

                  Each donphan workernode includes a single NVIDIA A2 GPU that can be used for light compute workloads, and to accelerate certain graphical tasks.

                  This GPU is shared across all jobs running on the workernode, and does not need to be requested explicitly (it is always available, similar to the local disk of the workernode).

                  Warning

                  Due to the shared nature of this GPU, you should assume that any data that is loaded in the GPU memory could potentially be accessed by other users, even after your processes have completed.

                  There are no strong security guarantees regarding data protection when using this shared GPU!

                  "}, {"location": "2023/donphan-gallade/#gallade-large-memory-cluster", "title": "gallade: large-memory cluster", "text": "

                  gallade is the new large-memory cluster.

                  It replaces kirlia, which will be retired on Monday 22 May 2023.

                  This cluster consists of 12 workernodes, each with:

                  • 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) processor;
                  • ~940 GiB of RAM memory;
                  • 1.5TB NVME local disk;
                  • HDR-100 InfiniBand interconnect;
                  • RHEL8 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/gallade\n

                  You can also start (interactive) sessions on gallade using the HPC-UGent web portal.

                  "}, {"location": "2023/donphan-gallade/#differences-compared-to-kirlia", "title": "Differences compared to kirlia", "text": ""}, {"location": "2023/donphan-gallade/#cpus_1", "title": "CPUs", "text": "

                   The most important difference between gallade and kirlia workernodes is in the CPUs: while kirlia workernodes featured Intel Cascade Lake CPUs, which support AVX-512 vector instructions (next to SSE*, AVX, and AVX2), gallade features AMD Milan-X CPUs, which implement the Zen3 microarchitecture and hence do not support AVX-512 instructions (but do support SSE*, AVX, and AVX2).

                  As a result, software that was built on a kirlia workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) may not work anymore on a gallade workernode, and will produce Illegal instruction errors.

                   Therefore, you may need to recompile software in order to use it on gallade. Even if software built on kirlia does still run on gallade, it is strongly recommended to recompile it anyway, since there may be significant performance benefits.
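
                   A minimal sketch of such a recompilation (assuming an interactive job as described in the chapter on running interactive jobs; myprog.c is a placeholder for your own source code): start a session on a gallade workernode, so that -march=native targets the Zen3 architecture, and rebuild there:

                   module swap cluster/gallade\nqsub -I\ngcc -O2 -march=native -o myprog myprog.c\n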

                  "}, {"location": "2023/donphan-gallade/#memory-per-core", "title": "Memory per core", "text": "

                   Although gallade workernodes have significantly more RAM memory (~940 GiB) than kirlia workernodes had (~738 GiB), the average amount of memory per core is significantly lower on gallade than it was on kirlia, because a gallade workernode has 128 cores (so ~7.3 GiB per core on average), while a kirlia workernode had only 36 cores (so ~20.5 GiB per core on average).

                   It is important to take this aspect into account when submitting jobs to gallade, especially when requesting all cores via ppn=all. You may need to explicitly request more memory (see also here).
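
                   For example (a minimal sketch; the memory resource name and the values are illustrative only, see the chapter on specifying job requirements for the exact syntax): a job that needs roughly 20 GiB per core on 16 cores should request that memory explicitly, rather than relying on the ~7.3 GiB per-core average:

                   #PBS -l nodes=1:ppn=16\n#PBS -l mem=320gb\n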

                  "}, {"location": "2023/shinx/", "title": "New Tier-2 cluster: shinx", "text": "

                  In October 2023, a new pilot cluster was added to the HPC-UGent Tier-2 infrastructure: shinx.

                  This page provides some important information regarding this cluster, and how it differs from the clusters it is replacing (swalot and victini).

                  If you have any questions on using shinx, you can contact the HPC-UGent team.

                  For software installation requests, please use the request form.

                  "}, {"location": "2023/shinx/#shinx-generic-cpu-cluster", "title": "shinx: generic CPU cluster", "text": "

                  shinx is a new CPU-only cluster.

                   It replaces swalot, which was retired on Wednesday 01 November 2023, and victini, which was retired on Monday 05 February 2024.

                  It is primarily for regular CPU compute use.

                  This cluster consists of 48 workernodes, each with:

                  • 2x 96-core AMD EPYC 9654 (Genoa @ 2.4 GHz) processor;
                  • ~360 GiB of RAM memory;
                  • 400GB local disk;
                  • NDR-200 InfiniBand interconnect;
                  • RHEL9 as operating system;

                  To start using this cluster from a terminal session, first run:

                  module swap cluster/shinx\n

                  You can also start (interactive) sessions on shinx using the HPC-UGent web portal.

                  "}, {"location": "2023/shinx/#differences-compared-to-swalot-and-victini", "title": "Differences compared to swalot and victini.", "text": ""}, {"location": "2023/shinx/#cpus", "title": "CPUs", "text": "

                  The most important difference between shinx and swalot/victini workernodes is in the CPUs: while swalot and victini workernodes featured Intel CPUs, shinx workernodes have AMD Genoa CPUs.

                  Although software that was built on a swalot or victini workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing on swalot).

                  "}, {"location": "2023/shinx/#cluster-size", "title": "Cluster size", "text": "

                  The shinx cluster is significantly bigger than swalot and victini in number of cores, and number of cores per workernode, but not in number of workernodes. In particular, requesting all cores via ppn=all might be something to reconsider.

                   The amount of available memory per core is 1.9 GiB, which is lower than on the swalot nodes, which had 6.2 GiB per core, and the victini nodes, which had 2.5 GiB per core.
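
                   As a rough worked example: a shinx workernode has ~360 GiB of RAM for 192 cores (2x 96), i.e. ~1.9 GiB per core, so a job whose processes each need, say, 4 GiB should either use fewer cores per node or explicitly request the additional memory, rather than relying on the per-core average.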

                  "}, {"location": "2023/shinx/#comparison-with-doduo", "title": "Comparison with doduo", "text": "

                  As doduo is the current largest CPU cluster of the UGent Tier-2 infrastructure, and it is also based on AMD EPYC CPUs, we would like to point out that, roughly speaking, one shinx node is equal to 2 doduo nodes.

                  Although software that was built on a doduo workernode with compiler options that enable architecture-specific optimizations (like GCC's -march=native, or Intel compiler's -xHost) might still run on a shinx workernode, it is recommended to recompile the software to benefit from the support for AVX-512 vector instructions (which is missing from doduo).

                  "}, {"location": "2023/shinx/#other-remarks", "title": "Other remarks", "text": "
                   • Possible issues with thread pinning: we have seen, especially on the Tier-1 dodrio cluster, that in certain cases thread pinning is invoked where it is not expected. A typical symptom is that all started processes end up pinned to a single core. Always report this issue when it occurs, so we can keep track of the problem. You can try to mitigate it yourself by setting export OMP_PROC_BIND=false, but do not set this workaround globally; only apply it to the specific tools that are affected (see the sketch below).
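
                   A minimal sketch of applying the workaround to a single affected tool only (my_affected_tool is a hypothetical placeholder for the program that shows the pinning symptom):

                   OMP_PROC_BIND=false my_affected_tool\n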
                  "}, {"location": "2023/shinx/#shinx-pilot-phase-23102023-15072024", "title": "Shinx pilot phase (23/10/2023-15/07/2024)", "text": "

                   As usual with any pilot phase, you need to be a member of the gpilot group, and to start using this cluster, run:

                  module swap cluster/.shinx\n

                   Because the delivery time of the InfiniBand network is very long, we only expect to have all material by the end of February 2024. However, all the workernodes will already be delivered in the week of 20 October 2023.

                  As such, we will have an extended pilot phase in 3 stages:

                  "}, {"location": "2023/shinx/#stage-0-23102023-17112023", "title": "Stage 0: 23/10/2023-17/11/2023", "text": "
                  • Minimal cluster to test software and nodes

                    • Only 2 or 3 nodes available
                    • FDR or EDR infiniband network
                    • EL8 OS
                  • Retirement of swalot cluster (as of 01 November 2023)

                  • Racking of stage 1 nodes
                  "}, {"location": "2023/shinx/#stage-1-01122023-01032024", "title": "Stage 1: 01/12/2023-01/03/2024", "text": "
                  • 2/3 cluster size

                    • 32 nodes (with max job size of 16 nodes)
                    • EDR Infiniband
                    • EL8 OS
                   • Retirement of victini (as of 05 February 2024)

                  • Racking of last 16 nodes
                  • Installation of NDR/NDR-200 infiniband network
                  "}, {"location": "2023/shinx/#stage-2-19042024-15072024", "title": "Stage 2 (19/04/2024-15/07/2024)", "text": "
                  • Full size cluster

                    • 48 nodes (no job size limit)
                    • NDR-200 Infiniband (single switch Infiniband topology)
                    • EL9 OS
                   • We expect to plan a full Tier-2 downtime in May 2024 to clean up, refactor and renew the core networks (Ethernet and InfiniBand) and some core services. It makes no sense to put shinx into production before that period, and the testing of the EL9 operating system will also take some time.

                  "}, {"location": "2023/shinx/#stage-3-15072024-", "title": "Stage 3 (15/07/2024 - )", "text": "
                  • Cluster in production using EL9 (starting with 9.4). Any user can now submit jobs.
                  "}, {"location": "2023/shinx/#using-doduo-software", "title": "Using doduo software", "text": "

                   For benchmarking and/or compatibility testing, you can try to use the doduo software stack by adding the following line to your job script before the actual software is loaded:

                  module swap env/software/doduo\n
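
                   For example, a complete job script could look like this (a sketch; SomeApp/1.0-foss-2023a and its invocation are hypothetical placeholders for the software you actually want to run):

                   #!/bin/bash\nmodule swap env/software/doduo\nmodule load SomeApp/1.0-foss-2023a\nsomeapp input.dat\n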

                  We mainly expect problems with this in stage 2 of the pilot phase (and in later production phase), due to the change in OS.

                  "}, {"location": "available_software/", "title": "Available software (via modules)", "text": "

                  This table gives an overview of all the available software on the different clusters.

                  "}, {"location": "available_software/detail/ABAQUS/", "title": "ABAQUS", "text": ""}, {"location": "available_software/detail/ABAQUS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ABAQUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABAQUS, load one of these modules using a module load command like:

                  module load ABAQUS/2023\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABAQUS/2023 x x x x x x ABAQUS/2022-hotfix-2214 - x x - x x ABAQUS/2022 - x x - x x ABAQUS/2021-hotfix-2132 - x x - x x"}, {"location": "available_software/detail/ABINIT/", "title": "ABINIT", "text": ""}, {"location": "available_software/detail/ABINIT/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ABINIT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABINIT, load one of these modules using a module load command like:

                  module load ABINIT/9.10.3-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABINIT/9.10.3-intel-2022a - - x - x x ABINIT/9.4.1-intel-2020b - x x x x x ABINIT/9.2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/ABRA2/", "title": "ABRA2", "text": ""}, {"location": "available_software/detail/ABRA2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ABRA2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABRA2, load one of these modules using a module load command like:

                  module load ABRA2/2.23-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABRA2/2.23-GCC-10.2.0 - x x x x x ABRA2/2.23-GCC-9.3.0 - x x - x x ABRA2/2.22-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/ABRicate/", "title": "ABRicate", "text": ""}, {"location": "available_software/detail/ABRicate/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ABRicate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABRicate, load one of these modules using a module load command like:

                  module load ABRicate/0.9.9-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABRicate/0.9.9-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ABySS/", "title": "ABySS", "text": ""}, {"location": "available_software/detail/ABySS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ABySS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ABySS, load one of these modules using a module load command like:

                  module load ABySS/2.3.7-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ABySS/2.3.7-foss-2023a x x x x x x ABySS/2.1.5-foss-2019b - x x - x x"}, {"location": "available_software/detail/ACTC/", "title": "ACTC", "text": ""}, {"location": "available_software/detail/ACTC/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ACTC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ACTC, load one of these modules using a module load command like:

                  module load ACTC/1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ACTC/1.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ADMIXTURE/", "title": "ADMIXTURE", "text": ""}, {"location": "available_software/detail/ADMIXTURE/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ADMIXTURE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ADMIXTURE, load one of these modules using a module load command like:

                  module load ADMIXTURE/1.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ADMIXTURE/1.3.0 - x x - x x"}, {"location": "available_software/detail/AICSImageIO/", "title": "AICSImageIO", "text": ""}, {"location": "available_software/detail/AICSImageIO/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AICSImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AICSImageIO, load one of these modules using a module load command like:

                  module load AICSImageIO/4.14.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AICSImageIO/4.14.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/AMAPVox/", "title": "AMAPVox", "text": ""}, {"location": "available_software/detail/AMAPVox/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AMAPVox installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMAPVox, load one of these modules using a module load command like:

                  module load AMAPVox/1.9.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMAPVox/1.9.4-Java-11 x x x - x x"}, {"location": "available_software/detail/AMICA/", "title": "AMICA", "text": ""}, {"location": "available_software/detail/AMICA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AMICA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMICA, load one of these modules using a module load command like:

                  module load AMICA/2024.1.19-intel-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMICA/2024.1.19-intel-2023a x x x x x x"}, {"location": "available_software/detail/AMOS/", "title": "AMOS", "text": ""}, {"location": "available_software/detail/AMOS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AMOS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMOS, load one of these modules using a module load command like:

                  module load AMOS/3.1.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMOS/3.1.0-foss-2023a x x x x x x AMOS/3.1.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/AMPtk/", "title": "AMPtk", "text": ""}, {"location": "available_software/detail/AMPtk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AMPtk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AMPtk, load one of these modules using a module load command like:

                  module load AMPtk/1.5.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AMPtk/1.5.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/ANTLR/", "title": "ANTLR", "text": ""}, {"location": "available_software/detail/ANTLR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ANTLR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ANTLR, load one of these modules using a module load command like:

                  module load ANTLR/2.7.7-GCCcore-10.3.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ANTLR/2.7.7-GCCcore-10.3.0-Java-11 - x x - x x ANTLR/2.7.7-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ANTs/", "title": "ANTs", "text": ""}, {"location": "available_software/detail/ANTs/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ANTs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ANTs, load one of these modules using a module load command like:

                  module load ANTs/2.3.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ANTs/2.3.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/APR-util/", "title": "APR-util", "text": ""}, {"location": "available_software/detail/APR-util/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which APR-util installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using APR-util, load one of these modules using a module load command like:

                  module load APR-util/1.6.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty APR-util/1.6.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/APR/", "title": "APR", "text": ""}, {"location": "available_software/detail/APR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which APR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using APR, load one of these modules using a module load command like:

                  module load APR/1.7.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty APR/1.7.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ARAGORN/", "title": "ARAGORN", "text": ""}, {"location": "available_software/detail/ARAGORN/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ARAGORN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ARAGORN, load one of these modules using a module load command like:

                  module load ARAGORN/1.2.41-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ARAGORN/1.2.41-foss-2021b x x x - x x ARAGORN/1.2.38-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/ASCAT/", "title": "ASCAT", "text": ""}, {"location": "available_software/detail/ASCAT/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ASCAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ASCAT, load one of these modules using a module load command like:

                  module load ASCAT/3.1.2-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ASCAT/3.1.2-foss-2022b-R-4.2.2 x x x x x x ASCAT/3.1.2-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ASE/", "title": "ASE", "text": ""}, {"location": "available_software/detail/ASE/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ASE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ASE, load one of these modules using a module load command like:

                  module load ASE/3.22.1-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ASE/3.22.1-intel-2022a x x x x x x ASE/3.22.1-intel-2021b x x x - x x ASE/3.22.1-gomkl-2021a x x x x x x ASE/3.22.1-foss-2022a x x x x x x ASE/3.22.1-foss-2021b x x x - x x ASE/3.21.1-fosscuda-2020b - - - - x - ASE/3.21.1-foss-2020b - - x x x - ASE/3.20.1-intel-2020a-Python-3.8.2 x x x x x x ASE/3.20.1-fosscuda-2020b - - - - x - ASE/3.20.1-foss-2020b - x x x x x ASE/3.19.0-intel-2019b-Python-3.7.4 - x x - x x ASE/3.19.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ATK/", "title": "ATK", "text": ""}, {"location": "available_software/detail/ATK/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ATK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ATK, load one of these modules using a module load command like:

                  module load ATK/2.38.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ATK/2.38.0-GCCcore-12.3.0 x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x ATK/2.38.0-GCCcore-11.3.0 x x x x x x ATK/2.36.0-GCCcore-11.2.0 x x x x x x ATK/2.36.0-GCCcore-10.3.0 x x x - x x ATK/2.36.0-GCCcore-10.2.0 x x x x x x ATK/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/AUGUSTUS/", "title": "AUGUSTUS", "text": ""}, {"location": "available_software/detail/AUGUSTUS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AUGUSTUS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AUGUSTUS, load one of these modules using a module load command like:

                  module load AUGUSTUS/3.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AUGUSTUS/3.4.0-foss-2021b x x x x x x AUGUSTUS/3.4.0-foss-2020b x x x x x x AUGUSTUS/3.3.3-intel-2019b - x x - x x AUGUSTUS/3.3.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/Abseil/", "title": "Abseil", "text": ""}, {"location": "available_software/detail/Abseil/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Abseil installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Abseil, load one of these modules using a module load command like:

                  module load Abseil/20230125.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Abseil/20230125.3-GCCcore-12.3.0 x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/AdapterRemoval/", "title": "AdapterRemoval", "text": ""}, {"location": "available_software/detail/AdapterRemoval/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AdapterRemoval installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AdapterRemoval, load one of these modules using a module load command like:

                  module load AdapterRemoval/2.3.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AdapterRemoval/2.3.3-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/Albumentations/", "title": "Albumentations", "text": ""}, {"location": "available_software/detail/Albumentations/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Albumentations installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Albumentations, load one of these modules using a module load command like:

                  module load Albumentations/1.1.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Albumentations/1.1.0-foss-2021b x x x - x x Albumentations/1.1.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/AlphaFold/", "title": "AlphaFold", "text": ""}, {"location": "available_software/detail/AlphaFold/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AlphaFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AlphaFold, load one of these modules using a module load command like:

                  module load AlphaFold/2.3.4-foss-2022a-ColabFold\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AlphaFold/2.3.4-foss-2022a-ColabFold - - x - x - AlphaFold/2.3.4-foss-2022a-CUDA-11.7.0-ColabFold x - - - x - AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0 x - - - x - AlphaFold/2.3.1-foss-2022a x x x x x x AlphaFold/2.3.0-foss-2021b-CUDA-11.4.1 x - - - x - AlphaFold/2.2.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.2.2-foss-2021a - x x - x x AlphaFold/2.1.2-foss-2021a-CUDA-11.3.1 x - - - x - AlphaFold/2.1.2-foss-2021a - x x - x x AlphaFold/2.1.1-fosscuda-2020b x - - - x - AlphaFold/2.0.0-fosscuda-2020b x - - - x - AlphaFold/2.0.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/AlphaPulldown/", "title": "AlphaPulldown", "text": ""}, {"location": "available_software/detail/AlphaPulldown/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AlphaPulldown installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AlphaPulldown, load one of these modules using a module load command like:

                  module load AlphaPulldown/0.30.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AlphaPulldown/0.30.7-foss-2022a - - x - x - AlphaPulldown/0.30.4-fosscuda-2020b x - - - x - AlphaPulldown/0.30.4-foss-2020b x x x x x x"}, {"location": "available_software/detail/Altair-EDEM/", "title": "Altair-EDEM", "text": ""}, {"location": "available_software/detail/Altair-EDEM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Altair-EDEM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Altair-EDEM, load one of these modules using a module load command like:

                  module load Altair-EDEM/2021.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Altair-EDEM/2021.2 - x x - x -"}, {"location": "available_software/detail/Amber/", "title": "Amber", "text": ""}, {"location": "available_software/detail/Amber/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Amber installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Amber, load one of these modules using a module load command like:

                  module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/AmberMini/", "title": "AmberMini", "text": ""}, {"location": "available_software/detail/AmberMini/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AmberMini installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AmberMini, load one of these modules using a module load command like:

                  module load AmberMini/16.16.0-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AmberMini/16.16.0-intel-2020a - x x - x x"}, {"location": "available_software/detail/AmberTools/", "title": "AmberTools", "text": ""}, {"location": "available_software/detail/AmberTools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AmberTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AmberTools, load one of these modules using a module load command like:

                  module load AmberTools/20-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AmberTools/20-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Anaconda3/", "title": "Anaconda3", "text": ""}, {"location": "available_software/detail/Anaconda3/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Anaconda3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Anaconda3, load one of these modules using a module load command like:

                  module load Anaconda3/2023.03-1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Anaconda3/2023.03-1 x x x x x x Anaconda3/2020.11 - x x - x - Anaconda3/2020.07 - x - - - - Anaconda3/2020.02 - x x - x -"}, {"location": "available_software/detail/Annocript/", "title": "Annocript", "text": ""}, {"location": "available_software/detail/Annocript/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Annocript installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Annocript, load one of these modules using a module load command like:

                  module load Annocript/2.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Annocript/2.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ArchR/", "title": "ArchR", "text": ""}, {"location": "available_software/detail/ArchR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ArchR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ArchR, load one of these modules using a module load command like:

                  module load ArchR/1.0.2-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ArchR/1.0.2-foss-2023a-R-4.3.2 x x x x x x ArchR/1.0.1-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Archive-Zip/", "title": "Archive-Zip", "text": ""}, {"location": "available_software/detail/Archive-Zip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Archive-Zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Archive-Zip, load one of these modules using a module load command like:

                  module load Archive-Zip/1.68-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Archive-Zip/1.68-GCCcore-11.3.0 x x x - x x Archive-Zip/1.68-GCCcore-11.2.0 x x x - x x Archive-Zip/1.68-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Arlequin/", "title": "Arlequin", "text": ""}, {"location": "available_software/detail/Arlequin/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Arlequin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Arlequin, load one of these modules using a module load command like:

                  module load Arlequin/3.5.2.2-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Arlequin/3.5.2.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Armadillo/", "title": "Armadillo", "text": ""}, {"location": "available_software/detail/Armadillo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Armadillo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Armadillo, load one of these modules using a module load command like:

                  module load Armadillo/12.6.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Armadillo/12.6.2-foss-2023a x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/Arrow/", "title": "Arrow", "text": ""}, {"location": "available_software/detail/Arrow/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Arrow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Arrow, load one of these modules using a module load command like:

                  module load Arrow/14.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Arrow/14.0.1-gfbf-2023a x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x Arrow/8.0.0-foss-2022a x x x x x x Arrow/6.0.0-foss-2021b x x x x x x Arrow/6.0.0-foss-2021a - x x - x x Arrow/0.17.1-intel-2020b - x x - x x Arrow/0.17.1-intel-2020a-Python-3.8.2 - x x - x x Arrow/0.17.1-fosscuda-2020b - - - - x - Arrow/0.17.1-foss-2020a-Python-3.8.2 - x x - x x Arrow/0.16.0-intel-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/ArviZ/", "title": "ArviZ", "text": ""}, {"location": "available_software/detail/ArviZ/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ArviZ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ArviZ, load one of these modules using a module load command like:

                  module load ArviZ/0.16.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ArviZ/0.16.1-foss-2023a x x x x x x ArviZ/0.12.1-foss-2021a x x x x x x ArviZ/0.11.4-intel-2021b x x x - x x ArviZ/0.11.1-intel-2020b - x x - x x ArviZ/0.7.0-intel-2019b-Python-3.7.4 - x x - x x ArviZ/0.7.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Aspera-CLI/", "title": "Aspera-CLI", "text": ""}, {"location": "available_software/detail/Aspera-CLI/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Aspera-CLI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Aspera-CLI, load one of these modules using a module load command like:

                  module load Aspera-CLI/3.9.6.1467.159c5b1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Aspera-CLI/3.9.6.1467.159c5b1 - x x - x -"}, {"location": "available_software/detail/AutoDock-Vina/", "title": "AutoDock-Vina", "text": ""}, {"location": "available_software/detail/AutoDock-Vina/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AutoDock-Vina installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoDock-Vina, load one of these modules using a module load command like:

                  module load AutoDock-Vina/1.2.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoDock-Vina/1.2.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/AutoGeneS/", "title": "AutoGeneS", "text": ""}, {"location": "available_software/detail/AutoGeneS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AutoGeneS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoGeneS, load one of these modules using a module load command like:

                  module load AutoGeneS/1.0.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoGeneS/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/AutoMap/", "title": "AutoMap", "text": ""}, {"location": "available_software/detail/AutoMap/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which AutoMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using AutoMap, load one of these modules using a module load command like:

                  module load AutoMap/1.0-foss-2019b-20200324\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty AutoMap/1.0-foss-2019b-20200324 - x x - x x"}, {"location": "available_software/detail/Autoconf/", "title": "Autoconf", "text": ""}, {"location": "available_software/detail/Autoconf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Autoconf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Autoconf, load one of these modules using a module load command like:

                  module load Autoconf/2.71-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Autoconf/2.71-GCCcore-13.2.0 x x x x x x Autoconf/2.71-GCCcore-12.3.0 x x x x x x Autoconf/2.71-GCCcore-12.2.0 x x x x x x Autoconf/2.71-GCCcore-11.3.0 x x x x x x Autoconf/2.71-GCCcore-11.2.0 x x x x x x Autoconf/2.71-GCCcore-10.3.0 x x x x x x Autoconf/2.71 x x x x x x Autoconf/2.69-GCCcore-10.2.0 x x x x x x Autoconf/2.69-GCCcore-9.3.0 x x x x x x Autoconf/2.69-GCCcore-8.3.0 x x x x x x Autoconf/2.69-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Automake/", "title": "Automake", "text": ""}, {"location": "available_software/detail/Automake/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Automake installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Automake, load one of these modules using a module load command like:

                  module load Automake/1.16.5-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Automake/1.16.5-GCCcore-13.2.0 x x x x x x Automake/1.16.5-GCCcore-12.3.0 x x x x x x Automake/1.16.5-GCCcore-12.2.0 x x x x x x Automake/1.16.5-GCCcore-11.3.0 x x x x x x Automake/1.16.5 x x x x x x Automake/1.16.4-GCCcore-11.2.0 x x x x x x Automake/1.16.3-GCCcore-10.3.0 x x x x x x Automake/1.16.2-GCCcore-10.2.0 x x x x x x Automake/1.16.1-GCCcore-9.3.0 x x x x x x Automake/1.16.1-GCCcore-8.3.0 x x x x x x Automake/1.16.1-GCCcore-8.2.0 - x - - - - Automake/1.15.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Autotools/", "title": "Autotools", "text": ""}, {"location": "available_software/detail/Autotools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Autotools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Autotools, load one of these modules using a module load command like:

                  module load Autotools/20220317-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Autotools/20220317-GCCcore-13.2.0 x x x x x x Autotools/20220317-GCCcore-12.3.0 x x x x x x Autotools/20220317-GCCcore-12.2.0 x x x x x x Autotools/20220317-GCCcore-11.3.0 x x x x x x Autotools/20220317 x x x x x x Autotools/20210726-GCCcore-11.2.0 x x x x x x Autotools/20210128-GCCcore-10.3.0 x x x x x x Autotools/20200321-GCCcore-10.2.0 x x x x x x Autotools/20180311-GCCcore-9.3.0 x x x x x x Autotools/20180311-GCCcore-8.3.0 x x x x x x Autotools/20180311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Avogadro2/", "title": "Avogadro2", "text": ""}, {"location": "available_software/detail/Avogadro2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Avogadro2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Avogadro2, load one of these modules using a module load command like:

                  module load Avogadro2/1.97.0-linux-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Avogadro2/1.97.0-linux-x86_64 x x x - x x"}, {"location": "available_software/detail/BAMSurgeon/", "title": "BAMSurgeon", "text": ""}, {"location": "available_software/detail/BAMSurgeon/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BAMSurgeon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BAMSurgeon, load one of these modules using a module load command like:

                  module load BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BAMSurgeon/1.2-GCC-8.3.0-Python-2.7.16 - x x - x -"}, {"location": "available_software/detail/BBMap/", "title": "BBMap", "text": ""}, {"location": "available_software/detail/BBMap/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BBMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BBMap, load one of these modules using a module load command like:

                  module load BBMap/39.01-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BBMap/39.01-GCC-12.2.0 x x x x x x BBMap/38.98-GCC-11.2.0 x x x - x x BBMap/38.87-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/BCFtools/", "title": "BCFtools", "text": ""}, {"location": "available_software/detail/BCFtools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BCFtools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BCFtools, load one of these modules using a module load command like:

                  module load BCFtools/1.18-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BCFtools/1.18-GCC-12.3.0 x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x BCFtools/1.15.1-GCC-11.3.0 x x x x x x BCFtools/1.14-GCC-11.2.0 x x x x x x BCFtools/1.12-GCC-10.3.0 x x x - x x BCFtools/1.12-GCC-10.2.0 - x x - x - BCFtools/1.11-GCC-10.2.0 x x x x x x BCFtools/1.10.2-iccifort-2019.5.281 - x x - x x BCFtools/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BDBag/", "title": "BDBag", "text": ""}, {"location": "available_software/detail/BDBag/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BDBag installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BDBag, load one of these modules using a module load command like:

                  module load BDBag/1.6.3-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BDBag/1.6.3-intel-2021b x x x - x x"}, {"location": "available_software/detail/BEDOPS/", "title": "BEDOPS", "text": ""}, {"location": "available_software/detail/BEDOPS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BEDOPS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BEDOPS, load one of these modules using a module load command like:

                  module load BEDOPS/2.4.41-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BEDOPS/2.4.41-foss-2021b x x x x x x"}, {"location": "available_software/detail/BEDTools/", "title": "BEDTools", "text": ""}, {"location": "available_software/detail/BEDTools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BEDTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BEDTools, load one of these modules using a module load command like:

                  module load BEDTools/2.31.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BEDTools/2.31.0-GCC-12.3.0 x x x x x x BEDTools/2.30.0-GCC-12.2.0 x x x x x x BEDTools/2.30.0-GCC-11.3.0 x x x x x x BEDTools/2.30.0-GCC-11.2.0 x x x x x x BEDTools/2.30.0-GCC-10.2.0 - x x x x x BEDTools/2.29.2-GCC-9.3.0 - x x - x x BEDTools/2.29.2-GCC-8.3.0 - x x - x x BEDTools/2.19.1-GCC-8.3.0 - - - - - x"}, {"location": "available_software/detail/BLAST%2B/", "title": "BLAST+", "text": ""}, {"location": "available_software/detail/BLAST%2B/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BLAST+ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLAST+, load one of these modules using a module load command like:

                  module load BLAST+/2.14.1-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLAST+/2.14.1-gompi-2023a x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x BLAST+/2.13.0-gompi-2022a x x x x x x BLAST+/2.12.0-gompi-2021b x x x x x x BLAST+/2.11.0-gompi-2021a - x x x x x BLAST+/2.11.0-gompi-2020b x x x x x x BLAST+/2.10.1-iimpi-2020a - x x - x x BLAST+/2.10.1-gompi-2020a - x x - x x BLAST+/2.9.0-iimpi-2019b - x x - x x BLAST+/2.9.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/BLAT/", "title": "BLAT", "text": ""}, {"location": "available_software/detail/BLAT/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BLAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLAT, load one of these modules using a module load command like:

                  module load BLAT/3.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLAT/3.7-GCC-11.3.0 x x x x x x BLAT/3.5-GCC-9.3.0 - x x - x - BLAT/3.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BLIS/", "title": "BLIS", "text": ""}, {"location": "available_software/detail/BLIS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BLIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BLIS, load one of these modules using a module load command like:

                  module load BLIS/0.9.0-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BLIS/0.9.0-GCC-13.2.0 x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x BLIS/0.9.0-GCC-11.3.0 x x x x x x BLIS/0.8.1-GCC-11.2.0 x x x x x x BLIS/0.8.1-GCC-10.3.0 x x x x x x BLIS/0.8.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/BRAKER/", "title": "BRAKER", "text": ""}, {"location": "available_software/detail/BRAKER/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BRAKER installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BRAKER, load one of these modules using a module load command like:

                  module load BRAKER/2.1.6-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BRAKER/2.1.6-foss-2021b x x x x x x BRAKER/2.1.6-foss-2020b x x x - x x BRAKER/2.1.5-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BSMAPz/", "title": "BSMAPz", "text": ""}, {"location": "available_software/detail/BSMAPz/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BSMAPz installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BSMAPz, load one of these modules using a module load command like:

                  module load BSMAPz/1.1.1-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BSMAPz/1.1.1-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/BSseeker2/", "title": "BSseeker2", "text": ""}, {"location": "available_software/detail/BSseeker2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which BSseeker2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using BSseeker2, load one of these modules using a module load command like:

                  module load BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BSseeker2/2.1.8-iccifort-2019.5.281-Python-2.7.16 - x - - - - BSseeker2/2.1.8-GCC-8.3.0-Python-2.7.16 - x - - - -"}, {"location": "available_software/detail/BUSCO/", "title": "BUSCO", "text": ""}, {"location": "available_software/detail/BUSCO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BUSCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BUSCO, load one of these modules using a module load command like:

                  module load BUSCO/5.4.3-foss-2021b\n
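
                  As a hedged example (assembly.fasta is a placeholder assembly and bacteria_odb10 a placeholder lineage dataset), a genome-mode BUSCO run could look like:

                  # assess assembly completeness with BUSCO in genome mode, using 4 CPU cores\nmodule load BUSCO/5.4.3-foss-2021b\nbusco -i assembly.fasta -l bacteria_odb10 -o busco_out -m genome -c 4\n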

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BUSCO/5.4.3-foss-2021b x x x - x x BUSCO/5.1.2-foss-2020b - x x x x - BUSCO/4.1.2-foss-2020b - x x - x x BUSCO/4.0.6-foss-2020b - x x x x x BUSCO/4.0.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BUStools/", "title": "BUStools", "text": ""}, {"location": "available_software/detail/BUStools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BUStools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BUStools, load one of these modules using a module load command like:

                  module load BUStools/0.43.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BUStools/0.43.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/BWA/", "title": "BWA", "text": ""}, {"location": "available_software/detail/BWA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BWA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BWA, load one of these modules using a module load command like:

                  module load BWA/0.7.17-iccifort-2019.5.281\n
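
                  As a minimal sketch (ref.fa, reads_1.fq and reads_2.fq are placeholder input files), indexing a reference and aligning paired-end reads with BWA-MEM could look like:

                  # build the index once, then align paired-end reads on 4 threads\nmodule load BWA/0.7.17-iccifort-2019.5.281\nbwa index ref.fa\nbwa mem -t 4 ref.fa reads_1.fq reads_2.fq > aln.sam\n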

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BWA/0.7.17-iccifort-2019.5.281 - x - - - - BWA/0.7.17-GCCcore-12.3.0 x x x x x x BWA/0.7.17-GCCcore-12.2.0 x x x x x x BWA/0.7.17-GCCcore-11.3.0 x x x x x x BWA/0.7.17-GCCcore-11.2.0 x x x x x x BWA/0.7.17-GCC-10.2.0 - x x x x x BWA/0.7.17-GCC-9.3.0 - x x - x x BWA/0.7.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/BamTools/", "title": "BamTools", "text": ""}, {"location": "available_software/detail/BamTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BamTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BamTools, load one of these modules using a module load command like:

                  module load BamTools/2.5.2-GCC-12.3.0\n
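
                  As a small, hedged example (alignments.bam is a placeholder BAM file), printing basic alignment statistics could look like:

                  # summary statistics for a BAM file\nmodule load BamTools/2.5.2-GCC-12.3.0\nbamtools stats -in alignments.bam\n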

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BamTools/2.5.2-GCC-12.3.0 x x x x x x BamTools/2.5.2-GCC-12.2.0 x x x x x x BamTools/2.5.2-GCC-11.3.0 x x x x x x BamTools/2.5.2-GCC-11.2.0 x x x x x x BamTools/2.5.1-iccifort-2019.5.281 - x x - x x BamTools/2.5.1-GCC-10.2.0 x x x x x x BamTools/2.5.1-GCC-9.3.0 - x x - x x BamTools/2.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bambi/", "title": "Bambi", "text": ""}, {"location": "available_software/detail/Bambi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bambi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bambi, load one of these modules using a module load command like:

                  module load Bambi/0.7.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bambi/0.7.1-intel-2021b x x x - x x"}, {"location": "available_software/detail/Bandage/", "title": "Bandage", "text": ""}, {"location": "available_software/detail/Bandage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bandage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bandage, load one of these modules using a module load command like:

                  module load Bandage/0.9.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bandage/0.9.0-GCCcore-11.2.0 x x x - x x Bandage/0.8.1_Centos - x x x x x"}, {"location": "available_software/detail/BatMeth2/", "title": "BatMeth2", "text": ""}, {"location": "available_software/detail/BatMeth2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BatMeth2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BatMeth2, load one of these modules using a module load command like:

                  module load BatMeth2/2.1-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BatMeth2/2.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/BayeScEnv/", "title": "BayeScEnv", "text": ""}, {"location": "available_software/detail/BayeScEnv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayeScEnv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayeScEnv, load one of these modules using a module load command like:

                  module load BayeScEnv/1.1-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayeScEnv/1.1-iccifort-2019.5.281 - x - - - - BayeScEnv/1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/BayeScan/", "title": "BayeScan", "text": ""}, {"location": "available_software/detail/BayeScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayeScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayeScan, load one of these modules using a module load command like:

                  module load BayeScan/2.1-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayeScan/2.1-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/BayesAss3-SNPs/", "title": "BayesAss3-SNPs", "text": ""}, {"location": "available_software/detail/BayesAss3-SNPs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayesAss3-SNPs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayesAss3-SNPs, load one of these modules using a module load command like:

                  module load BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayesAss3-SNPs/1.1-2022.02.19-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/BayesPrism/", "title": "BayesPrism", "text": ""}, {"location": "available_software/detail/BayesPrism/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BayesPrism installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BayesPrism, load one of these modules using a module load command like:

                  module load BayesPrism/2.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BayesPrism/2.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Bazel/", "title": "Bazel", "text": ""}, {"location": "available_software/detail/Bazel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bazel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bazel, load one of these modules using a module load command like:

                  module load Bazel/6.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bazel/6.3.1-GCCcore-12.3.0 x x x x x x Bazel/6.3.1-GCCcore-12.2.0 x x x x x x Bazel/5.1.1-GCCcore-11.3.0 x x x x x x Bazel/4.2.2-GCCcore-11.2.0 - - - x - - Bazel/3.7.2-GCCcore-11.2.0 x x x x x x Bazel/3.7.2-GCCcore-10.3.0 x x x x x x Bazel/3.7.2-GCCcore-10.2.0 x x x x x x Bazel/3.6.0-GCCcore-9.3.0 - x x - x x Bazel/3.4.1-GCCcore-8.3.0 - - x - x x Bazel/2.0.0-GCCcore-10.2.0 - x x x x x Bazel/2.0.0-GCCcore-8.3.0 - x x - x x Bazel/0.29.1-GCCcore-8.3.0 - x x - x x Bazel/0.26.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Beast/", "title": "Beast", "text": ""}, {"location": "available_software/detail/Beast/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Beast installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Beast, load one of these modules using a module load command like:

                  module load Beast/2.7.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Beast/2.7.3-GCC-11.3.0 x x x x x x Beast/2.6.4-GCC-10.2.0 - x x - x - Beast/1.10.5pre1-GCC-11.3.0 x x x - x x Beast/1.10.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/BeautifulSoup/", "title": "BeautifulSoup", "text": ""}, {"location": "available_software/detail/BeautifulSoup/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BeautifulSoup, load one of these modules using a module load command like:

                  module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x BeautifulSoup/4.11.1-GCCcore-12.2.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.3.0 x x x x x x BeautifulSoup/4.10.0-GCCcore-11.2.0 x x x - x x BeautifulSoup/4.10.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/BerkeleyGW/", "title": "BerkeleyGW", "text": ""}, {"location": "available_software/detail/BerkeleyGW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BerkeleyGW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BerkeleyGW, load one of these modules using a module load command like:

                  module load BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BerkeleyGW/2.1.0-intel-2019b-Python-3.7.4 - x x - x x BerkeleyGW/2.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BiG-SCAPE/", "title": "BiG-SCAPE", "text": ""}, {"location": "available_software/detail/BiG-SCAPE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BiG-SCAPE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BiG-SCAPE, load one of these modules using a module load command like:

                  module load BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BiG-SCAPE/1.0.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/BigDFT/", "title": "BigDFT", "text": ""}, {"location": "available_software/detail/BigDFT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BigDFT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BigDFT, load one of these modules using a module load command like:

                  module load BigDFT/1.9.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BigDFT/1.9.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/BinSanity/", "title": "BinSanity", "text": ""}, {"location": "available_software/detail/BinSanity/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BinSanity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BinSanity, load one of these modules using a module load command like:

                  module load BinSanity/0.3.5-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BinSanity/0.3.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Bio-DB-HTS/", "title": "Bio-DB-HTS", "text": ""}, {"location": "available_software/detail/Bio-DB-HTS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-DB-HTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bio-DB-HTS, load one of these modules using a module load command like:

                  module load Bio-DB-HTS/3.01-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-DB-HTS/3.01-GCC-11.3.0 x x x - x x Bio-DB-HTS/3.01-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Bio-EUtilities/", "title": "Bio-EUtilities", "text": ""}, {"location": "available_software/detail/Bio-EUtilities/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-EUtilities installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bio-EUtilities, load one of these modules using a module load command like:

                  module load Bio-EUtilities/1.76-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-EUtilities/1.76-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bio-SearchIO-hmmer/", "title": "Bio-SearchIO-hmmer", "text": ""}, {"location": "available_software/detail/Bio-SearchIO-hmmer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bio-SearchIO-hmmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

                  module load Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bio-SearchIO-hmmer/1.7.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/BioPerl/", "title": "BioPerl", "text": ""}, {"location": "available_software/detail/BioPerl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which BioPerl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using BioPerl, load one of these modules using a module load command like:

                  module load BioPerl/1.7.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty BioPerl/1.7.8-GCCcore-11.3.0 x x x x x x BioPerl/1.7.8-GCCcore-11.2.0 x x x x x x BioPerl/1.7.8-GCCcore-10.2.0 - x x x x x BioPerl/1.7.7-GCCcore-9.3.0 - x x - x x BioPerl/1.7.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Biopython/", "title": "Biopython", "text": ""}, {"location": "available_software/detail/Biopython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Biopython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Biopython, load one of these modules using a module load command like:

                  module load Biopython/1.83-foss-2023a\n
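
                  A quick, hedged way to check that the loaded module works (Bio is Biopython's import name) is to print its version from Python:

                  # verify the Biopython installation provided by the module\nmodule load Biopython/1.83-foss-2023a\npython -c 'import Bio; print(Bio.__version__)'\n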

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Biopython/1.83-foss-2023a x x x x x x Biopython/1.81-foss-2022b x x x x x x Biopython/1.79-foss-2022a x x x x x x Biopython/1.79-foss-2021b x x x x x x Biopython/1.79-foss-2021a x x x x x x Biopython/1.78-intel-2020b - x x - x x Biopython/1.78-intel-2020a-Python-3.8.2 - x x - x x Biopython/1.78-fosscuda-2020b x - - - x - Biopython/1.78-foss-2020b x x x x x x Biopython/1.78-foss-2020a-Python-3.8.2 - x x - x x Biopython/1.76-foss-2021b-Python-2.7.18 x x x x x x Biopython/1.76-foss-2020b-Python-2.7.18 - x x x x x Biopython/1.75-intel-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-3.7.4 - x x - x x Biopython/1.75-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Bismark/", "title": "Bismark", "text": ""}, {"location": "available_software/detail/Bismark/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bismark installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bismark, load one of these modules using a module load command like:

                  module load Bismark/0.23.1-foss-2021b\n
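
                  As a hedged sketch (genome_dir/ and reads.fq.gz are placeholders for your own reference folder and read file), preparing a bisulfite genome and aligning reads could look like:

                  # one-time genome preparation, then bisulfite alignment\nmodule load Bismark/0.23.1-foss-2021b\nbismark_genome_preparation genome_dir/\nbismark --genome genome_dir/ reads.fq.gz\n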

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bismark/0.23.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/Bison/", "title": "Bison", "text": ""}, {"location": "available_software/detail/Bison/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bison installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bison, load one of these modules using a module load command like:

                  module load Bison/3.8.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bison/3.8.2-GCCcore-13.2.0 x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x Bison/3.8.2-GCCcore-11.3.0 x x x x x x Bison/3.8.2 x x x x x x Bison/3.7.6-GCCcore-11.2.0 x x x x x x Bison/3.7.6-GCCcore-10.3.0 x x x x x x Bison/3.7.6 x x x - x - Bison/3.7.1-GCCcore-10.2.0 x x x x x x Bison/3.7.1 x x x - x - Bison/3.5.3-GCCcore-9.3.0 x x x x x x Bison/3.5.3 x x x - x - Bison/3.3.2-GCCcore-8.3.0 x x x x x x Bison/3.3.2 x x x x x x Bison/3.0.5-GCCcore-8.2.0 - x - - - - Bison/3.0.5 - x - - - x Bison/3.0.4 x x x x x x"}, {"location": "available_software/detail/Blender/", "title": "Blender", "text": ""}, {"location": "available_software/detail/Blender/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blender installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Blender, load one of these modules using a module load command like:

                  module load Blender/3.5.0-linux-x86_64-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blender/3.5.0-linux-x86_64-CUDA-11.7.0 x x x x x x Blender/3.3.1-linux-x86_64-CUDA-11.7.0 x - - - x - Blender/3.3.1-linux-x86_64 x x x - x x Blender/2.81-intel-2019b-Python-3.7.4 - x x - x x Blender/2.81-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Block/", "title": "Block", "text": ""}, {"location": "available_software/detail/Block/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Block installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Block, load one of these modules using a module load command like:

                  module load Block/1.5.3-20200525-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Block/1.5.3-20200525-foss-2022b x x x x x x Block/1.5.3-20200525-foss-2022a - x x x x x"}, {"location": "available_software/detail/Blosc/", "title": "Blosc", "text": ""}, {"location": "available_software/detail/Blosc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blosc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Blosc, load one of these modules using a module load command like:

                  module load Blosc/1.21.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blosc/1.21.3-GCCcore-11.3.0 x x x x x x Blosc/1.21.1-GCCcore-11.2.0 x x x x x x Blosc/1.21.0-GCCcore-10.3.0 x x x x x x Blosc/1.21.0-GCCcore-10.2.0 - x x x x x Blosc/1.17.1-GCCcore-9.3.0 x x x x x x Blosc/1.17.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Blosc2/", "title": "Blosc2", "text": ""}, {"location": "available_software/detail/Blosc2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Blosc2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Blosc2, load one of these modules using a module load command like:

                  module load Blosc2/2.6.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Blosc2/2.6.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Bonito/", "title": "Bonito", "text": ""}, {"location": "available_software/detail/Bonito/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bonito installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bonito, load one of these modules using a module load command like:

                  module load Bonito/0.4.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bonito/0.4.0-fosscuda-2020b - - - - x - Bonito/0.3.8-fosscuda-2020b - - - - x - Bonito/0.1.0-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/Bonnie%2B%2B/", "title": "Bonnie++", "text": ""}, {"location": "available_software/detail/Bonnie%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bonnie++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bonnie++, load one of these modules using a module load command like:

                  module load Bonnie++/2.00a-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bonnie++/2.00a-GCC-10.3.0 - x - - - -"}, {"location": "available_software/detail/Boost.MPI/", "title": "Boost.MPI", "text": ""}, {"location": "available_software/detail/Boost.MPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost.MPI, load one of these modules using a module load command like:

                  module load Boost.MPI/1.81.0-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.MPI/1.81.0-gompi-2022b x x x x x x Boost.MPI/1.79.0-gompi-2022a - x x x x x Boost.MPI/1.77.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Boost.Python-NumPy/", "title": "Boost.Python-NumPy", "text": ""}, {"location": "available_software/detail/Boost.Python-NumPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.Python-NumPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost.Python-NumPy, load one of these modules using a module load command like:

                  module load Boost.Python-NumPy/1.79.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.Python-NumPy/1.79.0-foss-2022a - - x - x -"}, {"location": "available_software/detail/Boost.Python/", "title": "Boost.Python", "text": ""}, {"location": "available_software/detail/Boost.Python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost.Python, load one of these modules using a module load command like:

                  module load Boost.Python/1.79.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost.Python/1.79.0-GCC-11.3.0 x x x x x x Boost.Python/1.77.0-GCC-11.2.0 x x x - x x Boost.Python/1.72.0-iimpi-2020a - x x - x x Boost.Python/1.71.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Boost/", "title": "Boost", "text": ""}, {"location": "available_software/detail/Boost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Boost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Boost, load one of these modules using a module load command like:

                  module load Boost/1.82.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Boost/1.82.0-GCC-12.3.0 x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x Boost/1.79.0-GCC-11.3.0 x x x x x x Boost/1.79.0-GCC-11.2.0 x x x x x x Boost/1.77.0-intel-compilers-2021.4.0 x x x x x x Boost/1.77.0-GCC-11.2.0 x x x x x x Boost/1.76.0-intel-compilers-2021.2.0 - x x - x x Boost/1.76.0-GCC-10.3.0 x x x x x x Boost/1.75.0-GCC-11.2.0 x x x x x x Boost/1.74.0-iccifort-2020.4.304 - x x x x x Boost/1.74.0-GCC-10.2.0 x x x x x x Boost/1.72.0-iompi-2020a - x - - - - Boost/1.72.0-iimpi-2020a x x x x x x Boost/1.72.0-gompi-2020a - x x - x x Boost/1.71.0-iimpi-2019b - x x - x x Boost/1.71.0-gompi-2019b x x x - x x"}, {"location": "available_software/detail/Bottleneck/", "title": "Bottleneck", "text": ""}, {"location": "available_software/detail/Bottleneck/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bottleneck installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bottleneck, load one of these modules using a module load command like:

                  module load Bottleneck/1.3.2-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bottleneck/1.3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Bowtie/", "title": "Bowtie", "text": ""}, {"location": "available_software/detail/Bowtie/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bowtie installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bowtie, load one of these modules using a module load command like:

                  module load Bowtie/1.3.1-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bowtie/1.3.1-GCC-11.3.0 x x x x x x Bowtie/1.3.1-GCC-11.2.0 x x x x x x Bowtie/1.3.0-GCC-10.2.0 - x x - x - Bowtie/1.2.3-iccifort-2019.5.281 - x - - - - Bowtie/1.2.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bowtie2/", "title": "Bowtie2", "text": ""}, {"location": "available_software/detail/Bowtie2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bowtie2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bowtie2, load one of these modules using a module load command like:

                  module load Bowtie2/2.4.5-GCC-11.3.0\n
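
                  As a minimal sketch (ref.fa, reads_1.fq and reads_2.fq are placeholder files), building an index and aligning paired-end reads could look like:

                  # build the Bowtie2 index, then align paired-end reads\nmodule load Bowtie2/2.4.5-GCC-11.3.0\nbowtie2-build ref.fa ref_idx\nbowtie2 -x ref_idx -1 reads_1.fq -2 reads_2.fq -S aln.sam\n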

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bowtie2/2.4.5-GCC-11.3.0 x x x x x x Bowtie2/2.4.4-GCC-11.2.0 x x x - x x Bowtie2/2.4.2-GCC-10.2.0 - x x x x x Bowtie2/2.4.1-GCC-9.3.0 - x x - x x Bowtie2/2.3.5.1-iccifort-2019.5.281 - x - - - - Bowtie2/2.3.5.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Bracken/", "title": "Bracken", "text": ""}, {"location": "available_software/detail/Bracken/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Bracken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Bracken, load one of these modules using a module load command like:

                  module load Bracken/2.9-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Bracken/2.9-GCCcore-10.3.0 x x x x x x Bracken/2.7-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Brotli-python/", "title": "Brotli-python", "text": ""}, {"location": "available_software/detail/Brotli-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brotli-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Brotli-python, load one of these modules using a module load command like:

                  module load Brotli-python/1.0.9-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brotli-python/1.0.9-GCCcore-11.3.0 x x x x x x Brotli-python/1.0.9-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Brotli/", "title": "Brotli", "text": ""}, {"location": "available_software/detail/Brotli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brotli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Brotli, load one of these modules using a module load command like:

                  module load Brotli/1.1.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brotli/1.1.0-GCCcore-13.2.0 x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x Brotli/1.0.9-GCCcore-11.3.0 x x x x x x Brotli/1.0.9-GCCcore-11.2.0 x x x x x x Brotli/1.0.9-GCCcore-10.3.0 x x x x x x Brotli/1.0.9-GCCcore-10.2.0 x - x x x x"}, {"location": "available_software/detail/Brunsli/", "title": "Brunsli", "text": ""}, {"location": "available_software/detail/Brunsli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Brunsli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Brunsli, load one of these modules using a module load command like:

                  module load Brunsli/0.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Brunsli/0.1-GCCcore-12.3.0 x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x Brunsli/0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CASPR/", "title": "CASPR", "text": ""}, {"location": "available_software/detail/CASPR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CASPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CASPR, load one of these modules using a module load command like:

                  module load CASPR/20200730-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CASPR/20200730-foss-2022a x x x x x x"}, {"location": "available_software/detail/CCL/", "title": "CCL", "text": ""}, {"location": "available_software/detail/CCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CCL, load one of these modules using a module load command like:

                  module load CCL/1.12.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CCL/1.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/CD-HIT/", "title": "CD-HIT", "text": ""}, {"location": "available_software/detail/CD-HIT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CD-HIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CD-HIT, load one of these modules using a module load command like:

                  module load CD-HIT/4.8.1-iccifort-2019.5.281\n
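
                  As a hedged example (proteins.fa is a placeholder protein FASTA file), clustering sequences at 90% identity could look like:

                  # cluster protein sequences at 90% identity (word size 5)\nmodule load CD-HIT/4.8.1-iccifort-2019.5.281\ncd-hit -i proteins.fa -o proteins_nr90.fa -c 0.9 -n 5\n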

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CD-HIT/4.8.1-iccifort-2019.5.281 - x x - x x CD-HIT/4.8.1-GCC-12.2.0 x x x x x x CD-HIT/4.8.1-GCC-11.2.0 x x x - x x CD-HIT/4.8.1-GCC-10.2.0 - x x x x x CD-HIT/4.8.1-GCC-9.3.0 - x x - x x CD-HIT/4.8.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/CDAT/", "title": "CDAT", "text": ""}, {"location": "available_software/detail/CDAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CDAT, load one of these modules using a module load command like:

                  module load CDAT/8.2.1-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDAT/8.2.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/CDBtools/", "title": "CDBtools", "text": ""}, {"location": "available_software/detail/CDBtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDBtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CDBtools, load one of these modules using a module load command like:

                  module load CDBtools/0.99-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDBtools/0.99-GCC-10.2.0 x x x - x x"}, {"location": "available_software/detail/CDO/", "title": "CDO", "text": ""}, {"location": "available_software/detail/CDO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CDO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CDO, load one of these modules using a module load command like:

                  module load CDO/2.0.5-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CDO/2.0.5-gompi-2021b x x x x x x CDO/1.9.10-gompi-2021a x x x - x x CDO/1.9.8-intel-2019b - x x - x x"}, {"location": "available_software/detail/CENSO/", "title": "CENSO", "text": ""}, {"location": "available_software/detail/CENSO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CENSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CENSO, load one of these modules using a module load command like:

                  module load CENSO/1.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CENSO/1.2.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/CESM-deps/", "title": "CESM-deps", "text": ""}, {"location": "available_software/detail/CESM-deps/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CESM-deps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CESM-deps, load one of these modules using a module load command like:

                  module load CESM-deps/2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CESM-deps/2-foss-2021b x x x - x x"}, {"location": "available_software/detail/CFDEMcoupling/", "title": "CFDEMcoupling", "text": ""}, {"location": "available_software/detail/CFDEMcoupling/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CFDEMcoupling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CFDEMcoupling, load one of these modules using a module load command like:

                  module load CFDEMcoupling/3.8.0-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CFDEMcoupling/3.8.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/CFITSIO/", "title": "CFITSIO", "text": ""}, {"location": "available_software/detail/CFITSIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CFITSIO, load one of these modules using a module load command like:

                  module load CFITSIO/4.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x CFITSIO/4.2.0-GCCcore-11.3.0 x x x x x x CFITSIO/4.1.0-GCCcore-11.3.0 x x x x x x CFITSIO/3.49-GCCcore-11.2.0 x x x x x x CFITSIO/3.47-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CGAL/", "title": "CGAL", "text": ""}, {"location": "available_software/detail/CGAL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CGAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CGAL, load one of these modules using a module load command like:

                  module load CGAL/5.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CGAL/5.6-GCCcore-12.3.0 x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x CGAL/5.2-iimpi-2020b - x - - - - CGAL/5.2-gompi-2020b x x x x x x CGAL/4.14.3-iimpi-2021a - x x - x x CGAL/4.14.3-gompi-2022a x x x x x x CGAL/4.14.3-gompi-2021b x x x x x x CGAL/4.14.3-gompi-2021a x x x x x x CGAL/4.14.3-gompi-2020a-Python-3.8.2 - x x - x x CGAL/4.14.1-foss-2019b-Python-3.7.4 x x x - x x CGAL/4.14.1-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/CGmapTools/", "title": "CGmapTools", "text": ""}, {"location": "available_software/detail/CGmapTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CGmapTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CGmapTools, load one of these modules using a module load command like:

                  module load CGmapTools/0.1.2-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CGmapTools/0.1.2-intel-2019b - x x - x x"}, {"location": "available_software/detail/CIRCexplorer2/", "title": "CIRCexplorer2", "text": ""}, {"location": "available_software/detail/CIRCexplorer2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRCexplorer2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CIRCexplorer2, load one of these modules using a module load command like:

                  module load CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRCexplorer2/2.3.8-foss-2021b-Python-2.7.18 x x x x x x CIRCexplorer2/2.3.8-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CIRI-long/", "title": "CIRI-long", "text": ""}, {"location": "available_software/detail/CIRI-long/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRI-long installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CIRI-long, load one of these modules using a module load command like:

                  module load CIRI-long/1.0.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRI-long/1.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/CIRIquant/", "title": "CIRIquant", "text": ""}, {"location": "available_software/detail/CIRIquant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CIRIquant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CIRIquant, load one of these modules using a module load command like:

                  module load CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CIRIquant/1.1.2-20221201-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CITE-seq-Count/", "title": "CITE-seq-Count", "text": ""}, {"location": "available_software/detail/CITE-seq-Count/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CITE-seq-Count installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CITE-seq-Count, load one of these modules using a module load command like:

                  module load CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CITE-seq-Count/1.4.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/CLEAR/", "title": "CLEAR", "text": ""}, {"location": "available_software/detail/CLEAR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CLEAR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CLEAR, load one of these modules using a module load command like:

                  module load CLEAR/20210117-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CLEAR/20210117-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/CLHEP/", "title": "CLHEP", "text": ""}, {"location": "available_software/detail/CLHEP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CLHEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CLHEP, load one of these modules using a module load command like:

                  module load CLHEP/2.4.6.4-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CLHEP/2.4.6.4-GCC-12.2.0 x x x x x x CLHEP/2.4.5.3-GCC-11.3.0 x x x x x x CLHEP/2.4.5.1-GCC-11.2.0 x x x x x x CLHEP/2.4.4.0-GCC-11.2.0 x x x x x x CLHEP/2.4.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/CMAverse/", "title": "CMAverse", "text": ""}, {"location": "available_software/detail/CMAverse/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMAverse installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CMAverse, load one of these modules using a module load command like:

                  module load CMAverse/20220112-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMAverse/20220112-foss-2021b x x x - x x"}, {"location": "available_software/detail/CMSeq/", "title": "CMSeq", "text": ""}, {"location": "available_software/detail/CMSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CMSeq, load one of these modules using a module load command like:

                  module load CMSeq/1.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMSeq/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CMake/", "title": "CMake", "text": ""}, {"location": "available_software/detail/CMake/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CMake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CMake, load one of these modules using a module load command like:

                  module load CMake/3.27.6-GCCcore-13.2.0\n
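
                  As a minimal sketch (assuming a project with a CMakeLists.txt in the current directory), configuring and building out-of-source could look like:

                  # configure into ./build and compile with 4 parallel jobs\nmodule load CMake/3.27.6-GCCcore-13.2.0\ncmake -S . -B build\ncmake --build build -j 4\n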

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CMake/3.27.6-GCCcore-13.2.0 x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x CMake/3.24.3-GCCcore-11.3.0 x x x x x x CMake/3.23.1-GCCcore-11.3.0 x x x x x x CMake/3.22.1-GCCcore-11.2.0 x x x x x x CMake/3.21.1-GCCcore-11.2.0 x x x x x x CMake/3.20.1-GCCcore-10.3.0 x x x x x x CMake/3.20.1-GCCcore-10.2.0 x - - - - - CMake/3.18.4-GCCcore-10.2.0 x x x x x x CMake/3.16.4-GCCcore-9.3.0 x x x x x x CMake/3.15.3-GCCcore-8.3.0 x x x x x x CMake/3.13.3-GCCcore-8.2.0 - x - - - - CMake/3.12.1 x x x x x x CMake/3.11.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/COLMAP/", "title": "COLMAP", "text": ""}, {"location": "available_software/detail/COLMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which COLMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using COLMAP, load one of these modules using a module load command like:

                  module load COLMAP/3.8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty COLMAP/3.8-foss-2022b x x x x x x"}, {"location": "available_software/detail/CONCOCT/", "title": "CONCOCT", "text": ""}, {"location": "available_software/detail/CONCOCT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CONCOCT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CONCOCT, load one of these modules using a module load command like:

                  module load CONCOCT/1.1.0-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CONCOCT/1.1.0-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CP2K/", "title": "CP2K", "text": ""}, {"location": "available_software/detail/CP2K/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CP2K installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CP2K, load one of these modules using a module load command like:

                  module load CP2K/2023.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CP2K/2023.1-foss-2023a x x x x x x CP2K/2023.1-foss-2022b x x x x x x CP2K/2022.1-foss-2022a x x x x x x CP2K/9.1-foss-2022a x x x x x x CP2K/8.2-foss-2021a - x x x x - CP2K/8.1-foss-2020b - x x x x - CP2K/7.1-intel-2020a - x x - x x CP2K/7.1-foss-2020a - x x - x x CP2K/6.1-intel-2020a - x x - x x CP2K/5.1-iomkl-2020a - x - - - - CP2K/5.1-intel-2020a-O1 - x - - - - CP2K/5.1-intel-2020a - x x - x x CP2K/5.1-intel-2019b - x - - - - CP2K/5.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/CPC2/", "title": "CPC2", "text": ""}, {"location": "available_software/detail/CPC2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPC2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CPC2, load one of these modules using a module load command like:

                  module load CPC2/1.0.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPC2/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CPLEX/", "title": "CPLEX", "text": ""}, {"location": "available_software/detail/CPLEX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPLEX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using CPLEX, load one of these modules using a module load command like:

                  module load CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPLEX/12.10-GCCcore-8.3.0-Python-3.7.4 x x x x x x"}, {"location": "available_software/detail/CPPE/", "title": "CPPE", "text": ""}, {"location": "available_software/detail/CPPE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CPPE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CPPE, load one of these modules using a module load command like:

                  module load CPPE/0.3.1-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CPPE/0.3.1-GCC-12.2.0 x x x x x x CPPE/0.3.1-GCC-11.3.0 - x x x x x"}, {"location": "available_software/detail/CREST/", "title": "CREST", "text": ""}, {"location": "available_software/detail/CREST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CREST installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CREST, load one of these modules using a module load command like:

                  module load CREST/2.12-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CREST/2.12-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CRISPR-DAV/", "title": "CRISPR-DAV", "text": ""}, {"location": "available_software/detail/CRISPR-DAV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRISPR-DAV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CRISPR-DAV, load one of these modules using a module load command like:

                  module load CRISPR-DAV/2.3.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRISPR-DAV/2.3.4-foss-2020b - x x x x -"}, {"location": "available_software/detail/CRISPResso2/", "title": "CRISPResso2", "text": ""}, {"location": "available_software/detail/CRISPResso2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRISPResso2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CRISPResso2, load one of these modules using a module load command like:

                  module load CRISPResso2/2.2.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRISPResso2/2.2.1-foss-2020b - x x x x x CRISPResso2/2.1.2-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/CRYSTAL17/", "title": "CRYSTAL17", "text": ""}, {"location": "available_software/detail/CRYSTAL17/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CRYSTAL17 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CRYSTAL17, load one of these modules using a module load command like:

                  module load CRYSTAL17/1.0.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CRYSTAL17/1.0.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/CSBDeep/", "title": "CSBDeep", "text": ""}, {"location": "available_software/detail/CSBDeep/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CSBDeep installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CSBDeep, load one of these modules using a module load command like:

                  module load CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CSBDeep/0.7.4-foss-2022a-CUDA-11.7.0 x - - - x - CSBDeep/0.7.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/CUDA/", "title": "CUDA", "text": ""}, {"location": "available_software/detail/CUDA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CUDA, load one of these modules using a module load command like:

                  module load CUDA/12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUDA/12.1.1 x - x - x - CUDA/11.7.0 x x x x x x CUDA/11.4.1 x - - - x - CUDA/11.3.1 x x x - x x CUDA/11.1.1-iccifort-2020.4.304 - - - - x - CUDA/11.1.1-GCC-10.2.0 x x x x x x CUDA/11.0.2-iccifort-2020.1.217 - - - - x - CUDA/10.1.243-iccifort-2019.5.281 - - - - x - CUDA/10.1.243-GCC-8.3.0 x - - - x -"}, {"location": "available_software/detail/CUDAcore/", "title": "CUDAcore", "text": ""}, {"location": "available_software/detail/CUDAcore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUDAcore installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CUDAcore, load one of these modules using a module load command like:

                  module load CUDAcore/11.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUDAcore/11.2.1 x - x - x - CUDAcore/11.1.1 x x x x x x CUDAcore/11.0.2 - - - - x -"}, {"location": "available_software/detail/CUnit/", "title": "CUnit", "text": ""}, {"location": "available_software/detail/CUnit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CUnit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CUnit, load one of these modules using a module load command like:

                  module load CUnit/2.1-3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CUnit/2.1-3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/CVXOPT/", "title": "CVXOPT", "text": ""}, {"location": "available_software/detail/CVXOPT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CVXOPT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CVXOPT, load one of these modules using a module load command like:

                  module load CVXOPT/1.3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CVXOPT/1.3.1-foss-2022a x x x x x x CVXOPT/1.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Calib/", "title": "Calib", "text": ""}, {"location": "available_software/detail/Calib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Calib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Calib, load one of these modules using a module load command like:

                  module load Calib/0.3.4-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Calib/0.3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/Cantera/", "title": "Cantera", "text": ""}, {"location": "available_software/detail/Cantera/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cantera installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cantera, load one of these modules using a module load command like:

                  module load Cantera/3.0.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cantera/3.0.0-foss-2023a x x x x x x Cantera/2.6.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/CapnProto/", "title": "CapnProto", "text": ""}, {"location": "available_software/detail/CapnProto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CapnProto installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CapnProto, load one of these modules using a module load command like:

                  module load CapnProto/1.0.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x CapnProto/0.9.1-GCCcore-11.2.0 x x x - x x CapnProto/0.8.0-GCCcore-9.3.0 - x x x - x"}, {"location": "available_software/detail/Cartopy/", "title": "Cartopy", "text": ""}, {"location": "available_software/detail/Cartopy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cartopy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cartopy, load one of these modules using a module load command like:

                  module load Cartopy/0.22.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cartopy/0.22.0-foss-2023a x x x x x x Cartopy/0.20.3-foss-2022a x x x x x x Cartopy/0.20.3-foss-2021b x x x x x x Cartopy/0.19.0.post1-intel-2020b - x x - x x Cartopy/0.19.0.post1-foss-2020b - x x x x x Cartopy/0.18.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Casanovo/", "title": "Casanovo", "text": ""}, {"location": "available_software/detail/Casanovo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Casanovo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Casanovo, load one of these modules using a module load command like:

                  module load Casanovo/3.3.0-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Casanovo/3.3.0-foss-2022a-CUDA-11.7.0 x - - - x - Casanovo/3.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CatBoost/", "title": "CatBoost", "text": ""}, {"location": "available_software/detail/CatBoost/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatBoost installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CatBoost, load one of these modules using a module load command like:

                  module load CatBoost/1.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatBoost/1.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/CatLearn/", "title": "CatLearn", "text": ""}, {"location": "available_software/detail/CatLearn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatLearn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CatLearn, load one of these modules using a module load command like:

                  module load CatLearn/0.6.2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatLearn/0.6.2-intel-2022a x x x x x x"}, {"location": "available_software/detail/CatMAP/", "title": "CatMAP", "text": ""}, {"location": "available_software/detail/CatMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CatMAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CatMAP, load one of these modules using a module load command like:

                  module load CatMAP/20220519-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CatMAP/20220519-foss-2022a x x x x x x"}, {"location": "available_software/detail/Catch2/", "title": "Catch2", "text": ""}, {"location": "available_software/detail/Catch2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Catch2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Catch2, load one of these modules using a module load command like:

                  module load Catch2/2.13.9-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Catch2/2.13.9-GCCcore-13.2.0 x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Cbc/", "title": "Cbc", "text": ""}, {"location": "available_software/detail/Cbc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cbc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cbc, load one of these modules using a module load command like:

                  module load Cbc/2.10.11-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cbc/2.10.11-foss-2023a x x x x x x Cbc/2.10.5-foss-2022b x x x x x x"}, {"location": "available_software/detail/CellBender/", "title": "CellBender", "text": ""}, {"location": "available_software/detail/CellBender/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellBender installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellBender, load one of these modules using a module load command like:

                  module load CellBender/0.3.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellBender/0.3.1-foss-2022a-CUDA-11.7.0 x - x - x - CellBender/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellOracle/", "title": "CellOracle", "text": ""}, {"location": "available_software/detail/CellOracle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellOracle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellOracle, load one of these modules using a module load command like:

                  module load CellOracle/0.12.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellOracle/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/CellProfiler/", "title": "CellProfiler", "text": ""}, {"location": "available_software/detail/CellProfiler/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellProfiler installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellProfiler, load one of these modules using a module load command like:

                  module load CellProfiler/4.2.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellProfiler/4.2.4-foss-2021a x x x - x x"}, {"location": "available_software/detail/CellRanger-ATAC/", "title": "CellRanger-ATAC", "text": ""}, {"location": "available_software/detail/CellRanger-ATAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRanger-ATAC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellRanger-ATAC, load one of these modules using a module load command like:

                  module load CellRanger-ATAC/2.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRanger-ATAC/2.1.0 x x x x x x CellRanger-ATAC/2.0.0 - x x - x -"}, {"location": "available_software/detail/CellRanger/", "title": "CellRanger", "text": ""}, {"location": "available_software/detail/CellRanger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRanger installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellRanger, load one of these modules using a module load command like:

                  module load CellRanger/7.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRanger/7.0.0 - x x x x x CellRanger/6.1.2 - x x - x x CellRanger/6.0.1 - x x - x - CellRanger/4.0.0 - - x - x - CellRanger/3.1.0 - - x - x -"}, {"location": "available_software/detail/CellRank/", "title": "CellRank", "text": ""}, {"location": "available_software/detail/CellRank/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellRank installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellRank, load one of these modules using a module load command like:

                  module load CellRank/2.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellRank/2.0.2-foss-2022a x x x x x x CellRank/1.4.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/CellTypist/", "title": "CellTypist", "text": ""}, {"location": "available_software/detail/CellTypist/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CellTypist installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CellTypist, load one of these modules using a module load command like:

                  module load CellTypist/1.6.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CellTypist/1.6.2-foss-2023a x x x x x x CellTypist/1.0.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Cellpose/", "title": "Cellpose", "text": ""}, {"location": "available_software/detail/Cellpose/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cellpose installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cellpose, load one of these modules using a module load command like:

                  module load Cellpose/2.2.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cellpose/2.2.2-foss-2022a-CUDA-11.7.0 x - - - x - Cellpose/2.2.2-foss-2022a x - x x x x"}, {"location": "available_software/detail/Centrifuge/", "title": "Centrifuge", "text": ""}, {"location": "available_software/detail/Centrifuge/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Centrifuge installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Centrifuge, load one of these modules using a module load command like:

                  module load Centrifuge/1.0.4-beta-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Centrifuge/1.0.4-beta-gompi-2020a - x x - x x"}, {"location": "available_software/detail/Cereal/", "title": "Cereal", "text": ""}, {"location": "available_software/detail/Cereal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cereal installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cereal, load one of these modules using a module load command like:

                  module load Cereal/1.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cereal/1.3.0 x x x x x x"}, {"location": "available_software/detail/Ceres-Solver/", "title": "Ceres-Solver", "text": ""}, {"location": "available_software/detail/Ceres-Solver/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Ceres-Solver installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ceres-Solver, load one of these modules using a module load command like:

                  module load Ceres-Solver/2.2.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ceres-Solver/2.2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Cgl/", "title": "Cgl", "text": ""}, {"location": "available_software/detail/Cgl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cgl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cgl, load one of these modules using a module load command like:

                  module load Cgl/0.60.8-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cgl/0.60.8-foss-2023a x x x x x x Cgl/0.60.7-foss-2022b x x x x x x"}, {"location": "available_software/detail/CharLS/", "title": "CharLS", "text": ""}, {"location": "available_software/detail/CharLS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CharLS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CharLS, load one of these modules using a module load command like:

                  module load CharLS/2.4.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CharLS/2.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/CheMPS2/", "title": "CheMPS2", "text": ""}, {"location": "available_software/detail/CheMPS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CheMPS2, load one of these modules using a module load command like:

                  module load CheMPS2/1.8.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CheMPS2/1.8.12-foss-2022b x x x x x x CheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/Check/", "title": "Check", "text": ""}, {"location": "available_software/detail/Check/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Check installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Check, load one of these modules using a module load command like:

                  module load Check/0.15.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Check/0.15.2-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/CheckM/", "title": "CheckM", "text": ""}, {"location": "available_software/detail/CheckM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CheckM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CheckM, load one of these modules using a module load command like:

                  module load CheckM/1.1.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CheckM/1.1.3-intel-2020a-Python-3.8.2 - x x - x x CheckM/1.1.3-foss-2021b x x x - x x CheckM/1.1.2-intel-2019b-Python-3.7.4 - x x - x x CheckM/1.1.2-foss-2019b-Python-3.7.4 - x x - x x CheckM/1.0.18-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/Chimera/", "title": "Chimera", "text": ""}, {"location": "available_software/detail/Chimera/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Chimera installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Chimera, load one of these modules using a module load command like:

                  module load Chimera/1.16-linux_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Chimera/1.16-linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Circlator/", "title": "Circlator", "text": ""}, {"location": "available_software/detail/Circlator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Circlator installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Circlator, load one of these modules using a module load command like:

                  module load Circlator/1.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Circlator/1.5.5-foss-2023a x x x x x x"}, {"location": "available_software/detail/Circuitscape/", "title": "Circuitscape", "text": ""}, {"location": "available_software/detail/Circuitscape/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Circuitscape installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Circuitscape, load one of these modules using a module load command like:

                  module load Circuitscape/5.12.3-Julia-1.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Circuitscape/5.12.3-Julia-1.7.2 x x x x x x"}, {"location": "available_software/detail/Clair3/", "title": "Clair3", "text": ""}, {"location": "available_software/detail/Clair3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clair3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clair3, load one of these modules using a module load command like:

                  module load Clair3/1.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clair3/1.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/Clang/", "title": "Clang", "text": ""}, {"location": "available_software/detail/Clang/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clang installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clang, load one of these modules using a module load command like:

                  module load Clang/16.0.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clang/16.0.6-GCCcore-12.3.0 x x x x x x Clang/15.0.5-GCCcore-11.3.0 x x x x x x Clang/13.0.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - Clang/13.0.1-GCCcore-11.3.0 x x x x x x Clang/12.0.1-GCCcore-11.2.0 x x x x x x Clang/12.0.1-GCCcore-10.3.0 x x x x x x Clang/11.0.1-gcccuda-2020b - - - - x - Clang/11.0.1-GCCcore-10.2.0 - x x x x x Clang/10.0.0-GCCcore-9.3.0 - x x - x x Clang/9.0.1-GCCcore-8.3.0 - x x - x x Clang/9.0.1-GCC-8.3.0-CUDA-10.1.243 x - - - x -"}, {"location": "available_software/detail/Clp/", "title": "Clp", "text": ""}, {"location": "available_software/detail/Clp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clp, load one of these modules using a module load command like:

                  module load Clp/1.17.9-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clp/1.17.9-foss-2023a x x x x x x Clp/1.17.8-foss-2022b x x x x x x Clp/1.17.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/Clustal-Omega/", "title": "Clustal-Omega", "text": ""}, {"location": "available_software/detail/Clustal-Omega/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Clustal-Omega installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Clustal-Omega, load one of these modules using a module load command like:

                  module load Clustal-Omega/1.2.4-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Clustal-Omega/1.2.4-intel-compilers-2021.2.0 - x x - x x Clustal-Omega/1.2.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/ClustalW2/", "title": "ClustalW2", "text": ""}, {"location": "available_software/detail/ClustalW2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ClustalW2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ClustalW2, load one of these modules using a module load command like:

                  module load ClustalW2/2.1-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ClustalW2/2.1-intel-2020a - x x - x x"}, {"location": "available_software/detail/CmdStanR/", "title": "CmdStanR", "text": ""}, {"location": "available_software/detail/CmdStanR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CmdStanR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CmdStanR, load one of these modules using a module load command like:

                  module load CmdStanR/0.7.1-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CmdStanR/0.7.1-foss-2023a-R-4.3.2 x x x x x x CmdStanR/0.5.2-foss-2022a-R-4.2.1 x x x x x x CmdStanR/0.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/CodAn/", "title": "CodAn", "text": ""}, {"location": "available_software/detail/CodAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CodAn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CodAn, load one of these modules using a module load command like:

                  module load CodAn/1.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CodAn/1.2-foss-2021b x x x x x x"}, {"location": "available_software/detail/CoinUtils/", "title": "CoinUtils", "text": ""}, {"location": "available_software/detail/CoinUtils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CoinUtils, load one of these modules using a module load command like:

                  module load CoinUtils/2.11.10-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CoinUtils/2.11.10-GCC-12.3.0 x x x x x x CoinUtils/2.11.9-GCC-12.2.0 x x x x x x CoinUtils/2.11.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/ColabFold/", "title": "ColabFold", "text": ""}, {"location": "available_software/detail/ColabFold/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ColabFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ColabFold, load one of these modules using a module load command like:

                  module load ColabFold/1.5.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ColabFold/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - ColabFold/1.5.2-foss-2022a - - x - x -"}, {"location": "available_software/detail/CompareM/", "title": "CompareM", "text": ""}, {"location": "available_software/detail/CompareM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CompareM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CompareM, load one of these modules using a module load command like:

                  module load CompareM/0.1.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CompareM/0.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Compress-Raw-Zlib/", "title": "Compress-Raw-Zlib", "text": ""}, {"location": "available_software/detail/Compress-Raw-Zlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Compress-Raw-Zlib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Compress-Raw-Zlib, load one of these modules using a module load command like:

                  module load Compress-Raw-Zlib/2.202-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Compress-Raw-Zlib/2.202-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Concorde/", "title": "Concorde", "text": ""}, {"location": "available_software/detail/Concorde/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Concorde installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Concorde, load one of these modules using a module load command like:

                  module load Concorde/20031219-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Concorde/20031219-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/CoordgenLibs/", "title": "CoordgenLibs", "text": ""}, {"location": "available_software/detail/CoordgenLibs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CoordgenLibs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CoordgenLibs, load one of these modules using a module load command like:

                  module load CoordgenLibs/3.0.1-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CoordgenLibs/3.0.1-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/CopyKAT/", "title": "CopyKAT", "text": ""}, {"location": "available_software/detail/CopyKAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CopyKAT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CopyKAT, load one of these modules using a module load command like:

                  module load CopyKAT/1.1.0-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CopyKAT/1.1.0-foss-2022b-R-4.2.2 x x x x x x CopyKAT/1.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Coreutils/", "title": "Coreutils", "text": ""}, {"location": "available_software/detail/Coreutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Coreutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Coreutils, load one of these modules using a module load command like:

                  module load Coreutils/8.32-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Coreutils/8.32-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/CppUnit/", "title": "CppUnit", "text": ""}, {"location": "available_software/detail/CppUnit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CppUnit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CppUnit, load one of these modules using a module load command like:

                  module load CppUnit/1.15.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CppUnit/1.15.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/CuPy/", "title": "CuPy", "text": ""}, {"location": "available_software/detail/CuPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which CuPy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using CuPy, load one of these modules using a module load command like:

                  module load CuPy/8.5.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty CuPy/8.5.0-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Cufflinks/", "title": "Cufflinks", "text": ""}, {"location": "available_software/detail/Cufflinks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cufflinks installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cufflinks, load one of these modules using a module load command like:

                  module load Cufflinks/20190706-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cufflinks/20190706-GCC-11.2.0 x x x x x x Cufflinks/20190706-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Cython/", "title": "Cython", "text": ""}, {"location": "available_software/detail/Cython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Cython installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Cython, load one of these modules using a module load command like:

                  module load Cython/3.0.8-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Cython/3.0.8-GCCcore-12.2.0 x x x x x x Cython/3.0.7-GCCcore-12.3.0 x x x x x x Cython/0.29.33-GCCcore-11.3.0 x x x x x x Cython/0.29.22-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/DALI/", "title": "DALI", "text": ""}, {"location": "available_software/detail/DALI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DALI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DALI, load one of these modules using a module load command like:

                  module load DALI/2.1.2-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DALI/2.1.2-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/DAS_Tool/", "title": "DAS_Tool", "text": ""}, {"location": "available_software/detail/DAS_Tool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DAS_Tool installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DAS_Tool, load one of these modules using a module load command like:

                  module load DAS_Tool/1.1.1-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DAS_Tool/1.1.1-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/DB/", "title": "DB", "text": ""}, {"location": "available_software/detail/DB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DB, load one of these modules using a module load command like:

                  module load DB/18.1.40-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DB/18.1.40-GCCcore-12.2.0 x x x x x x DB/18.1.40-GCCcore-11.3.0 x x x x x x DB/18.1.40-GCCcore-11.2.0 x x x x x x DB/18.1.40-GCCcore-10.3.0 x x x x x x DB/18.1.40-GCCcore-10.2.0 x x x x x x DB/18.1.32-GCCcore-9.3.0 x x x x x x DB/18.1.32-GCCcore-8.3.0 x x x x x x DB/18.1.32-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/DBD-mysql/", "title": "DBD-mysql", "text": ""}, {"location": "available_software/detail/DBD-mysql/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBD-mysql installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DBD-mysql, load one of these modules using a module load command like:

                  module load DBD-mysql/4.050-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBD-mysql/4.050-GCC-11.3.0 x x x x x x DBD-mysql/4.050-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/DBG2OLC/", "title": "DBG2OLC", "text": ""}, {"location": "available_software/detail/DBG2OLC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBG2OLC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DBG2OLC, load one of these modules using a module load command like:

                  module load DBG2OLC/20200724-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBG2OLC/20200724-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/DB_File/", "title": "DB_File", "text": ""}, {"location": "available_software/detail/DB_File/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DB_File installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DB_File, load one of these modules using a module load command like:

                  module load DB_File/1.858-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DB_File/1.858-GCCcore-11.3.0 x x x x x x DB_File/1.857-GCCcore-11.2.0 x x x x x x DB_File/1.855-GCCcore-10.2.0 - x x x x x DB_File/1.835-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/DBus/", "title": "DBus", "text": ""}, {"location": "available_software/detail/DBus/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DBus installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DBus, load one of these modules using a module load command like:

                  module load DBus/1.15.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DBus/1.15.4-GCCcore-12.3.0 x x x x x x DBus/1.15.2-GCCcore-12.2.0 x x x x x x DBus/1.14.0-GCCcore-11.3.0 x x x x x x DBus/1.13.18-GCCcore-11.2.0 x x x x x x DBus/1.13.18-GCCcore-10.3.0 x x x x x x DBus/1.13.18-GCCcore-10.2.0 x x x x x x DBus/1.13.12-GCCcore-9.3.0 - x x - x x DBus/1.13.12-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/DETONATE/", "title": "DETONATE", "text": ""}, {"location": "available_software/detail/DETONATE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DETONATE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DETONATE, load one of these modules using a module load command like:

                  module load DETONATE/1.11-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DETONATE/1.11-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/DFT-D3/", "title": "DFT-D3", "text": ""}, {"location": "available_software/detail/DFT-D3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DFT-D3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DFT-D3, load one of these modules using a module load command like:

                  module load DFT-D3/3.2.0-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DFT-D3/3.2.0-intel-compilers-2021.2.0 - x x - x x DFT-D3/3.2.0-iccifort-2020.4.304 - x x x x x"}, {"location": "available_software/detail/DIA-NN/", "title": "DIA-NN", "text": ""}, {"location": "available_software/detail/DIA-NN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIA-NN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIA-NN, load one of these modules using a module load command like:

                  module load DIA-NN/1.8.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIA-NN/1.8.1 x x x - x x"}, {"location": "available_software/detail/DIALOGUE/", "title": "DIALOGUE", "text": ""}, {"location": "available_software/detail/DIALOGUE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIALOGUE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIALOGUE, load one of these modules using a module load command like:

                  module load DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIALOGUE/1.0-20230228-foss-2021b-R-4.2.0 x x x x x x"}, {"location": "available_software/detail/DIAMOND/", "title": "DIAMOND", "text": ""}, {"location": "available_software/detail/DIAMOND/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIAMOND installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIAMOND, load one of these modules using a module load command like:

                  module load DIAMOND/2.1.8-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIAMOND/2.1.8-GCC-12.3.0 x x x x x x DIAMOND/2.1.8-GCC-12.2.0 x x x x x x DIAMOND/2.1.0-GCC-11.3.0 x x x x x x DIAMOND/2.0.13-GCC-11.2.0 x x x x x x DIAMOND/2.0.11-GCC-10.3.0 - x x - x x DIAMOND/2.0.7-GCC-10.2.0 x x x x x x DIAMOND/2.0.6-GCC-10.2.0 - x - - - - DIAMOND/0.9.30-iccifort-2019.5.281 - x x - x x DIAMOND/0.9.30-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/DIANA/", "title": "DIANA", "text": ""}, {"location": "available_software/detail/DIANA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIANA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIANA, load one of these modules using a module load command like:

                  module load DIANA/10.5\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIANA/10.5 - x x - x - DIANA/10.4 - - x - x -"}, {"location": "available_software/detail/DIRAC/", "title": "DIRAC", "text": ""}, {"location": "available_software/detail/DIRAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DIRAC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DIRAC, load one of these modules using a module load command like:

                  module load DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DIRAC/19.0-intel-2020a-Python-2.7.18-mpi-int64 - x x - x - DIRAC/19.0-intel-2020a-Python-2.7.18-int64 - x x - x x"}, {"location": "available_software/detail/DL_POLY_Classic/", "title": "DL_POLY_Classic", "text": ""}, {"location": "available_software/detail/DL_POLY_Classic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DL_POLY_Classic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DL_POLY_Classic, load one of these modules using a module load command like:

                  module load DL_POLY_Classic/1.10-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DL_POLY_Classic/1.10-intel-2019b - x x - x x DL_POLY_Classic/1.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/DMCfun/", "title": "DMCfun", "text": ""}, {"location": "available_software/detail/DMCfun/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DMCfun installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using DMCfun, load one of these modules using a module load command like:

                  module load DMCfun/1.3.0-foss-2019b-R-3.6.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DMCfun/1.3.0-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/DOLFIN/", "title": "DOLFIN", "text": ""}, {"location": "available_software/detail/DOLFIN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DOLFIN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DOLFIN, load one of these modules using a module load command like:

                  module load DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DOLFIN/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/DRAGMAP/", "title": "DRAGMAP", "text": ""}, {"location": "available_software/detail/DRAGMAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DRAGMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DRAGMAP, load one of these modules using a module load command like:

                  module load DRAGMAP/1.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DRAGMAP/1.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/DROP/", "title": "DROP", "text": ""}, {"location": "available_software/detail/DROP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DROP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DROP, load one of these modules using a module load command like:

                  module load DROP/1.1.0-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DROP/1.1.0-foss-2020b-R-4.0.3 - x x x x x DROP/1.0.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/DUBStepR/", "title": "DUBStepR", "text": ""}, {"location": "available_software/detail/DUBStepR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DUBStepR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DUBStepR, load one of these modules using a module load command like:

                  module load DUBStepR/1.2.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DUBStepR/1.2.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/Dakota/", "title": "Dakota", "text": ""}, {"location": "available_software/detail/Dakota/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dakota installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Dakota, load one of these modules using a module load command like:

                  module load Dakota/6.16.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dakota/6.16.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Dalton/", "title": "Dalton", "text": ""}, {"location": "available_software/detail/Dalton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dalton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Dalton, load one of these modules using a module load command like:

                  module load Dalton/2020.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dalton/2020.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/DeepLoc/", "title": "DeepLoc", "text": ""}, {"location": "available_software/detail/DeepLoc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DeepLoc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DeepLoc, load one of these modules using a module load command like:

                  module load DeepLoc/2.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DeepLoc/2.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/Delly/", "title": "Delly", "text": ""}, {"location": "available_software/detail/Delly/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Delly installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Delly, load one of these modules using a module load command like:

                  module load Delly/0.8.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Delly/0.8.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/DendroPy/", "title": "DendroPy", "text": ""}, {"location": "available_software/detail/DendroPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DendroPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DendroPy, load one of these modules using a module load command like:

                  module load DendroPy/4.6.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.3.0 x x x x x x DendroPy/4.5.2-GCCcore-11.2.0 x x x - x x DendroPy/4.5.2-GCCcore-10.2.0-Python-2.7.18 - x x x x x DendroPy/4.5.2-GCCcore-10.2.0 - x x x x x DendroPy/4.4.0-GCCcore-9.3.0 - x x - x x DendroPy/4.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/DensPart/", "title": "DensPart", "text": ""}, {"location": "available_software/detail/DensPart/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DensPart installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DensPart, load one of these modules using a module load command like:

                  module load DensPart/20220603-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DensPart/20220603-intel-2022a x x x x x x"}, {"location": "available_software/detail/Deprecated/", "title": "Deprecated", "text": ""}, {"location": "available_software/detail/Deprecated/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Deprecated installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Deprecated, load one of these modules using a module load command like:

                  module load Deprecated/1.2.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Deprecated/1.2.13-foss-2022a x x x x x x Deprecated/1.2.13-foss-2021a x x x x x x"}, {"location": "available_software/detail/DiCE-ML/", "title": "DiCE-ML", "text": ""}, {"location": "available_software/detail/DiCE-ML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DiCE-ML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DiCE-ML, load one of these modules using a module load command like:

                  module load DiCE-ML/0.9-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DiCE-ML/0.9-foss-2022a x x x x x x"}, {"location": "available_software/detail/Dice/", "title": "Dice", "text": ""}, {"location": "available_software/detail/Dice/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dice installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Dice, load one of these modules using a module load command like:

                  module load Dice/20240101-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dice/20240101-foss-2022b x x x x x x Dice/20221025-foss-2022a - x x x x x"}, {"location": "available_software/detail/DoubletFinder/", "title": "DoubletFinder", "text": ""}, {"location": "available_software/detail/DoubletFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DoubletFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DoubletFinder, load one of these modules using a module load command like:

                  module load DoubletFinder/2.0.3-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DoubletFinder/2.0.3-foss-2020a-R-4.0.0 - - x - x - DoubletFinder/2.0.3-20230819-foss-2022b-R-4.2.2 x x x x x x DoubletFinder/2.0.3-20230131-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Doxygen/", "title": "Doxygen", "text": ""}, {"location": "available_software/detail/Doxygen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Doxygen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Doxygen, load one of these modules using a module load command like:

                  module load Doxygen/1.9.7-GCCcore-12.3.0\n
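                  After loading the module, a typical first run generates a template configuration file and then builds the documentation (a sketch using Doxygen's standard command line; Doxyfile is the conventional configuration file name):

                  doxygen -g Doxyfile\n
                  doxygen Doxyfile\n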

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x Doxygen/1.9.4-GCCcore-11.3.0 x x x x x x Doxygen/1.9.1-GCCcore-11.2.0 x x x x x x Doxygen/1.9.1-GCCcore-10.3.0 x x x x x x Doxygen/1.8.20-GCCcore-10.2.0 x x x x x x Doxygen/1.8.17-GCCcore-9.3.0 x x x x x x Doxygen/1.8.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Dsuite/", "title": "Dsuite", "text": ""}, {"location": "available_software/detail/Dsuite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Dsuite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Dsuite, load one of these modules using a module load command like:

                  module load Dsuite/20210718-intel-compilers-2021.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Dsuite/20210718-intel-compilers-2021.2.0 - x x - x x"}, {"location": "available_software/detail/DualSPHysics/", "title": "DualSPHysics", "text": ""}, {"location": "available_software/detail/DualSPHysics/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DualSPHysics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DualSPHysics, load one of these modules using a module load command like:

                  module load DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DualSPHysics/5.0.175-GCC-11.2.0-CUDA-11.4.1 x - - - x -"}, {"location": "available_software/detail/DyMat/", "title": "DyMat", "text": ""}, {"location": "available_software/detail/DyMat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which DyMat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using DyMat, load one of these modules using a module load command like:

                  module load DyMat/0.7-foss-2021b-2020-12-12\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty DyMat/0.7-foss-2021b-2020-12-12 x x x - x x"}, {"location": "available_software/detail/EDirect/", "title": "EDirect", "text": ""}, {"location": "available_software/detail/EDirect/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EDirect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using EDirect, load one of these modules using a module load command like:

                  module load EDirect/20.5.20231006-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EDirect/20.5.20231006-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ELPA/", "title": "ELPA", "text": ""}, {"location": "available_software/detail/ELPA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ELPA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ELPA, load one of these modules using a module load command like:

                  module load ELPA/2021.05.001-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ELPA/2021.05.001-intel-2021b x x x - x x ELPA/2021.05.001-intel-2021a - x x - x x ELPA/2021.05.001-foss-2021b x x x - x x ELPA/2020.11.001-intel-2020b - x x x x x ELPA/2019.11.001-intel-2019b - x x - x x ELPA/2019.11.001-foss-2019b - x x - x x"}, {"location": "available_software/detail/EMBOSS/", "title": "EMBOSS", "text": ""}, {"location": "available_software/detail/EMBOSS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EMBOSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using EMBOSS, load one of these modules using a module load command like:

                  module load EMBOSS/6.6.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EMBOSS/6.6.0-foss-2021b x x x - x x EMBOSS/6.6.0-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/ESM-2/", "title": "ESM-2", "text": ""}, {"location": "available_software/detail/ESM-2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESM-2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ESM-2, load one of these modules using a module load command like:

                  module load ESM-2/2.0.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESM-2/2.0.0-foss-2022b x x x x x x ESM-2/2.0.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/ESMF/", "title": "ESMF", "text": ""}, {"location": "available_software/detail/ESMF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESMF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ESMF, load one of these modules using a module load command like:

                  module load ESMF/8.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESMF/8.2.0-foss-2021b x x x - x x ESMF/8.1.1-foss-2021a - x x - x x ESMF/8.0.1-intel-2020b - x x x x x ESMF/8.0.1-foss-2020a - x x - x x ESMF/8.0.0-intel-2019b - x x - x x"}, {"location": "available_software/detail/ESMPy/", "title": "ESMPy", "text": ""}, {"location": "available_software/detail/ESMPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ESMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ESMPy, load one of these modules using a module load command like:

                  module load ESMPy/8.0.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ESMPy/8.0.1-intel-2020b - x x - x x ESMPy/8.0.1-foss-2020a-Python-3.8.2 - x x - x x ESMPy/8.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ETE/", "title": "ETE", "text": ""}, {"location": "available_software/detail/ETE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ETE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ETE, load one of these modules using a module load command like:

                  module load ETE/3.1.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ETE/3.1.3-foss-2022b x x x x x x ETE/3.1.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/EUKulele/", "title": "EUKulele", "text": ""}, {"location": "available_software/detail/EUKulele/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EUKulele installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using EUKulele, load one of these modules using a module load command like:

                  module load EUKulele/2.0.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EUKulele/2.0.6-foss-2022a x x x x x x EUKulele/1.0.4-foss-2020b - x x - x x"}, {"location": "available_software/detail/EasyBuild/", "title": "EasyBuild", "text": ""}, {"location": "available_software/detail/EasyBuild/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using EasyBuild, load one of these modules using a module load command like:

                  module load EasyBuild/4.9.0\n
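                  Once loaded, you can verify which EasyBuild release is active via its own eb command (a minimal check, not specific to this cluster overview):

                  eb --version\n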

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EasyBuild/4.9.0 x x x x x x EasyBuild/4.8.2 x x x x x x EasyBuild/4.8.1 x x x x x x EasyBuild/4.8.0 x x x x x x EasyBuild/4.7.1 x x x x x x EasyBuild/4.7.0 x x x x x x EasyBuild/4.6.2 x x x x x x EasyBuild/4.6.1 x x x x x x EasyBuild/4.6.0 x x x x x x EasyBuild/4.5.5 x x x x x x EasyBuild/4.5.4 x x x x x x EasyBuild/4.5.3 x x x x x x EasyBuild/4.5.2 x x x x x x EasyBuild/4.5.1 x x x x x x EasyBuild/4.5.0 x x x x x x EasyBuild/4.4.2 x x x x x x EasyBuild/4.4.1 x x x x x x EasyBuild/4.4.0 x x x x x x EasyBuild/4.3.4 x x x x x x EasyBuild/4.3.3 x x x x x x EasyBuild/4.3.2 x x x x x x EasyBuild/4.3.1 x x x x x x EasyBuild/4.3.0 x x x x x x EasyBuild/4.2.2 x x x x x x EasyBuild/4.2.1 x x x x x x EasyBuild/4.2.0 x x x x x x"}, {"location": "available_software/detail/Eigen/", "title": "Eigen", "text": ""}, {"location": "available_software/detail/Eigen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Eigen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Eigen, load one of these modules using a module load command like:

                  module load Eigen/3.4.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Eigen/3.4.0-GCCcore-13.2.0 x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x Eigen/3.4.0-GCCcore-11.3.0 x x x x x x Eigen/3.4.0-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-11.2.0 x x x x x x Eigen/3.3.9-GCCcore-10.3.0 x x x x x x Eigen/3.3.9-GCCcore-10.2.0 - - x x x x Eigen/3.3.8-GCCcore-10.2.0 x x x x x x Eigen/3.3.7-GCCcore-9.3.0 x x x x x x Eigen/3.3.7 x x x x x x"}, {"location": "available_software/detail/Elk/", "title": "Elk", "text": ""}, {"location": "available_software/detail/Elk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Elk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Elk, load one of these modules using a module load command like:

                  module load Elk/7.0.12-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Elk/7.0.12-foss-2020b - x x x x x"}, {"location": "available_software/detail/EpiSCORE/", "title": "EpiSCORE", "text": ""}, {"location": "available_software/detail/EpiSCORE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which EpiSCORE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using EpiSCORE, load one of these modules using a module load command like:

                  module load EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty EpiSCORE/0.9.5-20220621-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/Excel-Writer-XLSX/", "title": "Excel-Writer-XLSX", "text": ""}, {"location": "available_software/detail/Excel-Writer-XLSX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Excel-Writer-XLSX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Excel-Writer-XLSX, load one of these modules using a module load command like:

                  module load Excel-Writer-XLSX/1.09-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Excel-Writer-XLSX/1.09-foss-2020b - x x x x x"}, {"location": "available_software/detail/Exonerate/", "title": "Exonerate", "text": ""}, {"location": "available_software/detail/Exonerate/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Exonerate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Exonerate, load one of these modules using a module load command like:

                  module load Exonerate/2.4.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Exonerate/2.4.0-iccifort-2019.5.281 - x x - x x Exonerate/2.4.0-GCC-12.2.0 x x x x x x Exonerate/2.4.0-GCC-11.2.0 x x x x x x Exonerate/2.4.0-GCC-10.2.0 x x x - x x Exonerate/2.4.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ExtremeLy/", "title": "ExtremeLy", "text": ""}, {"location": "available_software/detail/ExtremeLy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ExtremeLy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ExtremeLy, load one of these modules using a module load command like:

                  module load ExtremeLy/2.3.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ExtremeLy/2.3.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/FALCON/", "title": "FALCON", "text": ""}, {"location": "available_software/detail/FALCON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FALCON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FALCON, load one of these modules using a module load command like:

                  module load FALCON/1.8.8-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FALCON/1.8.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/FASTA/", "title": "FASTA", "text": ""}, {"location": "available_software/detail/FASTA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FASTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FASTA, load one of these modules using a module load command like:

                  module load FASTA/36.3.8i-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FASTA/36.3.8i-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/FASTX-Toolkit/", "title": "FASTX-Toolkit", "text": ""}, {"location": "available_software/detail/FASTX-Toolkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FASTX-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FASTX-Toolkit, load one of these modules using a module load command like:

                  module load FASTX-Toolkit/0.0.14-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FASTX-Toolkit/0.0.14-GCC-11.3.0 x x x x x x FASTX-Toolkit/0.0.14-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/FDS/", "title": "FDS", "text": ""}, {"location": "available_software/detail/FDS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FDS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FDS, load one of these modules using a module load command like:

                  module load FDS/6.8.0-intel-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FDS/6.8.0-intel-2022b x x x x x x FDS/6.7.9-intel-2022a x x x - x x FDS/6.7.7-intel-2021b x x x - x x FDS/6.7.6-intel-2020b - x x x x x FDS/6.7.5-intel-2020b - - x - x - FDS/6.7.5-intel-2020a - x x - x x FDS/6.7.4-intel-2020a - x x - x x"}, {"location": "available_software/detail/FEniCS/", "title": "FEniCS", "text": ""}, {"location": "available_software/detail/FEniCS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FEniCS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FEniCS, load one of these modules using a module load command like:

                  module load FEniCS/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FEniCS/2019.1.0-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FFAVES/", "title": "FFAVES", "text": ""}, {"location": "available_software/detail/FFAVES/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFAVES installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FFAVES, load one of these modules using a module load command like:

                  module load FFAVES/2022.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFAVES/2022.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/FFC/", "title": "FFC", "text": ""}, {"location": "available_software/detail/FFC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FFC, load one of these modules using a module load command like:

                  module load FFC/2019.1.0.post0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFC/2019.1.0.post0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FFTW.MPI/", "title": "FFTW.MPI", "text": ""}, {"location": "available_software/detail/FFTW.MPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FFTW.MPI, load one of these modules using a module load command like:

                  module load FFTW.MPI/3.3.10-gompi-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFTW.MPI/3.3.10-gompi-2023b x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x FFTW.MPI/3.3.10-gompi-2022a x x x x x x"}, {"location": "available_software/detail/FFTW/", "title": "FFTW", "text": ""}, {"location": "available_software/detail/FFTW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FFTW, load one of these modules using a module load command like:

                  module load FFTW/3.3.10-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFTW/3.3.10-gompi-2021b x x x x x x FFTW/3.3.10-GCC-13.2.0 x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x FFTW/3.3.10-GCC-11.3.0 x x x x x x FFTW/3.3.9-intel-2021a - x x - x x FFTW/3.3.9-gompi-2021a x x x x x x FFTW/3.3.8-iomkl-2020a - x - - - - FFTW/3.3.8-intelcuda-2020b - - - - x - FFTW/3.3.8-intel-2020b - x x x x x FFTW/3.3.8-intel-2020a - x x - x x FFTW/3.3.8-intel-2019b - x x - x x FFTW/3.3.8-iimpi-2020b - x - - - - FFTW/3.3.8-gompic-2020b x - - - x - FFTW/3.3.8-gompi-2020b x x x x x x FFTW/3.3.8-gompi-2020a - x x - x x FFTW/3.3.8-gompi-2019b x x x - x x"}, {"location": "available_software/detail/FFmpeg/", "title": "FFmpeg", "text": ""}, {"location": "available_software/detail/FFmpeg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FFmpeg, load one of these modules using a module load command like:

                  module load FFmpeg/6.0-GCCcore-12.3.0\n
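                  After loading the module, a quick version check and a basic conversion might look like this (a sketch; input.avi and output.mp4 are placeholder file names):

                  ffmpeg -version\n
                  ffmpeg -i input.avi output.mp4\n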

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FFmpeg/6.0-GCCcore-12.3.0 x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x FFmpeg/4.4.2-GCCcore-11.3.0 x x x x x x FFmpeg/4.3.2-GCCcore-11.2.0 x x x x x x FFmpeg/4.3.2-GCCcore-10.3.0 x x x x x x FFmpeg/4.3.1-GCCcore-10.2.0 x x x x x x FFmpeg/4.2.2-GCCcore-9.3.0 - x x - x x FFmpeg/4.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FIAT/", "title": "FIAT", "text": ""}, {"location": "available_software/detail/FIAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FIAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FIAT, load one of these modules using a module load command like:

                  module load FIAT/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FIAT/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FIGARO/", "title": "FIGARO", "text": ""}, {"location": "available_software/detail/FIGARO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FIGARO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FIGARO, load one of these modules using a module load command like:

                  module load FIGARO/1.1.2-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FIGARO/1.1.2-intel-2020b - - x - x x"}, {"location": "available_software/detail/FLAC/", "title": "FLAC", "text": ""}, {"location": "available_software/detail/FLAC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLAC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FLAC, load one of these modules using a module load command like:

                  module load FLAC/1.4.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLAC/1.4.2-GCCcore-12.3.0 x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x FLAC/1.3.4-GCCcore-11.3.0 x x x x x x FLAC/1.3.3-GCCcore-11.2.0 x x x x x x FLAC/1.3.3-GCCcore-10.3.0 x x x x x x FLAC/1.3.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/FLAIR/", "title": "FLAIR", "text": ""}, {"location": "available_software/detail/FLAIR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLAIR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FLAIR, load one of these modules using a module load command like:

                  module load FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLAIR/1.5.1-20200630-foss-2019b-Python-3.7.4 - x x - x - FLAIR/1.5-foss-2019b-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/FLANN/", "title": "FLANN", "text": ""}, {"location": "available_software/detail/FLANN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLANN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FLANN, load one of these modules using a module load command like:

                  module load FLANN/1.9.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLANN/1.9.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/FLASH/", "title": "FLASH", "text": ""}, {"location": "available_software/detail/FLASH/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FLASH, load one of these modules using a module load command like:

                  module load FLASH/2.2.00-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLASH/2.2.00-foss-2020b - x x x x x FLASH/2.2.00-GCC-11.2.0 x x x - x x FLASH/1.2.11-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/FLTK/", "title": "FLTK", "text": ""}, {"location": "available_software/detail/FLTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FLTK, load one of these modules using a module load command like:

                  module load FLTK/1.3.5-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLTK/1.3.5-GCCcore-10.2.0 - x x x x x FLTK/1.3.5-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/FLUENT/", "title": "FLUENT", "text": ""}, {"location": "available_software/detail/FLUENT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FLUENT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FLUENT, load one of these modules using a module load command like:

                  module load FLUENT/2023R1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FLUENT/2023R1 x x x x x x FLUENT/2022R1 - x x - x x FLUENT/2021R2 x x x x x x FLUENT/2019R3 - x x - x x"}, {"location": "available_software/detail/FMM3D/", "title": "FMM3D", "text": ""}, {"location": "available_software/detail/FMM3D/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FMM3D installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FMM3D, load one of these modules using a module load command like:

                  module load FMM3D/20211018-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FMM3D/20211018-foss-2020b - x x x x x"}, {"location": "available_software/detail/FMPy/", "title": "FMPy", "text": ""}, {"location": "available_software/detail/FMPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FMPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FMPy, load one of these modules using a module load command like:

                  module load FMPy/0.3.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FMPy/0.3.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/FSL/", "title": "FSL", "text": ""}, {"location": "available_software/detail/FSL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FSL, load one of these modules using a module load command like:

                  module load FSL/6.0.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FSL/6.0.7.2 x x x x x x FSL/6.0.5.1-foss-2021a - x x - x x FSL/6.0.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FabIO/", "title": "FabIO", "text": ""}, {"location": "available_software/detail/FabIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FabIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FabIO, load one of these modules using a module load command like:

                  module load FabIO/0.11.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FabIO/0.11.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Faiss/", "title": "Faiss", "text": ""}, {"location": "available_software/detail/Faiss/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Faiss installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Faiss, load one of these modules using a module load command like:

                  module load Faiss/1.7.2-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Faiss/1.7.2-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/FastANI/", "title": "FastANI", "text": ""}, {"location": "available_software/detail/FastANI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastANI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FastANI, load one of these modules using a module load command like:

                  module load FastANI/1.34-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastANI/1.34-GCC-12.3.0 x x x x x x FastANI/1.33-intel-compilers-2021.4.0 x x x - x x FastANI/1.33-iccifort-2020.4.304 - x x x x x FastANI/1.33-GCC-11.2.0 x x x - x x FastANI/1.33-GCC-10.2.0 - x x - x - FastANI/1.31-iccifort-2020.1.217 - x x - x x FastANI/1.3-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/FastME/", "title": "FastME", "text": ""}, {"location": "available_software/detail/FastME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FastME, load one of these modules using a module load command like:

                  module load FastME/2.1.6.3-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastME/2.1.6.3-GCC-12.3.0 x x x x x x FastME/2.1.6.1-iccifort-2019.5.281 - x x - x x FastME/2.1.6.1-GCC-10.2.0 - x x x x x FastME/2.1.6.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastQC/", "title": "FastQC", "text": ""}, {"location": "available_software/detail/FastQC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FastQC, load one of these modules using a module load command like:

                  module load FastQC/0.11.9-Java-11\n
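                  After loading the module, running the quality check on a single file could look like this (a sketch; reads.fastq.gz and qc_reports are placeholders, and FastQC expects the output directory to exist):

                  mkdir -p qc_reports\n
                  fastqc --outdir qc_reports reads.fastq.gz\n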

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastQC/0.11.9-Java-11 x x x x x x"}, {"location": "available_software/detail/FastQ_Screen/", "title": "FastQ_Screen", "text": ""}, {"location": "available_software/detail/FastQ_Screen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastQ_Screen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FastQ_Screen, load one of these modules using a module load command like:

                  module load FastQ_Screen/0.14.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastQ_Screen/0.14.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/FastTree/", "title": "FastTree", "text": ""}, {"location": "available_software/detail/FastTree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FastTree, load one of these modules using a module load command like:

                  module load FastTree/2.1.11-GCCcore-12.3.0\n
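                  After loading the module, inferring a tree from a nucleotide alignment could look like this (a sketch assuming the executable is installed as FastTree, following the upstream usage; aln.fasta and tree.nwk are placeholder names):

                  FastTree -nt aln.fasta > tree.nwk\n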

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastTree/2.1.11-GCCcore-12.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.3.0 x x x x x x FastTree/2.1.11-GCCcore-11.2.0 x x x - x x FastTree/2.1.11-GCCcore-10.2.0 - x x x x x FastTree/2.1.11-GCCcore-9.3.0 - x x - x x FastTree/2.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/FastViromeExplorer/", "title": "FastViromeExplorer", "text": ""}, {"location": "available_software/detail/FastViromeExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FastViromeExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FastViromeExplorer, load one of these modules using a module load command like:

                  module load FastViromeExplorer/20180422-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FastViromeExplorer/20180422-foss-2019b - x x - x x"}, {"location": "available_software/detail/Fastaq/", "title": "Fastaq", "text": ""}, {"location": "available_software/detail/Fastaq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fastaq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Fastaq, load one of these modules using a module load command like:

                  module load Fastaq/3.17.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fastaq/3.17.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Fiji/", "title": "Fiji", "text": ""}, {"location": "available_software/detail/Fiji/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fiji installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Fiji, load one of these modules using a module load command like:

                  module load Fiji/2.9.0-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fiji/2.9.0-Java-1.8 x x x - x x"}, {"location": "available_software/detail/Filtlong/", "title": "Filtlong", "text": ""}, {"location": "available_software/detail/Filtlong/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Filtlong installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Filtlong, load one of these modules using a module load command like:

                  module load Filtlong/0.2.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Filtlong/0.2.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Fiona/", "title": "Fiona", "text": ""}, {"location": "available_software/detail/Fiona/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Fiona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Fiona, load one of these modules using a module load command like:

                  module load Fiona/1.9.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Fiona/1.9.5-foss-2023a x x x x x x Fiona/1.9.2-foss-2022b x x x x x x Fiona/1.8.21-foss-2022a x x x x x x Fiona/1.8.21-foss-2021b x x x x x x Fiona/1.8.20-intel-2020b - x x - x x Fiona/1.8.20-foss-2020b - x x x x x Fiona/1.8.16-foss-2020a-Python-3.8.2 - x x - x x Fiona/1.8.13-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Flask/", "title": "Flask", "text": ""}, {"location": "available_software/detail/Flask/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Flask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Flask, load one of these modules using a module load command like:

                  module load Flask/2.2.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Flask/2.2.2-GCCcore-11.3.0 x x x x x x Flask/2.0.2-GCCcore-11.2.0 x x x - x x Flask/1.1.4-GCCcore-10.3.0 x x x x x x Flask/1.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FlexiBLAS/", "title": "FlexiBLAS", "text": ""}, {"location": "available_software/detail/FlexiBLAS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FlexiBLAS, load one of these modules using a module load command like:

                  module load FlexiBLAS/3.3.1-GCC-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x FlexiBLAS/3.2.0-GCC-11.3.0 x x x x x x FlexiBLAS/3.0.4-GCC-11.2.0 x x x x x x FlexiBLAS/3.0.4-GCC-10.3.0 x x x x x x"}, {"location": "available_software/detail/Flye/", "title": "Flye", "text": ""}, {"location": "available_software/detail/Flye/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Flye installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Flye, load one of these modules using a module load command like:

                  module load Flye/2.9.2-GCC-11.3.0\n
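                  After loading the module, an assembly run could look like this (a sketch based on the upstream Flye command line; reads.fastq.gz is a placeholder and the thread count should match the cores requested in your job):

                  flye --nano-raw reads.fastq.gz --out-dir flye_out --threads 8\n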

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Flye/2.9.2-GCC-11.3.0 x x x x x x Flye/2.9-intel-compilers-2021.2.0 - x x - x x Flye/2.9-GCC-10.3.0 x x x x x - Flye/2.8.3-iccifort-2020.4.304 - x x - x - Flye/2.8.3-GCC-10.2.0 - x x - x - Flye/2.8.1-intel-2020a-Python-3.8.2 - x x - x x Flye/2.7-intel-2019b-Python-3.7.4 - x - - - - Flye/2.6-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FragGeneScan/", "title": "FragGeneScan", "text": ""}, {"location": "available_software/detail/FragGeneScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FragGeneScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FragGeneScan, load one of these modules using a module load command like:

                  module load FragGeneScan/1.31-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FragGeneScan/1.31-GCCcore-11.3.0 x x x x x x FragGeneScan/1.31-GCCcore-11.2.0 x x x - x x FragGeneScan/1.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/FreeBarcodes/", "title": "FreeBarcodes", "text": ""}, {"location": "available_software/detail/FreeBarcodes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which FreeBarcodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using FreeBarcodes, load one of these modules using a module load command like:

                  module load FreeBarcodes/3.0.a5-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeBarcodes/3.0.a5-foss-2021b x x x - x x"}, {"location": "available_software/detail/FreeFEM/", "title": "FreeFEM", "text": ""}, {"location": "available_software/detail/FreeFEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeFEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FreeFEM, load one of these modules using a module load command like:

                  module load FreeFEM/4.5-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeFEM/4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/FreeImage/", "title": "FreeImage", "text": ""}, {"location": "available_software/detail/FreeImage/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeImage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FreeImage, load one of these modules using a module load command like:

                  module load FreeImage/3.18.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeImage/3.18.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/FreeSurfer/", "title": "FreeSurfer", "text": ""}, {"location": "available_software/detail/FreeSurfer/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeSurfer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FreeSurfer, load one of these modules using a module load command like:

                  module load FreeSurfer/7.3.2-centos8_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeSurfer/7.3.2-centos8_x86_64 x x x - x x FreeSurfer/7.2.0-centos8_x86_64 - x x - x x"}, {"location": "available_software/detail/FreeXL/", "title": "FreeXL", "text": ""}, {"location": "available_software/detail/FreeXL/#available-modules", "title": "Available modules", "text": "

The overview below shows which FreeXL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FreeXL, load one of these modules using a module load command like:

                  module load FreeXL/1.0.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FreeXL/1.0.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/FriBidi/", "title": "FriBidi", "text": ""}, {"location": "available_software/detail/FriBidi/#available-modules", "title": "Available modules", "text": "

The overview below shows which FriBidi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FriBidi, load one of these modules using a module load command like:

                  module load FriBidi/1.0.12-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x FriBidi/1.0.12-GCCcore-11.3.0 x x x x x x FriBidi/1.0.10-GCCcore-11.2.0 x x x x x x FriBidi/1.0.10-GCCcore-10.3.0 x x x x x x FriBidi/1.0.10-GCCcore-10.2.0 x x x x x x FriBidi/1.0.9-GCCcore-9.3.0 - x x - x x FriBidi/1.0.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/FuSeq/", "title": "FuSeq", "text": ""}, {"location": "available_software/detail/FuSeq/#available-modules", "title": "Available modules", "text": "

The overview below shows which FuSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FuSeq, load one of these modules using a module load command like:

                  module load FuSeq/1.1.2-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FuSeq/1.1.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/FusionCatcher/", "title": "FusionCatcher", "text": ""}, {"location": "available_software/detail/FusionCatcher/#available-modules", "title": "Available modules", "text": "

The overview below shows which FusionCatcher installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using FusionCatcher, load one of these modules using a module load command like:

                  module load FusionCatcher/1.30-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty FusionCatcher/1.30-foss-2019b-Python-2.7.16 - x x - x x FusionCatcher/1.20-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/GAPPadder/", "title": "GAPPadder", "text": ""}, {"location": "available_software/detail/GAPPadder/#available-modules", "title": "Available modules", "text": "

The overview below shows which GAPPadder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GAPPadder, load one of these modules using a module load command like:

                  module load GAPPadder/20170601-foss-2021b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GAPPadder/20170601-foss-2021b-Python-2.7.18 x x x x x x"}, {"location": "available_software/detail/GATB-Core/", "title": "GATB-Core", "text": ""}, {"location": "available_software/detail/GATB-Core/#available-modules", "title": "Available modules", "text": "

The overview below shows which GATB-Core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GATB-Core, load one of these modules using a module load command like:

                  module load GATB-Core/1.4.2-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATB-Core/1.4.2-gompi-2022a x x x x x x"}, {"location": "available_software/detail/GATE/", "title": "GATE", "text": ""}, {"location": "available_software/detail/GATE/#available-modules", "title": "Available modules", "text": "

The overview below shows which GATE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GATE, load one of these modules using a module load command like:

                  module load GATE/9.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATE/9.2-foss-2022a x x x x x x GATE/9.2-foss-2021b x x x x x x GATE/9.1-foss-2021b x x x x x x GATE/9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GATK/", "title": "GATK", "text": ""}, {"location": "available_software/detail/GATK/#available-modules", "title": "Available modules", "text": "

The overview below shows which GATK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GATK, load one of these modules using a module load command like:

                  module load GATK/4.4.0.0-GCCcore-12.3.0-Java-17\n
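After loading the module as shown above, the gatk wrapper script should be on your path; a minimal sanity check, assuming the wrapper and its Java dependency are provided by the module:

gatk --list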

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GATK/4.4.0.0-GCCcore-12.3.0-Java-17 x x x x x x GATK/4.3.0.0-GCCcore-11.3.0-Java-11 x x x x x x GATK/4.2.0.0-GCCcore-10.2.0-Java-11 - x x x x x GATK/4.1.8.1-GCCcore-9.3.0-Java-1.8 - x x - x x"}, {"location": "available_software/detail/GBprocesS/", "title": "GBprocesS", "text": ""}, {"location": "available_software/detail/GBprocesS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GBprocesS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GBprocesS, load one of these modules using a module load command like:

                  module load GBprocesS/4.0.0.post1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GBprocesS/4.0.0.post1-foss-2022a x x x x x x GBprocesS/2.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GCC/", "title": "GCC", "text": ""}, {"location": "available_software/detail/GCC/#available-modules", "title": "Available modules", "text": "

The overview below shows which GCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GCC, load one of these modules using a module load command like:

                  module load GCC/13.2.0\n
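After loading a GCC module as shown above, the matching compilers are on your path. A minimal smoke test; hello.c is a throwaway example file created on the spot, not something provided by the module:

gcc --version
echo 'int main(void){ return 0; }' > hello.c
gcc -O2 hello.c -o hello && ./hello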

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GCC/13.2.0 x x x x x x GCC/12.3.0 x x x x x x GCC/12.2.0 x x x x x x GCC/11.3.0 x x x x x x GCC/11.2.0 x x x x x x GCC/10.3.0 x x x x x x GCC/10.2.0 x x x x x x GCC/9.3.0 - x x x x x GCC/8.3.0 x x x x x x"}, {"location": "available_software/detail/GCCcore/", "title": "GCCcore", "text": ""}, {"location": "available_software/detail/GCCcore/#available-modules", "title": "Available modules", "text": "

The overview below shows which GCCcore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GCCcore, load one of these modules using a module load command like:

                  module load GCCcore/13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GCCcore/13.2.0 x x x x x x GCCcore/12.3.0 x x x x x x GCCcore/12.2.0 x x x x x x GCCcore/11.3.0 x x x x x x GCCcore/11.2.0 x x x x x x GCCcore/10.3.0 x x x x x x GCCcore/10.2.0 x x x x x x GCCcore/9.3.0 x x x x x x GCCcore/8.3.0 x x x x x x GCCcore/8.2.0 - x - - - -"}, {"location": "available_software/detail/GConf/", "title": "GConf", "text": ""}, {"location": "available_software/detail/GConf/#available-modules", "title": "Available modules", "text": "

The overview below shows which GConf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GConf, load one of these modules using a module load command like:

                  module load GConf/3.2.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GConf/3.2.6-GCCcore-11.2.0 x x x x x x GConf/3.2.6-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GDAL/", "title": "GDAL", "text": ""}, {"location": "available_software/detail/GDAL/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GDAL, load one of these modules using a module load command like:

                  module load GDAL/3.7.1-foss-2023a\n
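After loading the module as shown above, the GDAL command-line utilities become available; a quick check:

gdalinfo --version
ogrinfo --formats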

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDAL/3.7.1-foss-2023a x x x x x x GDAL/3.6.2-foss-2022b x x x x x x GDAL/3.5.0-foss-2022a x x x x x x GDAL/3.3.2-foss-2021b x x x x x x GDAL/3.3.0-foss-2021a x x x x x x GDAL/3.2.1-intel-2020b - x x - x x GDAL/3.2.1-fosscuda-2020b - - - - x - GDAL/3.2.1-foss-2020b - x x x x x GDAL/3.0.4-foss-2020a-Python-3.8.2 - x x - x x GDAL/3.0.2-intel-2019b-Python-3.7.4 - - x - x x GDAL/3.0.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDB/", "title": "GDB", "text": ""}, {"location": "available_software/detail/GDB/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GDB, load one of these modules using a module load command like:

                  module load GDB/9.1-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDB/9.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GDCM/", "title": "GDCM", "text": ""}, {"location": "available_software/detail/GDCM/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDCM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GDCM, load one of these modules using a module load command like:

                  module load GDCM/3.0.21-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDCM/3.0.21-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/GDGraph/", "title": "GDGraph", "text": ""}, {"location": "available_software/detail/GDGraph/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GDGraph, load one of these modules using a module load command like:

                  module load GDGraph/1.56-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDGraph/1.56-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GDRCopy/", "title": "GDRCopy", "text": ""}, {"location": "available_software/detail/GDRCopy/#available-modules", "title": "Available modules", "text": "

The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GDRCopy, load one of these modules using a module load command like:

                  module load GDRCopy/2.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GDRCopy/2.3.1-GCCcore-12.3.0 x - x - x - GDRCopy/2.3-GCCcore-11.3.0 x x x - x x GDRCopy/2.3-GCCcore-11.2.0 x x x - x x GDRCopy/2.2-GCCcore-10.3.0 x - - - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - GDRCopy/2.1-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x"}, {"location": "available_software/detail/GEGL/", "title": "GEGL", "text": ""}, {"location": "available_software/detail/GEGL/#available-modules", "title": "Available modules", "text": "

The overview below shows which GEGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GEGL, load one of these modules using a module load command like:

                  module load GEGL/0.4.30-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GEGL/0.4.30-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GEOS/", "title": "GEOS", "text": ""}, {"location": "available_software/detail/GEOS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GEOS, load one of these modules using a module load command like:

                  module load GEOS/3.12.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GEOS/3.12.0-GCC-12.3.0 x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x GEOS/3.10.3-GCC-11.3.0 x x x x x x GEOS/3.9.1-iccifort-2020.4.304 - x x x x x GEOS/3.9.1-GCC-11.2.0 x x x x x x GEOS/3.9.1-GCC-10.3.0 x x x x x x GEOS/3.9.1-GCC-10.2.0 - x x x x x GEOS/3.8.1-GCC-9.3.0-Python-3.8.2 - x x - x x GEOS/3.8.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x GEOS/3.8.0-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GFF3-toolkit/", "title": "GFF3-toolkit", "text": ""}, {"location": "available_software/detail/GFF3-toolkit/#available-modules", "title": "Available modules", "text": "

The overview below shows which GFF3-toolkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GFF3-toolkit, load one of these modules using a module load command like:

                  module load GFF3-toolkit/2.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GFF3-toolkit/2.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/GIMP/", "title": "GIMP", "text": ""}, {"location": "available_software/detail/GIMP/#available-modules", "title": "Available modules", "text": "

The overview below shows which GIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GIMP, load one of these modules using a module load command like:

                  module load GIMP/2.10.24-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GIMP/2.10.24-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/GL2PS/", "title": "GL2PS", "text": ""}, {"location": "available_software/detail/GL2PS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GL2PS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GL2PS, load one of these modules using a module load command like:

                  module load GL2PS/1.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GL2PS/1.4.2-GCCcore-11.3.0 x x x x x x GL2PS/1.4.2-GCCcore-11.2.0 x x x x x x GL2PS/1.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLFW/", "title": "GLFW", "text": ""}, {"location": "available_software/detail/GLFW/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLFW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GLFW, load one of these modules using a module load command like:

                  module load GLFW/3.3.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLFW/3.3.8-GCCcore-12.3.0 x x x x x x GLFW/3.3.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/GLIMPSE/", "title": "GLIMPSE", "text": ""}, {"location": "available_software/detail/GLIMPSE/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLIMPSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GLIMPSE, load one of these modules using a module load command like:

                  module load GLIMPSE/2.0.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLIMPSE/2.0.0-GCC-12.2.0 x x x x x x GLIMPSE/2.0.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GLM/", "title": "GLM", "text": ""}, {"location": "available_software/detail/GLM/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GLM, load one of these modules using a module load command like:

                  module load GLM/0.9.9.8-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLM/0.9.9.8-GCCcore-10.2.0 x x x x x x GLM/0.9.9.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLPK/", "title": "GLPK", "text": ""}, {"location": "available_software/detail/GLPK/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLPK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GLPK, load one of these modules using a module load command like:

                  module load GLPK/5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLPK/5.0-GCCcore-12.3.0 x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x GLPK/5.0-GCCcore-11.3.0 x x x x x x GLPK/5.0-GCCcore-11.2.0 x x x x x x GLPK/5.0-GCCcore-10.3.0 x x x x x x GLPK/4.65-GCCcore-10.2.0 x x x x x x GLPK/4.65-GCCcore-9.3.0 - x x - x x GLPK/4.65-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GLib/", "title": "GLib", "text": ""}, {"location": "available_software/detail/GLib/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GLib, load one of these modules using a module load command like:

                  module load GLib/2.77.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLib/2.77.1-GCCcore-12.3.0 x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x GLib/2.72.1-GCCcore-11.3.0 x x x x x x GLib/2.69.1-GCCcore-11.2.0 x x x x x x GLib/2.68.2-GCCcore-10.3.0 x x x x x x GLib/2.66.1-GCCcore-10.2.0 x x x x x x GLib/2.64.1-GCCcore-9.3.0 x x x x x x GLib/2.62.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/GLibmm/", "title": "GLibmm", "text": ""}, {"location": "available_software/detail/GLibmm/#available-modules", "title": "Available modules", "text": "

The overview below shows which GLibmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GLibmm, load one of these modules using a module load command like:

                  module load GLibmm/2.66.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GLibmm/2.66.4-GCCcore-10.3.0 - x x - x x GLibmm/2.49.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GMAP-GSNAP/", "title": "GMAP-GSNAP", "text": ""}, {"location": "available_software/detail/GMAP-GSNAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which GMAP-GSNAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GMAP-GSNAP, load one of these modules using a module load command like:

                  module load GMAP-GSNAP/2023-04-20-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GMAP-GSNAP/2023-04-20-GCC-12.2.0 x x x x x x GMAP-GSNAP/2023-02-17-GCC-11.3.0 x x x x x x GMAP-GSNAP/2019-09-12-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/GMP/", "title": "GMP", "text": ""}, {"location": "available_software/detail/GMP/#available-modules", "title": "Available modules", "text": "

The overview below shows which GMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GMP, load one of these modules using a module load command like:

                  module load GMP/6.2.1-GCCcore-12.3.0\n
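GMP is a library rather than an end-user tool, so after loading the module you would compile your own code against it. A minimal sketch, assuming the GCCcore compiler pulled in as a dependency provides gcc, with myprog.c a placeholder for your own source file that includes the gmp.h header:

gcc myprog.c -lgmp -o myprog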

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GMP/6.2.1-GCCcore-12.3.0 x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x GMP/6.2.1-GCCcore-11.3.0 x x x x x x GMP/6.2.1-GCCcore-11.2.0 x x x x x x GMP/6.2.1-GCCcore-10.3.0 x x x x x x GMP/6.2.0-GCCcore-10.2.0 x x x x x x GMP/6.2.0-GCCcore-9.3.0 x x x x x x GMP/6.1.2-GCCcore-8.3.0 x x x x x x GMP/6.1.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/GOATOOLS/", "title": "GOATOOLS", "text": ""}, {"location": "available_software/detail/GOATOOLS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GOATOOLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GOATOOLS, load one of these modules using a module load command like:

                  module load GOATOOLS/1.3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GOATOOLS/1.3.1-foss-2022a x x x x x x GOATOOLS/1.3.1-foss-2021b x x x x x x GOATOOLS/1.1.6-foss-2020b - x x x x x"}, {"location": "available_software/detail/GObject-Introspection/", "title": "GObject-Introspection", "text": ""}, {"location": "available_software/detail/GObject-Introspection/#available-modules", "title": "Available modules", "text": "

The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GObject-Introspection, load one of these modules using a module load command like:

                  module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x GObject-Introspection/1.72.0-GCCcore-11.3.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-11.2.0 x x x x x x GObject-Introspection/1.68.0-GCCcore-10.3.0 x x x x x x GObject-Introspection/1.66.1-GCCcore-10.2.0 x x x x x x GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x GObject-Introspection/1.63.1-GCCcore-8.3.0-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/GPAW-setups/", "title": "GPAW-setups", "text": ""}, {"location": "available_software/detail/GPAW-setups/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPAW-setups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GPAW-setups, load one of these modules using a module load command like:

                  module load GPAW-setups/0.9.20000\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPAW-setups/0.9.20000 x x x x x x"}, {"location": "available_software/detail/GPAW/", "title": "GPAW", "text": ""}, {"location": "available_software/detail/GPAW/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPAW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GPAW, load one of these modules using a module load command like:

                  module load GPAW/22.8.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPAW/22.8.0-intel-2022a x x x x x x GPAW/22.8.0-intel-2021b x x x - x x GPAW/22.8.0-foss-2021b x x x - x x GPAW/20.1.0-intel-2019b-Python-3.7.4 - x x - x x GPAW/20.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GPy/", "title": "GPy", "text": ""}, {"location": "available_software/detail/GPy/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GPy, load one of these modules using a module load command like:

                  module load GPy/1.10.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPy/1.10.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/GPyOpt/", "title": "GPyOpt", "text": ""}, {"location": "available_software/detail/GPyOpt/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPyOpt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GPyOpt, load one of these modules using a module load command like:

                  module load GPyOpt/1.2.6-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPyOpt/1.2.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/GPyTorch/", "title": "GPyTorch", "text": ""}, {"location": "available_software/detail/GPyTorch/#available-modules", "title": "Available modules", "text": "

The overview below shows which GPyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GPyTorch, load one of these modules using a module load command like:

                  module load GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GPyTorch/1.6.0-foss-2021a-CUDA-11.3.1 x - - - x - GPyTorch/1.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/GRASP-suite/", "title": "GRASP-suite", "text": ""}, {"location": "available_software/detail/GRASP-suite/#available-modules", "title": "Available modules", "text": "

The overview below shows which GRASP-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GRASP-suite, load one of these modules using a module load command like:

                  module load GRASP-suite/2023-05-09-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GRASP-suite/2023-05-09-Java-17 x x x x x x"}, {"location": "available_software/detail/GRASS/", "title": "GRASS", "text": ""}, {"location": "available_software/detail/GRASS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GRASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GRASS, load one of these modules using a module load command like:

                  module load GRASS/8.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GRASS/8.2.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/GROMACS/", "title": "GROMACS", "text": ""}, {"location": "available_software/detail/GROMACS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GROMACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GROMACS, load one of these modules using a module load command like:

                  module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2\n
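After loading one of these modules you can check which GROMACS driver binary the build provides; whether it is gmx or gmx_mpi depends on how the particular build was configured, so treat the name below as an assumption:

gmx --version    # or gmx_mpi --version, depending on the build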

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GROMACS/2021.3-foss-2021a-CUDA-11.3.1-PLUMED-2.7.2 x - - - x - GROMACS/2021.3-foss-2021a-CUDA-11.3.1 x - - - x - GROMACS/2021.2-fosscuda-2020b x - - - x - GROMACS/2021-foss-2020b - x x x x x GROMACS/2020-foss-2019b - x x - x - GROMACS/2019.4-foss-2019b - x x - x - GROMACS/2019.3-foss-2019b - x x - x -"}, {"location": "available_software/detail/GSL/", "title": "GSL", "text": ""}, {"location": "available_software/detail/GSL/#available-modules", "title": "Available modules", "text": "

The overview below shows which GSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GSL, load one of these modules using a module load command like:

                  module load GSL/2.7-intel-compilers-2021.4.0\n
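GSL is a library, so after loading a module you would link your own code against it. A minimal sketch using one of the GCC-based builds from the list below, with mycode.c a placeholder for your own source file that uses the GSL headers:

module load GSL/2.7-GCC-12.3.0
gcc mycode.c -lgsl -lgslcblas -lm -o mycode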

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GSL/2.7-intel-compilers-2021.4.0 x x x - x x GSL/2.7-GCC-12.3.0 x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x GSL/2.7-GCC-11.3.0 x x x x x x GSL/2.7-GCC-11.2.0 x x x x x x GSL/2.7-GCC-10.3.0 x x x x x x GSL/2.6-iccifort-2020.4.304 - x x x x x GSL/2.6-iccifort-2020.1.217 - x x - x x GSL/2.6-iccifort-2019.5.281 - x x - x x GSL/2.6-GCC-10.2.0 x x x x x x GSL/2.6-GCC-9.3.0 - x x x x x GSL/2.6-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GST-plugins-bad/", "title": "GST-plugins-bad", "text": ""}, {"location": "available_software/detail/GST-plugins-bad/#available-modules", "title": "Available modules", "text": "

The overview below shows which GST-plugins-bad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GST-plugins-bad, load one of these modules using a module load command like:

                  module load GST-plugins-bad/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GST-plugins-bad/1.20.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GST-plugins-base/", "title": "GST-plugins-base", "text": ""}, {"location": "available_software/detail/GST-plugins-base/#available-modules", "title": "Available modules", "text": "

The overview below shows which GST-plugins-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GST-plugins-base, load one of these modules using a module load command like:

                  module load GST-plugins-base/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GST-plugins-base/1.20.2-GCC-11.3.0 x x x x x x GST-plugins-base/1.18.5-GCC-11.2.0 x x x x x x GST-plugins-base/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GStreamer/", "title": "GStreamer", "text": ""}, {"location": "available_software/detail/GStreamer/#available-modules", "title": "Available modules", "text": "

The overview below shows which GStreamer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GStreamer, load one of these modules using a module load command like:

                  module load GStreamer/1.20.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GStreamer/1.20.2-GCC-11.3.0 x x x x x x GStreamer/1.18.5-GCC-11.2.0 x x x x x x GStreamer/1.18.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTDB-Tk/", "title": "GTDB-Tk", "text": ""}, {"location": "available_software/detail/GTDB-Tk/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTDB-Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GTDB-Tk, load one of these modules using a module load command like:

                  module load GTDB-Tk/2.3.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTDB-Tk/2.3.2-foss-2023a x x x x x x GTDB-Tk/2.0.0-intel-2021b x x x - x x GTDB-Tk/1.7.0-intel-2020b - x x - x x GTDB-Tk/1.5.0-intel-2020b - x x - x x GTDB-Tk/1.3.0-intel-2020a-Python-3.8.2 - x x - x x GTDB-Tk/1.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GTK%2B/", "title": "GTK+", "text": ""}, {"location": "available_software/detail/GTK%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GTK+, load one of these modules using a module load command like:

                  module load GTK+/3.24.23-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK+/3.24.23-GCCcore-10.2.0 x x x x x x GTK+/3.24.13-GCCcore-8.3.0 - x x - x x GTK+/2.24.33-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/GTK2/", "title": "GTK2", "text": ""}, {"location": "available_software/detail/GTK2/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GTK2, load one of these modules using a module load command like:

                  module load GTK2/2.24.33-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK2/2.24.33-GCCcore-11.3.0 x x x x x x GTK2/2.24.33-GCCcore-10.3.0 - - x - x -"}, {"location": "available_software/detail/GTK3/", "title": "GTK3", "text": ""}, {"location": "available_software/detail/GTK3/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GTK3, load one of these modules using a module load command like:

                  module load GTK3/3.24.37-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK3/3.24.37-GCCcore-12.3.0 x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x GTK3/3.24.31-GCCcore-11.2.0 x x x x x x GTK3/3.24.29-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/GTK4/", "title": "GTK4", "text": ""}, {"location": "available_software/detail/GTK4/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTK4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GTK4, load one of these modules using a module load command like:

                  module load GTK4/4.7.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTK4/4.7.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/GTS/", "title": "GTS", "text": ""}, {"location": "available_software/detail/GTS/#available-modules", "title": "Available modules", "text": "

The overview below shows which GTS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GTS, load one of these modules using a module load command like:

                  module load GTS/0.7.6-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GTS/0.7.6-foss-2019b - x x - x x GTS/0.7.6-GCCcore-12.3.0 x x x x x x GTS/0.7.6-GCCcore-11.3.0 x x x x x x GTS/0.7.6-GCCcore-11.2.0 x x x x x x GTS/0.7.6-GCCcore-10.3.0 x x x x x x GTS/0.7.6-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/GUSHR/", "title": "GUSHR", "text": ""}, {"location": "available_software/detail/GUSHR/#available-modules", "title": "Available modules", "text": "

The overview below shows which GUSHR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GUSHR, load one of these modules using a module load command like:

                  module load GUSHR/2020-09-28-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GUSHR/2020-09-28-foss-2021b x x x x x x"}, {"location": "available_software/detail/GapFiller/", "title": "GapFiller", "text": ""}, {"location": "available_software/detail/GapFiller/#available-modules", "title": "Available modules", "text": "

The overview below shows which GapFiller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GapFiller, load one of these modules using a module load command like:

                  module load GapFiller/2.1.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GapFiller/2.1.2-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Gaussian/", "title": "Gaussian", "text": ""}, {"location": "available_software/detail/Gaussian/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gaussian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Gaussian, load one of these modules using a module load command like:

                  module load Gaussian/g16_C.01-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gaussian/g16_C.01-intel-2022a x x x x x x Gaussian/g16_C.01-intel-2019b - x x - x x Gaussian/g16_C.01-iimpi-2020b x x x x x x"}, {"location": "available_software/detail/Gblocks/", "title": "Gblocks", "text": ""}, {"location": "available_software/detail/Gblocks/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gblocks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Gblocks, load one of these modules using a module load command like:

                  module load Gblocks/0.91b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gblocks/0.91b x x x x x x"}, {"location": "available_software/detail/Gdk-Pixbuf/", "title": "Gdk-Pixbuf", "text": ""}, {"location": "available_software/detail/Gdk-Pixbuf/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Gdk-Pixbuf, load one of these modules using a module load command like:

                  module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x Gdk-Pixbuf/2.42.8-GCCcore-11.3.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-11.2.0 x x x x x x Gdk-Pixbuf/2.42.6-GCCcore-10.3.0 x x x x x x Gdk-Pixbuf/2.40.0-GCCcore-10.2.0 x x x x x x Gdk-Pixbuf/2.38.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Geant4/", "title": "Geant4", "text": ""}, {"location": "available_software/detail/Geant4/#available-modules", "title": "Available modules", "text": "

The overview below shows which Geant4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Geant4, load one of these modules using a module load command like:

                  module load Geant4/11.0.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Geant4/11.0.2-GCC-11.3.0 x x x x x x Geant4/11.0.2-GCC-11.2.0 x x x - x x Geant4/11.0.1-GCC-11.2.0 x x x x x x Geant4/10.7.1-GCC-11.2.0 x x x x x x Geant4/10.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/GeneMark-ET/", "title": "GeneMark-ET", "text": ""}, {"location": "available_software/detail/GeneMark-ET/#available-modules", "title": "Available modules", "text": "

The overview below shows which GeneMark-ET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GeneMark-ET, load one of these modules using a module load command like:

                  module load GeneMark-ET/4.71-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GeneMark-ET/4.71-GCCcore-11.3.0 x x x x x x GeneMark-ET/4.71-GCCcore-11.2.0 x x x x x x GeneMark-ET/4.65-GCCcore-10.2.0 x x x x x x GeneMark-ET/4.57-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GenomeThreader/", "title": "GenomeThreader", "text": ""}, {"location": "available_software/detail/GenomeThreader/#available-modules", "title": "Available modules", "text": "

The overview below shows which GenomeThreader installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GenomeThreader, load one of these modules using a module load command like:

                  module load GenomeThreader/1.7.3-Linux_x86_64-64bit\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GenomeThreader/1.7.3-Linux_x86_64-64bit x x x x x x"}, {"location": "available_software/detail/GenomeWorks/", "title": "GenomeWorks", "text": ""}, {"location": "available_software/detail/GenomeWorks/#available-modules", "title": "Available modules", "text": "

The overview below shows which GenomeWorks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GenomeWorks, load one of these modules using a module load command like:

                  module load GenomeWorks/2021.02.2-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GenomeWorks/2021.02.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/Gerris/", "title": "Gerris", "text": ""}, {"location": "available_software/detail/Gerris/#available-modules", "title": "Available modules", "text": "

The overview below shows which Gerris installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Gerris, load one of these modules using a module load command like:

                  module load Gerris/20131206-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gerris/20131206-gompi-2023a x x x x x x"}, {"location": "available_software/detail/GetOrganelle/", "title": "GetOrganelle", "text": ""}, {"location": "available_software/detail/GetOrganelle/#available-modules", "title": "Available modules", "text": "

The overview below shows which GetOrganelle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GetOrganelle, load one of these modules using a module load command like:

                  module load GetOrganelle/1.7.5.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GetOrganelle/1.7.5.3-foss-2021b x x x - x x GetOrganelle/1.7.4-pre2-foss-2020b - x x x x x GetOrganelle/1.7.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/GffCompare/", "title": "GffCompare", "text": ""}, {"location": "available_software/detail/GffCompare/#available-modules", "title": "Available modules", "text": "

The overview below shows which GffCompare installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GffCompare, load one of these modules using a module load command like:

                  module load GffCompare/0.12.6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GffCompare/0.12.6-GCC-11.2.0 x x x x x x GffCompare/0.11.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Ghostscript/", "title": "Ghostscript", "text": ""}, {"location": "available_software/detail/Ghostscript/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Ghostscript, load one of these modules using a module load command like:

                  module load Ghostscript/10.01.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x Ghostscript/9.56.1-GCCcore-11.3.0 x x x x x x Ghostscript/9.54.0-GCCcore-11.2.0 x x x x x x Ghostscript/9.54.0-GCCcore-10.3.0 x x x x x x Ghostscript/9.53.3-GCCcore-10.2.0 x x x x x x Ghostscript/9.52-GCCcore-9.3.0 - x x - x x Ghostscript/9.50-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/GimmeMotifs/", "title": "GimmeMotifs", "text": ""}, {"location": "available_software/detail/GimmeMotifs/#available-modules", "title": "Available modules", "text": "

The overview below shows which GimmeMotifs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GimmeMotifs, load one of these modules using a module load command like:

                  module load GimmeMotifs/0.17.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GimmeMotifs/0.17.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Giotto-Suite/", "title": "Giotto-Suite", "text": ""}, {"location": "available_software/detail/Giotto-Suite/#available-modules", "title": "Available modules", "text": "

The overview below shows which Giotto-Suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Giotto-Suite, load one of these modules using a module load command like:

                  module load Giotto-Suite/3.0.1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Giotto-Suite/3.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/GitPython/", "title": "GitPython", "text": ""}, {"location": "available_software/detail/GitPython/#available-modules", "title": "Available modules", "text": "

The overview below shows which GitPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using GitPython, load one of these modules using a module load command like:

                  module load GitPython/3.1.40-GCCcore-12.3.0\n
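After loading the module as shown above, the package is importable from the Python it was built against (pulled in as a dependency); a quick check:

python -c 'import git; print(git.__version__)'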

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GitPython/3.1.40-GCCcore-12.3.0 x x x x x x GitPython/3.1.31-GCCcore-12.2.0 x x x x x x GitPython/3.1.27-GCCcore-11.3.0 x x x x x x GitPython/3.1.24-GCCcore-11.2.0 x x x - x x GitPython/3.1.14-GCCcore-10.2.0 - x x x x x GitPython/3.1.9-GCCcore-9.3.0-Python-3.8.2 - x x - x x GitPython/3.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/GlimmerHMM/", "title": "GlimmerHMM", "text": ""}, {"location": "available_software/detail/GlimmerHMM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GlimmerHMM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GlimmerHMM, load one of these modules using a module load command like:

                  module load GlimmerHMM/3.0.4c-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GlimmerHMM/3.0.4c-GCC-10.2.0 - x x x x x GlimmerHMM/3.0.4c-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/GlobalArrays/", "title": "GlobalArrays", "text": ""}, {"location": "available_software/detail/GlobalArrays/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GlobalArrays installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GlobalArrays, load one of these modules using a module load command like:

                  module load GlobalArrays/5.8-iomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GlobalArrays/5.8-iomkl-2021a x x x x x x GlobalArrays/5.8-intel-2021a - x x - x x"}, {"location": "available_software/detail/GnuTLS/", "title": "GnuTLS", "text": ""}, {"location": "available_software/detail/GnuTLS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GnuTLS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GnuTLS, load one of these modules using a module load command like:

                  module load GnuTLS/3.7.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GnuTLS/3.7.3-GCCcore-11.2.0 x x x x x x GnuTLS/3.7.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Go/", "title": "Go", "text": ""}, {"location": "available_software/detail/Go/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Go installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Go, load one of these modules using a module load command like:

                  module load Go/1.21.6\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Go/1.21.6 x x x x x x Go/1.21.2 x x x x x x Go/1.17.6 x x x - x x Go/1.17.3 - x x - x - Go/1.14 - - x - x -"}, {"location": "available_software/detail/Gradle/", "title": "Gradle", "text": ""}, {"location": "available_software/detail/Gradle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gradle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gradle, load one of these modules using a module load command like:

                  module load Gradle/8.6-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gradle/8.6-Java-17 x x x x x x"}, {"location": "available_software/detail/GraphMap/", "title": "GraphMap", "text": ""}, {"location": "available_software/detail/GraphMap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphMap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GraphMap, load one of these modules using a module load command like:

                  module load GraphMap/0.5.2-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphMap/0.5.2-foss-2019b - - x - x x"}, {"location": "available_software/detail/GraphMap2/", "title": "GraphMap2", "text": ""}, {"location": "available_software/detail/GraphMap2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphMap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GraphMap2, load one of these modules using a module load command like:

                  module load GraphMap2/0.6.4-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphMap2/0.6.4-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphene/", "title": "Graphene", "text": ""}, {"location": "available_software/detail/Graphene/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Graphene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Graphene, load one of these modules using a module load command like:

                  module load Graphene/1.10.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Graphene/1.10.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/GraphicsMagick/", "title": "GraphicsMagick", "text": ""}, {"location": "available_software/detail/GraphicsMagick/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GraphicsMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GraphicsMagick, load one of these modules using a module load command like:

                  module load GraphicsMagick/1.3.34-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GraphicsMagick/1.3.34-foss-2019b - x x - x x"}, {"location": "available_software/detail/Graphviz/", "title": "Graphviz", "text": ""}, {"location": "available_software/detail/Graphviz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Graphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Graphviz, load one of these modules using a module load command like:

                  module load Graphviz/8.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Graphviz/8.1.0-GCCcore-12.3.0 x x x x x x Graphviz/5.0.0-GCCcore-11.3.0 x x x x x x Graphviz/2.50.0-GCCcore-11.2.0 x x x x x x Graphviz/2.47.2-GCCcore-10.3.0 x x x x x x Graphviz/2.47.0-GCCcore-10.2.0-Java-11 - x x x x x Graphviz/2.42.2-foss-2019b-Python-3.7.4 - x x - x x Graphviz/2.42.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/Greenlet/", "title": "Greenlet", "text": ""}, {"location": "available_software/detail/Greenlet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Greenlet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Greenlet, load one of these modules using a module load command like:

                  module load Greenlet/2.0.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Greenlet/2.0.2-foss-2022b x x x x x x Greenlet/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/GroIMP/", "title": "GroIMP", "text": ""}, {"location": "available_software/detail/GroIMP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which GroIMP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using GroIMP, load one of these modules using a module load command like:

                  module load GroIMP/1.5-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty GroIMP/1.5-Java-1.8 - x x - x x"}, {"location": "available_software/detail/Guile/", "title": "Guile", "text": ""}, {"location": "available_software/detail/Guile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Guile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Guile, load one of these modules using a module load command like:

                  module load Guile/3.0.7-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Guile/3.0.7-GCCcore-11.2.0 x x x x x x Guile/2.2.7-GCCcore-10.3.0 - x x - x x Guile/1.8.8-GCCcore-9.3.0 - x x - x x Guile/1.8.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Guppy/", "title": "Guppy", "text": ""}, {"location": "available_software/detail/Guppy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Guppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Guppy, load one of these modules using a module load command like:

                  module load Guppy/6.5.7-gpu\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

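
                  Guppy is provided in separate -gpu and -cpu builds, and the table below lists each variant only on the clusters where that build is installed. As a minimal sketch (assuming the module provides the standard guppy_basecaller command), load the variant that matches the resources your job actually has:

                  # in a job that has an NVIDIA GPU allocated:
                  module load Guppy/6.5.7-gpu
                  guppy_basecaller --version

                  # in a CPU-only job:
                  module load Guppy/6.5.7-cpu
                  guppy_basecaller --version
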
                  accelgor doduo donphan gallade joltik skitty Guppy/6.5.7-gpu x - x - x - Guppy/6.5.7-cpu x x - x - x Guppy/6.4.6-gpu x - x - x - Guppy/6.4.6-cpu - x x x x x Guppy/6.4.2-gpu x - - - x - Guppy/6.4.2-cpu - x x - x x Guppy/6.3.8-gpu x - - - x - Guppy/6.3.8-cpu - x x - x x Guppy/6.3.7-gpu x - - - x - Guppy/6.3.7-cpu - x x - x x Guppy/6.1.7-gpu x - - - x - Guppy/6.1.7-cpu - x x - x x Guppy/6.1.2-gpu x - - - x - Guppy/6.1.2-cpu - x x - x x Guppy/6.0.1-gpu x - - - x - Guppy/6.0.1-cpu - x x - x x Guppy/5.0.16-gpu x - - - x - Guppy/5.0.16-cpu - x x - x - Guppy/5.0.15-gpu x - - - x - Guppy/5.0.15-cpu - x x - x x Guppy/5.0.14-gpu - - - - x - Guppy/5.0.14-cpu - x x - x x Guppy/5.0.11-gpu - - - - x - Guppy/5.0.11-cpu - x x - x x Guppy/5.0.7-gpu - - - - x - Guppy/5.0.7-cpu - x x - x x Guppy/4.4.1-cpu - x x - x - Guppy/4.2.2-cpu - x x - x - Guppy/4.0.15-cpu - x x - x - Guppy/3.5.2-cpu - - x - x -"}, {"location": "available_software/detail/Gurobi/", "title": "Gurobi", "text": ""}, {"location": "available_software/detail/Gurobi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Gurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Gurobi, load one of these modules using a module load command like:

                  module load Gurobi/11.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Gurobi/11.0.0-GCCcore-12.3.0 x x x x x x Gurobi/9.5.2-GCCcore-11.3.0 x x x x x x Gurobi/9.5.0-GCCcore-11.2.0 x x x x x x Gurobi/9.1.1-GCCcore-10.2.0 - x x x x x Gurobi/9.1.0 - x x - x -"}, {"location": "available_software/detail/HAL/", "title": "HAL", "text": ""}, {"location": "available_software/detail/HAL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HAL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HAL, load one of these modules using a module load command like:

                  module load HAL/2.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HAL/2.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/HDBSCAN/", "title": "HDBSCAN", "text": ""}, {"location": "available_software/detail/HDBSCAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDBSCAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HDBSCAN, load one of these modules using a module load command like:

                  module load HDBSCAN/0.8.29-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDBSCAN/0.8.29-foss-2022a x x x x x x"}, {"location": "available_software/detail/HDDM/", "title": "HDDM", "text": ""}, {"location": "available_software/detail/HDDM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDDM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HDDM, load one of these modules using a module load command like:

                  module load HDDM/0.7.5-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDDM/0.7.5-intel-2019b-Python-3.7.4 - x - - - x HDDM/0.7.5-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/HDF/", "title": "HDF", "text": ""}, {"location": "available_software/detail/HDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HDF, load one of these modules using a module load command like:

                  module load HDF/4.2.16-2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x HDF/4.2.15-GCCcore-11.3.0 x x x x x x HDF/4.2.15-GCCcore-11.2.0 x x x x x x HDF/4.2.15-GCCcore-10.3.0 x x x x x x HDF/4.2.15-GCCcore-10.2.0 - x x x x x HDF/4.2.15-GCCcore-9.3.0 - - x - x x HDF/4.2.14-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/HDF5/", "title": "HDF5", "text": ""}, {"location": "available_software/detail/HDF5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HDF5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HDF5, load one of these modules using a module load command like:

                  module load HDF5/1.14.0-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HDF5/1.14.0-gompi-2023a x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x HDF5/1.13.1-gompi-2022a x x x - x x HDF5/1.12.2-iimpi-2022a x x x x x x HDF5/1.12.2-gompi-2022a x x x x x x HDF5/1.12.1-iimpi-2021b x x x x x x HDF5/1.12.1-gompi-2021b x x x x x x HDF5/1.10.8-gompi-2021b x x x - x x HDF5/1.10.7-iompi-2021a x x x x x x HDF5/1.10.7-iimpi-2021a - x x - x x HDF5/1.10.7-iimpi-2020b - x x x x x HDF5/1.10.7-gompic-2020b x - - - x - HDF5/1.10.7-gompi-2021a x x x x x x HDF5/1.10.7-gompi-2020b x x x x x x HDF5/1.10.6-iimpi-2020a x x x x x x HDF5/1.10.6-gompi-2020a - x x - x x HDF5/1.10.5-iimpi-2019b - x x - x x HDF5/1.10.5-gompi-2019b x x x - x x"}, {"location": "available_software/detail/HH-suite/", "title": "HH-suite", "text": ""}, {"location": "available_software/detail/HH-suite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HH-suite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HH-suite, load one of these modules using a module load command like:

                  module load HH-suite/3.3.0-gompic-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HH-suite/3.3.0-gompic-2020b x - - - x - HH-suite/3.3.0-gompi-2022a x x x x x x HH-suite/3.3.0-gompi-2021b x - x - x - HH-suite/3.3.0-gompi-2021a x x x - x x HH-suite/3.3.0-gompi-2020b - x x x x x HH-suite/3.2.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/HISAT2/", "title": "HISAT2", "text": ""}, {"location": "available_software/detail/HISAT2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HISAT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HISAT2, load one of these modules using a module load command like:

                  module load HISAT2/2.2.1-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HISAT2/2.2.1-gompi-2022a x x x x x x HISAT2/2.2.1-gompi-2021b x x x x x x HISAT2/2.2.1-gompi-2020b - x x x x x"}, {"location": "available_software/detail/HMMER/", "title": "HMMER", "text": ""}, {"location": "available_software/detail/HMMER/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HMMER installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HMMER, load one of these modules using a module load command like:

                  module load HMMER/3.4-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HMMER/3.4-gompi-2023a x x x x x x HMMER/3.3.2-iimpi-2021b x x x - x x HMMER/3.3.2-iimpi-2020b - x x x x x HMMER/3.3.2-gompic-2020b x - - - x - HMMER/3.3.2-gompi-2022b x x x x x x HMMER/3.3.2-gompi-2022a x x x x x x HMMER/3.3.2-gompi-2021b x x x - x x HMMER/3.3.2-gompi-2021a x x x - x x HMMER/3.3.2-gompi-2020b x x x x x x HMMER/3.3.2-gompi-2020a - x x - x x HMMER/3.3.2-gompi-2019b - x x - x x HMMER/3.3.1-iimpi-2020a - x x - x x HMMER/3.3.1-gompi-2020a - x x - x x HMMER/3.2.1-iimpi-2019b - x x - x x HMMER/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/HMMER2/", "title": "HMMER2", "text": ""}, {"location": "available_software/detail/HMMER2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HMMER2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HMMER2, load one of these modules using a module load command like:

                  module load HMMER2/2.3.2-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HMMER2/2.3.2-GCC-10.3.0 - x x - x x HMMER2/2.3.2-GCC-10.2.0 - x x x x x HMMER2/2.3.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HPL/", "title": "HPL", "text": ""}, {"location": "available_software/detail/HPL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HPL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HPL, load one of these modules using a module load command like:

                  module load HPL/2.3-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HPL/2.3-intel-2019b - x x - x x HPL/2.3-iibff-2020b - x - - - - HPL/2.3-gobff-2020b - x - - - - HPL/2.3-foss-2023b x x x x x x HPL/2.3-foss-2019b - x x - x x HPL/2.0.15-intel-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/HTSeq/", "title": "HTSeq", "text": ""}, {"location": "available_software/detail/HTSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HTSeq, load one of these modules using a module load command like:

                  module load HTSeq/2.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSeq/2.0.2-foss-2022a x x x x x x HTSeq/0.11.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/HTSlib/", "title": "HTSlib", "text": ""}, {"location": "available_software/detail/HTSlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HTSlib, load one of these modules using a module load command like:

                  module load HTSlib/1.18-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSlib/1.18-GCC-12.3.0 x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x HTSlib/1.15.1-GCC-11.3.0 x x x x x x HTSlib/1.14-GCC-11.2.0 x x x x x x HTSlib/1.12-GCC-10.3.0 x x x - x x HTSlib/1.12-GCC-10.2.0 - x x - x x HTSlib/1.11-GCC-10.2.0 x x x x x x HTSlib/1.10.2-iccifort-2019.5.281 - x x - x x HTSlib/1.10.2-GCC-9.3.0 - x x - x x HTSlib/1.10.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/HTSplotter/", "title": "HTSplotter", "text": ""}, {"location": "available_software/detail/HTSplotter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HTSplotter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HTSplotter, load one of these modules using a module load command like:

                  module load HTSplotter/2.11-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HTSplotter/2.11-foss-2022b x x x x x x HTSplotter/0.15-foss-2022a x x x x x x"}, {"location": "available_software/detail/Hadoop/", "title": "Hadoop", "text": ""}, {"location": "available_software/detail/Hadoop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hadoop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Hadoop, load one of these modules using a module load command like:

                  module load Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hadoop/2.10.0-GCCcore-10.2.0-native-Java-1.8 - - x - x - Hadoop/2.10.0-GCCcore-10.2.0-native - x - - - - Hadoop/2.10.0-GCCcore-8.3.0-native - x x - x x"}, {"location": "available_software/detail/HarfBuzz/", "title": "HarfBuzz", "text": ""}, {"location": "available_software/detail/HarfBuzz/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HarfBuzz, load one of these modules using a module load command like:

                  module load HarfBuzz/5.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x HarfBuzz/4.2.1-GCCcore-11.3.0 x x x x x x HarfBuzz/2.8.2-GCCcore-11.2.0 x x x x x x HarfBuzz/2.8.1-GCCcore-10.3.0 x x x x x x HarfBuzz/2.6.7-GCCcore-10.2.0 x x x x x x HarfBuzz/2.6.4-GCCcore-9.3.0 - x x - x x HarfBuzz/2.6.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/HiCExplorer/", "title": "HiCExplorer", "text": ""}, {"location": "available_software/detail/HiCExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HiCExplorer, load one of these modules using a module load command like:

                  module load HiCExplorer/3.7.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HiCExplorer/3.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/HiCMatrix/", "title": "HiCMatrix", "text": ""}, {"location": "available_software/detail/HiCMatrix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HiCMatrix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HiCMatrix, load one of these modules using a module load command like:

                  module load HiCMatrix/17-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HiCMatrix/17-foss-2022a x x x x x x"}, {"location": "available_software/detail/HighFive/", "title": "HighFive", "text": ""}, {"location": "available_software/detail/HighFive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HighFive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HighFive, load one of these modules using a module load command like:

                  module load HighFive/2.7.1-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HighFive/2.7.1-gompi-2023a x x x x x x"}, {"location": "available_software/detail/Highway/", "title": "Highway", "text": ""}, {"location": "available_software/detail/Highway/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Highway installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Highway, load one of these modules using a module load command like:

                  module load Highway/1.0.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Highway/1.0.4-GCCcore-12.3.0 x x x x x x Highway/1.0.4-GCCcore-11.3.0 x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x Highway/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Horovod/", "title": "Horovod", "text": ""}, {"location": "available_software/detail/Horovod/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Horovod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Horovod, load one of these modules using a module load command like:

                  module load Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Horovod/0.23.0-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Horovod/0.23.0-foss-2021a-CUDA-11.3.1-PyTorch-1.10.0 x - - - - - Horovod/0.22.0-fosscuda-2020b-PyTorch-1.8.1 x - - - - - Horovod/0.21.3-fosscuda-2020b-PyTorch-1.7.1 x - - - x - Horovod/0.21.1-fosscuda-2020b-TensorFlow-2.4.1 x - - - x -"}, {"location": "available_software/detail/HyPo/", "title": "HyPo", "text": ""}, {"location": "available_software/detail/HyPo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which HyPo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using HyPo, load one of these modules using a module load command like:

                  module load HyPo/1.0.3-GCC-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty HyPo/1.0.3-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/Hybpiper/", "title": "Hybpiper", "text": ""}, {"location": "available_software/detail/Hybpiper/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hybpiper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Hybpiper, load one of these modules using a module load command like:

                  module load Hybpiper/2.1.6-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hybpiper/2.1.6-foss-2022b x x x x x x"}, {"location": "available_software/detail/Hydra/", "title": "Hydra", "text": ""}, {"location": "available_software/detail/Hydra/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hydra installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Hydra, load one of these modules using a module load command like:

                  module load Hydra/1.1.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hydra/1.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Hyperopt/", "title": "Hyperopt", "text": ""}, {"location": "available_software/detail/Hyperopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Hyperopt, load one of these modules using a module load command like:

                  module load Hyperopt/0.2.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hyperopt/0.2.7-foss-2022a x x x x x x Hyperopt/0.2.7-foss-2021a x x x - x x"}, {"location": "available_software/detail/Hypre/", "title": "Hypre", "text": ""}, {"location": "available_software/detail/Hypre/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Hypre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Hypre, load one of these modules using a module load command like:

                  module load Hypre/2.25.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Hypre/2.25.0-foss-2022a x x x x x x Hypre/2.24.0-intel-2021b x x x x x x Hypre/2.21.0-foss-2021a - x x - x x Hypre/2.20.0-foss-2020b - x x x x x Hypre/2.18.2-intel-2019b - x x - x x Hypre/2.18.2-foss-2020a - x x - x x Hypre/2.18.2-foss-2019b x x x - x x"}, {"location": "available_software/detail/ICU/", "title": "ICU", "text": ""}, {"location": "available_software/detail/ICU/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ICU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ICU, load one of these modules using a module load command like:

                  module load ICU/73.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ICU/73.2-GCCcore-12.3.0 x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x ICU/71.1-GCCcore-11.3.0 x x x x x x ICU/69.1-GCCcore-11.2.0 x x x x x x ICU/69.1-GCCcore-10.3.0 x x x x x x ICU/67.1-GCCcore-10.2.0 x x x x x x ICU/66.1-GCCcore-9.3.0 - x x - x x ICU/64.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/IDBA-UD/", "title": "IDBA-UD", "text": ""}, {"location": "available_software/detail/IDBA-UD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IDBA-UD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IDBA-UD, load one of these modules using a module load command like:

                  module load IDBA-UD/1.1.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IDBA-UD/1.1.3-GCC-11.2.0 x x x - x x IDBA-UD/1.1.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/IGMPlot/", "title": "IGMPlot", "text": ""}, {"location": "available_software/detail/IGMPlot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IGMPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IGMPlot, load one of these modules using a module load command like:

                  module load IGMPlot/2.4.2-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IGMPlot/2.4.2-iccifort-2019.5.281 - x - - - - IGMPlot/2.4.2-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/IGV/", "title": "IGV", "text": ""}, {"location": "available_software/detail/IGV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IGV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IGV, load one of these modules using a module load command like:

                  module load IGV/2.9.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IGV/2.9.4-Java-11 - x x - x x IGV/2.8.0-Java-11 - x x - x x"}, {"location": "available_software/detail/IOR/", "title": "IOR", "text": ""}, {"location": "available_software/detail/IOR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IOR, load one of these modules using a module load command like:

                  module load IOR/3.2.1-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IOR/3.2.1-gompi-2019b - x x - x x"}, {"location": "available_software/detail/IPython/", "title": "IPython", "text": ""}, {"location": "available_software/detail/IPython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IPython, load one of these modules using a module load command like:

                  module load IPython/8.14.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IPython/8.14.0-GCCcore-12.3.0 x x x x x x IPython/8.14.0-GCCcore-12.2.0 x x x x x x IPython/8.5.0-GCCcore-11.3.0 x x x x x x IPython/7.26.0-GCCcore-11.2.0 x x x x x x IPython/7.25.0-GCCcore-10.3.0 x x x x x x IPython/7.18.1-GCCcore-10.2.0 x x x x x x IPython/7.15.0-intel-2020a-Python-3.8.2 x x x x x x IPython/7.15.0-foss-2020a-Python-3.8.2 - x x - x x IPython/7.9.0-intel-2019b-Python-3.7.4 - x x - x x IPython/7.9.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/IQ-TREE/", "title": "IQ-TREE", "text": ""}, {"location": "available_software/detail/IQ-TREE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IQ-TREE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IQ-TREE, load one of these modules using a module load command like:

                  module load IQ-TREE/2.2.2.6-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IQ-TREE/2.2.2.6-gompi-2022b x x x x x x IQ-TREE/2.2.2.6-gompi-2022a x x x x x x IQ-TREE/2.2.2.3-gompi-2022a x x x x x x IQ-TREE/2.2.1-gompi-2021b x x x - x x IQ-TREE/1.6.12-intel-2019b - x x - x x"}, {"location": "available_software/detail/IRkernel/", "title": "IRkernel", "text": ""}, {"location": "available_software/detail/IRkernel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IRkernel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IRkernel, load one of these modules using a module load command like:

                  module load IRkernel/1.2-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IRkernel/1.2-foss-2021a-R-4.1.0 - x x - x x IRkernel/1.1-foss-2019b-R-3.6.2-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ISA-L/", "title": "ISA-L", "text": ""}, {"location": "available_software/detail/ISA-L/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ISA-L installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ISA-L, load one of these modules using a module load command like:

                  module load ISA-L/2.30.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ISA-L/2.30.0-GCCcore-11.3.0 x x x x x x ISA-L/2.30.0-GCCcore-11.2.0 x x x - x x ISA-L/2.30.0-GCCcore-10.3.0 x x x - x x ISA-L/2.30.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/ITK/", "title": "ITK", "text": ""}, {"location": "available_software/detail/ITK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ITK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ITK, load one of these modules using a module load command like:

                  module load ITK/5.2.1-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ITK/5.2.1-fosscuda-2020b x - - - x - ITK/5.2.1-foss-2022a x x x x x x ITK/5.2.1-foss-2020b - x x x x x ITK/5.1.2-fosscuda-2020b - - - - x - ITK/5.0.1-foss-2019b-Python-3.7.4 - x x - x x ITK/4.13.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/ImageMagick/", "title": "ImageMagick", "text": ""}, {"location": "available_software/detail/ImageMagick/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ImageMagick, load one of these modules using a module load command like:

                  module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x ImageMagick/7.1.0-37-GCCcore-11.3.0 x x x x x x ImageMagick/7.1.0-4-GCCcore-11.2.0 x x x x x x ImageMagick/7.0.11-14-GCCcore-10.3.0 x x x x x x ImageMagick/7.0.10-35-GCCcore-10.2.0 x x x x x x ImageMagick/7.0.10-1-GCCcore-9.3.0 - x x - x x ImageMagick/7.0.9-5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Imath/", "title": "Imath", "text": ""}, {"location": "available_software/detail/Imath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Imath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Imath, load one of these modules using a module load command like:

                  module load Imath/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Imath/3.1.7-GCCcore-12.3.0 x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x Imath/3.1.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Inferelator/", "title": "Inferelator", "text": ""}, {"location": "available_software/detail/Inferelator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Inferelator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Inferelator, load one of these modules using a module load command like:

                  module load Inferelator/0.6.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Inferelator/0.6.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/Infernal/", "title": "Infernal", "text": ""}, {"location": "available_software/detail/Infernal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Infernal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Infernal, load one of these modules using a module load command like:

                  module load Infernal/1.1.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Infernal/1.1.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/InterProScan/", "title": "InterProScan", "text": ""}, {"location": "available_software/detail/InterProScan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which InterProScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using InterProScan, load one of these modules using a module load command like:

                  module load InterProScan/5.62-94.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty InterProScan/5.62-94.0-foss-2022b x x x x x x InterProScan/5.52-86.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/IonQuant/", "title": "IonQuant", "text": ""}, {"location": "available_software/detail/IonQuant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IonQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IonQuant, load one of these modules using a module load command like:

                  module load IonQuant/1.10.12-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IonQuant/1.10.12-Java-11 x x x x x x"}, {"location": "available_software/detail/IsoQuant/", "title": "IsoQuant", "text": ""}, {"location": "available_software/detail/IsoQuant/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IsoQuant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IsoQuant, load one of these modules using a module load command like:

                  module load IsoQuant/3.3.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IsoQuant/3.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/IsoSeq/", "title": "IsoSeq", "text": ""}, {"location": "available_software/detail/IsoSeq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which IsoSeq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using IsoSeq, load one of these modules using a module load command like:

                  module load IsoSeq/4.0.0-linux-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty IsoSeq/4.0.0-linux-x86_64 x x x x x x IsoSeq/3.8.2-linux-x86_64 x x x x x x"}, {"location": "available_software/detail/JAGS/", "title": "JAGS", "text": ""}, {"location": "available_software/detail/JAGS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JAGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using JAGS, load one of these modules using a module load command like:

                  module load JAGS/4.3.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JAGS/4.3.2-foss-2022b x x x x x x JAGS/4.3.1-foss-2022a x x x x x x JAGS/4.3.0-foss-2021b x x x - x x JAGS/4.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/JSON-GLib/", "title": "JSON-GLib", "text": ""}, {"location": "available_software/detail/JSON-GLib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JSON-GLib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using JSON-GLib, load one of these modules using a module load command like:

                  module load JSON-GLib/1.6.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JSON-GLib/1.6.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Jansson/", "title": "Jansson", "text": ""}, {"location": "available_software/detail/Jansson/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Jansson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Jansson, load one of these modules using a module load command like:

                  module load Jansson/2.13.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Jansson/2.13.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/JasPer/", "title": "JasPer", "text": ""}, {"location": "available_software/detail/JasPer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JasPer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using JasPer, load one of these modules using a module load command like:

                  module load JasPer/4.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JasPer/4.0.0-GCCcore-12.3.0 x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x JasPer/2.0.33-GCCcore-11.3.0 x x x x x x JasPer/2.0.33-GCCcore-11.2.0 x x x x x x JasPer/2.0.28-GCCcore-10.3.0 x x x x x x JasPer/2.0.24-GCCcore-10.2.0 x x x x x x JasPer/2.0.14-GCCcore-9.3.0 - x x - x x JasPer/2.0.14-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Java/", "title": "Java", "text": ""}, {"location": "available_software/detail/Java/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Java installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Java, load one of these modules using a module load command like:

                  module load Java/17.0.6\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

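
                  Besides fully versioned modules such as Java/17.0.6, the table below also lists shorter entries like Java/17(@Java/17.0.6); the (@...) notation appears to mark an alias that resolves to the specific version shown. A minimal sketch, assuming that interpretation:

                  module load Java/17    # appears to resolve to Java/17.0.6, per the (@...) entry in the table below
                  java -version          # prints the Java version that was actually loaded
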
                  accelgor doduo donphan gallade joltik skitty Java/17.0.6 x x x x x x Java/17(@Java/17.0.6) x x x x x x Java/13.0.2 - x x - x x Java/13(@Java/13.0.2) - x x - x x Java/11.0.20 x x x x x x Java/11.0.18 x - - x x - Java/11.0.16 x x x x x x Java/11.0.2 x x x - x x Java/11(@Java/11.0.20) x x x x x x Java/1.8.0_311 x - x x x x Java/1.8.0_241 - x - - - - Java/1.8.0_221 - x - - - - Java/1.8(@Java/1.8.0_311) x - x x x x Java/1.8(@Java/1.8.0_241) - x - - - -"}, {"location": "available_software/detail/Jellyfish/", "title": "Jellyfish", "text": ""}, {"location": "available_software/detail/Jellyfish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Jellyfish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Jellyfish, load one of these modules using a module load command like:

                  module load Jellyfish/2.3.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Jellyfish/2.3.0-GCC-11.3.0 x x x x x x Jellyfish/2.3.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/JsonCpp/", "title": "JsonCpp", "text": ""}, {"location": "available_software/detail/JsonCpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using JsonCpp, load one of these modules using a module load command like:

                  module load JsonCpp/1.9.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x JsonCpp/1.9.5-GCCcore-12.2.0 x x x x x x JsonCpp/1.9.5-GCCcore-11.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-11.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.3.0 x x x x x x JsonCpp/1.9.4-GCCcore-10.2.0 x x x x x x JsonCpp/1.9.4-GCCcore-9.3.0 - x x - x x JsonCpp/1.9.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Judy/", "title": "Judy", "text": ""}, {"location": "available_software/detail/Judy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Judy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Judy, load one of these modules using a module load command like:

                  module load Judy/1.0.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Judy/1.0.5-GCCcore-11.3.0 x x x x x x Judy/1.0.5-GCCcore-11.2.0 x x x x x x Judy/1.0.5-GCCcore-10.3.0 x x x - x x Judy/1.0.5-GCCcore-10.2.0 - x x x x x Judy/1.0.5-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Julia/", "title": "Julia", "text": ""}, {"location": "available_software/detail/Julia/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Julia installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Julia, load one of these modules using a module load command like:

                  module load Julia/1.9.3-linux-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Julia/1.9.3-linux-x86_64 x x x x x x Julia/1.7.2-linux-x86_64 x x x x x x Julia/1.6.2-linux-x86_64 - x x - x x"}, {"location": "available_software/detail/JupyterHub/", "title": "JupyterHub", "text": ""}, {"location": "available_software/detail/JupyterHub/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterHub installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using JupyterHub, load one of these modules using a module load command like:

                  module load JupyterHub/4.0.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterHub/4.0.1-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/JupyterLab/", "title": "JupyterLab", "text": ""}, {"location": "available_software/detail/JupyterLab/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using JupyterLab, load one of these modules using a module load command like:

                  module load JupyterLab/4.0.5-GCCcore-12.3.0\n
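
                  As a minimal sketch for starting a JupyterLab server from a session once the module is loaded (the port number is illustrative; how you connect to the server depends on your site's setup):

                  jupyter lab --no-browser --port 8888\n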

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x JupyterLab/4.0.3-GCCcore-12.2.0 x x x x x x JupyterLab/3.5.0-GCCcore-11.3.0 x x x x x x JupyterLab/3.1.6-GCCcore-11.2.0 x x x - x x JupyterLab/3.0.16-GCCcore-10.3.0 x - x - x - JupyterLab/2.2.8-GCCcore-10.2.0 x x x x x x JupyterLab/1.2.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/JupyterNotebook/", "title": "JupyterNotebook", "text": ""}, {"location": "available_software/detail/JupyterNotebook/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using JupyterNotebook, load one of these modules using a module load command like:

                  module load JupyterNotebook/7.0.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty JupyterNotebook/7.0.3-GCCcore-12.2.0 x x x x x x JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x JupyterNotebook/6.4.12-SAGE-10.2 x x x x x x JupyterNotebook/6.4.12-SAGE-10.1 x x x x x x JupyterNotebook/6.4.12-SAGE-9.8 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.3.0-IPython-8.5.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-11.2.0-IPython-7.26.0 x x x x x x JupyterNotebook/6.4.0-GCCcore-10.3.0-IPython-7.25.0 x x x x x x JupyterNotebook/6.1.4-GCCcore-10.2.0-IPython-7.18.1 x x x x x x JupyterNotebook/6.0.3-intel-2020a-Python-3.8.2-IPython-7.15.0 x x x x x x JupyterNotebook/6.0.3-foss-2020a-Python-3.8.2-IPython-7.15.0 - x x - x x JupyterNotebook/6.0.2-intel-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x JupyterNotebook/6.0.2-foss-2019b-Python-3.7.4-IPython-7.9.0 - x x - x x"}, {"location": "available_software/detail/KMC/", "title": "KMC", "text": ""}, {"location": "available_software/detail/KMC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using KMC, load one of these modules using a module load command like:

                  module load KMC/3.2.1-GCC-11.2.0-Python-2.7.18\n
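
                  As a minimal usage sketch (file and directory names, k-mer length and thread count are illustrative, not part of this overview), counting k-mers and dumping a histogram could look like:

                  mkdir -p tmp_kmc\nkmc -k21 -t4 reads.fastq kmc_db tmp_kmc\nkmc_tools transform kmc_db histogram kmc.histogram\n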

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KMC/3.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x KMC/3.2.1-GCC-11.2.0 x x x - x x KMC/3.1.2rc1-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/KaHIP/", "title": "KaHIP", "text": ""}, {"location": "available_software/detail/KaHIP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KaHIP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using KaHIP, load one of these modules using a module load command like:

                  module load KaHIP/3.14-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KaHIP/3.14-gompi-2022a - - - x - -"}, {"location": "available_software/detail/Kaleido/", "title": "Kaleido", "text": ""}, {"location": "available_software/detail/Kaleido/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kaleido installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Kaleido, load one of these modules using a module load command like:

                  module load Kaleido/0.1.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kaleido/0.1.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Kalign/", "title": "Kalign", "text": ""}, {"location": "available_software/detail/Kalign/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kalign installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Kalign, load one of these modules using a module load command like:

                  module load Kalign/3.3.5-GCCcore-11.3.0\n
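
                  As a minimal usage sketch (file names are illustrative; check the help output of the loaded version for its exact options), aligning a set of sequences could look like:

                  kalign -i sequences.fasta -o alignment.afa\n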

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kalign/3.3.5-GCCcore-11.3.0 x x x x x x Kalign/3.3.2-GCCcore-11.2.0 x - x - x - Kalign/3.3.1-GCCcore-10.3.0 x x x - x x Kalign/3.3.1-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Kent_tools/", "title": "Kent_tools", "text": ""}, {"location": "available_software/detail/Kent_tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kent_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Kent_tools, load one of these modules using a module load command like:

                  module load Kent_tools/20190326-linux.x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kent_tools/20190326-linux.x86_64 - - x - x - Kent_tools/422-GCC-11.2.0 x x x x x x Kent_tools/411-GCC-10.2.0 - x x x x x Kent_tools/401-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Keras/", "title": "Keras", "text": ""}, {"location": "available_software/detail/Keras/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Keras installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Keras, load one of these modules using a module load command like:

                  module load Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Keras/2.4.3-fosscuda-2020b-TensorFlow-2.5.0 x - - - x - Keras/2.4.3-fosscuda-2020b - - - - x - Keras/2.4.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/KerasTuner/", "title": "KerasTuner", "text": ""}, {"location": "available_software/detail/KerasTuner/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KerasTuner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using KerasTuner, load one of these modules using a module load command like:

                  module load KerasTuner/1.3.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KerasTuner/1.3.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/Kraken/", "title": "Kraken", "text": ""}, {"location": "available_software/detail/Kraken/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kraken installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Kraken, load one of these modules using a module load command like:

                  module load Kraken/1.1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kraken/1.1.1-GCCcore-10.2.0 - x x x x x Kraken/1.1.1-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/Kraken2/", "title": "Kraken2", "text": ""}, {"location": "available_software/detail/Kraken2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Kraken2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Kraken2, load one of these modules using a module load command like:

                  module load Kraken2/2.1.2-gompi-2021a\n
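
                  As a minimal usage sketch (the database path, read files and options are illustrative, not part of this overview), a paired-end classification run could look like:

                  kraken2 --db /path/to/kraken2_db --threads 4 --paired --report sample.k2report --output sample.kraken reads_1.fastq.gz reads_2.fastq.gz\n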

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Kraken2/2.1.2-gompi-2021a - x x x x x Kraken2/2.0.9-beta-gompi-2020a-Perl-5.30.2 - x x - x x"}, {"location": "available_software/detail/KrakenUniq/", "title": "KrakenUniq", "text": ""}, {"location": "available_software/detail/KrakenUniq/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KrakenUniq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using KrakenUniq, load one of these modules using a module load command like:

                  module load KrakenUniq/1.0.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KrakenUniq/1.0.3-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/KronaTools/", "title": "KronaTools", "text": ""}, {"location": "available_software/detail/KronaTools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which KronaTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using KronaTools, load one of these modules using a module load command like:

                  module load KronaTools/2.8.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x KronaTools/2.8.1-GCCcore-11.3.0 x x x x x x KronaTools/2.8-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/LAME/", "title": "LAME", "text": ""}, {"location": "available_software/detail/LAME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LAME, load one of these modules using a module load command like:

                  module load LAME/3.100-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAME/3.100-GCCcore-12.3.0 x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x LAME/3.100-GCCcore-11.3.0 x x x x x x LAME/3.100-GCCcore-11.2.0 x x x x x x LAME/3.100-GCCcore-10.3.0 x x x x x x LAME/3.100-GCCcore-10.2.0 x x x x x x LAME/3.100-GCCcore-9.3.0 - x x - x x LAME/3.100-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/LAMMPS/", "title": "LAMMPS", "text": ""}, {"location": "available_software/detail/LAMMPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LAMMPS, load one of these modules using a module load command like:

                  module load LAMMPS/patch_20Nov2019-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAMMPS/patch_20Nov2019-intel-2019b - x - - - - LAMMPS/23Jun2022-foss-2021b-kokkos-CUDA-11.4.1 x - - - x - LAMMPS/23Jun2022-foss-2021b-kokkos x x x - x x LAMMPS/23Jun2022-foss-2021a-kokkos - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos-OCTP - x x - x x LAMMPS/7Aug2019-intel-2019b-Python-3.7.4-kokkos - - x - x x LAMMPS/7Aug2019-foss-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-intel-2020a-Python-3.8.2-kokkos - x x - x x LAMMPS/3Mar2020-intel-2019b-Python-3.7.4-kokkos - x x - x x LAMMPS/3Mar2020-foss-2019b-Python-3.7.4-kokkos - x x - x x"}, {"location": "available_software/detail/LAST/", "title": "LAST", "text": ""}, {"location": "available_software/detail/LAST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LAST, load one of these modules using a module load command like:

                  module load LAST/1179-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LAST/1179-GCC-10.2.0 - x x x x x LAST/1045-intel-2019b - x x - x x"}, {"location": "available_software/detail/LASTZ/", "title": "LASTZ", "text": ""}, {"location": "available_software/detail/LASTZ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LASTZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LASTZ, load one of these modules using a module load command like:

                  module load LASTZ/1.04.22-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LASTZ/1.04.22-GCC-12.3.0 x x x x x x LASTZ/1.04.03-foss-2019b - x x - x x"}, {"location": "available_software/detail/LDC/", "title": "LDC", "text": ""}, {"location": "available_software/detail/LDC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LDC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LDC, load one of these modules using a module load command like:

                  module load LDC/1.30.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LDC/1.30.0-GCCcore-11.3.0 x x x x x x LDC/1.25.1-GCCcore-10.2.0 - x x x x x LDC/1.24.0-x86_64 x x x x x x LDC/0.17.6-x86_64 - x x x x x"}, {"location": "available_software/detail/LERC/", "title": "LERC", "text": ""}, {"location": "available_software/detail/LERC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LERC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LERC, load one of these modules using a module load command like:

                  module load LERC/4.0.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LERC/4.0.0-GCCcore-12.3.0 x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x LERC/4.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LIANA%2B/", "title": "LIANA+", "text": ""}, {"location": "available_software/detail/LIANA%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LIANA+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LIANA+, load one of these modules using a module load command like:

                  module load LIANA+/1.0.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LIANA+/1.0.1-foss-2022a x x x x - x"}, {"location": "available_software/detail/LIBSVM/", "title": "LIBSVM", "text": ""}, {"location": "available_software/detail/LIBSVM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LIBSVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LIBSVM, load one of these modules using a module load command like:

                  module load LIBSVM/3.30-GCCcore-11.3.0\n
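
                  As a minimal usage sketch (file names are illustrative, not part of this overview), training and applying a model with the command-line tools could look like:

                  svm-train training_data.txt model.txt\nsvm-predict test_data.txt model.txt predictions.txt\n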

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LIBSVM/3.30-GCCcore-11.3.0 x x x x x x LIBSVM/3.25-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/LLVM/", "title": "LLVM", "text": ""}, {"location": "available_software/detail/LLVM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LLVM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LLVM, load one of these modules using a module load command like:

                  module load LLVM/16.0.6-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LLVM/16.0.6-GCCcore-12.3.0 x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x LLVM/14.0.6-GCCcore-12.2.0-llvmlite x x x x x x LLVM/14.0.3-GCCcore-11.3.0 x x x x x x LLVM/12.0.1-GCCcore-11.2.0 x x x x x x LLVM/11.1.0-GCCcore-10.3.0 x x x x x x LLVM/11.0.0-GCCcore-10.2.0 x x x x x x LLVM/10.0.1-GCCcore-10.2.0 - x x x x x LLVM/9.0.1-GCCcore-9.3.0 - x x - x x LLVM/9.0.0-GCCcore-8.3.0 x x x - x x LLVM/8.0.1-GCCcore-8.3.0 x x x - x x LLVM/7.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/LMDB/", "title": "LMDB", "text": ""}, {"location": "available_software/detail/LMDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LMDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LMDB, load one of these modules using a module load command like:

                  module load LMDB/0.9.31-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LMDB/0.9.31-GCCcore-12.3.0 x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x LMDB/0.9.29-GCCcore-11.3.0 x x x x x x LMDB/0.9.29-GCCcore-11.2.0 x x x x x x LMDB/0.9.28-GCCcore-10.3.0 x x x x x x LMDB/0.9.24-GCCcore-10.2.0 x x x x x x LMDB/0.9.24-GCCcore-9.3.0 - x x - x x LMDB/0.9.24-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LMfit/", "title": "LMfit", "text": ""}, {"location": "available_software/detail/LMfit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LMfit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LMfit, load one of these modules using a module load command like:

                  module load LMfit/1.0.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LMfit/1.0.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LPJmL/", "title": "LPJmL", "text": ""}, {"location": "available_software/detail/LPJmL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LPJmL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LPJmL, load one of these modules using a module load command like:

                  module load LPJmL/4.0.003-iimpi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LPJmL/4.0.003-iimpi-2020b - x x x x x"}, {"location": "available_software/detail/LPeg/", "title": "LPeg", "text": ""}, {"location": "available_software/detail/LPeg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LPeg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LPeg, load one of these modules using a module load command like:

                  module load LPeg/1.0.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LPeg/1.0.2-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/LSD2/", "title": "LSD2", "text": ""}, {"location": "available_software/detail/LSD2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LSD2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LSD2, load one of these modules using a module load command like:

                  module load LSD2/2.4.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LSD2/2.4.1-GCCcore-12.2.0 x x x x x x LSD2/2.3-GCCcore-11.3.0 x x x x x x LSD2/2.3-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/LUMPY/", "title": "LUMPY", "text": ""}, {"location": "available_software/detail/LUMPY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LUMPY installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LUMPY, load one of these modules using a module load command like:

                  module load LUMPY/0.3.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LUMPY/0.3.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/LZO/", "title": "LZO", "text": ""}, {"location": "available_software/detail/LZO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LZO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LZO, load one of these modules using a module load command like:

                  module load LZO/2.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LZO/2.10-GCCcore-12.3.0 x x x x x x LZO/2.10-GCCcore-11.3.0 x x x x x x LZO/2.10-GCCcore-11.2.0 x x x x x x LZO/2.10-GCCcore-10.3.0 x x x x x x LZO/2.10-GCCcore-10.2.0 - x x x x x LZO/2.10-GCCcore-9.3.0 x x x x x x LZO/2.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/L_RNA_scaffolder/", "title": "L_RNA_scaffolder", "text": ""}, {"location": "available_software/detail/L_RNA_scaffolder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which L_RNA_scaffolder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using L_RNA_scaffolder, load one of these modules using a module load command like:

                  module load L_RNA_scaffolder/20190530-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty L_RNA_scaffolder/20190530-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Lace/", "title": "Lace", "text": ""}, {"location": "available_software/detail/Lace/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Lace installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Lace, load one of these modules using a module load command like:

                  module load Lace/1.14.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lace/1.14.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/LevelDB/", "title": "LevelDB", "text": ""}, {"location": "available_software/detail/LevelDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LevelDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LevelDB, load one of these modules using a module load command like:

                  module load LevelDB/1.22-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LevelDB/1.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Levenshtein/", "title": "Levenshtein", "text": ""}, {"location": "available_software/detail/Levenshtein/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Levenshtein, load one of these modules using a module load command like:

                  module load Levenshtein/0.24.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Levenshtein/0.24.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/LiBis/", "title": "LiBis", "text": ""}, {"location": "available_software/detail/LiBis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LiBis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LiBis, load one of these modules using a module load command like:

                  module load LiBis/20200428-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LiBis/20200428-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/LibLZF/", "title": "LibLZF", "text": ""}, {"location": "available_software/detail/LibLZF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LibLZF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LibLZF, load one of these modules using a module load command like:

                  module load LibLZF/3.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibLZF/3.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/LibSoup/", "title": "LibSoup", "text": ""}, {"location": "available_software/detail/LibSoup/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LibSoup installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LibSoup, load one of these modules using a module load command like:

                  module load LibSoup/3.0.7-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibSoup/3.0.7-GCC-11.2.0 x x x x x x LibSoup/2.74.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/LibTIFF/", "title": "LibTIFF", "text": ""}, {"location": "available_software/detail/LibTIFF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LibTIFF, load one of these modules using a module load command like:

                  module load LibTIFF/4.6.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.3.0 x x x x x x LibTIFF/4.3.0-GCCcore-11.2.0 x x x x x x LibTIFF/4.2.0-GCCcore-10.3.0 x x x x x x LibTIFF/4.1.0-GCCcore-10.2.0 x x x x x x LibTIFF/4.1.0-GCCcore-9.3.0 - x x - x x LibTIFF/4.0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Libint/", "title": "Libint", "text": ""}, {"location": "available_software/detail/Libint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Libint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Libint, load one of these modules using a module load command like:

                  module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-12.2.0-lmax-6-cp2k x x x x x x Libint/2.7.2-GCC-11.3.0-lmax-6-cp2k x x x x x x Libint/2.6.0-iimpi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-iimpi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-iccifort-2020.4.304-lmax-6-cp2k - x x - x - Libint/2.6.0-gompi-2020b-lmax-6-cp2k - x - - - - Libint/2.6.0-gompi-2020a-lmax-6-cp2k - x x - x x Libint/2.6.0-GCC-10.3.0-lmax-6-cp2k - x x x x x Libint/2.6.0-GCC-10.2.0-lmax-6-cp2k - x x x x x Libint/1.1.6-iomkl-2020a - x - - - - Libint/1.1.6-intel-2020a - x x - x x Libint/1.1.6-intel-2019b - x - - - - Libint/1.1.6-foss-2020a - x - - - -"}, {"location": "available_software/detail/Lighter/", "title": "Lighter", "text": ""}, {"location": "available_software/detail/Lighter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Lighter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Lighter, load one of these modules using a module load command like:

                  module load Lighter/1.1.2-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lighter/1.1.2-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/LittleCMS/", "title": "LittleCMS", "text": ""}, {"location": "available_software/detail/LittleCMS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LittleCMS, load one of these modules using a module load command like:

                  module load LittleCMS/2.15-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LittleCMS/2.15-GCCcore-12.3.0 x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x LittleCMS/2.13.1-GCCcore-11.3.0 x x x x x x LittleCMS/2.12-GCCcore-11.2.0 x x x x x x LittleCMS/2.12-GCCcore-10.3.0 x x x x x x LittleCMS/2.11-GCCcore-10.2.0 x x x x x x LittleCMS/2.9-GCCcore-9.3.0 - x x - x x LittleCMS/2.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/LncLOOM/", "title": "LncLOOM", "text": ""}, {"location": "available_software/detail/LncLOOM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LncLOOM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LncLOOM, load one of these modules using a module load command like:

                  module load LncLOOM/2.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LncLOOM/2.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/LoRDEC/", "title": "LoRDEC", "text": ""}, {"location": "available_software/detail/LoRDEC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LoRDEC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LoRDEC, load one of these modules using a module load command like:

                  module load LoRDEC/0.9-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LoRDEC/0.9-gompi-2022a x x x x x x"}, {"location": "available_software/detail/Longshot/", "title": "Longshot", "text": ""}, {"location": "available_software/detail/Longshot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Longshot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Longshot, load one of these modules using a module load command like:

                  module load Longshot/0.4.5-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Longshot/0.4.5-GCCcore-11.3.0 x x x x x x Longshot/0.4.3-GCCcore-10.2.0 - - x - x - Longshot/0.4.1-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/LtrDetector/", "title": "LtrDetector", "text": ""}, {"location": "available_software/detail/LtrDetector/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which LtrDetector installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using LtrDetector, load one of these modules using a module load command like:

                  module load LtrDetector/1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty LtrDetector/1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Lua/", "title": "Lua", "text": ""}, {"location": "available_software/detail/Lua/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Lua installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Lua, load one of these modules using a module load command like:

                  module load Lua/5.4.6-GCCcore-12.3.0\n
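
                  As a minimal usage sketch (the script name is illustrative, not part of this overview), you could then check the interpreter and run a script with:

                  lua -e 'print(_VERSION)'\nlua my_script.lua\n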

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Lua/5.4.6-GCCcore-12.3.0 x x x x x x Lua/5.4.4-GCCcore-11.3.0 x x x x x x Lua/5.4.3-GCCcore-11.2.0 x x x x x x Lua/5.4.3-GCCcore-10.3.0 x x x x x x Lua/5.4.2-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-10.2.0 x x x x x x Lua/5.3.5-GCCcore-9.3.0 - x x - x x Lua/5.1.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/M1QN3/", "title": "M1QN3", "text": ""}, {"location": "available_software/detail/M1QN3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which M1QN3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using M1QN3, load one of these modules using a module load command like:

                  module load M1QN3/3.3-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty M1QN3/3.3-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/M4/", "title": "M4", "text": ""}, {"location": "available_software/detail/M4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which M4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using M4, load one of these modules using a module load command like:

                  module load M4/1.4.19-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty M4/1.4.19-GCCcore-13.2.0 x x x x x x M4/1.4.19-GCCcore-12.3.0 x x x x x x M4/1.4.19-GCCcore-12.2.0 x x x x x x M4/1.4.19-GCCcore-11.3.0 x x x x x x M4/1.4.19-GCCcore-11.2.0 x x x x x x M4/1.4.19 x x x x x x M4/1.4.18-GCCcore-10.3.0 x x x x x x M4/1.4.18-GCCcore-10.2.0 x x x x x x M4/1.4.18-GCCcore-9.3.0 x x x x x x M4/1.4.18-GCCcore-8.3.0 x x x x x x M4/1.4.18-GCCcore-8.2.0 - x - - - - M4/1.4.18 x x x x x x M4/1.4.17 x x x x x x"}, {"location": "available_software/detail/MACS2/", "title": "MACS2", "text": ""}, {"location": "available_software/detail/MACS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MACS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MACS2, load one of these modules using a module load command like:

                  module load MACS2/2.2.7.1-foss-2021b\n
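
                  As a minimal usage sketch (BAM file names, genome size and output names are illustrative, not part of this overview), a peak-calling run could look like:

                  macs2 callpeak -t treatment.bam -c control.bam -f BAM -g hs -n sample --outdir macs2_output\n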

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MACS2/2.2.7.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/MACS3/", "title": "MACS3", "text": ""}, {"location": "available_software/detail/MACS3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MACS3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MACS3, load one of these modules using a module load command like:

                  module load MACS3/3.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MACS3/3.0.1-gfbf-2023a x x x x x x MACS3/3.0.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/MAFFT/", "title": "MAFFT", "text": ""}, {"location": "available_software/detail/MAFFT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MAFFT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MAFFT, load one of these modules using a module load command like:

                  module load MAFFT/7.520-GCC-12.3.0-with-extensions\n
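
                  As a minimal usage sketch (file names are illustrative, not part of this overview), aligning a set of sequences could look like:

                  mafft --auto sequences.fasta > aligned.fasta\n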

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x MAFFT/7.505-GCC-11.3.0-with-extensions x x x x x x MAFFT/7.490-gompi-2021b-with-extensions x x x - x x MAFFT/7.475-gompi-2020b-with-extensions - x x x x x MAFFT/7.475-GCC-10.2.0-with-extensions - x x x x x MAFFT/7.453-iimpi-2020a-with-extensions - x x - x x MAFFT/7.453-iccifort-2019.5.281-with-extensions - x x - x x MAFFT/7.453-GCC-9.3.0-with-extensions - x x - x x MAFFT/7.453-GCC-8.3.0-with-extensions - x x - x x"}, {"location": "available_software/detail/MAGeCK/", "title": "MAGeCK", "text": ""}, {"location": "available_software/detail/MAGeCK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MAGeCK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MAGeCK, load one of these modules using a module load command like:

                  module load MAGeCK/0.5.9.5-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MAGeCK/0.5.9.5-gfbf-2022b x x x x x x MAGeCK/0.5.9.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/MARS/", "title": "MARS", "text": ""}, {"location": "available_software/detail/MARS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MARS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MARS, load one of these modules using a module load command like:

                  module load MARS/20191101-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MARS/20191101-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATIO/", "title": "MATIO", "text": ""}, {"location": "available_software/detail/MATIO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MATIO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MATIO, load one of these modules using a module load command like:

                  module load MATIO/1.5.17-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MATIO/1.5.17-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MATLAB/", "title": "MATLAB", "text": ""}, {"location": "available_software/detail/MATLAB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MATLAB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MATLAB, load one of these modules using a module load command like:

                  module load MATLAB/2022b-r5\n
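
                  As a minimal sketch for running MATLAB non-interactively once the module is loaded (the script name is illustrative, not part of this overview):

                  matlab -nodisplay -nosplash -r "my_script; exit"\n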

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MATLAB/2022b-r5 x x x x x x MATLAB/2021b x x x - x x MATLAB/2019b - x x - x x"}, {"location": "available_software/detail/MBROLA/", "title": "MBROLA", "text": ""}, {"location": "available_software/detail/MBROLA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MBROLA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MBROLA, load one of these modules using a module load command like:

                  module load MBROLA/3.3-GCCcore-9.3.0-voices-20200330\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MBROLA/3.3-GCCcore-9.3.0-voices-20200330 - x x - x x"}, {"location": "available_software/detail/MCL/", "title": "MCL", "text": ""}, {"location": "available_software/detail/MCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MCL, load one of these modules using a module load command like:

                  module load MCL/22.282-GCCcore-12.3.0\n
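
                  As a minimal usage sketch (file names and the inflation value are illustrative, not part of this overview), clustering a weighted edge list in ABC format could look like:

                  mcl network.abc --abc -I 2.0 -o clusters.txt\n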

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MCL/22.282-GCCcore-12.3.0 x x x x x x MCL/14.137-GCCcore-10.2.0 - x x x x x MCL/14.137-GCCcore-9.3.0 - x x - x x MCL/14.137-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MDAnalysis/", "title": "MDAnalysis", "text": ""}, {"location": "available_software/detail/MDAnalysis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MDAnalysis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MDAnalysis, load one of these modules using a module load command like:

                  module load MDAnalysis/2.4.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MDAnalysis/2.4.2-foss-2022b x x x x x x MDAnalysis/2.4.2-foss-2021a x x x x x x"}, {"location": "available_software/detail/MDTraj/", "title": "MDTraj", "text": ""}, {"location": "available_software/detail/MDTraj/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MDTraj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MDTraj, load one of these modules using a module load command like:

                  module load MDTraj/1.9.7-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MDTraj/1.9.7-intel-2022a x x x - x x MDTraj/1.9.7-intel-2021b x x x - x x MDTraj/1.9.7-foss-2022a x x x - x x MDTraj/1.9.7-foss-2021a x x x - x x MDTraj/1.9.5-intel-2020b - x x - x x MDTraj/1.9.5-fosscuda-2020b x - - - x - MDTraj/1.9.5-foss-2020b - x x x x x MDTraj/1.9.4-intel-2020a-Python-3.8.2 - x x - x x MDTraj/1.9.3-intel-2019b-Python-3.7.4 - x x - x x MDTraj/1.9.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MEGA/", "title": "MEGA", "text": ""}, {"location": "available_software/detail/MEGA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEGA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MEGA, load one of these modules using a module load command like:

                  module load MEGA/11.0.10\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGA/11.0.10 - x x - x -"}, {"location": "available_software/detail/MEGAHIT/", "title": "MEGAHIT", "text": ""}, {"location": "available_software/detail/MEGAHIT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEGAHIT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MEGAHIT, load one of these modules using a module load command like:

                  module load MEGAHIT/1.2.9-GCCcore-12.3.0\n
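
                  As a minimal usage sketch (read file names and thread count are illustrative, not part of this overview), a paired-end assembly could look like:

                  megahit -1 reads_1.fastq.gz -2 reads_2.fastq.gz -t 8 -o megahit_out\n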

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGAHIT/1.2.9-GCCcore-12.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.3.0 x x x x x x MEGAHIT/1.2.9-GCCcore-11.2.0 x x x - x x MEGAHIT/1.2.9-GCCcore-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MEGAN/", "title": "MEGAN", "text": ""}, {"location": "available_software/detail/MEGAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEGAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MEGAN, load one of these modules using a module load command like:

                  module load MEGAN/6.25.3-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEGAN/6.25.3-Java-17 x x x x x x"}, {"location": "available_software/detail/MEM/", "title": "MEM", "text": ""}, {"location": "available_software/detail/MEM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MEM, load one of these modules using a module load command like:

                  module load MEM/20191023-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEM/20191023-foss-2020a-R-4.0.0 - - x - x - MEM/20191023-foss-2019b - x x - x -"}, {"location": "available_software/detail/MEME/", "title": "MEME", "text": ""}, {"location": "available_software/detail/MEME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MEME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MEME, load one of these modules using a module load command like:

                  module load MEME/5.5.4-gompi-2022b\n
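
                  As a minimal usage sketch (file names and motif settings are illustrative, not part of this overview), a motif search on DNA sequences could look like:

                  meme sequences.fasta -dna -nmotifs 3 -oc meme_out\n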

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MEME/5.5.4-gompi-2022b x x x x x x MEME/5.4.1-gompi-2021b-Python-2.7.18 x x x - x x"}, {"location": "available_software/detail/MESS/", "title": "MESS", "text": ""}, {"location": "available_software/detail/MESS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MESS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MESS, load one of these modules using a module load command like:

                  module load MESS/0.1.6-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MESS/0.1.6-foss-2019b - x x - x x"}, {"location": "available_software/detail/METIS/", "title": "METIS", "text": ""}, {"location": "available_software/detail/METIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which METIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using METIS, load one of these modules using a module load command like:

                  module load METIS/5.1.0-GCCcore-12.3.0\n
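
                  As a minimal usage sketch (the graph file name and the number of parts are illustrative, not part of this overview), partitioning a graph into 8 parts with the bundled gpmetis tool could look like:

                  gpmetis my_graph.graph 8\n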

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty METIS/5.1.0-GCCcore-12.3.0 x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x METIS/5.1.0-GCCcore-11.3.0 x x x x x x METIS/5.1.0-GCCcore-11.2.0 x x x x x x METIS/5.1.0-GCCcore-10.3.0 x x x x x x METIS/5.1.0-GCCcore-10.2.0 x x x x x x METIS/5.1.0-GCCcore-9.3.0 - x x - x x METIS/5.1.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MIGRATE-N/", "title": "MIGRATE-N", "text": ""}, {"location": "available_software/detail/MIGRATE-N/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MIGRATE-N installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MIGRATE-N, load one of these modules using a module load command like:

                  module load MIGRATE-N/5.0.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MIGRATE-N/5.0.4-foss-2021b x x x - x x"}, {"location": "available_software/detail/MMseqs2/", "title": "MMseqs2", "text": ""}, {"location": "available_software/detail/MMseqs2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MMseqs2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MMseqs2, load one of these modules using a module load command like:

                  module load MMseqs2/14-7e284-gompi-2023a\n
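
                  As a minimal usage sketch (file and directory names are illustrative, not part of this overview), a sequence search with the easy-search workflow could look like:

                  mmseqs easy-search queries.fasta targets.fasta results.m8 tmp_mmseqs\n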

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MMseqs2/14-7e284-gompi-2023a x x x x x x MMseqs2/14-7e284-gompi-2022a x x x x x x MMseqs2/13-45111-gompi-2021b x x x - x x MMseqs2/13-45111-gompi-2021a x x x - x x MMseqs2/13-45111-gompi-2020b x x x x x x MMseqs2/13-45111-20211019-gompi-2020b - x x x x x MMseqs2/13-45111-20211006-gompi-2020b - x x x x - MMseqs2/12-113e3-gompi-2020b - x - - - - MMseqs2/11-e1a1c-iimpi-2019b - x - - - x MMseqs2/10-6d92c-iimpi-2019b - x x - x x MMseqs2/10-6d92c-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MOABS/", "title": "MOABS", "text": ""}, {"location": "available_software/detail/MOABS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MOABS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MOABS, load one of these modules using a module load command like:

                  module load MOABS/1.3.9.6-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MOABS/1.3.9.6-gompi-2019b - x x - x x"}, {"location": "available_software/detail/MONAI/", "title": "MONAI", "text": ""}, {"location": "available_software/detail/MONAI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MONAI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using MONAI, load one of these modules using a module load command like:

                  module load MONAI/1.0.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MONAI/1.0.1-foss-2022a-CUDA-11.7.0 x - - - x - MONAI/1.0.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MOOSE/", "title": "MOOSE", "text": ""}, {"location": "available_software/detail/MOOSE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MOOSE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MOOSE, load one of these modules using a module load command like:

                  module load MOOSE/2022-06-10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MOOSE/2022-06-10-foss-2022a x x x - x x MOOSE/2021-05-18-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/MPC/", "title": "MPC", "text": ""}, {"location": "available_software/detail/MPC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MPC, load one of these modules using a module load command like:

                  module load MPC/1.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MPC/1.3.1-GCCcore-12.3.0 x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x MPC/1.2.1-GCCcore-11.3.0 x x x x x x MPC/1.2.1-GCCcore-11.2.0 x x x x x x MPC/1.2.1-GCCcore-10.2.0 - x x x x x MPC/1.1.0-GCC-9.3.0 - x x - x x MPC/1.1.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/MPFR/", "title": "MPFR", "text": ""}, {"location": "available_software/detail/MPFR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MPFR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MPFR, load one of these modules using a module load command like:

                  module load MPFR/4.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MPFR/4.2.0-GCCcore-12.3.0 x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x MPFR/4.1.0-GCCcore-11.3.0 x x x x x x MPFR/4.1.0-GCCcore-11.2.0 x x x x x x MPFR/4.1.0-GCCcore-10.3.0 x x x x x x MPFR/4.1.0-GCCcore-10.2.0 x x x x x x MPFR/4.0.2-GCCcore-9.3.0 - x x - x x MPFR/4.0.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/MRtrix/", "title": "MRtrix", "text": ""}, {"location": "available_software/detail/MRtrix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MRtrix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MRtrix, load one of these modules using a module load command like:

                  module load MRtrix/3.0.4-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MRtrix/3.0.4-foss-2022b x x x x x x MRtrix/3.0.3-foss-2021a - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-3.7.4 - x x - x x MRtrix/3.0-rc-20191217-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MSFragger/", "title": "MSFragger", "text": ""}, {"location": "available_software/detail/MSFragger/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MSFragger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MSFragger, load one of these modules using a module load command like:

                  module load MSFragger/4.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MSFragger/4.0-Java-11 x x x x x x"}, {"location": "available_software/detail/MUMPS/", "title": "MUMPS", "text": ""}, {"location": "available_software/detail/MUMPS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUMPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MUMPS, load one of these modules using a module load command like:

                  module load MUMPS/5.6.1-foss-2023a-metis\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUMPS/5.6.1-foss-2023a-metis x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x MUMPS/5.5.1-foss-2022a-metis x x x x x x MUMPS/5.4.1-intel-2021b-metis x x x x x x MUMPS/5.4.1-foss-2021b-metis x x x - x x MUMPS/5.4.0-foss-2021a-metis - x x - x x MUMPS/5.3.5-foss-2020b-metis - x x x x x MUMPS/5.2.1-intel-2020a-metis - x x - x x MUMPS/5.2.1-intel-2019b-metis - x x - x x MUMPS/5.2.1-foss-2020a-metis - x x - x x MUMPS/5.2.1-foss-2019b-metis x x x - x x"}, {"location": "available_software/detail/MUMmer/", "title": "MUMmer", "text": ""}, {"location": "available_software/detail/MUMmer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUMmer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MUMmer, load one of these modules using a module load command like:

                  module load MUMmer/4.0.0rc1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUMmer/4.0.0rc1-GCCcore-12.3.0 x x x x x x MUMmer/4.0.0beta2-GCCcore-11.2.0 x x x - x x MUMmer/4.0.0beta2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MUSCLE/", "title": "MUSCLE", "text": ""}, {"location": "available_software/detail/MUSCLE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MUSCLE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MUSCLE, load one of these modules using a module load command like:

                  module load MUSCLE/5.1.0-GCCcore-12.3.0\n
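
                  With the MUSCLE 5.x modules listed below, a minimal alignment run looks like this (seqs.fa is a hypothetical input file; the older 3.8.x releases use -in/-out instead of -align/-output):

                  muscle -align seqs.fa -output aln.fa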

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MUSCLE/5.1.0-GCCcore-12.3.0 x x x x x x MUSCLE/5.1.0-GCCcore-11.3.0 x x x x x x MUSCLE/5.1-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.1551-GCC-10.2.0 - x x - x x MUSCLE/3.8.1551-GCC-8.3.0 - x x - x x MUSCLE/3.8.31-GCCcore-11.2.0 x x x - x x MUSCLE/3.8.31-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/MXNet/", "title": "MXNet", "text": ""}, {"location": "available_software/detail/MXNet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MXNet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MXNet, load one of these modules using a module load command like:

                  module load MXNet/1.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MXNet/1.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/MaSuRCA/", "title": "MaSuRCA", "text": ""}, {"location": "available_software/detail/MaSuRCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MaSuRCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MaSuRCA, load one of these modules using a module load command like:

                  module load MaSuRCA/4.1.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MaSuRCA/4.1.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/Mako/", "title": "Mako", "text": ""}, {"location": "available_software/detail/Mako/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mako installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mako, load one of these modules using a module load command like:

                  module load Mako/1.2.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mako/1.2.4-GCCcore-12.3.0 x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x Mako/1.2.0-GCCcore-11.3.0 x x x x x x Mako/1.1.4-GCCcore-11.2.0 x x x x x x Mako/1.1.4-GCCcore-10.3.0 x x x x x x Mako/1.1.3-GCCcore-10.2.0 x x x x x x Mako/1.1.2-GCCcore-9.3.0 - x x - x x Mako/1.1.0-GCCcore-8.3.0 x x x - x x Mako/1.0.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/MariaDB-connector-c/", "title": "MariaDB-connector-c", "text": ""}, {"location": "available_software/detail/MariaDB-connector-c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MariaDB-connector-c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MariaDB-connector-c, load one of these modules using a module load command like:

                  module load MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MariaDB-connector-c/3.1.7-GCCcore-9.3.0 - x x - x x MariaDB-connector-c/2.3.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MariaDB/", "title": "MariaDB", "text": ""}, {"location": "available_software/detail/MariaDB/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MariaDB installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MariaDB, load one of these modules using a module load command like:

                  module load MariaDB/10.9.3-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MariaDB/10.9.3-GCC-11.3.0 x x x x x x MariaDB/10.6.4-GCC-11.2.0 x x x x x x MariaDB/10.6.4-GCC-10.3.0 x x x - x x MariaDB/10.5.8-GCC-10.2.0 - x x x x x MariaDB/10.4.13-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Mash/", "title": "Mash", "text": ""}, {"location": "available_software/detail/Mash/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mash, load one of these modules using a module load command like:

                  module load Mash/2.3-intel-compilers-2021.4.0\n
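
                  A minimal usage sketch after loading the module (genome1.fna and genome2.fna are hypothetical input files): estimate the distance between two genomes with

                  mash dist genome1.fna genome2.fna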

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mash/2.3-intel-compilers-2021.4.0 x x x - x x Mash/2.3-GCC-12.3.0 x x x x x x Mash/2.3-GCC-11.2.0 x x x - x x Mash/2.2-GCC-9.3.0 - x x x - x"}, {"location": "available_software/detail/Maven/", "title": "Maven", "text": ""}, {"location": "available_software/detail/Maven/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Maven installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Maven, load one of these modules using a module load command like:

                  module load Maven/3.6.3\n
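
                  A minimal usage sketch after loading the module (it assumes a hypothetical project with a pom.xml in the current directory):

                  mvn -version    # confirm which Maven version is active
                  mvn package     # build the project in the current directory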

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Maven/3.6.3 x x x x x x Maven/3.6.0 - - x - x -"}, {"location": "available_software/detail/MaxBin/", "title": "MaxBin", "text": ""}, {"location": "available_software/detail/MaxBin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MaxBin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MaxBin, load one of these modules using a module load command like:

                  module load MaxBin/2.2.7-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MaxBin/2.2.7-gompi-2021b x x x - x x MaxBin/2.2.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MedPy/", "title": "MedPy", "text": ""}, {"location": "available_software/detail/MedPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MedPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MedPy, load one of these modules using a module load command like:

                  module load MedPy/0.4.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MedPy/0.4.0-fosscuda-2020b x - - - x - MedPy/0.4.0-foss-2020b - x x x x x MedPy/0.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Megalodon/", "title": "Megalodon", "text": ""}, {"location": "available_software/detail/Megalodon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Megalodon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Megalodon, load one of these modules using a module load command like:

                  module load Megalodon/2.3.5-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Megalodon/2.3.5-fosscuda-2020b x - - - x - Megalodon/2.3.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/Mercurial/", "title": "Mercurial", "text": ""}, {"location": "available_software/detail/Mercurial/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mercurial installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mercurial, load one of these modules using a module load command like:

                  module load Mercurial/6.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mercurial/6.2-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Mesa/", "title": "Mesa", "text": ""}, {"location": "available_software/detail/Mesa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mesa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mesa, load one of these modules using a module load command like:

                  module load Mesa/23.1.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mesa/23.1.4-GCCcore-12.3.0 x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x Mesa/22.0.3-GCCcore-11.3.0 x x x x x x Mesa/21.1.7-GCCcore-11.2.0 x x x x x x Mesa/21.1.1-GCCcore-10.3.0 x x x x x x Mesa/20.2.1-GCCcore-10.2.0 x x x x x x Mesa/20.0.2-GCCcore-9.3.0 - x x - x x Mesa/19.2.1-GCCcore-8.3.0 - x x - x x Mesa/19.1.7-GCCcore-8.3.0 x x x - x x Mesa/19.0.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Meson/", "title": "Meson", "text": ""}, {"location": "available_software/detail/Meson/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Meson installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Meson, load one of these modules using a module load command like:

                  module load Meson/1.2.3-GCCcore-13.2.0\n
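
                  A minimal usage sketch with the more recent Meson versions listed below (a source tree with a meson.build file is a hypothetical assumption; Meson drives Ninja as its default backend, see the Ninja entry further down):

                  meson setup build         # configure the project into ./build
                  meson compile -C build    # compile it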

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Meson/1.2.3-GCCcore-13.2.0 x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x Meson/0.62.1-GCCcore-11.3.0 x x x x x x Meson/0.59.1-GCCcore-8.3.0-Python-3.7.4 x - x - x x Meson/0.58.2-GCCcore-11.2.0 x x x x x x Meson/0.58.0-GCCcore-10.3.0 x x x x x x Meson/0.55.3-GCCcore-10.2.0 x x x x x x Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x Meson/0.53.2-GCCcore-9.3.0-Python-3.8.2 - x x - x x Meson/0.51.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x Meson/0.50.0-GCCcore-8.2.0-Python-3.7.2 - x - - - -"}, {"location": "available_software/detail/Mesquite/", "title": "Mesquite", "text": ""}, {"location": "available_software/detail/Mesquite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mesquite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mesquite, load one of these modules using a module load command like:

                  module load Mesquite/2.3.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mesquite/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/MetaBAT/", "title": "MetaBAT", "text": ""}, {"location": "available_software/detail/MetaBAT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaBAT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MetaBAT, load one of these modules using a module load command like:

                  module load MetaBAT/2.15-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaBAT/2.15-gompi-2021b x x x - x x MetaBAT/2.15-gompi-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/MetaEuk/", "title": "MetaEuk", "text": ""}, {"location": "available_software/detail/MetaEuk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaEuk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MetaEuk, load one of these modules using a module load command like:

                  module load MetaEuk/6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaEuk/6-GCC-11.2.0 x x x - x x MetaEuk/4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/MetaPhlAn/", "title": "MetaPhlAn", "text": ""}, {"location": "available_software/detail/MetaPhlAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MetaPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MetaPhlAn, load one of these modules using a module load command like:

                  module load MetaPhlAn/4.0.6-foss-2022a\n
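
                  A minimal usage sketch after loading the module (sample.fastq is a hypothetical input file, and a MetaPhlAn marker database must be available or installed separately):

                  metaphlan sample.fastq --input_type fastq --nproc 4 -o profile.txt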

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MetaPhlAn/4.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/Metagenome-Atlas/", "title": "Metagenome-Atlas", "text": ""}, {"location": "available_software/detail/Metagenome-Atlas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Metagenome-Atlas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Metagenome-Atlas, load one of these modules using a module load command like:

                  module load Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Metagenome-Atlas/2.4.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/MethylDackel/", "title": "MethylDackel", "text": ""}, {"location": "available_software/detail/MethylDackel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MethylDackel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MethylDackel, load one of these modules using a module load command like:

                  module load MethylDackel/0.5.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MethylDackel/0.5.0-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/MiXCR/", "title": "MiXCR", "text": ""}, {"location": "available_software/detail/MiXCR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MiXCR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MiXCR, load one of these modules using a module load command like:

                  module load MiXCR/4.6.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MiXCR/4.6.0-Java-17 x x x x x x MiXCR/3.0.13-Java-11 - x x - x -"}, {"location": "available_software/detail/MicrobeAnnotator/", "title": "MicrobeAnnotator", "text": ""}, {"location": "available_software/detail/MicrobeAnnotator/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MicrobeAnnotator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MicrobeAnnotator, load one of these modules using a module load command like:

                  module load MicrobeAnnotator/2.0.5-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MicrobeAnnotator/2.0.5-foss-2021a - x x - x x"}, {"location": "available_software/detail/Mikado/", "title": "Mikado", "text": ""}, {"location": "available_software/detail/Mikado/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mikado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mikado, load one of these modules using a module load command like:

                  module load Mikado/2.3.4-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mikado/2.3.4-foss-2022b x x x x x x"}, {"location": "available_software/detail/MinCED/", "title": "MinCED", "text": ""}, {"location": "available_software/detail/MinCED/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MinCED installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MinCED, load one of these modules using a module load command like:

                  module load MinCED/0.4.2-GCCcore-8.3.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MinCED/0.4.2-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/MinPath/", "title": "MinPath", "text": ""}, {"location": "available_software/detail/MinPath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MinPath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MinPath, load one of these modules using a module load command like:

                  module load MinPath/1.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MinPath/1.6-GCCcore-11.2.0 x x x - x x MinPath/1.4-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Miniconda3/", "title": "Miniconda3", "text": ""}, {"location": "available_software/detail/Miniconda3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Miniconda3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Miniconda3, load one of these modules using a module load command like:

                  module load Miniconda3/23.5.2-0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Miniconda3/23.5.2-0 x x x x x x Miniconda3/22.11.1-1 x x x x x x Miniconda3/4.9.2 - x x - x x Miniconda3/4.8.3 - x x - x x Miniconda3/4.7.10 - - - - - x"}, {"location": "available_software/detail/Minipolish/", "title": "Minipolish", "text": ""}, {"location": "available_software/detail/Minipolish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Minipolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Minipolish, load one of these modules using a module load command like:

                  module load Minipolish/0.1.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Minipolish/0.1.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/MitoHiFi/", "title": "MitoHiFi", "text": ""}, {"location": "available_software/detail/MitoHiFi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MitoHiFi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MitoHiFi, load one of these modules using a module load command like:

                  module load MitoHiFi/3.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MitoHiFi/3.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/ModelTest-NG/", "title": "ModelTest-NG", "text": ""}, {"location": "available_software/detail/ModelTest-NG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ModelTest-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ModelTest-NG, load one of these modules using a module load command like:

                  module load ModelTest-NG/0.1.7-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ModelTest-NG/0.1.7-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Molden/", "title": "Molden", "text": ""}, {"location": "available_software/detail/Molden/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Molden installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Molden, load one of these modules using a module load command like:

                  module load Molden/6.8-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Molden/6.8-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Molekel/", "title": "Molekel", "text": ""}, {"location": "available_software/detail/Molekel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Molekel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Molekel, load one of these modules using a module load command like:

                  module load Molekel/5.4.0-Linux_x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Molekel/5.4.0-Linux_x86_64 x x x - x x"}, {"location": "available_software/detail/Mono/", "title": "Mono", "text": ""}, {"location": "available_software/detail/Mono/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Mono installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Mono, load one of these modules using a module load command like:

                  module load Mono/6.8.0.105-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Mono/6.8.0.105-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/Monocle3/", "title": "Monocle3", "text": ""}, {"location": "available_software/detail/Monocle3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Monocle3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Monocle3, load one of these modules using a module load command like:

                  module load Monocle3/1.3.1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Monocle3/1.3.1-foss-2022a-R-4.2.1 x x x x x x Monocle3/0.2.3-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/MrBayes/", "title": "MrBayes", "text": ""}, {"location": "available_software/detail/MrBayes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MrBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MrBayes, load one of these modules using a module load command like:

                  module load MrBayes/3.2.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MrBayes/3.2.7-gompi-2020b - x x x x x MrBayes/3.2.6-gompi-2020b - x x x x x"}, {"location": "available_software/detail/MuJoCo/", "title": "MuJoCo", "text": ""}, {"location": "available_software/detail/MuJoCo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MuJoCo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MuJoCo, load one of these modules using a module load command like:

                  module load MuJoCo/2.3.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MuJoCo/2.3.7-GCCcore-12.3.0 x x x x x x MuJoCo/2.1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/MultiQC/", "title": "MultiQC", "text": ""}, {"location": "available_software/detail/MultiQC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MultiQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MultiQC, load one of these modules using a module load command like:

                  module load MultiQC/1.14-foss-2022a\n
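
                  A minimal usage sketch after loading the module: MultiQC aggregates whatever supported QC reports it finds under the current directory (the directory contents are assumed, not provided here):

                  multiqc .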

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MultiQC/1.14-foss-2022a x x x x x x MultiQC/1.9-intel-2020a-Python-3.8.2 - x x - x x MultiQC/1.8-intel-2019b-Python-3.7.4 - x x - x x MultiQC/1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/MultilevelEstimators/", "title": "MultilevelEstimators", "text": ""}, {"location": "available_software/detail/MultilevelEstimators/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MultilevelEstimators installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MultilevelEstimators, load one of these modules using a module load command like:

                  module load MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MultilevelEstimators/0.1.0-GCC-11.2.0-Julia-1.7.2 x x x - x x"}, {"location": "available_software/detail/Multiwfn/", "title": "Multiwfn", "text": ""}, {"location": "available_software/detail/Multiwfn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Multiwfn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Multiwfn, load one of these modules using a module load command like:

                  module load Multiwfn/3.6-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Multiwfn/3.6-intel-2019b - x x - x x"}, {"location": "available_software/detail/MyCC/", "title": "MyCC", "text": ""}, {"location": "available_software/detail/MyCC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which MyCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using MyCC, load one of these modules using a module load command like:

                  module load MyCC/2017-03-01-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty MyCC/2017-03-01-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Myokit/", "title": "Myokit", "text": ""}, {"location": "available_software/detail/Myokit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Myokit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Myokit, load one of these modules using a module load command like:

                  module load Myokit/1.32.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Myokit/1.32.0-fosscuda-2020b - - - - x - Myokit/1.32.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/NAMD/", "title": "NAMD", "text": ""}, {"location": "available_software/detail/NAMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NAMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NAMD, load one of these modules using a module load command like:

                  module load NAMD/2.14-foss-2023a-mpi\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NAMD/2.14-foss-2023a-mpi x x x x x x NAMD/2.14-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/NASM/", "title": "NASM", "text": ""}, {"location": "available_software/detail/NASM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NASM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NASM, load one of these modules using a module load command like:

                  module load NASM/2.16.01-GCCcore-13.2.0\n
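
                  A minimal usage sketch after loading the module (hello.asm is a hypothetical source file): assemble it into a 64-bit ELF object with

                  nasm -f elf64 hello.asm -o hello.o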

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NASM/2.16.01-GCCcore-13.2.0 x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x NASM/2.15.05-GCCcore-11.3.0 x x x x x x NASM/2.15.05-GCCcore-11.2.0 x x x x x x NASM/2.15.05-GCCcore-10.3.0 x x x x x x NASM/2.15.05-GCCcore-10.2.0 x x x x x x NASM/2.14.02-GCCcore-9.3.0 - x x - x x NASM/2.14.02-GCCcore-8.3.0 x x x - x x NASM/2.14.02-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/NCCL/", "title": "NCCL", "text": ""}, {"location": "available_software/detail/NCCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NCCL, load one of these modules using a module load command like:

                  module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - NCCL/2.10.3-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - NCCL/2.10.3-GCCcore-10.3.0-CUDA-11.3.1 x - - - x - NCCL/2.8.3-GCCcore-10.2.0-CUDA-11.1.1 x - - - x x NCCL/2.8.3-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/NCL/", "title": "NCL", "text": ""}, {"location": "available_software/detail/NCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NCL, load one of these modules using a module load command like:

                  module load NCL/6.6.2-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCL/6.6.2-intel-2019b - - x - x x"}, {"location": "available_software/detail/NCO/", "title": "NCO", "text": ""}, {"location": "available_software/detail/NCO/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NCO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NCO, load one of these modules using a module load command like:

                  module load NCO/5.0.6-intel-2019b\n
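
                  A minimal usage sketch after loading the module (in.nc and the variable name temperature are hypothetical): extract a single variable from a NetCDF file with the ncks operator:

                  ncks -v temperature in.nc out.nc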

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NCO/5.0.6-intel-2019b - x x - x x NCO/5.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/NECI/", "title": "NECI", "text": ""}, {"location": "available_software/detail/NECI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NECI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NECI, load one of these modules using a module load command like:

                  module load NECI/20230620-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NECI/20230620-foss-2022b x x x x x x NECI/20220711-foss-2022a - x x x x x"}, {"location": "available_software/detail/NEURON/", "title": "NEURON", "text": ""}, {"location": "available_software/detail/NEURON/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NEURON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NEURON, load one of these modules using a module load command like:

                  module load NEURON/7.8.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NEURON/7.8.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/NGS/", "title": "NGS", "text": ""}, {"location": "available_software/detail/NGS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NGS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NGS, load one of these modules using a module load command like:

                  module load NGS/2.11.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NGS/2.11.2-GCCcore-11.2.0 x x x x x x NGS/2.10.9-GCCcore-10.2.0 - x x x x x NGS/2.10.5-GCCcore-9.3.0 - x x - x x NGS/2.10.4-GCCcore-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/NGSpeciesID/", "title": "NGSpeciesID", "text": ""}, {"location": "available_software/detail/NGSpeciesID/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NGSpeciesID installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NGSpeciesID, load one of these modules using a module load command like:

                  module load NGSpeciesID/0.1.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NGSpeciesID/0.1.2.1-foss-2021b x x x - x x NGSpeciesID/0.1.1.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NLMpy/", "title": "NLMpy", "text": ""}, {"location": "available_software/detail/NLMpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLMpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NLMpy, load one of these modules using a module load command like:

                  module load NLMpy/0.1.5-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLMpy/0.1.5-intel-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/NLTK/", "title": "NLTK", "text": ""}, {"location": "available_software/detail/NLTK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NLTK, load one of these modules using a module load command like:

                  module load NLTK/3.8.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLTK/3.8.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/NLopt/", "title": "NLopt", "text": ""}, {"location": "available_software/detail/NLopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NLopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NLopt, load one of these modules using a module load command like:

                  module load NLopt/2.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NLopt/2.7.1-GCCcore-12.3.0 x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x NLopt/2.7.1-GCCcore-11.3.0 x x x x x x NLopt/2.7.0-GCCcore-11.2.0 x x x x x x NLopt/2.7.0-GCCcore-10.3.0 x x x x x x NLopt/2.6.2-GCCcore-10.2.0 x x x x x x NLopt/2.6.1-GCCcore-9.3.0 - x x - x x NLopt/2.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/NOVOPlasty/", "title": "NOVOPlasty", "text": ""}, {"location": "available_software/detail/NOVOPlasty/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NOVOPlasty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NOVOPlasty, load one of these modules using a module load command like:

                  module load NOVOPlasty/3.7-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NOVOPlasty/3.7-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/NSPR/", "title": "NSPR", "text": ""}, {"location": "available_software/detail/NSPR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NSPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NSPR, load one of these modules using a module load command like:

                  module load NSPR/4.35-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NSPR/4.35-GCCcore-12.3.0 x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x NSPR/4.34-GCCcore-11.3.0 x x x x x x NSPR/4.32-GCCcore-11.2.0 x x x x x x NSPR/4.30-GCCcore-10.3.0 x x x x x x NSPR/4.29-GCCcore-10.2.0 x x x x x x NSPR/4.25-GCCcore-9.3.0 - x x - x x NSPR/4.21-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NSS/", "title": "NSS", "text": ""}, {"location": "available_software/detail/NSS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NSS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NSS, load one of these modules using a module load command like:

                  module load NSS/3.89.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NSS/3.89.1-GCCcore-12.3.0 x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x NSS/3.79-GCCcore-11.3.0 x x x x x x NSS/3.69-GCCcore-11.2.0 x x x x x x NSS/3.65-GCCcore-10.3.0 x x x x x x NSS/3.57-GCCcore-10.2.0 x x x x x x NSS/3.51-GCCcore-9.3.0 - x x - x x NSS/3.45-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/NVHPC/", "title": "NVHPC", "text": ""}, {"location": "available_software/detail/NVHPC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NVHPC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NVHPC, load one of these modules using a module load command like:

                  module load NVHPC/21.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NVHPC/21.2 x - x - x - NVHPC/20.9 - - - - x -"}, {"location": "available_software/detail/NanoCaller/", "title": "NanoCaller", "text": ""}, {"location": "available_software/detail/NanoCaller/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoCaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoCaller, load one of these modules using a module load command like:

                  module load NanoCaller/3.4.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoCaller/3.4.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/NanoComp/", "title": "NanoComp", "text": ""}, {"location": "available_software/detail/NanoComp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoComp, load one of these modules using a module load command like:

                  module load NanoComp/1.13.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoComp/1.13.1-intel-2020b - x x - x x NanoComp/1.10.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoFilt/", "title": "NanoFilt", "text": ""}, {"location": "available_software/detail/NanoFilt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoFilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoFilt, load one of these modules using a module load command like:

                  module load NanoFilt/2.6.0-intel-2019b-Python-3.7.4\n
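
                  A minimal usage sketch after loading the module: NanoFilt reads FASTQ from standard input, so a typical filtering step looks like this (reads.fastq.gz and the quality/length thresholds are hypothetical examples):

                  gunzip -c reads.fastq.gz | NanoFilt -q 10 -l 500 > filtered.fastq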

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoFilt/2.6.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoPlot/", "title": "NanoPlot", "text": ""}, {"location": "available_software/detail/NanoPlot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoPlot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoPlot, load one of these modules using a module load command like:

                  module load NanoPlot/1.33.0-intel-2020b\n
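
                  A minimal usage sketch after loading the module (reads.fastq.gz and the output directory name are hypothetical):

                  NanoPlot --fastq reads.fastq.gz -o nanoplot_output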

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoPlot/1.33.0-intel-2020b - x x - x x NanoPlot/1.28.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/NanoStat/", "title": "NanoStat", "text": ""}, {"location": "available_software/detail/NanoStat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanoStat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanoStat, load one of these modules using a module load command like:

                  module load NanoStat/1.6.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanoStat/1.6.0-foss-2022a x x x x x x NanoStat/1.6.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/NanopolishComp/", "title": "NanopolishComp", "text": ""}, {"location": "available_software/detail/NanopolishComp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NanopolishComp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NanopolishComp, load one of these modules using a module load command like:

                  module load NanopolishComp/0.6.11-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NanopolishComp/0.6.11-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/NetPyNE/", "title": "NetPyNE", "text": ""}, {"location": "available_software/detail/NetPyNE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NetPyNE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NetPyNE, load one of these modules using a module load command like:

                  module load NetPyNE/1.0.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NetPyNE/1.0.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/NewHybrids/", "title": "NewHybrids", "text": ""}, {"location": "available_software/detail/NewHybrids/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which NewHybrids installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using NewHybrids, load one of these modules using a module load command like:

                  module load NewHybrids/1.1_Beta3-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NewHybrids/1.1_Beta3-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/NextGenMap/", "title": "NextGenMap", "text": ""}, {"location": "available_software/detail/NextGenMap/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which NextGenMap installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using NextGenMap, load one of these modules using a module load command like:

                  module load NextGenMap/0.5.5-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NextGenMap/0.5.5-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Nextflow/", "title": "Nextflow", "text": ""}, {"location": "available_software/detail/Nextflow/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Nextflow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Nextflow, load one of these modules using a module load command like:

                  module load Nextflow/23.10.0\n
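
                   As a quick sanity check after loading the module (a sketch that assumes the node you are working on can reach the internet, since the public nextflow-io/hello demo pipeline is fetched on the fly):

                   module load Nextflow/23.10.0\nnextflow -version  # confirm the expected Nextflow version is active\nnextflow run hello  # pulls and runs the small public demo pipeline\n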

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nextflow/23.10.0 x x x x x x Nextflow/23.04.2 x x x x x x Nextflow/22.10.5 x x x x x x Nextflow/22.10.0 x x x - x x Nextflow/21.10.6 - x x - x x Nextflow/21.08.0 - - - - - x Nextflow/21.03.0 - x x - x x Nextflow/20.10.0 - x x - x x Nextflow/20.04.1 - - x - x x Nextflow/20.01.0 - - x - x x Nextflow/19.12.0 - - x - x x"}, {"location": "available_software/detail/NiBabel/", "title": "NiBabel", "text": ""}, {"location": "available_software/detail/NiBabel/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which NiBabel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using NiBabel, load one of these modules using a module load command like:

                  module load NiBabel/4.0.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty NiBabel/4.0.2-foss-2022a x x x x x x NiBabel/3.2.1-fosscuda-2020b x - - - x - NiBabel/3.2.1-foss-2021a x x x - x x NiBabel/3.2.1-foss-2020b - x x x x x NiBabel/3.1.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Nim/", "title": "Nim", "text": ""}, {"location": "available_software/detail/Nim/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Nim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Nim, load one of these modules using a module load command like:

                  module load Nim/1.6.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nim/1.6.6-GCCcore-11.2.0 x x x - x x Nim/1.4.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/Ninja/", "title": "Ninja", "text": ""}, {"location": "available_software/detail/Ninja/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Ninja installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Ninja, load one of these modules using a module load command like:

                  module load Ninja/1.11.1-GCCcore-13.2.0\n
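
                   A minimal sketch of driving a CMake-based build with Ninja; it assumes the current directory contains a CMakeLists.txt and that a compatible CMake module (version not listed on this page) has been loaded as well:

                   module load Ninja/1.11.1-GCCcore-13.2.0\ncmake -S . -B build -G Ninja  # generate Ninja build files (requires a loaded CMake module)\ncmake --build build  # let CMake invoke ninja\n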

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ninja/1.11.1-GCCcore-13.2.0 x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x Ninja/1.10.2-GCCcore-11.3.0 x x x x x x Ninja/1.10.2-GCCcore-11.2.0 x x x x x x Ninja/1.10.2-GCCcore-10.3.0 x x x x x x Ninja/1.10.1-GCCcore-10.2.0 x x x x x x Ninja/1.10.0-GCCcore-9.3.0 x x x x x x Ninja/1.9.0-GCCcore-8.3.0 x x x - x x Ninja/1.9.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Nipype/", "title": "Nipype", "text": ""}, {"location": "available_software/detail/Nipype/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Nipype installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Nipype, load one of these modules using a module load command like:

                  module load Nipype/1.8.5-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Nipype/1.8.5-foss-2021a x x x - x x Nipype/1.4.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OBITools3/", "title": "OBITools3", "text": ""}, {"location": "available_software/detail/OBITools3/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OBITools3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OBITools3, load one of these modules using a module load command like:

                  module load OBITools3/3.0.1b26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OBITools3/3.0.1b26-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/ONNX-Runtime/", "title": "ONNX-Runtime", "text": ""}, {"location": "available_software/detail/ONNX-Runtime/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ONNX-Runtime installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ONNX-Runtime, load one of these modules using a module load command like:

                  module load ONNX-Runtime/1.16.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ONNX-Runtime/1.16.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/ONNX/", "title": "ONNX", "text": ""}, {"location": "available_software/detail/ONNX/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ONNX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ONNX, load one of these modules using a module load command like:

                  module load ONNX/1.15.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ONNX/1.15.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/OPERA-MS/", "title": "OPERA-MS", "text": ""}, {"location": "available_software/detail/OPERA-MS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OPERA-MS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OPERA-MS, load one of these modules using a module load command like:

                  module load OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OPERA-MS/0.9.0-20200802-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ORCA/", "title": "ORCA", "text": ""}, {"location": "available_software/detail/ORCA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ORCA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ORCA, load one of these modules using a module load command like:

                  module load ORCA/5.0.4-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ORCA/5.0.4-gompi-2022a x x x x x x ORCA/5.0.3-gompi-2021b x x x x x x ORCA/5.0.2-gompi-2021b x x x x x x ORCA/4.2.1-gompi-2019b - x x - x x ORCA/4.2.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OSU-Micro-Benchmarks/", "title": "OSU-Micro-Benchmarks", "text": ""}, {"location": "available_software/detail/OSU-Micro-Benchmarks/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

                  module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x OSU-Micro-Benchmarks/7.1-1-iimpi-2023a x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a - x - - - - OSU-Micro-Benchmarks/5.8-iimpi-2021b x x x - x x OSU-Micro-Benchmarks/5.7.1-iompi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-iimpi-2021a - - x - x x OSU-Micro-Benchmarks/5.7.1-gompi-2021b x x x - x x OSU-Micro-Benchmarks/5.7-iimpi-2020b - - x x x x OSU-Micro-Benchmarks/5.7-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020b - x x x x x OSU-Micro-Benchmarks/5.6.3-iimpi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-iimpi-2019b - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2020b - - x x x x OSU-Micro-Benchmarks/5.6.3-gompi-2020a - x x - x x OSU-Micro-Benchmarks/5.6.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Oases/", "title": "Oases", "text": ""}, {"location": "available_software/detail/Oases/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Oases installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Oases, load one of these modules using a module load command like:

                  module load Oases/20180312-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Oases/20180312-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/Omnipose/", "title": "Omnipose", "text": ""}, {"location": "available_software/detail/Omnipose/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Omnipose installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Omnipose, load one of these modules using a module load command like:

                  module load Omnipose/0.4.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Omnipose/0.4.4-foss-2022a-CUDA-11.7.0 x - - - x - Omnipose/0.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/OpenAI-Gym/", "title": "OpenAI-Gym", "text": ""}, {"location": "available_software/detail/OpenAI-Gym/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenAI-Gym installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenAI-Gym, load one of these modules using a module load command like:

                  module load OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenAI-Gym/0.17.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenBLAS/", "title": "OpenBLAS", "text": ""}, {"location": "available_software/detail/OpenBLAS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenBLAS, load one of these modules using a module load command like:

                  module load OpenBLAS/0.3.24-GCC-13.2.0\n
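
                   A minimal linking sketch, where my_blas_test.c stands in for your own C source file; $EBROOTOPENBLAS follows the usual EasyBuild convention and is set by the module, and the matching GCC compiler is normally pulled in as a dependency (load a GCC module explicitly if it is not):

                   module load OpenBLAS/0.3.24-GCC-13.2.0\ngcc -O2 my_blas_test.c -o my_blas_test -L$EBROOTOPENBLAS/lib -lopenblas -lm  # my_blas_test.c is a placeholder\n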

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x OpenBLAS/0.3.20-GCC-11.3.0 x x x x x x OpenBLAS/0.3.18-GCC-11.2.0 x x x x x x OpenBLAS/0.3.15-GCC-10.3.0 x x x x x x OpenBLAS/0.3.12-GCC-10.2.0 x x x x x x OpenBLAS/0.3.9-GCC-9.3.0 - x x - x x OpenBLAS/0.3.7-GCC-8.3.0 x x x - x x"}, {"location": "available_software/detail/OpenBabel/", "title": "OpenBabel", "text": ""}, {"location": "available_software/detail/OpenBabel/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenBabel installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenBabel, load one of these modules using a module load command like:

                  module load OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenBabel/3.1.1-iimpi-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/OpenCV/", "title": "OpenCV", "text": ""}, {"location": "available_software/detail/OpenCV/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenCV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenCV, load one of these modules using a module load command like:

                  module load OpenCV/4.6.0-foss-2022a-contrib\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenCV/4.6.0-foss-2022a-contrib x x x x x x OpenCV/4.6.0-foss-2022a-CUDA-11.7.0-contrib x - x - x - OpenCV/4.5.5-foss-2021b-contrib x x x - x x OpenCV/4.5.3-foss-2021a-contrib - x x - x x OpenCV/4.5.3-foss-2021a-CUDA-11.3.1-contrib x - - - x - OpenCV/4.5.1-fosscuda-2020b-contrib x - - - x - OpenCV/4.5.1-foss-2020b-contrib - x x - x x OpenCV/4.2.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenCoarrays/", "title": "OpenCoarrays", "text": ""}, {"location": "available_software/detail/OpenCoarrays/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenCoarrays installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenCoarrays, load one of these modules using a module load command like:

                  module load OpenCoarrays/2.8.0-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenCoarrays/2.8.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenEXR/", "title": "OpenEXR", "text": ""}, {"location": "available_software/detail/OpenEXR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenEXR, load one of these modules using a module load command like:

                  module load OpenEXR/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x OpenEXR/3.1.5-GCCcore-11.3.0 x x x x x x OpenEXR/3.1.1-GCCcore-11.2.0 x x x - x x OpenEXR/3.0.1-GCCcore-10.3.0 x x x - x x OpenEXR/2.5.5-GCCcore-10.2.0 x x x x x x OpenEXR/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenFOAM-Extend/", "title": "OpenFOAM-Extend", "text": ""}, {"location": "available_software/detail/OpenFOAM-Extend/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenFOAM-Extend installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenFOAM-Extend, load one of these modules using a module load command like:

                  module load OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFOAM-Extend/4.1-20200408-foss-2019b-Python-2.7.16 - x x - x x OpenFOAM-Extend/4.1-20191120-intel-2019b-Python-2.7.16 - x x - x - OpenFOAM-Extend/4.0-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/OpenFOAM/", "title": "OpenFOAM", "text": ""}, {"location": "available_software/detail/OpenFOAM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenFOAM, load one of these modules using a module load command like:

                  module load OpenFOAM/v2206-foss-2022a\n
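
                   A sketch of activating OpenFOAM after loading the module, assuming the installation follows the usual convention of exposing its setup script via $FOAM_BASH:

                   module load OpenFOAM/v2206-foss-2022a\nsource $FOAM_BASH  # assumption: activates the OpenFOAM environment as in typical installations\nblockMesh -help  # quick check that OpenFOAM utilities are available\n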

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFOAM/v2206-foss-2022a x x x x x x OpenFOAM/v2112-foss-2021b x x x x x x OpenFOAM/v2106-foss-2021a x x x x x x OpenFOAM/v2012-foss-2020a - x x - x x OpenFOAM/v2006-foss-2020a - x x - x x OpenFOAM/v1912-foss-2019b - x x - x x OpenFOAM/v1906-foss-2019b - x x - x x OpenFOAM/10-foss-2023a x x x x x x OpenFOAM/10-foss-2022a x x x x x x OpenFOAM/9-intel-2021a - x x - x x OpenFOAM/9-foss-2021a x x x x x x OpenFOAM/8-intel-2020b - x - - - - OpenFOAM/8-foss-2020b x x x x x x OpenFOAM/8-foss-2020a - x x - x x OpenFOAM/7-foss-2019b-20200508 x x x - x x OpenFOAM/7-foss-2019b - x x - x x OpenFOAM/6-foss-2019b - x x - x x OpenFOAM/5.0-20180606-foss-2019b - x x - x x OpenFOAM/2.3.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/OpenFace/", "title": "OpenFace", "text": ""}, {"location": "available_software/detail/OpenFace/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenFace installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenFace, load one of these modules using a module load command like:

                  module load OpenFace/2.2.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFace/2.2.0-foss-2021a-CUDA-11.3.1 - - - - x - OpenFace/2.2.0-foss-2021a - x x - x x"}, {"location": "available_software/detail/OpenFold/", "title": "OpenFold", "text": ""}, {"location": "available_software/detail/OpenFold/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenFold installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenFold, load one of these modules using a module load command like:

                  module load OpenFold/1.0.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenFold/1.0.1-foss-2022a-CUDA-11.7.0 - - x - - - OpenFold/1.0.1-foss-2021a-CUDA-11.3.1 x - - - x - OpenFold/1.0.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/OpenForceField/", "title": "OpenForceField", "text": ""}, {"location": "available_software/detail/OpenForceField/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenForceField installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenForceField, load one of these modules using a module load command like:

                  module load OpenForceField/0.7.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenForceField/0.7.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenImageIO/", "title": "OpenImageIO", "text": ""}, {"location": "available_software/detail/OpenImageIO/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenImageIO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenImageIO, load one of these modules using a module load command like:

                  module load OpenImageIO/2.0.12-iimpi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenImageIO/2.0.12-iimpi-2019b - x x - x x OpenImageIO/2.0.12-gompi-2019b - x x - x x"}, {"location": "available_software/detail/OpenJPEG/", "title": "OpenJPEG", "text": ""}, {"location": "available_software/detail/OpenJPEG/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenJPEG, load one of these modules using a module load command like:

                  module load OpenJPEG/2.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x OpenJPEG/2.5.0-GCCcore-11.3.0 x x x x x x OpenJPEG/2.4.0-GCCcore-11.2.0 x x x x x x OpenJPEG/2.4.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/OpenMM-PLUMED/", "title": "OpenMM-PLUMED", "text": ""}, {"location": "available_software/detail/OpenMM-PLUMED/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenMM-PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenMM-PLUMED, load one of these modules using a module load command like:

                  module load OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMM-PLUMED/1.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMM/", "title": "OpenMM", "text": ""}, {"location": "available_software/detail/OpenMM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenMM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenMM, load one of these modules using a module load command like:

                  module load OpenMM/8.0.0-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMM/8.0.0-foss-2022a-CUDA-11.7.0 x - - - x - OpenMM/8.0.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2022a-CUDA-11.7.0 - - x - - - OpenMM/7.7.0-foss-2022a x x x x x x OpenMM/7.7.0-foss-2021a-CUDA-11.3.1 x - - - x - OpenMM/7.7.0-foss-2021a x x x - x x OpenMM/7.5.1-fosscuda-2020b x - - - x - OpenMM/7.5.1-foss-2021b-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021b-CUDA-11.4.1-DeepMind-patch x - - - x - OpenMM/7.5.1-foss-2021a-DeepMind-patch x - x - x - OpenMM/7.5.1-foss-2021a-CUDA-11.3.1-DeepMind-patch x - - - x - OpenMM/7.5.0-intel-2020b - x x - x x OpenMM/7.5.0-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.5.0-fosscuda-2020b x - - - x - OpenMM/7.5.0-foss-2020b x x x x x x OpenMM/7.4.2-intel-2020a-Python-3.8.2 - x x - x x OpenMM/7.4.1-intel-2019b-Python-3.7.4 - x x - x x OpenMM/7.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenMMTools/", "title": "OpenMMTools", "text": ""}, {"location": "available_software/detail/OpenMMTools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenMMTools installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenMMTools, load one of these modules using a module load command like:

                  module load OpenMMTools/0.20.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMMTools/0.20.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenMPI/", "title": "OpenMPI", "text": ""}, {"location": "available_software/detail/OpenMPI/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenMPI, load one of these modules using a module load command like:

                  module load OpenMPI/4.1.6-GCC-13.2.0\n
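
                   A minimal compile-and-run sketch; hello_mpi.c is a placeholder for your own MPI source file, and the fixed rank count is for illustration only (inside a batch job the resource manager normally determines how many ranks to start):

                   module load OpenMPI/4.1.6-GCC-13.2.0\nmpicc -O2 hello_mpi.c -o hello_mpi  # hello_mpi.c is a placeholder source file\nmpirun -np 4 ./hello_mpi  # example rank count, adjust to your job's resources\n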

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMPI/4.1.6-GCC-13.2.0 x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x OpenMPI/4.1.4-GCC-11.3.0 x x x x x x OpenMPI/4.1.1-intel-compilers-2021.2.0 x x x x x x OpenMPI/4.1.1-GCC-11.2.0 x x x x x x OpenMPI/4.1.1-GCC-10.3.0 x x x x x x OpenMPI/4.0.5-iccifort-2020.4.304 x x x x x x OpenMPI/4.0.5-gcccuda-2020b x x x x x x OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1 x - x - x - OpenMPI/4.0.5-GCC-10.2.0 x x x x x x OpenMPI/4.0.3-iccifort-2020.1.217 - x - - - - OpenMPI/4.0.3-GCC-9.3.0 - x x x x x OpenMPI/3.1.4-GCC-8.3.0-ucx - x - - - - OpenMPI/3.1.4-GCC-8.3.0 x x x x x x"}, {"location": "available_software/detail/OpenMolcas/", "title": "OpenMolcas", "text": ""}, {"location": "available_software/detail/OpenMolcas/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenMolcas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenMolcas, load one of these modules using a module load command like:

                  module load OpenMolcas/21.06-iomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenMolcas/21.06-iomkl-2021a x x x x x x OpenMolcas/21.06-intel-2021a - x x - x x"}, {"location": "available_software/detail/OpenPGM/", "title": "OpenPGM", "text": ""}, {"location": "available_software/detail/OpenPGM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenPGM, load one of these modules using a module load command like:

                  module load OpenPGM/5.2.122-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-12.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-11.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-10.2.0 x x x x x x OpenPGM/5.2.122-GCCcore-9.3.0 x x x x x x OpenPGM/5.2.122-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/OpenPIV/", "title": "OpenPIV", "text": ""}, {"location": "available_software/detail/OpenPIV/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenPIV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenPIV, load one of these modules using a module load command like:

                  module load OpenPIV/0.21.8-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenPIV/0.21.8-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/OpenSSL/", "title": "OpenSSL", "text": ""}, {"location": "available_software/detail/OpenSSL/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenSSL, load one of these modules using a module load command like:

                  module load OpenSSL/1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSSL/1.1 x x x x x x"}, {"location": "available_software/detail/OpenSees/", "title": "OpenSees", "text": ""}, {"location": "available_software/detail/OpenSees/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenSees installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenSees, load one of these modules using a module load command like:

                  module load OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSees/3.2.0-intel-2020a-Python-3.8.2-parallel - x x - x x OpenSees/3.2.0-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/OpenSlide-Java/", "title": "OpenSlide-Java", "text": ""}, {"location": "available_software/detail/OpenSlide-Java/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenSlide-Java installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenSlide-Java, load one of these modules using a module load command like:

                  module load OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSlide-Java/0.12.4-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/OpenSlide/", "title": "OpenSlide", "text": ""}, {"location": "available_software/detail/OpenSlide/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OpenSlide installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OpenSlide, load one of these modules using a module load command like:

                  module load OpenSlide/3.4.1-GCCcore-12.3.0-largefiles\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OpenSlide/3.4.1-GCCcore-12.3.0-largefiles x x x x x x OpenSlide/3.4.1-GCCcore-11.3.0-largefiles x - x - x - OpenSlide/3.4.1-GCCcore-11.2.0 x x x - x x OpenSlide/3.4.1-GCCcore-10.3.0-largefiles x x x - x x"}, {"location": "available_software/detail/Optuna/", "title": "Optuna", "text": ""}, {"location": "available_software/detail/Optuna/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Optuna installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Optuna, load one of these modules using a module load command like:

                  module load Optuna/3.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Optuna/3.1.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/OrthoFinder/", "title": "OrthoFinder", "text": ""}, {"location": "available_software/detail/OrthoFinder/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which OrthoFinder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using OrthoFinder, load one of these modules using a module load command like:

                  module load OrthoFinder/2.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty OrthoFinder/2.5.5-foss-2023a x x x x x x OrthoFinder/2.5.4-foss-2020b - x x x x x OrthoFinder/2.5.2-foss-2020b - x x x x x OrthoFinder/2.3.11-intel-2019b-Python-3.7.4 - x x - x x OrthoFinder/2.3.8-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Osi/", "title": "Osi", "text": ""}, {"location": "available_software/detail/Osi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Osi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Osi, load one of these modules using a module load command like:

                  module load Osi/0.108.9-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Osi/0.108.9-GCC-12.3.0 x x x x x x Osi/0.108.8-GCC-12.2.0 x x x x x x Osi/0.108.7-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PASA/", "title": "PASA", "text": ""}, {"location": "available_software/detail/PASA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PASA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PASA, load one of these modules using a module load command like:

                  module load PASA/2.5.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PASA/2.5.3-foss-2022b x x x x x x"}, {"location": "available_software/detail/PBGZIP/", "title": "PBGZIP", "text": ""}, {"location": "available_software/detail/PBGZIP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PBGZIP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PBGZIP, load one of these modules using a module load command like:

                  module load PBGZIP/20160804-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PBGZIP/20160804-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PCRE/", "title": "PCRE", "text": ""}, {"location": "available_software/detail/PCRE/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PCRE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PCRE, load one of these modules using a module load command like:

                  module load PCRE/8.45-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PCRE/8.45-GCCcore-12.3.0 x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x PCRE/8.45-GCCcore-11.3.0 x x x x x x PCRE/8.45-GCCcore-11.2.0 x x x x x x PCRE/8.44-GCCcore-10.3.0 x x x x x x PCRE/8.44-GCCcore-10.2.0 x x x x x x PCRE/8.44-GCCcore-9.3.0 x x x x x x PCRE/8.43-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/PCRE2/", "title": "PCRE2", "text": ""}, {"location": "available_software/detail/PCRE2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PCRE2, load one of these modules using a module load command like:

                  module load PCRE2/10.42-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PCRE2/10.42-GCCcore-12.3.0 x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x PCRE2/10.40-GCCcore-11.3.0 x x x x x x PCRE2/10.37-GCCcore-11.2.0 x x x x x x PCRE2/10.36-GCCcore-10.3.0 x x x x x x PCRE2/10.36 - x x - x - PCRE2/10.35-GCCcore-10.2.0 x x x x x x PCRE2/10.34-GCCcore-9.3.0 - x x - x x PCRE2/10.33-GCCcore-8.3.0 x x x - x x PCRE2/10.32 - - x - x -"}, {"location": "available_software/detail/PEAR/", "title": "PEAR", "text": ""}, {"location": "available_software/detail/PEAR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PEAR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PEAR, load one of these modules using a module load command like:

                  module load PEAR/0.9.11-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PEAR/0.9.11-GCCcore-9.3.0 - x x - x x PEAR/0.9.11-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/PETSc/", "title": "PETSc", "text": ""}, {"location": "available_software/detail/PETSc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PETSc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PETSc, load one of these modules using a module load command like:

                  module load PETSc/3.18.4-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PETSc/3.18.4-intel-2021b x x x x x x PETSc/3.17.4-foss-2022a x x x x x x PETSc/3.15.1-foss-2021a - x x - x x PETSc/3.14.4-foss-2020b - x x x x x PETSc/3.12.4-intel-2019b-Python-3.7.4 - - x - x - PETSc/3.12.4-intel-2019b-Python-2.7.16 - x x - x x PETSc/3.12.4-foss-2020a-Python-3.8.2 - x x - x x PETSc/3.12.4-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/PHYLIP/", "title": "PHYLIP", "text": ""}, {"location": "available_software/detail/PHYLIP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PHYLIP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PHYLIP, load one of these modules using a module load command like:

                  module load PHYLIP/3.697-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PHYLIP/3.697-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/PICRUSt2/", "title": "PICRUSt2", "text": ""}, {"location": "available_software/detail/PICRUSt2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PICRUSt2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PICRUSt2, load one of these modules using a module load command like:

                  module load PICRUSt2/2.5.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PICRUSt2/2.5.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/PLAMS/", "title": "PLAMS", "text": ""}, {"location": "available_software/detail/PLAMS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PLAMS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PLAMS, load one of these modules using a module load command like:

                  module load PLAMS/1.5.1-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLAMS/1.5.1-intel-2022a x x x x x x"}, {"location": "available_software/detail/PLINK/", "title": "PLINK", "text": ""}, {"location": "available_software/detail/PLINK/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PLINK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PLINK, load one of these modules using a module load command like:

                  module load PLINK/2.00a3.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLINK/2.00a3.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/PLUMED/", "title": "PLUMED", "text": ""}, {"location": "available_software/detail/PLUMED/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PLUMED installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PLUMED, load one of these modules using a module load command like:

                  module load PLUMED/2.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLUMED/2.9.0-foss-2023a x x x x x x PLUMED/2.9.0-foss-2022b x x x x x x PLUMED/2.8.1-foss-2022a x x x x x x PLUMED/2.7.3-foss-2021b x x x - x x PLUMED/2.7.2-foss-2021a x x x x x x PLUMED/2.6.2-intelcuda-2020b - - - - x - PLUMED/2.6.2-intel-2020b - x x - x - PLUMED/2.6.2-foss-2020b - x x x x x PLUMED/2.6.0-iomkl-2020a-Python-3.8.2 - x - - - - PLUMED/2.6.0-intel-2020a-Python-3.8.2 - x x - x x PLUMED/2.6.0-foss-2020a-Python-3.8.2 - x x - x x PLUMED/2.5.3-intel-2019b-Python-3.7.4 - x x - x x PLUMED/2.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PLY/", "title": "PLY", "text": ""}, {"location": "available_software/detail/PLY/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PLY installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PLY, load one of these modules using a module load command like:

                  module load PLY/3.11-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PLY/3.11-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PMIx/", "title": "PMIx", "text": ""}, {"location": "available_software/detail/PMIx/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PMIx installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PMIx, load one of these modules using a module load command like:

                  module load PMIx/4.2.6-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PMIx/4.2.6-GCCcore-13.2.0 x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x PMIx/4.1.2-GCCcore-11.3.0 x x x x x x PMIx/4.1.0-GCCcore-11.2.0 x x x x x x PMIx/3.2.3-GCCcore-10.3.0 x x x x x x PMIx/3.1.5-GCCcore-10.2.0 x x x x x x PMIx/3.1.5-GCCcore-9.3.0 x x x x x x PMIx/3.1.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/POT/", "title": "POT", "text": ""}, {"location": "available_software/detail/POT/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which POT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using POT, load one of these modules using a module load command like:

                  module load POT/0.9.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty POT/0.9.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/POV-Ray/", "title": "POV-Ray", "text": ""}, {"location": "available_software/detail/POV-Ray/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which POV-Ray installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using POV-Ray, load one of these modules using a module load command like:

                  module load POV-Ray/3.7.0.8-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty POV-Ray/3.7.0.8-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/PPanGGOLiN/", "title": "PPanGGOLiN", "text": ""}, {"location": "available_software/detail/PPanGGOLiN/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PPanGGOLiN installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PPanGGOLiN, load one of these modules using a module load command like:

                  module load PPanGGOLiN/1.1.136-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PPanGGOLiN/1.1.136-foss-2021b x x x - x x"}, {"location": "available_software/detail/PRANK/", "title": "PRANK", "text": ""}, {"location": "available_software/detail/PRANK/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PRANK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PRANK, load one of these modules using a module load command like:

                  module load PRANK/170427-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRANK/170427-GCC-10.2.0 - x x x x x PRANK/170427-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/PRINSEQ/", "title": "PRINSEQ", "text": ""}, {"location": "available_software/detail/PRINSEQ/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PRINSEQ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PRINSEQ, load one of these modules using a module load command like:

                  module load PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRINSEQ/0.20.4-foss-2021b-Perl-5.34.0 x x x - x x PRINSEQ/0.20.4-foss-2020b-Perl-5.32.0 - x x x x -"}, {"location": "available_software/detail/PRISMS-PF/", "title": "PRISMS-PF", "text": ""}, {"location": "available_software/detail/PRISMS-PF/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PRISMS-PF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PRISMS-PF, load one of these modules using a module load command like:

                  module load PRISMS-PF/2.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PRISMS-PF/2.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/PROJ/", "title": "PROJ", "text": ""}, {"location": "available_software/detail/PROJ/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which PROJ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using PROJ, load one of these modules using a module load command like:

                  module load PROJ/9.2.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PROJ/9.2.0-GCCcore-12.3.0 x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x PROJ/9.0.0-GCCcore-11.3.0 x x x x x x PROJ/8.1.0-GCCcore-11.2.0 x x x x x x PROJ/8.0.1-GCCcore-10.3.0 x x x x x x PROJ/7.2.1-GCCcore-10.2.0 - x x x x x PROJ/7.0.0-GCCcore-9.3.0 - x x - x x PROJ/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pandoc/", "title": "Pandoc", "text": ""}, {"location": "available_software/detail/Pandoc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Pandoc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Pandoc, load one of these modules using a module load command like:

                  module load Pandoc/2.13\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pandoc/2.13 - x x x x x"}, {"location": "available_software/detail/Pango/", "title": "Pango", "text": ""}, {"location": "available_software/detail/Pango/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Pango installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Pango, load one of these modules using a module load command like:

                  module load Pango/1.50.14-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pango/1.50.14-GCCcore-12.3.0 x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x Pango/1.50.7-GCCcore-11.3.0 x x x x x x Pango/1.48.8-GCCcore-11.2.0 x x x x x x Pango/1.48.5-GCCcore-10.3.0 x x x x x x Pango/1.47.0-GCCcore-10.2.0 x x x x x x Pango/1.44.7-GCCcore-9.3.0 - x x - x x Pango/1.44.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/ParMETIS/", "title": "ParMETIS", "text": ""}, {"location": "available_software/detail/ParMETIS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ParMETIS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ParMETIS, load one of these modules using a module load command like:

                  module load ParMETIS/4.0.3-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParMETIS/4.0.3-iimpi-2020a - x x - x x ParMETIS/4.0.3-iimpi-2019b - x x - x x ParMETIS/4.0.3-gompi-2022a x x x x x x ParMETIS/4.0.3-gompi-2021a - x x - x x ParMETIS/4.0.3-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParMGridGen/", "title": "ParMGridGen", "text": ""}, {"location": "available_software/detail/ParMGridGen/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ParMGridGen installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ParMGridGen, load one of these modules using a module load command like:

                  module load ParMGridGen/1.0-iimpi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParMGridGen/1.0-iimpi-2019b - x x - x x ParMGridGen/1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/ParaView/", "title": "ParaView", "text": ""}, {"location": "available_software/detail/ParaView/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ParaView installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ParaView, load one of these modules using a module load command like:

                  module load ParaView/5.11.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParaView/5.11.2-foss-2023a x x x x x x ParaView/5.10.1-foss-2022a-mpi x x x x x x ParaView/5.9.1-intel-2021a-mpi - x x - x x ParaView/5.9.1-foss-2021b-mpi x x x x x x ParaView/5.9.1-foss-2021a-mpi x x x x x x ParaView/5.8.1-intel-2020b-mpi - x - - - - ParaView/5.8.1-foss-2020b-mpi x x x x x x ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi - x x - x x ParaView/5.6.2-foss-2019b-Python-3.7.4-mpi x x x - x x ParaView/5.4.1-foss-2019b-Python-2.7.16-mpi - x x - x x"}, {"location": "available_software/detail/ParmEd/", "title": "ParmEd", "text": ""}, {"location": "available_software/detail/ParmEd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ParmEd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ParmEd, load one of these modules using a module load command like:

                  module load ParmEd/3.2.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ParmEd/3.2.0-intel-2020a-Python-3.8.2 - x x - x x ParmEd/3.2.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Parsl/", "title": "Parsl", "text": ""}, {"location": "available_software/detail/Parsl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Parsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Parsl, load one of these modules using a module load command like:

                  module load Parsl/2023.7.17-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Parsl/2023.7.17-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PartitionFinder/", "title": "PartitionFinder", "text": ""}, {"location": "available_software/detail/PartitionFinder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PartitionFinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PartitionFinder, load one of these modules using a module load command like:

                  module load PartitionFinder/2.1.1-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PartitionFinder/2.1.1-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/Perl-bundle-CPAN/", "title": "Perl-bundle-CPAN", "text": ""}, {"location": "available_software/detail/Perl-bundle-CPAN/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Perl-bundle-CPAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

                  module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Perl/", "title": "Perl", "text": ""}, {"location": "available_software/detail/Perl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Perl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Perl, load one of these modules using a module load command like:

                  module load Perl/5.38.0-GCCcore-13.2.0\n
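
                  As a quick sanity check after loading this module, the following should report the module-provided Perl (an illustrative sketch, not part of the generated overview):

                  # show which perl is on the PATH and print its version
                  which perl
                  perl -v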

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Perl/5.38.0-GCCcore-13.2.0 x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x Perl/5.34.1-GCCcore-11.3.0-minimal x x x x x x Perl/5.34.1-GCCcore-11.3.0 x x x x x x Perl/5.34.0-GCCcore-11.2.0-minimal x x x x x x Perl/5.34.0-GCCcore-11.2.0 x x x x x x Perl/5.32.1-GCCcore-10.3.0-minimal x x x x x x Perl/5.32.1-GCCcore-10.3.0 x x x x x x Perl/5.32.0-GCCcore-10.2.0-minimal x x x x x x Perl/5.32.0-GCCcore-10.2.0 x x x x x x Perl/5.30.2-GCCcore-9.3.0-minimal x x x x x x Perl/5.30.2-GCCcore-9.3.0 x x x x x x Perl/5.30.0-GCCcore-8.3.0-minimal x x x x x x Perl/5.30.0-GCCcore-8.3.0 x x x x x x Perl/5.28.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Phenoflow/", "title": "Phenoflow", "text": ""}, {"location": "available_software/detail/Phenoflow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Phenoflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Phenoflow, load one of these modules using a module load command like:

                  module load Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Phenoflow/1.1.2-20200917-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/PhyloPhlAn/", "title": "PhyloPhlAn", "text": ""}, {"location": "available_software/detail/PhyloPhlAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PhyloPhlAn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PhyloPhlAn, load one of these modules using a module load command like:

                  module load PhyloPhlAn/3.0.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PhyloPhlAn/3.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Pillow-SIMD/", "title": "Pillow-SIMD", "text": ""}, {"location": "available_software/detail/Pillow-SIMD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pillow-SIMD, load one of these modules using a module load command like:

                  module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x Pillow-SIMD/9.5.0-GCCcore-12.2.0 x x x x x x Pillow-SIMD/9.2.0-GCCcore-11.3.0 x x x x x x Pillow-SIMD/8.2.0-GCCcore-10.3.0 x x x - x x Pillow-SIMD/7.1.2-GCCcore-10.2.0 x x x x x x Pillow-SIMD/6.0.x.post0-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/Pillow/", "title": "Pillow", "text": ""}, {"location": "available_software/detail/Pillow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pillow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pillow, load one of these modules using a module load command like:

                  module load Pillow/10.2.0-GCCcore-13.2.0\n
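
                  Note that Pillow is imported under the name PIL. Assuming the module also pulls in a matching Python as its dependency, a quick import check could look like this (illustrative only):

                  # confirm the PIL package is importable and report the Pillow version
                  python -c 'import PIL; print(PIL.__version__)'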

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pillow/10.2.0-GCCcore-13.2.0 x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x Pillow/9.1.1-GCCcore-11.3.0 x x x x x x Pillow/8.3.2-GCCcore-11.2.0 x x x x x x Pillow/8.3.1-GCCcore-11.2.0 x x x - x x Pillow/8.2.0-GCCcore-10.3.0 x x x x x x Pillow/8.0.1-GCCcore-10.2.0 x x x x x x Pillow/7.0.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x Pillow/6.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Pilon/", "title": "Pilon", "text": ""}, {"location": "available_software/detail/Pilon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pilon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pilon, load one of these modules using a module load command like:

                  module load Pilon/1.23-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pilon/1.23-Java-11 x x x x x x Pilon/1.23-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Pint/", "title": "Pint", "text": ""}, {"location": "available_software/detail/Pint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pint, load one of these modules using a module load command like:

                  module load Pint/0.22-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pint/0.22-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PnetCDF/", "title": "PnetCDF", "text": ""}, {"location": "available_software/detail/PnetCDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PnetCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PnetCDF, load one of these modules using a module load command like:

                  module load PnetCDF/1.12.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PnetCDF/1.12.3-gompi-2022a x - x - x - PnetCDF/1.12.3-gompi-2021b x x x - x x"}, {"location": "available_software/detail/Porechop/", "title": "Porechop", "text": ""}, {"location": "available_software/detail/Porechop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Porechop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Porechop, load one of these modules using a module load command like:

                  module load Porechop/0.2.4-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Porechop/0.2.4-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PostgreSQL/", "title": "PostgreSQL", "text": ""}, {"location": "available_software/detail/PostgreSQL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PostgreSQL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PostgreSQL, load one of these modules using a module load command like:

                  module load PostgreSQL/16.1-GCCcore-12.3.0\n
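
                  This module provides the PostgreSQL client tools and libraries but does not start a database server. An illustrative check after loading it:

                  # report the versions of the client and build-configuration tools
                  psql --version
                  pg_config --version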

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x PostgreSQL/14.4-GCCcore-11.3.0 x x x x x x PostgreSQL/13.4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/Primer3/", "title": "Primer3", "text": ""}, {"location": "available_software/detail/Primer3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Primer3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Primer3, load one of these modules using a module load command like:

                  module load Primer3/2.5.0-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Primer3/2.5.0-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/ProBiS/", "title": "ProBiS", "text": ""}, {"location": "available_software/detail/ProBiS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ProBiS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ProBiS, load one of these modules using a module load command like:

                  module load ProBiS/20230403-gompi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ProBiS/20230403-gompi-2022b x x x x x x"}, {"location": "available_software/detail/ProtHint/", "title": "ProtHint", "text": ""}, {"location": "available_software/detail/ProtHint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ProtHint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ProtHint, load one of these modules using a module load command like:

                  module load ProtHint/2.6.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ProtHint/2.6.0-GCC-11.3.0 x x x x x x ProtHint/2.6.0-GCC-11.2.0 x x x x x x ProtHint/2.6.0-GCC-10.2.0 x x x x x x ProtHint/2.4.0-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/PsiCLASS/", "title": "PsiCLASS", "text": ""}, {"location": "available_software/detail/PsiCLASS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PsiCLASS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PsiCLASS, load one of these modules using a module load command like:

                  module load PsiCLASS/1.0.3-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PsiCLASS/1.0.3-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/PuLP/", "title": "PuLP", "text": ""}, {"location": "available_software/detail/PuLP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PuLP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PuLP, load one of these modules using a module load command like:

                  module load PuLP/2.8.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PuLP/2.8.0-foss-2023a x x x x x x PuLP/2.7.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/PyBerny/", "title": "PyBerny", "text": ""}, {"location": "available_software/detail/PyBerny/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyBerny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyBerny, load one of these modules using a module load command like:

                  module load PyBerny/0.6.3-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyBerny/0.6.3-foss-2022b x x x x x x PyBerny/0.6.3-foss-2022a - x x x x x PyBerny/0.6.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyCairo/", "title": "PyCairo", "text": ""}, {"location": "available_software/detail/PyCairo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyCairo, load one of these modules using a module load command like:

                  module load PyCairo/1.21.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCairo/1.21.0-GCCcore-11.3.0 x x x x x x PyCairo/1.20.1-GCCcore-11.2.0 x x x x x x PyCairo/1.20.1-GCCcore-10.3.0 x x x x x x PyCairo/1.20.0-GCCcore-10.2.0 - x x x x x PyCairo/1.18.2-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/PyCalib/", "title": "PyCalib", "text": ""}, {"location": "available_software/detail/PyCalib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCalib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyCalib, load one of these modules using a module load command like:

                  module load PyCalib/20230531-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCalib/20230531-gfbf-2022b x x x x x x PyCalib/0.1.0.dev0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyCheMPS2/", "title": "PyCheMPS2", "text": ""}, {"location": "available_software/detail/PyCheMPS2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyCheMPS2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyCheMPS2, load one of these modules using a module load command like:

                  module load PyCheMPS2/1.8.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyCheMPS2/1.8.12-foss-2022b x x x x x x PyCheMPS2/1.8.12-foss-2022a - x x x x x"}, {"location": "available_software/detail/PyFoam/", "title": "PyFoam", "text": ""}, {"location": "available_software/detail/PyFoam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyFoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyFoam, load one of these modules using a module load command like:

                  module load PyFoam/2020.5-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyFoam/2020.5-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyGEOS/", "title": "PyGEOS", "text": ""}, {"location": "available_software/detail/PyGEOS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyGEOS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyGEOS, load one of these modules using a module load command like:

                  module load PyGEOS/0.8-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyGEOS/0.8-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyGObject/", "title": "PyGObject", "text": ""}, {"location": "available_software/detail/PyGObject/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyGObject installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyGObject, load one of these modules using a module load command like:

                  module load PyGObject/3.42.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyGObject/3.42.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyInstaller/", "title": "PyInstaller", "text": ""}, {"location": "available_software/detail/PyInstaller/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyInstaller installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyInstaller, load one of these modules using a module load command like:

                  module load PyInstaller/6.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyInstaller/6.3.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/PyKeOps/", "title": "PyKeOps", "text": ""}, {"location": "available_software/detail/PyKeOps/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyKeOps installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyKeOps, load one of these modules using a module load command like:

                  module load PyKeOps/2.0-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyKeOps/2.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/PyMC/", "title": "PyMC", "text": ""}, {"location": "available_software/detail/PyMC/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMC, load one of these modules using a module load command like:

                  module load PyMC/5.9.0-foss-2023a\n
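
                  Assuming the module also loads a compatible Python (as EasyBuild modules typically do), a quick import test could look like this (illustrative only):

                  # confirm the pymc package is importable and report its version
                  python -c 'import pymc; print(pymc.__version__)'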

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMC/5.9.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/PyMC3/", "title": "PyMC3", "text": ""}, {"location": "available_software/detail/PyMC3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMC3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMC3, load one of these modules using a module load command like:

                  module load PyMC3/3.11.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMC3/3.11.1-intel-2021b x x x - x x PyMC3/3.11.1-intel-2020b - - x - x x PyMC3/3.11.1-fosscuda-2020b - - - - x - PyMC3/3.8-intel-2019b-Python-3.7.4 - - x - x x PyMC3/3.8-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyMDE/", "title": "PyMDE", "text": ""}, {"location": "available_software/detail/PyMDE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMDE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMDE, load one of these modules using a module load command like:

                  module load PyMDE/0.1.18-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMDE/0.1.18-foss-2022a-CUDA-11.7.0 x - x - x - PyMDE/0.1.18-foss-2022a x x x x x x"}, {"location": "available_software/detail/PyMOL/", "title": "PyMOL", "text": ""}, {"location": "available_software/detail/PyMOL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyMOL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyMOL, load one of these modules using a module load command like:

                  module load PyMOL/2.5.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyMOL/2.5.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/PyOD/", "title": "PyOD", "text": ""}, {"location": "available_software/detail/PyOD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyOD, load one of these modules using a module load command like:

                  module load PyOD/0.8.7-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOD/0.8.7-intel-2020b - x x - x x PyOD/0.8.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenCL/", "title": "PyOpenCL", "text": ""}, {"location": "available_software/detail/PyOpenCL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOpenCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyOpenCL, load one of these modules using a module load command like:

                  module load PyOpenCL/2023.1.4-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOpenCL/2023.1.4-foss-2023a x x x x x x PyOpenCL/2023.1.4-foss-2022a-CUDA-11.7.0 x - - - x - PyOpenCL/2023.1.4-foss-2022a x x x x x x PyOpenCL/2021.2.13-foss-2021b-CUDA-11.4.1 x - - - x - PyOpenCL/2021.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyOpenGL/", "title": "PyOpenGL", "text": ""}, {"location": "available_software/detail/PyOpenGL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyOpenGL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyOpenGL, load one of these modules using a module load command like:

                  module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.3.0 x x x x x x PyOpenGL/3.1.6-GCCcore-11.2.0 x x x - x x PyOpenGL/3.1.5-GCCcore-10.3.0 - x x - x x PyOpenGL/3.1.5-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/PyPy/", "title": "PyPy", "text": ""}, {"location": "available_software/detail/PyPy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyPy, load one of these modules using a module load command like:

                  module load PyPy/7.3.12-3.10\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyPy/7.3.12-3.10 x x x x x x"}, {"location": "available_software/detail/PyQt5/", "title": "PyQt5", "text": ""}, {"location": "available_software/detail/PyQt5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyQt5, load one of these modules using a module load command like:

                  module load PyQt5/5.15.7-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyQt5/5.15.7-GCCcore-12.2.0 x x x x x x PyQt5/5.15.5-GCCcore-11.3.0 x x x x x x PyQt5/5.15.4-GCCcore-11.2.0 x x x x x x PyQt5/5.15.4-GCCcore-10.3.0 - x x - x x PyQt5/5.15.1-GCCcore-10.2.0 x x x x x x PyQt5/5.15.1-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/PyQtGraph/", "title": "PyQtGraph", "text": ""}, {"location": "available_software/detail/PyQtGraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyQtGraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyQtGraph, load one of these modules using a module load command like:

                  module load PyQtGraph/0.13.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyQtGraph/0.13.3-foss-2022a x x x x x x PyQtGraph/0.12.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/PyRETIS/", "title": "PyRETIS", "text": ""}, {"location": "available_software/detail/PyRETIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyRETIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyRETIS, load one of these modules using a module load command like:

                  module load PyRETIS/2.5.0-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyRETIS/2.5.0-intel-2020b - x x - x x PyRETIS/2.5.0-intel-2020a-Python-3.8.2 - - x - x x PyRETIS/2.5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/PyRe/", "title": "PyRe", "text": ""}, {"location": "available_software/detail/PyRe/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyRe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyRe, load one of these modules using a module load command like:

                  module load PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyRe/5.0.3-20190221-intel-2019b-Python-3.7.4 - x - - - x PyRe/5.0.3-20190221-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PySCF/", "title": "PySCF", "text": ""}, {"location": "available_software/detail/PySCF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PySCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PySCF, load one of these modules using a module load command like:

                  module load PySCF/2.4.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PySCF/2.4.0-foss-2022b x x x x x x PySCF/2.1.1-foss-2022a - x x x x x PySCF/1.7.6-gomkl-2021a x x x - x x PySCF/1.7.6-foss-2021a x x x - x x"}, {"location": "available_software/detail/PyStan/", "title": "PyStan", "text": ""}, {"location": "available_software/detail/PyStan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyStan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyStan, load one of these modules using a module load command like:

                  module load PyStan/2.19.1.1-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyStan/2.19.1.1-intel-2020b - x x - x x"}, {"location": "available_software/detail/PyTables/", "title": "PyTables", "text": ""}, {"location": "available_software/detail/PyTables/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTables installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTables, load one of these modules using a module load command like:

                  module load PyTables/3.8.0-foss-2022a\n
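
                  Note that PyTables is imported under the name tables. Assuming the module also provides a matching Python, a quick check could be (illustrative only):

                  # the PyTables package is imported as 'tables'
                  python -c 'import tables; print(tables.__version__)'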

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTables/3.8.0-foss-2022a x x x x x x PyTables/3.6.1-intel-2020b - x x - x x PyTables/3.6.1-intel-2020a-Python-3.8.2 x x x x x x PyTables/3.6.1-fosscuda-2020b - - - - x - PyTables/3.6.1-foss-2021b x x x x x x PyTables/3.6.1-foss-2021a x x x x x x PyTables/3.6.1-foss-2020b - x x x x x PyTables/3.6.1-foss-2020a-Python-3.8.2 - x x - x x PyTables/3.6.1-foss-2019b-Python-3.7.4 - x x - x x PyTables/3.5.2-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/PyTensor/", "title": "PyTensor", "text": ""}, {"location": "available_software/detail/PyTensor/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTensor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTensor, load one of these modules using a module load command like:

                  module load PyTensor/2.17.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTensor/2.17.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/PyTorch-Geometric/", "title": "PyTorch-Geometric", "text": ""}, {"location": "available_software/detail/PyTorch-Geometric/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Geometric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch-Geometric, load one of these modules using a module load command like:

                  module load PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Geometric/1.7.0-fosscuda-2020b-numba-0.53.1 - - - - x - PyTorch-Geometric/1.7.0-foss-2020b-numba-0.53.1 - x x - x x PyTorch-Geometric/1.6.3-fosscuda-2020b - - - - x - PyTorch-Geometric/1.4.2-foss-2019b-Python-3.7.4-PyTorch-1.4.0 - x x - x x PyTorch-Geometric/1.3.2-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/PyTorch-Ignite/", "title": "PyTorch-Ignite", "text": ""}, {"location": "available_software/detail/PyTorch-Ignite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Ignite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch-Ignite, load one of these modules using a module load command like:

                  module load PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Ignite/0.4.12-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/PyTorch-Lightning/", "title": "PyTorch-Lightning", "text": ""}, {"location": "available_software/detail/PyTorch-Lightning/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch-Lightning installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch-Lightning, load one of these modules using a module load command like:

                  module load PyTorch-Lightning/2.1.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch-Lightning/2.1.3-foss-2023a x x x x x x PyTorch-Lightning/2.1.2-foss-2022b x x x x x x PyTorch-Lightning/1.8.4-foss-2022a-CUDA-11.7.0 x - - - x - PyTorch-Lightning/1.8.4-foss-2022a x x x x x x PyTorch-Lightning/1.7.7-foss-2022a-CUDA-11.7.0 - - x - - - PyTorch-Lightning/1.5.9-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch-Lightning/1.5.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/PyTorch/", "title": "PyTorch", "text": ""}, {"location": "available_software/detail/PyTorch/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyTorch installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyTorch, load one of these modules using a module load command like:

                  module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1\n
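
                  The -CUDA-suffixed builds are only useful on nodes with NVIDIA GPUs; after loading such a module on a GPU node, a quick check could look like this (an illustrative sketch, not part of the generated data):

                  # report the PyTorch version and whether a CUDA-capable GPU is visible
                  python -c 'import torch; print(torch.__version__, torch.cuda.is_available())'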

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyTorch/2.1.2-foss-2023a-CUDA-12.1.1 x - x - x - PyTorch/2.1.2-foss-2023a x x x x x x PyTorch/1.13.1-foss-2022b x x x x x x PyTorch/1.13.1-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.1-foss-2022a-CUDA-11.7.0 - - x - x - PyTorch/1.12.1-foss-2022a x x x x - x PyTorch/1.12.1-foss-2021b - x x x x x PyTorch/1.12.0-foss-2022a-CUDA-11.7.0 x - x - x - PyTorch/1.12.0-foss-2022a x x x x x x PyTorch/1.11.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-fosscuda-2020b x - - - - - PyTorch/1.10.0-foss-2021a-CUDA-11.3.1 x - - - x - PyTorch/1.10.0-foss-2021a x x x x x x PyTorch/1.9.0-fosscuda-2020b x - - - - - PyTorch/1.8.1-fosscuda-2020b x - - - - - PyTorch/1.7.1-fosscuda-2020b x - - - x - PyTorch/1.7.1-foss-2020b - x x x x x PyTorch/1.6.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.4.0-foss-2019b-Python-3.7.4 - x x - x x PyTorch/1.3.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyVCF/", "title": "PyVCF", "text": ""}, {"location": "available_software/detail/PyVCF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyVCF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyVCF, load one of these modules using a module load command like:

                  module load PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyVCF/0.6.8-GCC-8.3.0-Python-2.7.16 - - x - x - PyVCF/0.6.8-GCC-8.3.0 - x - - - -"}, {"location": "available_software/detail/PyVCF3/", "title": "PyVCF3", "text": ""}, {"location": "available_software/detail/PyVCF3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyVCF3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyVCF3, load one of these modules using a module load command like:

                  module load PyVCF3/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyVCF3/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PyWBGT/", "title": "PyWBGT", "text": ""}, {"location": "available_software/detail/PyWBGT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyWBGT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyWBGT, load one of these modules using a module load command like:

                  module load PyWBGT/1.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyWBGT/1.0.0-foss-2022a x x x x x x PyWBGT/1.0.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/PyWavelets/", "title": "PyWavelets", "text": ""}, {"location": "available_software/detail/PyWavelets/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyWavelets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyWavelets, load one of these modules using a module load command like:

                  module load PyWavelets/1.1.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyWavelets/1.1.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/PyYAML/", "title": "PyYAML", "text": ""}, {"location": "available_software/detail/PyYAML/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyYAML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyYAML, load one of these modules using a module load command like:

                  module load PyYAML/6.0-GCCcore-12.3.0\n
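
                  Assuming the module also pulls in the matching Python, a quick import check could be (illustrative only):

                  # confirm the yaml package is importable and report its version
                  python -c 'import yaml; print(yaml.__version__)'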

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyYAML/6.0-GCCcore-12.3.0 x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x PyYAML/6.0-GCCcore-11.3.0 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x PyYAML/5.4.1-GCCcore-11.2.0 x x x x x x PyYAML/5.4.1-GCCcore-10.3.0 x x x x x x PyYAML/5.3.1-GCCcore-10.2.0 x x x x x x PyYAML/5.3-GCCcore-9.3.0 x x x x x x PyYAML/5.1.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/PyZMQ/", "title": "PyZMQ", "text": ""}, {"location": "available_software/detail/PyZMQ/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PyZMQ, load one of these modules using a module load command like:

                  module load PyZMQ/25.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x PyZMQ/24.0.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/PycURL/", "title": "PycURL", "text": ""}, {"location": "available_software/detail/PycURL/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which PycURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using PycURL, load one of these modules using a module load command like:

                  module load PycURL/7.45.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty PycURL/7.45.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/Pychopper/", "title": "Pychopper", "text": ""}, {"location": "available_software/detail/Pychopper/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pychopper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pychopper, load one of these modules using a module load command like:

                  module load Pychopper/2.3.1-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pychopper/2.3.1-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Pyomo/", "title": "Pyomo", "text": ""}, {"location": "available_software/detail/Pyomo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pyomo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pyomo, load one of these modules using a module load command like:

                  module load Pyomo/6.4.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pyomo/6.4.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/Pysam/", "title": "Pysam", "text": ""}, {"location": "available_software/detail/Pysam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Pysam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Pysam, load one of these modules using a module load command like:

                  module load Pysam/0.22.0-GCC-12.3.0\n
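
                  Assuming the module also loads its Python dependency, a quick import check could be (illustrative only):

                  # confirm the pysam package is importable and report its version
                  python -c 'import pysam; print(pysam.__version__)'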

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Pysam/0.22.0-GCC-12.3.0 x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x Pysam/0.19.1-GCC-11.3.0 x x x x x x Pysam/0.18.0-GCC-11.2.0 x x x - x x Pysam/0.17.0-GCC-11.2.0-Python-2.7.18 x x x x x x Pysam/0.17.0-GCC-11.2.0 x x x - x x Pysam/0.16.0.1-iccifort-2020.4.304 - x x x x x Pysam/0.16.0.1-iccifort-2020.1.217 - x x - x x Pysam/0.16.0.1-GCC-10.3.0 x x x x x x Pysam/0.16.0.1-GCC-10.2.0-Python-2.7.18 - x x x x x Pysam/0.16.0.1-GCC-10.2.0 x x x x x x Pysam/0.16.0.1-GCC-9.3.0 - x x - x x Pysam/0.16.0.1-GCC-8.3.0 - x x - x x Pysam/0.15.3-iccifort-2019.5.281 - x x - x x Pysam/0.15.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Python-bundle-PyPI/", "title": "Python-bundle-PyPI", "text": ""}, {"location": "available_software/detail/Python-bundle-PyPI/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Python-bundle-PyPI, load one of these modules using a module load command like:

                  module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Python/", "title": "Python", "text": ""}, {"location": "available_software/detail/Python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Python, load one of these modules using a module load command like:

                  module load Python/3.11.5-GCCcore-13.2.0\n
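
                  After loading this module, a quick check that the module-provided interpreter is the one on your PATH could look like this (an illustrative sketch; the exact version depends on the module you load):

                  # print the interpreter version and its installation path
                  python --version
                  python -c 'import sys; print(sys.executable)'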

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Python/3.11.5-GCCcore-13.2.0 x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x Python/3.10.4-GCCcore-11.3.0-bare x x x x x x Python/3.10.4-GCCcore-11.3.0 x x x x x x Python/3.9.6-GCCcore-11.2.0-bare x x x x x x Python/3.9.6-GCCcore-11.2.0 x x x x x x Python/3.9.5-GCCcore-10.3.0-bare x x x x x x Python/3.9.5-GCCcore-10.3.0 x x x x x x Python/3.8.6-GCCcore-10.2.0 x x x x x x Python/3.8.2-GCCcore-9.3.0 x x x x x x Python/3.7.4-GCCcore-8.3.0 x x x x x x Python/3.7.2-GCCcore-8.2.0 - x - - - - Python/2.7.18-GCCcore-12.3.0 x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.3.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0-bare x x x x x x Python/2.7.18-GCCcore-11.2.0 x x x x x x Python/2.7.18-GCCcore-10.3.0-bare x x x x x x Python/2.7.18-GCCcore-10.2.0 x x x x x x Python/2.7.18-GCCcore-9.3.0 x x x x x x Python/2.7.16-GCCcore-8.3.0 x x x - x x Python/2.7.15-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/QCA/", "title": "QCA", "text": ""}, {"location": "available_software/detail/QCA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QCA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QCA, load one of these modules using a module load command like:

                  module load QCA/2.3.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QCA/2.3.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QCxMS/", "title": "QCxMS", "text": ""}, {"location": "available_software/detail/QCxMS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QCxMS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QCxMS, load one of these modules using a module load command like:

                  module load QCxMS/5.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QCxMS/5.0.3 x x x x x x"}, {"location": "available_software/detail/QD/", "title": "QD", "text": ""}, {"location": "available_software/detail/QD/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QD, load one of these modules using a module load command like:

                  module load QD/2.3.17-NVHPC-21.2-20160110\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QD/2.3.17-NVHPC-21.2-20160110 x - x - x -"}, {"location": "available_software/detail/QGIS/", "title": "QGIS", "text": ""}, {"location": "available_software/detail/QGIS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QGIS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QGIS, load one of these modules using a module load command like:

                  module load QGIS/3.28.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QGIS/3.28.1-foss-2021b x x x x x x"}, {"location": "available_software/detail/QIIME2/", "title": "QIIME2", "text": ""}, {"location": "available_software/detail/QIIME2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QIIME2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QIIME2, load one of these modules using a module load command like:

                  module load QIIME2/2023.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QIIME2/2023.5.1-foss-2022a x x x x x x QIIME2/2022.11 x x x x x x QIIME2/2021.8 - - - - - x QIIME2/2020.11 - x x - x x QIIME2/2020.8 - x x - x x QIIME2/2019.7 - - - - - x"}, {"location": "available_software/detail/QScintilla/", "title": "QScintilla", "text": ""}, {"location": "available_software/detail/QScintilla/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QScintilla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QScintilla, load one of these modules using a module load command like:

                  module load QScintilla/2.11.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QScintilla/2.11.6-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QUAST/", "title": "QUAST", "text": ""}, {"location": "available_software/detail/QUAST/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which QUAST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using QUAST, load one of these modules using a module load command like:

                  module load QUAST/5.2.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QUAST/5.2.0-foss-2022a x x x x x x QUAST/5.0.2-foss-2020b-Python-2.7.18 - x x x x x QUAST/5.0.2-foss-2020b - x x x x x QUAST/5.0.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qhull/", "title": "Qhull", "text": ""}, {"location": "available_software/detail/Qhull/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Qhull installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Qhull, load one of these modules using a module load command like:

                  module load Qhull/2020.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qhull/2020.2-GCCcore-12.3.0 x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x Qhull/2020.2-GCCcore-11.3.0 x x x x x x Qhull/2020.2-GCCcore-11.2.0 x x x x x x Qhull/2020.2-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/Qt5/", "title": "Qt5", "text": ""}, {"location": "available_software/detail/Qt5/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Qt5 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Qt5, load one of these modules using a module load command like:

                  module load Qt5/5.15.10-GCCcore-12.3.0\n
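
Loading a Qt5 module should make the Qt build tools available; a minimal check using qmake:

qmake --version   # reports the Qt version provided by the loaded module\n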

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qt5/5.15.10-GCCcore-12.3.0 x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x Qt5/5.15.5-GCCcore-11.3.0 x x x x x x Qt5/5.15.2-GCCcore-11.2.0 x x x x x x Qt5/5.15.2-GCCcore-10.3.0 x x x x x x Qt5/5.14.2-GCCcore-10.2.0 x x x x x x Qt5/5.14.1-GCCcore-9.3.0 - x x - x x Qt5/5.13.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Qt5Webkit/", "title": "Qt5Webkit", "text": ""}, {"location": "available_software/detail/Qt5Webkit/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qt5Webkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Qt5Webkit, load one of these modules using a module load command like:

                  module load Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qt5Webkit/5.212.0-alpha4-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtKeychain/", "title": "QtKeychain", "text": ""}, {"location": "available_software/detail/QtKeychain/#available-modules", "title": "Available modules", "text": "

The overview below shows which QtKeychain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QtKeychain, load one of these modules using a module load command like:

                  module load QtKeychain/0.13.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QtKeychain/0.13.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/QtPy/", "title": "QtPy", "text": ""}, {"location": "available_software/detail/QtPy/#available-modules", "title": "Available modules", "text": "

The overview below shows which QtPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QtPy, load one of these modules using a module load command like:

                  module load QtPy/2.3.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QtPy/2.3.0-GCCcore-11.3.0 x x x x x x QtPy/2.2.1-GCCcore-11.2.0 x x x - x x QtPy/1.9.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/Qtconsole/", "title": "Qtconsole", "text": ""}, {"location": "available_software/detail/Qtconsole/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qtconsole installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Qtconsole, load one of these modules using a module load command like:

                  module load Qtconsole/5.4.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qtconsole/5.4.0-GCCcore-11.3.0 x x x x x x Qtconsole/5.3.2-GCCcore-11.2.0 x x x - x x Qtconsole/5.0.2-foss-2020b - x - - - - Qtconsole/5.0.2-GCCcore-10.2.0 - - x x x x"}, {"location": "available_software/detail/QuPath/", "title": "QuPath", "text": ""}, {"location": "available_software/detail/QuPath/#available-modules", "title": "Available modules", "text": "

The overview below shows which QuPath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QuPath, load one of these modules using a module load command like:

                  module load QuPath/0.5.0-GCCcore-12.3.0-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuPath/0.5.0-GCCcore-12.3.0-Java-17 x x x x x x"}, {"location": "available_software/detail/Qualimap/", "title": "Qualimap", "text": ""}, {"location": "available_software/detail/Qualimap/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qualimap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Qualimap, load one of these modules using a module load command like:

                  module load Qualimap/2.2.1-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qualimap/2.2.1-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/QuantumESPRESSO/", "title": "QuantumESPRESSO", "text": ""}, {"location": "available_software/detail/QuantumESPRESSO/#available-modules", "title": "Available modules", "text": "

The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QuantumESPRESSO, load one of these modules using a module load command like:

                  module load QuantumESPRESSO/7.0-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuantumESPRESSO/7.0-intel-2021b x x x - x x QuantumESPRESSO/6.5-intel-2019b - x x - x x"}, {"location": "available_software/detail/QuickFF/", "title": "QuickFF", "text": ""}, {"location": "available_software/detail/QuickFF/#available-modules", "title": "Available modules", "text": "

The overview below shows which QuickFF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using QuickFF, load one of these modules using a module load command like:

                  module load QuickFF/2.2.7-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty QuickFF/2.2.7-intel-2020a-Python-3.8.2 x x x x x x QuickFF/2.2.4-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Qwt/", "title": "Qwt", "text": ""}, {"location": "available_software/detail/Qwt/#available-modules", "title": "Available modules", "text": "

The overview below shows which Qwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Qwt, load one of these modules using a module load command like:

                  module load Qwt/6.2.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Qwt/6.2.0-GCCcore-11.2.0 x x x x x x Qwt/6.2.0-GCCcore-10.3.0 - x x - x x Qwt/6.1.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/R-INLA/", "title": "R-INLA", "text": ""}, {"location": "available_software/detail/R-INLA/#available-modules", "title": "Available modules", "text": "

The overview below shows which R-INLA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using R-INLA, load one of these modules using a module load command like:

                  module load R-INLA/24.01.18-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-INLA/24.01.18-foss-2023a x x x x x x R-INLA/21.05.02-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/R-bundle-Bioconductor/", "title": "R-bundle-Bioconductor", "text": ""}, {"location": "available_software/detail/R-bundle-Bioconductor/#available-modules", "title": "Available modules", "text": "

The overview below shows which R-bundle-Bioconductor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

                  module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n
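
The bundle extends an R installation with Bioconductor packages; a minimal, package-agnostic check that R can see the extra libraries:

Rscript -e 'nrow(installed.packages())'   # counts the R packages visible in the current library paths\n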

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x R-bundle-Bioconductor/3.15-foss-2022a-R-4.2.1 x x x x x x R-bundle-Bioconductor/3.15-foss-2021b-R-4.2.0 x x x x x x R-bundle-Bioconductor/3.14-foss-2021b-R-4.1.2 x x x x x x R-bundle-Bioconductor/3.13-foss-2021a-R-4.1.0 - x x - x x R-bundle-Bioconductor/3.12-foss-2020b-R-4.0.3 x x x x x x R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0 - x x - x x R-bundle-Bioconductor/3.10-foss-2019b - x x - x x"}, {"location": "available_software/detail/R-bundle-CRAN/", "title": "R-bundle-CRAN", "text": ""}, {"location": "available_software/detail/R-bundle-CRAN/#available-modules", "title": "Available modules", "text": "

The overview below shows which R-bundle-CRAN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using R-bundle-CRAN, load one of these modules using a module load command like:

                  module load R-bundle-CRAN/2023.12-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R-bundle-CRAN/2023.12-foss-2023a x x x x x x"}, {"location": "available_software/detail/R/", "title": "R", "text": ""}, {"location": "available_software/detail/R/#available-modules", "title": "Available modules", "text": "

The overview below shows which R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using R, load one of these modules using a module load command like:

                  module load R/4.3.2-gfbf-2023a\n
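
A minimal check after loading an R module; Rscript runs R code non-interactively:

R --version\n
Rscript -e 'sessionInfo()'   # shows the R version and loaded base packages\n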

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R/4.3.2-gfbf-2023a x x x x x x R/4.2.2-foss-2022b x x x x x x R/4.2.1-foss-2022a x x x x x x R/4.2.0-foss-2021b x x x x x x R/4.1.2-foss-2021b x x x x x x R/4.1.0-foss-2021a x x x x x x R/4.0.5-fosscuda-2020b - - - - x - R/4.0.5-foss-2020b - x x x x x R/4.0.4-fosscuda-2020b - - - - x - R/4.0.4-foss-2020b - x x x x x R/4.0.3-fosscuda-2020b - - - - x - R/4.0.3-foss-2020b x x x x x x R/4.0.0-foss-2020a - x x - x x R/3.6.3-foss-2020a - - x - x x R/3.6.2-foss-2019b - x x - x x"}, {"location": "available_software/detail/R2jags/", "title": "R2jags", "text": ""}, {"location": "available_software/detail/R2jags/#available-modules", "title": "Available modules", "text": "

The overview below shows which R2jags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using R2jags, load one of these modules using a module load command like:

                  module load R2jags/0.7-1-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty R2jags/0.7-1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/RASPA2/", "title": "RASPA2", "text": ""}, {"location": "available_software/detail/RASPA2/#available-modules", "title": "Available modules", "text": "

The overview below shows which RASPA2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RASPA2, load one of these modules using a module load command like:

                  module load RASPA2/2.0.41-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RASPA2/2.0.41-foss-2020b - x x x x x"}, {"location": "available_software/detail/RAxML-NG/", "title": "RAxML-NG", "text": ""}, {"location": "available_software/detail/RAxML-NG/#available-modules", "title": "Available modules", "text": "

The overview below shows which RAxML-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RAxML-NG, load one of these modules using a module load command like:

                  module load RAxML-NG/1.2.0-GCC-12.3.0\n
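
A minimal usage sketch after loading RAxML-NG; alignment.fasta is a hypothetical input alignment:

raxml-ng --version\n
raxml-ng --check --msa alignment.fasta --model GTR+G   # validate the alignment before a full tree search\n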

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RAxML-NG/1.2.0-GCC-12.3.0 x x x x x x RAxML-NG/1.0.3-GCC-10.2.0 - x x - x - RAxML-NG/0.9.0-gompi-2019b - x x - x x RAxML-NG/0.9.0-GCC-8.3.0 - - x - x -"}, {"location": "available_software/detail/RAxML/", "title": "RAxML", "text": ""}, {"location": "available_software/detail/RAxML/#available-modules", "title": "Available modules", "text": "

The overview below shows which RAxML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RAxML, load one of these modules using a module load command like:

                  module load RAxML/8.2.12-iimpi-2021b-hybrid-avx2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RAxML/8.2.12-iimpi-2021b-hybrid-avx2 x x x - x x RAxML/8.2.12-iimpi-2019b-hybrid-avx2 - x x - x x"}, {"location": "available_software/detail/RDFlib/", "title": "RDFlib", "text": ""}, {"location": "available_software/detail/RDFlib/#available-modules", "title": "Available modules", "text": "

The overview below shows which RDFlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RDFlib, load one of these modules using a module load command like:

                  module load RDFlib/6.2.0-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDFlib/6.2.0-GCCcore-10.3.0 x x x - x x RDFlib/5.0.0-GCCcore-10.2.0 - x x - x x RDFlib/4.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RDKit/", "title": "RDKit", "text": ""}, {"location": "available_software/detail/RDKit/#available-modules", "title": "Available modules", "text": "

The overview below shows which RDKit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RDKit, load one of these modules using a module load command like:

                  module load RDKit/2022.09.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDKit/2022.09.4-foss-2022a x x x x x x RDKit/2022.03.5-foss-2021b x x x - x x RDKit/2020.09.3-foss-2019b-Python-3.7.4 - x x - x x RDKit/2020.03.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/RDP-Classifier/", "title": "RDP-Classifier", "text": ""}, {"location": "available_software/detail/RDP-Classifier/#available-modules", "title": "Available modules", "text": "

The overview below shows which RDP-Classifier installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RDP-Classifier, load one of these modules using a module load command like:

                  module load RDP-Classifier/2.13-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RDP-Classifier/2.13-Java-11 x x x - x x RDP-Classifier/2.12-Java-1.8 - - - - - x"}, {"location": "available_software/detail/RE2/", "title": "RE2", "text": ""}, {"location": "available_software/detail/RE2/#available-modules", "title": "Available modules", "text": "

The overview below shows which RE2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RE2, load one of these modules using a module load command like:

                  module load RE2/2023-08-01-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RE2/2023-08-01-GCCcore-12.3.0 x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x RE2/2022-06-01-GCCcore-11.3.0 x x x x x x RE2/2022-02-01-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/RLCard/", "title": "RLCard", "text": ""}, {"location": "available_software/detail/RLCard/#available-modules", "title": "Available modules", "text": "

The overview below shows which RLCard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RLCard, load one of these modules using a module load command like:

                  module load RLCard/1.0.9-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RLCard/1.0.9-foss-2022a x x x - x x"}, {"location": "available_software/detail/RMBlast/", "title": "RMBlast", "text": ""}, {"location": "available_software/detail/RMBlast/#available-modules", "title": "Available modules", "text": "

The overview below shows which RMBlast installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RMBlast, load one of these modules using a module load command like:

                  module load RMBlast/2.11.0-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RMBlast/2.11.0-gompi-2020b x x x x x x"}, {"location": "available_software/detail/RNA-Bloom/", "title": "RNA-Bloom", "text": ""}, {"location": "available_software/detail/RNA-Bloom/#available-modules", "title": "Available modules", "text": "

The overview below shows which RNA-Bloom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RNA-Bloom, load one of these modules using a module load command like:

                  module load RNA-Bloom/2.0.1-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RNA-Bloom/2.0.1-GCC-12.3.0 x x x x x x RNA-Bloom/1.2.3-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/ROOT/", "title": "ROOT", "text": ""}, {"location": "available_software/detail/ROOT/#available-modules", "title": "Available modules", "text": "

The overview below shows which ROOT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ROOT, load one of these modules using a module load command like:

                  module load ROOT/6.26.06-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ROOT/6.26.06-foss-2022a x x x x x x ROOT/6.24.06-foss-2021b x x x x x x ROOT/6.20.04-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/RSEM/", "title": "RSEM", "text": ""}, {"location": "available_software/detail/RSEM/#available-modules", "title": "Available modules", "text": "

The overview below shows which RSEM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RSEM, load one of these modules using a module load command like:

                  module load RSEM/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RSEM/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/RSeQC/", "title": "RSeQC", "text": ""}, {"location": "available_software/detail/RSeQC/#available-modules", "title": "Available modules", "text": "

The overview below shows which RSeQC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RSeQC, load one of these modules using a module load command like:

                  module load RSeQC/4.0.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RSeQC/4.0.0-foss-2021b x x x - x x RSeQC/4.0.0-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/RStudio-Server/", "title": "RStudio-Server", "text": ""}, {"location": "available_software/detail/RStudio-Server/#available-modules", "title": "Available modules", "text": "

The overview below shows which RStudio-Server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RStudio-Server, load one of these modules using a module load command like:

                  module load RStudio-Server/2022.02.0-443-rhel-x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RStudio-Server/2022.02.0-443-rhel-x86_64 x x x x x - RStudio-Server/1.3.959-foss-2020a-Java-11-R-4.0.0 - - - - - x"}, {"location": "available_software/detail/RTG-Tools/", "title": "RTG-Tools", "text": ""}, {"location": "available_software/detail/RTG-Tools/#available-modules", "title": "Available modules", "text": "

The overview below shows which RTG-Tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RTG-Tools, load one of these modules using a module load command like:

                  module load RTG-Tools/3.12.1-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RTG-Tools/3.12.1-Java-11 x x x x x x"}, {"location": "available_software/detail/Racon/", "title": "Racon", "text": ""}, {"location": "available_software/detail/Racon/#available-modules", "title": "Available modules", "text": "

The overview below shows which Racon installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Racon, load one of these modules using a module load command like:

                  module load Racon/1.5.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Racon/1.5.0-GCCcore-12.3.0 x x x x x x Racon/1.5.0-GCCcore-11.3.0 x x x x x x Racon/1.5.0-GCCcore-11.2.0 x x x - x x Racon/1.4.21-GCCcore-10.3.0 x x x - x x Racon/1.4.21-GCCcore-10.2.0 - x x x x x Racon/1.4.13-GCCcore-9.3.0 - x x - x x Racon/1.4.13-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/RagTag/", "title": "RagTag", "text": ""}, {"location": "available_software/detail/RagTag/#available-modules", "title": "Available modules", "text": "

The overview below shows which RagTag installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RagTag, load one of these modules using a module load command like:

                  module load RagTag/2.0.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RagTag/2.0.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/Ragout/", "title": "Ragout", "text": ""}, {"location": "available_software/detail/Ragout/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ragout installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ragout, load one of these modules using a module load command like:

                  module load Ragout/2.3-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ragout/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/RapidJSON/", "title": "RapidJSON", "text": ""}, {"location": "available_software/detail/RapidJSON/#available-modules", "title": "Available modules", "text": "

The overview below shows which RapidJSON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RapidJSON, load one of these modules using a module load command like:

                  module load RapidJSON/1.1.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.3.0 x x x x x x RapidJSON/1.1.0-GCCcore-11.2.0 x x x x x x RapidJSON/1.1.0-GCCcore-9.3.0 x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/Raven/", "title": "Raven", "text": ""}, {"location": "available_software/detail/Raven/#available-modules", "title": "Available modules", "text": "

The overview below shows which Raven installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Raven, load one of these modules using a module load command like:

                  module load Raven/1.8.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Raven/1.8.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/Ray-project/", "title": "Ray-project", "text": ""}, {"location": "available_software/detail/Ray-project/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ray-project installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ray-project, load one of these modules using a module load command like:

                  module load Ray-project/1.13.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ray-project/1.13.0-foss-2021b x x x - x x Ray-project/1.13.0-foss-2021a x x x - x x Ray-project/0.8.4-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/Ray/", "title": "Ray", "text": ""}, {"location": "available_software/detail/Ray/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ray, load one of these modules using a module load command like:

                  module load Ray/0.8.4-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ray/0.8.4-foss-2019b-Python-3.7.4 - x - - - -"}, {"location": "available_software/detail/ReFrame/", "title": "ReFrame", "text": ""}, {"location": "available_software/detail/ReFrame/#available-modules", "title": "Available modules", "text": "

The overview below shows which ReFrame installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ReFrame, load one of these modules using a module load command like:

                  module load ReFrame/4.2.0\n
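
A minimal check after loading ReFrame, confirming the reframe command is available:

reframe --version\n
reframe --help   # lists the available command-line options\n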

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ReFrame/4.2.0 x x x x x x ReFrame/3.11.2 - x x x x x ReFrame/3.11.1 - x x - x x ReFrame/3.9.1 - x x - x x ReFrame/3.5.2 - x x - x x"}, {"location": "available_software/detail/Redis/", "title": "Redis", "text": ""}, {"location": "available_software/detail/Redis/#available-modules", "title": "Available modules", "text": "

The overview below shows which Redis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Redis, load one of these modules using a module load command like:

                  module load Redis/7.0.8-GCC-11.3.0\n
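
A minimal check after loading the Redis module; redis-server and redis-cli are the standard Redis binaries:

redis-server --version\n
redis-cli --version\n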

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Redis/7.0.8-GCC-11.3.0 x x x x x x Redis/6.2.6-GCC-11.2.0 x x x - x x Redis/6.2.6-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/RegTools/", "title": "RegTools", "text": ""}, {"location": "available_software/detail/RegTools/#available-modules", "title": "Available modules", "text": "

The overview below shows which RegTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RegTools, load one of these modules using a module load command like:

                  module load RegTools/1.0.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RegTools/1.0.0-foss-2022b x x x x x x RegTools/0.5.2-foss-2021b x x x x x x RegTools/0.5.2-foss-2020b - x x x x x RegTools/0.4.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/RepeatMasker/", "title": "RepeatMasker", "text": ""}, {"location": "available_software/detail/RepeatMasker/#available-modules", "title": "Available modules", "text": "

The overview below shows which RepeatMasker installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RepeatMasker, load one of these modules using a module load command like:

                  module load RepeatMasker/4.1.2-p1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RepeatMasker/4.1.2-p1-foss-2020b x x x x x x"}, {"location": "available_software/detail/ResistanceGA/", "title": "ResistanceGA", "text": ""}, {"location": "available_software/detail/ResistanceGA/#available-modules", "title": "Available modules", "text": "

The overview below shows which ResistanceGA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ResistanceGA, load one of these modules using a module load command like:

                  module load ResistanceGA/4.2-5-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ResistanceGA/4.2-5-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/RevBayes/", "title": "RevBayes", "text": ""}, {"location": "available_software/detail/RevBayes/#available-modules", "title": "Available modules", "text": "

The overview below shows which RevBayes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RevBayes, load one of these modules using a module load command like:

                  module load RevBayes/1.2.1-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RevBayes/1.2.1-gompi-2022a x x x x x x RevBayes/1.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Rgurobi/", "title": "Rgurobi", "text": ""}, {"location": "available_software/detail/Rgurobi/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rgurobi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Rgurobi, load one of these modules using a module load command like:

                  module load Rgurobi/9.5.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rgurobi/9.5.0-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/RheoTool/", "title": "RheoTool", "text": ""}, {"location": "available_software/detail/RheoTool/#available-modules", "title": "Available modules", "text": "

The overview below shows which RheoTool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RheoTool, load one of these modules using a module load command like:

                  module load RheoTool/5.0-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RheoTool/5.0-foss-2019b x x x - x x"}, {"location": "available_software/detail/Rmath/", "title": "Rmath", "text": ""}, {"location": "available_software/detail/Rmath/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Rmath, load one of these modules using a module load command like:

                  module load Rmath/4.3.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rmath/4.3.2-foss-2023a x x x x x x Rmath/4.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/RnBeads/", "title": "RnBeads", "text": ""}, {"location": "available_software/detail/RnBeads/#available-modules", "title": "Available modules", "text": "

The overview below shows which RnBeads installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using RnBeads, load one of these modules using a module load command like:

                  module load RnBeads/2.6.0-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty RnBeads/2.6.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/Roary/", "title": "Roary", "text": ""}, {"location": "available_software/detail/Roary/#available-modules", "title": "Available modules", "text": "

The overview below shows which Roary installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Roary, load one of these modules using a module load command like:

                  module load Roary/3.13.0-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Roary/3.13.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/Ruby/", "title": "Ruby", "text": ""}, {"location": "available_software/detail/Ruby/#available-modules", "title": "Available modules", "text": "

The overview below shows which Ruby installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Ruby, load one of these modules using a module load command like:

                  module load Ruby/3.0.1-GCCcore-11.2.0\n
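
A minimal check after loading the Ruby module:

ruby --version\n
ruby -e 'puts RUBY_VERSION'   # run a one-line Ruby snippet\n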

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Ruby/3.0.1-GCCcore-11.2.0 x x x x x x Ruby/3.0.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/Rust/", "title": "Rust", "text": ""}, {"location": "available_software/detail/Rust/#available-modules", "title": "Available modules", "text": "

The overview below shows which Rust installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Rust, load one of these modules using a module load command like:

                  module load Rust/1.75.0-GCCcore-12.3.0\n
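
A minimal check after loading the Rust module, confirming that both the compiler and Cargo are available:

rustc --version\n
cargo --version\n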

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Rust/1.75.0-GCCcore-12.3.0 x x x x x x Rust/1.75.0-GCCcore-12.2.0 x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x Rust/1.65.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-11.3.0 x x x x x x Rust/1.60.0-GCCcore-10.3.0 x x x - x x Rust/1.56.0-GCCcore-11.2.0 x x x - x x Rust/1.54.0-GCCcore-11.2.0 x x x x x x Rust/1.52.1-GCCcore-10.3.0 x x x x x x Rust/1.52.1-GCCcore-10.2.0 - - x - x - Rust/1.42.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SAMtools/", "title": "SAMtools", "text": ""}, {"location": "available_software/detail/SAMtools/#available-modules", "title": "Available modules", "text": "

The overview below shows which SAMtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SAMtools, load one of these modules using a module load command like:

                  module load SAMtools/1.18-GCC-12.3.0\n
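
A minimal usage sketch after loading SAMtools; input.sam and output.bam are hypothetical file names:

samtools --version\n
samtools view -b -o output.bam input.sam   # convert SAM to BAM\n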

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SAMtools/1.18-GCC-12.3.0 x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x SAMtools/1.16.1-GCC-11.3.0 x x x x x x SAMtools/1.15-GCC-11.2.0 x x x - x x SAMtools/1.14-GCC-11.2.0 x x x x x x SAMtools/1.13-GCC-11.3.0 x x x x x x SAMtools/1.13-GCC-10.3.0 x x x - x x SAMtools/1.11-GCC-10.2.0 x x x x x x SAMtools/1.10-iccifort-2019.5.281 - x x - x x SAMtools/1.10-GCC-9.3.0 - x x - x x SAMtools/1.10-GCC-8.3.0 - x x - x x SAMtools/0.1.20-intel-2019b - x x - x x SAMtools/0.1.20-GCC-12.3.0 x x x x x x SAMtools/0.1.20-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SBCL/", "title": "SBCL", "text": ""}, {"location": "available_software/detail/SBCL/#available-modules", "title": "Available modules", "text": "

The overview below shows which SBCL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SBCL, load one of these modules using a module load command like:

                  module load SBCL/2.2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SBCL/2.2.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/SCENIC/", "title": "SCENIC", "text": ""}, {"location": "available_software/detail/SCENIC/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SCENIC, load one of these modules using a module load command like:

                  module load SCENIC/1.2.4-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCENIC/1.2.4-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/SCGid/", "title": "SCGid", "text": ""}, {"location": "available_software/detail/SCGid/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCGid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SCGid, load one of these modules using a module load command like:

                  module load SCGid/0.9b0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCGid/0.9b0-foss-2021b x x x - x x"}, {"location": "available_software/detail/SCOTCH/", "title": "SCOTCH", "text": ""}, {"location": "available_software/detail/SCOTCH/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SCOTCH, load one of these modules using a module load command like:

                  module load SCOTCH/7.0.3-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCOTCH/7.0.3-gompi-2023a x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x SCOTCH/7.0.1-gompi-2022a x x x x x x SCOTCH/6.1.2-iimpi-2021b x x x x x x SCOTCH/6.1.2-gompi-2021b x x x x x x SCOTCH/6.1.0-iimpi-2021a - x x - x x SCOTCH/6.1.0-iimpi-2020b - x - - - - SCOTCH/6.1.0-gompi-2021a x x x x x x SCOTCH/6.1.0-gompi-2020b x x x x x x SCOTCH/6.0.9-iimpi-2020a - x x - x x SCOTCH/6.0.9-iimpi-2019b - x x - x x SCOTCH/6.0.9-gompi-2020a - x x - x x SCOTCH/6.0.9-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SCons/", "title": "SCons", "text": ""}, {"location": "available_software/detail/SCons/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCons installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SCons, load one of these modules using a module load command like:

                  module load SCons/4.5.2-GCCcore-12.3.0\n
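
A minimal check after loading SCons (builds are driven by an SConstruct file in the working directory):

scons --version\n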

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCons/4.5.2-GCCcore-12.3.0 x x x x x x SCons/4.4.0-GCCcore-11.3.0 - - x - x - SCons/4.2.0-GCCcore-11.2.0 x x x - x x SCons/4.1.0.post1-GCCcore-10.3.0 - x x - x x SCons/4.1.0.post1-GCCcore-10.2.0 - x x - x x SCons/3.1.2-GCCcore-9.3.0 - x x - x x SCons/3.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SCopeLoomR/", "title": "SCopeLoomR", "text": ""}, {"location": "available_software/detail/SCopeLoomR/#available-modules", "title": "Available modules", "text": "

The overview below shows which SCopeLoomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SCopeLoomR, load one of these modules using a module load command like:

                  module load SCopeLoomR/0.13.0-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SCopeLoomR/0.13.0-foss-2021b-R-4.1.2 x x x x x x"}, {"location": "available_software/detail/SDL2/", "title": "SDL2", "text": ""}, {"location": "available_software/detail/SDL2/#available-modules", "title": "Available modules", "text": "

The overview below shows which SDL2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SDL2, load one of these modules using a module load command like:

                  module load SDL2/2.28.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SDL2/2.28.2-GCCcore-12.3.0 x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x SDL2/2.0.20-GCCcore-11.2.0 x x x x x x SDL2/2.0.14-GCCcore-10.3.0 - x x - x x SDL2/2.0.14-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SDSL/", "title": "SDSL", "text": ""}, {"location": "available_software/detail/SDSL/#available-modules", "title": "Available modules", "text": "

The overview below shows which SDSL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SDSL, load one of these modules using a module load command like:

                  module load SDSL/2.1.1-20191211-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SDSL/2.1.1-20191211-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SEACells/", "title": "SEACells", "text": ""}, {"location": "available_software/detail/SEACells/#available-modules", "title": "Available modules", "text": "

The overview below shows which SEACells installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SEACells, load one of these modules using a module load command like:

                  module load SEACells/20230731-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SEACells/20230731-foss-2021a x x x x x x"}, {"location": "available_software/detail/SECAPR/", "title": "SECAPR", "text": ""}, {"location": "available_software/detail/SECAPR/#available-modules", "title": "Available modules", "text": "

The overview below shows which SECAPR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SECAPR, load one of these modules using a module load command like:

                  module load SECAPR/1.1.15-foss-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SECAPR/1.1.15-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/SELFIES/", "title": "SELFIES", "text": ""}, {"location": "available_software/detail/SELFIES/#available-modules", "title": "Available modules", "text": "

The overview below shows which SELFIES installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SELFIES, load one of these modules using a module load command like:

                  module load SELFIES/2.1.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SELFIES/2.1.1-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/SEPP/", "title": "SEPP", "text": ""}, {"location": "available_software/detail/SEPP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SEPP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SEPP, load one of these modules using a module load command like:

                  module load SEPP/4.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SEPP/4.5.1-foss-2022a x x x x x x SEPP/4.5.1-foss-2021b x x x - x x SEPP/4.4.0-foss-2020b - x x x x x SEPP/4.3.10-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SHAP/", "title": "SHAP", "text": ""}, {"location": "available_software/detail/SHAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SHAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SHAP, load one of these modules using a module load command like:

                  module load SHAP/0.42.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SHAP/0.42.1-foss-2019b-Python-3.7.4 x x x - x x SHAP/0.41.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SISSO%2B%2B/", "title": "SISSO++", "text": ""}, {"location": "available_software/detail/SISSO%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which SISSO++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SISSO++, load one of these modules using a module load command like:

                  module load SISSO++/1.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SISSO++/1.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/SISSO/", "title": "SISSO", "text": ""}, {"location": "available_software/detail/SISSO/#available-modules", "title": "Available modules", "text": "

The overview below shows which SISSO installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SISSO, load one of these modules using a module load command like:

                  module load SISSO/3.1-20220324-iimpi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SISSO/3.1-20220324-iimpi-2021b x x x - x x SISSO/3.0.2-iimpi-2021b x x x - x x"}, {"location": "available_software/detail/SKESA/", "title": "SKESA", "text": ""}, {"location": "available_software/detail/SKESA/#available-modules", "title": "Available modules", "text": "

The overview below shows which SKESA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SKESA, load one of these modules using a module load command like:

                  module load SKESA/2.4.0-gompi-2021b_saute.1.3.0_1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SKESA/2.4.0-gompi-2021b_saute.1.3.0_1 x x x - x x"}, {"location": "available_software/detail/SLATEC/", "title": "SLATEC", "text": ""}, {"location": "available_software/detail/SLATEC/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLATEC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SLATEC, load one of these modules using a module load command like:

                  module load SLATEC/4.1-GCC-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLATEC/4.1-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/SLEPc/", "title": "SLEPc", "text": ""}, {"location": "available_software/detail/SLEPc/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLEPc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SLEPc, load one of these modules using a module load command like:

                  module load SLEPc/3.18.2-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLEPc/3.18.2-intel-2021b x x x x x x SLEPc/3.17.2-foss-2022a x x x x x x SLEPc/3.15.1-foss-2021a - x x - x x SLEPc/3.12.2-intel-2019b-Python-3.7.4 - - x - x - SLEPc/3.12.2-intel-2019b-Python-2.7.16 - x x - x x SLEPc/3.12.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SLiM/", "title": "SLiM", "text": ""}, {"location": "available_software/detail/SLiM/#available-modules", "title": "Available modules", "text": "

The overview below shows which SLiM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SLiM, load one of these modules using a module load command like:

                  module load SLiM/3.4-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SLiM/3.4-GCC-9.3.0 - x x - x -"}, {"location": "available_software/detail/SMAP/", "title": "SMAP", "text": ""}, {"location": "available_software/detail/SMAP/#available-modules", "title": "Available modules", "text": "

The overview below shows which SMAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SMAP, load one of these modules using a module load command like:

                  module load SMAP/4.6.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMAP/4.6.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/SMC%2B%2B/", "title": "SMC++", "text": ""}, {"location": "available_software/detail/SMC%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which SMC++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SMC++, load one of these modules using a module load command like:

                  module load SMC++/1.15.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMC++/1.15.4-foss-2022a x x x - x x"}, {"location": "available_software/detail/SMV/", "title": "SMV", "text": ""}, {"location": "available_software/detail/SMV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SMV installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SMV, load one of these modules using a module load command like:

                  module load SMV/6.7.17-iccifort-2020.4.304\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SMV/6.7.17-iccifort-2020.4.304 - x x - x x"}, {"location": "available_software/detail/SNAP-ESA-python/", "title": "SNAP-ESA-python", "text": ""}, {"location": "available_software/detail/SNAP-ESA-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SNAP-ESA-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SNAP-ESA-python, load one of these modules using a module load command like:

                  module load SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-11-Python-2.7.18 x x x x x - SNAP-ESA-python/9.0.0-GCCcore-11.2.0-Java-1.8-Python-2.7.18 x x x x - x"}, {"location": "available_software/detail/SNAP-ESA/", "title": "SNAP-ESA", "text": ""}, {"location": "available_software/detail/SNAP-ESA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SNAP-ESA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SNAP-ESA, load one of these modules using a module load command like:

                  module load SNAP-ESA/9.0.0-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP-ESA/9.0.0-Java-11 x x x x x x SNAP-ESA/9.0.0-Java-1.8 x x x x - x"}, {"location": "available_software/detail/SNAP/", "title": "SNAP", "text": ""}, {"location": "available_software/detail/SNAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SNAP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SNAP, load one of these modules using a module load command like:

                  module load SNAP/2.0.1-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SNAP/2.0.1-GCC-12.2.0 x x x x x x SNAP/2.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/SOAPdenovo-Trans/", "title": "SOAPdenovo-Trans", "text": ""}, {"location": "available_software/detail/SOAPdenovo-Trans/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SOAPdenovo-Trans installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SOAPdenovo-Trans, load one of these modules using a module load command like:

                  module load SOAPdenovo-Trans/1.0.5-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SOAPdenovo-Trans/1.0.5-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/SPAdes/", "title": "SPAdes", "text": ""}, {"location": "available_software/detail/SPAdes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SPAdes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SPAdes, load one of these modules using a module load command like:

                  module load SPAdes/3.15.5-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPAdes/3.15.5-GCC-11.3.0 x x x x x x SPAdes/3.15.4-GCC-12.3.0 x x x x x x SPAdes/3.15.4-GCC-12.2.0 x x x x x x SPAdes/3.15.3-GCC-11.2.0 x x x - x x SPAdes/3.15.2-GCC-10.2.0-Python-2.7.18 - x x x x x SPAdes/3.15.2-GCC-10.2.0 - x x x x x SPAdes/3.14.1-GCC-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SPM/", "title": "SPM", "text": ""}, {"location": "available_software/detail/SPM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SPM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SPM, load one of these modules using a module load command like:

                  module load SPM/12.5_r7771-MATLAB-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPM/12.5_r7771-MATLAB-2021b x x x - x x"}, {"location": "available_software/detail/SPOTPY/", "title": "SPOTPY", "text": ""}, {"location": "available_software/detail/SPOTPY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SPOTPY installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SPOTPY, load one of these modules using a module load command like:

                  module load SPOTPY/1.5.14-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SPOTPY/1.5.14-intel-2021b x x x - x x"}, {"location": "available_software/detail/SQLite/", "title": "SQLite", "text": ""}, {"location": "available_software/detail/SQLite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SQLite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SQLite, load one of these modules using a module load command like:

                  module load SQLite/3.43.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SQLite/3.43.1-GCCcore-13.2.0 x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x SQLite/3.38.3-GCCcore-11.3.0 x x x x x x SQLite/3.36-GCCcore-11.2.0 x x x x x x SQLite/3.35.4-GCCcore-10.3.0 x x x x x x SQLite/3.33.0-GCCcore-10.2.0 x x x x x x SQLite/3.31.1-GCCcore-9.3.0 x x x x x x SQLite/3.29.0-GCCcore-8.3.0 x x x x x x SQLite/3.27.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/SRA-Toolkit/", "title": "SRA-Toolkit", "text": ""}, {"location": "available_software/detail/SRA-Toolkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SRA-Toolkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SRA-Toolkit, load one of these modules using a module load command like:

                  module load SRA-Toolkit/3.0.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRA-Toolkit/3.0.3-gompi-2022a x x x x x x SRA-Toolkit/3.0.0-gompi-2021b x x x x x x SRA-Toolkit/3.0.0-centos_linux64 x x x - x x SRA-Toolkit/2.10.9-gompi-2020b - x x - x x SRA-Toolkit/2.10.8-gompi-2020a - x x - x x SRA-Toolkit/2.10.4-gompi-2019b - x x - x x SRA-Toolkit/2.9.6-1-centos_linux64 - x x - x x"}, {"location": "available_software/detail/SRPRISM/", "title": "SRPRISM", "text": ""}, {"location": "available_software/detail/SRPRISM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SRPRISM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SRPRISM, load one of these modules using a module load command like:

                  module load SRPRISM/3.1.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRPRISM/3.1.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/SRST2/", "title": "SRST2", "text": ""}, {"location": "available_software/detail/SRST2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SRST2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SRST2, load one of these modules using a module load command like:

                  module load SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SRST2/0.2.0-20210620-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SSPACE_Basic/", "title": "SSPACE_Basic", "text": ""}, {"location": "available_software/detail/SSPACE_Basic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SSPACE_Basic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SSPACE_Basic, load one of these modules using a module load command like:

                  module load SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SSPACE_Basic/2.1.1-GCC-10.2.0-Python-2.7.18 - x x - x -"}, {"location": "available_software/detail/SSW/", "title": "SSW", "text": ""}, {"location": "available_software/detail/SSW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SSW installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SSW, load one of these modules using a module load command like:

                  module load SSW/1.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SSW/1.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/STACEY/", "title": "STACEY", "text": ""}, {"location": "available_software/detail/STACEY/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STACEY installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using STACEY, load one of these modules using a module load command like:

                  module load STACEY/1.2.5-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STACEY/1.2.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/STAR/", "title": "STAR", "text": ""}, {"location": "available_software/detail/STAR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STAR installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using STAR, load one of these modules using a module load command like:

                  module load STAR/2.7.11a-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STAR/2.7.11a-GCC-12.3.0 x x x x x x STAR/2.7.10b-GCC-11.3.0 x x x x x x STAR/2.7.9a-GCC-11.2.0 x x x x x x STAR/2.7.6a-GCC-10.2.0 - x x x x x STAR/2.7.4a-GCC-9.3.0 - x x - x - STAR/2.7.3a-GCC-8.3.0 - x x - x - STAR/2.7.2b-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/STREAM/", "title": "STREAM", "text": ""}, {"location": "available_software/detail/STREAM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STREAM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using STREAM, load one of these modules using a module load command like:

                  module load STREAM/5.10-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STREAM/5.10-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/STRique/", "title": "STRique", "text": ""}, {"location": "available_software/detail/STRique/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which STRique installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using STRique, load one of these modules using a module load command like:

                  module load STRique/0.4.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty STRique/0.4.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/SUNDIALS/", "title": "SUNDIALS", "text": ""}, {"location": "available_software/detail/SUNDIALS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SUNDIALS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SUNDIALS, load one of these modules using a module load command like:

                  module load SUNDIALS/6.6.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SUNDIALS/6.6.0-foss-2023a x x x x x x SUNDIALS/6.2.0-intel-2021b x x x - x x SUNDIALS/5.7.0-intel-2020b - x x x x x SUNDIALS/5.7.0-fosscuda-2020b - - - - x - SUNDIALS/5.7.0-foss-2020b - x x x x x SUNDIALS/5.1.0-intel-2019b - x x - x x SUNDIALS/5.1.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/SUPPA/", "title": "SUPPA", "text": ""}, {"location": "available_software/detail/SUPPA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SUPPA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SUPPA, load one of these modules using a module load command like:

                  module load SUPPA/2.3-20231005-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SUPPA/2.3-20231005-foss-2022b x x x x x x"}, {"location": "available_software/detail/SVIM/", "title": "SVIM", "text": ""}, {"location": "available_software/detail/SVIM/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SVIM installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SVIM, load one of these modules using a module load command like:

                  module load SVIM/2.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SVIM/2.0.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/SWIG/", "title": "SWIG", "text": ""}, {"location": "available_software/detail/SWIG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SWIG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SWIG, load one of these modules using a module load command like:

                  module load SWIG/4.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SWIG/4.1.1-GCCcore-12.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.3.0 x x x x x x SWIG/4.0.2-GCCcore-11.2.0 x x x x x x SWIG/4.0.2-GCCcore-10.3.0 x x x x x x SWIG/4.0.2-GCCcore-10.2.0 x x x x x x SWIG/4.0.1-GCCcore-9.3.0 x x x x x x SWIG/4.0.1-GCCcore-8.3.0 - x x - x x SWIG/3.0.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Sabre/", "title": "Sabre", "text": ""}, {"location": "available_software/detail/Sabre/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sabre installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Sabre, load one of these modules using a module load command like:

                  module load Sabre/2013-09-28-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sabre/2013-09-28-GCC-12.2.0 x x x x x x"}, {"location": "available_software/detail/Sailfish/", "title": "Sailfish", "text": ""}, {"location": "available_software/detail/Sailfish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sailfish installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Sailfish, load one of these modules using a module load command like:

                  module load Sailfish/0.10.1-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sailfish/0.10.1-gompi-2019b - x - - - x"}, {"location": "available_software/detail/Salmon/", "title": "Salmon", "text": ""}, {"location": "available_software/detail/Salmon/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Salmon installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Salmon, load one of these modules using a module load command like:

                  module load Salmon/1.9.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Salmon/1.9.0-GCC-11.3.0 x x x x x x Salmon/1.4.0-gompi-2020b - x x x x x Salmon/1.1.0-gompi-2019b - x x - x x"}, {"location": "available_software/detail/Sambamba/", "title": "Sambamba", "text": ""}, {"location": "available_software/detail/Sambamba/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sambamba installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Sambamba, load one of these modules using a module load command like:

                  module load Sambamba/1.0.1-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sambamba/1.0.1-GCC-11.3.0 x x x x x x Sambamba/0.8.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/Satsuma2/", "title": "Satsuma2", "text": ""}, {"location": "available_software/detail/Satsuma2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Satsuma2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Satsuma2, load one of these modules using a module load command like:

                  module load Satsuma2/20220304-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Satsuma2/20220304-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/ScaFaCoS/", "title": "ScaFaCoS", "text": ""}, {"location": "available_software/detail/ScaFaCoS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ScaFaCoS, load one of these modules using a module load command like:

                  module load ScaFaCoS/1.0.1-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ScaFaCoS/1.0.1-intel-2020a - x x - x x ScaFaCoS/1.0.1-foss-2021b x x x - x x ScaFaCoS/1.0.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/ScaLAPACK/", "title": "ScaLAPACK", "text": ""}, {"location": "available_software/detail/ScaLAPACK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ScaLAPACK, load one of these modules using a module load command like:

                  module load ScaLAPACK/2.2.0-gompi-2023b-fb\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x ScaLAPACK/2.2.0-gompi-2022a-fb x x x x x x ScaLAPACK/2.1.0-iimpi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompic-2020b x - - - x - ScaLAPACK/2.1.0-gompi-2021b-fb x x x x x x ScaLAPACK/2.1.0-gompi-2021a-fb x x x x x x ScaLAPACK/2.1.0-gompi-2020b-bf - x - - - - ScaLAPACK/2.1.0-gompi-2020b x x x x x x ScaLAPACK/2.1.0-gompi-2020a - x x - x x ScaLAPACK/2.0.2-gompi-2019b x x x - x x"}, {"location": "available_software/detail/SciPy-bundle/", "title": "SciPy-bundle", "text": ""}, {"location": "available_software/detail/SciPy-bundle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SciPy-bundle, load one of these modules using a module load command like:

                  module load SciPy-bundle/2023.11-gfbf-2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SciPy-bundle/2023.11-gfbf-2023b x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x SciPy-bundle/2022.05-intel-2022a x x x x x x SciPy-bundle/2022.05-foss-2022a x x x x x x SciPy-bundle/2021.10-intel-2021b x x x x x x SciPy-bundle/2021.10-foss-2021b-Python-2.7.18 x x x x x x SciPy-bundle/2021.10-foss-2021b x x x x x x SciPy-bundle/2021.05-intel-2021a - x x - x x SciPy-bundle/2021.05-gomkl-2021a x x x x x x SciPy-bundle/2021.05-foss-2021a x x x x x x SciPy-bundle/2020.11-intelcuda-2020b - - - - x - SciPy-bundle/2020.11-intel-2020b - x x - x x SciPy-bundle/2020.11-fosscuda-2020b x - - - x - SciPy-bundle/2020.11-foss-2020b-Python-2.7.18 - x x x x x SciPy-bundle/2020.11-foss-2020b x x x x x x SciPy-bundle/2020.03-iomkl-2020a-Python-3.8.2 - x - - - - SciPy-bundle/2020.03-intel-2020a-Python-3.8.2 x x x x x x SciPy-bundle/2020.03-intel-2020a-Python-2.7.18 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-3.8.2 - x x - x x SciPy-bundle/2020.03-foss-2020a-Python-2.7.18 - - x - x x SciPy-bundle/2019.10-intel-2019b-Python-3.7.4 - x x - x x SciPy-bundle/2019.10-intel-2019b-Python-2.7.16 - x x - x x SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 x x x - x x SciPy-bundle/2019.10-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Seaborn/", "title": "Seaborn", "text": ""}, {"location": "available_software/detail/Seaborn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Seaborn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Seaborn, load one of these modules using a module load command like:

                  module load Seaborn/0.13.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Seaborn/0.13.2-gfbf-2023a x x x x x x Seaborn/0.12.2-foss-2022b x x x x x x Seaborn/0.12.1-foss-2022a x x x x x x Seaborn/0.11.2-foss-2021b x x x x x x Seaborn/0.11.2-foss-2021a x x x x x x Seaborn/0.11.1-intel-2020b - x x - x x Seaborn/0.11.1-fosscuda-2020b x - - - x - Seaborn/0.11.1-foss-2020b - x x x x x Seaborn/0.10.1-intel-2020b - x x - x x Seaborn/0.10.1-intel-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.1-foss-2020a-Python-3.8.2 - x x - x x Seaborn/0.10.0-intel-2019b-Python-3.7.4 - x x - x x Seaborn/0.10.0-foss-2019b-Python-3.7.4 - x x - x x Seaborn/0.9.1-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/SemiBin/", "title": "SemiBin", "text": ""}, {"location": "available_software/detail/SemiBin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SemiBin installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SemiBin, load one of these modules using a module load command like:

                  module load SemiBin/2.0.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SemiBin/2.0.2-foss-2022a-CUDA-11.7.0 x - x - x - SemiBin/2.0.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Sentence-Transformers/", "title": "Sentence-Transformers", "text": ""}, {"location": "available_software/detail/Sentence-Transformers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sentence-Transformers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Sentence-Transformers, load one of these modules using a module load command like:

                  module load Sentence-Transformers/2.2.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sentence-Transformers/2.2.2-foss-2022b x x x x x x"}, {"location": "available_software/detail/SentencePiece/", "title": "SentencePiece", "text": ""}, {"location": "available_software/detail/SentencePiece/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SentencePiece installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SentencePiece, load one of these modules using a module load command like:

                  module load SentencePiece/0.1.99-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SentencePiece/0.1.99-GCC-12.2.0 x x x x x x SentencePiece/0.1.97-GCC-11.3.0 x x x x x x SentencePiece/0.1.96-GCC-10.3.0 x x x - x x SentencePiece/0.1.85-GCC-8.3.0-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/SeqAn/", "title": "SeqAn", "text": ""}, {"location": "available_software/detail/SeqAn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeqAn installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SeqAn, load one of these modules using a module load command like:

                  module load SeqAn/2.4.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqAn/2.4.0-GCCcore-11.2.0 x x x - x x SeqAn/2.4.0-GCCcore-10.2.0 - x x x x x SeqAn/2.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/SeqKit/", "title": "SeqKit", "text": ""}, {"location": "available_software/detail/SeqKit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeqKit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SeqKit, load one of these modules using a module load command like:

                  module load SeqKit/2.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqKit/2.1.0 - x x - x x"}, {"location": "available_software/detail/SeqLib/", "title": "SeqLib", "text": ""}, {"location": "available_software/detail/SeqLib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeqLib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SeqLib, load one of these modules using a module load command like:

                  module load SeqLib/1.2.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeqLib/1.2.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/Serf/", "title": "Serf", "text": ""}, {"location": "available_software/detail/Serf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Serf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Serf, load one of these modules using a module load command like:

                  module load Serf/1.3.9-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Serf/1.3.9-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/Seurat/", "title": "Seurat", "text": ""}, {"location": "available_software/detail/Seurat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Seurat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Seurat, load one of these modules using a module load command like:

                  module load Seurat/4.3.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Seurat/4.3.0-foss-2022a-R-4.2.1 x x x x x x Seurat/4.3.0-foss-2021b-R-4.1.2 x x x - x x Seurat/4.2.0-foss-2022a-R-4.2.1 x x x - x x Seurat/4.0.1-foss-2020b-R-4.0.3 - x x x x x Seurat/3.1.5-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/SeuratData/", "title": "SeuratData", "text": ""}, {"location": "available_software/detail/SeuratData/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeuratData installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SeuratData, load one of these modules using a module load command like:

                  module load SeuratData/20210514-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratData/20210514-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/SeuratDisk/", "title": "SeuratDisk", "text": ""}, {"location": "available_software/detail/SeuratDisk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeuratDisk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SeuratDisk, load one of these modules using a module load command like:

                  module load SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratDisk/0.0.0.9020-foss-2022a-R-4.2.1 x x x - x x"}, {"location": "available_software/detail/SeuratWrappers/", "title": "SeuratWrappers", "text": ""}, {"location": "available_software/detail/SeuratWrappers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SeuratWrappers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SeuratWrappers, load one of these modules using a module load command like:

                  module load SeuratWrappers/20210528-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SeuratWrappers/20210528-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/Shapely/", "title": "Shapely", "text": ""}, {"location": "available_software/detail/Shapely/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Shapely installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Shapely, load one of these modules using a module load command like:

                  module load Shapely/2.0.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Shapely/2.0.1-gfbf-2023a x x x x x x Shapely/2.0.1-foss-2022b x x x x x x Shapely/1.8a1-iccifort-2020.4.304 - x x x x x Shapely/1.8a1-GCC-10.3.0 x - - - x - Shapely/1.8a1-GCC-10.2.0 - x x x x x Shapely/1.8.2-foss-2022a x x x x x x Shapely/1.8.2-foss-2021b x x x x x x Shapely/1.8.1.post1-GCC-11.2.0 x x x - x x Shapely/1.7.1-GCC-9.3.0-Python-3.8.2 - x x - x x Shapely/1.7.0-iccifort-2019.5.281-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Shasta/", "title": "Shasta", "text": ""}, {"location": "available_software/detail/Shasta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Shasta installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Shasta, load one of these modules using a module load command like:

                  module load Shasta/0.8.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Shasta/0.8.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Short-Pair/", "title": "Short-Pair", "text": ""}, {"location": "available_software/detail/Short-Pair/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Short-Pair installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Short-Pair, load one of these modules using a module load command like:

                  module load Short-Pair/20170125-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Short-Pair/20170125-foss-2021b x x x - x x"}, {"location": "available_software/detail/SiNVICT/", "title": "SiNVICT", "text": ""}, {"location": "available_software/detail/SiNVICT/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SiNVICT installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SiNVICT, load one of these modules using a module load command like:

                  module load SiNVICT/1.0-20180817-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SiNVICT/1.0-20180817-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/Sibelia/", "title": "Sibelia", "text": ""}, {"location": "available_software/detail/Sibelia/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sibelia installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Sibelia, load one of these modules using a module load command like:

                  module load Sibelia/3.0.7-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sibelia/3.0.7-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimNIBS/", "title": "SimNIBS", "text": ""}, {"location": "available_software/detail/SimNIBS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimNIBS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SimNIBS, load one of these modules using a module load command like:

                  module load SimNIBS/3.2.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimNIBS/3.2.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/SimPEG/", "title": "SimPEG", "text": ""}, {"location": "available_software/detail/SimPEG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimPEG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SimPEG, load one of these modules using a module load command like:

                  module load SimPEG/0.18.1-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimPEG/0.18.1-intel-2021b x x x - x x SimPEG/0.18.1-foss-2021b x x x - x x SimPEG/0.14.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SimpleElastix/", "title": "SimpleElastix", "text": ""}, {"location": "available_software/detail/SimpleElastix/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimpleElastix installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SimpleElastix, load one of these modules using a module load command like:

                  module load SimpleElastix/1.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimpleElastix/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SimpleITK/", "title": "SimpleITK", "text": ""}, {"location": "available_software/detail/SimpleITK/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SimpleITK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SimpleITK, load one of these modules using a module load command like:

                  module load SimpleITK/2.1.1.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SimpleITK/2.1.1.2-foss-2022a x x x x x x SimpleITK/2.1.0-fosscuda-2020b x - - - x - SimpleITK/2.1.0-foss-2020b - x x x x x SimpleITK/1.2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/SlamDunk/", "title": "SlamDunk", "text": ""}, {"location": "available_software/detail/SlamDunk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SlamDunk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SlamDunk, load one of these modules using a module load command like:

                  module load SlamDunk/0.4.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SlamDunk/0.4.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/Sniffles/", "title": "Sniffles", "text": ""}, {"location": "available_software/detail/Sniffles/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Sniffles installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Sniffles, load one of these modules using a module load command like:

                  module load Sniffles/2.0.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Sniffles/2.0.7-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/SoX/", "title": "SoX", "text": ""}, {"location": "available_software/detail/SoX/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SoX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SoX, load one of these modules using a module load command like:

                  module load SoX/14.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SoX/14.4.2-GCCcore-11.3.0 x x x x x x SoX/14.4.2-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Spark/", "title": "Spark", "text": ""}, {"location": "available_software/detail/Spark/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Spark installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Spark, load one of these modules using a module load command like:

                  module load Spark/3.5.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Spark/3.5.0-foss-2023a x x x x x x Spark/3.2.1-foss-2021b x x x - x x Spark/3.1.1-fosscuda-2020b - - - - x - Spark/2.4.5-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/SpatialDE/", "title": "SpatialDE", "text": ""}, {"location": "available_software/detail/SpatialDE/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SpatialDE installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SpatialDE, load one of these modules using a module load command like:

                  module load SpatialDE/1.1.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SpatialDE/1.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/Spyder/", "title": "Spyder", "text": ""}, {"location": "available_software/detail/Spyder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Spyder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Spyder, load one of these modules using a module load command like:

                  module load Spyder/4.1.5-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Spyder/4.1.5-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/SqueezeMeta/", "title": "SqueezeMeta", "text": ""}, {"location": "available_software/detail/SqueezeMeta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SqueezeMeta installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SqueezeMeta, load one of these modules using a module load command like:

                  module load SqueezeMeta/1.5.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SqueezeMeta/1.5.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/Squidpy/", "title": "Squidpy", "text": ""}, {"location": "available_software/detail/Squidpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Squidpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Squidpy, load one of these modules using a module load command like:

                  module load Squidpy/1.2.2-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Squidpy/1.2.2-foss-2021b x x x - x x"}, {"location": "available_software/detail/Stacks/", "title": "Stacks", "text": ""}, {"location": "available_software/detail/Stacks/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Stacks installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Stacks, load one of these modules using a module load command like:

                  module load Stacks/2.53-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Stacks/2.53-iccifort-2019.5.281 - x x - x - Stacks/2.5-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/Stata/", "title": "Stata", "text": ""}, {"location": "available_software/detail/Stata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Stata installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Stata, load one of these modules using a module load command like:

                  module load Stata/15\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Stata/15 - x x x x x"}, {"location": "available_software/detail/Statistics-R/", "title": "Statistics-R", "text": ""}, {"location": "available_software/detail/Statistics-R/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Statistics-R installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Statistics-R, load one of these modules using a module load command like:

                  module load Statistics-R/0.34-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Statistics-R/0.34-foss-2020a - x x - x x"}, {"location": "available_software/detail/StringTie/", "title": "StringTie", "text": ""}, {"location": "available_software/detail/StringTie/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which StringTie installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using StringTie, load one of these modules using a module load command like:

                  module load StringTie/2.2.1-GCC-11.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty StringTie/2.2.1-GCC-11.2.0-Python-2.7.18 x x x x x x StringTie/2.2.1-GCC-11.2.0 x x x x x x StringTie/2.1.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/Structure/", "title": "Structure", "text": ""}, {"location": "available_software/detail/Structure/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Structure installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Structure, load one of these modules using a module load command like:

                  module load Structure/2.3.4-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Structure/2.3.4-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/Structure_threader/", "title": "Structure_threader", "text": ""}, {"location": "available_software/detail/Structure_threader/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Structure_threader installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Structure_threader, load one of these modules using a module load command like:

                  module load Structure_threader/1.3.10-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Structure_threader/1.3.10-foss-2022b x x x x x x"}, {"location": "available_software/detail/SuAVE-biomat/", "title": "SuAVE-biomat", "text": ""}, {"location": "available_software/detail/SuAVE-biomat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SuAVE-biomat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SuAVE-biomat, load one of these modules using a module load command like:

                  module load SuAVE-biomat/2.0.0-20230815-intel-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuAVE-biomat/2.0.0-20230815-intel-2023a x x x x x x"}, {"location": "available_software/detail/Subread/", "title": "Subread", "text": ""}, {"location": "available_software/detail/Subread/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Subread installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Subread, load one of these modules using a module load command like:

                  module load Subread/2.0.3-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Subread/2.0.3-GCC-9.3.0 - x x - x - Subread/2.0.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/Subversion/", "title": "Subversion", "text": ""}, {"location": "available_software/detail/Subversion/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which Subversion installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using Subversion, load one of these modules using a module load command like:

                  module load Subversion/1.14.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Subversion/1.14.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/SuiteSparse/", "title": "SuiteSparse", "text": ""}, {"location": "available_software/detail/SuiteSparse/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SuiteSparse installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SuiteSparse, load one of these modules using a module load command like:

                  module load SuiteSparse/7.1.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuiteSparse/7.1.0-foss-2023a x x x x x x SuiteSparse/5.13.0-foss-2022b-METIS-5.1.0 x x x x x x SuiteSparse/5.13.0-foss-2022a-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-intel-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021b-METIS-5.1.0 x x x x x x SuiteSparse/5.10.1-foss-2021a-METIS-5.1.0 x x x x x x SuiteSparse/5.8.1-foss-2020b-METIS-5.1.0 x x x x x x SuiteSparse/5.7.1-intel-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.7.1-foss-2020a-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-intel-2019b-METIS-5.1.0 - x x - x x SuiteSparse/5.6.0-foss-2019b-METIS-5.1.0 x x x - x x"}, {"location": "available_software/detail/SuperLU/", "title": "SuperLU", "text": ""}, {"location": "available_software/detail/SuperLU/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which SuperLU installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using SuperLU, load one of these modules using a module load command like:

                  module load SuperLU/5.2.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuperLU/5.2.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/SuperLU_DIST/", "title": "SuperLU_DIST", "text": ""}, {"location": "available_software/detail/SuperLU_DIST/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which SuperLU_DIST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using SuperLU_DIST, load one of these modules using a module load command like:

                  module load SuperLU_DIST/8.1.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty SuperLU_DIST/8.1.0-foss-2022a x - - x - - SuperLU_DIST/5.4.0-intel-2020a-trisolve-merge - x x - x x"}, {"location": "available_software/detail/Szip/", "title": "Szip", "text": ""}, {"location": "available_software/detail/Szip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Szip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Szip, load one of these modules using a module load command like:

                  module load Szip/2.1.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Szip/2.1.1-GCCcore-12.3.0 x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x Szip/2.1.1-GCCcore-11.3.0 x x x x x x Szip/2.1.1-GCCcore-11.2.0 x x x x x x Szip/2.1.1-GCCcore-10.3.0 x x x x x x Szip/2.1.1-GCCcore-10.2.0 x x x x x x Szip/2.1.1-GCCcore-9.3.0 x x x x x x Szip/2.1.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/TALON/", "title": "TALON", "text": ""}, {"location": "available_software/detail/TALON/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TALON installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TALON, load one of these modules using a module load command like:

                  module load TALON/5.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TALON/5.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/TAMkin/", "title": "TAMkin", "text": ""}, {"location": "available_software/detail/TAMkin/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TAMkin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TAMkin, load one of these modules using a module load command like:

                  module load TAMkin/1.2.6-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TAMkin/1.2.6-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/TCLAP/", "title": "TCLAP", "text": ""}, {"location": "available_software/detail/TCLAP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TCLAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TCLAP, load one of these modules using a module load command like:

                  module load TCLAP/1.2.4-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TCLAP/1.2.4-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/TELEMAC-MASCARET/", "title": "TELEMAC-MASCARET", "text": ""}, {"location": "available_software/detail/TELEMAC-MASCARET/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TELEMAC-MASCARET installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TELEMAC-MASCARET, load one of these modules using a module load command like:

                  module load TELEMAC-MASCARET/8p3r1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TELEMAC-MASCARET/8p3r1-foss-2021b x x x - x x"}, {"location": "available_software/detail/TEtranscripts/", "title": "TEtranscripts", "text": ""}, {"location": "available_software/detail/TEtranscripts/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TEtranscripts installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TEtranscripts, load one of these modules using a module load command like:

                  module load TEtranscripts/2.2.0-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TEtranscripts/2.2.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/TOBIAS/", "title": "TOBIAS", "text": ""}, {"location": "available_software/detail/TOBIAS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TOBIAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TOBIAS, load one of these modules using a module load command like:

                  module load TOBIAS/0.12.12-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TOBIAS/0.12.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/TOPAS/", "title": "TOPAS", "text": ""}, {"location": "available_software/detail/TOPAS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TOPAS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TOPAS, load one of these modules using a module load command like:

                  module load TOPAS/3.9-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TOPAS/3.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/TRF/", "title": "TRF", "text": ""}, {"location": "available_software/detail/TRF/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TRF, load one of these modules using a module load command like:

                  module load TRF/4.09.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TRF/4.09.1-GCCcore-11.3.0 x x x x x x TRF/4.09.1-GCCcore-11.2.0 x x x - x x TRF/4.09.1-GCCcore-10.2.0 x x x x x x TRF/4.09-linux64 - - - - - x"}, {"location": "available_software/detail/TRUST4/", "title": "TRUST4", "text": ""}, {"location": "available_software/detail/TRUST4/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TRUST4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TRUST4, load one of these modules using a module load command like:

                  module load TRUST4/1.0.6-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TRUST4/1.0.6-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Tcl/", "title": "Tcl", "text": ""}, {"location": "available_software/detail/Tcl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Tcl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Tcl, load one of these modules using a module load command like:

                  module load Tcl/8.6.13-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tcl/8.6.13-GCCcore-13.2.0 x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x Tcl/8.6.12-GCCcore-11.3.0 x x x x x x Tcl/8.6.11-GCCcore-11.2.0 x x x x x x Tcl/8.6.11-GCCcore-10.3.0 x x x x x x Tcl/8.6.10-GCCcore-10.2.0 x x x x x x Tcl/8.6.10-GCCcore-9.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.3.0 x x x x x x Tcl/8.6.9-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/TensorFlow/", "title": "TensorFlow", "text": ""}, {"location": "available_software/detail/TensorFlow/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TensorFlow, load one of these modules using a module load command like:

                  module load TensorFlow/2.13.0-foss-2023a\n
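
                   For example, a minimal sketch to verify the installation after loading the module (the matching Python is pulled in as a dependency of the module; GPUs will only be listed on GPU clusters and with a CUDA-enabled module variant):

                   # load TensorFlow and print its version plus the devices it can see
                   module load TensorFlow/2.13.0-foss-2023a
                   python -c "import tensorflow as tf; print(tf.__version__, tf.config.list_physical_devices())"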

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TensorFlow/2.13.0-foss-2023a x x x x x x TensorFlow/2.13.0-foss-2022b x x x x x x TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0 x - x - x - TensorFlow/2.11.0-foss-2022a x x x x x x TensorFlow/2.8.4-foss-2021b - - - x - - TensorFlow/2.7.1-foss-2021b-CUDA-11.4.1 x - - - x - TensorFlow/2.7.1-foss-2021b x x x x x x TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1 x - - - x - TensorFlow/2.6.0-foss-2021a x x x x x x TensorFlow/2.5.3-foss-2021a x x x - x x TensorFlow/2.5.0-fosscuda-2020b x - - - x - TensorFlow/2.5.0-foss-2020b - x x x x x TensorFlow/2.4.1-fosscuda-2020b x - - - x - TensorFlow/2.4.1-foss-2020b x x x x x x TensorFlow/2.3.1-foss-2020a-Python-3.8.2 - x x - x x TensorFlow/2.2.3-foss-2020b - x x x x x TensorFlow/2.2.2-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.2.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/2.1.0-foss-2019b-Python-3.7.4 - x x - x x TensorFlow/1.15.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Theano/", "title": "Theano", "text": ""}, {"location": "available_software/detail/Theano/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Theano installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Theano, load one of these modules using a module load command like:

                  module load Theano/1.1.2-intel-2021b-PyMC\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Theano/1.1.2-intel-2021b-PyMC x x x - x x Theano/1.1.2-intel-2020b-PyMC - - x - x x Theano/1.1.2-fosscuda-2020b-PyMC x - - - x - Theano/1.1.2-foss-2020b-PyMC - x x x x x Theano/1.0.4-intel-2019b-Python-3.7.4 - - x - x x Theano/1.0.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Tk/", "title": "Tk", "text": ""}, {"location": "available_software/detail/Tk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Tk, load one of these modules using a module load command like:

                  module load Tk/8.6.13-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tk/8.6.13-GCCcore-12.3.0 x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x Tk/8.6.12-GCCcore-11.3.0 x x x x x x Tk/8.6.11-GCCcore-11.2.0 x x x x x x Tk/8.6.11-GCCcore-10.3.0 x x x x x x Tk/8.6.10-GCCcore-10.2.0 x x x x x x Tk/8.6.10-GCCcore-9.3.0 x x x x x x Tk/8.6.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Tkinter/", "title": "Tkinter", "text": ""}, {"location": "available_software/detail/Tkinter/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Tkinter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Tkinter, load one of these modules using a module load command like:

                  module load Tkinter/3.11.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x Tkinter/3.10.4-GCCcore-11.3.0 x x x x x x Tkinter/3.9.6-GCCcore-11.2.0 x x x x x x Tkinter/3.9.5-GCCcore-10.3.0 x x x x x x Tkinter/3.8.6-GCCcore-10.2.0 x x x x x x Tkinter/3.8.2-GCCcore-9.3.0 x x x x x x Tkinter/3.7.4-GCCcore-8.3.0 - x x - x x Tkinter/2.7.18-GCCcore-10.2.0 - x x x x x Tkinter/2.7.18-GCCcore-9.3.0 - x x - x x Tkinter/2.7.16-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Togl/", "title": "Togl", "text": ""}, {"location": "available_software/detail/Togl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Togl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Togl, load one of these modules using a module load command like:

                  module load Togl/2.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Togl/2.0-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/Tombo/", "title": "Tombo", "text": ""}, {"location": "available_software/detail/Tombo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Tombo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Tombo, load one of these modules using a module load command like:

                  module load Tombo/1.5.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Tombo/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/TopHat/", "title": "TopHat", "text": ""}, {"location": "available_software/detail/TopHat/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TopHat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TopHat, load one of these modules using a module load command like:

                  module load TopHat/2.1.2-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TopHat/2.1.2-iimpi-2020a - x x - x x TopHat/2.1.2-gompi-2020a - x x - x x TopHat/2.1.2-GCC-11.3.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-11.2.0-Python-2.7.18 x x x x x x TopHat/2.1.2-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/TransDecoder/", "title": "TransDecoder", "text": ""}, {"location": "available_software/detail/TransDecoder/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TransDecoder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TransDecoder, load one of these modules using a module load command like:

                  module load TransDecoder/5.5.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TransDecoder/5.5.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/TranscriptClean/", "title": "TranscriptClean", "text": ""}, {"location": "available_software/detail/TranscriptClean/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TranscriptClean installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TranscriptClean, load one of these modules using a module load command like:

                  module load TranscriptClean/2.0.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TranscriptClean/2.0.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/Transformers/", "title": "Transformers", "text": ""}, {"location": "available_software/detail/Transformers/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Transformers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Transformers, load one of these modules using a module load command like:

                  module load Transformers/4.30.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Transformers/4.30.2-foss-2022b x x x x x x Transformers/4.24.0-foss-2022a x x x x x x Transformers/4.21.1-foss-2021b x x x - x x Transformers/4.20.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/TreeMix/", "title": "TreeMix", "text": ""}, {"location": "available_software/detail/TreeMix/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TreeMix installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TreeMix, load one of these modules using a module load command like:

                  module load TreeMix/1.13-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TreeMix/1.13-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/Trilinos/", "title": "Trilinos", "text": ""}, {"location": "available_software/detail/Trilinos/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Trilinos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Trilinos, load one of these modules using a module load command like:

                  module load Trilinos/12.12.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trilinos/12.12.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/Trim_Galore/", "title": "Trim_Galore", "text": ""}, {"location": "available_software/detail/Trim_Galore/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Trim_Galore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Trim_Galore, load one of these modules using a module load command like:

                  module load Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trim_Galore/0.6.6-GCC-10.2.0-Python-2.7.18 - x x x x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-3.7.4 - x x - x x Trim_Galore/0.6.5-GCCcore-8.3.0-Java-11-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/Trimmomatic/", "title": "Trimmomatic", "text": ""}, {"location": "available_software/detail/Trimmomatic/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Trimmomatic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Trimmomatic, load one of these modules using a module load command like:

                  module load Trimmomatic/0.39-Java-11\n
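
                   For example, a minimal sketch of a paired-end trimming run (assuming the module exposes the usual EasyBuild $EBROOTTRIMMOMATIC root directory and ships trimmomatic-0.39.jar; the input and output file names are hypothetical):

                   # load Trimmomatic (a Java tool) and run paired-end trimming
                   module load Trimmomatic/0.39-Java-11
                   java -jar $EBROOTTRIMMOMATIC/trimmomatic-0.39.jar PE -phred33 \
                       reads_1.fq.gz reads_2.fq.gz \
                       out_1P.fq.gz out_1U.fq.gz out_2P.fq.gz out_2U.fq.gz \
                       SLIDINGWINDOW:4:20 MINLEN:36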

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trimmomatic/0.39-Java-11 x x x x x x Trimmomatic/0.38-Java-1.8 - - - - - x"}, {"location": "available_software/detail/Trinity/", "title": "Trinity", "text": ""}, {"location": "available_software/detail/Trinity/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Trinity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Trinity, load one of these modules using a module load command like:

                  module load Trinity/2.15.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trinity/2.15.1-foss-2022a x x x x x x Trinity/2.10.0-foss-2019b-Python-3.7.4 - x x - x x Trinity/2.9.1-foss-2019b-Python-2.7.16 - x x - x x Trinity/2.8.5-GCC-8.3.0-Java-11 - x x - x x"}, {"location": "available_software/detail/Triton/", "title": "Triton", "text": ""}, {"location": "available_software/detail/Triton/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Triton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Triton, load one of these modules using a module load command like:

                  module load Triton/1.1.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Triton/1.1.1-foss-2022a-CUDA-11.7.0 - - x - - -"}, {"location": "available_software/detail/Trycycler/", "title": "Trycycler", "text": ""}, {"location": "available_software/detail/Trycycler/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Trycycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Trycycler, load one of these modules using a module load command like:

                  module load Trycycler/0.3.3-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Trycycler/0.3.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/TurboVNC/", "title": "TurboVNC", "text": ""}, {"location": "available_software/detail/TurboVNC/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which TurboVNC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using TurboVNC, load one of these modules using a module load command like:

                  module load TurboVNC/2.2.6-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty TurboVNC/2.2.6-GCCcore-11.2.0 x x x x x x TurboVNC/2.2.3-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/UCC/", "title": "UCC", "text": ""}, {"location": "available_software/detail/UCC/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UCC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UCC, load one of these modules using a module load command like:

                  module load UCC/1.2.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCC/1.2.0-GCCcore-13.2.0 x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x UCC/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/UCLUST/", "title": "UCLUST", "text": ""}, {"location": "available_software/detail/UCLUST/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UCLUST installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UCLUST, load one of these modules using a module load command like:

                  module load UCLUST/1.2.22q-i86linux64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCLUST/1.2.22q-i86linux64 - x x - x x"}, {"location": "available_software/detail/UCX-CUDA/", "title": "UCX-CUDA", "text": ""}, {"location": "available_software/detail/UCX-CUDA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UCX-CUDA, load one of these modules using a module load command like:

                  module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x - x - x - UCX-CUDA/1.12.1-GCCcore-11.3.0-CUDA-11.7.0 x - x - x - UCX-CUDA/1.11.2-GCCcore-11.2.0-CUDA-11.4.1 x - - - x - UCX-CUDA/1.10.0-GCCcore-10.3.0-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/UCX/", "title": "UCX", "text": ""}, {"location": "available_software/detail/UCX/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UCX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UCX, load one of these modules using a module load command like:

                  module load UCX/1.15.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UCX/1.15.0-GCCcore-13.2.0 x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x UCX/1.12.1-GCCcore-11.3.0 x x x x x x UCX/1.11.2-GCCcore-11.2.0 x x x x x x UCX/1.10.0-GCCcore-10.3.0 x x x x x x UCX/1.9.0-GCCcore-10.2.0-CUDA-11.2.1 x - x - x - UCX/1.9.0-GCCcore-10.2.0-CUDA-11.1.1 x x x x x x UCX/1.9.0-GCCcore-10.2.0 x x x x x x UCX/1.8.0-GCCcore-9.3.0 x x x x x x UCX/1.6.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UDUNITS/", "title": "UDUNITS", "text": ""}, {"location": "available_software/detail/UDUNITS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UDUNITS, load one of these modules using a module load command like:

                  module load UDUNITS/2.2.28-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.3.0 x x x x x x UDUNITS/2.2.28-GCCcore-11.2.0 x x x x x x UDUNITS/2.2.28-GCCcore-10.3.0 x x x x x x UDUNITS/2.2.26-foss-2020a - x x - x x UDUNITS/2.2.26-GCCcore-10.2.0 x x x x x x UDUNITS/2.2.26-GCCcore-9.3.0 - x x - x x UDUNITS/2.2.26-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/UFL/", "title": "UFL", "text": ""}, {"location": "available_software/detail/UFL/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UFL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UFL, load one of these modules using a module load command like:

                  module load UFL/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UFL/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UMI-tools/", "title": "UMI-tools", "text": ""}, {"location": "available_software/detail/UMI-tools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UMI-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UMI-tools, load one of these modules using a module load command like:

                  module load UMI-tools/1.0.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UMI-tools/1.0.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/UQTk/", "title": "UQTk", "text": ""}, {"location": "available_software/detail/UQTk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UQTk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UQTk, load one of these modules using a module load command like:

                  module load UQTk/3.1.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UQTk/3.1.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/USEARCH/", "title": "USEARCH", "text": ""}, {"location": "available_software/detail/USEARCH/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which USEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using USEARCH, load one of these modules using a module load command like:

                  module load USEARCH/11.0.667-i86linux32\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty USEARCH/11.0.667-i86linux32 x x x x x x"}, {"location": "available_software/detail/UnZip/", "title": "UnZip", "text": ""}, {"location": "available_software/detail/UnZip/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UnZip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UnZip, load one of these modules using a module load command like:

                  module load UnZip/6.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UnZip/6.0-GCCcore-13.2.0 x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x UnZip/6.0-GCCcore-11.3.0 x x x x x x UnZip/6.0-GCCcore-11.2.0 x x x x x x UnZip/6.0-GCCcore-10.3.0 x x x x x x UnZip/6.0-GCCcore-10.2.0 x x x x x x UnZip/6.0-GCCcore-9.3.0 x x x x x x"}, {"location": "available_software/detail/UniFrac/", "title": "UniFrac", "text": ""}, {"location": "available_software/detail/UniFrac/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which UniFrac installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using UniFrac, load one of these modules using a module load command like:

                  module load UniFrac/1.3.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty UniFrac/1.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Unicycler/", "title": "Unicycler", "text": ""}, {"location": "available_software/detail/Unicycler/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Unicycler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Unicycler, load one of these modules using a module load command like:

                  module load Unicycler/0.4.8-gompi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Unicycler/0.4.8-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/Unidecode/", "title": "Unidecode", "text": ""}, {"location": "available_software/detail/Unidecode/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Unidecode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Unidecode, load one of these modules using a module load command like:

                  module load Unidecode/1.3.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Unidecode/1.3.6-GCCcore-11.3.0 x x x x x x Unidecode/1.1.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VASP/", "title": "VASP", "text": ""}, {"location": "available_software/detail/VASP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VASP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VASP, load one of these modules using a module load command like:

                  module load VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VASP/6.4.2-gomkl-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-gomkl-2023a x x x x x x VASP/6.4.2-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.2-gomkl-2021a - x x x x x VASP/6.4.2-foss-2023a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 x x x x x x VASP/6.4.2-foss-2023a x x x x x x VASP/6.4.2-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.4.1-gomkl-2021a-VASPsol-20210413-vtst-197-Wannier90-3.1.0 - x x x x x VASP/6.4.1-gomkl-2021a - x x x x x VASP/6.4.1-NVHPC-21.2-CUDA-11.2.1 x - x - x - VASP/6.3.1-gomkl-2021a-VASPsol-20210413-vtst-184-Wannier90-3.1.0 x x x x x x VASP/6.3.1-gomkl-2021a - x x x x x VASP/6.3.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.3.0-gomkl-2021a-VASPsol-20210413 - x x x x x VASP/6.2.1-gomkl-2021a - x x x x x VASP/6.2.1-NVHPC-21.2-CUDA-11.2.1 x - - - x - VASP/6.2.0-intel-2020a - x x - x x VASP/6.2.0-gomkl-2020a - x x x x x VASP/6.2.0-foss-2020a - x x - x x VASP/6.1.2-intel-2020a - x x - x x VASP/6.1.2-gomkl-2020a - x x x x x VASP/6.1.2-foss-2020a - x x - x x VASP/5.4.4-iomkl-2020b-vtst-176-mt-20180516 x x x x x x VASP/5.4.4-intel-2019b-mt-20180516-ncl - x x - x x VASP/5.4.4-intel-2019b-mt-20180516 - x x - x x"}, {"location": "available_software/detail/VBZ-Compression/", "title": "VBZ-Compression", "text": ""}, {"location": "available_software/detail/VBZ-Compression/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VBZ-Compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VBZ-Compression, load one of these modules using a module load command like:

                  module load VBZ-Compression/1.0.3-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VBZ-Compression/1.0.3-gompi-2022a x x x x x x VBZ-Compression/1.0.1-gompi-2020b - - x x x x"}, {"location": "available_software/detail/VCFtools/", "title": "VCFtools", "text": ""}, {"location": "available_software/detail/VCFtools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VCFtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VCFtools, load one of these modules using a module load command like:

                  module load VCFtools/0.1.16-iccifort-2019.5.281\n
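
                   For example, a minimal sketch of typical use after loading the module (the input VCF file name is hypothetical):

                   # load VCFtools and check the version
                   module load VCFtools/0.1.16-iccifort-2019.5.281
                   vcftools --version
                   # compute per-site allele frequencies from a compressed VCF
                   vcftools --gzvcf input.vcf.gz --freq --out input_freq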

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VCFtools/0.1.16-iccifort-2019.5.281 - x x - x x VCFtools/0.1.16-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/VEP/", "title": "VEP", "text": ""}, {"location": "available_software/detail/VEP/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VEP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VEP, load one of these modules using a module load command like:

                  module load VEP/107-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VEP/107-GCC-11.3.0 x x x - x x VEP/105-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/VESTA/", "title": "VESTA", "text": ""}, {"location": "available_software/detail/VESTA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VESTA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VESTA, load one of these modules using a module load command like:

                  module load VESTA/3.5.8-gtk3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VESTA/3.5.8-gtk3 x x x - x x"}, {"location": "available_software/detail/VMD/", "title": "VMD", "text": ""}, {"location": "available_software/detail/VMD/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VMD installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VMD, load one of these modules using a module load command like:

                  module load VMD/1.9.4a51-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VMD/1.9.4a51-foss-2020b - x x x x x"}, {"location": "available_software/detail/VMTK/", "title": "VMTK", "text": ""}, {"location": "available_software/detail/VMTK/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VMTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VMTK, load one of these modules using a module load command like:

                  module load VMTK/1.4.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VMTK/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/VSCode/", "title": "VSCode", "text": ""}, {"location": "available_software/detail/VSCode/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VSCode installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VSCode, load one of these modules using a module load command like:

                  module load VSCode/1.85.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VSCode/1.85.0 x x x x x x"}, {"location": "available_software/detail/VSEARCH/", "title": "VSEARCH", "text": ""}, {"location": "available_software/detail/VSEARCH/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VSEARCH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VSEARCH, load one of these modules using a module load command like:

                  module load VSEARCH/2.22.1-GCC-11.3.0\n
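
                   For example, a minimal sketch of typical use after loading the module (the FASTA file name is hypothetical):

                   # load VSEARCH and confirm it runs
                   module load VSEARCH/2.22.1-GCC-11.3.0
                   vsearch --version
                   # dereplicate full-length sequences, keeping abundance annotations
                   vsearch --derep_fulllength reads.fasta --output reads_derep.fasta --sizeout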

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VSEARCH/2.22.1-GCC-11.3.0 x x x x x x VSEARCH/2.18.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/VTK/", "title": "VTK", "text": ""}, {"location": "available_software/detail/VTK/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VTK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VTK, load one of these modules using a module load command like:

                  module load VTK/9.2.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VTK/9.2.2-foss-2022a x x x x x x VTK/9.2.0.rc2-foss-2022a x x x - x x VTK/9.1.0-foss-2021b x x x - x x VTK/9.0.1-fosscuda-2020b x - - - x - VTK/9.0.1-foss-2021a - x x - x x VTK/9.0.1-foss-2020b - x x x x x VTK/8.2.0-foss-2020a-Python-3.8.2 - x x - x x VTK/8.2.0-foss-2019b-Python-3.7.4 - x x - x x VTK/8.2.0-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/VTune/", "title": "VTune", "text": ""}, {"location": "available_software/detail/VTune/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VTune installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VTune, load one of these modules using a module load command like:

                  module load VTune/2019_update2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VTune/2019_update2 - - - - - x"}, {"location": "available_software/detail/Vala/", "title": "Vala", "text": ""}, {"location": "available_software/detail/Vala/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Vala installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Vala, load one of these modules using a module load command like:

                  module load Vala/0.52.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Vala/0.52.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/Valgrind/", "title": "Valgrind", "text": ""}, {"location": "available_software/detail/Valgrind/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Valgrind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Valgrind, load one of these modules using a module load command like:

                  module load Valgrind/3.20.0-gompi-2022a\n
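
                   For example, a minimal sketch of running a program under Valgrind's memcheck tool (./my_program is a hypothetical executable; pick a Valgrind module built with the same toolchain generation as your program):

                   # load Valgrind and run a memory check, writing the report to a log file
                   module load Valgrind/3.20.0-gompi-2022a
                   valgrind --tool=memcheck --leak-check=full --log-file=valgrind.out ./my_program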

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Valgrind/3.20.0-gompi-2022a x x x - x x Valgrind/3.19.0-gompi-2022a x x x - x x Valgrind/3.18.1-iimpi-2021b x x x - x x Valgrind/3.18.1-gompi-2021b x x x - x x Valgrind/3.17.0-gompi-2021a x x x - x x"}, {"location": "available_software/detail/VarScan/", "title": "VarScan", "text": ""}, {"location": "available_software/detail/VarScan/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VarScan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VarScan, load one of these modules using a module load command like:

                  module load VarScan/2.4.4-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VarScan/2.4.4-Java-11 x x x - x x"}, {"location": "available_software/detail/Velvet/", "title": "Velvet", "text": ""}, {"location": "available_software/detail/Velvet/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Velvet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Velvet, load one of these modules using a module load command like:

                  module load Velvet/1.2.10-foss-2023a-mt-kmer_191\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Velvet/1.2.10-foss-2023a-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-11.2.0-mt-kmer_191 x x x x x x Velvet/1.2.10-GCC-8.3.0-mt-kmer_191 - x x - x x"}, {"location": "available_software/detail/VirSorter2/", "title": "VirSorter2", "text": ""}, {"location": "available_software/detail/VirSorter2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VirSorter2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VirSorter2, load one of these modules using a module load command like:

                  module load VirSorter2/2.2.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VirSorter2/2.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/VisPy/", "title": "VisPy", "text": ""}, {"location": "available_software/detail/VisPy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which VisPy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using VisPy, load one of these modules using a module load command like:

                  module load VisPy/0.12.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty VisPy/0.12.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/Voro%2B%2B/", "title": "Voro++", "text": ""}, {"location": "available_software/detail/Voro%2B%2B/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Voro++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Voro++, load one of these modules using a module load command like:

                  module load Voro++/0.4.6-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Voro++/0.4.6-intel-2019b - x x - x x Voro++/0.4.6-foss-2019b - x x - x x Voro++/0.4.6-GCCcore-11.2.0 x x x - x x Voro++/0.4.6-GCCcore-10.3.0 - x x - x x Voro++/0.4.6-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/WFA2/", "title": "WFA2", "text": ""}, {"location": "available_software/detail/WFA2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which WFA2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WFA2, load one of these modules using a module load command like:

                  module load WFA2/2.3.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WFA2/2.3.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/WHAM/", "title": "WHAM", "text": ""}, {"location": "available_software/detail/WHAM/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which WHAM installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WHAM, load one of these modules using a module load command like:

                  module load WHAM/2.0.10.2-intel-2020a-kj_mol\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WHAM/2.0.10.2-intel-2020a-kj_mol - x x - x x WHAM/2.0.10.2-intel-2020a - x x - x x"}, {"location": "available_software/detail/WIEN2k/", "title": "WIEN2k", "text": ""}, {"location": "available_software/detail/WIEN2k/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which WIEN2k installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WIEN2k, load one of these modules using a module load command like:

                  module load WIEN2k/21.1-intel-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WIEN2k/21.1-intel-2021a - x x - x x WIEN2k/19.2-intel-2020b - x x x x x"}, {"location": "available_software/detail/WPS/", "title": "WPS", "text": ""}, {"location": "available_software/detail/WPS/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which WPS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WPS, load one of these modules using a module load command like:

                  module load WPS/4.1-intel-2019b-dmpar\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WPS/4.1-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/WRF/", "title": "WRF", "text": ""}, {"location": "available_software/detail/WRF/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which WRF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using WRF, load one of these modules using a module load command like:

                  module load WRF/4.1.3-intel-2019b-dmpar\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WRF/4.1.3-intel-2019b-dmpar - x x - x x WRF/3.9.1.1-intel-2020b-dmpar - x x x x x WRF/3.8.0-intel-2019b-dmpar - x x - x x"}, {"location": "available_software/detail/Wannier90/", "title": "Wannier90", "text": ""}, {"location": "available_software/detail/Wannier90/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Wannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Wannier90, load one of these modules using a module load command like:

                  module load Wannier90/3.1.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Wannier90/3.1.0-intel-2022a - - x - x x Wannier90/3.1.0-intel-2020b - x x x x x Wannier90/3.1.0-intel-2020a - x x - x x Wannier90/3.1.0-gomkl-2023a x x x x x x Wannier90/3.1.0-gomkl-2021a x x x x x x Wannier90/3.1.0-foss-2023a x x x x x x Wannier90/3.1.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/Wayland/", "title": "Wayland", "text": ""}, {"location": "available_software/detail/Wayland/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Wayland installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Wayland, load one of these modules using a module load command like:

                  module load Wayland/1.22.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Wayland/1.22.0-GCCcore-12.3.0 x x x x x x Wayland/1.21.0-GCCcore-11.2.0 x x x x x x Wayland/1.20.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/Waylandpp/", "title": "Waylandpp", "text": ""}, {"location": "available_software/detail/Waylandpp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which Waylandpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using Waylandpp, load one of these modules using a module load command like:

                  module load Waylandpp/1.0.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Waylandpp/1.0.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/WebKitGTK%2B/", "title": "WebKitGTK+", "text": ""}, {"location": "available_software/detail/WebKitGTK%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which WebKitGTK+ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using WebKitGTK+, load one of these modules using a module load command like:

                  module load WebKitGTK+/2.37.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WebKitGTK+/2.37.1-GCC-11.2.0 x x x x x x WebKitGTK+/2.27.4-GCC-10.3.0 x x x - x x"}, {"location": "available_software/detail/WhatsHap/", "title": "WhatsHap", "text": ""}, {"location": "available_software/detail/WhatsHap/#available-modules", "title": "Available modules", "text": "

The overview below shows which WhatsHap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using WhatsHap, load one of these modules using a module load command like:

                  module load WhatsHap/1.7-foss-2022a\n
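As an illustrative sketch only (the file names are placeholders, and the exact options should be checked against the WhatsHap documentation), a typical phasing run with the module above loaded looks like:

# phase variants using aligned reads (all inputs here are hypothetical placeholders)
whatshap phase -o phased.vcf --reference reference.fasta variants.vcf reads.bam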

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WhatsHap/1.7-foss-2022a x x x x x x WhatsHap/1.4-foss-2021b x x x - x x WhatsHap/1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/Winnowmap/", "title": "Winnowmap", "text": ""}, {"location": "available_software/detail/Winnowmap/#available-modules", "title": "Available modules", "text": "

The overview below shows which Winnowmap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Winnowmap, load one of these modules using a module load command like:

                  module load Winnowmap/1.0-GCC-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Winnowmap/1.0-GCC-8.3.0 - x - - - x"}, {"location": "available_software/detail/WisecondorX/", "title": "WisecondorX", "text": ""}, {"location": "available_software/detail/WisecondorX/#available-modules", "title": "Available modules", "text": "

The overview below shows which WisecondorX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using WisecondorX, load one of these modules using a module load command like:

                  module load WisecondorX/1.1.6-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty WisecondorX/1.1.6-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/X11/", "title": "X11", "text": ""}, {"location": "available_software/detail/X11/#available-modules", "title": "Available modules", "text": "

The overview below shows which X11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using X11, load one of these modules using a module load command like:

                  module load X11/20230603-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty X11/20230603-GCCcore-12.3.0 x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x X11/20220504-GCCcore-11.3.0 x x x x x x X11/20210802-GCCcore-11.2.0 x x x x x x X11/20210518-GCCcore-10.3.0 x x x x x x X11/20201008-GCCcore-10.2.0 x x x x x x X11/20200222-GCCcore-9.3.0 x x x x x x X11/20190717-GCCcore-8.3.0 x x x - x x X11/20190311-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/XCFun/", "title": "XCFun", "text": ""}, {"location": "available_software/detail/XCFun/#available-modules", "title": "Available modules", "text": "

The overview below shows which XCFun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XCFun, load one of these modules using a module load command like:

                  module load XCFun/2.1.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XCFun/2.1.1-GCCcore-12.2.0 x x x x x x XCFun/2.1.1-GCCcore-11.3.0 - x x x x x XCFun/2.1.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/XCrySDen/", "title": "XCrySDen", "text": ""}, {"location": "available_software/detail/XCrySDen/#available-modules", "title": "Available modules", "text": "

The overview below shows which XCrySDen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XCrySDen, load one of these modules using a module load command like:

                  module load XCrySDen/1.6.2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XCrySDen/1.6.2-intel-2022a x x x - x x XCrySDen/1.6.2-foss-2022a x x x - x x"}, {"location": "available_software/detail/XGBoost/", "title": "XGBoost", "text": ""}, {"location": "available_software/detail/XGBoost/#available-modules", "title": "Available modules", "text": "

The overview below shows which XGBoost installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XGBoost, load one of these modules using a module load command like:

                  module load XGBoost/1.7.2-foss-2022a-CUDA-11.7.0\n
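Note that the -CUDA-11.7.0 build is only listed for the GPU cluster accelgor in the table below, while the plain foss-2022a build is available everywhere. As a quick sanity check after loading either module (a sketch, assuming the module puts the xgboost Python package on your path):

# confirm the Python bindings are importable and report their version
python -c 'import xgboost; print(xgboost.__version__)'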

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XGBoost/1.7.2-foss-2022a-CUDA-11.7.0 x - - - - - XGBoost/1.7.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/XML-Compile/", "title": "XML-Compile", "text": ""}, {"location": "available_software/detail/XML-Compile/#available-modules", "title": "Available modules", "text": "

The overview below shows which XML-Compile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XML-Compile, load one of these modules using a module load command like:

                  module load XML-Compile/1.63-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XML-Compile/1.63-GCCcore-12.2.0 x x x x x x XML-Compile/1.63-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/XML-LibXML/", "title": "XML-LibXML", "text": ""}, {"location": "available_software/detail/XML-LibXML/#available-modules", "title": "Available modules", "text": "

The overview below shows which XML-LibXML installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XML-LibXML, load one of these modules using a module load command like:

                  module load XML-LibXML/2.0208-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.3.0 x x x x x x XML-LibXML/2.0207-GCCcore-11.2.0 x x x x x x XML-LibXML/2.0206-GCCcore-10.2.0 - x x x x x XML-LibXML/2.0205-GCCcore-9.3.0 - x x - x x XML-LibXML/2.0201-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/XZ/", "title": "XZ", "text": ""}, {"location": "available_software/detail/XZ/#available-modules", "title": "Available modules", "text": "

The overview below shows which XZ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XZ, load one of these modules using a module load command like:

                  module load XZ/5.4.4-GCCcore-13.2.0\n
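As a small usage sketch (the file name is a placeholder), the xz tool from this module compresses and decompresses individual files:

# compress at the highest level; produces data.tar.xz and removes data.tar
xz -9 data.tar
# decompress again
xz -d data.tar.xz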

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XZ/5.4.4-GCCcore-13.2.0 x x x x x x XZ/5.4.2-GCCcore-12.3.0 x x x x x x XZ/5.2.7-GCCcore-12.2.0 x x x x x x XZ/5.2.5-GCCcore-11.3.0 x x x x x x XZ/5.2.5-GCCcore-11.2.0 x x x x x x XZ/5.2.5-GCCcore-10.3.0 x x x x x x XZ/5.2.5-GCCcore-10.2.0 x x x x x x XZ/5.2.5-GCCcore-9.3.0 x x x x x x XZ/5.2.4-GCCcore-8.3.0 x x x x x x XZ/5.2.4-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/Xerces-C%2B%2B/", "title": "Xerces-C++", "text": ""}, {"location": "available_software/detail/Xerces-C%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Xerces-C++, load one of these modules using a module load command like:

                  module load Xerces-C++/3.2.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/XlsxWriter/", "title": "XlsxWriter", "text": ""}, {"location": "available_software/detail/XlsxWriter/#available-modules", "title": "Available modules", "text": "

The overview below shows which XlsxWriter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using XlsxWriter, load one of these modules using a module load command like:

                  module load XlsxWriter/3.1.9-GCCcore-13.2.0\n
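A minimal sketch of using the xlsxwriter Python package this module provides (the workbook name and cell contents are invented for illustration):

python - <<'EOF'
import xlsxwriter

workbook = xlsxwriter.Workbook('demo.xlsx')   # create a new .xlsx file
worksheet = workbook.add_worksheet()          # add a single sheet
worksheet.write(0, 0, 'hello from the HPC')   # write one cell (row 0, column 0)
workbook.close()                              # flush and close the file
EOF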

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty XlsxWriter/3.1.9-GCCcore-13.2.0 x x x x x x XlsxWriter/3.1.3-GCCcore-12.3.0 x x x x x x XlsxWriter/3.1.2-GCCcore-12.2.0 x x x x x x XlsxWriter/3.0.8-GCCcore-11.3.0 x x x x x x XlsxWriter/3.0.2-GCCcore-11.2.0 x x x x x x XlsxWriter/1.4.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/Xvfb/", "title": "Xvfb", "text": ""}, {"location": "available_software/detail/Xvfb/#available-modules", "title": "Available modules", "text": "

The overview below shows which Xvfb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Xvfb, load one of these modules using a module load command like:

                  module load Xvfb/21.1.8-GCCcore-12.3.0\n
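Xvfb provides a virtual X display for GUI tools that have to run in batch jobs without a real screen. A minimal sketch, assuming the module puts the xvfb-run wrapper on your PATH and using a hypothetical ./my_gui_tool as the application:

# run the tool against an automatically selected virtual display
xvfb-run -a ./my_gui_tool --batch input.dat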

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x Xvfb/21.1.3-GCCcore-11.3.0 x x x x x x Xvfb/1.20.13-GCCcore-11.2.0 x x x x x x Xvfb/1.20.11-GCCcore-10.3.0 x x x x x x Xvfb/1.20.9-GCCcore-10.2.0 x x x x x x Xvfb/1.20.9-GCCcore-9.3.0 - x x - x x Xvfb/1.20.8-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/YACS/", "title": "YACS", "text": ""}, {"location": "available_software/detail/YACS/#available-modules", "title": "Available modules", "text": "

The overview below shows which YACS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using YACS, load one of these modules using a module load command like:

                  module load YACS/0.1.8-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YACS/0.1.8-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/YANK/", "title": "YANK", "text": ""}, {"location": "available_software/detail/YANK/#available-modules", "title": "Available modules", "text": "

The overview below shows which YANK installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using YANK, load one of these modules using a module load command like:

                  module load YANK/0.25.2-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YANK/0.25.2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/YAXT/", "title": "YAXT", "text": ""}, {"location": "available_software/detail/YAXT/#available-modules", "title": "Available modules", "text": "

The overview below shows which YAXT installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using YAXT, load one of these modules using a module load command like:

                  module load YAXT/0.9.1-gompi-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty YAXT/0.9.1-gompi-2021a x x x - x x YAXT/0.6.2-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/Yambo/", "title": "Yambo", "text": ""}, {"location": "available_software/detail/Yambo/#available-modules", "title": "Available modules", "text": "

The overview below shows which Yambo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Yambo, load one of these modules using a module load command like:

                  module load Yambo/5.1.2-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Yambo/5.1.2-intel-2021b x x x x x x"}, {"location": "available_software/detail/Yasm/", "title": "Yasm", "text": ""}, {"location": "available_software/detail/Yasm/#available-modules", "title": "Available modules", "text": "

The overview below shows which Yasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Yasm, load one of these modules using a module load command like:

                  module load Yasm/1.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Yasm/1.3.0-GCCcore-12.3.0 x x x x x x Yasm/1.3.0-GCCcore-12.2.0 x x x x x x Yasm/1.3.0-GCCcore-11.3.0 x x x x x x Yasm/1.3.0-GCCcore-11.2.0 x x x x x x Yasm/1.3.0-GCCcore-10.3.0 x x x x x x Yasm/1.3.0-GCCcore-10.2.0 x x x x x x Yasm/1.3.0-GCCcore-9.3.0 - x x - x x Yasm/1.3.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/Z3/", "title": "Z3", "text": ""}, {"location": "available_software/detail/Z3/#available-modules", "title": "Available modules", "text": "

The overview below shows which Z3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Z3, load one of these modules using a module load command like:

                  module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n
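A quick check after loading the module (a sketch; the -Python-3.11.3 variant is the one expected to also expose the z3 Python bindings):

# print the solver version
z3 --version
# with the Python-enabled variant, the bindings should be importable as well
python -c 'import z3; print(z3.get_version_string())'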

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x Z3/4.10.2-GCCcore-11.3.0 x x x x x x Z3/4.8.12-GCCcore-11.2.0 x x x x x x Z3/4.8.11-GCCcore-10.3.0 x x x x x x Z3/4.8.10-GCCcore-10.2.0 - x x x x x Z3/4.8.9-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/Zeo%2B%2B/", "title": "Zeo++", "text": ""}, {"location": "available_software/detail/Zeo%2B%2B/#available-modules", "title": "Available modules", "text": "

The overview below shows which Zeo++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Zeo++, load one of these modules using a module load command like:

                  module load Zeo++/0.3-intel-compilers-2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zeo++/0.3-intel-compilers-2023.1.0 x x x x x x"}, {"location": "available_software/detail/ZeroMQ/", "title": "ZeroMQ", "text": ""}, {"location": "available_software/detail/ZeroMQ/#available-modules", "title": "Available modules", "text": "

The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ZeroMQ, load one of these modules using a module load command like:

                  module load ZeroMQ/4.3.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-12.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.3.0 x x x x x x ZeroMQ/4.3.4-GCCcore-11.2.0 x x x x x x ZeroMQ/4.3.4-GCCcore-10.3.0 x x x x x x ZeroMQ/4.3.3-GCCcore-10.2.0 x x x x x x ZeroMQ/4.3.2-GCCcore-9.3.0 x x x x x x ZeroMQ/4.3.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zip/", "title": "Zip", "text": ""}, {"location": "available_software/detail/Zip/#available-modules", "title": "Available modules", "text": "

The overview below shows which Zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Zip, load one of these modules using a module load command like:

                  module load Zip/3.0-GCCcore-12.3.0\n
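A small usage sketch (directory and archive names are placeholders) with the zip command from this module:

# recursively pack the results/ directory into results.zip
zip -r results.zip results/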

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zip/3.0-GCCcore-12.3.0 x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x Zip/3.0-GCCcore-11.3.0 x x x x x x Zip/3.0-GCCcore-11.2.0 x x x x x x Zip/3.0-GCCcore-10.3.0 x x x x x x Zip/3.0-GCCcore-10.2.0 x x x x x x Zip/3.0-GCCcore-9.3.0 - x x - x x Zip/3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/Zopfli/", "title": "Zopfli", "text": ""}, {"location": "available_software/detail/Zopfli/#available-modules", "title": "Available modules", "text": "

The overview below shows which Zopfli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using Zopfli, load one of these modules using a module load command like:

                  module load Zopfli/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty Zopfli/1.0.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/adjustText/", "title": "adjustText", "text": ""}, {"location": "available_software/detail/adjustText/#available-modules", "title": "Available modules", "text": "

The overview below shows which adjustText installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using adjustText, load one of these modules using a module load command like:

                  module load adjustText/0.7.3-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty adjustText/0.7.3-foss-2021b x x x - x x"}, {"location": "available_software/detail/aiohttp/", "title": "aiohttp", "text": ""}, {"location": "available_software/detail/aiohttp/#available-modules", "title": "Available modules", "text": "

The overview below shows which aiohttp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using aiohttp, load one of these modules using a module load command like:

                  module load aiohttp/3.8.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty aiohttp/3.8.5-GCCcore-12.3.0 x x x x - x aiohttp/3.8.5-GCCcore-12.2.0 x x x x x x aiohttp/3.8.3-GCCcore-11.3.0 x x x x x x aiohttp/3.8.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/alevin-fry/", "title": "alevin-fry", "text": ""}, {"location": "available_software/detail/alevin-fry/#available-modules", "title": "Available modules", "text": "

The overview below shows which alevin-fry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alevin-fry, load one of these modules using a module load command like:

                  module load alevin-fry/0.4.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alevin-fry/0.4.3-GCCcore-11.2.0 - x - - - -"}, {"location": "available_software/detail/alleleCount/", "title": "alleleCount", "text": ""}, {"location": "available_software/detail/alleleCount/#available-modules", "title": "Available modules", "text": "

The overview below shows which alleleCount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alleleCount, load one of these modules using a module load command like:

                  module load alleleCount/4.3.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alleleCount/4.3.0-GCC-12.2.0 x x x x x x alleleCount/4.2.1-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/alleleIntegrator/", "title": "alleleIntegrator", "text": ""}, {"location": "available_software/detail/alleleIntegrator/#available-modules", "title": "Available modules", "text": "

The overview below shows which alleleIntegrator installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alleleIntegrator, load one of these modules using a module load command like:

                  module load alleleIntegrator/0.8.8-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alleleIntegrator/0.8.8-foss-2022b-R-4.2.2 x x x x x x alleleIntegrator/0.8.8-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/alsa-lib/", "title": "alsa-lib", "text": ""}, {"location": "available_software/detail/alsa-lib/#available-modules", "title": "Available modules", "text": "

The overview below shows which alsa-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using alsa-lib, load one of these modules using a module load command like:

                  module load alsa-lib/1.2.8-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty alsa-lib/1.2.8-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/anadama2/", "title": "anadama2", "text": ""}, {"location": "available_software/detail/anadama2/#available-modules", "title": "Available modules", "text": "

The overview below shows which anadama2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using anadama2, load one of these modules using a module load command like:

                  module load anadama2/0.10.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anadama2/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/angsd/", "title": "angsd", "text": ""}, {"location": "available_software/detail/angsd/#available-modules", "title": "Available modules", "text": "

The overview below shows which angsd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using angsd, load one of these modules using a module load command like:

                  module load angsd/0.940-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty angsd/0.940-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/anndata/", "title": "anndata", "text": ""}, {"location": "available_software/detail/anndata/#available-modules", "title": "Available modules", "text": "

The overview below shows which anndata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using anndata, load one of these modules using a module load command like:

                  module load anndata/0.10.5.post1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anndata/0.10.5.post1-foss-2023a x x x x x x anndata/0.9.2-foss-2021a x x x x x x anndata/0.8.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/ant/", "title": "ant", "text": ""}, {"location": "available_software/detail/ant/#available-modules", "title": "Available modules", "text": "

The overview below shows which ant installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ant, load one of these modules using a module load command like:

                  module load ant/1.10.12-Java-17\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ant/1.10.12-Java-17 x x x x x x ant/1.10.12-Java-11 x x x x x x ant/1.10.11-Java-11 x x x - x x ant/1.10.9-Java-11 x x x x x x ant/1.10.8-Java-11 - x x - x x ant/1.10.7-Java-11 - x x - x x ant/1.10.6-Java-1.8 - x x - x x"}, {"location": "available_software/detail/antiSMASH/", "title": "antiSMASH", "text": ""}, {"location": "available_software/detail/antiSMASH/#available-modules", "title": "Available modules", "text": "

The overview below shows which antiSMASH installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using antiSMASH, load one of these modules using a module load command like:

                  module load antiSMASH/6.0.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty antiSMASH/6.0.1-foss-2020b - x x x x x antiSMASH/5.2.0-foss-2020b - x x x x x antiSMASH/5.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/anvio/", "title": "anvio", "text": ""}, {"location": "available_software/detail/anvio/#available-modules", "title": "Available modules", "text": "

The overview below shows which anvio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using anvio, load one of these modules using a module load command like:

                  module load anvio/8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty anvio/8-foss-2022b x x x x x x anvio/6.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/any2fasta/", "title": "any2fasta", "text": ""}, {"location": "available_software/detail/any2fasta/#available-modules", "title": "Available modules", "text": "

The overview below shows which any2fasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using any2fasta, load one of these modules using a module load command like:

                  module load any2fasta/0.4.2-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty any2fasta/0.4.2-GCCcore-10.2.0 - x x - x x any2fasta/0.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/apex/", "title": "apex", "text": ""}, {"location": "available_software/detail/apex/#available-modules", "title": "Available modules", "text": "

The overview below shows which apex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using apex, load one of these modules using a module load command like:

                  module load apex/20210420-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty apex/20210420-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/archspec/", "title": "archspec", "text": ""}, {"location": "available_software/detail/archspec/#available-modules", "title": "Available modules", "text": "

The overview below shows which archspec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using archspec, load one of these modules using a module load command like:

                  module load archspec/0.1.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty archspec/0.1.3-GCCcore-11.2.0 x x x - x x archspec/0.1.2-GCCcore-10.3.0 - x x - x x archspec/0.1.0-GCCcore-9.3.0-Python-3.8.2 - x x - x x archspec/0.1.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/argtable/", "title": "argtable", "text": ""}, {"location": "available_software/detail/argtable/#available-modules", "title": "Available modules", "text": "

The overview below shows which argtable installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using argtable, load one of these modules using a module load command like:

                  module load argtable/2.13-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty argtable/2.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/aria2/", "title": "aria2", "text": ""}, {"location": "available_software/detail/aria2/#available-modules", "title": "Available modules", "text": "

The overview below shows which aria2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using aria2, load one of these modules using a module load command like:

                  module load aria2/1.35.0-GCCcore-10.3.0\n
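As an illustrative sketch (the URL is a placeholder), the aria2c client from this module can fetch a file over several connections at once:

# download with up to 4 connections to the server
aria2c -x 4 https://example.org/dataset.tar.gz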

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty aria2/1.35.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/arpack-ng/", "title": "arpack-ng", "text": ""}, {"location": "available_software/detail/arpack-ng/#available-modules", "title": "Available modules", "text": "

The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using arpack-ng, load one of these modules using a module load command like:

                  module load arpack-ng/3.9.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arpack-ng/3.9.0-foss-2023a x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x arpack-ng/3.8.0-foss-2022a x x x x x x arpack-ng/3.8.0-foss-2021b x x x x x x arpack-ng/3.8.0-foss-2021a x x x x x x arpack-ng/3.7.0-intel-2020a - x x - x x arpack-ng/3.7.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/arrow-R/", "title": "arrow-R", "text": ""}, {"location": "available_software/detail/arrow-R/#available-modules", "title": "Available modules", "text": "

The overview below shows which arrow-R installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using arrow-R, load one of these modules using a module load command like:

                  module load arrow-R/14.0.0.2-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arrow-R/14.0.0.2-foss-2023a-R-4.3.2 x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x arrow-R/8.0.0-foss-2022a-R-4.2.1 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.2.0 x x x x x x arrow-R/6.0.0.2-foss-2021b-R-4.1.2 x x x x x x arrow-R/6.0.0.2-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/arrow/", "title": "arrow", "text": ""}, {"location": "available_software/detail/arrow/#available-modules", "title": "Available modules", "text": "

The overview below shows which arrow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using arrow, load one of these modules using a module load command like:

                  module load arrow/0.17.1-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty arrow/0.17.1-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-atk/", "title": "at-spi2-atk", "text": ""}, {"location": "available_software/detail/at-spi2-atk/#available-modules", "title": "Available modules", "text": "

The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using at-spi2-atk, load one of these modules using a module load command like:

                  module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.3.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-11.2.0 x x x x x x at-spi2-atk/2.38.0-GCCcore-10.3.0 x x x - x x at-spi2-atk/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-atk/2.34.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/at-spi2-core/", "title": "at-spi2-core", "text": ""}, {"location": "available_software/detail/at-spi2-core/#available-modules", "title": "Available modules", "text": "

The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using at-spi2-core, load one of these modules using a module load command like:

                  module load at-spi2-core/2.49.90-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty at-spi2-core/2.49.90-GCCcore-12.3.0 x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x at-spi2-core/2.44.1-GCCcore-11.3.0 x x x x x x at-spi2-core/2.40.3-GCCcore-11.2.0 x x x x x x at-spi2-core/2.40.2-GCCcore-10.3.0 x x x - x x at-spi2-core/2.38.0-GCCcore-10.2.0 x x x x x x at-spi2-core/2.34.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/atools/", "title": "atools", "text": ""}, {"location": "available_software/detail/atools/#available-modules", "title": "Available modules", "text": "

The overview below shows which atools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using atools, load one of these modules using a module load command like:

                  module load atools/1.5.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty atools/1.5.1-GCCcore-11.2.0 x x x - x x atools/1.4.6-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/attr/", "title": "attr", "text": ""}, {"location": "available_software/detail/attr/#available-modules", "title": "Available modules", "text": "

The overview below shows which attr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using attr, load one of these modules using a module load command like:

                  module load attr/2.5.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attr/2.5.1-GCCcore-11.3.0 x x x x x x attr/2.5.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/attrdict/", "title": "attrdict", "text": ""}, {"location": "available_software/detail/attrdict/#available-modules", "title": "Available modules", "text": "

The overview below shows which attrdict installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using attrdict, load one of these modules using a module load command like:

                  module load attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attrdict/2.0.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/attrdict3/", "title": "attrdict3", "text": ""}, {"location": "available_software/detail/attrdict3/#available-modules", "title": "Available modules", "text": "

The overview below shows which attrdict3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using attrdict3, load one of these modules using a module load command like:

                  module load attrdict3/2.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty attrdict3/2.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/augur/", "title": "augur", "text": ""}, {"location": "available_software/detail/augur/#available-modules", "title": "Available modules", "text": "

The overview below shows which augur installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using augur, load one of these modules using a module load command like:

                  module load augur/7.0.2-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty augur/7.0.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/autopep8/", "title": "autopep8", "text": ""}, {"location": "available_software/detail/autopep8/#available-modules", "title": "Available modules", "text": "

The overview below shows which autopep8 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using autopep8, load one of these modules using a module load command like:

                  module load autopep8/2.0.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty autopep8/2.0.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/awscli/", "title": "awscli", "text": ""}, {"location": "available_software/detail/awscli/#available-modules", "title": "Available modules", "text": "

The overview below shows which awscli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using awscli, load one of these modules using a module load command like:

                  module load awscli/2.11.21-GCCcore-11.3.0\n
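A short sketch of the aws command this module provides (the bucket name and paths are placeholders, and valid AWS credentials are assumed to be configured):

# check that the CLI is available
aws --version
# upload a local file to an S3 bucket
aws s3 cp results.csv s3://my-bucket/results.csv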

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty awscli/2.11.21-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/babl/", "title": "babl", "text": ""}, {"location": "available_software/detail/babl/#available-modules", "title": "Available modules", "text": "

The overview below shows which babl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using babl, load one of these modules using a module load command like:

                  module load babl/0.1.86-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty babl/0.1.86-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/bam-readcount/", "title": "bam-readcount", "text": ""}, {"location": "available_software/detail/bam-readcount/#available-modules", "title": "Available modules", "text": "

The overview below shows which bam-readcount installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bam-readcount, load one of these modules using a module load command like:

                  module load bam-readcount/0.8.0-GCC-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bam-readcount/0.8.0-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/bamFilters/", "title": "bamFilters", "text": ""}, {"location": "available_software/detail/bamFilters/#available-modules", "title": "Available modules", "text": "

The overview below shows which bamFilters installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bamFilters, load one of these modules using a module load command like:

                  module load bamFilters/2022-06-30-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bamFilters/2022-06-30-GCC-11.3.0 x x x - x x"}, {"location": "available_software/detail/barrnap/", "title": "barrnap", "text": ""}, {"location": "available_software/detail/barrnap/#available-modules", "title": "Available modules", "text": "

The overview below shows which barrnap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using barrnap, load one of these modules using a module load command like:

                  module load barrnap/0.9-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty barrnap/0.9-gompi-2021b x x x - x x barrnap/0.9-gompi-2020b - x x x x x"}, {"location": "available_software/detail/basemap/", "title": "basemap", "text": ""}, {"location": "available_software/detail/basemap/#available-modules", "title": "Available modules", "text": "

The overview below shows which basemap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using basemap, load one of these modules using a module load command like:

                  module load basemap/1.3.9-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty basemap/1.3.9-foss-2023a x x x x x x basemap/1.2.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/bcbio-gff/", "title": "bcbio-gff", "text": ""}, {"location": "available_software/detail/bcbio-gff/#available-modules", "title": "Available modules", "text": "

The overview below shows which bcbio-gff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcbio-gff, load one of these modules using a module load command like:

                  module load bcbio-gff/0.7.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcbio-gff/0.7.0-foss-2022b x x x x x x bcbio-gff/0.7.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/bcgTree/", "title": "bcgTree", "text": ""}, {"location": "available_software/detail/bcgTree/#available-modules", "title": "Available modules", "text": "

The overview below shows which bcgTree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcgTree, load one of these modules using a module load command like:

                  module load bcgTree/1.2.0-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcgTree/1.2.0-intel-2021b x x x - x x"}, {"location": "available_software/detail/bcl-convert/", "title": "bcl-convert", "text": ""}, {"location": "available_software/detail/bcl-convert/#available-modules", "title": "Available modules", "text": "

The overview below shows which bcl-convert installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcl-convert, load one of these modules using a module load command like:

                  module load bcl-convert/4.0.3-2el7.x86_64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcl-convert/4.0.3-2el7.x86_64 x x x - x x"}, {"location": "available_software/detail/bcl2fastq2/", "title": "bcl2fastq2", "text": ""}, {"location": "available_software/detail/bcl2fastq2/#available-modules", "title": "Available modules", "text": "

The overview below shows which bcl2fastq2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bcl2fastq2, load one of these modules using a module load command like:

                  module load bcl2fastq2/2.20.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bcl2fastq2/2.20.0-GCC-11.2.0 x x x - x x bcl2fastq2/2.20.0-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/beagle-lib/", "title": "beagle-lib", "text": ""}, {"location": "available_software/detail/beagle-lib/#available-modules", "title": "Available modules", "text": "

The overview below shows which beagle-lib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using beagle-lib, load one of these modules using a module load command like:

                  module load beagle-lib/4.0.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty beagle-lib/4.0.0-GCC-11.3.0 x x x x x x beagle-lib/3.1.2-gcccuda-2019b x - - - x - beagle-lib/3.1.2-GCC-11.3.0 x x x - x x beagle-lib/3.1.2-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/binutils/", "title": "binutils", "text": ""}, {"location": "available_software/detail/binutils/#available-modules", "title": "Available modules", "text": "

The overview below shows which binutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using binutils, load one of these modules using a module load command like:

                  module load binutils/2.40-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty binutils/2.40-GCCcore-13.2.0 x x x x x x binutils/2.40-GCCcore-12.3.0 x x x x x x binutils/2.40 x x x x x x binutils/2.39-GCCcore-12.2.0 x x x x x x binutils/2.39 x x x x x x binutils/2.38-GCCcore-11.3.0 x x x x x x binutils/2.38 x x x x x x binutils/2.37-GCCcore-11.2.0 x x x x x x binutils/2.37 x x x x x x binutils/2.36.1-GCCcore-10.3.0 x x x x x x binutils/2.36.1 x x x x x x binutils/2.35-GCCcore-10.2.0 x x x x x x binutils/2.35 x x x x x x binutils/2.34-GCCcore-9.3.0 x x x x x x binutils/2.34 x x x x x x binutils/2.32-GCCcore-8.3.0 x x x x x x binutils/2.32 x x x x x x binutils/2.31.1-GCCcore-8.2.0 - x - - - - binutils/2.31.1 - x - - - x binutils/2.30 - - - - - x binutils/2.28 x x x x x x"}, {"location": "available_software/detail/biobakery-workflows/", "title": "biobakery-workflows", "text": ""}, {"location": "available_software/detail/biobakery-workflows/#available-modules", "title": "Available modules", "text": "

The overview below shows which biobakery-workflows installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biobakery-workflows, load one of these modules using a module load command like:

                  module load biobakery-workflows/3.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biobakery-workflows/3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/biobambam2/", "title": "biobambam2", "text": ""}, {"location": "available_software/detail/biobambam2/#available-modules", "title": "Available modules", "text": "

The overview below shows which biobambam2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biobambam2, load one of these modules using a module load command like:

                  module load biobambam2/2.0.185-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biobambam2/2.0.185-GCC-12.3.0 x x x x x x biobambam2/2.0.87-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/biogeme/", "title": "biogeme", "text": ""}, {"location": "available_software/detail/biogeme/#available-modules", "title": "Available modules", "text": "

The overview below shows which biogeme installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biogeme, load one of these modules using a module load command like:

                  module load biogeme/3.2.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biogeme/3.2.10-foss-2022a x x x - x x biogeme/3.2.6-foss-2022a x x x - x x"}, {"location": "available_software/detail/biom-format/", "title": "biom-format", "text": ""}, {"location": "available_software/detail/biom-format/#available-modules", "title": "Available modules", "text": "

The overview below shows which biom-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using biom-format, load one of these modules using a module load command like:

                  module load biom-format/2.1.15-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty biom-format/2.1.15-foss-2022b x x x x x x biom-format/2.1.14-foss-2022a x x x x x x biom-format/2.1.12-foss-2021b x x x - x x"}, {"location": "available_software/detail/bmtagger/", "title": "bmtagger", "text": ""}, {"location": "available_software/detail/bmtagger/#available-modules", "title": "Available modules", "text": "

The overview below shows which bmtagger installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bmtagger, load one of these modules using a module load command like:

                  module load bmtagger/3.101-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bmtagger/3.101-gompi-2020b - x x x x x"}, {"location": "available_software/detail/bokeh/", "title": "bokeh", "text": ""}, {"location": "available_software/detail/bokeh/#available-modules", "title": "Available modules", "text": "

The overview below shows which bokeh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using bokeh, load one of these modules using a module load command like:

                  module load bokeh/3.2.2-foss-2023a\n
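A minimal sketch of generating a standalone HTML plot with the bokeh Python package from this module (the data values and output file name are invented for illustration):

python - <<'EOF'
from bokeh.plotting import figure, output_file, save

output_file('lines.html')        # write the plot to a standalone HTML file
p = figure(title='demo plot')    # create an empty figure
p.line([1, 2, 3], [4, 6, 5])     # add a simple line glyph
save(p)                          # render without opening a browser
EOF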

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bokeh/3.2.2-foss-2023a x x x x x x bokeh/2.4.3-foss-2022a x x x x x x bokeh/2.4.2-foss-2021b x x x x x x bokeh/2.4.1-foss-2021a x x x - x x bokeh/2.2.3-intel-2020b - x x - x x bokeh/2.2.3-fosscuda-2020b x - - - x - bokeh/2.2.3-foss-2020b - x x x x x bokeh/2.0.2-intel-2020a-Python-3.8.2 - x x - x x bokeh/2.0.2-foss-2020a-Python-3.8.2 - x x - x x bokeh/1.4.0-intel-2019b-Python-3.7.4 - x x - x x bokeh/1.4.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/boto3/", "title": "boto3", "text": ""}, {"location": "available_software/detail/boto3/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which boto3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using boto3, load one of these modules using a module load command like:

                  module load boto3/1.34.10-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty boto3/1.34.10-GCCcore-12.2.0 x x x x x x boto3/1.26.163-GCCcore-12.2.0 x x x x x x boto3/1.20.13-GCCcore-11.2.0 x x x - x x boto3/1.20.13-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/bpp/", "title": "bpp", "text": ""}, {"location": "available_software/detail/bpp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which bpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bpp, load one of these modules using a module load command like:

                  module load bpp/4.4.0-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bpp/4.4.0-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/btllib/", "title": "btllib", "text": ""}, {"location": "available_software/detail/btllib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which btllib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using btllib, load one of these modules using a module load command like:

                  module load btllib/1.7.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty btllib/1.7.0-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/build/", "title": "build", "text": ""}, {"location": "available_software/detail/build/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using build, load one of these modules using a module load command like:

                  module load build/0.10.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty build/0.10.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/buildenv/", "title": "buildenv", "text": ""}, {"location": "available_software/detail/buildenv/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which buildenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using buildenv, load one of these modules using a module load command like:

                  module load buildenv/default-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty buildenv/default-intel-2019b - x x - x x buildenv/default-foss-2019b - x x - x x"}, {"location": "available_software/detail/buildingspy/", "title": "buildingspy", "text": ""}, {"location": "available_software/detail/buildingspy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which buildingspy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using buildingspy, load one of these modules using a module load command like:

                  module load buildingspy/4.0.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty buildingspy/4.0.0-foss-2022a x x x - x x"}, {"location": "available_software/detail/bwa-meth/", "title": "bwa-meth", "text": ""}, {"location": "available_software/detail/bwa-meth/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which bwa-meth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bwa-meth, load one of these modules using a module load command like:

                  module load bwa-meth/0.2.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bwa-meth/0.2.6-GCC-11.3.0 x x x x x x bwa-meth/0.2.2-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/bwidget/", "title": "bwidget", "text": ""}, {"location": "available_software/detail/bwidget/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which bwidget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bwidget, load one of these modules using a module load command like:

                  module load bwidget/1.9.15-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bwidget/1.9.15-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/bx-python/", "title": "bx-python", "text": ""}, {"location": "available_software/detail/bx-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which bx-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bx-python, load one of these modules using a module load command like:

                  module load bx-python/0.10.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bx-python/0.10.0-foss-2023a x x x x x x bx-python/0.9.0-foss-2022a x x x x x x bx-python/0.8.13-foss-2021b x x x - x x bx-python/0.8.9-foss-2020a-Python-3.8.2 - - x - x x"}, {"location": "available_software/detail/bzip2/", "title": "bzip2", "text": ""}, {"location": "available_software/detail/bzip2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which bzip2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using bzip2, load one of these modules using a module load command like:

                  module load bzip2/1.0.8-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty bzip2/1.0.8-GCCcore-13.2.0 x x x x x x bzip2/1.0.8-GCCcore-12.3.0 x x x x x x bzip2/1.0.8-GCCcore-12.2.0 x x x x x x bzip2/1.0.8-GCCcore-11.3.0 x x x x x x bzip2/1.0.8-GCCcore-11.2.0 x x x x x x bzip2/1.0.8-GCCcore-10.3.0 x x x x x x bzip2/1.0.8-GCCcore-10.2.0 x x x x x x bzip2/1.0.8-GCCcore-9.3.0 x x x x x x bzip2/1.0.8-GCCcore-8.3.0 x x x x x x bzip2/1.0.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/c-ares/", "title": "c-ares", "text": ""}, {"location": "available_software/detail/c-ares/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which c-ares installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using c-ares, load one of these modules using a module load command like:

                  module load c-ares/1.18.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty c-ares/1.18.1-GCCcore-11.2.0 x x x x x x c-ares/1.17.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/cURL/", "title": "cURL", "text": ""}, {"location": "available_software/detail/cURL/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cURL installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cURL, load one of these modules using a module load command like:

                  module load cURL/8.3.0-GCCcore-13.2.0\n
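                   After loading, the curl binary provided by the module should take precedence over the system one; a minimal check looks like this (sketch only):

                   # confirm which curl is picked up and report its version
                   module load cURL/8.3.0-GCCcore-13.2.0
                   which curl
                   curl --version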

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cURL/8.3.0-GCCcore-13.2.0 x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x cURL/7.83.0-GCCcore-11.3.0 x x x x x x cURL/7.78.0-GCCcore-11.2.0 x x x x x x cURL/7.76.0-GCCcore-10.3.0 x x x x x x cURL/7.72.0-GCCcore-10.2.0 x x x x x x cURL/7.69.1-GCCcore-9.3.0 x x x x x x cURL/7.66.0-GCCcore-8.3.0 x x x x x x cURL/7.63.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/cairo/", "title": "cairo", "text": ""}, {"location": "available_software/detail/cairo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cairo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cairo, load one of these modules using a module load command like:

                  module load cairo/1.17.8-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cairo/1.17.8-GCCcore-12.3.0 x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x cairo/1.17.4-GCCcore-11.3.0 x x x x x x cairo/1.16.0-GCCcore-11.2.0 x x x x x x cairo/1.16.0-GCCcore-10.3.0 x x x x x x cairo/1.16.0-GCCcore-10.2.0 x x x x x x cairo/1.16.0-GCCcore-9.3.0 x x x x x x cairo/1.16.0-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/canu/", "title": "canu", "text": ""}, {"location": "available_software/detail/canu/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which canu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using canu, load one of these modules using a module load command like:

                  module load canu/2.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty canu/2.2-GCCcore-11.2.0 x x x - x x canu/2.2-GCCcore-10.3.0 - x x - x x canu/2.1.1-GCCcore-10.2.0 - x x - x x canu/1.9-GCCcore-8.3.0-Java-11 - - x - x -"}, {"location": "available_software/detail/carputils/", "title": "carputils", "text": ""}, {"location": "available_software/detail/carputils/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which carputils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using carputils, load one of these modules using a module load command like:

                  module load carputils/20210513-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty carputils/20210513-foss-2020b - x x x x x carputils/20200915-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/ccache/", "title": "ccache", "text": ""}, {"location": "available_software/detail/ccache/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ccache installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ccache, load one of these modules using a module load command like:

                  module load ccache/4.6.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ccache/4.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/cctbx-base/", "title": "cctbx-base", "text": ""}, {"location": "available_software/detail/cctbx-base/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cctbx-base installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cctbx-base, load one of these modules using a module load command like:

                  module load cctbx-base/2023.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cctbx-base/2023.5-foss-2022a - - x - x - cctbx-base/2020.8-fosscuda-2020b x - - - x - cctbx-base/2020.8-foss-2020b x x x x x x"}, {"location": "available_software/detail/cdbfasta/", "title": "cdbfasta", "text": ""}, {"location": "available_software/detail/cdbfasta/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cdbfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdbfasta, load one of these modules using a module load command like:

                  module load cdbfasta/0.99-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdbfasta/0.99-iccifort-2019.5.281 - x x - x - cdbfasta/0.99-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/cdo-bindings/", "title": "cdo-bindings", "text": ""}, {"location": "available_software/detail/cdo-bindings/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cdo-bindings installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdo-bindings, load one of these modules using a module load command like:

                  module load cdo-bindings/1.5.7-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdo-bindings/1.5.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/cdsapi/", "title": "cdsapi", "text": ""}, {"location": "available_software/detail/cdsapi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cdsapi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cdsapi, load one of these modules using a module load command like:

                  module load cdsapi/0.5.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cdsapi/0.5.1-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/cell2location/", "title": "cell2location", "text": ""}, {"location": "available_software/detail/cell2location/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cell2location installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cell2location, load one of these modules using a module load command like:

                  module load cell2location/0.05-alpha-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cell2location/0.05-alpha-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/cffi/", "title": "cffi", "text": ""}, {"location": "available_software/detail/cffi/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cffi, load one of these modules using a module load command like:

                  module load cffi/1.15.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cffi/1.15.1-GCCcore-13.2.0 x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x cffi/1.15.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/chemprop/", "title": "chemprop", "text": ""}, {"location": "available_software/detail/chemprop/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which chemprop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using chemprop, load one of these modules using a module load command like:

                  module load chemprop/1.5.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty chemprop/1.5.2-foss-2022a-CUDA-11.7.0 x - - - x - chemprop/1.5.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/chewBBACA/", "title": "chewBBACA", "text": ""}, {"location": "available_software/detail/chewBBACA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which chewBBACA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using chewBBACA, load one of these modules using a module load command like:

                  module load chewBBACA/2.5.5-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty chewBBACA/2.5.5-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/cicero/", "title": "cicero", "text": ""}, {"location": "available_software/detail/cicero/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cicero installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cicero, load one of these modules using a module load command like:

                  module load cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cicero/1.3.8-foss-2022a-R-4.2.1-Monocle3 x x x x x x cicero/1.3.4.11-foss-2020b-R-4.0.3-Monocle3 - x x x x x"}, {"location": "available_software/detail/cimfomfa/", "title": "cimfomfa", "text": ""}, {"location": "available_software/detail/cimfomfa/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cimfomfa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cimfomfa, load one of these modules using a module load command like:

                  module load cimfomfa/22.273-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cimfomfa/22.273-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/code-cli/", "title": "code-cli", "text": ""}, {"location": "available_software/detail/code-cli/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which code-cli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using code-cli, load one of these modules using a module load command like:

                  module load code-cli/1.85.1-x64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty code-cli/1.85.1-x64 x x x x x x"}, {"location": "available_software/detail/code-server/", "title": "code-server", "text": ""}, {"location": "available_software/detail/code-server/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which code-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using code-server, load one of these modules using a module load command like:

                  module load code-server/4.9.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty code-server/4.9.1 x x x x x x"}, {"location": "available_software/detail/colossalai/", "title": "colossalai", "text": ""}, {"location": "available_software/detail/colossalai/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which colossalai installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using colossalai, load one of these modules using a module load command like:

                  module load colossalai/0.1.8-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty colossalai/0.1.8-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/conan/", "title": "conan", "text": ""}, {"location": "available_software/detail/conan/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which conan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using conan, load one of these modules using a module load command like:

                  module load conan/1.60.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty conan/1.60.2-GCCcore-12.3.0 x x x x x x conan/1.58.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/configurable-http-proxy/", "title": "configurable-http-proxy", "text": ""}, {"location": "available_software/detail/configurable-http-proxy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which configurable-http-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using configurable-http-proxy, load one of these modules using a module load command like:

                  module load configurable-http-proxy/4.5.5-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty configurable-http-proxy/4.5.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/cooler/", "title": "cooler", "text": ""}, {"location": "available_software/detail/cooler/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cooler, load one of these modules using a module load command like:

                  module load cooler/0.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cooler/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/coverage/", "title": "coverage", "text": ""}, {"location": "available_software/detail/coverage/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which coverage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using coverage, load one of these modules using a module load command like:

                  module load coverage/7.2.7-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty coverage/7.2.7-GCCcore-11.3.0 x x x x x x coverage/5.5-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/cppy/", "title": "cppy", "text": ""}, {"location": "available_software/detail/cppy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cppy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cppy, load one of these modules using a module load command like:

                  module load cppy/1.2.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cppy/1.2.1-GCCcore-12.3.0 x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x cppy/1.2.1-GCCcore-11.3.0 x x x x x x cppy/1.1.0-GCCcore-11.2.0 x x x x x x cppy/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/cpu_features/", "title": "cpu_features", "text": ""}, {"location": "available_software/detail/cpu_features/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cpu_features installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cpu_features, load one of these modules using a module load command like:

                  module load cpu_features/0.6.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cpu_features/0.6.0-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/cryoDRGN/", "title": "cryoDRGN", "text": ""}, {"location": "available_software/detail/cryoDRGN/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cryoDRGN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cryoDRGN, load one of these modules using a module load command like:

                  module load cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cryoDRGN/1.0.0-beta-foss-2021a-CUDA-11.3.1 x - - - x - cryoDRGN/0.3.5-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/cryptography/", "title": "cryptography", "text": ""}, {"location": "available_software/detail/cryptography/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cryptography installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cryptography, load one of these modules using a module load command like:

                  module load cryptography/41.0.5-GCCcore-13.2.0\n
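                   A small round-trip with the Fernet recipe verifies that the package imports and works. This is a minimal sketch; the key and message are throwaway values used only for illustration:

                   # encrypt and decrypt a short message with a freshly generated key; prints b'ok'
                   module load cryptography/41.0.5-GCCcore-13.2.0
                   python -c "from cryptography.fernet import Fernet; f = Fernet(Fernet.generate_key()); print(f.decrypt(f.encrypt(b'ok')))"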

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cryptography/41.0.5-GCCcore-13.2.0 x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/cuDNN/", "title": "cuDNN", "text": ""}, {"location": "available_software/detail/cuDNN/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cuDNN installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuDNN, load one of these modules using a module load command like:

                  module load cuDNN/8.9.2.26-CUDA-12.1.1\n
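                   Each cuDNN module is built against the CUDA release named in its version suffix, and loading it typically pulls in that CUDA module as a dependency. A minimal sketch to inspect what ends up in your environment:

                   # load cuDNN and list the modules (including CUDA) that are now loaded
                   module load cuDNN/8.9.2.26-CUDA-12.1.1
                   module list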

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuDNN/8.9.2.26-CUDA-12.1.1 x - x - x - cuDNN/8.4.1.50-CUDA-11.7.0 x - x - x - cuDNN/8.2.2.26-CUDA-11.4.1 x - - - x - cuDNN/8.2.1.32-CUDA-11.3.1 x x x - x x cuDNN/8.0.4.30-CUDA-11.1.1 x - - - x x"}, {"location": "available_software/detail/cuTENSOR/", "title": "cuTENSOR", "text": ""}, {"location": "available_software/detail/cuTENSOR/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cuTENSOR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuTENSOR, load one of these modules using a module load command like:

                  module load cuTENSOR/1.2.2.5-CUDA-11.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuTENSOR/1.2.2.5-CUDA-11.1.1 - - - - x -"}, {"location": "available_software/detail/cutadapt/", "title": "cutadapt", "text": ""}, {"location": "available_software/detail/cutadapt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cutadapt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cutadapt, load one of these modules using a module load command like:

                  module load cutadapt/4.2-GCCcore-11.3.0\n
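                   A quick way to confirm which cutadapt ends up on your PATH is to ask it for its version (a minimal sketch):

                   # report the version of the cutadapt executable provided by the module
                   module load cutadapt/4.2-GCCcore-11.3.0
                   cutadapt --version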

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cutadapt/4.2-GCCcore-11.3.0 x x x x x x cutadapt/3.5-GCCcore-11.2.0 x x x - x x cutadapt/3.4-GCCcore-10.2.0 - x x x x x cutadapt/2.10-GCCcore-9.3.0-Python-3.8.2 - x x - x x cutadapt/2.7-GCCcore-8.3.0-Python-3.7.4 - x x - x x cutadapt/1.18-GCCcore-8.3.0-Python-2.7.16 - x x - x x cutadapt/1.18-GCCcore-8.3.0 - x x - x x cutadapt/1.18-GCC-10.2.0-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/cuteSV/", "title": "cuteSV", "text": ""}, {"location": "available_software/detail/cuteSV/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cuteSV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cuteSV, load one of these modules using a module load command like:

                  module load cuteSV/2.0.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cuteSV/2.0.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/cython-blis/", "title": "cython-blis", "text": ""}, {"location": "available_software/detail/cython-blis/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which cython-blis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using cython-blis, load one of these modules using a module load command like:

                  module load cython-blis/0.9.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty cython-blis/0.9.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dask/", "title": "dask", "text": ""}, {"location": "available_software/detail/dask/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dask, load one of these modules using a module load command like:

                  module load dask/2023.12.1-foss-2023a\n
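                   As a small smoke test you can load dask and evaluate a trivial array computation. This is a sketch only; it uses the default threaded scheduler on a single node, not a multi-node cluster:

                   # sum a 1000x1000 array of ones; this should print 1000000.0
                   module load dask/2023.12.1-foss-2023a
                   python -c "import dask.array as da; print(da.ones((1000, 1000)).sum().compute())"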

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dask/2023.12.1-foss-2023a x x x x x x dask/2022.10.0-foss-2022a x x x x x x dask/2022.1.0-foss-2021b x x x x x x dask/2021.9.1-foss-2021a x x x - x x dask/2021.2.0-intel-2020b - x x - x x dask/2021.2.0-fosscuda-2020b x - - - x - dask/2021.2.0-foss-2020b - x x x x x dask/2.18.1-intel-2020a-Python-3.8.2 - x x - x x dask/2.18.1-foss-2020a-Python-3.8.2 - x x - x x dask/2.8.0-intel-2019b-Python-3.7.4 - x x - x x dask/2.8.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dbus-glib/", "title": "dbus-glib", "text": ""}, {"location": "available_software/detail/dbus-glib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dbus-glib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dbus-glib, load one of these modules using a module load command like:

                  module load dbus-glib/0.112-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dbus-glib/0.112-GCCcore-11.2.0 x x x x x x dbus-glib/0.112-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/dclone/", "title": "dclone", "text": ""}, {"location": "available_software/detail/dclone/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dclone, load one of these modules using a module load command like:

                  module load dclone/2.3-0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dclone/2.3-0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/deal.II/", "title": "deal.II", "text": ""}, {"location": "available_software/detail/deal.II/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which deal.II installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deal.II, load one of these modules using a module load command like:

                  module load deal.II/9.3.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deal.II/9.3.3-foss-2021a - x x - x x"}, {"location": "available_software/detail/decona/", "title": "decona", "text": ""}, {"location": "available_software/detail/decona/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which decona installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using decona, load one of these modules using a module load command like:

                  module load decona/0.1.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty decona/0.1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepTools/", "title": "deepTools", "text": ""}, {"location": "available_software/detail/deepTools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which deepTools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deepTools, load one of these modules using a module load command like:

                  module load deepTools/3.5.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deepTools/3.5.1-foss-2021b x x x - x x deepTools/3.3.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/deepdiff/", "title": "deepdiff", "text": ""}, {"location": "available_software/detail/deepdiff/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which deepdiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using deepdiff, load one of these modules using a module load command like:

                  module load deepdiff/6.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty deepdiff/6.7.1-GCCcore-12.3.0 x x x x x x deepdiff/6.7.1-GCCcore-12.2.0 x x x x x x deepdiff/5.8.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/detectron2/", "title": "detectron2", "text": ""}, {"location": "available_software/detail/detectron2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which detectron2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using detectron2, load one of these modules using a module load command like:

                  module load detectron2/0.6-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty detectron2/0.6-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/devbio-napari/", "title": "devbio-napari", "text": ""}, {"location": "available_software/detail/devbio-napari/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which devbio-napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using devbio-napari, load one of these modules using a module load command like:

                  module load devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty devbio-napari/0.10.1-foss-2022a-CUDA-11.7.0 x - - - x - devbio-napari/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dicom2nifti/", "title": "dicom2nifti", "text": ""}, {"location": "available_software/detail/dicom2nifti/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dicom2nifti installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dicom2nifti, load one of these modules using a module load command like:

                  module load dicom2nifti/2.3.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dicom2nifti/2.3.0-fosscuda-2020b x - - - x - dicom2nifti/2.3.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/dijitso/", "title": "dijitso", "text": ""}, {"location": "available_software/detail/dijitso/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dijitso installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dijitso, load one of these modules using a module load command like:

                  module load dijitso/2019.1.0-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dijitso/2019.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/dill/", "title": "dill", "text": ""}, {"location": "available_software/detail/dill/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dill, load one of these modules using a module load command like:

                  module load dill/0.3.7-GCCcore-12.3.0\n
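                   dill extends the standard pickle module to objects that pickle itself cannot handle, such as lambdas; a minimal round-trip sketch:

                   # serialize a lambda, load it back and call it; this should print 42
                   module load dill/0.3.7-GCCcore-12.3.0
                   python -c "import dill; f = dill.loads(dill.dumps(lambda x: x + 1)); print(f(41))"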

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dill/0.3.7-GCCcore-12.3.0 x x x x x x dill/0.3.7-GCCcore-12.2.0 x x x x x x dill/0.3.6-GCCcore-11.3.0 x x x x x x dill/0.3.4-GCCcore-11.2.0 x x x x x x dill/0.3.4-GCCcore-10.3.0 x x x - x x dill/0.3.3-GCCcore-10.2.0 - x x x x x dill/0.3.3-GCCcore-9.3.0 - x x - - x"}, {"location": "available_software/detail/dlib/", "title": "dlib", "text": ""}, {"location": "available_software/detail/dlib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dlib, load one of these modules using a module load command like:

                  module load dlib/19.22-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dlib/19.22-foss-2021a-CUDA-11.3.1 - - - - x - dlib/19.22-foss-2021a - x x - x x"}, {"location": "available_software/detail/dm-haiku/", "title": "dm-haiku", "text": ""}, {"location": "available_software/detail/dm-haiku/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dm-haiku installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dm-haiku, load one of these modules using a module load command like:

                  module load dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dm-haiku/0.0.9-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/dm-tree/", "title": "dm-tree", "text": ""}, {"location": "available_software/detail/dm-tree/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dm-tree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dm-tree, load one of these modules using a module load command like:

                  module load dm-tree/0.1.8-GCCcore-11.3.0\n
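                   dm-tree is imported in Python as tree and flattens arbitrarily nested structures into their leaves; a minimal sketch (the nested dict is just an example value):

                   # flatten a nested structure; this should print [1, 2, 3]
                   module load dm-tree/0.1.8-GCCcore-11.3.0
                   python -c "import tree; print(tree.flatten({'a': 1, 'b': [2, 3]}))"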

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dm-tree/0.1.8-GCCcore-11.3.0 x x x x x x dm-tree/0.1.6-GCCcore-10.3.0 x x x x x x dm-tree/0.1.5-GCCcore-10.2.0 x x x x x x dm-tree/0.1.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/dorado/", "title": "dorado", "text": ""}, {"location": "available_software/detail/dorado/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dorado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dorado, load one of these modules using a module load command like:

                  module load dorado/0.5.1-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dorado/0.5.1-foss-2022a-CUDA-11.7.0 x - x - x - dorado/0.3.1-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.3.0-foss-2022a-CUDA-11.7.0 x - - - x - dorado/0.1.1-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/double-conversion/", "title": "double-conversion", "text": ""}, {"location": "available_software/detail/double-conversion/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which double-conversion installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using double-conversion, load one of these modules using a module load command like:

                  module load double-conversion/3.3.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x double-conversion/3.2.0-GCCcore-11.3.0 x x x x x x double-conversion/3.1.5-GCCcore-11.2.0 x x x x x x double-conversion/3.1.5-GCCcore-10.3.0 x x x x x x double-conversion/3.1.5-GCCcore-10.2.0 x x x x x x double-conversion/3.1.5-GCCcore-9.3.0 - x x - x x double-conversion/3.1.4-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/drmaa-python/", "title": "drmaa-python", "text": ""}, {"location": "available_software/detail/drmaa-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which drmaa-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using drmaa-python, load one of these modules using a module load command like:

                  module load drmaa-python/0.7.9-GCCcore-12.2.0-slurm\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty drmaa-python/0.7.9-GCCcore-12.2.0-slurm x x x x x x"}, {"location": "available_software/detail/dtcwt/", "title": "dtcwt", "text": ""}, {"location": "available_software/detail/dtcwt/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dtcwt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dtcwt, load one of these modules using a module load command like:

                  module load dtcwt/0.12.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dtcwt/0.12.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/duplex-tools/", "title": "duplex-tools", "text": ""}, {"location": "available_software/detail/duplex-tools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which duplex-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using duplex-tools, load one of these modules using a module load command like:

                  module load duplex-tools/0.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty duplex-tools/0.3.3-foss-2022a x x x x x x duplex-tools/0.3.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/dynesty/", "title": "dynesty", "text": ""}, {"location": "available_software/detail/dynesty/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which dynesty installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using dynesty, load one of these modules using a module load command like:

                  module load dynesty/2.1.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty dynesty/2.1.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/eSpeak-NG/", "title": "eSpeak-NG", "text": ""}, {"location": "available_software/detail/eSpeak-NG/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which eSpeak-NG installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using eSpeak-NG, load one of these modules using a module load command like:

                  module load eSpeak-NG/1.50-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty eSpeak-NG/1.50-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ebGSEA/", "title": "ebGSEA", "text": ""}, {"location": "available_software/detail/ebGSEA/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ebGSEA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ebGSEA, load one of these modules using a module load command like:

                  module load ebGSEA/0.1.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ebGSEA/0.1.0-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/ecCodes/", "title": "ecCodes", "text": ""}, {"location": "available_software/detail/ecCodes/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which ecCodes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ecCodes, load one of these modules using a module load command like:

                  module load ecCodes/2.24.2-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ecCodes/2.24.2-gompi-2021b x x x x x x ecCodes/2.22.1-gompi-2021a x x x - x x ecCodes/2.15.0-iimpi-2019b - x x - x x"}, {"location": "available_software/detail/edlib/", "title": "edlib", "text": ""}, {"location": "available_software/detail/edlib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which edlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using edlib, load one of these modules using a module load command like:

                  module load edlib/1.3.9-GCC-11.3.0\n
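                   The edlib modules include the Python bindings, so a one-line alignment is enough to check the installation. This is a sketch; the two short sequences are made up for illustration:

                   # compute the edit distance between two short sequences; this should print 1
                   module load edlib/1.3.9-GCC-11.3.0
                   python -c "import edlib; print(edlib.align('ACGT', 'ACGGT')['editDistance'])"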

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty edlib/1.3.9-GCC-11.3.0 x x x x x x edlib/1.3.9-GCC-11.2.0 x x x - x x edlib/1.3.9-GCC-10.3.0 x x x - x x edlib/1.3.9-GCC-10.2.0 - x x x x x edlib/1.3.8.post2-iccifort-2020.1.217-Python-3.8.2 - x x - x - edlib/1.3.8.post1-iccifort-2019.5.281-Python-3.7.4 - x x - x - edlib/1.3.8.post1-GCC-9.3.0-Python-3.8.2 - x x - x x edlib/1.3.8.post1-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/eggnog-mapper/", "title": "eggnog-mapper", "text": ""}, {"location": "available_software/detail/eggnog-mapper/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which eggnog-mapper installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using eggnog-mapper, load one of these modules using a module load command like:

                  module load eggnog-mapper/2.1.10-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty eggnog-mapper/2.1.10-foss-2020b x x x x x x eggnog-mapper/2.1.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/einops/", "title": "einops", "text": ""}, {"location": "available_software/detail/einops/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which einops installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using einops, load one of these modules using a module load command like:

                  module load einops/0.4.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty einops/0.4.1-GCCcore-11.3.0 x x x x x x einops/0.4.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/elfutils/", "title": "elfutils", "text": ""}, {"location": "available_software/detail/elfutils/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which elfutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using elfutils, load one of these modules using a module load command like:

                  module load elfutils/0.187-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty elfutils/0.187-GCCcore-11.3.0 x x x x x x elfutils/0.185-GCCcore-11.2.0 x x x x x x elfutils/0.185-GCCcore-10.3.0 x x x x x x elfutils/0.185-GCCcore-8.3.0 x - - - x - elfutils/0.183-GCCcore-10.2.0 - - x x x -"}, {"location": "available_software/detail/elprep/", "title": "elprep", "text": ""}, {"location": "available_software/detail/elprep/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which elprep installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using elprep, load one of these modules using a module load command like:

                  module load elprep/5.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty elprep/5.1.1 - x x - x -"}, {"location": "available_software/detail/enchant-2/", "title": "enchant-2", "text": ""}, {"location": "available_software/detail/enchant-2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which enchant-2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using enchant-2, load one of these modules using a module load command like:

                  module load enchant-2/2.3.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty enchant-2/2.3.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/epiScanpy/", "title": "epiScanpy", "text": ""}, {"location": "available_software/detail/epiScanpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which epiScanpy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using epiScanpy, load one of these modules using a module load command like:

                  module load epiScanpy/0.4.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty epiScanpy/0.4.0-foss-2022a x x x x x x epiScanpy/0.3.1-foss-2021a - x x - x x"}, {"location": "available_software/detail/exiv2/", "title": "exiv2", "text": ""}, {"location": "available_software/detail/exiv2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which exiv2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using exiv2, load one of these modules using a module load command like:

                  module load exiv2/0.27.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty exiv2/0.27.5-GCCcore-11.2.0 x x x x x x exiv2/0.27.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/expat/", "title": "expat", "text": ""}, {"location": "available_software/detail/expat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which expat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using expat, load one of these modules using a module load command like:

                  module load expat/2.5.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty expat/2.5.0-GCCcore-13.2.0 x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x expat/2.4.8-GCCcore-11.3.0 x x x x x x expat/2.4.1-GCCcore-11.2.0 x x x x x x expat/2.2.9-GCCcore-10.3.0 x x x x x x expat/2.2.9-GCCcore-10.2.0 x x x x x x expat/2.2.9-GCCcore-9.3.0 x x x x x x expat/2.2.7-GCCcore-8.3.0 x x x x x x expat/2.2.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/expecttest/", "title": "expecttest", "text": ""}, {"location": "available_software/detail/expecttest/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which expecttest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using expecttest, load one of these modules using a module load command like:

                  module load expecttest/0.1.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty expecttest/0.1.5-GCCcore-12.3.0 x x x x x x expecttest/0.1.3-GCCcore-12.2.0 x x x x x x expecttest/0.1.3-GCCcore-11.3.0 x x x x x x expecttest/0.1.3-GCCcore-11.2.0 x x x x x x expecttest/0.1.3-GCCcore-10.3.0 x x x x x x expecttest/0.1.3-GCCcore-10.2.0 x - - - - -"}, {"location": "available_software/detail/fasta-reader/", "title": "fasta-reader", "text": ""}, {"location": "available_software/detail/fasta-reader/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fasta-reader installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fasta-reader, load one of these modules using a module load command like:

                  module load fasta-reader/3.0.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fasta-reader/3.0.2-GCC-12.3.0 x x x x x x"}, {"location": "available_software/detail/fastahack/", "title": "fastahack", "text": ""}, {"location": "available_software/detail/fastahack/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fastahack installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastahack, load one of these modules using a module load command like:

                  module load fastahack/1.0.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastahack/1.0.0-GCCcore-11.3.0 x x x x x x fastahack/1.0.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/fastai/", "title": "fastai", "text": ""}, {"location": "available_software/detail/fastai/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fastai installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastai, load one of these modules using a module load command like:

                  module load fastai/2.7.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastai/2.7.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/fastp/", "title": "fastp", "text": ""}, {"location": "available_software/detail/fastp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fastp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fastp, load one of these modules using a module load command like:

                  module load fastp/0.23.2-GCC-11.2.0\n
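
                  As a hedged sketch of a first run (the FASTQ file names below are placeholders, not taken from this overview), basic single-end adapter/quality trimming could look like:

                  # placeholder input/output names; see fastp --help for the full option list\nfastp -i reads.fastq.gz -o reads.trimmed.fastq.gz\n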

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fastp/0.23.2-GCC-11.2.0 x x x - x x fastp/0.20.1-iccifort-2020.1.217 - x x - x - fastp/0.20.0-iccifort-2019.5.281 - x - - - - fastp/0.20.0-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/fermi-lite/", "title": "fermi-lite", "text": ""}, {"location": "available_software/detail/fermi-lite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fermi-lite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fermi-lite, load one of these modules using a module load command like:

                  module load fermi-lite/20190320-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fermi-lite/20190320-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/festival/", "title": "festival", "text": ""}, {"location": "available_software/detail/festival/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which festival installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using festival, load one of these modules using a module load command like:

                  module load festival/2.5.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty festival/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/fetchMG/", "title": "fetchMG", "text": ""}, {"location": "available_software/detail/fetchMG/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fetchMG installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fetchMG, load one of these modules using a module load command like:

                  module load fetchMG/1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fetchMG/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ffnvcodec/", "title": "ffnvcodec", "text": ""}, {"location": "available_software/detail/ffnvcodec/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using ffnvcodec, load one of these modules using a module load command like:

                  module load ffnvcodec/12.0.16.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ffnvcodec/12.0.16.0 x x x x x x ffnvcodec/11.1.5.2 x x x x x x"}, {"location": "available_software/detail/file/", "title": "file", "text": ""}, {"location": "available_software/detail/file/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which file installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using file, load one of these modules using a module load command like:

                  module load file/5.43-GCCcore-11.3.0\n
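
                  Once loaded, the file command is available directly in your session; for example (the path below is only an illustration):

                  file --version\nfile /bin/bash\n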

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty file/5.43-GCCcore-11.3.0 x x x x x x file/5.41-GCCcore-11.2.0 x x x x x x file/5.39-GCCcore-10.2.0 - x x x x x file/5.38-GCCcore-9.3.0 - x x - x x file/5.38-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/filevercmp/", "title": "filevercmp", "text": ""}, {"location": "available_software/detail/filevercmp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which filevercmp installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using filevercmp, load one of these modules using a module load command like:

                  module load filevercmp/20191210-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty filevercmp/20191210-GCCcore-11.3.0 x x x x x x filevercmp/20191210-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/finder/", "title": "finder", "text": ""}, {"location": "available_software/detail/finder/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which finder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using finder, load one of these modules using a module load command like:

                  module load finder/1.1.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty finder/1.1.0-foss-2021b x x x x x x"}, {"location": "available_software/detail/flair-NLP/", "title": "flair-NLP", "text": ""}, {"location": "available_software/detail/flair-NLP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which flair-NLP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flair-NLP, load one of these modules using a module load command like:

                  module load flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flair-NLP/0.11.3-foss-2021a-CUDA-11.3.1 x - - - x - flair-NLP/0.11.3-foss-2021a x x x - x x"}, {"location": "available_software/detail/flatbuffers-python/", "title": "flatbuffers-python", "text": ""}, {"location": "available_software/detail/flatbuffers-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flatbuffers-python, load one of these modules using a module load command like:

                  module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers-python/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.3.0 x x x x x x flatbuffers-python/2.0-GCCcore-11.2.0 x x x x x x flatbuffers-python/2.0-GCCcore-10.3.0 x x x x x x flatbuffers-python/1.12-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/flatbuffers/", "title": "flatbuffers", "text": ""}, {"location": "available_software/detail/flatbuffers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flatbuffers, load one of these modules using a module load command like:

                  module load flatbuffers/23.5.26-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x flatbuffers/23.1.4-GCCcore-12.2.0 x x x x x x flatbuffers/2.0.7-GCCcore-11.3.0 x x x x x x flatbuffers/2.0.0-GCCcore-11.2.0 x x x x x x flatbuffers/2.0.0-GCCcore-10.3.0 x x x x x x flatbuffers/1.12.0-GCCcore-10.2.0 x x x x x x flatbuffers/1.12.0-GCCcore-9.3.0 - x x - x x flatbuffers/1.12.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flex/", "title": "flex", "text": ""}, {"location": "available_software/detail/flex/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which flex installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flex, load one of these modules using a module load command like:

                  module load flex/2.6.4-GCCcore-13.2.0\n
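
                  As a minimal sketch (scanner.l is a placeholder lexer specification, not part of this overview), generating a C scanner would look like:

                  # writes the generated scanner to lex.yy.c\nflex -o lex.yy.c scanner.l\n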

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flex/2.6.4-GCCcore-13.2.0 x x x x x x flex/2.6.4-GCCcore-12.3.0 x x x x x x flex/2.6.4-GCCcore-12.2.0 x x x x x x flex/2.6.4-GCCcore-11.3.0 x x x x x x flex/2.6.4-GCCcore-11.2.0 x x x x x x flex/2.6.4-GCCcore-10.3.0 x x x x x x flex/2.6.4-GCCcore-10.2.0 x x x x x x flex/2.6.4-GCCcore-9.3.0 x x x x x x flex/2.6.4-GCCcore-8.3.0 x x x x x x flex/2.6.4-GCCcore-8.2.0 - x - - - - flex/2.6.4 x x x x x x flex/2.6.3 x x x x x x flex/2.5.39-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/flit/", "title": "flit", "text": ""}, {"location": "available_software/detail/flit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which flit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flit, load one of these modules using a module load command like:

                  module load flit/3.9.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flit/3.9.0-GCCcore-13.2.0 x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/flowFDA/", "title": "flowFDA", "text": ""}, {"location": "available_software/detail/flowFDA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which flowFDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using flowFDA, load one of these modules using a module load command like:

                  module load flowFDA/0.99-20220602-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty flowFDA/0.99-20220602-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/fmt/", "title": "fmt", "text": ""}, {"location": "available_software/detail/fmt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fmt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fmt, load one of these modules using a module load command like:

                  module load fmt/10.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fmt/10.1.0-GCCcore-12.3.0 x x x x x x fmt/8.1.1-GCCcore-11.2.0 x x x - x x fmt/7.1.1-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/fontconfig/", "title": "fontconfig", "text": ""}, {"location": "available_software/detail/fontconfig/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fontconfig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fontconfig, load one of these modules using a module load command like:

                  module load fontconfig/2.14.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x fontconfig/2.14.0-GCCcore-11.3.0 x x x x x x fontconfig/2.13.94-GCCcore-11.2.0 x x x x x x fontconfig/2.13.93-GCCcore-10.3.0 x x x x x x fontconfig/2.13.92-GCCcore-10.2.0 x x x x x x fontconfig/2.13.92-GCCcore-9.3.0 x x x x x x fontconfig/2.13.1-GCCcore-8.3.0 x x x - x x fontconfig/2.13.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/foss/", "title": "foss", "text": ""}, {"location": "available_software/detail/foss/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which foss installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using foss, load one of these modules using a module load command like:

                  module load foss/2023b\n
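
                  Loading a foss toolchain module brings in the GNU compilers, OpenMPI and the accompanying math libraries (FlexiBLAS/OpenBLAS, FFTW and ScaLAPACK in recent versions); a quick sanity check after loading might be:

                  gcc --version\nmpicc --version\n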

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty foss/2023b x x x x x x foss/2023a x x x x x x foss/2022b x x x x x x foss/2022a x x x x x x foss/2021b x x x x x x foss/2021a x x x x x x foss/2020b x x x x x x foss/2020a - x x - x x foss/2019b x x x - x x"}, {"location": "available_software/detail/fosscuda/", "title": "fosscuda", "text": ""}, {"location": "available_software/detail/fosscuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fosscuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fosscuda, load one of these modules using a module load command like:

                  module load fosscuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fosscuda/2020b x - - - x -"}, {"location": "available_software/detail/freebayes/", "title": "freebayes", "text": ""}, {"location": "available_software/detail/freebayes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which freebayes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freebayes, load one of these modules using a module load command like:

                  module load freebayes/1.3.5-GCC-10.2.0\n
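
                  As a hedged example of a typical invocation (the reference and BAM file names are placeholders), calling variants against a reference could look like:

                  # placeholder file names; results are written to stdout\nfreebayes -f reference.fa alignments.bam > variants.vcf\n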

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freebayes/1.3.5-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/freeglut/", "title": "freeglut", "text": ""}, {"location": "available_software/detail/freeglut/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which freeglut installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freeglut, load one of these modules using a module load command like:

                  module load freeglut/3.2.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freeglut/3.2.2-GCCcore-11.3.0 x x x x x x freeglut/3.2.1-GCCcore-11.2.0 x x x x x x freeglut/3.2.1-GCCcore-10.3.0 - x x - x x freeglut/3.2.1-GCCcore-10.2.0 - x x x x x freeglut/3.2.1-GCCcore-9.3.0 - x x - x x freeglut/3.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/freetype-py/", "title": "freetype-py", "text": ""}, {"location": "available_software/detail/freetype-py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which freetype-py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freetype-py, load one of these modules using a module load command like:

                  module load freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freetype-py/2.2.0-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/freetype/", "title": "freetype", "text": ""}, {"location": "available_software/detail/freetype/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which freetype installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using freetype, load one of these modules using a module load command like:

                  module load freetype/2.13.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty freetype/2.13.2-GCCcore-13.2.0 x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x freetype/2.12.1-GCCcore-11.3.0 x x x x x x freetype/2.11.0-GCCcore-11.2.0 x x x x x x freetype/2.10.4-GCCcore-10.3.0 x x x x x x freetype/2.10.3-GCCcore-10.2.0 x x x x x x freetype/2.10.1-GCCcore-9.3.0 x x x x x x freetype/2.10.1-GCCcore-8.3.0 x x x - x x freetype/2.9.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/fsom/", "title": "fsom", "text": ""}, {"location": "available_software/detail/fsom/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which fsom installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using fsom, load one of these modules using a module load command like:

                  module load fsom/20151117-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty fsom/20151117-GCCcore-11.3.0 x x x x x x fsom/20141119-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/funannotate/", "title": "funannotate", "text": ""}, {"location": "available_software/detail/funannotate/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which funannotate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using funannotate, load one of these modules using a module load command like:

                  module load funannotate/1.8.13-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty funannotate/1.8.13-foss-2021b x x x x x x"}, {"location": "available_software/detail/g2clib/", "title": "g2clib", "text": ""}, {"location": "available_software/detail/g2clib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which g2clib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2clib, load one of these modules using a module load command like:

                  module load g2clib/1.6.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2clib/1.6.0-GCCcore-9.3.0 - x x - x x g2clib/1.6.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2lib/", "title": "g2lib", "text": ""}, {"location": "available_software/detail/g2lib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which g2lib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2lib, load one of these modules using a module load command like:

                  module load g2lib/3.1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2lib/3.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/g2log/", "title": "g2log", "text": ""}, {"location": "available_software/detail/g2log/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which g2log installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using g2log, load one of these modules using a module load command like:

                  module load g2log/1.0-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty g2log/1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/garnett/", "title": "garnett", "text": ""}, {"location": "available_software/detail/garnett/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which garnett installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using garnett, load one of these modules using a module load command like:

                  module load garnett/0.1.20-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty garnett/0.1.20-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/gawk/", "title": "gawk", "text": ""}, {"location": "available_software/detail/gawk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gawk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gawk, load one of these modules using a module load command like:

                  module load gawk/5.1.0-GCC-10.2.0\n
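
                  For instance, a one-liner that sums the second column of a whitespace-separated file (data.txt is a placeholder name):

                  gawk '{ total += $2 } END { print total }' data.txt\n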

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gawk/5.1.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/gbasis/", "title": "gbasis", "text": ""}, {"location": "available_software/detail/gbasis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gbasis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gbasis, load one of these modules using a module load command like:

                  module load gbasis/20210904-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gbasis/20210904-intel-2022a x x x x x x"}, {"location": "available_software/detail/gc/", "title": "gc", "text": ""}, {"location": "available_software/detail/gc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gc, load one of these modules using a module load command like:

                  module load gc/8.2.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gc/8.2.0-GCCcore-11.2.0 x x x x x x gc/8.0.4-GCCcore-10.3.0 - x x - x x gc/7.6.12-GCCcore-9.3.0 - x x - x x gc/7.6.12-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gcccuda/", "title": "gcccuda", "text": ""}, {"location": "available_software/detail/gcccuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gcccuda installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcccuda, load one of these modules using a module load command like:

                  module load gcccuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcccuda/2020b x x x x x x gcccuda/2019b x - - - x -"}, {"location": "available_software/detail/gcloud/", "title": "gcloud", "text": ""}, {"location": "available_software/detail/gcloud/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gcloud installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcloud, load one of these modules using a module load command like:

                  module load gcloud/382.0.0\n
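
                  After loading, the Google Cloud CLI is on your PATH; typical first steps (which require your own Google Cloud account) would be:

                  gcloud --version\ngcloud auth login\n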

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcloud/382.0.0 - x x - x x"}, {"location": "available_software/detail/gcsfs/", "title": "gcsfs", "text": ""}, {"location": "available_software/detail/gcsfs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gcsfs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gcsfs, load one of these modules using a module load command like:

                  module load gcsfs/2023.12.2.post1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gcsfs/2023.12.2.post1-foss-2023a x x x x x x"}, {"location": "available_software/detail/gdbm/", "title": "gdbm", "text": ""}, {"location": "available_software/detail/gdbm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gdbm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gdbm, load one of these modules using a module load command like:

                  module load gdbm/1.18.1-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gdbm/1.18.1-foss-2020a - x x - x x"}, {"location": "available_software/detail/gdc-client/", "title": "gdc-client", "text": ""}, {"location": "available_software/detail/gdc-client/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gdc-client installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gdc-client, load one of these modules using a module load command like:

                  module load gdc-client/1.6.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gdc-client/1.6.0-GCCcore-10.2.0 x x x x - x"}, {"location": "available_software/detail/gengetopt/", "title": "gengetopt", "text": ""}, {"location": "available_software/detail/gengetopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gengetopt installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gengetopt, load one of these modules using a module load command like:

                  module load gengetopt/2.23-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gengetopt/2.23-GCCcore-10.2.0 - x x x x x gengetopt/2.23-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/genomepy/", "title": "genomepy", "text": ""}, {"location": "available_software/detail/genomepy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which genomepy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using genomepy, load one of these modules using a module load command like:

                  module load genomepy/0.15.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty genomepy/0.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/genozip/", "title": "genozip", "text": ""}, {"location": "available_software/detail/genozip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which genozip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using genozip, load one of these modules using a module load command like:

                  module load genozip/13.0.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty genozip/13.0.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/gensim/", "title": "gensim", "text": ""}, {"location": "available_software/detail/gensim/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gensim installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gensim, load one of these modules using a module load command like:

                  module load gensim/4.2.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gensim/4.2.0-foss-2021a x x x - x x gensim/3.8.3-intel-2020b - x x - x x gensim/3.8.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/geopandas/", "title": "geopandas", "text": ""}, {"location": "available_software/detail/geopandas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which geopandas installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using geopandas, load one of these modules using a module load command like:

                  module load geopandas/0.12.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty geopandas/0.12.2-foss-2022b x x x x x x geopandas/0.8.1-intel-2019b-Python-3.7.4 - - x - x x geopandas/0.8.1-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/gettext/", "title": "gettext", "text": ""}, {"location": "available_software/detail/gettext/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gettext installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gettext, load one of these modules using a module load command like:

                  module load gettext/0.22-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gettext/0.22-GCCcore-13.2.0 x x x x x x gettext/0.22 x x x x x x gettext/0.21.1-GCCcore-12.3.0 x x x x x x gettext/0.21.1-GCCcore-12.2.0 x x x x x x gettext/0.21.1 x x x x x x gettext/0.21-GCCcore-11.3.0 x x x x x x gettext/0.21-GCCcore-11.2.0 x x x x x x gettext/0.21-GCCcore-10.3.0 x x x x x x gettext/0.21-GCCcore-10.2.0 x x x x x x gettext/0.21 x x x x x x gettext/0.20.1-GCCcore-9.3.0 x x x x x x gettext/0.20.1-GCCcore-8.3.0 x x x - x x gettext/0.20.1 x x x x x x gettext/0.19.8.1-GCCcore-8.2.0 - x - - - - gettext/0.19.8.1 x x x x x x"}, {"location": "available_software/detail/gexiv2/", "title": "gexiv2", "text": ""}, {"location": "available_software/detail/gexiv2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gexiv2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gexiv2, load one of these modules using a module load command like:

                  module load gexiv2/0.12.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gexiv2/0.12.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/gfbf/", "title": "gfbf", "text": ""}, {"location": "available_software/detail/gfbf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gfbf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gfbf, load one of these modules using a module load command like:

                  module load gfbf/2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gfbf/2023b x x x x x x gfbf/2023a x x x x x x gfbf/2022b x x x x x x"}, {"location": "available_software/detail/gffread/", "title": "gffread", "text": ""}, {"location": "available_software/detail/gffread/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gffread installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gffread, load one of these modules using a module load command like:

                  module load gffread/0.12.7-GCCcore-11.2.0\n
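
                  As a hedged sketch (genome.fa and annotation.gff are placeholder file names), extracting transcript sequences from an annotation could look like:

                  # placeholder file names\ngffread -w transcripts.fa -g genome.fa annotation.gff\n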

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gffread/0.12.7-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/gffutils/", "title": "gffutils", "text": ""}, {"location": "available_software/detail/gffutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gffutils installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gffutils, load one of these modules using a module load command like:

                  module load gffutils/0.12-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gffutils/0.12-foss-2022b x x x x x x"}, {"location": "available_software/detail/gflags/", "title": "gflags", "text": ""}, {"location": "available_software/detail/gflags/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gflags installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gflags, load one of these modules using a module load command like:

                  module load gflags/2.2.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gflags/2.2.2-GCCcore-12.2.0 x x x x x x gflags/2.2.2-GCCcore-11.3.0 x x x x x x gflags/2.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/giflib/", "title": "giflib", "text": ""}, {"location": "available_software/detail/giflib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which giflib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using giflib, load one of these modules using a module load command like:

                  module load giflib/5.2.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty giflib/5.2.1-GCCcore-12.3.0 x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x giflib/5.2.1-GCCcore-11.3.0 x x x x x x giflib/5.2.1-GCCcore-11.2.0 x x x x x x giflib/5.2.1-GCCcore-10.3.0 x x x x x x giflib/5.2.1-GCCcore-10.2.0 x x x x x x giflib/5.2.1-GCCcore-9.3.0 - x x - x x giflib/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/git-lfs/", "title": "git-lfs", "text": ""}, {"location": "available_software/detail/git-lfs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which git-lfs installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using git-lfs, load one of these modules using a module load command like:

                  module load git-lfs/3.2.0\n
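
                  Git LFS is used alongside a regular git module; a minimal sketch of enabling it inside an existing clone (the tracked file pattern is just an example) would be:

                  git lfs install\ngit lfs track "*.h5"\n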

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty git-lfs/3.2.0 x x x - x x"}, {"location": "available_software/detail/git/", "title": "git", "text": ""}, {"location": "available_software/detail/git/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which git installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using git, load one of these modules using a module load command like:

                  module load git/2.42.0-GCCcore-13.2.0\n
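
                  Once loaded, this git takes precedence over the system version for your session; for example (the repository URL is a placeholder):

                  git --version\ngit clone https://github.com/<user>/<repo>.git\n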

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty git/2.42.0-GCCcore-13.2.0 x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x git/2.36.0-GCCcore-11.3.0-nodocs x x x x x x git/2.33.1-GCCcore-11.2.0-nodocs x x x x x x git/2.32.0-GCCcore-10.3.0-nodocs x x x x x x git/2.28.0-GCCcore-10.2.0-nodocs x x x x x x git/2.23.0-GCCcore-9.3.0-nodocs x x x x x x git/2.23.0-GCCcore-8.3.0-nodocs - x x - x x git/2.23.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glew/", "title": "glew", "text": ""}, {"location": "available_software/detail/glew/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which glew installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glew, load one of these modules using a module load command like:

                  module load glew/2.2.0-GCCcore-12.3.0-osmesa\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glew/2.2.0-GCCcore-12.3.0-osmesa x x x x x x glew/2.2.0-GCCcore-12.2.0-egl x x x x x x glew/2.2.0-GCCcore-11.2.0-osmesa x x x x x x glew/2.2.0-GCCcore-11.2.0-egl x x x x x x glew/2.1.0-GCCcore-10.2.0 x x x x x x glew/2.1.0-GCCcore-9.3.0 - x x - x x glew/2.1.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/glib-networking/", "title": "glib-networking", "text": ""}, {"location": "available_software/detail/glib-networking/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which glib-networking installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glib-networking, load one of these modules using a module load command like:

                  module load glib-networking/2.72.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glib-networking/2.72.1-GCCcore-11.2.0 x x x x x x glib-networking/2.68.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/glibc/", "title": "glibc", "text": ""}, {"location": "available_software/detail/glibc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which glibc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glibc, load one of these modules using a module load command like:

                  module load glibc/2.30-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glibc/2.30-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/glog/", "title": "glog", "text": ""}, {"location": "available_software/detail/glog/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which glog installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using glog, load one of these modules using a module load command like:

                  module load glog/0.6.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty glog/0.6.0-GCCcore-12.2.0 x x x x x x glog/0.6.0-GCCcore-11.3.0 x x x x x x glog/0.4.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmpy2/", "title": "gmpy2", "text": ""}, {"location": "available_software/detail/gmpy2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gmpy2, load one of these modules using a module load command like:

                  module load gmpy2/2.1.5-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gmpy2/2.1.5-GCC-12.3.0 x x x x x x gmpy2/2.1.5-GCC-12.2.0 x x x x x x gmpy2/2.1.2-intel-compilers-2022.1.0 x x x x x x gmpy2/2.1.2-intel-compilers-2021.4.0 x x x x x x gmpy2/2.1.2-GCC-11.3.0 x x x x x x gmpy2/2.1.2-GCC-11.2.0 x x x - x x gmpy2/2.1.0b5-GCC-10.2.0 - x x x x x gmpy2/2.1.0b5-GCC-9.3.0 - x x - x x gmpy2/2.1.0b4-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/gmsh/", "title": "gmsh", "text": ""}, {"location": "available_software/detail/gmsh/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gmsh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gmsh, load one of these modules using a module load command like:

                  module load gmsh/4.5.6-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gmsh/4.5.6-intel-2019b-Python-2.7.16 - x x - x x gmsh/4.5.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/gnuplot/", "title": "gnuplot", "text": ""}, {"location": "available_software/detail/gnuplot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gnuplot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gnuplot, load one of these modules using a module load command like:

                  module load gnuplot/5.4.8-GCCcore-12.3.0\n
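
                  For a quick non-interactive test (the output file name is arbitrary), you could render a simple plot to a PNG file:

                  gnuplot -e "set terminal png; set output 'sin.png'; plot sin(x)"\n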

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x gnuplot/5.4.4-GCCcore-11.3.0 x x x x x x gnuplot/5.4.2-GCCcore-11.2.0 x x x x x x gnuplot/5.4.2-GCCcore-10.3.0 x x x x x x gnuplot/5.4.1-GCCcore-10.2.0 x x x x x x gnuplot/5.2.8-GCCcore-9.3.0 - x x - x x gnuplot/5.2.8-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/goalign/", "title": "goalign", "text": ""}, {"location": "available_software/detail/goalign/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which goalign installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using goalign, load one of these modules using a module load command like:

                  module load goalign/0.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty goalign/0.3.2 - - x - x -"}, {"location": "available_software/detail/gobff/", "title": "gobff", "text": ""}, {"location": "available_software/detail/gobff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gobff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gobff, load one of these modules using a module load command like:

                  module load gobff/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gobff/2020b - x - - - -"}, {"location": "available_software/detail/gomkl/", "title": "gomkl", "text": ""}, {"location": "available_software/detail/gomkl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gomkl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gomkl, load one of these modules using a module load command like:

                  module load gomkl/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gomkl/2023a x x x x x x gomkl/2021a x x x x x x gomkl/2020a - x x x x x"}, {"location": "available_software/detail/gompi/", "title": "gompi", "text": ""}, {"location": "available_software/detail/gompi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gompi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gompi, load one of these modules using a module load command like:

                  module load gompi/2023b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gompi/2023b x x x x x x gompi/2023a x x x x x x gompi/2022b x x x x x x gompi/2022a x x x x x x gompi/2021b x x x x x x gompi/2021a x x x x x x gompi/2020b x x x x x x gompi/2020a - x x x x x gompi/2019b x x x x x x"}, {"location": "available_software/detail/gompic/", "title": "gompic", "text": ""}, {"location": "available_software/detail/gompic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gompic installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).

                  To start using gompic, load one of these modules using a module load command like:

                  module load gompic/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gompic/2020b x x - - x x"}, {"location": "available_software/detail/googletest/", "title": "googletest", "text": ""}, {"location": "available_software/detail/googletest/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which googletest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using googletest, load one of these modules using a module load command like:

                  module load googletest/1.13.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty googletest/1.13.0-GCCcore-12.3.0 x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x googletest/1.11.0-GCCcore-11.3.0 x x x x x x googletest/1.11.0-GCCcore-11.2.0 x x x - x x googletest/1.10.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gotree/", "title": "gotree", "text": ""}, {"location": "available_software/detail/gotree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gotree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using gotree, load one of these modules using a module load command like:

                  module load gotree/0.4.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gotree/0.4.0 - - x - x -"}, {"location": "available_software/detail/gperf/", "title": "gperf", "text": ""}, {"location": "available_software/detail/gperf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gperf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using gperf, load one of these modules using a module load command like:

                  module load gperf/3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gperf/3.1-GCCcore-12.3.0 x x x x x x gperf/3.1-GCCcore-12.2.0 x x x x x x gperf/3.1-GCCcore-11.3.0 x x x x x x gperf/3.1-GCCcore-11.2.0 x x x x x x gperf/3.1-GCCcore-10.3.0 x x x x x x gperf/3.1-GCCcore-10.2.0 x x x x x x gperf/3.1-GCCcore-9.3.0 x x x x x x gperf/3.1-GCCcore-8.3.0 x x x - x x gperf/3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/gperftools/", "title": "gperftools", "text": ""}, {"location": "available_software/detail/gperftools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gperftools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using gperftools, load one of these modules using a module load command like:

                  module load gperftools/2.14-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gperftools/2.14-GCCcore-12.2.0 x x x x x x gperftools/2.10-GCCcore-11.3.0 x x x x x x gperftools/2.9.1-GCCcore-10.3.0 x x x - x x gperftools/2.7.90-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/gpustat/", "title": "gpustat", "text": ""}, {"location": "available_software/detail/gpustat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gpustat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using gpustat, load one of these modules using a module load command like:

                  module load gpustat/0.6.0-gcccuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gpustat/0.6.0-gcccuda-2020b - - - - x -"}, {"location": "available_software/detail/graphite2/", "title": "graphite2", "text": ""}, {"location": "available_software/detail/graphite2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which graphite2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using graphite2, load one of these modules using a module load command like:

                  module load graphite2/1.3.14-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty graphite2/1.3.14-GCCcore-12.3.0 x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x graphite2/1.3.14-GCCcore-11.3.0 x x x x x x graphite2/1.3.14-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/graphviz-python/", "title": "graphviz-python", "text": ""}, {"location": "available_software/detail/graphviz-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which graphviz-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using graphviz-python, load one of these modules using a module load command like:

                  module load graphviz-python/0.20.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty graphviz-python/0.20.1-GCCcore-12.3.0 x x x x x x graphviz-python/0.20.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/grid/", "title": "grid", "text": ""}, {"location": "available_software/detail/grid/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which grid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using grid, load one of these modules using a module load command like:

                  module load grid/20220610-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty grid/20220610-intel-2022a x x x x x x"}, {"location": "available_software/detail/groff/", "title": "groff", "text": ""}, {"location": "available_software/detail/groff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which groff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using groff, load one of these modules using a module load command like:

                  module load groff/1.22.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty groff/1.22.4-GCCcore-12.3.0 x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x groff/1.22.4-GCCcore-11.3.0 x x x x x x groff/1.22.4-GCCcore-11.2.0 x x x x x x groff/1.22.4-GCCcore-10.3.0 x x x x x x groff/1.22.4-GCCcore-10.2.0 x x x x x x groff/1.22.4-GCCcore-9.3.0 x x x x x x groff/1.22.4-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/gzip/", "title": "gzip", "text": ""}, {"location": "available_software/detail/gzip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which gzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using gzip, load one of these modules using a module load command like:

                  module load gzip/1.13-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty gzip/1.13-GCCcore-13.2.0 x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x gzip/1.12-GCCcore-11.3.0 x x x x x x gzip/1.10-GCCcore-11.2.0 x x x x x x gzip/1.10-GCCcore-10.3.0 x x x x x x gzip/1.10-GCCcore-10.2.0 x x x x x x gzip/1.10-GCCcore-9.3.0 - x x x x x gzip/1.10-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/h5netcdf/", "title": "h5netcdf", "text": ""}, {"location": "available_software/detail/h5netcdf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which h5netcdf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using h5netcdf, load one of these modules using a module load command like:

                  module load h5netcdf/1.2.0-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty h5netcdf/1.2.0-foss-2023a x x x x x x"}, {"location": "available_software/detail/h5py/", "title": "h5py", "text": ""}, {"location": "available_software/detail/h5py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which h5py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using h5py, load one of these modules using a module load command like:

                  module load h5py/3.9.0-foss-2023a\n
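
                  Once one of the h5py modules below is loaded, a minimal check is to print the package version from the bundled Python interpreter (a sketch only; it assumes nothing beyond the loaded module putting python and h5py on your path):

                  python -c 'import h5py; print(h5py.__version__)'\n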

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty h5py/3.9.0-foss-2023a x x x x x x h5py/3.8.0-foss-2022b x x x x x x h5py/3.7.0-intel-2022a x x x x x x h5py/3.7.0-foss-2022a x x x x x x h5py/3.6.0-intel-2021b x x x - x x h5py/3.6.0-foss-2021b x x x x x x h5py/3.2.1-gomkl-2021a x x x - x x h5py/3.2.1-foss-2021a x x x x x x h5py/3.1.0-intel-2020b - x x - x x h5py/3.1.0-fosscuda-2020b x - - - x - h5py/3.1.0-foss-2020b x x x x x x h5py/2.10.0-intel-2020a-Python-3.8.2 x x x x x x h5py/2.10.0-intel-2020a-Python-2.7.18 - x x - x x h5py/2.10.0-intel-2019b-Python-3.7.4 - x x - x x h5py/2.10.0-foss-2020a-Python-3.8.2 - x x - x x h5py/2.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/harmony/", "title": "harmony", "text": ""}, {"location": "available_software/detail/harmony/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which harmony installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using harmony, load one of these modules using a module load command like:

                  module load harmony/1.0.0-20200224-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty harmony/1.0.0-20200224-foss-2020a-R-4.0.0 - x x - x x harmony/0.1.0-20210528-foss-2020b-R-4.0.3 - x x - x x"}, {"location": "available_software/detail/hatchling/", "title": "hatchling", "text": ""}, {"location": "available_software/detail/hatchling/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hatchling installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hatchling, load one of these modules using a module load command like:

                  module load hatchling/1.18.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hatchling/1.18.0-GCCcore-13.2.0 x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/help2man/", "title": "help2man", "text": ""}, {"location": "available_software/detail/help2man/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which help2man installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using help2man, load one of these modules using a module load command like:

                  module load help2man/1.49.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty help2man/1.49.3-GCCcore-13.2.0 x x x x x x help2man/1.49.3-GCCcore-12.3.0 x x x x x x help2man/1.49.2-GCCcore-12.2.0 x x x x x x help2man/1.49.2-GCCcore-11.3.0 x x x x x x help2man/1.48.3-GCCcore-11.2.0 x x x x x x help2man/1.48.3-GCCcore-10.3.0 x x x x x x help2man/1.47.16-GCCcore-10.2.0 x x x x x x help2man/1.47.12-GCCcore-9.3.0 x x x x x x help2man/1.47.8-GCCcore-8.3.0 x x x x x x help2man/1.47.7-GCCcore-8.2.0 - x - - - - help2man/1.47.4 - x - - - -"}, {"location": "available_software/detail/hierfstat/", "title": "hierfstat", "text": ""}, {"location": "available_software/detail/hierfstat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hierfstat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hierfstat, load one of these modules using a module load command like:

                  module load hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hierfstat/0.5-7-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/hifiasm/", "title": "hifiasm", "text": ""}, {"location": "available_software/detail/hifiasm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hifiasm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hifiasm, load one of these modules using a module load command like:

                  module load hifiasm/0.19.7-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hifiasm/0.19.7-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/hiredis/", "title": "hiredis", "text": ""}, {"location": "available_software/detail/hiredis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hiredis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hiredis, load one of these modules using a module load command like:

                  module load hiredis/1.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hiredis/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/histolab/", "title": "histolab", "text": ""}, {"location": "available_software/detail/histolab/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which histolab installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using histolab, load one of these modules using a module load command like:

                  module load histolab/0.4.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty histolab/0.4.1-foss-2021b x x x - x x histolab/0.4.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/hmmlearn/", "title": "hmmlearn", "text": ""}, {"location": "available_software/detail/hmmlearn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hmmlearn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hmmlearn, load one of these modules using a module load command like:

                  module load hmmlearn/0.3.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hmmlearn/0.3.0-gfbf-2023a x x x x x x hmmlearn/0.3.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/horton/", "title": "horton", "text": ""}, {"location": "available_software/detail/horton/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which horton installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using horton, load one of these modules using a module load command like:

                  module load horton/2.1.1-intel-2020a-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty horton/2.1.1-intel-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/how_are_we_stranded_here/", "title": "how_are_we_stranded_here", "text": ""}, {"location": "available_software/detail/how_are_we_stranded_here/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which how_are_we_stranded_here installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using how_are_we_stranded_here, load one of these modules using a module load command like:

                  module load how_are_we_stranded_here/1.0.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty how_are_we_stranded_here/1.0.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/humann/", "title": "humann", "text": ""}, {"location": "available_software/detail/humann/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which humann installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using humann, load one of these modules using a module load command like:

                  module load humann/3.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty humann/3.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/hunspell/", "title": "hunspell", "text": ""}, {"location": "available_software/detail/hunspell/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hunspell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hunspell, load one of these modules using a module load command like:

                  module load hunspell/1.7.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hunspell/1.7.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/hwloc/", "title": "hwloc", "text": ""}, {"location": "available_software/detail/hwloc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hwloc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hwloc, load one of these modules using a module load command like:

                  module load hwloc/2.9.2-GCCcore-13.2.0\n
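
                  A small, hedged example of using the module once it is loaded (this assumes the installation puts the lstopo utility on your PATH, which hwloc normally provides) is to print the tool's version:

                  lstopo --version\n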

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hwloc/2.9.2-GCCcore-13.2.0 x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x hwloc/2.7.1-GCCcore-11.3.0 x x x x x x hwloc/2.5.0-GCCcore-11.2.0 x x x x x x hwloc/2.4.1-GCCcore-10.3.0 x x x x x x hwloc/2.2.0-GCCcore-10.2.0 x x x x x x hwloc/2.2.0-GCCcore-9.3.0 x x x x x x hwloc/1.11.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/hyperopt/", "title": "hyperopt", "text": ""}, {"location": "available_software/detail/hyperopt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hyperopt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hyperopt, load one of these modules using a module load command like:

                  module load hyperopt/0.2.5-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hyperopt/0.2.5-fosscuda-2020b - - - - x - hyperopt/0.2.4-intel-2019b-Python-3.7.4-Java-1.8 - x x - x -"}, {"location": "available_software/detail/hypothesis/", "title": "hypothesis", "text": ""}, {"location": "available_software/detail/hypothesis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which hypothesis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using hypothesis, load one of these modules using a module load command like:

                  module load hypothesis/6.90.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x hypothesis/6.46.7-GCCcore-11.3.0 x x x x x x hypothesis/6.14.6-GCCcore-11.2.0 x x x x x x hypothesis/6.13.1-GCCcore-10.3.0 x x x x x x hypothesis/5.41.5-GCCcore-10.2.0 x x x x x x hypothesis/5.41.2-GCCcore-10.2.0 x x x x x x hypothesis/4.57.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x hypothesis/4.44.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/iccifort/", "title": "iccifort", "text": ""}, {"location": "available_software/detail/iccifort/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iccifort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iccifort, load one of these modules using a module load command like:

                  module load iccifort/2020.4.304\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iccifort/2020.4.304 x x x x x x iccifort/2020.1.217 x x x x x x iccifort/2019.5.281 - x x - x x"}, {"location": "available_software/detail/iccifortcuda/", "title": "iccifortcuda", "text": ""}, {"location": "available_software/detail/iccifortcuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iccifortcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iccifortcuda, load one of these modules using a module load command like:

                  module load iccifortcuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iccifortcuda/2020b - - - - x - iccifortcuda/2020a - - - - x - iccifortcuda/2019b - - - - x -"}, {"location": "available_software/detail/ichorCNA/", "title": "ichorCNA", "text": ""}, {"location": "available_software/detail/ichorCNA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ichorCNA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using ichorCNA, load one of these modules using a module load command like:

                  module load ichorCNA/0.3.2-20191219-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ichorCNA/0.3.2-20191219-foss-2020a - x x - x x"}, {"location": "available_software/detail/idemux/", "title": "idemux", "text": ""}, {"location": "available_software/detail/idemux/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which idemux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using idemux, load one of these modules using a module load command like:

                  module load idemux/0.1.6-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty idemux/0.1.6-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/igraph/", "title": "igraph", "text": ""}, {"location": "available_software/detail/igraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using igraph, load one of these modules using a module load command like:

                  module load igraph/0.10.10-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty igraph/0.10.10-foss-2023a x x x x x x igraph/0.10.3-foss-2022a x x x x x x igraph/0.9.5-foss-2021b x x x x x x igraph/0.9.4-foss-2021a x x x x x x igraph/0.9.1-fosscuda-2020b - - - - x - igraph/0.9.1-foss-2020b - x x x x x igraph/0.8.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/igvShiny/", "title": "igvShiny", "text": ""}, {"location": "available_software/detail/igvShiny/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which igvShiny installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using igvShiny, load one of these modules using a module load command like:

                  module load igvShiny/20240112-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty igvShiny/20240112-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/iibff/", "title": "iibff", "text": ""}, {"location": "available_software/detail/iibff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iibff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iibff, load one of these modules using a module load command like:

                  module load iibff/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iibff/2020b - x - - - -"}, {"location": "available_software/detail/iimpi/", "title": "iimpi", "text": ""}, {"location": "available_software/detail/iimpi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iimpi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iimpi, load one of these modules using a module load command like:

                  module load iimpi/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iimpi/2023a x x x x x x iimpi/2022b x x x x x x iimpi/2022a x x x x x x iimpi/2021b x x x x x x iimpi/2021a - x x - x x iimpi/2020b x x x x x x iimpi/2020a x x x x x x iimpi/2019b - x x - x x"}, {"location": "available_software/detail/iimpic/", "title": "iimpic", "text": ""}, {"location": "available_software/detail/iimpic/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iimpic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iimpic, load one of these modules using a module load command like:

                  module load iimpic/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iimpic/2020b - - - - x - iimpic/2020a - - - - x - iimpic/2019b - - - - x -"}, {"location": "available_software/detail/imagecodecs/", "title": "imagecodecs", "text": ""}, {"location": "available_software/detail/imagecodecs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imagecodecs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using imagecodecs, load one of these modules using a module load command like:

                  module load imagecodecs/2022.9.26-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imagecodecs/2022.9.26-foss-2022a x x x x x x"}, {"location": "available_software/detail/imageio/", "title": "imageio", "text": ""}, {"location": "available_software/detail/imageio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imageio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using imageio, load one of these modules using a module load command like:

                  module load imageio/2.22.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imageio/2.22.2-foss-2022a x x x x x x imageio/2.13.5-foss-2021b x x x x x x imageio/2.10.5-foss-2021a x x x - x x imageio/2.9.0-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/imbalanced-learn/", "title": "imbalanced-learn", "text": ""}, {"location": "available_software/detail/imbalanced-learn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imbalanced-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using imbalanced-learn, load one of these modules using a module load command like:

                  module load imbalanced-learn/0.10.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imbalanced-learn/0.10.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/imgaug/", "title": "imgaug", "text": ""}, {"location": "available_software/detail/imgaug/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imgaug installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using imgaug, load one of these modules using a module load command like:

                  module load imgaug/0.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imgaug/0.4.0-foss-2021b x x x - x x imgaug/0.4.0-foss-2021a-CUDA-11.3.1 x - - - x -"}, {"location": "available_software/detail/imkl-FFTW/", "title": "imkl-FFTW", "text": ""}, {"location": "available_software/detail/imkl-FFTW/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imkl-FFTW installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using imkl-FFTW, load one of these modules using a module load command like:

                  module load imkl-FFTW/2023.1.0-iimpi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imkl-FFTW/2023.1.0-iimpi-2023a x x x x x x imkl-FFTW/2022.2.1-iimpi-2022b x x x x x x imkl-FFTW/2022.1.0-iimpi-2022a x x x x x x imkl-FFTW/2021.4.0-iimpi-2021b x x x x x x"}, {"location": "available_software/detail/imkl/", "title": "imkl", "text": ""}, {"location": "available_software/detail/imkl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using imkl, load one of these modules using a module load command like:

                  module load imkl/2023.1.0-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imkl/2023.1.0-gompi-2023a - - x - x x imkl/2023.1.0 x x x x x x imkl/2022.2.1 x x x x x x imkl/2022.1.0 x x x x x x imkl/2021.4.0 x x x x x x imkl/2021.2.0-iompi-2021a x x x x x x imkl/2021.2.0-iimpi-2021a - x x - x x imkl/2021.2.0-gompi-2021a x - x - x x imkl/2020.4.304-iompi-2020b x - x x x x imkl/2020.4.304-iimpic-2020b - - - - x - imkl/2020.4.304-iimpi-2020b - - x x x x imkl/2020.4.304-NVHPC-21.2 - - x - x - imkl/2020.1.217-iimpic-2020a - - - - x - imkl/2020.1.217-iimpi-2020a x - x - x x imkl/2020.1.217-gompi-2020a - - x - x x imkl/2020.0.166-iompi-2020a - x - - - - imkl/2020.0.166-iimpi-2020b x x - x - - imkl/2020.0.166-iimpi-2020a - x - - - - imkl/2020.0.166-gompi-2023a x x - x - - imkl/2020.0.166-gompi-2020a - x - - - - imkl/2019.5.281-iimpic-2019b - - - - x - imkl/2019.5.281-iimpi-2019b - x x - x x imkl/2018.4.274-iompi-2020b - x - x - - imkl/2018.4.274-iompi-2020a - x - - - - imkl/2018.4.274-iimpi-2020b - x - x - - imkl/2018.4.274-iimpi-2020a x x - x - - imkl/2018.4.274-iimpi-2019b - x - - - - imkl/2018.4.274-gompi-2021a - x - x - - imkl/2018.4.274-gompi-2020a - x - x - - imkl/2018.4.274-NVHPC-21.2 x - - - - -"}, {"location": "available_software/detail/impi/", "title": "impi", "text": ""}, {"location": "available_software/detail/impi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which impi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using impi, load one of these modules using a module load command like:

                  module load impi/2021.9.0-intel-compilers-2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty impi/2021.9.0-intel-compilers-2023.1.0 x x x x x x impi/2021.7.1-intel-compilers-2022.2.1 x x x x x x impi/2021.6.0-intel-compilers-2022.1.0 x x x x x x impi/2021.4.0-intel-compilers-2021.4.0 x x x x x x impi/2021.2.0-intel-compilers-2021.2.0 - x x - x x impi/2019.9.304-iccifortcuda-2020b - - - - x - impi/2019.9.304-iccifort-2020.4.304 x x x x x x impi/2019.9.304-iccifort-2020.1.217 x x x x x x impi/2019.9.304-iccifort-2019.5.281 - x x - x x impi/2019.7.217-iccifortcuda-2020a - - - - x - impi/2019.7.217-iccifort-2020.1.217 - x x - x x impi/2019.7.217-iccifort-2019.5.281 - x x - x -"}, {"location": "available_software/detail/imutils/", "title": "imutils", "text": ""}, {"location": "available_software/detail/imutils/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which imutils installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using imutils, load one of these modules using a module load command like:

                  module load imutils/0.5.4-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty imutils/0.5.4-fosscuda-2020b x - - - x - imutils/0.5.4-foss-2022a-CUDA-11.7.0 x - x - x -"}, {"location": "available_software/detail/inferCNV/", "title": "inferCNV", "text": ""}, {"location": "available_software/detail/inferCNV/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which inferCNV installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using inferCNV, load one of these modules using a module load command like:

                  module load inferCNV/1.12.0-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty inferCNV/1.12.0-foss-2022a-R-4.2.1 x x x x x x inferCNV/1.12.0-foss-2021b-R-4.2.0 x x x - x x inferCNV/1.3.3-foss-2020b x x x x x x"}, {"location": "available_software/detail/infercnvpy/", "title": "infercnvpy", "text": ""}, {"location": "available_software/detail/infercnvpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which infercnvpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using infercnvpy, load one of these modules using a module load command like:

                  module load infercnvpy/0.4.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty infercnvpy/0.4.2-foss-2022a x x x x x x infercnvpy/0.4.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/inflection/", "title": "inflection", "text": ""}, {"location": "available_software/detail/inflection/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which inflection installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using inflection, load one of these modules using a module load command like:

                  module load inflection/1.3.5-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty inflection/1.3.5-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/intel-compilers/", "title": "intel-compilers", "text": ""}, {"location": "available_software/detail/intel-compilers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intel-compilers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using intel-compilers, load one of these modules using a module load command like:

                  module load intel-compilers/2023.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intel-compilers/2023.1.0 x x x x x x intel-compilers/2022.2.1 x x x x x x intel-compilers/2022.1.0 x x x x x x intel-compilers/2021.4.0 x x x x x x intel-compilers/2021.2.0 x x x x x x"}, {"location": "available_software/detail/intel/", "title": "intel", "text": ""}, {"location": "available_software/detail/intel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using intel, load one of these modules using a module load command like:

                  module load intel/2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intel/2023a x x x x x x intel/2022b x x x x x x intel/2022a x x x x x x intel/2021b x x x x x x intel/2021a - x x - x x intel/2020b - x x x x x intel/2020a x x x x x x intel/2019b - x x - x x"}, {"location": "available_software/detail/intelcuda/", "title": "intelcuda", "text": ""}, {"location": "available_software/detail/intelcuda/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intelcuda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using intelcuda, load one of these modules using a module load command like:

                  module load intelcuda/2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intelcuda/2020b - - - - x - intelcuda/2020a - - - - x - intelcuda/2019b - - - - x -"}, {"location": "available_software/detail/intervaltree-python/", "title": "intervaltree-python", "text": ""}, {"location": "available_software/detail/intervaltree-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intervaltree-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using intervaltree-python, load one of these modules using a module load command like:

                  module load intervaltree-python/3.1.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intervaltree-python/3.1.0-GCCcore-11.3.0 x x x x x x intervaltree-python/3.1.0-GCCcore-11.2.0 x x x - x x intervaltree-python/3.1.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/intervaltree/", "title": "intervaltree", "text": ""}, {"location": "available_software/detail/intervaltree/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intervaltree installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using intervaltree, load one of these modules using a module load command like:

                  module load intervaltree/0.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intervaltree/0.1-GCCcore-11.3.0 x x x x x x intervaltree/0.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/intltool/", "title": "intltool", "text": ""}, {"location": "available_software/detail/intltool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which intltool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using intltool, load one of these modules using a module load command like:

                  module load intltool/0.51.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty intltool/0.51.0-GCCcore-12.3.0 x x x x x x intltool/0.51.0-GCCcore-12.2.0 x x x x x x intltool/0.51.0-GCCcore-11.3.0 x x x x x x intltool/0.51.0-GCCcore-11.2.0 x x x x x x intltool/0.51.0-GCCcore-10.3.0 x x x x x x intltool/0.51.0-GCCcore-10.2.0 x x x x x x intltool/0.51.0-GCCcore-9.3.0 x x x x x x intltool/0.51.0-GCCcore-8.3.0 x x x - x x intltool/0.51.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/iodata/", "title": "iodata", "text": ""}, {"location": "available_software/detail/iodata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iodata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iodata, load one of these modules using a module load command like:

                  module load iodata/1.0.0a2-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iodata/1.0.0a2-intel-2022a x x x x x x"}, {"location": "available_software/detail/iomkl/", "title": "iomkl", "text": ""}, {"location": "available_software/detail/iomkl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iomkl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iomkl, load one of these modules using a module load command like:

                  module load iomkl/2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iomkl/2021a x x x x x x iomkl/2020b x x x x x x iomkl/2020a - x - - - -"}, {"location": "available_software/detail/iompi/", "title": "iompi", "text": ""}, {"location": "available_software/detail/iompi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which iompi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using iompi, load one of these modules using a module load command like:

                  module load iompi/2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty iompi/2021a x x x x x x iompi/2020b x x x x x x iompi/2020a - x - - - -"}, {"location": "available_software/detail/isoCirc/", "title": "isoCirc", "text": ""}, {"location": "available_software/detail/isoCirc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which isoCirc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using isoCirc, load one of these modules using a module load command like:

                  module load isoCirc/1.0.4-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty isoCirc/1.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/jax/", "title": "jax", "text": ""}, {"location": "available_software/detail/jax/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jax installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jax, load one of these modules using a module load command like:

                  module load jax/0.3.25-foss-2022a-CUDA-11.7.0\n
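
                  After loading one of the jax modules below, a minimal sketch to check which devices jax can see is shown next; for the CUDA-enabled variants this should list GPU devices, while the plain variants will report CPU devices only:

                  python -c 'import jax; print(jax.devices())'\n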

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jax/0.3.25-foss-2022a-CUDA-11.7.0 x - - - x - jax/0.3.25-foss-2022a x x x x x x jax/0.3.23-foss-2021b-CUDA-11.4.1 x - - - x - jax/0.3.9-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.3.9-foss-2021a x x x x x x jax/0.2.24-foss-2021a-CUDA-11.3.1 x - - - x - jax/0.2.24-foss-2021a - x x - x x jax/0.2.19-fosscuda-2020b x - - - x - jax/0.2.19-foss-2020b x x x x x x"}, {"location": "available_software/detail/jbigkit/", "title": "jbigkit", "text": ""}, {"location": "available_software/detail/jbigkit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jbigkit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jbigkit, load one of these modules using a module load command like:

                  module load jbigkit/2.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jbigkit/2.1-GCCcore-13.2.0 x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x jbigkit/2.1-GCCcore-11.3.0 x x x x x x jbigkit/2.1-GCCcore-11.2.0 x x x x x x jbigkit/2.1-GCCcore-10.3.0 x x x x x x jbigkit/2.1-GCCcore-10.2.0 x - x x x x jbigkit/2.1-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/jemalloc/", "title": "jemalloc", "text": ""}, {"location": "available_software/detail/jemalloc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jemalloc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jemalloc, load one of these modules using a module load command like:

                  module load jemalloc/5.3.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jemalloc/5.3.0-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.3.0 x x x x x x jemalloc/5.2.1-GCCcore-11.2.0 x x x x x x jemalloc/5.2.1-GCCcore-10.3.0 x x x - x x jemalloc/5.2.1-GCCcore-10.2.0 - x x x x x jemalloc/5.2.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/jobcli/", "title": "jobcli", "text": ""}, {"location": "available_software/detail/jobcli/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jobcli installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jobcli, load one of these modules using a module load command like:

                  module load jobcli/0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jobcli/0.0 - x - - - -"}, {"location": "available_software/detail/joypy/", "title": "joypy", "text": ""}, {"location": "available_software/detail/joypy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which joypy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using joypy, load one of these modules using a module load command like:

                  module load joypy/0.2.4-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty joypy/0.2.4-intel-2020b - x x - x x joypy/0.2.2-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/json-c/", "title": "json-c", "text": ""}, {"location": "available_software/detail/json-c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which json-c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using json-c, load one of these modules using a module load command like:

                  module load json-c/0.16-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty json-c/0.16-GCCcore-12.3.0 x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x json-c/0.15-GCCcore-10.3.0 - x x - x x json-c/0.15-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/jupyter-contrib-nbextensions/", "title": "jupyter-contrib-nbextensions", "text": ""}, {"location": "available_software/detail/jupyter-contrib-nbextensions/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-contrib-nbextensions installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jupyter-contrib-nbextensions, load one of these modules using a module load command like:

                  module load jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-contrib-nbextensions/0.7.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server-proxy/", "title": "jupyter-server-proxy", "text": ""}, {"location": "available_software/detail/jupyter-server-proxy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-server-proxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jupyter-server-proxy, load one of these modules using a module load command like:

                  module load jupyter-server-proxy/3.2.2-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-server-proxy/3.2.2-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/jupyter-server/", "title": "jupyter-server", "text": ""}, {"location": "available_software/detail/jupyter-server/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jupyter-server, load one of these modules using a module load command like:

                  module load jupyter-server/2.7.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x jupyter-server/2.7.0-GCCcore-12.2.0 x x x x x x jupyter-server/1.21.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/jxrlib/", "title": "jxrlib", "text": ""}, {"location": "available_software/detail/jxrlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which jxrlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using jxrlib, load one of these modules using a module load command like:

                  module load jxrlib/1.1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty jxrlib/1.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/kallisto/", "title": "kallisto", "text": ""}, {"location": "available_software/detail/kallisto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kallisto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using kallisto, load one of these modules using a module load command like:

                  module load kallisto/0.48.0-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kallisto/0.48.0-gompi-2022a x x x x x x kallisto/0.46.1-intel-2020a - x - - - - kallisto/0.46.1-iimpi-2020b - x x x x x kallisto/0.46.1-iimpi-2020a - x x - x x kallisto/0.46.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/kb-python/", "title": "kb-python", "text": ""}, {"location": "available_software/detail/kb-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kb-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using kb-python, load one of these modules using a module load command like:

                  module load kb-python/0.27.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kb-python/0.27.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/kim-api/", "title": "kim-api", "text": ""}, {"location": "available_software/detail/kim-api/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kim-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using kim-api, load one of these modules using a module load command like:

                  module load kim-api/2.3.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kim-api/2.3.0-GCCcore-11.2.0 x x x - x x kim-api/2.2.1-GCCcore-10.3.0 - x x - x x kim-api/2.1.3-intel-2020a - x x - x x kim-api/2.1.3-intel-2019b - x x - x x kim-api/2.1.3-foss-2019b - x x - x x"}, {"location": "available_software/detail/kineto/", "title": "kineto", "text": ""}, {"location": "available_software/detail/kineto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kineto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using kineto, load one of these modules using a module load command like:

                  module load kineto/0.4.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kineto/0.4.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/kma/", "title": "kma", "text": ""}, {"location": "available_software/detail/kma/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using kma, load one of these modules using a module load command like:

                  module load kma/1.2.22-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kma/1.2.22-intel-2019b - x x - x x"}, {"location": "available_software/detail/kneaddata/", "title": "kneaddata", "text": ""}, {"location": "available_software/detail/kneaddata/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which kneaddata installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using kneaddata, load one of these modules using a module load command like:

                  module load kneaddata/0.12.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty kneaddata/0.12.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/krbalancing/", "title": "krbalancing", "text": ""}, {"location": "available_software/detail/krbalancing/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which krbalancing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using krbalancing, load one of these modules using a module load command like:

                  module load krbalancing/0.5.0b0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty krbalancing/0.5.0b0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/lancet/", "title": "lancet", "text": ""}, {"location": "available_software/detail/lancet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lancet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using lancet, load one of these modules using a module load command like:

                  module load lancet/1.1.0-iccifort-2019.5.281\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lancet/1.1.0-iccifort-2019.5.281 - x - - - -"}, {"location": "available_software/detail/lavaan/", "title": "lavaan", "text": ""}, {"location": "available_software/detail/lavaan/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lavaan installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using lavaan, load one of these modules using a module load command like:

                  module load lavaan/0.6-9-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lavaan/0.6-9-foss-2021a-R-4.1.0 - x x - x x"}, {"location": "available_software/detail/leafcutter/", "title": "leafcutter", "text": ""}, {"location": "available_software/detail/leafcutter/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which leafcutter installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using leafcutter, load one of these modules using a module load command like:

                  module load leafcutter/0.2.9-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty leafcutter/0.2.9-foss-2022b-R-4.2.2 x x x x x x"}, {"location": "available_software/detail/legacy-job-wrappers/", "title": "legacy-job-wrappers", "text": ""}, {"location": "available_software/detail/legacy-job-wrappers/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which legacy-job-wrappers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using legacy-job-wrappers, load one of these modules using a module load command like:

                  module load legacy-job-wrappers/0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty legacy-job-wrappers/0.0 - x x - x -"}, {"location": "available_software/detail/leidenalg/", "title": "leidenalg", "text": ""}, {"location": "available_software/detail/leidenalg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which leidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using leidenalg, load one of these modules using a module load command like:

                  module load leidenalg/0.10.2-foss-2023a\n
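 
                  Since leidenalg is installed for several toolchain generations (see the table below), it can help to first list all installed versions and pick the one whose toolchain matches your other modules. A minimal sketch using the standard Lmod module spider command:

                  module spider leidenalg                       # list every installed leidenalg version\n
                  module spider leidenalg/0.10.2-foss-2023a     # show what must be loaded before this specific version is available\n
                  module load leidenalg/0.10.2-foss-2023a\n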

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty leidenalg/0.10.2-foss-2023a x x x x x x leidenalg/0.9.1-foss-2022a x x x x x x leidenalg/0.8.8-foss-2021b x x x x x x leidenalg/0.8.7-foss-2021a x x x x x x leidenalg/0.8.3-fosscuda-2020b - - - - x - leidenalg/0.8.3-foss-2020b - x x x x x leidenalg/0.8.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/lftp/", "title": "lftp", "text": ""}, {"location": "available_software/detail/lftp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lftp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using lftp, load one of these modules using a module load command like:

                  module load lftp/4.9.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lftp/4.9.2-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/libBigWig/", "title": "libBigWig", "text": ""}, {"location": "available_software/detail/libBigWig/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libBigWig, load one of these modules using a module load command like:

                  module load libBigWig/0.4.4-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libBigWig/0.4.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libFLAME/", "title": "libFLAME", "text": ""}, {"location": "available_software/detail/libFLAME/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libFLAME installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libFLAME, load one of these modules using a module load command like:

                  module load libFLAME/5.2.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libFLAME/5.2.0-GCCcore-10.2.0 - x - - - -"}, {"location": "available_software/detail/libGLU/", "title": "libGLU", "text": ""}, {"location": "available_software/detail/libGLU/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libGLU installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libGLU, load one of these modules using a module load command like:

                  module load libGLU/9.0.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libGLU/9.0.3-GCCcore-12.3.0 x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x libGLU/9.0.2-GCCcore-11.3.0 x x x x x x libGLU/9.0.2-GCCcore-11.2.0 x x x x x x libGLU/9.0.1-GCCcore-10.3.0 x x x x x x libGLU/9.0.1-GCCcore-10.2.0 x x x x x x libGLU/9.0.1-GCCcore-9.3.0 - x x - x x libGLU/9.0.1-GCCcore-8.3.0 x x x - x x libGLU/9.0.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libRmath/", "title": "libRmath", "text": ""}, {"location": "available_software/detail/libRmath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libRmath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libRmath, load one of these modules using a module load command like:

                  module load libRmath/4.1.0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libRmath/4.1.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libaec/", "title": "libaec", "text": ""}, {"location": "available_software/detail/libaec/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libaec installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libaec, load one of these modules using a module load command like:

                  module load libaec/1.0.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libaec/1.0.6-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libaio/", "title": "libaio", "text": ""}, {"location": "available_software/detail/libaio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libaio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libaio, load one of these modules using a module load command like:

                  module load libaio/0.3.113-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libaio/0.3.113-GCCcore-12.3.0 x x x x x x libaio/0.3.112-GCCcore-11.3.0 x x x x x x libaio/0.3.112-GCCcore-11.2.0 x x x x x x libaio/0.3.112-GCCcore-10.3.0 x x x - x x libaio/0.3.112-GCCcore-10.2.0 - x x x x x libaio/0.3.111-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libarchive/", "title": "libarchive", "text": ""}, {"location": "available_software/detail/libarchive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libarchive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libarchive, load one of these modules using a module load command like:

                  module load libarchive/3.7.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libarchive/3.7.2-GCCcore-13.2.0 x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x libarchive/3.6.1-GCCcore-11.3.0 x x x x x x libarchive/3.5.1-GCCcore-11.2.0 x x x x x x libarchive/3.5.1-GCCcore-10.3.0 x x x x x x libarchive/3.5.1-GCCcore-8.3.0 x - - - x - libarchive/3.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libavif/", "title": "libavif", "text": ""}, {"location": "available_software/detail/libavif/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libavif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libavif, load one of these modules using a module load command like:

                  module load libavif/0.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libavif/0.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libcdms/", "title": "libcdms", "text": ""}, {"location": "available_software/detail/libcdms/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libcdms installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libcdms, load one of these modules using a module load command like:

                  module load libcdms/3.1.2-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcdms/3.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/libcerf/", "title": "libcerf", "text": ""}, {"location": "available_software/detail/libcerf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libcerf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libcerf, load one of these modules using a module load command like:

                  module load libcerf/2.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcerf/2.3-GCCcore-12.3.0 x x x x x x libcerf/2.1-GCCcore-11.3.0 x x x x x x libcerf/1.17-GCCcore-11.2.0 x x x x x x libcerf/1.17-GCCcore-10.3.0 x x x x x x libcerf/1.14-GCCcore-10.2.0 x x x x x x libcerf/1.13-GCCcore-9.3.0 - x x - x x libcerf/1.13-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libcint/", "title": "libcint", "text": ""}, {"location": "available_software/detail/libcint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libcint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libcint, load one of these modules using a module load command like:

                  module load libcint/5.5.0-gfbf-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libcint/5.5.0-gfbf-2022b x x x x x x libcint/5.1.6-foss-2022a - x x x x x libcint/4.4.0-gomkl-2021a x x x - x x libcint/4.4.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/libdap/", "title": "libdap", "text": ""}, {"location": "available_software/detail/libdap/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libdap, load one of these modules using a module load command like:

                  module load libdap/3.20.7-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdap/3.20.7-GCCcore-10.3.0 - x x - x x libdap/3.20.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libde265/", "title": "libde265", "text": ""}, {"location": "available_software/detail/libde265/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libde265 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libde265, load one of these modules using a module load command like:

                  module load libde265/1.0.11-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libde265/1.0.11-GCC-11.3.0 x x x x x x libde265/1.0.8-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libdeflate/", "title": "libdeflate", "text": ""}, {"location": "available_software/detail/libdeflate/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdeflate installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libdeflate, load one of these modules using a module load command like:

                  module load libdeflate/1.19-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdeflate/1.19-GCCcore-13.2.0 x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x libdeflate/1.10-GCCcore-11.3.0 x x x x x x libdeflate/1.8-GCCcore-11.2.0 x x x x x x libdeflate/1.7-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/libdrm/", "title": "libdrm", "text": ""}, {"location": "available_software/detail/libdrm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdrm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libdrm, load one of these modules using a module load command like:

                  module load libdrm/2.4.115-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdrm/2.4.115-GCCcore-12.3.0 x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x libdrm/2.4.110-GCCcore-11.3.0 x x x x x x libdrm/2.4.107-GCCcore-11.2.0 x x x x x x libdrm/2.4.106-GCCcore-10.3.0 x x x x x x libdrm/2.4.102-GCCcore-10.2.0 x x x x x x libdrm/2.4.100-GCCcore-9.3.0 - x x - x x libdrm/2.4.99-GCCcore-8.3.0 x x x - x x libdrm/2.4.97-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libdrs/", "title": "libdrs", "text": ""}, {"location": "available_software/detail/libdrs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libdrs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libdrs, load one of these modules using a module load command like:

                  module load libdrs/3.1.2-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libdrs/3.1.2-foss-2020a - x x - x x"}, {"location": "available_software/detail/libepoxy/", "title": "libepoxy", "text": ""}, {"location": "available_software/detail/libepoxy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libepoxy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libepoxy, load one of these modules using a module load command like:

                  module load libepoxy/1.5.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x libepoxy/1.5.10-GCCcore-11.3.0 x x x x x x libepoxy/1.5.8-GCCcore-11.2.0 x x x x x x libepoxy/1.5.8-GCCcore-10.3.0 x x x - x x libepoxy/1.5.4-GCCcore-10.2.0 x x x x x x libepoxy/1.5.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libev/", "title": "libev", "text": ""}, {"location": "available_software/detail/libev/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libev installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libev, load one of these modules using a module load command like:

                  module load libev/4.33-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libev/4.33-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libevent/", "title": "libevent", "text": ""}, {"location": "available_software/detail/libevent/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libevent installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libevent, load one of these modules using a module load command like:

                  module load libevent/2.1.12-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libevent/2.1.12-GCCcore-13.2.0 x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x libevent/2.1.12-GCCcore-11.3.0 x x x x x x libevent/2.1.12-GCCcore-11.2.0 x x x x x x libevent/2.1.12-GCCcore-10.3.0 x x x x x x libevent/2.1.12-GCCcore-10.2.0 x x x x x x libevent/2.1.12 - x x - x x libevent/2.1.11-GCCcore-9.3.0 x x x x x x libevent/2.1.11-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libfabric/", "title": "libfabric", "text": ""}, {"location": "available_software/detail/libfabric/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libfabric installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libfabric, load one of these modules using a module load command like:

                  module load libfabric/1.19.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libfabric/1.19.0-GCCcore-13.2.0 x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x libfabric/1.15.1-GCCcore-11.3.0 x x x x x x libfabric/1.13.2-GCCcore-11.2.0 x x x x x x libfabric/1.12.1-GCCcore-10.3.0 x x x x x x libfabric/1.11.0-GCCcore-10.2.0 x x x x x x libfabric/1.11.0-GCCcore-9.3.0 - x x x x x"}, {"location": "available_software/detail/libffi/", "title": "libffi", "text": ""}, {"location": "available_software/detail/libffi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libffi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libffi, load one of these modules using a module load command like:

                  module load libffi/3.4.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libffi/3.4.4-GCCcore-13.2.0 x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x libffi/3.4.2-GCCcore-11.3.0 x x x x x x libffi/3.4.2-GCCcore-11.2.0 x x x x x x libffi/3.3-GCCcore-10.3.0 x x x x x x libffi/3.3-GCCcore-10.2.0 x x x x x x libffi/3.3-GCCcore-9.3.0 x x x x x x libffi/3.2.1-GCCcore-8.3.0 x x x x x x libffi/3.2.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgcrypt/", "title": "libgcrypt", "text": ""}, {"location": "available_software/detail/libgcrypt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgcrypt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libgcrypt, load one of these modules using a module load command like:

                  module load libgcrypt/1.9.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgcrypt/1.9.3-GCCcore-11.2.0 x x x x x x libgcrypt/1.9.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgd/", "title": "libgd", "text": ""}, {"location": "available_software/detail/libgd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libgd, load one of these modules using a module load command like:

                  module load libgd/2.3.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgd/2.3.3-GCCcore-12.3.0 x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x libgd/2.3.3-GCCcore-11.3.0 x x x x x x libgd/2.3.3-GCCcore-11.2.0 x x x x x x libgd/2.3.1-GCCcore-10.3.0 x x x x x x libgd/2.3.0-GCCcore-10.2.0 x x x x x x libgd/2.3.0-GCCcore-9.3.0 - x x - x x libgd/2.2.5-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libgeotiff/", "title": "libgeotiff", "text": ""}, {"location": "available_software/detail/libgeotiff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libgeotiff, load one of these modules using a module load command like:

                  module load libgeotiff/1.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x libgeotiff/1.7.1-GCCcore-11.3.0 x x x x x x libgeotiff/1.7.0-GCCcore-11.2.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.3.0 x x x x x x libgeotiff/1.6.0-GCCcore-10.2.0 - x x x x x libgeotiff/1.5.1-GCCcore-9.3.0 - x x - x x libgeotiff/1.5.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libgit2/", "title": "libgit2", "text": ""}, {"location": "available_software/detail/libgit2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgit2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libgit2, load one of these modules using a module load command like:

                  module load libgit2/1.7.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgit2/1.7.1-GCCcore-12.3.0 x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x libgit2/1.4.3-GCCcore-11.3.0 x x x x x x libgit2/1.1.1-GCCcore-11.2.0 x x x x x x libgit2/1.1.0-GCCcore-10.3.0 x x x x x x"}, {"location": "available_software/detail/libglvnd/", "title": "libglvnd", "text": ""}, {"location": "available_software/detail/libglvnd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libglvnd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libglvnd, load one of these modules using a module load command like:

                  module load libglvnd/1.6.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x libglvnd/1.4.0-GCCcore-11.3.0 x x x x x x libglvnd/1.3.3-GCCcore-11.2.0 x x x x x x libglvnd/1.3.3-GCCcore-10.3.0 x x x x x x libglvnd/1.3.2-GCCcore-10.2.0 x x x x x x libglvnd/1.2.0-GCCcore-9.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.3.0 - x x - x x libglvnd/1.2.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libgpg-error/", "title": "libgpg-error", "text": ""}, {"location": "available_software/detail/libgpg-error/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgpg-error installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libgpg-error, load one of these modules using a module load command like:

                  module load libgpg-error/1.42-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgpg-error/1.42-GCCcore-11.2.0 x x x x x x libgpg-error/1.42-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libgpuarray/", "title": "libgpuarray", "text": ""}, {"location": "available_software/detail/libgpuarray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libgpuarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libgpuarray, load one of these modules using a module load command like:

                  module load libgpuarray/0.7.6-fosscuda-2020b\n
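 
                  The table below shows libgpuarray is only installed on accelgor and joltik, and its fosscuda toolchain indicates a CUDA/GPU build. A hedged sketch, assuming the site provides cluster/ modules to switch your module view to a specific cluster before submitting jobs there:

                  module swap cluster/joltik                    # assumption: cluster/ modules select the target cluster's software stack\n
                  module avail libgpuarray                      # confirm which libgpuarray versions are available on that cluster\n
                  module load libgpuarray/0.7.6-fosscuda-2020b\n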

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libgpuarray/0.7.6-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/libharu/", "title": "libharu", "text": ""}, {"location": "available_software/detail/libharu/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libharu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libharu, load one of these modules using a module load command like:

                  module load libharu/2.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libharu/2.3.0-foss-2021b x x x - x x libharu/2.3.0-GCCcore-10.3.0 - x x - x x libharu/2.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libheif/", "title": "libheif", "text": ""}, {"location": "available_software/detail/libheif/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libheif installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libheif, load one of these modules using a module load command like:

                  module load libheif/1.16.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libheif/1.16.2-GCC-11.3.0 x x x x x x libheif/1.12.0-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/libiconv/", "title": "libiconv", "text": ""}, {"location": "available_software/detail/libiconv/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libiconv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libiconv, load one of these modules using a module load command like:

                  module load libiconv/1.17-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libiconv/1.17-GCCcore-13.2.0 x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x libiconv/1.17-GCCcore-11.3.0 x x x x x x libiconv/1.16-GCCcore-11.2.0 x x x x x x libiconv/1.16-GCCcore-10.3.0 x x x x x x libiconv/1.16-GCCcore-10.2.0 x x x x x x libiconv/1.16-GCCcore-9.3.0 x x x x x x libiconv/1.16-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/libidn/", "title": "libidn", "text": ""}, {"location": "available_software/detail/libidn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libidn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libidn, load one of these modules using a module load command like:

                  module load libidn/1.38-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libidn/1.38-GCCcore-11.2.0 x x x x x x libidn/1.36-GCCcore-10.3.0 - x x - x x libidn/1.35-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/libidn2/", "title": "libidn2", "text": ""}, {"location": "available_software/detail/libidn2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libidn2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libidn2, load one of these modules using a module load command like:

                  module load libidn2/2.3.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libidn2/2.3.2-GCCcore-11.2.0 x x x x x x libidn2/2.3.0-GCCcore-10.3.0 - x x x x x libidn2/2.3.0-GCCcore-10.2.0 x x x x x x libidn2/2.3.0-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/libjpeg-turbo/", "title": "libjpeg-turbo", "text": ""}, {"location": "available_software/detail/libjpeg-turbo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libjpeg-turbo, load one of these modules using a module load command like:

                  module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x libjpeg-turbo/2.1.3-GCCcore-11.3.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-11.2.0 x x x x x x libjpeg-turbo/2.0.6-GCCcore-10.3.0 x x x x x x libjpeg-turbo/2.0.5-GCCcore-10.2.0 x x x x x x libjpeg-turbo/2.0.4-GCCcore-9.3.0 - x x - x x libjpeg-turbo/2.0.3-GCCcore-8.3.0 x x x - x x libjpeg-turbo/2.0.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libjxl/", "title": "libjxl", "text": ""}, {"location": "available_software/detail/libjxl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libjxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libjxl, load one of these modules using a module load command like:

                  module load libjxl/0.8.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libjxl/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/libleidenalg/", "title": "libleidenalg", "text": ""}, {"location": "available_software/detail/libleidenalg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libleidenalg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libleidenalg, load one of these modules using a module load command like:

                  module load libleidenalg/0.11.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libleidenalg/0.11.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/libmad/", "title": "libmad", "text": ""}, {"location": "available_software/detail/libmad/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmad installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libmad, load one of these modules using a module load command like:

                  module load libmad/0.15.1b-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmad/0.15.1b-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmatheval/", "title": "libmatheval", "text": ""}, {"location": "available_software/detail/libmatheval/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmatheval installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libmatheval, load one of these modules using a module load command like:

                  module load libmatheval/1.1.11-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmatheval/1.1.11-GCCcore-9.3.0 - x x - x x libmatheval/1.1.11-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libmaus2/", "title": "libmaus2", "text": ""}, {"location": "available_software/detail/libmaus2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmaus2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libmaus2, load one of these modules using a module load command like:

                  module load libmaus2/2.0.813-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmaus2/2.0.813-GCC-12.3.0 x x x x x x libmaus2/2.0.499-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/libmypaint/", "title": "libmypaint", "text": ""}, {"location": "available_software/detail/libmypaint/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libmypaint installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libmypaint, load one of these modules using a module load command like:

                  module load libmypaint/1.6.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libmypaint/1.6.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/libobjcryst/", "title": "libobjcryst", "text": ""}, {"location": "available_software/detail/libobjcryst/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libobjcryst, load one of these modules using a module load command like:

                  module load libobjcryst/2021.1.2-intel-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libobjcryst/2021.1.2-intel-2020a - - - - - x libobjcryst/2021.1.2-foss-2021b x x x - x x libobjcryst/2017.2.3-intel-2020a - x x - x x"}, {"location": "available_software/detail/libogg/", "title": "libogg", "text": ""}, {"location": "available_software/detail/libogg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libogg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libogg, load one of these modules using a module load command like:

                  module load libogg/1.3.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libogg/1.3.5-GCCcore-12.3.0 x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x libogg/1.3.5-GCCcore-11.3.0 x x x x x x libogg/1.3.5-GCCcore-11.2.0 x x x x x x libogg/1.3.4-GCCcore-10.3.0 x x x x x x libogg/1.3.4-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libopus/", "title": "libopus", "text": ""}, {"location": "available_software/detail/libopus/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libopus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libopus, load one of these modules using a module load command like:

                  module load libopus/1.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libopus/1.4-GCCcore-12.3.0 x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x libopus/1.3.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/libpciaccess/", "title": "libpciaccess", "text": ""}, {"location": "available_software/detail/libpciaccess/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libpciaccess, load one of these modules using a module load command like:

                  module load libpciaccess/0.17-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpciaccess/0.17-GCCcore-13.2.0 x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x libpciaccess/0.16-GCCcore-11.3.0 x x x x x x libpciaccess/0.16-GCCcore-11.2.0 x x x x x x libpciaccess/0.16-GCCcore-10.3.0 x x x x x x libpciaccess/0.16-GCCcore-10.2.0 x x x x x x libpciaccess/0.16-GCCcore-9.3.0 x x x x x x libpciaccess/0.14-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/libpng/", "title": "libpng", "text": ""}, {"location": "available_software/detail/libpng/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libpng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libpng, load one of these modules using a module load command like:

                  module load libpng/1.6.40-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpng/1.6.40-GCCcore-13.2.0 x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x libpng/1.6.37-GCCcore-11.3.0 x x x x x x libpng/1.6.37-GCCcore-11.2.0 x x x x x x libpng/1.6.37-GCCcore-10.3.0 x x x x x x libpng/1.6.37-GCCcore-10.2.0 x x x x x x libpng/1.6.37-GCCcore-9.3.0 x x x x x x libpng/1.6.37-GCCcore-8.3.0 x x x - x x libpng/1.6.36-GCCcore-8.2.0 - x - - - - libpng/1.2.58 - x x x x x"}, {"location": "available_software/detail/libpsl/", "title": "libpsl", "text": ""}, {"location": "available_software/detail/libpsl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libpsl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libpsl, load one of these modules using a module load command like:

                  module load libpsl/0.21.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libpsl/0.21.1-GCCcore-11.2.0 x x x x x x libpsl/0.21.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libreadline/", "title": "libreadline", "text": ""}, {"location": "available_software/detail/libreadline/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libreadline installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libreadline, load one of these modules using a module load command like:

                  module load libreadline/8.2-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libreadline/8.2-GCCcore-13.2.0 x x x x x x libreadline/8.2-GCCcore-12.3.0 x x x x x x libreadline/8.2-GCCcore-12.2.0 x x x x x x libreadline/8.1.2-GCCcore-11.3.0 x x x x x x libreadline/8.1-GCCcore-11.2.0 x x x x x x libreadline/8.1-GCCcore-10.3.0 x x x x x x libreadline/8.0-GCCcore-10.2.0 x x x x x x libreadline/8.0-GCCcore-9.3.0 x x x x x x libreadline/8.0-GCCcore-8.3.0 x x x x x x libreadline/8.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/librosa/", "title": "librosa", "text": ""}, {"location": "available_software/detail/librosa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which librosa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using librosa, load one of these modules using a module load command like:

                  module load librosa/0.7.2-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librosa/0.7.2-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/librsvg/", "title": "librsvg", "text": ""}, {"location": "available_software/detail/librsvg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which librsvg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using librsvg, load one of these modules using a module load command like:

                  module load librsvg/2.51.2-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librsvg/2.51.2-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/librttopo/", "title": "librttopo", "text": ""}, {"location": "available_software/detail/librttopo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which librttopo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using librttopo, load one of these modules using a module load command like:

                  module load librttopo/1.1.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty librttopo/1.1.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libsigc%2B%2B/", "title": "libsigc++", "text": ""}, {"location": "available_software/detail/libsigc%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libsigc++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libsigc++, load one of these modules using a module load command like:

                  module load libsigc++/2.10.8-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsigc++/2.10.8-GCCcore-10.3.0 - x x - x x libsigc++/2.10.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsndfile/", "title": "libsndfile", "text": ""}, {"location": "available_software/detail/libsndfile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libsndfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libsndfile, load one of these modules using a module load command like:

                  module load libsndfile/1.2.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x libsndfile/1.1.0-GCCcore-11.3.0 x x x x x x libsndfile/1.0.31-GCCcore-11.2.0 x x x x x x libsndfile/1.0.31-GCCcore-10.3.0 x x x x x x libsndfile/1.0.28-GCCcore-10.2.0 x x x x x x libsndfile/1.0.28-GCCcore-9.3.0 - x x - x x libsndfile/1.0.28-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libsodium/", "title": "libsodium", "text": ""}, {"location": "available_software/detail/libsodium/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libsodium installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libsodium, load one of these modules using a module load command like:

                  module load libsodium/1.0.18-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libsodium/1.0.18-GCCcore-12.3.0 x x x x x x libsodium/1.0.18-GCCcore-12.2.0 x x x x x x libsodium/1.0.18-GCCcore-11.3.0 x x x x x x libsodium/1.0.18-GCCcore-11.2.0 x x x x x x libsodium/1.0.18-GCCcore-10.3.0 x x x x x x libsodium/1.0.18-GCCcore-10.2.0 x x x x x x libsodium/1.0.18-GCCcore-9.3.0 x x x x x x libsodium/1.0.18-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libspatialindex/", "title": "libspatialindex", "text": ""}, {"location": "available_software/detail/libspatialindex/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libspatialindex installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest first).

                  To start using libspatialindex, load one of these modules using a module load command like:

                  module load libspatialindex/1.9.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libspatialindex/1.9.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libspatialite/", "title": "libspatialite", "text": ""}, {"location": "available_software/detail/libspatialite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libspatialite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libspatialite, load one of these modules using a module load command like:

                  module load libspatialite/5.0.1-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libspatialite/5.0.1-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/libtasn1/", "title": "libtasn1", "text": ""}, {"location": "available_software/detail/libtasn1/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libtasn1 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libtasn1, load one of these modules using a module load command like:

                  module load libtasn1/4.18.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtasn1/4.18.0-GCCcore-11.2.0 x x x x x x libtasn1/4.17.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/libtirpc/", "title": "libtirpc", "text": ""}, {"location": "available_software/detail/libtirpc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libtirpc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libtirpc, load one of these modules using a module load command like:

                  module load libtirpc/1.3.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x libtirpc/1.3.2-GCCcore-11.3.0 x x x x x x libtirpc/1.3.2-GCCcore-11.2.0 x x x x x x libtirpc/1.3.2-GCCcore-10.3.0 x x x x x x libtirpc/1.3.1-GCCcore-10.2.0 - x x x x x libtirpc/1.2.6-GCCcore-9.3.0 - - x - x x libtirpc/1.2.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libtool/", "title": "libtool", "text": ""}, {"location": "available_software/detail/libtool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libtool, load one of these modules using a module load command like:

                  module load libtool/2.4.7-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libtool/2.4.7-GCCcore-13.2.0 x x x x x x libtool/2.4.7-GCCcore-12.3.0 x x x x x x libtool/2.4.7-GCCcore-12.2.0 x x x x x x libtool/2.4.7-GCCcore-11.3.0 x x x x x x libtool/2.4.7 x x x x x x libtool/2.4.6-GCCcore-11.2.0 x x x x x x libtool/2.4.6-GCCcore-10.3.0 x x x x x x libtool/2.4.6-GCCcore-10.2.0 x x x x x x libtool/2.4.6-GCCcore-9.3.0 x x x x x x libtool/2.4.6-GCCcore-8.3.0 x x x x x x libtool/2.4.6-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libunistring/", "title": "libunistring", "text": ""}, {"location": "available_software/detail/libunistring/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libunistring installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libunistring, load one of these modules using a module load command like:

                  module load libunistring/1.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libunistring/1.0-GCCcore-11.2.0 x x x x x x libunistring/0.9.10-GCCcore-10.3.0 x x x - x x libunistring/0.9.10-GCCcore-9.3.0 - x x - x x libunistring/0.9.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libunwind/", "title": "libunwind", "text": ""}, {"location": "available_software/detail/libunwind/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libunwind installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libunwind, load one of these modules using a module load command like:

                  module load libunwind/1.6.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libunwind/1.6.2-GCCcore-12.3.0 x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x libunwind/1.6.2-GCCcore-11.3.0 x x x x x x libunwind/1.5.0-GCCcore-11.2.0 x x x x x x libunwind/1.4.0-GCCcore-10.3.0 x x x x x x libunwind/1.4.0-GCCcore-10.2.0 x x x x x x libunwind/1.3.1-GCCcore-9.3.0 - x x - x x libunwind/1.3.1-GCCcore-8.3.0 x x x - x x libunwind/1.3.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libvdwxc/", "title": "libvdwxc", "text": ""}, {"location": "available_software/detail/libvdwxc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libvdwxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libvdwxc, load one of these modules using a module load command like:

                  module load libvdwxc/0.4.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvdwxc/0.4.0-foss-2021b x x x - x x libvdwxc/0.4.0-foss-2019b - x x - x x"}, {"location": "available_software/detail/libvorbis/", "title": "libvorbis", "text": ""}, {"location": "available_software/detail/libvorbis/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libvorbis installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libvorbis, load one of these modules using a module load command like:

                  module load libvorbis/1.3.7-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x libvorbis/1.3.7-GCCcore-11.3.0 x x x x x x libvorbis/1.3.7-GCCcore-11.2.0 x x x x x x libvorbis/1.3.7-GCCcore-10.3.0 x x x x x x libvorbis/1.3.7-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libvori/", "title": "libvori", "text": ""}, {"location": "available_software/detail/libvori/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libvori installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libvori, load one of these modules using a module load command like:

                  module load libvori/220621-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libvori/220621-GCCcore-12.3.0 x x x x x x libvori/220621-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/libwebp/", "title": "libwebp", "text": ""}, {"location": "available_software/detail/libwebp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libwebp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libwebp, load one of these modules using a module load command like:

                  module load libwebp/1.3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libwebp/1.3.1-GCCcore-12.3.0 x x x x x x libwebp/1.3.1-GCCcore-12.2.0 x x x x x x libwebp/1.2.4-GCCcore-11.3.0 x x x x x x libwebp/1.2.0-GCCcore-11.2.0 x x x x x x libwebp/1.2.0-GCCcore-10.3.0 x x x - x x libwebp/1.1.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/libwpe/", "title": "libwpe", "text": ""}, {"location": "available_software/detail/libwpe/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libwpe installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libwpe, load one of these modules using a module load command like:

                  module load libwpe/1.13.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libwpe/1.13.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/libxc/", "title": "libxc", "text": ""}, {"location": "available_software/detail/libxc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxc, load one of these modules using a module load command like:

                  module load libxc/6.2.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxc/6.2.2-GCC-12.3.0 x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x libxc/5.2.3-intel-compilers-2022.1.0 x x x x x x libxc/5.2.3-GCC-11.3.0 x x x x x x libxc/5.1.6-intel-compilers-2021.4.0 x x x x x x libxc/5.1.6-GCC-11.2.0 x x x - x x libxc/5.1.5-intel-compilers-2021.2.0 - x x - x x libxc/5.1.5-GCC-10.3.0 x x x x x x libxc/5.1.2-GCC-10.2.0 - x x x x x libxc/4.3.4-iccifort-2020.4.304 - x x x x x libxc/4.3.4-iccifort-2020.1.217 - x x - x x libxc/4.3.4-iccifort-2019.5.281 - x x - x x libxc/4.3.4-GCC-10.2.0 - x x x x x libxc/4.3.4-GCC-9.3.0 - x x - x x libxc/4.3.4-GCC-8.3.0 - x x - x x libxc/3.0.1-iomkl-2020a - x - - - - libxc/3.0.1-intel-2020a - x x - x x libxc/3.0.1-intel-2019b - x - - - - libxc/3.0.1-foss-2020a - x - - - -"}, {"location": "available_software/detail/libxml%2B%2B/", "title": "libxml++", "text": ""}, {"location": "available_software/detail/libxml%2B%2B/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxml++ installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxml++, load one of these modules using a module load command like:

                  module load libxml++/2.42.1-GCC-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxml++/2.42.1-GCC-10.3.0 - x x - x x libxml++/2.40.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxml2/", "title": "libxml2", "text": ""}, {"location": "available_software/detail/libxml2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxml2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxml2, load one of these modules using a module load command like:

                  module load libxml2/2.11.5-GCCcore-13.2.0\n
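
                  The libxml2 installations typically also put the xmllint command-line tool on your PATH. As an illustrative smoke test (the file name below is just a placeholder, not something provided by this documentation), you could check that an XML file is well-formed with:

                  xmllint --noout example.xml\n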

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxml2/2.11.5-GCCcore-13.2.0 x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x libxml2/2.9.13-GCCcore-11.3.0 x x x x x x libxml2/2.9.10-GCCcore-11.2.0 x x x x x x libxml2/2.9.10-GCCcore-10.3.0 x x x x x x libxml2/2.9.10-GCCcore-10.2.0 x x x x x x libxml2/2.9.10-GCCcore-9.3.0 x x x x x x libxml2/2.9.9-GCCcore-8.3.0 x x x x x x libxml2/2.9.8-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/libxslt/", "title": "libxslt", "text": ""}, {"location": "available_software/detail/libxslt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxslt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxslt, load one of these modules using a module load command like:

                  module load libxslt/1.1.38-GCCcore-13.2.0\n
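
                  The libxslt installations typically also provide the xsltproc tool. A minimal, illustrative transformation (the stylesheet and input file names are placeholders) could look like:

                  xsltproc stylesheet.xsl input.xml > output.html\n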

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxslt/1.1.38-GCCcore-13.2.0 x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x libxslt/1.1.34-GCCcore-11.3.0 x x x x x x libxslt/1.1.34-GCCcore-11.2.0 x x x x x x libxslt/1.1.34-GCCcore-10.3.0 x x x x x x libxslt/1.1.34-GCCcore-10.2.0 x x x x x x libxslt/1.1.34-GCCcore-9.3.0 - x x - x x libxslt/1.1.34-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libxsmm/", "title": "libxsmm", "text": ""}, {"location": "available_software/detail/libxsmm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libxsmm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libxsmm, load one of these modules using a module load command like:

                  module load libxsmm/1.17-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libxsmm/1.17-GCC-12.3.0 x x x x x x libxsmm/1.17-GCC-12.2.0 x x x x x x libxsmm/1.17-GCC-11.3.0 x x x x x x libxsmm/1.16.2-GCC-10.3.0 - x x x x x libxsmm/1.16.1-iccifort-2020.4.304 - x x - x - libxsmm/1.16.1-iccifort-2020.1.217 - x x - x x libxsmm/1.16.1-iccifort-2019.5.281 - x - - - - libxsmm/1.16.1-GCC-10.2.0 - x x x x x libxsmm/1.16.1-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/libyaml/", "title": "libyaml", "text": ""}, {"location": "available_software/detail/libyaml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libyaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libyaml, load one of these modules using a module load command like:

                  module load libyaml/0.2.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libyaml/0.2.5-GCCcore-12.3.0 x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x libyaml/0.2.5-GCCcore-11.3.0 x x x x x x libyaml/0.2.5-GCCcore-11.2.0 x x x x x x libyaml/0.2.5-GCCcore-10.3.0 x x x x x x libyaml/0.2.5-GCCcore-10.2.0 x x x x x x libyaml/0.2.2-GCCcore-9.3.0 x x x x x x libyaml/0.2.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/libzip/", "title": "libzip", "text": ""}, {"location": "available_software/detail/libzip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which libzip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using libzip, load one of these modules using a module load command like:

                  module load libzip/1.7.3-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty libzip/1.7.3-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/lifelines/", "title": "lifelines", "text": ""}, {"location": "available_software/detail/lifelines/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lifelines installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lifelines, load one of these modules using a module load command like:

                  module load lifelines/0.27.4-foss-2022a\n
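
                  Loading the lifelines module should also make the matching Python available. As a minimal sketch (the data points below are made up purely for illustration), you could fit a Kaplan-Meier estimator like this:

                  python -c 'from lifelines import KaplanMeierFitter; kmf = KaplanMeierFitter(); kmf.fit([5, 6, 6, 2.5, 4, 4], [1, 0, 0, 1, 1, 1]); print(kmf.median_survival_time_)'\n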

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lifelines/0.27.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/likwid/", "title": "likwid", "text": ""}, {"location": "available_software/detail/likwid/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which likwid installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using likwid, load one of these modules using a module load command like:

                  module load likwid/5.0.1-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty likwid/5.0.1-GCCcore-8.3.0 - - x - x -"}, {"location": "available_software/detail/lmoments3/", "title": "lmoments3", "text": ""}, {"location": "available_software/detail/lmoments3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lmoments3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lmoments3, load one of these modules using a module load command like:

                  module load lmoments3/1.0.6-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lmoments3/1.0.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/longread_umi/", "title": "longread_umi", "text": ""}, {"location": "available_software/detail/longread_umi/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which longread_umi installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using longread_umi, load one of these modules using a module load command like:

                  module load longread_umi/0.3.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty longread_umi/0.3.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/loomR/", "title": "loomR", "text": ""}, {"location": "available_software/detail/loomR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which loomR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using loomR, load one of these modules using a module load command like:

                  module load loomR/0.2.0-20180425-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty loomR/0.2.0-20180425-foss-2023a-R-4.3.2 x x x x x x loomR/0.2.0-20180425-foss-2022b-R-4.2.2 x x x x x x loomR/0.2.0-20180425-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/loompy/", "title": "loompy", "text": ""}, {"location": "available_software/detail/loompy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which loompy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using loompy, load one of these modules using a module load command like:

                  module load loompy/3.0.7-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty loompy/3.0.7-intel-2021b x x x - x x loompy/3.0.7-foss-2022a x x x x x x loompy/3.0.7-foss-2021b x x x - x x loompy/3.0.7-foss-2021a x x x x x x loompy/3.0.6-intel-2020b - x x - x x"}, {"location": "available_software/detail/louvain/", "title": "louvain", "text": ""}, {"location": "available_software/detail/louvain/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using louvain, load one of these modules using a module load command like:

                  module load louvain/0.8.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty louvain/0.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/lpsolve/", "title": "lpsolve", "text": ""}, {"location": "available_software/detail/lpsolve/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lpsolve installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lpsolve, load one of these modules using a module load command like:

                  module load lpsolve/5.5.2.11-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lpsolve/5.5.2.11-GCC-11.2.0 x x x x x x lpsolve/5.5.2.11-GCC-10.2.0 x x x x x x lpsolve/5.5.2.5-iccifort-2019.5.281 - x x - x x lpsolve/5.5.2.5-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/lxml/", "title": "lxml", "text": ""}, {"location": "available_software/detail/lxml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lxml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lxml, load one of these modules using a module load command like:

                  module load lxml/4.9.3-GCCcore-13.2.0\n
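
                  Loading the lxml module also loads the matching Python, so a quick, purely illustrative check that the bindings work could be:

                  python -c 'from lxml import etree; print(etree.LXML_VERSION)'\n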

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lxml/4.9.3-GCCcore-13.2.0 x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x lxml/4.9.2-GCCcore-12.2.0 x x x x x x lxml/4.9.1-GCCcore-11.3.0 x x x x x x lxml/4.6.3-GCCcore-11.2.0 x x x x x x lxml/4.6.3-GCCcore-10.3.0 x x x x x x lxml/4.6.2-GCCcore-10.2.0 x x x x x x lxml/4.5.2-GCCcore-9.3.0 - x x - x x lxml/4.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/lz4/", "title": "lz4", "text": ""}, {"location": "available_software/detail/lz4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which lz4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using lz4, load one of these modules using a module load command like:

                  module load lz4/1.9.4-GCCcore-13.2.0\n
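
                  Besides the library, the lz4 installations provide the lz4 command-line tool. As an illustrative compress/decompress round trip (file names are placeholders):

                  lz4 data.txt data.txt.lz4\n
                  lz4 -d data.txt.lz4 data.restored.txt\n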

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty lz4/1.9.4-GCCcore-13.2.0 x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x lz4/1.9.3-GCCcore-11.3.0 x x x x x x lz4/1.9.3-GCCcore-11.2.0 x x x x x x lz4/1.9.3-GCCcore-10.3.0 x x x x x x lz4/1.9.2-GCCcore-10.2.0 x x x x x x lz4/1.9.2-GCCcore-9.3.0 - x x x x x lz4/1.9.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/maeparser/", "title": "maeparser", "text": ""}, {"location": "available_software/detail/maeparser/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which maeparser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using maeparser, load one of these modules using a module load command like:

                  module load maeparser/1.3.0-iimpi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maeparser/1.3.0-iimpi-2020a x x x x x x"}, {"location": "available_software/detail/magma/", "title": "magma", "text": ""}, {"location": "available_software/detail/magma/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which magma installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using magma, load one of these modules using a module load command like:

                  module load magma/2.7.2-foss-2023a-CUDA-12.1.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty magma/2.7.2-foss-2023a-CUDA-12.1.1 x - x - x - magma/2.6.2-foss-2022a-CUDA-11.7.0 x - x - x - magma/2.6.1-foss-2021a-CUDA-11.3.1 x - - - x - magma/2.5.4-fosscuda-2020b x - - - x -"}, {"location": "available_software/detail/mahotas/", "title": "mahotas", "text": ""}, {"location": "available_software/detail/mahotas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mahotas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mahotas, load one of these modules using a module load command like:

                  module load mahotas/1.4.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mahotas/1.4.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/make/", "title": "make", "text": ""}, {"location": "available_software/detail/make/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which make installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using make, load one of these modules using a module load command like:

                  module load make/4.4.1-GCCcore-13.2.0\n
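
                  Once loaded, make is used as usual. For example, to build a project in a directory that already contains a Makefile with 4 parallel jobs (the job count here is only an illustration; match it to the cores you requested):

                  make -j 4\n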

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty make/4.4.1-GCCcore-13.2.0 x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x make/4.3-GCCcore-12.2.0 - x x - x - make/4.3-GCCcore-11.3.0 x x x - x - make/4.3-GCCcore-11.2.0 x x - x - - make/4.3-GCCcore-10.3.0 x x x - x x make/4.3-GCCcore-10.2.0 x x - - - - make/4.3-GCCcore-9.3.0 - x x - x x make/4.2.1-GCCcore-8.3.0 - x - - - x"}, {"location": "available_software/detail/makedepend/", "title": "makedepend", "text": ""}, {"location": "available_software/detail/makedepend/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which makedepend installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using makedepend, load one of these modules using a module load command like:

                  module load makedepend/1.0.6-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty makedepend/1.0.6-GCCcore-10.3.0 - x x - x x makedepend/1.0.6-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/makeinfo/", "title": "makeinfo", "text": ""}, {"location": "available_software/detail/makeinfo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which makeinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using makeinfo, load one of these modules using a module load command like:

                  module load makeinfo/7.0.3-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty makeinfo/7.0.3-GCCcore-12.3.0 x x x x x x makeinfo/6.7-GCCcore-10.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.3.0 - x x - x x makeinfo/6.7-GCCcore-10.2.0-minimal x x x x x x makeinfo/6.7-GCCcore-10.2.0 - x x x x x makeinfo/6.7-GCCcore-9.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-9.3.0 - x x - x x makeinfo/6.7-GCCcore-8.3.0-minimal x x x x x x makeinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/manta/", "title": "manta", "text": ""}, {"location": "available_software/detail/manta/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which manta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using manta, load one of these modules using a module load command like:

                  module load manta/1.6.0-gompi-2020a-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty manta/1.6.0-gompi-2020a-Python-2.7.18 - x x - x x"}, {"location": "available_software/detail/mapDamage/", "title": "mapDamage", "text": ""}, {"location": "available_software/detail/mapDamage/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mapDamage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mapDamage, load one of these modules using a module load command like:

                  module load mapDamage/2.2.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mapDamage/2.2.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/matplotlib/", "title": "matplotlib", "text": ""}, {"location": "available_software/detail/matplotlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which matplotlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using matplotlib, load one of these modules using a module load command like:

                  module load matplotlib/3.7.2-gfbf-2023a\n
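
                  Loading the matplotlib module also loads a matching Python, so a quick, purely illustrative sanity check could be:

                  python -c 'import matplotlib; print(matplotlib.__version__)'\n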

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty matplotlib/3.7.2-gfbf-2023a x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x matplotlib/3.5.2-intel-2022a x x x x x x matplotlib/3.5.2-foss-2022a x x x x x x matplotlib/3.5.2-foss-2021b x - x - x - matplotlib/3.4.3-intel-2021b x x x - x x matplotlib/3.4.3-foss-2021b x x x x x x matplotlib/3.4.2-gomkl-2021a x x x x x x matplotlib/3.4.2-foss-2021a x x x x x x matplotlib/3.3.3-intel-2020b - x x - x x matplotlib/3.3.3-fosscuda-2020b x - - - x - matplotlib/3.3.3-foss-2020b x x x x x x matplotlib/3.2.1-intel-2020a-Python-3.8.2 x x x x x x matplotlib/3.2.1-foss-2020a-Python-3.8.2 - x x - x x matplotlib/3.1.1-intel-2019b-Python-3.7.4 - x x - x x matplotlib/3.1.1-foss-2019b-Python-3.7.4 - x x - x x matplotlib/2.2.5-intel-2020a-Python-2.7.18 - x x - x x matplotlib/2.2.5-foss-2020b-Python-2.7.18 - x x x x x matplotlib/2.2.4-intel-2019b-Python-2.7.16 - x x - x x matplotlib/2.2.4-foss-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/maturin/", "title": "maturin", "text": ""}, {"location": "available_software/detail/maturin/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which maturin installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using maturin, load one of these modules using a module load command like:

                  module load maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0\n
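
                  As an illustrative use, from inside your own Rust-based Python project (a directory with a pyproject.toml and Cargo.toml, which are not provided here), a release wheel could be built with:

                  maturin build --release\n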

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x maturin/1.4.0-GCCcore-12.2.0-Rust-1.75.0 x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x maturin/1.1.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/mauveAligner/", "title": "mauveAligner", "text": ""}, {"location": "available_software/detail/mauveAligner/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mauveAligner installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mauveAligner, load one of these modules using a module load command like:

                  module load mauveAligner/4736-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mauveAligner/4736-gompi-2020a - x x - x x"}, {"location": "available_software/detail/maze/", "title": "maze", "text": ""}, {"location": "available_software/detail/maze/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which maze installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using maze, load one of these modules using a module load command like:

                  module load maze/20170124-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty maze/20170124-foss-2020b - x x x x x"}, {"location": "available_software/detail/mcu/", "title": "mcu", "text": ""}, {"location": "available_software/detail/mcu/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mcu installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mcu, load one of these modules using a module load command like:

                  module load mcu/2021-04-06-gomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mcu/2021-04-06-gomkl-2021a x x x - x x"}, {"location": "available_software/detail/medImgProc/", "title": "medImgProc", "text": ""}, {"location": "available_software/detail/medImgProc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which medImgProc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using medImgProc, load one of these modules using a module load command like:

                  module load medImgProc/2.5.7-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty medImgProc/2.5.7-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/medaka/", "title": "medaka", "text": ""}, {"location": "available_software/detail/medaka/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which medaka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using medaka, load one of these modules using a module load command like:

                  module load medaka/1.11.3-foss-2022a\n
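
                  A typical polishing run could look something like the sketch below; the basecalled reads and draft assembly file names are placeholders for your own data:

                  medaka_consensus -i basecalls.fastq -d draft.fasta -o medaka_out\n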

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty medaka/1.11.3-foss-2022a x x x x x x medaka/1.9.1-foss-2022a x x x x x x medaka/1.8.1-foss-2022a x x x x x x medaka/1.6.0-foss-2021b x x x - x x medaka/1.4.3-foss-2020b - x x x x x medaka/1.4.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.2.6-foss-2019b-Python-3.7.4 - x - - - - medaka/1.1.3-foss-2019b-Python-3.7.4 - x x - x x medaka/1.1.1-foss-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/meshalyzer/", "title": "meshalyzer", "text": ""}, {"location": "available_software/detail/meshalyzer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which meshalyzer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using meshalyzer, load one of these modules using a module load command like:

                  module load meshalyzer/20200308-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meshalyzer/20200308-foss-2020a-Python-3.8.2 - x x - x x meshalyzer/2.2-foss-2020b - x x x x x meshalyzer/2.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/meshtool/", "title": "meshtool", "text": ""}, {"location": "available_software/detail/meshtool/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which meshtool installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using meshtool, load one of these modules using a module load command like:

                  module load meshtool/16-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meshtool/16-GCC-10.2.0 - x x x x x meshtool/16-GCC-9.3.0 - x x - x x"}, {"location": "available_software/detail/meson-python/", "title": "meson-python", "text": ""}, {"location": "available_software/detail/meson-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which meson-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using meson-python, load one of these modules using a module load command like:

                  module load meson-python/0.15.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty meson-python/0.15.0-GCCcore-13.2.0 x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/metaWRAP/", "title": "metaWRAP", "text": ""}, {"location": "available_software/detail/metaWRAP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which metaWRAP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using metaWRAP, load one of these modules using a module load command like:

                  module load metaWRAP/1.3-foss-2020b-Python-2.7.18\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty metaWRAP/1.3-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/metaerg/", "title": "metaerg", "text": ""}, {"location": "available_software/detail/metaerg/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which metaerg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using metaerg, load one of these modules using a module load command like:

                  module load metaerg/1.2.3-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty metaerg/1.2.3-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/methylpy/", "title": "methylpy", "text": ""}, {"location": "available_software/detail/methylpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which methylpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using methylpy, load one of these modules using a module load command like:

                  module load methylpy/1.2.9-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty methylpy/1.2.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/mgen/", "title": "mgen", "text": ""}, {"location": "available_software/detail/mgen/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mgen, load one of these modules using a module load command like:

                  module load mgen/1.2.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mgen/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/mgltools/", "title": "mgltools", "text": ""}, {"location": "available_software/detail/mgltools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mgltools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mgltools, load one of these modules using a module load command like:

                  module load mgltools/1.5.7\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mgltools/1.5.7 x x x - x x"}, {"location": "available_software/detail/mhcnuggets/", "title": "mhcnuggets", "text": ""}, {"location": "available_software/detail/mhcnuggets/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mhcnuggets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mhcnuggets, load one of these modules using a module load command like:

                  module load mhcnuggets/2.3-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mhcnuggets/2.3-fosscuda-2020b - - - - x - mhcnuggets/2.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/microctools/", "title": "microctools", "text": ""}, {"location": "available_software/detail/microctools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which microctools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using microctools, load one of these modules using a module load command like:

                  module load microctools/0.1.0-20201209-foss-2020b-R-4.0.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty microctools/0.1.0-20201209-foss-2020b-R-4.0.4 - x x x x x"}, {"location": "available_software/detail/minibar/", "title": "minibar", "text": ""}, {"location": "available_software/detail/minibar/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which minibar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using minibar, load one of these modules using a module load command like:

                  module load minibar/20200326-iccifort-2020.1.217-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minibar/20200326-iccifort-2020.1.217-Python-3.8.2 - x x - x - minibar/20200326-iccifort-2019.5.281-Python-3.7.4 - x x - x -"}, {"location": "available_software/detail/minimap2/", "title": "minimap2", "text": ""}, {"location": "available_software/detail/minimap2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which minimap2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using minimap2, load one of these modules using a module load command like:

                  module load minimap2/2.26-GCCcore-12.3.0\n
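
                  As an illustrative example (the reference and read file names are placeholders), aligning long reads to a reference and writing SAM output could be done with:

                  minimap2 -a reference.fasta reads.fastq > alignments.sam\n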

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minimap2/2.26-GCCcore-12.3.0 x x x x x x minimap2/2.26-GCCcore-12.2.0 x x x x x x minimap2/2.24-GCCcore-11.3.0 x x x x x x minimap2/2.24-GCCcore-11.2.0 x x x - x x minimap2/2.22-GCCcore-11.2.0 x x x - x x minimap2/2.20-GCCcore-10.3.0 x x x - x x minimap2/2.20-GCCcore-10.2.0 - x x - x x minimap2/2.18-GCCcore-10.2.0 - x x x x x minimap2/2.17-GCCcore-9.3.0 - x x - x x minimap2/2.17-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/minizip/", "title": "minizip", "text": ""}, {"location": "available_software/detail/minizip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which minizip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using minizip, load one of these modules using a module load command like:

                  module load minizip/1.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty minizip/1.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/misha/", "title": "misha", "text": ""}, {"location": "available_software/detail/misha/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which misha installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using misha, load one of these modules using a module load command like:

                  module load misha/4.0.10-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty misha/4.0.10-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/mkl-service/", "title": "mkl-service", "text": ""}, {"location": "available_software/detail/mkl-service/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mkl-service installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mkl-service, load one of these modules using a module load command like:

                  module load mkl-service/2.3.0-intel-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mkl-service/2.3.0-intel-2021b x x x - x x mkl-service/2.3.0-intel-2020b - - x - x x mkl-service/2.3.0-intel-2019b-Python-3.7.4 - - x - x x"}, {"location": "available_software/detail/mm-common/", "title": "mm-common", "text": ""}, {"location": "available_software/detail/mm-common/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mm-common installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mm-common, load one of these modules using a module load command like:

                  module load mm-common/1.0.4-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mm-common/1.0.4-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/molmod/", "title": "molmod", "text": ""}, {"location": "available_software/detail/molmod/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which molmod installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using molmod, load one of these modules using a module load command like:

                  module load molmod/1.4.5-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty molmod/1.4.5-intel-2020a-Python-3.8.2 x x x x x x molmod/1.4.5-intel-2019b-Python-3.7.4 - x x - x x molmod/1.4.5-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/mongolite/", "title": "mongolite", "text": ""}, {"location": "available_software/detail/mongolite/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mongolite installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mongolite, load one of these modules using a module load command like:

                  module load mongolite/2.3.0-foss-2020b-R-4.0.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mongolite/2.3.0-foss-2020b-R-4.0.4 - x x x x x mongolite/2.3.0-foss-2020b-R-4.0.3 - x x x x x mongolite/2.3.0-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/monitor/", "title": "monitor", "text": ""}, {"location": "available_software/detail/monitor/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which monitor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using monitor, load one of these modules using a module load command like:

                  module load monitor/1.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty monitor/1.1.2 - x x - x -"}, {"location": "available_software/detail/mosdepth/", "title": "mosdepth", "text": ""}, {"location": "available_software/detail/mosdepth/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mosdepth installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mosdepth, load one of these modules using a module load command like:

                  module load mosdepth/0.3.3-GCC-11.2.0\n
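
                  An illustrative depth calculation on a coordinate-sorted, indexed BAM file (the output prefix and file name below are placeholders) could look like:

                  mosdepth sample1 sample1.bam\n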

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mosdepth/0.3.3-GCC-11.2.0 x x x - x x"}, {"location": "available_software/detail/motionSegmentation/", "title": "motionSegmentation", "text": ""}, {"location": "available_software/detail/motionSegmentation/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which motionSegmentation installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using motionSegmentation, load one of these modules using a module load command like:

                  module load motionSegmentation/2.7.9-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty motionSegmentation/2.7.9-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/mpath/", "title": "mpath", "text": ""}, {"location": "available_software/detail/mpath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mpath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mpath, load one of these modules using a module load command like:

                  module load mpath/1.1.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mpath/1.1.3-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/mpi4py/", "title": "mpi4py", "text": ""}, {"location": "available_software/detail/mpi4py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mpi4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mpi4py, load one of these modules using a module load command like:

                  module load mpi4py/3.1.4-gompi-2023a\n
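
                  Loading mpi4py also brings in the matching Python and MPI library. As a minimal, illustrative check (for real runs, use the launcher recommended elsewhere in this documentation and a proper job script), you could print the rank of each of 4 processes:

                  mpirun -n 4 python -c 'from mpi4py import MPI; print(MPI.COMM_WORLD.Get_rank())'\n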

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mpi4py/3.1.4-gompi-2023a x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x"}, {"location": "available_software/detail/mrcfile/", "title": "mrcfile", "text": ""}, {"location": "available_software/detail/mrcfile/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mrcfile installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mrcfile, load one of these modules using a module load command like:

                  module load mrcfile/1.3.0-fosscuda-2020b\n
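
                  A minimal usage sketch after loading the module; map.mrc is a placeholder for an existing MRC file of your own:

                  # open an MRC file read-only and print the shape of its data array
                  python -c "import mrcfile; m = mrcfile.open('map.mrc', permissive=True); print(m.data.shape); m.close()"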

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mrcfile/1.3.0-fosscuda-2020b x - - - x - mrcfile/1.3.0-foss-2020b x x x x x x"}, {"location": "available_software/detail/muParser/", "title": "muParser", "text": ""}, {"location": "available_software/detail/muParser/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which muParser installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using muParser, load one of these modules using a module load command like:

                  module load muParser/2.3.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty muParser/2.3.4-GCCcore-12.3.0 x x x x x x muParser/2.3.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/mujoco-py/", "title": "mujoco-py", "text": ""}, {"location": "available_software/detail/mujoco-py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mujoco-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mujoco-py, load one of these modules using a module load command like:

                  module load mujoco-py/2.3.7-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mujoco-py/2.3.7-foss-2023a x x x x x x mujoco-py/2.1.2.14-foss-2021b x x x x x x"}, {"location": "available_software/detail/multichoose/", "title": "multichoose", "text": ""}, {"location": "available_software/detail/multichoose/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which multichoose installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using multichoose, load one of these modules using a module load command like:

                  module load multichoose/1.0.3-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty multichoose/1.0.3-GCCcore-11.3.0 x x x x x x multichoose/1.0.3-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/mygene/", "title": "mygene", "text": ""}, {"location": "available_software/detail/mygene/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mygene installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mygene, load one of these modules using a module load command like:

                  module load mygene/3.2.2-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mygene/3.2.2-foss-2022b x x x x x x mygene/3.2.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/mysqlclient/", "title": "mysqlclient", "text": ""}, {"location": "available_software/detail/mysqlclient/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which mysqlclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using mysqlclient, load one of these modules using a module load command like:

                  module load mysqlclient/2.1.1-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty mysqlclient/2.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/n2v/", "title": "n2v", "text": ""}, {"location": "available_software/detail/n2v/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which n2v installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using n2v, load one of these modules using a module load command like:

                  module load n2v/0.3.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty n2v/0.3.2-foss-2022a-CUDA-11.7.0 x - - - x - n2v/0.3.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/nanocompore/", "title": "nanocompore", "text": ""}, {"location": "available_software/detail/nanocompore/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanocompore installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nanocompore, load one of these modules using a module load command like:

                  module load nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanocompore/1.0.0rc3-2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/nanofilt/", "title": "nanofilt", "text": ""}, {"location": "available_software/detail/nanofilt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanofilt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nanofilt, load one of these modules using a module load command like:

                  module load nanofilt/2.6.0-intel-2020a-Python-3.8.2\n
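
                  Once loaded, the NanoFilt command reads FASTQ data from standard input; a hedged example with placeholder file names:

                  # keep reads with mean quality >= 10 and length >= 500
                  NanoFilt -q 10 -l 500 < reads.fastq > filtered.fastq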

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanofilt/2.6.0-intel-2020a-Python-3.8.2 - x x - x x nanofilt/2.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanoget/", "title": "nanoget", "text": ""}, {"location": "available_software/detail/nanoget/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanoget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nanoget, load one of these modules using a module load command like:

                  module load nanoget/1.18.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanoget/1.18.1-foss-2022a x x x x x x nanoget/1.18.1-foss-2021a x x x x x x nanoget/1.15.0-intel-2020b - x x - x x nanoget/1.12.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanomath/", "title": "nanomath", "text": ""}, {"location": "available_software/detail/nanomath/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanomath installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nanomath, load one of these modules using a module load command like:

                  module load nanomath/1.3.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanomath/1.3.0-foss-2022a x x x x x x nanomath/1.2.1-foss-2021a x x x x x x nanomath/1.2.0-intel-2020b - x x - x x nanomath/0.23.1-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nanopolish/", "title": "nanopolish", "text": ""}, {"location": "available_software/detail/nanopolish/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nanopolish installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nanopolish, load one of these modules using a module load command like:

                  module load nanopolish/0.14.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nanopolish/0.14.0-foss-2022a x x x x x x nanopolish/0.13.3-foss-2020b - x x x x x nanopolish/0.13.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/napari/", "title": "napari", "text": ""}, {"location": "available_software/detail/napari/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which napari installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using napari, load one of these modules using a module load command like:

                  module load napari/0.4.18-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty napari/0.4.18-foss-2022a x x x x x x napari/0.4.15-foss-2021b x x x - x x"}, {"location": "available_software/detail/ncbi-vdb/", "title": "ncbi-vdb", "text": ""}, {"location": "available_software/detail/ncbi-vdb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncbi-vdb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ncbi-vdb, load one of these modules using a module load command like:

                  module load ncbi-vdb/3.0.2-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncbi-vdb/3.0.2-gompi-2022a x x x x x x ncbi-vdb/3.0.0-gompi-2021b x x x x x x ncbi-vdb/2.11.2-gompi-2021b x x x x x x ncbi-vdb/2.10.9-gompi-2020b - x x x x x ncbi-vdb/2.10.7-gompi-2020a - x x - x x"}, {"location": "available_software/detail/ncdf4/", "title": "ncdf4", "text": ""}, {"location": "available_software/detail/ncdf4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncdf4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ncdf4, load one of these modules using a module load command like:

                  module load ncdf4/1.17-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncdf4/1.17-foss-2021a-R-4.1.0 - x x - x x ncdf4/1.17-foss-2020b-R-4.0.3 x x x x x x ncdf4/1.17-foss-2020a-R-4.0.0 - x x - x x ncdf4/1.17-foss-2019b - x x - x x"}, {"location": "available_software/detail/ncolor/", "title": "ncolor", "text": ""}, {"location": "available_software/detail/ncolor/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncolor installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ncolor, load one of these modules using a module load command like:

                  module load ncolor/1.2.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncolor/1.2.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/ncurses/", "title": "ncurses", "text": ""}, {"location": "available_software/detail/ncurses/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncurses installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ncurses, load one of these modules using a module load command like:

                  module load ncurses/6.4-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncurses/6.4-GCCcore-13.2.0 x x x x x x ncurses/6.4-GCCcore-12.3.0 x x x x x x ncurses/6.4 x x x x x x ncurses/6.3-GCCcore-12.2.0 x x x x x x ncurses/6.3-GCCcore-11.3.0 x x x x x x ncurses/6.3 x x x x x x ncurses/6.2-GCCcore-11.2.0 x x x x x x ncurses/6.2-GCCcore-10.3.0 x x x x x x ncurses/6.2-GCCcore-10.2.0 x x x x x x ncurses/6.2-GCCcore-9.3.0 x x x x x x ncurses/6.2 x x x x x x ncurses/6.1-GCCcore-8.3.0 x x x x x x ncurses/6.1-GCCcore-8.2.0 - x - - - - ncurses/6.1 x x x x x x ncurses/6.0 x x x x x x"}, {"location": "available_software/detail/ncview/", "title": "ncview", "text": ""}, {"location": "available_software/detail/ncview/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ncview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ncview, load one of these modules using a module load command like:

                  module load ncview/2.1.7-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ncview/2.1.7-intel-2019b - x x - x x"}, {"location": "available_software/detail/netCDF-C%2B%2B4/", "title": "netCDF-C++4", "text": ""}, {"location": "available_software/detail/netCDF-C%2B%2B4/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netCDF-C++4 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using netCDF-C++4, load one of these modules using a module load command like:

                  module load netCDF-C++4/4.3.1-iimpi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF-C++4/4.3.1-iimpi-2020b - x x x x x netCDF-C++4/4.3.1-iimpi-2019b - x x - x x netCDF-C++4/4.3.1-gompi-2021b x x x - x x netCDF-C++4/4.3.1-gompi-2021a - x x - x x netCDF-C++4/4.3.1-gompi-2020a - x x - x x"}, {"location": "available_software/detail/netCDF-Fortran/", "title": "netCDF-Fortran", "text": ""}, {"location": "available_software/detail/netCDF-Fortran/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using netCDF-Fortran, load one of these modules using a module load command like:

                  module load netCDF-Fortran/4.6.0-iimpi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF-Fortran/4.6.0-iimpi-2022a - - x - x x netCDF-Fortran/4.6.0-gompi-2022a x - x - x - netCDF-Fortran/4.5.3-iimpi-2021b x x x x x x netCDF-Fortran/4.5.3-iimpi-2020b - x x x x x netCDF-Fortran/4.5.3-gompi-2021b x x x x x x netCDF-Fortran/4.5.3-gompi-2021a - x x - x x netCDF-Fortran/4.5.2-iimpi-2020a - x x - x x netCDF-Fortran/4.5.2-iimpi-2019b - x x - x x netCDF-Fortran/4.5.2-gompi-2020a - x x - x x netCDF-Fortran/4.5.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/netCDF/", "title": "netCDF", "text": ""}, {"location": "available_software/detail/netCDF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netCDF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using netCDF, load one of these modules using a module load command like:

                  module load netCDF/4.9.2-gompi-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netCDF/4.9.2-gompi-2023a x x x x x x netCDF/4.9.0-iimpi-2022a - - x - x x netCDF/4.9.0-gompi-2022b x x x x x x netCDF/4.9.0-gompi-2022a x x x x x x netCDF/4.8.1-iimpi-2021b x x x x x x netCDF/4.8.1-gompi-2021b x x x x x x netCDF/4.8.0-iimpi-2021a - x x - x x netCDF/4.8.0-gompi-2021a x x x x x x netCDF/4.7.4-iimpi-2020b - x x x x x netCDF/4.7.4-iimpi-2020a - x x - x x netCDF/4.7.4-gompic-2020b - - - - x - netCDF/4.7.4-gompi-2020b x x x x x x netCDF/4.7.4-gompi-2020a - x x - x x netCDF/4.7.1-iimpi-2019b - x x - x x netCDF/4.7.1-gompi-2019b x x x - x x"}, {"location": "available_software/detail/netcdf4-python/", "title": "netcdf4-python", "text": ""}, {"location": "available_software/detail/netcdf4-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which netcdf4-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using netcdf4-python, load one of these modules using a module load command like:

                  module load netcdf4-python/1.6.4-foss-2023a\n
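
                  A small, self-contained sketch of the Python API after loading the module (the file name demo.nc is arbitrary):

                  python - <<'EOF'
                  from netCDF4 import Dataset

                  # create a tiny NetCDF file with one dimension and one variable
                  ds = Dataset("demo.nc", "w")
                  ds.createDimension("x", 4)
                  var = ds.createVariable("values", "f4", ("x",))
                  var[:] = [1.0, 2.0, 3.0, 4.0]
                  ds.close()
                  print("wrote demo.nc")
                  EOF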

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty netcdf4-python/1.6.4-foss-2023a x x x x x x netcdf4-python/1.6.1-foss-2022a x x x x x x netcdf4-python/1.5.7-intel-2021b x x x - x x netcdf4-python/1.5.7-foss-2021b x x x x x x netcdf4-python/1.5.7-foss-2021a x x x x x x netcdf4-python/1.5.5.1-intel-2020b - x x - x x netcdf4-python/1.5.5.1-fosscuda-2020b - - - - x - netcdf4-python/1.5.3-intel-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-intel-2019b-Python-3.7.4 - x x - x x netcdf4-python/1.5.3-foss-2020a-Python-3.8.2 - x x - x x netcdf4-python/1.5.3-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nettle/", "title": "nettle", "text": ""}, {"location": "available_software/detail/nettle/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nettle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nettle, load one of these modules using a module load command like:

                  module load nettle/3.9.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nettle/3.9.1-GCCcore-12.3.0 x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x nettle/3.8-GCCcore-11.3.0 x x x x x x nettle/3.7.3-GCCcore-11.2.0 x x x x x x nettle/3.7.2-GCCcore-10.3.0 x x x x x x nettle/3.6-GCCcore-10.2.0 x x x x x x nettle/3.6-GCCcore-9.3.0 - x x - x x nettle/3.5.1-GCCcore-8.3.0 x x x - x x nettle/3.4.1-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/networkx/", "title": "networkx", "text": ""}, {"location": "available_software/detail/networkx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which networkx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using networkx, load one of these modules using a module load command like:

                  module load networkx/3.1-gfbf-2023a\n
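
                  For example, after loading the module:

                  # build a small graph and print a shortest path between its endpoints
                  python -c "import networkx as nx; G = nx.path_graph(5); print(nx.shortest_path(G, 0, 4))"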

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty networkx/3.1-gfbf-2023a x x x x x x networkx/3.0-gfbf-2022b x x x x x x networkx/3.0-foss-2022b x x x x x x networkx/2.8.4-intel-2022a x x x x x x networkx/2.8.4-foss-2022a x x x x x x networkx/2.6.3-foss-2021b x x x x x x networkx/2.5.1-foss-2021a x x x x x x networkx/2.5-fosscuda-2020b x - - - x - networkx/2.5-foss-2020b - x x x x x networkx/2.4-intel-2020a-Python-3.8.2 - x x - x x networkx/2.4-intel-2019b-Python-3.7.4 - x x - x x networkx/2.4-foss-2020a-Python-3.8.2 - x x - x x networkx/2.4-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nghttp2/", "title": "nghttp2", "text": ""}, {"location": "available_software/detail/nghttp2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nghttp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nghttp2, load one of these modules using a module load command like:

                  module load nghttp2/1.48.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nghttp2/1.48.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nghttp3/", "title": "nghttp3", "text": ""}, {"location": "available_software/detail/nghttp3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nghttp3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nghttp3, load one of these modules using a module load command like:

                  module load nghttp3/0.6.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nghttp3/0.6.0-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/nglview/", "title": "nglview", "text": ""}, {"location": "available_software/detail/nglview/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nglview installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nglview, load one of these modules using a module load command like:

                  module load nglview/2.7.7-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nglview/2.7.7-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/ngtcp2/", "title": "ngtcp2", "text": ""}, {"location": "available_software/detail/ngtcp2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ngtcp2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ngtcp2, load one of these modules using a module load command like:

                  module load ngtcp2/0.7.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ngtcp2/0.7.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/nichenetr/", "title": "nichenetr", "text": ""}, {"location": "available_software/detail/nichenetr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nichenetr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nichenetr, load one of these modules using a module load command like:

                  module load nichenetr/2.0.4-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nichenetr/2.0.4-foss-2022b-R-4.2.2 x x x x x x nichenetr/1.1.1-20230223-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/nlohmann_json/", "title": "nlohmann_json", "text": ""}, {"location": "available_software/detail/nlohmann_json/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nlohmann_json, load one of these modules using a module load command like:

                  module load nlohmann_json/3.11.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x nlohmann_json/3.10.5-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/nnU-Net/", "title": "nnU-Net", "text": ""}, {"location": "available_software/detail/nnU-Net/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nnU-Net installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nnU-Net, load one of these modules using a module load command like:

                  module load nnU-Net/1.7.0-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nnU-Net/1.7.0-fosscuda-2020b x - - - x - nnU-Net/1.7.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/nodejs/", "title": "nodejs", "text": ""}, {"location": "available_software/detail/nodejs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nodejs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nodejs, load one of these modules using a module load command like:

                  module load nodejs/18.17.1-GCCcore-12.3.0\n
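
                  A quick check that the interpreter is available after loading the module:

                  node -e "console.log('running Node.js', process.version)"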

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nodejs/18.17.1-GCCcore-12.3.0 x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x nodejs/16.15.1-GCCcore-11.3.0 x x x x x x nodejs/14.17.6-GCCcore-11.2.0 x x x x x x nodejs/14.17.0-GCCcore-10.3.0 x x x x x x nodejs/12.19.0-GCCcore-10.2.0 x x x x x x nodejs/12.16.1-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/noise/", "title": "noise", "text": ""}, {"location": "available_software/detail/noise/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which noise installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using noise, load one of these modules using a module load command like:

                  module load noise/1.2.2-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty noise/1.2.2-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/nsync/", "title": "nsync", "text": ""}, {"location": "available_software/detail/nsync/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nsync installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nsync, load one of these modules using a module load command like:

                  module load nsync/1.26.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nsync/1.26.0-GCCcore-12.3.0 x x x x x x nsync/1.26.0-GCCcore-12.2.0 x x x x x x nsync/1.25.0-GCCcore-11.3.0 x x x x x x nsync/1.24.0-GCCcore-11.2.0 x x x x x x nsync/1.24.0-GCCcore-10.3.0 x x x x x x nsync/1.24.0-GCCcore-10.2.0 x x x x x x nsync/1.24.0-GCCcore-9.3.0 - x x - x x nsync/1.24.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/ntCard/", "title": "ntCard", "text": ""}, {"location": "available_software/detail/ntCard/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ntCard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ntCard, load one of these modules using a module load command like:

                  module load ntCard/1.2.2-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ntCard/1.2.2-GCC-12.3.0 x x x x x x ntCard/1.2.1-GCC-8.3.0 - x x - x -"}, {"location": "available_software/detail/num2words/", "title": "num2words", "text": ""}, {"location": "available_software/detail/num2words/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which num2words installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using num2words, load one of these modules using a module load command like:

                  module load num2words/0.5.10-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty num2words/0.5.10-GCCcore-10.3.0 x - - - x -"}, {"location": "available_software/detail/numactl/", "title": "numactl", "text": ""}, {"location": "available_software/detail/numactl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which numactl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using numactl, load one of these modules using a module load command like:

                  module load numactl/2.0.16-GCCcore-13.2.0\n
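
                  Typical illustrative uses after loading the module; ./my_application is a placeholder for your own binary:

                  # inspect the NUMA topology of the node
                  numactl --hardware
                  # pin a program's CPUs and memory allocations to NUMA node 0
                  numactl --cpunodebind=0 --membind=0 ./my_application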

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numactl/2.0.16-GCCcore-13.2.0 x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x numactl/2.0.14-GCCcore-11.3.0 x x x x x x numactl/2.0.14-GCCcore-11.2.0 x x x x x x numactl/2.0.14-GCCcore-10.3.0 x x x x x x numactl/2.0.13-GCCcore-10.2.0 x x x x x x numactl/2.0.13-GCCcore-9.3.0 x x x x x x numactl/2.0.12-GCCcore-8.3.0 x x x x x x"}, {"location": "available_software/detail/numba/", "title": "numba", "text": ""}, {"location": "available_software/detail/numba/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which numba installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using numba, load one of these modules using a module load command like:

                  module load numba/0.58.1-foss-2023a\n
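
                  A minimal JIT example after loading the module (numpy is provided by the same toolchain):

                  python - <<'EOF'
                  import numpy as np
                  from numba import njit

                  @njit
                  def total(values):
                      # plain Python loop, compiled to machine code by numba
                      s = 0.0
                      for v in values:
                          s += v
                      return s

                  print(total(np.arange(1000.0)))
                  EOF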

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numba/0.58.1-foss-2023a x x x x x x numba/0.58.1-foss-2022b x x x x x x numba/0.56.4-foss-2022a-CUDA-11.7.0 x - x - x - numba/0.56.4-foss-2022a x x x x x x numba/0.54.1-intel-2021b x x x - x x numba/0.54.1-foss-2021b-CUDA-11.4.1 x - - - x - numba/0.54.1-foss-2021b x x x x x x numba/0.53.1-fosscuda-2020b - - - - x - numba/0.53.1-foss-2021a x x x x x x numba/0.53.1-foss-2020b - x x x x x numba/0.52.0-intel-2020b - x x - x x numba/0.52.0-fosscuda-2020b - - - - x - numba/0.52.0-foss-2020b - x x x x x numba/0.50.0-intel-2020a-Python-3.8.2 - x x - x x numba/0.50.0-foss-2020a-Python-3.8.2 - x x - x x numba/0.47.0-foss-2019b-Python-3.7.4 x x x - x x"}, {"location": "available_software/detail/numexpr/", "title": "numexpr", "text": ""}, {"location": "available_software/detail/numexpr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which numexpr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using numexpr, load one of these modules using a module load command like:

                  module load numexpr/2.7.1-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty numexpr/2.7.1-intel-2020a-Python-3.8.2 x x x x x x numexpr/2.7.1-intel-2019b-Python-2.7.16 - x - - - x numexpr/2.7.1-foss-2020a-Python-3.8.2 - x x - x x numexpr/2.7.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/nvtop/", "title": "nvtop", "text": ""}, {"location": "available_software/detail/nvtop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which nvtop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using nvtop, load one of these modules using a module load command like:

                  module load nvtop/1.2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty nvtop/1.2.1-GCCcore-10.3.0 x - - - - -"}, {"location": "available_software/detail/olaFlow/", "title": "olaFlow", "text": ""}, {"location": "available_software/detail/olaFlow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which olaFlow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using olaFlow, load one of these modules using a module load command like:

                  module load olaFlow/20210820-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty olaFlow/20210820-foss-2021b x x x - x x"}, {"location": "available_software/detail/olego/", "title": "olego", "text": ""}, {"location": "available_software/detail/olego/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which olego installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using olego, load one of these modules using a module load command like:

                  module load olego/1.1.9-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty olego/1.1.9-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/onedrive/", "title": "onedrive", "text": ""}, {"location": "available_software/detail/onedrive/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which onedrive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using onedrive, load one of these modules using a module load command like:

                  module load onedrive/2.4.21-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty onedrive/2.4.21-GCCcore-11.3.0 x x x - x x"}, {"location": "available_software/detail/ont-fast5-api/", "title": "ont-fast5-api", "text": ""}, {"location": "available_software/detail/ont-fast5-api/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ont-fast5-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ont-fast5-api, load one of these modules using a module load command like:

                  module load ont-fast5-api/4.1.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ont-fast5-api/4.1.1-foss-2022b x x x x x x ont-fast5-api/4.1.1-foss-2022a x x x x x x ont-fast5-api/4.0.2-foss-2021b x x x - x x ont-fast5-api/4.0.0-foss-2021a x x x - x x ont-fast5-api/3.3.0-fosscuda-2020b - - - - x - ont-fast5-api/3.3.0-foss-2020b - x x x x x ont-fast5-api/3.3.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/openCARP/", "title": "openCARP", "text": ""}, {"location": "available_software/detail/openCARP/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openCARP installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using openCARP, load one of these modules using a module load command like:

                  module load openCARP/6.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openCARP/6.0-foss-2020b - x x x x x openCARP/3.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/openkim-models/", "title": "openkim-models", "text": ""}, {"location": "available_software/detail/openkim-models/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openkim-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using openkim-models, load one of these modules using a module load command like:

                  module load openkim-models/20190725-intel-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openkim-models/20190725-intel-2019b - x x - x x openkim-models/20190725-foss-2019b - x x - x x"}, {"location": "available_software/detail/openpyxl/", "title": "openpyxl", "text": ""}, {"location": "available_software/detail/openpyxl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openpyxl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using openpyxl, load one of these modules using a module load command like:

                  module load openpyxl/3.1.2-GCCcore-13.2.0\n
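
                  For example, after loading the module (demo.xlsx is an arbitrary output name):

                  # create a workbook, write one cell and save it
                  python -c "from openpyxl import Workbook; wb = Workbook(); wb.active['A1'] = 42; wb.save('demo.xlsx')"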

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openpyxl/3.1.2-GCCcore-13.2.0 x x x x x x openpyxl/3.1.2-GCCcore-12.3.0 x x x x x x openpyxl/3.1.2-GCCcore-12.2.0 x x x x x x openpyxl/3.0.10-GCCcore-11.3.0 x x x x x x openpyxl/3.0.9-GCCcore-11.2.0 x x x x x x openpyxl/3.0.7-GCCcore-10.3.0 x x x x x x openpyxl/2.6.4-GCCcore-8.3.0-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/openslide-python/", "title": "openslide-python", "text": ""}, {"location": "available_software/detail/openslide-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which openslide-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using openslide-python, load one of these modules using a module load command like:

                  module load openslide-python/1.2.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty openslide-python/1.2.0-GCCcore-11.3.0 x - x - x - openslide-python/1.1.2-GCCcore-11.2.0 x x x - x x openslide-python/1.1.2-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/orca/", "title": "orca", "text": ""}, {"location": "available_software/detail/orca/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using orca, load one of these modules using a module load command like:

                  module load orca/1.3.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty orca/1.3.1-GCCcore-10.2.0 - x - - - - orca/1.3.0-GCCcore-8.3.0 - x - - - -"}, {"location": "available_software/detail/p11-kit/", "title": "p11-kit", "text": ""}, {"location": "available_software/detail/p11-kit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which p11-kit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using p11-kit, load one of these modules using a module load command like:

                  module load p11-kit/0.24.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p11-kit/0.24.1-GCCcore-11.2.0 x x x x x x p11-kit/0.24.0-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/p4est/", "title": "p4est", "text": ""}, {"location": "available_software/detail/p4est/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which p4est installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using p4est, load one of these modules using a module load command like:

                  module load p4est/2.8-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p4est/2.8-foss-2021a - x x - x x"}, {"location": "available_software/detail/p7zip/", "title": "p7zip", "text": ""}, {"location": "available_software/detail/p7zip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which p7zip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using p7zip, load one of these modules using a module load command like:

                  module load p7zip/17.03-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty p7zip/17.03-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/pIRS/", "title": "pIRS", "text": ""}, {"location": "available_software/detail/pIRS/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pIRS installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pIRS, load one of these modules using a module load command like:

                  module load pIRS/2.0.2-gompi-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pIRS/2.0.2-gompi-2019b - x x - x x"}, {"location": "available_software/detail/packmol/", "title": "packmol", "text": ""}, {"location": "available_software/detail/packmol/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which packmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using packmol, load one of these modules using a module load command like:

                  module load packmol/v20.2.2-iccifort-2020.1.217\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty packmol/v20.2.2-iccifort-2020.1.217 - x x - x x"}, {"location": "available_software/detail/pagmo/", "title": "pagmo", "text": ""}, {"location": "available_software/detail/pagmo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pagmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pagmo, load one of these modules using a module load command like:

                  module load pagmo/2.17.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pagmo/2.17.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/pairtools/", "title": "pairtools", "text": ""}, {"location": "available_software/detail/pairtools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pairtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pairtools, load one of these modules using a module load command like:

                  module load pairtools/0.3.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pairtools/0.3.0-foss-2021b x x x - x x"}, {"location": "available_software/detail/panaroo/", "title": "panaroo", "text": ""}, {"location": "available_software/detail/panaroo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which panaroo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using panaroo, load one of these modules using a module load command like:

                  module load panaroo/1.2.8-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty panaroo/1.2.8-foss-2020b - x x x x x"}, {"location": "available_software/detail/pandas/", "title": "pandas", "text": ""}, {"location": "available_software/detail/pandas/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pandas installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pandas, load one of these modules using a module load command like:

                  module load pandas/1.1.2-foss-2020a-Python-3.8.2\n
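
                  A quick sanity check after loading the module:

                  # build a tiny DataFrame and print its summary statistics
                  python -c "import pandas as pd; df = pd.DataFrame({'a': [1, 2, 3]}); print(df.describe())"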

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pandas/1.1.2-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel-fastq-dump/", "title": "parallel-fastq-dump", "text": ""}, {"location": "available_software/detail/parallel-fastq-dump/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which parallel-fastq-dump installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using parallel-fastq-dump, load one of these modules using a module load command like:

                  module load parallel-fastq-dump/0.6.7-gompi-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parallel-fastq-dump/0.6.7-gompi-2022a x x x x x x parallel-fastq-dump/0.6.7-gompi-2020b - x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-SRA-Toolkit-3.0.0-Python-3.8.2 x x x - x x parallel-fastq-dump/0.6.6-GCCcore-9.3.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/parallel/", "title": "parallel", "text": ""}, {"location": "available_software/detail/parallel/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which parallel installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using parallel, load one of these modules using a module load command like:

                  module load parallel/20230722-GCCcore-12.2.0\n
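
                  A hedged example of GNU parallel after loading the module; the data*.txt files are placeholders for your own input:

                  # run a command once per argument, in parallel
                  parallel echo ::: A B C
                  # compress several files concurrently
                  parallel gzip ::: data1.txt data2.txt data3.txt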

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parallel/20230722-GCCcore-12.2.0 x x x x x x parallel/20220722-GCCcore-11.3.0 x x x x x x parallel/20210722-GCCcore-11.2.0 - x x x x x parallel/20210622-GCCcore-10.3.0 - x x x x x parallel/20210322-GCCcore-10.2.0 - x x x x x parallel/20200522-GCCcore-9.3.0 - x x - x x parallel/20190922-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/parasail/", "title": "parasail", "text": ""}, {"location": "available_software/detail/parasail/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using parasail, load one of these modules using a module load command like:

                  module load parasail/2.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty parasail/2.6-GCC-11.3.0 x x x x x x parasail/2.5-GCC-11.2.0 x x x - x x parasail/2.4.3-GCC-10.3.0 x x x - x x parasail/2.4.3-GCC-10.2.0 - - x - x - parasail/2.4.2-iccifort-2020.1.217 - x x - x x parasail/2.4.1-intel-2019b - x x - x x parasail/2.4.1-foss-2019b - x - - - - parasail/2.4.1-GCC-8.3.0 - - x - x x"}, {"location": "available_software/detail/patchelf/", "title": "patchelf", "text": ""}, {"location": "available_software/detail/patchelf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which patchelf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using patchelf, load one of these modules using a module load command like:

                  module load patchelf/0.18.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty patchelf/0.18.0-GCCcore-13.2.0 x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x patchelf/0.17.2-GCCcore-12.2.0 x x x x x x patchelf/0.15.0-GCCcore-11.3.0 x x x x x x patchelf/0.13-GCCcore-11.2.0 x x x x x x patchelf/0.12-GCCcore-10.3.0 - x x - x x patchelf/0.12-GCCcore-9.3.0 - x x - x x patchelf/0.10-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pauvre/", "title": "pauvre", "text": ""}, {"location": "available_software/detail/pauvre/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pauvre installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pauvre, load one of these modules using a module load command like:

                  module load pauvre/0.1924-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pauvre/0.1924-intel-2020b - x x - x x pauvre/0.1923-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pblat/", "title": "pblat", "text": ""}, {"location": "available_software/detail/pblat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pblat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pblat, load one of these modules using a module load command like:

                  module load pblat/2.5.1-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pblat/2.5.1-foss-2022b x x x x x x"}, {"location": "available_software/detail/pdsh/", "title": "pdsh", "text": ""}, {"location": "available_software/detail/pdsh/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pdsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pdsh, load one of these modules using a module load command like:

                  module load pdsh/2.34-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pdsh/2.34-GCCcore-12.3.0 x x x x x x pdsh/2.34-GCCcore-12.2.0 x x x x x x pdsh/2.34-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/peakdetect/", "title": "peakdetect", "text": ""}, {"location": "available_software/detail/peakdetect/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which peakdetect installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using peakdetect, load one of these modules using a module load command like:

                  module load peakdetect/1.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty peakdetect/1.2-foss-2022a x x x x x x"}, {"location": "available_software/detail/petsc4py/", "title": "petsc4py", "text": ""}, {"location": "available_software/detail/petsc4py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which petsc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using petsc4py, load one of these modules using a module load command like:

                  module load petsc4py/3.17.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty petsc4py/3.17.4-foss-2022a x x x x x x petsc4py/3.15.0-foss-2021a - x x - x x petsc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pftoolsV3/", "title": "pftoolsV3", "text": ""}, {"location": "available_software/detail/pftoolsV3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pftoolsV3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pftoolsV3, load one of these modules using a module load command like:

                  module load pftoolsV3/3.2.11-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pftoolsV3/3.2.11-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phonemizer/", "title": "phonemizer", "text": ""}, {"location": "available_software/detail/phonemizer/#available-modules", "title": "Available modules", "text": "

The overview below shows which phonemizer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phonemizer, load one of these modules using a module load command like:

                  module load phonemizer/2.2.1-gompi-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phonemizer/2.2.1-gompi-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/phonopy/", "title": "phonopy", "text": ""}, {"location": "available_software/detail/phonopy/#available-modules", "title": "Available modules", "text": "

The overview below shows which phonopy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phonopy, load one of these modules using a module load command like:

                  module load phonopy/2.7.1-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phonopy/2.7.1-intel-2020a-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/phototonic/", "title": "phototonic", "text": ""}, {"location": "available_software/detail/phototonic/#available-modules", "title": "Available modules", "text": "

The overview below shows which phototonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phototonic, load one of these modules using a module load command like:

                  module load phototonic/2.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phototonic/2.1-GCCcore-10.3.0 - x x - x x"}, {"location": "available_software/detail/phyluce/", "title": "phyluce", "text": ""}, {"location": "available_software/detail/phyluce/#available-modules", "title": "Available modules", "text": "

The overview below shows which phyluce installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using phyluce, load one of these modules using a module load command like:

                  module load phyluce/1.7.3-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty phyluce/1.7.3-foss-2023a x x x x x x"}, {"location": "available_software/detail/picard/", "title": "picard", "text": ""}, {"location": "available_software/detail/picard/#available-modules", "title": "Available modules", "text": "

The overview below shows which picard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using picard, load one of these modules using a module load command like:

                  module load picard/2.25.1-Java-11\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty picard/2.25.1-Java-11 x x x x x x picard/2.25.0-Java-11 - x x x x x picard/2.21.6-Java-11 - x x - x x picard/2.21.1-Java-11 - - x - x x picard/2.18.27-Java-1.8 - - - - - x"}, {"location": "available_software/detail/pigz/", "title": "pigz", "text": ""}, {"location": "available_software/detail/pigz/#available-modules", "title": "Available modules", "text": "

The overview below shows which pigz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pigz, load one of these modules using a module load command like:

                  module load pigz/2.8-GCCcore-12.3.0\n
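
As an illustration of typical usage (the filename is a placeholder), pigz compresses with multiple threads; -p sets the thread count and -k keeps the original file:

pigz -p 8 -k data.tar\n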

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pigz/2.8-GCCcore-12.3.0 x x x x x x pigz/2.7-GCCcore-11.3.0 x x x x x x pigz/2.6-GCCcore-11.2.0 x x x - x x pigz/2.6-GCCcore-10.2.0 - x x x x x pigz/2.4-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pixman/", "title": "pixman", "text": ""}, {"location": "available_software/detail/pixman/#available-modules", "title": "Available modules", "text": "

The overview below shows which pixman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pixman, load one of these modules using a module load command like:

                  module load pixman/0.42.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pixman/0.42.2-GCCcore-12.3.0 x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x pixman/0.40.0-GCCcore-11.3.0 x x x x x x pixman/0.40.0-GCCcore-11.2.0 x x x x x x pixman/0.40.0-GCCcore-10.3.0 x x x x x x pixman/0.40.0-GCCcore-10.2.0 x x x x x x pixman/0.38.4-GCCcore-9.3.0 x x x x x x pixman/0.38.4-GCCcore-8.3.0 x x x - x x pixman/0.38.0-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/pkg-config/", "title": "pkg-config", "text": ""}, {"location": "available_software/detail/pkg-config/#available-modules", "title": "Available modules", "text": "

The overview below shows which pkg-config installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pkg-config, load one of these modules using a module load command like:

                  module load pkg-config/0.29.2-GCCcore-12.2.0\n
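
By way of example, pkg-config can list the packages it currently finds and print compile/link flags for one of them; whether a particular .pc file (libpng is used here purely as an illustration) is visible depends on which other modules you have loaded:

pkg-config --list-all\n
pkg-config --cflags --libs libpng\n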

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkg-config/0.29.2-GCCcore-12.2.0 x x x x x x pkg-config/0.29.2-GCCcore-11.3.0 x x x x x x pkg-config/0.29.2-GCCcore-11.2.0 x x x x x x pkg-config/0.29.2-GCCcore-10.3.0 x x x x x x pkg-config/0.29.2-GCCcore-10.2.0 x x x x x x pkg-config/0.29.2-GCCcore-9.3.0 x x x x x x pkg-config/0.29.2-GCCcore-8.3.0 x x x - x x pkg-config/0.29.2-GCCcore-8.2.0 - x - - - - pkg-config/0.29.2 x x x - x x"}, {"location": "available_software/detail/pkgconf/", "title": "pkgconf", "text": ""}, {"location": "available_software/detail/pkgconf/#available-modules", "title": "Available modules", "text": "

The overview below shows which pkgconf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pkgconf, load one of these modules using a module load command like:

                  module load pkgconf/2.0.3-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x pkgconf/1.8.0-GCCcore-11.3.0 x x x x x x pkgconf/1.8.0-GCCcore-11.2.0 x x x x x x pkgconf/1.8.0 x x x x x x"}, {"location": "available_software/detail/pkgconfig/", "title": "pkgconfig", "text": ""}, {"location": "available_software/detail/pkgconfig/#available-modules", "title": "Available modules", "text": "

The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pkgconfig, load one of these modules using a module load command like:

                  module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.3.0-python x x x x x x pkgconfig/1.5.5-GCCcore-11.2.0-python x x x x x x pkgconfig/1.5.4-GCCcore-10.3.0-python x x x x x x pkgconfig/1.5.1-GCCcore-10.2.0-python x x x x x x pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2 x x x x x x pkgconfig/1.5.1-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/plot1cell/", "title": "plot1cell", "text": ""}, {"location": "available_software/detail/plot1cell/#available-modules", "title": "Available modules", "text": "

The overview below shows which plot1cell installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using plot1cell, load one of these modules using a module load command like:

                  module load plot1cell/0.0.1-foss-2022b-R-4.2.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plot1cell/0.0.1-foss-2022b-R-4.2.2 x x x x x x plot1cell/0.0.1-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/plotly-orca/", "title": "plotly-orca", "text": ""}, {"location": "available_software/detail/plotly-orca/#available-modules", "title": "Available modules", "text": "

The overview below shows which plotly-orca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using plotly-orca, load one of these modules using a module load command like:

                  module load plotly-orca/1.3.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plotly-orca/1.3.1-GCCcore-10.2.0 - x x x x x plotly-orca/1.3.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/plotly.py/", "title": "plotly.py", "text": ""}, {"location": "available_software/detail/plotly.py/#available-modules", "title": "Available modules", "text": "

The overview below shows which plotly.py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using plotly.py, load one of these modules using a module load command like:

                  module load plotly.py/5.16.0-GCCcore-12.3.0\n
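
Since there is no display attached to cluster nodes, a common pattern is to write figures to an HTML file instead of showing them interactively. A minimal sketch (the data points and output filename are made up):

python -c "import plotly.express as px; fig = px.scatter(x=[1, 2, 3], y=[4, 1, 9]); fig.write_html('scatter.html')"\n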

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty plotly.py/5.16.0-GCCcore-12.3.0 x x x x x x plotly.py/5.13.1-GCCcore-12.2.0 x x x x x x plotly.py/5.12.0-GCCcore-11.3.0 x x x x x x plotly.py/5.10.0-GCCcore-11.3.0 x x x - x x plotly.py/5.4.0-GCCcore-11.2.0 x x x - x x plotly.py/5.1.0-GCCcore-10.3.0 x x x - x x plotly.py/4.14.3-GCCcore-10.2.0 - x x x x x plotly.py/4.8.1-GCCcore-9.3.0 - x x - x x plotly.py/4.4.1-intel-2019b - x x - x x"}, {"location": "available_software/detail/pocl/", "title": "pocl", "text": ""}, {"location": "available_software/detail/pocl/#available-modules", "title": "Available modules", "text": "

The overview below shows which pocl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pocl, load one of these modules using a module load command like:

                  module load pocl/4.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pocl/4.0-GCC-12.3.0 x x x x x x pocl/3.0-GCC-11.3.0 x x x - x x pocl/1.8-GCC-11.3.0-CUDA-11.7.0 x - - - x - pocl/1.8-GCC-11.3.0 x x x x x x pocl/1.8-GCC-11.2.0 x x x - x x pocl/1.6-gcccuda-2020b - - - - x - pocl/1.6-GCC-10.2.0 - x x x x x pocl/1.4-gcccuda-2019b x - - - x -"}, {"location": "available_software/detail/pod5-file-format/", "title": "pod5-file-format", "text": ""}, {"location": "available_software/detail/pod5-file-format/#available-modules", "title": "Available modules", "text": "

The overview below shows which pod5-file-format installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pod5-file-format, load one of these modules using a module load command like:

                  module load pod5-file-format/0.1.8-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pod5-file-format/0.1.8-foss-2022a x x x x x x"}, {"location": "available_software/detail/poetry/", "title": "poetry", "text": ""}, {"location": "available_software/detail/poetry/#available-modules", "title": "Available modules", "text": "

The overview below shows which poetry installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using poetry, load one of these modules using a module load command like:

                  module load poetry/1.7.1-GCCcore-12.3.0\n
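
As a quick illustration (the project name is a placeholder), poetry can scaffold a new project and add a dependency to it; note that poetry add downloads packages from PyPI and therefore needs internet access, which may not be available on every node:

poetry new demo-project\n
cd demo-project && poetry add requests\n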

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty poetry/1.7.1-GCCcore-12.3.0 x x x x x x poetry/1.6.1-GCCcore-13.2.0 x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2 x x x x x x"}, {"location": "available_software/detail/polars/", "title": "polars", "text": ""}, {"location": "available_software/detail/polars/#available-modules", "title": "Available modules", "text": "

The overview below shows which polars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using polars, load one of these modules using a module load command like:

                  module load polars/0.15.6-foss-2022a\n
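
A minimal check that the polars module works, using a throwaway in-memory DataFrame (column name and values are arbitrary):

python -c "import polars as pl; df = pl.DataFrame({'x': [1, 2, 3]}); print(df.select(pl.col('x').sum()))"\n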

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty polars/0.15.6-foss-2022a x x x x x x"}, {"location": "available_software/detail/poppler/", "title": "poppler", "text": ""}, {"location": "available_software/detail/poppler/#available-modules", "title": "Available modules", "text": "

The overview below shows which poppler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using poppler, load one of these modules using a module load command like:

                  module load poppler/23.09.0-GCC-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty poppler/23.09.0-GCC-12.3.0 x x x x x x poppler/22.01.0-GCC-11.2.0 x x x - x x poppler/21.06.1-GCC-10.3.0 - x x - x -"}, {"location": "available_software/detail/popscle/", "title": "popscle", "text": ""}, {"location": "available_software/detail/popscle/#available-modules", "title": "Available modules", "text": "

The overview below shows which popscle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using popscle, load one of these modules using a module load command like:

                  module load popscle/0.1-beta-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty popscle/0.1-beta-foss-2019b - x x - x x popscle/0.1-beta-20210505-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/porefoam/", "title": "porefoam", "text": ""}, {"location": "available_software/detail/porefoam/#available-modules", "title": "Available modules", "text": "

The overview below shows which porefoam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using porefoam, load one of these modules using a module load command like:

                  module load porefoam/2021-09-21-foss-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty porefoam/2021-09-21-foss-2020a - x x - x x"}, {"location": "available_software/detail/powerlaw/", "title": "powerlaw", "text": ""}, {"location": "available_software/detail/powerlaw/#available-modules", "title": "Available modules", "text": "

The overview below shows which powerlaw installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using powerlaw, load one of these modules using a module load command like:

                  module load powerlaw/1.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty powerlaw/1.5-foss-2022a x x x x x x"}, {"location": "available_software/detail/pplacer/", "title": "pplacer", "text": ""}, {"location": "available_software/detail/pplacer/#available-modules", "title": "Available modules", "text": "

The overview below shows which pplacer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pplacer, load one of these modules using a module load command like:

                  module load pplacer/1.1.alpha19\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pplacer/1.1.alpha19 x x x x x x"}, {"location": "available_software/detail/preseq/", "title": "preseq", "text": ""}, {"location": "available_software/detail/preseq/#available-modules", "title": "Available modules", "text": "

The overview below shows which preseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using preseq, load one of these modules using a module load command like:

                  module load preseq/3.2.0-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty preseq/3.2.0-GCC-11.3.0 x x x x x x"}, {"location": "available_software/detail/presto/", "title": "presto", "text": ""}, {"location": "available_software/detail/presto/#available-modules", "title": "Available modules", "text": "

The overview below shows which presto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using presto, load one of these modules using a module load command like:

                  module load presto/1.0.0-20230501-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty presto/1.0.0-20230501-foss-2023a-R-4.3.2 x x x x x x presto/1.0.0-20230113-foss-2022a-R-4.2.1 x x x x x x presto/1.0.0-20200718-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/pretty-yaml/", "title": "pretty-yaml", "text": ""}, {"location": "available_software/detail/pretty-yaml/#available-modules", "title": "Available modules", "text": "

The overview below shows which pretty-yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pretty-yaml, load one of these modules using a module load command like:

                  module load pretty-yaml/21.10.1-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pretty-yaml/21.10.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/prodigal/", "title": "prodigal", "text": ""}, {"location": "available_software/detail/prodigal/#available-modules", "title": "Available modules", "text": "

The overview below shows which prodigal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using prodigal, load one of these modules using a module load command like:

                  module load prodigal/2.6.3-GCCcore-12.3.0\n
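
Typical usage looks roughly like the following; genome.fna, genes.gbk and proteins.faa are placeholder filenames for the input genome, the gene coordinate output and the predicted protein translations:

prodigal -i genome.fna -o genes.gbk -a proteins.faa\n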

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty prodigal/2.6.3-GCCcore-12.3.0 x x x x x x prodigal/2.6.3-GCCcore-12.2.0 x x x x x x prodigal/2.6.3-GCCcore-11.3.0 x x x x x x prodigal/2.6.3-GCCcore-11.2.0 x x x x x x prodigal/2.6.3-GCCcore-10.2.0 x x x x x x prodigal/2.6.3-GCCcore-9.3.0 - x x - x x prodigal/2.6.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/prokka/", "title": "prokka", "text": ""}, {"location": "available_software/detail/prokka/#available-modules", "title": "Available modules", "text": "

The overview below shows which prokka installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using prokka, load one of these modules using a module load command like:

                  module load prokka/1.14.5-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty prokka/1.14.5-gompi-2020b - x x x x x prokka/1.14.5-gompi-2019b - x x - x x"}, {"location": "available_software/detail/protobuf-python/", "title": "protobuf-python", "text": ""}, {"location": "available_software/detail/protobuf-python/#available-modules", "title": "Available modules", "text": "

The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using protobuf-python, load one of these modules using a module load command like:

                  module load protobuf-python/4.24.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x protobuf-python/4.23.0-GCCcore-12.2.0 x x x x x x protobuf-python/3.19.4-GCCcore-11.3.0 x x x x x x protobuf-python/3.17.3-GCCcore-11.2.0 x x x x x x protobuf-python/3.17.3-GCCcore-10.3.0 x x x x x x protobuf-python/3.14.0-GCCcore-10.2.0 x x x x x x protobuf-python/3.13.0-foss-2020a-Python-3.8.2 - x x - x x protobuf-python/3.10.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/protobuf/", "title": "protobuf", "text": ""}, {"location": "available_software/detail/protobuf/#available-modules", "title": "Available modules", "text": "

The overview below shows which protobuf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using protobuf, load one of these modules using a module load command like:

                  module load protobuf/24.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty protobuf/24.0-GCCcore-12.3.0 x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x protobuf/3.19.4-GCCcore-11.3.0 x x x x x x protobuf/3.17.3-GCCcore-11.2.0 x x x x x x protobuf/3.17.3-GCCcore-10.3.0 x x x x x x protobuf/3.14.0-GCCcore-10.2.0 x x x x x x protobuf/3.13.0-GCCcore-9.3.0 - x x - x x protobuf/3.10.0-GCCcore-8.3.0 - x x - x x protobuf/2.5.0-GCCcore-10.2.0 - x x - x x protobuf/2.5.0-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/psutil/", "title": "psutil", "text": ""}, {"location": "available_software/detail/psutil/#available-modules", "title": "Available modules", "text": "

The overview below shows which psutil installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using psutil, load one of these modules using a module load command like:

                  module load psutil/5.9.5-GCCcore-12.2.0\n
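
A small sanity check that the psutil module is usable, printing the number of visible cores and the current memory usage of the node you are logged in on:

python -c "import psutil; print(psutil.cpu_count(), psutil.virtual_memory().percent)"\n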

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty psutil/5.9.5-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/psycopg2/", "title": "psycopg2", "text": ""}, {"location": "available_software/detail/psycopg2/#available-modules", "title": "Available modules", "text": "

The overview below shows which psycopg2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using psycopg2, load one of these modules using a module load command like:

                  module load psycopg2/2.9.6-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty psycopg2/2.9.6-GCCcore-11.3.0 x x x x x x psycopg2/2.9.5-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pugixml/", "title": "pugixml", "text": ""}, {"location": "available_software/detail/pugixml/#available-modules", "title": "Available modules", "text": "

The overview below shows which pugixml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pugixml, load one of these modules using a module load command like:

                  module load pugixml/1.12.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pugixml/1.12.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/pullseq/", "title": "pullseq", "text": ""}, {"location": "available_software/detail/pullseq/#available-modules", "title": "Available modules", "text": "

The overview below shows which pullseq installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pullseq, load one of these modules using a module load command like:

                  module load pullseq/1.0.2-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pullseq/1.0.2-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/purge_dups/", "title": "purge_dups", "text": ""}, {"location": "available_software/detail/purge_dups/#available-modules", "title": "Available modules", "text": "

The overview below shows which purge_dups installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using purge_dups, load one of these modules using a module load command like:

                  module load purge_dups/1.2.5-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty purge_dups/1.2.5-foss-2021b x x x - x x"}, {"location": "available_software/detail/pv/", "title": "pv", "text": ""}, {"location": "available_software/detail/pv/#available-modules", "title": "Available modules", "text": "

The overview below shows which pv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pv, load one of these modules using a module load command like:

                  module load pv/1.7.24-GCCcore-12.3.0\n
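
pv is handy for watching the progress of data flowing through a pipe. A sketch with placeholder filenames:

pv input.dat | gzip > input.dat.gz\n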

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pv/1.7.24-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/py-cpuinfo/", "title": "py-cpuinfo", "text": ""}, {"location": "available_software/detail/py-cpuinfo/#available-modules", "title": "Available modules", "text": "

The overview below shows which py-cpuinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using py-cpuinfo, load one of these modules using a module load command like:

                  module load py-cpuinfo/9.0.0-GCCcore-12.2.0\n
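
A quick way to exercise the module is to print the CPU model of the node you are on; the exact dictionary keys returned by get_cpu_info() can differ between versions, so treat 'brand_raw' as an example:

python -c "import cpuinfo; print(cpuinfo.get_cpu_info()['brand_raw'])"\n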

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty py-cpuinfo/9.0.0-GCCcore-12.2.0 x x x x x x py-cpuinfo/9.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/py3Dmol/", "title": "py3Dmol", "text": ""}, {"location": "available_software/detail/py3Dmol/#available-modules", "title": "Available modules", "text": "

The overview below shows which py3Dmol installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using py3Dmol, load one of these modules using a module load command like:

                  module load py3Dmol/2.0.1.post1-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty py3Dmol/2.0.1.post1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pyBigWig/", "title": "pyBigWig", "text": ""}, {"location": "available_software/detail/pyBigWig/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyBigWig installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyBigWig, load one of these modules using a module load command like:

                  module load pyBigWig/0.3.18-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyBigWig/0.3.18-foss-2022a x x x x x x pyBigWig/0.3.18-foss-2021b x x x - x x pyBigWig/0.3.18-GCCcore-10.2.0 - x x x x x pyBigWig/0.3.17-GCCcore-9.3.0 - - x - x x"}, {"location": "available_software/detail/pyEGA3/", "title": "pyEGA3", "text": ""}, {"location": "available_software/detail/pyEGA3/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyEGA3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyEGA3, load one of these modules using a module load command like:

                  module load pyEGA3/5.0.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyEGA3/5.0.2-GCCcore-12.3.0 x x x x x x pyEGA3/4.0.0-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/pyGenomeTracks/", "title": "pyGenomeTracks", "text": ""}, {"location": "available_software/detail/pyGenomeTracks/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyGenomeTracks installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyGenomeTracks, load one of these modules using a module load command like:

                  module load pyGenomeTracks/3.8-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyGenomeTracks/3.8-foss-2022a x x x x x x pyGenomeTracks/3.7-foss-2021b x x x - x x"}, {"location": "available_software/detail/pySCENIC/", "title": "pySCENIC", "text": ""}, {"location": "available_software/detail/pySCENIC/#available-modules", "title": "Available modules", "text": "

The overview below shows which pySCENIC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pySCENIC, load one of these modules using a module load command like:

                  module load pySCENIC/0.10.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pySCENIC/0.10.3-intel-2020a-Python-3.8.2 - x x - x x pySCENIC/0.10.3-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyWannier90/", "title": "pyWannier90", "text": ""}, {"location": "available_software/detail/pyWannier90/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyWannier90 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyWannier90, load one of these modules using a module load command like:

                  module load pyWannier90/2021-12-07-gomkl-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyWannier90/2021-12-07-gomkl-2021a x x x - x x pyWannier90/2021-12-07-foss-2021a x x x - x x"}, {"location": "available_software/detail/pybedtools/", "title": "pybedtools", "text": ""}, {"location": "available_software/detail/pybedtools/#available-modules", "title": "Available modules", "text": "

The overview below shows which pybedtools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pybedtools, load one of these modules using a module load command like:

                  module load pybedtools/0.9.0-GCC-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pybedtools/0.9.0-GCC-12.2.0 x x x x x x pybedtools/0.9.0-GCC-11.3.0 x x x x x x pybedtools/0.8.2-GCC-11.2.0-Python-2.7.18 x x x x x x pybedtools/0.8.2-GCC-11.2.0 x x x - x x pybedtools/0.8.2-GCC-10.2.0-Python-2.7.18 - x x x x x pybedtools/0.8.2-GCC-10.2.0 - x x x x x pybedtools/0.8.1-foss-2019b - x x - x x"}, {"location": "available_software/detail/pybind11/", "title": "pybind11", "text": ""}, {"location": "available_software/detail/pybind11/#available-modules", "title": "Available modules", "text": "

The overview below shows which pybind11 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pybind11, load one of these modules using a module load command like:

                  module load pybind11/2.11.1-GCCcore-13.2.0\n
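
When building C++ extensions by hand you typically need the pybind11 header location; a quick check that the loaded module provides it:

python -c "import pybind11; print(pybind11.get_include())"\n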

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pybind11/2.11.1-GCCcore-13.2.0 x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x pybind11/2.9.2-GCCcore-11.3.0 x x x x x x pybind11/2.7.1-GCCcore-11.2.0-Python-2.7.18 x x x x x x pybind11/2.7.1-GCCcore-11.2.0 x x x x x x pybind11/2.6.2-GCCcore-10.3.0 x x x x x x pybind11/2.6.0-GCCcore-10.2.0 x x x x x x pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2 x x x x x x pybind11/2.4.3-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycocotools/", "title": "pycocotools", "text": ""}, {"location": "available_software/detail/pycocotools/#available-modules", "title": "Available modules", "text": "

The overview below shows which pycocotools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pycocotools, load one of these modules using a module load command like:

                  module load pycocotools/2.0.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pycocotools/2.0.4-foss-2021a x x x - x x pycocotools/2.0.1-foss-2019b-Python-3.7.4 - x x - x x pycocotools/2.0.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pycodestyle/", "title": "pycodestyle", "text": ""}, {"location": "available_software/detail/pycodestyle/#available-modules", "title": "Available modules", "text": "

The overview below shows which pycodestyle installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pycodestyle, load one of these modules using a module load command like:

                  module load pycodestyle/2.11.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pycodestyle/2.11.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/pydantic/", "title": "pydantic", "text": ""}, {"location": "available_software/detail/pydantic/#available-modules", "title": "Available modules", "text": "

The overview below shows which pydantic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pydantic, load one of these modules using a module load command like:

                  module load pydantic/2.5.3-GCCcore-12.3.0\n
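
A minimal, self-contained check of the module (the model and field names are made up): create_model builds a model on the fly and validates the input it is given:

python -c "from pydantic import create_model; Job = create_model('Job', name=(str, ...), cores=(int, 4)); print(Job(name='test'))"\n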

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydantic/2.5.3-GCCcore-12.3.0 x x x x x x pydantic/2.5.3-GCCcore-12.2.0 x x x x x x pydantic/1.10.13-GCCcore-12.3.0 x x x x x x pydantic/1.10.4-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/pydicom/", "title": "pydicom", "text": ""}, {"location": "available_software/detail/pydicom/#available-modules", "title": "Available modules", "text": "

The overview below shows which pydicom installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pydicom, load one of these modules using a module load command like:

                  module load pydicom/2.3.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydicom/2.3.0-GCCcore-11.3.0 x x x x x x pydicom/2.2.2-GCCcore-10.3.0 x x x - x x pydicom/2.1.2-GCCcore-10.2.0 x x x x x x pydicom/1.4.2-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/pydot/", "title": "pydot", "text": ""}, {"location": "available_software/detail/pydot/#available-modules", "title": "Available modules", "text": "

The overview below shows which pydot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pydot, load one of these modules using a module load command like:

                  module load pydot/1.4.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pydot/1.4.2-GCCcore-11.3.0 x x x x x x pydot/1.4.2-GCCcore-11.2.0 x x x x x x pydot/1.4.2-GCCcore-10.3.0 x x x x x x pydot/1.4.2-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/pyfaidx/", "title": "pyfaidx", "text": ""}, {"location": "available_software/detail/pyfaidx/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyfaidx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyfaidx, load one of these modules using a module load command like:

                  module load pyfaidx/0.7.2.1-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x pyfaidx/0.7.1-GCCcore-11.3.0 x x x x x x pyfaidx/0.7.0-GCCcore-11.2.0 x x x - x x pyfaidx/0.6.3.1-GCCcore-10.3.0 x x x - x x pyfaidx/0.5.9.5-GCCcore-10.2.0 - x x x x x pyfaidx/0.5.9.5-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyfasta/", "title": "pyfasta", "text": ""}, {"location": "available_software/detail/pyfasta/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyfasta installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyfasta, load one of these modules using a module load command like:

                  module load pyfasta/0.5.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyfasta/0.5.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygmo/", "title": "pygmo", "text": ""}, {"location": "available_software/detail/pygmo/#available-modules", "title": "Available modules", "text": "

The overview below shows which pygmo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pygmo, load one of these modules using a module load command like:

                  module load pygmo/2.16.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pygmo/2.16.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/pygraphviz/", "title": "pygraphviz", "text": ""}, {"location": "available_software/detail/pygraphviz/#available-modules", "title": "Available modules", "text": "

The overview below shows which pygraphviz installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pygraphviz, load one of these modules using a module load command like:

                  module load pygraphviz/1.11-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pygraphviz/1.11-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pyiron/", "title": "pyiron", "text": ""}, {"location": "available_software/detail/pyiron/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyiron installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyiron, load one of these modules using a module load command like:

                  module load pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyiron/0.2.6-hpcugent-2023-intel-2020a-Python-3.8.2 x x x x x x pyiron/0.2.6-hpcugent-2022c-intel-2020a-Python-3.8.2 - - - - - x pyiron/0.2.6-hpcugent-2022b-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2022-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2021-intel-2020a-Python-3.8.2 - x x - x - pyiron/0.2.6-hpcugent-2020-intel-2020a-Python-3.8.2 - x x - x -"}, {"location": "available_software/detail/pymatgen/", "title": "pymatgen", "text": ""}, {"location": "available_software/detail/pymatgen/#available-modules", "title": "Available modules", "text": "

The overview below shows which pymatgen installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pymatgen, load one of these modules using a module load command like:

                  module load pymatgen/2022.9.21-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymatgen/2022.9.21-foss-2022a x x x - x x pymatgen/2022.0.4-foss-2020b - x x x x x"}, {"location": "available_software/detail/pymbar/", "title": "pymbar", "text": ""}, {"location": "available_software/detail/pymbar/#available-modules", "title": "Available modules", "text": "

The overview below shows which pymbar installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pymbar, load one of these modules using a module load command like:

                  module load pymbar/3.0.3-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymbar/3.0.3-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pymca/", "title": "pymca", "text": ""}, {"location": "available_software/detail/pymca/#available-modules", "title": "Available modules", "text": "

The overview below shows which pymca installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pymca, load one of these modules using a module load command like:

                  module load pymca/5.6.3-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pymca/5.6.3-foss-2020b - x x x x x"}, {"location": "available_software/detail/pyobjcryst/", "title": "pyobjcryst", "text": ""}, {"location": "available_software/detail/pyobjcryst/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyobjcryst installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyobjcryst, load one of these modules using a module load command like:

                  module load pyobjcryst/2.2.1-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyobjcryst/2.2.1-intel-2020a-Python-3.8.2 - - - - - x pyobjcryst/2.2.1-foss-2021b x x x - x x pyobjcryst/2.1.0.post2-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/pyodbc/", "title": "pyodbc", "text": ""}, {"location": "available_software/detail/pyodbc/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyodbc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyodbc, load one of these modules using a module load command like:

                  module load pyodbc/4.0.39-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyodbc/4.0.39-foss-2022b x x x x x x"}, {"location": "available_software/detail/pyparsing/", "title": "pyparsing", "text": ""}, {"location": "available_software/detail/pyparsing/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyparsing installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyparsing, load one of these modules using a module load command like:

                  module load pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyparsing/2.4.6-GCCcore-8.3.0-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/pyproj/", "title": "pyproj", "text": ""}, {"location": "available_software/detail/pyproj/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyproj installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyproj, load one of these modules using a module load command like:

                  module load pyproj/3.6.0-GCCcore-12.3.0\n
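
As a small example, the coordinates below (roughly Antwerp, given in longitude/latitude order) are reprojected from WGS84 to Web Mercator; the values are purely illustrative:

python -c "from pyproj import Transformer; t = Transformer.from_crs('EPSG:4326', 'EPSG:3857', always_xy=True); print(t.transform(4.40, 51.22))"\n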

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyproj/3.6.0-GCCcore-12.3.0 x x x x x x pyproj/3.5.0-GCCcore-12.2.0 x x x x x x pyproj/3.4.0-GCCcore-11.3.0 x x x x x x pyproj/3.3.1-GCCcore-11.2.0 x x x - x x pyproj/3.0.1-GCCcore-10.2.0 - x x x x x pyproj/2.6.1.post1-GCCcore-9.3.0-Python-3.8.2 - x x - x x pyproj/2.4.2-GCCcore-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyro-api/", "title": "pyro-api", "text": ""}, {"location": "available_software/detail/pyro-api/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyro-api installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyro-api, load one of these modules using a module load command like:

                  module load pyro-api/0.1.2-fosscuda-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyro-api/0.1.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pyro-ppl/", "title": "pyro-ppl", "text": ""}, {"location": "available_software/detail/pyro-ppl/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyro-ppl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyro-ppl, load one of these modules using a module load command like:

                  module load pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyro-ppl/1.8.4-foss-2022a-CUDA-11.7.0 x - x - x - pyro-ppl/1.8.4-foss-2022a x x x x x x pyro-ppl/1.5.2-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/pysamstats/", "title": "pysamstats", "text": ""}, {"location": "available_software/detail/pysamstats/#available-modules", "title": "Available modules", "text": "

The overview below shows which pysamstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pysamstats, load one of these modules using a module load command like:

                  module load pysamstats/1.1.2-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pysamstats/1.1.2-foss-2020b - x x x x x"}, {"location": "available_software/detail/pysndfx/", "title": "pysndfx", "text": ""}, {"location": "available_software/detail/pysndfx/#available-modules", "title": "Available modules", "text": "

The overview below shows which pysndfx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pysndfx, load one of these modules using a module load command like:

                  module load pysndfx/0.3.6-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pysndfx/0.3.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pyspoa/", "title": "pyspoa", "text": ""}, {"location": "available_software/detail/pyspoa/#available-modules", "title": "Available modules", "text": "

The overview below shows which pyspoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pyspoa, load one of these modules using a module load command like:

                  module load pyspoa/0.0.9-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pyspoa/0.0.9-GCC-11.3.0 x x x x x x pyspoa/0.0.8-GCC-11.2.0 x x x - x x pyspoa/0.0.8-GCC-10.3.0 x x x - x x pyspoa/0.0.8-GCC-10.2.0 - x x x x x pyspoa/0.0.4-GCC-8.3.0-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pytest-flakefinder/", "title": "pytest-flakefinder", "text": ""}, {"location": "available_software/detail/pytest-flakefinder/#available-modules", "title": "Available modules", "text": "

The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest-flakefinder, load one of these modules using a module load command like:

                  module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pytest-rerunfailures/", "title": "pytest-rerunfailures", "text": ""}, {"location": "available_software/detail/pytest-rerunfailures/#available-modules", "title": "Available modules", "text": "

The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest-rerunfailures, load one of these modules using a module load command like:

                  module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x pytest-rerunfailures/12.0-GCCcore-12.2.0 x x x x x x pytest-rerunfailures/11.1-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-shard/", "title": "pytest-shard", "text": ""}, {"location": "available_software/detail/pytest-shard/#available-modules", "title": "Available modules", "text": "

The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using pytest-shard, load one of these modules using a module load command like:

                  module load pytest-shard/0.1.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x pytest-shard/0.1.2-GCCcore-12.2.0 x x x x x x pytest-shard/0.1.2-GCCcore-11.3.0 x - x - x -"}, {"location": "available_software/detail/pytest-xdist/", "title": "pytest-xdist", "text": ""}, {"location": "available_software/detail/pytest-xdist/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest-xdist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pytest-xdist, load one of these modules using a module load command like:

                  module load pytest-xdist/3.3.1-GCCcore-12.3.0\n
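
                  pytest-xdist extends pytest with the -n option for spreading tests over multiple worker processes. Assuming pytest itself is also available in your environment and tests/ stands in for your test directory, a typical parallel run looks like:

                  pytest -n 4 tests/\n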

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest-xdist/3.3.1-GCCcore-12.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.3.0 x x x x x x pytest-xdist/2.5.0-GCCcore-11.2.0 x - x - x - pytest-xdist/2.3.0-GCCcore-10.3.0 x x x x x x pytest-xdist/2.3.0-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/pytest/", "title": "pytest", "text": ""}, {"location": "available_software/detail/pytest/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pytest installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pytest, load one of these modules using a module load command like:

                  module load pytest/7.4.2-GCCcore-12.3.0\n
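
                  After loading, a quick way to check that the expected pytest executable is picked up:

                  pytest --version\n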

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pytest/7.4.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/pythermalcomfort/", "title": "pythermalcomfort", "text": ""}, {"location": "available_software/detail/pythermalcomfort/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pythermalcomfort installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pythermalcomfort, load one of these modules using a module load command like:

                  module load pythermalcomfort/2.8.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pythermalcomfort/2.8.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-Levenshtein/", "title": "python-Levenshtein", "text": ""}, {"location": "available_software/detail/python-Levenshtein/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-Levenshtein installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-Levenshtein, load one of these modules using a module load command like:

                  module load python-Levenshtein/0.12.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-Levenshtein/0.12.1-foss-2020b - x x x x x python-Levenshtein/0.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-igraph/", "title": "python-igraph", "text": ""}, {"location": "available_software/detail/python-igraph/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-igraph installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-igraph, load one of these modules using a module load command like:

                  module load python-igraph/0.11.4-foss-2023a\n
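
                  Note that the package is imported as igraph, not python-igraph. Assuming you use the Python interpreter provided by the module's toolchain, a quick import check is:

                  python -c 'import igraph; print(igraph.__version__)'\n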

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-igraph/0.11.4-foss-2023a x x x x x x python-igraph/0.10.3-foss-2022a x x x x x x python-igraph/0.9.8-foss-2021b x x x x x x python-igraph/0.9.6-foss-2021a x x x x x x python-igraph/0.9.0-fosscuda-2020b - - - - x - python-igraph/0.9.0-foss-2020b - x x x x x python-igraph/0.8.0-foss-2020a - x x - x x"}, {"location": "available_software/detail/python-irodsclient/", "title": "python-irodsclient", "text": ""}, {"location": "available_software/detail/python-irodsclient/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-irodsclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-irodsclient, load one of these modules using a module load command like:

                  module load python-irodsclient/1.1.4-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-irodsclient/1.1.4-GCCcore-11.2.0 x x x - x x python-irodsclient/1.1.4-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-isal/", "title": "python-isal", "text": ""}, {"location": "available_software/detail/python-isal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-isal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-isal, load one of these modules using a module load command like:

                  module load python-isal/1.1.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-isal/1.1.0-GCCcore-11.3.0 x x x x x x python-isal/0.11.1-GCCcore-11.2.0 x x x - x x python-isal/0.11.1-GCCcore-10.2.0 - x x x x x python-isal/0.11.0-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/python-louvain/", "title": "python-louvain", "text": ""}, {"location": "available_software/detail/python-louvain/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-louvain installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-louvain, load one of these modules using a module load command like:

                  module load python-louvain/0.16-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-louvain/0.16-foss-2022a x x x x x x"}, {"location": "available_software/detail/python-parasail/", "title": "python-parasail", "text": ""}, {"location": "available_software/detail/python-parasail/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-parasail installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-parasail, load one of these modules using a module load command like:

                  module load python-parasail/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-parasail/1.3.3-foss-2022a x x x x x x python-parasail/1.2.4-fosscuda-2020b - - - - x - python-parasail/1.2.4-foss-2021b x x x - x x python-parasail/1.2.4-foss-2021a x x x - x x python-parasail/1.2.2-intel-2020a-Python-3.8.2 - x x - x x python-parasail/1.2-intel-2019b-Python-3.7.4 - x x - x x python-parasail/1.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/python-telegram-bot/", "title": "python-telegram-bot", "text": ""}, {"location": "available_software/detail/python-telegram-bot/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-telegram-bot installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-telegram-bot, load one of these modules using a module load command like:

                  module load python-telegram-bot/20.0a0-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-telegram-bot/20.0a0-GCCcore-10.2.0 x x x - x x"}, {"location": "available_software/detail/python-weka-wrapper3/", "title": "python-weka-wrapper3", "text": ""}, {"location": "available_software/detail/python-weka-wrapper3/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which python-weka-wrapper3 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using python-weka-wrapper3, load one of these modules using a module load command like:

                  module load python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty python-weka-wrapper3/0.1.11-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/pythran/", "title": "pythran", "text": ""}, {"location": "available_software/detail/pythran/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which pythran installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using pythran, load one of these modules using a module load command like:

                  module load pythran/0.9.4.post1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty pythran/0.9.4.post1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qcat/", "title": "qcat", "text": ""}, {"location": "available_software/detail/qcat/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which qcat installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using qcat, load one of these modules using a module load command like:

                  module load qcat/1.1.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty qcat/1.1.0-intel-2020a-Python-3.8.2 - x x - x x qcat/1.1.0-intel-2019b-Python-3.7.4 - x x - x x qcat/1.1.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/qnorm/", "title": "qnorm", "text": ""}, {"location": "available_software/detail/qnorm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which qnorm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using qnorm, load one of these modules using a module load command like:

                  module load qnorm/0.8.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty qnorm/0.8.1-foss-2022a x x x x x x"}, {"location": "available_software/detail/rMATS-turbo/", "title": "rMATS-turbo", "text": ""}, {"location": "available_software/detail/rMATS-turbo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rMATS-turbo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rMATS-turbo, load one of these modules using a module load command like:

                  module load rMATS-turbo/4.1.1-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rMATS-turbo/4.1.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/radian/", "title": "radian", "text": ""}, {"location": "available_software/detail/radian/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which radian installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using radian, load one of these modules using a module load command like:

                  module load radian/0.6.9-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty radian/0.6.9-foss-2022b x x x x x x"}, {"location": "available_software/detail/rasterio/", "title": "rasterio", "text": ""}, {"location": "available_software/detail/rasterio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rasterio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rasterio, load one of these modules using a module load command like:

                  module load rasterio/1.3.8-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rasterio/1.3.8-foss-2022b x x x x x x rasterio/1.2.10-foss-2021b x x x - x x rasterio/1.1.7-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rasterstats/", "title": "rasterstats", "text": ""}, {"location": "available_software/detail/rasterstats/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rasterstats installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rasterstats, load one of these modules using a module load command like:

                  module load rasterstats/0.15.0-foss-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rasterstats/0.15.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/rclone/", "title": "rclone", "text": ""}, {"location": "available_software/detail/rclone/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rclone installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rclone, load one of these modules using a module load command like:

                  module load rclone/1.65.2\n
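
                  rclone is a command-line tool, so once the module is loaded you can verify the installed version with:

                  rclone version\n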

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rclone/1.65.2 x x x x x x"}, {"location": "available_software/detail/re2c/", "title": "re2c", "text": ""}, {"location": "available_software/detail/re2c/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which re2c installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using re2c, load one of these modules using a module load command like:

                  module load re2c/3.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty re2c/3.1-GCCcore-12.3.0 x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x re2c/2.2-GCCcore-11.3.0 x x x x x x re2c/2.2-GCCcore-11.2.0 x x x x x x re2c/2.1.1-GCCcore-10.3.0 x x x x x x re2c/2.0.3-GCCcore-10.2.0 x x x x x x re2c/1.3-GCCcore-9.3.0 - x x - x x re2c/1.2.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/redis-py/", "title": "redis-py", "text": ""}, {"location": "available_software/detail/redis-py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which redis-py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using redis-py, load one of these modules using a module load command like:

                  module load redis-py/4.5.1-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty redis-py/4.5.1-foss-2022a x x x x x x redis-py/4.3.3-foss-2021b x x x - x x redis-py/4.3.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/regionmask/", "title": "regionmask", "text": ""}, {"location": "available_software/detail/regionmask/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which regionmask installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using regionmask, load one of these modules using a module load command like:

                  module load regionmask/0.10.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty regionmask/0.10.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/request/", "title": "request", "text": ""}, {"location": "available_software/detail/request/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which request installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using request, load one of these modules using a module load command like:

                  module load request/2.88.1-fosscuda-2020b-nodejs-12.19.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty request/2.88.1-fosscuda-2020b-nodejs-12.19.0 - - - - x -"}, {"location": "available_software/detail/rethinking/", "title": "rethinking", "text": ""}, {"location": "available_software/detail/rethinking/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rethinking installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rethinking, load one of these modules using a module load command like:

                  module load rethinking/2.40-20230914-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rethinking/2.40-20230914-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/rgdal/", "title": "rgdal", "text": ""}, {"location": "available_software/detail/rgdal/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rgdal installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rgdal, load one of these modules using a module load command like:

                  module load rgdal/1.5-23-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rgdal/1.5-23-foss-2021a-R-4.1.0 - x x - x x rgdal/1.5-23-foss-2020b-R-4.0.4 - x x x x x rgdal/1.5-16-foss-2020a-R-4.0.0 - x x - x x rgdal/1.4-8-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rgeos/", "title": "rgeos", "text": ""}, {"location": "available_software/detail/rgeos/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rgeos installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rgeos, load one of these modules using a module load command like:

                  module load rgeos/0.5-5-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rgeos/0.5-5-foss-2021a-R-4.1.0 - x x - x x rgeos/0.5-5-foss-2020a-R-4.0.0 - x x - x x rgeos/0.5-2-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rickflow/", "title": "rickflow", "text": ""}, {"location": "available_software/detail/rickflow/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rickflow installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rickflow, load one of these modules using a module load command like:

                  module load rickflow/0.7.0-intel-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rickflow/0.7.0-intel-2019b-Python-3.7.4 - x x - x x rickflow/0.7.0-20200529-intel-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rioxarray/", "title": "rioxarray", "text": ""}, {"location": "available_software/detail/rioxarray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rioxarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rioxarray, load one of these modules using a module load command like:

                  module load rioxarray/0.11.1-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rioxarray/0.11.1-foss-2021b x x x - x x"}, {"location": "available_software/detail/rjags/", "title": "rjags", "text": ""}, {"location": "available_software/detail/rjags/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rjags installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rjags, load one of these modules using a module load command like:

                  module load rjags/4-13-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rjags/4-13-foss-2022a-R-4.2.1 x x x x x x rjags/4-13-foss-2021b-R-4.2.0 x x x - x x rjags/4-10-foss-2020b-R-4.0.3 x x x x x x"}, {"location": "available_software/detail/rmarkdown/", "title": "rmarkdown", "text": ""}, {"location": "available_software/detail/rmarkdown/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rmarkdown installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rmarkdown, load one of these modules using a module load command like:

                  module load rmarkdown/2.20-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rmarkdown/2.20-foss-2021a-R-4.1.0 - x x x x x"}, {"location": "available_software/detail/rpy2/", "title": "rpy2", "text": ""}, {"location": "available_software/detail/rpy2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rpy2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rpy2, load one of these modules using a module load command like:

                  module load rpy2/3.5.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rpy2/3.5.10-foss-2022a x x x x x x rpy2/3.4.5-foss-2021b x x x x x x rpy2/3.4.5-foss-2021a x x x x x x rpy2/3.2.6-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/rstanarm/", "title": "rstanarm", "text": ""}, {"location": "available_software/detail/rstanarm/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rstanarm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rstanarm, load one of these modules using a module load command like:

                  module load rstanarm/2.19.3-foss-2019b-R-3.6.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rstanarm/2.19.3-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/rstudio/", "title": "rstudio", "text": ""}, {"location": "available_software/detail/rstudio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which rstudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using rstudio, load one of these modules using a module load command like:

                  module load rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty rstudio/1.3.959-foss-2020a-Java-11-R-4.0.0 - x - - - -"}, {"location": "available_software/detail/ruamel.yaml/", "title": "ruamel.yaml", "text": ""}, {"location": "available_software/detail/ruamel.yaml/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ruamel.yaml installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ruamel.yaml, load one of these modules using a module load command like:

                  module load ruamel.yaml/0.17.32-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ruamel.yaml/0.17.32-GCCcore-12.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.3.0 x x x x x x ruamel.yaml/0.17.21-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/ruffus/", "title": "ruffus", "text": ""}, {"location": "available_software/detail/ruffus/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which ruffus installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using ruffus, load one of these modules using a module load command like:

                  module load ruffus/2.8.4-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty ruffus/2.8.4-foss-2021b x x x x x x"}, {"location": "available_software/detail/s3fs/", "title": "s3fs", "text": ""}, {"location": "available_software/detail/s3fs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which s3fs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using s3fs, load one of these modules using a module load command like:

                  module load s3fs/2023.12.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty s3fs/2023.12.2-foss-2023a x x x x x x"}, {"location": "available_software/detail/samblaster/", "title": "samblaster", "text": ""}, {"location": "available_software/detail/samblaster/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which samblaster installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using samblaster, load one of these modules using a module load command like:

                  module load samblaster/0.1.26-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty samblaster/0.1.26-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/samclip/", "title": "samclip", "text": ""}, {"location": "available_software/detail/samclip/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which samclip installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using samclip, load one of these modules using a module load command like:

                  module load samclip/0.4.0-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty samclip/0.4.0-GCCcore-11.2.0 x x x - x x samclip/0.4.0-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/sansa/", "title": "sansa", "text": ""}, {"location": "available_software/detail/sansa/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sansa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sansa, load one of these modules using a module load command like:

                  module load sansa/0.0.7-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sansa/0.0.7-gompi-2020b - x x x x x"}, {"location": "available_software/detail/sbt/", "title": "sbt", "text": ""}, {"location": "available_software/detail/sbt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sbt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sbt, load one of these modules using a module load command like:

                  module load sbt/1.3.13-Java-1.8\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sbt/1.3.13-Java-1.8 - - x - x -"}, {"location": "available_software/detail/scArches/", "title": "scArches", "text": ""}, {"location": "available_software/detail/scArches/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scArches installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scArches, load one of these modules using a module load command like:

                  module load scArches/0.5.6-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scArches/0.5.6-foss-2021a-CUDA-11.3.1 x - - - x - scArches/0.5.6-foss-2021a x x x x x x"}, {"location": "available_software/detail/scCODA/", "title": "scCODA", "text": ""}, {"location": "available_software/detail/scCODA/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scCODA installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scCODA, load one of these modules using a module load command like:

                  module load scCODA/0.1.9-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scCODA/0.1.9-foss-2021a x x x x x x"}, {"location": "available_software/detail/scGeneFit/", "title": "scGeneFit", "text": ""}, {"location": "available_software/detail/scGeneFit/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scGeneFit installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scGeneFit, load one of these modules using a module load command like:

                  module load scGeneFit/1.0.2-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scGeneFit/1.0.2-foss-2021a - x x - x x"}, {"location": "available_software/detail/scHiCExplorer/", "title": "scHiCExplorer", "text": ""}, {"location": "available_software/detail/scHiCExplorer/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scHiCExplorer installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scHiCExplorer, load one of these modules using a module load command like:

                  module load scHiCExplorer/7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scHiCExplorer/7-foss-2022a x x x x x x"}, {"location": "available_software/detail/scPred/", "title": "scPred", "text": ""}, {"location": "available_software/detail/scPred/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scPred installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scPred, load one of these modules using a module load command like:

                  module load scPred/1.9.2-foss-2021b-R-4.1.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scPred/1.9.2-foss-2021b-R-4.1.2 x x x - x x"}, {"location": "available_software/detail/scVelo/", "title": "scVelo", "text": ""}, {"location": "available_software/detail/scVelo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scVelo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scVelo, load one of these modules using a module load command like:

                  module load scVelo/0.2.5-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scVelo/0.2.5-foss-2022a x x x x x x scVelo/0.2.3-foss-2021a - x x - x x scVelo/0.1.24-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scanpy/", "title": "scanpy", "text": ""}, {"location": "available_software/detail/scanpy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scanpy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scanpy, load one of these modules using a module load command like:

                  module load scanpy/1.9.8-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scanpy/1.9.8-foss-2023a x x x x x x scanpy/1.9.1-foss-2022a x x x x x x scanpy/1.9.1-foss-2021b x x x x x x scanpy/1.8.2-foss-2021b x x x x x x scanpy/1.8.1-foss-2021a x x x x x x scanpy/1.8.1-foss-2020b - x x x x x"}, {"location": "available_software/detail/sceasy/", "title": "sceasy", "text": ""}, {"location": "available_software/detail/sceasy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sceasy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sceasy, load one of these modules using a module load command like:

                  module load sceasy/0.0.7-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sceasy/0.0.7-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/scib-metrics/", "title": "scib-metrics", "text": ""}, {"location": "available_software/detail/scib-metrics/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scib-metrics installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scib-metrics, load one of these modules using a module load command like:

                  module load scib-metrics/0.3.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scib-metrics/0.3.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scib/", "title": "scib", "text": ""}, {"location": "available_software/detail/scib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scib, load one of these modules using a module load command like:

                  module load scib/1.1.3-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scib/1.1.3-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-bio/", "title": "scikit-bio", "text": ""}, {"location": "available_software/detail/scikit-bio/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-bio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-bio, load one of these modules using a module load command like:

                  module load scikit-bio/0.5.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-bio/0.5.7-foss-2022a x x x x x x scikit-bio/0.5.7-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-build/", "title": "scikit-build", "text": ""}, {"location": "available_software/detail/scikit-build/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-build installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-build, load one of these modules using a module load command like:

                  module load scikit-build/0.17.6-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x scikit-build/0.17.2-GCCcore-12.2.0 x x x x x x scikit-build/0.15.0-GCCcore-11.3.0 x x x x x x scikit-build/0.11.1-fosscuda-2020b x - - - x - scikit-build/0.11.1-foss-2020b - x x x x x scikit-build/0.11.1-GCCcore-10.3.0 x - x - x -"}, {"location": "available_software/detail/scikit-extremes/", "title": "scikit-extremes", "text": ""}, {"location": "available_software/detail/scikit-extremes/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-extremes installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-extremes, load one of these modules using a module load command like:

                  module load scikit-extremes/2022.4.10-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-extremes/2022.4.10-foss-2022a x x x x x x"}, {"location": "available_software/detail/scikit-image/", "title": "scikit-image", "text": ""}, {"location": "available_software/detail/scikit-image/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-image installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-image, load one of these modules using a module load command like:

                  module load scikit-image/0.19.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-image/0.19.3-foss-2022a x x x x x x scikit-image/0.19.1-foss-2021b x x x x x x scikit-image/0.18.3-foss-2021a x x x - x x scikit-image/0.18.1-fosscuda-2020b x - - - x - scikit-image/0.18.1-foss-2020b - x x x x x scikit-image/0.17.1-foss-2020a-Python-3.8.2 - x x - x x scikit-image/0.16.2-intel-2019b-Python-3.7.4 - x x - x x scikit-image/0.16.2-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scikit-learn/", "title": "scikit-learn", "text": ""}, {"location": "available_software/detail/scikit-learn/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-learn, load one of these modules using a module load command like:

                  module load scikit-learn/1.4.0-gfbf-2023b\n
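
                  The package is imported as sklearn. Assuming you use the Python interpreter from the same toolchain, you can verify the loaded version with:

                  python -c 'import sklearn; print(sklearn.__version__)'\n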

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-learn/1.4.0-gfbf-2023b x x x x x x scikit-learn/1.3.2-gfbf-2023b x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x scikit-learn/1.2.1-gfbf-2022b x x x x x x scikit-learn/1.1.2-intel-2022a x x x x x x scikit-learn/1.1.2-foss-2022a x x x x x x scikit-learn/1.0.1-intel-2021b x x x - x x scikit-learn/1.0.1-foss-2021b x x x x x x scikit-learn/0.24.2-foss-2021a x x x x x x scikit-learn/0.23.2-intel-2020b - x x - x x scikit-learn/0.23.2-fosscuda-2020b x - - - x - scikit-learn/0.23.2-foss-2020b - x x x x x scikit-learn/0.23.1-intel-2020a-Python-3.8.2 x x x x x x scikit-learn/0.23.1-foss-2020a-Python-3.8.2 - x x - x x scikit-learn/0.21.3-intel-2019b-Python-3.7.4 - x x - x x scikit-learn/0.21.3-foss-2019b-Python-3.7.4 x x x - x x scikit-learn/0.20.4-intel-2019b-Python-2.7.16 - x x - x x scikit-learn/0.20.4-foss-2021b-Python-2.7.18 x x x x x x scikit-learn/0.20.4-foss-2020b-Python-2.7.18 - x x x x x"}, {"location": "available_software/detail/scikit-misc/", "title": "scikit-misc", "text": ""}, {"location": "available_software/detail/scikit-misc/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-misc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-misc, load one of these modules using a module load command like:

                  module load scikit-misc/0.1.4-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-misc/0.1.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/scikit-optimize/", "title": "scikit-optimize", "text": ""}, {"location": "available_software/detail/scikit-optimize/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scikit-optimize installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scikit-optimize, load one of these modules using a module load command like:

                  module load scikit-optimize/0.9.0-foss-2021a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scikit-optimize/0.9.0-foss-2021a x x x - x x"}, {"location": "available_software/detail/scipy/", "title": "scipy", "text": ""}, {"location": "available_software/detail/scipy/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scipy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scipy, load one of these modules using a module load command like:

                  module load scipy/1.4.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scipy/1.4.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/scrublet/", "title": "scrublet", "text": ""}, {"location": "available_software/detail/scrublet/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scrublet installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scrublet, load one of these modules using a module load command like:

                  module load scrublet/0.2.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scrublet/0.2.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/scvi-tools/", "title": "scvi-tools", "text": ""}, {"location": "available_software/detail/scvi-tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which scvi-tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using scvi-tools, load one of these modules using a module load command like:

                  module load scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty scvi-tools/0.16.4-foss-2021a-CUDA-11.3.1 x - - - x - scvi-tools/0.16.4-foss-2021a x x x x x x"}, {"location": "available_software/detail/segemehl/", "title": "segemehl", "text": ""}, {"location": "available_software/detail/segemehl/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which segemehl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using segemehl, load one of these modules using a module load command like:

                  module load segemehl/0.3.4-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty segemehl/0.3.4-GCC-11.2.0 x x x x x x segemehl/0.3.4-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/segmentation-models/", "title": "segmentation-models", "text": ""}, {"location": "available_software/detail/segmentation-models/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which segmentation-models installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using segmentation-models, load one of these modules using a module load command like:

                  module load segmentation-models/1.0.1-foss-2019b-Python-3.7.4\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty segmentation-models/1.0.1-foss-2019b-Python-3.7.4 - x - - - x"}, {"location": "available_software/detail/semla/", "title": "semla", "text": ""}, {"location": "available_software/detail/semla/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which semla installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using semla, load one of these modules using a module load command like:

                  module load semla/1.1.6-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty semla/1.1.6-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/seqtk/", "title": "seqtk", "text": ""}, {"location": "available_software/detail/seqtk/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which seqtk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using seqtk, load one of these modules using a module load command like:

                  module load seqtk/1.4-GCC-12.3.0\n
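
                  As a usage sketch (with reads.fastq as a placeholder input file), the seq subcommand can convert FASTQ to FASTA:

                  seqtk seq -A reads.fastq > reads.fasta\n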

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty seqtk/1.4-GCC-12.3.0 x x x x x x seqtk/1.3-GCC-11.2.0 x x x - x x seqtk/1.3-GCC-10.2.0 - x x x x x seqtk/1.3-GCC-8.3.0 - x x - x x"}, {"location": "available_software/detail/setuptools-rust/", "title": "setuptools-rust", "text": ""}, {"location": "available_software/detail/setuptools-rust/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using setuptools-rust, load one of these modules using a module load command like:

                  module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/setuptools/", "title": "setuptools", "text": ""}, {"location": "available_software/detail/setuptools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which setuptools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using setuptools, load one of these modules using a module load command like:

                  module load setuptools/64.0.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty setuptools/64.0.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/sf/", "title": "sf", "text": ""}, {"location": "available_software/detail/sf/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which sf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using sf, load one of these modules using a module load command like:

                  module load sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sf/0.9-5-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/shovill/", "title": "shovill", "text": ""}, {"location": "available_software/detail/shovill/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which shovill installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using shovill, load one of these modules using a module load command like:

                  module load shovill/1.1.0-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty shovill/1.1.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/silhouetteRank/", "title": "silhouetteRank", "text": ""}, {"location": "available_software/detail/silhouetteRank/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which silhouetteRank installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using silhouetteRank, load one of these modules using a module load command like:

                  module load silhouetteRank/1.0.5.13-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty silhouetteRank/1.0.5.13-foss-2022a x x x x x x"}, {"location": "available_software/detail/silx/", "title": "silx", "text": ""}, {"location": "available_software/detail/silx/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which silx installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using silx, load one of these modules using a module load command like:

                  module load silx/0.14.0-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty silx/0.14.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/slepc4py/", "title": "slepc4py", "text": ""}, {"location": "available_software/detail/slepc4py/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which slepc4py installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using slepc4py, load one of these modules using a module load command like:

                  module load slepc4py/3.17.2-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slepc4py/3.17.2-foss-2022a x x x x x x slepc4py/3.15.1-foss-2021a - x x - x x slepc4py/3.12.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/slow5tools/", "title": "slow5tools", "text": ""}, {"location": "available_software/detail/slow5tools/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which slow5tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using slow5tools, load one of these modules using a module load command like:

                  module load slow5tools/0.4.0-gompi-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slow5tools/0.4.0-gompi-2021b x x x - x x"}, {"location": "available_software/detail/slurm-drmaa/", "title": "slurm-drmaa", "text": ""}, {"location": "available_software/detail/slurm-drmaa/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which slurm-drmaa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using slurm-drmaa, load one of these modules using a module load command like:

                  module load slurm-drmaa/1.1.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty slurm-drmaa/1.1.3-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/smfishHmrf/", "title": "smfishHmrf", "text": ""}, {"location": "available_software/detail/smfishHmrf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which smfishHmrf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smfishHmrf, load one of these modules using a module load command like:

                  module load smfishHmrf/1.3.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smfishHmrf/1.3.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/smithwaterman/", "title": "smithwaterman", "text": ""}, {"location": "available_software/detail/smithwaterman/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which smithwaterman installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smithwaterman, load one of these modules using a module load command like:

                  module load smithwaterman/20160702-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smithwaterman/20160702-GCCcore-11.3.0 x x x x x x smithwaterman/20160702-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/smooth-topk/", "title": "smooth-topk", "text": ""}, {"location": "available_software/detail/smooth-topk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which smooth-topk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using smooth-topk, load one of these modules using a module load command like:

                  module load smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty smooth-topk/1.0-20210817-foss-2021a-CUDA-11.3.1 x - - - x - smooth-topk/1.0-20210817-foss-2021a - x x - x x"}, {"location": "available_software/detail/snakemake/", "title": "snakemake", "text": ""}, {"location": "available_software/detail/snakemake/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which snakemake installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snakemake, load one of these modules using a module load command like:

                  module load snakemake/8.4.2-foss-2023a\n
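
                   Once the module is loaded, the snakemake command is available on the node you are working on. A minimal sketch, assuming a Snakefile already exists in the current directory (the core count is just an example):

                   module load snakemake/8.4.2-foss-2023a
                   snakemake --cores 4 --dry-run   # show which jobs would run, without executing anything
                   snakemake --cores 4             # run the workflow with at most 4 parallel cores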

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snakemake/8.4.2-foss-2023a x x x x x x snakemake/7.32.3-foss-2022b x x x x x x snakemake/7.22.0-foss-2022a x x x x x x snakemake/7.18.2-foss-2021b x x x - x x snakemake/6.10.0-foss-2021b x x x - x x snakemake/6.1.0-foss-2020b - x x x x x snakemake/5.26.1-intel-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/snappy/", "title": "snappy", "text": ""}, {"location": "available_software/detail/snappy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which snappy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snappy, load one of these modules using a module load command like:

                  module load snappy/1.1.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snappy/1.1.10-GCCcore-12.3.0 x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x snappy/1.1.9-GCCcore-11.3.0 x x x x x x snappy/1.1.9-GCCcore-11.2.0 x x x x x x snappy/1.1.8-GCCcore-10.3.0 x x x x x x snappy/1.1.8-GCCcore-10.2.0 x x x x x x snappy/1.1.8-GCCcore-9.3.0 - x x - x x snappy/1.1.7-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/snippy/", "title": "snippy", "text": ""}, {"location": "available_software/detail/snippy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which snippy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snippy, load one of these modules using a module load command like:

                  module load snippy/4.6.0-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snippy/4.6.0-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/snp-sites/", "title": "snp-sites", "text": ""}, {"location": "available_software/detail/snp-sites/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which snp-sites installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snp-sites, load one of these modules using a module load command like:

                  module load snp-sites/2.5.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snp-sites/2.5.1-GCCcore-10.2.0 - x x - x x"}, {"location": "available_software/detail/snpEff/", "title": "snpEff", "text": ""}, {"location": "available_software/detail/snpEff/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which snpEff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using snpEff, load one of these modules using a module load command like:

                  module load snpEff/5.0e-GCCcore-10.2.0-Java-13\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty snpEff/5.0e-GCCcore-10.2.0-Java-13 - x x - x x"}, {"location": "available_software/detail/solo/", "title": "solo", "text": ""}, {"location": "available_software/detail/solo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which solo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using solo, load one of these modules using a module load command like:

                  module load solo/1.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty solo/1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/sonic/", "title": "sonic", "text": ""}, {"location": "available_software/detail/sonic/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sonic installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sonic, load one of these modules using a module load command like:

                  module load sonic/20180202-gompi-2020a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sonic/20180202-gompi-2020a - x x - x x"}, {"location": "available_software/detail/spaCy/", "title": "spaCy", "text": ""}, {"location": "available_software/detail/spaCy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which spaCy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spaCy, load one of these modules using a module load command like:

                  module load spaCy/3.4.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spaCy/3.4.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/spaln/", "title": "spaln", "text": ""}, {"location": "available_software/detail/spaln/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which spaln installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spaln, load one of these modules using a module load command like:

                  module load spaln/2.4.13f-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spaln/2.4.13f-GCC-11.3.0 x x x x x x spaln/2.4.12-GCC-11.2.0 x x x x x x spaln/2.4.12-GCC-10.2.0 x x x x x x spaln/2.4.03-iccifort-2019.5.281 - x x - x x"}, {"location": "available_software/detail/sparse-neighbors-search/", "title": "sparse-neighbors-search", "text": ""}, {"location": "available_software/detail/sparse-neighbors-search/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sparse-neighbors-search installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sparse-neighbors-search, load one of these modules using a module load command like:

                  module load sparse-neighbors-search/0.7-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sparse-neighbors-search/0.7-foss-2022a x x x x x x"}, {"location": "available_software/detail/sparsehash/", "title": "sparsehash", "text": ""}, {"location": "available_software/detail/sparsehash/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sparsehash installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sparsehash, load one of these modules using a module load command like:

                  module load sparsehash/2.0.4-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sparsehash/2.0.4-GCCcore-12.3.0 x x x x x x sparsehash/2.0.3-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/spatialreg/", "title": "spatialreg", "text": ""}, {"location": "available_software/detail/spatialreg/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which spatialreg installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spatialreg, load one of these modules using a module load command like:

                  module load spatialreg/1.1-8-foss-2021a-R-4.1.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spatialreg/1.1-8-foss-2021a-R-4.1.0 - x x - x x spatialreg/1.1-5-foss-2019b-R-3.6.2 - x x - x x"}, {"location": "available_software/detail/speech_tools/", "title": "speech_tools", "text": ""}, {"location": "available_software/detail/speech_tools/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which speech_tools installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using speech_tools, load one of these modules using a module load command like:

                  module load speech_tools/2.5.0-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty speech_tools/2.5.0-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/spglib-python/", "title": "spglib-python", "text": ""}, {"location": "available_software/detail/spglib-python/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which spglib-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spglib-python, load one of these modules using a module load command like:

                  module load spglib-python/2.0.0-intel-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spglib-python/2.0.0-intel-2022a x x x x x x spglib-python/2.0.0-foss-2022a x x x x x x spglib-python/1.16.3-intel-2021b x x x - x x spglib-python/1.16.3-foss-2021b x x x - x x spglib-python/1.16.1-gomkl-2021a x x x x x x spglib-python/1.16.0-intel-2020a-Python-3.8.2 x x x x x x spglib-python/1.16.0-fosscuda-2020b - - - - x - spglib-python/1.16.0-foss-2020b - x x x x x"}, {"location": "available_software/detail/spoa/", "title": "spoa", "text": ""}, {"location": "available_software/detail/spoa/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which spoa installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using spoa, load one of these modules using a module load command like:

                  module load spoa/4.0.7-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty spoa/4.0.7-GCC-11.3.0 x x x x x x spoa/4.0.7-GCC-11.2.0 x x x - x x spoa/4.0.7-GCC-10.3.0 x x x - x x spoa/4.0.7-GCC-10.2.0 - x x x x x spoa/4.0.0-GCC-8.3.0 - x x - x x spoa/3.4.0-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/stardist/", "title": "stardist", "text": ""}, {"location": "available_software/detail/stardist/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which stardist installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using stardist, load one of these modules using a module load command like:

                  module load stardist/0.8.3-foss-2021b-CUDA-11.4.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty stardist/0.8.3-foss-2021b-CUDA-11.4.1 x - - - x - stardist/0.8.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/stars/", "title": "stars", "text": ""}, {"location": "available_software/detail/stars/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which stars installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using stars, load one of these modules using a module load command like:

                  module load stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty stars/0.4-3-foss-2020a-R-4.0.0-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/statsmodels/", "title": "statsmodels", "text": ""}, {"location": "available_software/detail/statsmodels/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which statsmodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using statsmodels, load one of these modules using a module load command like:

                  module load statsmodels/0.14.1-gfbf-2023a\n
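
                   A quick way to verify that Python picks up the package from the loaded module (and not from a user-installed copy) is to print its version and location; this check is generic and not specific to statsmodels:

                   module load statsmodels/0.14.1-gfbf-2023a
                   python -c "import statsmodels; print(statsmodels.__version__, statsmodels.__file__)"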

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty statsmodels/0.14.1-gfbf-2023a x x x x x x statsmodels/0.14.0-gfbf-2022b x x x x x x statsmodels/0.13.1-intel-2021b x x x - x x statsmodels/0.13.1-foss-2022a x x x x x x statsmodels/0.13.1-foss-2021b x x x x x x statsmodels/0.12.2-foss-2021a x x x x x x statsmodels/0.12.1-intel-2020b - x x - x x statsmodels/0.12.1-fosscuda-2020b - - - - x - statsmodels/0.12.1-foss-2020b - x x x x x statsmodels/0.11.1-intel-2020a-Python-3.8.2 - x x - x x statsmodels/0.11.0-intel-2019b-Python-3.7.4 - x x - x x statsmodels/0.11.0-foss-2019b-Python-3.7.4 - x x - x x statsmodels/0.9.0-intel-2019b-Python-2.7.16 - x - - - x"}, {"location": "available_software/detail/suave/", "title": "suave", "text": ""}, {"location": "available_software/detail/suave/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which suave installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using suave, load one of these modules using a module load command like:

                  module load suave/20160529-foss-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty suave/20160529-foss-2020b - x x x x x"}, {"location": "available_software/detail/supernova/", "title": "supernova", "text": ""}, {"location": "available_software/detail/supernova/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which supernova installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using supernova, load one of these modules using a module load command like:

                  module load supernova/2.0.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty supernova/2.0.1 - - - - - x"}, {"location": "available_software/detail/swissknife/", "title": "swissknife", "text": ""}, {"location": "available_software/detail/swissknife/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which swissknife installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using swissknife, load one of these modules using a module load command like:

                  module load swissknife/1.80-GCCcore-8.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty swissknife/1.80-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/sympy/", "title": "sympy", "text": ""}, {"location": "available_software/detail/sympy/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which sympy installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using sympy, load one of these modules using a module load command like:

                  module load sympy/1.12-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty sympy/1.12-gfbf-2023a x x x x x x sympy/1.12-gfbf-2022b x x x x x x sympy/1.11.1-intel-2022a x x x x x x sympy/1.11.1-foss-2022a x x x - x x sympy/1.10.1-intel-2022a x x x x x x sympy/1.10.1-foss-2022a x x x - x x sympy/1.9-intel-2021b x x x x x x sympy/1.9-foss-2021b x x x - x x sympy/1.7.1-foss-2020b - x x x x x sympy/1.6.2-foss-2020a-Python-3.8.2 - x x - x x sympy/1.5.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/synapseclient/", "title": "synapseclient", "text": ""}, {"location": "available_software/detail/synapseclient/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which synapseclient installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using synapseclient, load one of these modules using a module load command like:

                  module load synapseclient/3.0.0-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty synapseclient/3.0.0-GCCcore-12.2.0 x x x x x x"}, {"location": "available_software/detail/synthcity/", "title": "synthcity", "text": ""}, {"location": "available_software/detail/synthcity/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which synthcity installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using synthcity, load one of these modules using a module load command like:

                  module load synthcity/0.2.4-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty synthcity/0.2.4-foss-2022a x x x x x x"}, {"location": "available_software/detail/tMAE/", "title": "tMAE", "text": ""}, {"location": "available_software/detail/tMAE/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tMAE installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tMAE, load one of these modules using a module load command like:

                  module load tMAE/1.0.0-foss-2020b-R-4.0.3\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tMAE/1.0.0-foss-2020b-R-4.0.3 - x x x x x"}, {"location": "available_software/detail/tabixpp/", "title": "tabixpp", "text": ""}, {"location": "available_software/detail/tabixpp/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tabixpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tabixpp, load one of these modules using a module load command like:

                  module load tabixpp/1.1.2-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tabixpp/1.1.2-GCC-11.3.0 x x x x x x tabixpp/1.1.0-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/task-spooler/", "title": "task-spooler", "text": ""}, {"location": "available_software/detail/task-spooler/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which task-spooler installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using task-spooler, load one of these modules using a module load command like:

                  module load task-spooler/1.0.2-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty task-spooler/1.0.2-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/taxator-tk/", "title": "taxator-tk", "text": ""}, {"location": "available_software/detail/taxator-tk/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which taxator-tk installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using taxator-tk, load one of these modules using a module load command like:

                  module load taxator-tk/1.3.3-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty taxator-tk/1.3.3-gompi-2020b - x - - - - taxator-tk/1.3.3-GCC-10.2.0 - x x x x x"}, {"location": "available_software/detail/tbb/", "title": "tbb", "text": ""}, {"location": "available_software/detail/tbb/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tbb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tbb, load one of these modules using a module load command like:

                  module load tbb/2021.5.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tbb/2021.5.0-GCCcore-11.3.0 x x x x x x tbb/2020.3-GCCcore-11.2.0 x x x x x x tbb/2020.3-GCCcore-10.3.0 - x x - x x tbb/2020.3-GCCcore-10.2.0 - x x x x x tbb/2020.1-GCCcore-9.3.0 - x x - x x tbb/2019_U9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tbl2asn/", "title": "tbl2asn", "text": ""}, {"location": "available_software/detail/tbl2asn/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tbl2asn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tbl2asn, load one of these modules using a module load command like:

                  module load tbl2asn/20220427-linux64\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tbl2asn/20220427-linux64 - x x x x x tbl2asn/25.8-linux64 - - - - - x"}, {"location": "available_software/detail/tcsh/", "title": "tcsh", "text": ""}, {"location": "available_software/detail/tcsh/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tcsh installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tcsh, load one of these modules using a module load command like:

                  module load tcsh/6.24.10-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tcsh/6.24.10-GCCcore-12.3.0 x x x x x x tcsh/6.22.04-GCCcore-10.3.0 x - - - x - tcsh/6.22.03-GCCcore-10.2.0 - x x x x x tcsh/6.22.02-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/tensorboard/", "title": "tensorboard", "text": ""}, {"location": "available_software/detail/tensorboard/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tensorboard installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorboard, load one of these modules using a module load command like:

                  module load tensorboard/2.10.0-foss-2022a\n
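
                   tensorboard serves a web interface, so on a cluster you would typically start it on a node and forward the chosen port to your own machine with an SSH tunnel (ssh -L). A minimal sketch; the log directory and port number are just examples:

                   module load tensorboard/2.10.0-foss-2022a
                   tensorboard --logdir ./runs --port 6006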

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorboard/2.10.0-foss-2022a x x x x x x tensorboard/2.8.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/tensorboardX/", "title": "tensorboardX", "text": ""}, {"location": "available_software/detail/tensorboardX/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tensorboardX installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorboardX, load one of these modules using a module load command like:

                  module load tensorboardX/2.6.2.2-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorboardX/2.6.2.2-foss-2023a x x x x x x tensorboardX/2.6.2.2-foss-2022b x x x x x x tensorboardX/2.5.1-foss-2022a x x x x x x tensorboardX/2.2-fosscuda-2020b-PyTorch-1.7.1 - - - - x - tensorboardX/2.2-foss-2020b-PyTorch-1.7.1 - x x x x x tensorboardX/2.1-fosscuda-2020b-PyTorch-1.7.1 - - - - x -"}, {"location": "available_software/detail/tensorflow-probability/", "title": "tensorflow-probability", "text": ""}, {"location": "available_software/detail/tensorflow-probability/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tensorflow-probability installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tensorflow-probability, load one of these modules using a module load command like:

                  module load tensorflow-probability/0.19.0-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tensorflow-probability/0.19.0-foss-2022a x x x x x x tensorflow-probability/0.14.0-foss-2021a x x x x x x"}, {"location": "available_software/detail/texinfo/", "title": "texinfo", "text": ""}, {"location": "available_software/detail/texinfo/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which texinfo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using texinfo, load one of these modules using a module load command like:

                  module load texinfo/6.7-GCCcore-9.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty texinfo/6.7-GCCcore-9.3.0 - x x - x x texinfo/6.7-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/texlive/", "title": "texlive", "text": ""}, {"location": "available_software/detail/texlive/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which texlive installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using texlive, load one of these modules using a module load command like:

                  module load texlive/20230313-GCC-12.3.0\n
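
                   With the module loaded you get a full TeX Live installation, including the usual command-line tools. A minimal sketch, where report.tex is a placeholder for your own document:

                   module load texlive/20230313-GCC-12.3.0
                   pdflatex report.tex    # produces report.pdf in the current directory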

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty texlive/20230313-GCC-12.3.0 x x x x x x texlive/20210324-GCC-11.2.0 - x x - x x"}, {"location": "available_software/detail/tidymodels/", "title": "tidymodels", "text": ""}, {"location": "available_software/detail/tidymodels/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tidymodels installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tidymodels, load one of these modules using a module load command like:

                  module load tidymodels/1.1.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tidymodels/1.1.0-foss-2022b x x x x x x"}, {"location": "available_software/detail/time/", "title": "time", "text": ""}, {"location": "available_software/detail/time/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which time installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using time, load one of these modules using a module load command like:

                  module load time/1.9-GCCcore-10.2.0\n
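
                   This module provides the standalone GNU time executable, which reports more detail (such as peak memory use) than the bash keyword of the same name. Because time is also a shell keyword, call the binary explicitly; ./my_app is a placeholder for your own program:

                   module load time/1.9-GCCcore-10.2.0
                   command time -v ./my_app    # bypass the shell keyword and get a verbose resource report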

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty time/1.9-GCCcore-10.2.0 - x x x x x time/1.9-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/timm/", "title": "timm", "text": ""}, {"location": "available_software/detail/timm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which timm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using timm, load one of these modules using a module load command like:

                  module load timm/0.9.2-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty timm/0.9.2-foss-2022a-CUDA-11.7.0 x - - - x - timm/0.6.13-foss-2022a-CUDA-11.7.0 x - - - x -"}, {"location": "available_software/detail/tmux/", "title": "tmux", "text": ""}, {"location": "available_software/detail/tmux/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tmux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tmux, load one of these modules using a module load command like:

                  module load tmux/3.2a\n
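
                   tmux is useful for keeping interactive work alive when your SSH connection drops: a detached session keeps running on the login node it was started on. A minimal sketch (the session name is just an example):

                   module load tmux/3.2a
                   tmux new -s mywork       # start a named session; detach later with Ctrl-b d
                   tmux attach -t mywork    # re-attach from a new SSH connection to the same login node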

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tmux/3.2a - x x - x x"}, {"location": "available_software/detail/tokenizers/", "title": "tokenizers", "text": ""}, {"location": "available_software/detail/tokenizers/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tokenizers installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tokenizers, load one of these modules using a module load command like:

                  module load tokenizers/0.13.3-GCCcore-12.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tokenizers/0.13.3-GCCcore-12.2.0 x x x x x x tokenizers/0.12.1-GCCcore-10.3.0 x x x - x x"}, {"location": "available_software/detail/torchaudio/", "title": "torchaudio", "text": ""}, {"location": "available_software/detail/torchaudio/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which torchaudio installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchaudio, load one of these modules using a module load command like:

                  module load torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0-CUDA-11.7.0 x - x - x - torchaudio/0.12.0-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchtext/", "title": "torchtext", "text": ""}, {"location": "available_software/detail/torchtext/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which torchtext installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchtext, load one of these modules using a module load command like:

                  module load torchtext/0.14.1-foss-2022a-PyTorch-1.12.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchtext/0.14.1-foss-2022a-PyTorch-1.12.0 x x x x x x"}, {"location": "available_software/detail/torchvf/", "title": "torchvf", "text": ""}, {"location": "available_software/detail/torchvf/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which torchvf installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchvf, load one of these modules using a module load command like:

                  module load torchvf/0.1.3-foss-2022a-CUDA-11.7.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchvf/0.1.3-foss-2022a-CUDA-11.7.0 x - - - x - torchvf/0.1.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/torchvision/", "title": "torchvision", "text": ""}, {"location": "available_software/detail/torchvision/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which torchvision installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using torchvision, load one of these modules using a module load command like:

                  module load torchvision/0.14.1-foss-2022b\n
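
                   The -CUDA builds are only useful on clusters with GPU nodes. After loading a CUDA-enabled module on a GPU node, a quick hedged check that the GPU is actually visible to PyTorch:

                   module load torchvision/0.13.1-foss-2022a-CUDA-11.7.0
                   python -c "import torch, torchvision; print(torch.cuda.is_available())"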

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty torchvision/0.14.1-foss-2022b x x x x x x torchvision/0.13.1-foss-2022a-CUDA-11.7.0 x - x - x - torchvision/0.13.1-foss-2022a x x x x x x torchvision/0.11.3-foss-2021a - x x - x x torchvision/0.11.1-foss-2021a-CUDA-11.3.1 x - - - x - torchvision/0.11.1-foss-2021a - x x - x x torchvision/0.8.2-fosscuda-2020b-PyTorch-1.7.1 x - - - x - torchvision/0.8.2-foss-2020b-PyTorch-1.7.1 - x x x x x torchvision/0.7.0-foss-2019b-Python-3.7.4-PyTorch-1.6.0 - - x - x x"}, {"location": "available_software/detail/tornado/", "title": "tornado", "text": ""}, {"location": "available_software/detail/tornado/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tornado installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tornado, load one of these modules using a module load command like:

                  module load tornado/6.3.2-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tornado/6.3.2-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/tqdm/", "title": "tqdm", "text": ""}, {"location": "available_software/detail/tqdm/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tqdm installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tqdm, load one of these modules using a module load command like:

                  module load tqdm/4.66.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tqdm/4.66.1-GCCcore-12.3.0 x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x tqdm/4.64.0-GCCcore-11.3.0 x x x x x x tqdm/4.62.3-GCCcore-11.2.0 x x x x x x tqdm/4.61.2-GCCcore-10.3.0 x x x x x x tqdm/4.60.0-GCCcore-10.2.0 - x x - x x tqdm/4.56.2-GCCcore-10.2.0 x x x x x x tqdm/4.47.0-GCCcore-9.3.0 x x x x x x tqdm/4.41.1-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/treatSens/", "title": "treatSens", "text": ""}, {"location": "available_software/detail/treatSens/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which treatSens installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using treatSens, load one of these modules using a module load command like:

                  module load treatSens/3.0-20201002-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty treatSens/3.0-20201002-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/trimAl/", "title": "trimAl", "text": ""}, {"location": "available_software/detail/trimAl/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which trimAl installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using trimAl, load one of these modules using a module load command like:

                  module load trimAl/1.4.1-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty trimAl/1.4.1-GCCcore-12.3.0 x x x x x x trimAl/1.4.1-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/tsne/", "title": "tsne", "text": ""}, {"location": "available_software/detail/tsne/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which tsne installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using tsne, load one of these modules using a module load command like:

                  module load tsne/0.1.8-intel-2019b-Python-2.7.16\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty tsne/0.1.8-intel-2019b-Python-2.7.16 - x x - x x"}, {"location": "available_software/detail/typing-extensions/", "title": "typing-extensions", "text": ""}, {"location": "available_software/detail/typing-extensions/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using typing-extensions, load one of these modules using a module load command like:

                  module load typing-extensions/4.9.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.9.0-GCCcore-12.2.0 x x x x x x typing-extensions/4.8.0-GCCcore-12.3.0 x x x x x x typing-extensions/4.3.0-GCCcore-11.3.0 x x x x x x typing-extensions/3.10.0.2-GCCcore-11.2.0 x x x x x x typing-extensions/3.10.0.0-GCCcore-10.3.0 x x x x x x typing-extensions/3.7.4.3-GCCcore-10.2.0 x x x x x x"}, {"location": "available_software/detail/umap-learn/", "title": "umap-learn", "text": ""}, {"location": "available_software/detail/umap-learn/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which umap-learn installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using umap-learn, load one of these modules using a module load command like:

                  module load umap-learn/0.5.5-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty umap-learn/0.5.5-foss-2023a x x x x x x umap-learn/0.5.3-foss-2022a x x x x x x umap-learn/0.5.3-foss-2021a x x x x x x umap-learn/0.4.6-fosscuda-2020b - - - - x -"}, {"location": "available_software/detail/umi4cPackage/", "title": "umi4cPackage", "text": ""}, {"location": "available_software/detail/umi4cPackage/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which umi4cPackage installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using umi4cPackage, load one of these modules using a module load command like:

                  module load umi4cPackage/20200116-foss-2020a-R-4.0.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty umi4cPackage/20200116-foss-2020a-R-4.0.0 - x x - x x"}, {"location": "available_software/detail/uncertainties/", "title": "uncertainties", "text": ""}, {"location": "available_software/detail/uncertainties/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which uncertainties installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using uncertainties, load one of these modules using a module load command like:

                  module load uncertainties/3.1.7-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty uncertainties/3.1.7-foss-2021b x x x x x x"}, {"location": "available_software/detail/uncertainty-calibration/", "title": "uncertainty-calibration", "text": ""}, {"location": "available_software/detail/uncertainty-calibration/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which uncertainty-calibration installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using uncertainty-calibration, load one of these modules using a module load command like:

                  module load uncertainty-calibration/0.0.9-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty uncertainty-calibration/0.0.9-foss-2021b x x x - x x"}, {"location": "available_software/detail/unimap/", "title": "unimap", "text": ""}, {"location": "available_software/detail/unimap/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which unimap installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using unimap, load one of these modules using a module load command like:

                  module load unimap/0.1-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty unimap/0.1-GCCcore-10.2.0 - x x x x x"}, {"location": "available_software/detail/unixODBC/", "title": "unixODBC", "text": ""}, {"location": "available_software/detail/unixODBC/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which unixODBC installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using unixODBC, load one of these modules using a module load command like:

                  module load unixODBC/2.3.11-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty unixODBC/2.3.11-foss-2022b x x x x x x"}, {"location": "available_software/detail/utf8proc/", "title": "utf8proc", "text": ""}, {"location": "available_software/detail/utf8proc/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which utf8proc installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using utf8proc, load one of these modules using a module load command like:

                  module load utf8proc/2.8.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x utf8proc/2.7.0-GCCcore-11.3.0 x x x x x x utf8proc/2.6.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/util-linux/", "title": "util-linux", "text": ""}, {"location": "available_software/detail/util-linux/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which util-linux installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using util-linux, load one of these modules using a module load command like:

                  module load util-linux/2.39-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty util-linux/2.39-GCCcore-12.3.0 x x x x x x util-linux/2.38.1-GCCcore-12.2.0 x x x x x x util-linux/2.38-GCCcore-11.3.0 x x x x x x util-linux/2.37-GCCcore-11.2.0 x x x x x x util-linux/2.36-GCCcore-10.3.0 x x x x x x util-linux/2.36-GCCcore-10.2.0 x x x x x x util-linux/2.35-GCCcore-9.3.0 x x x x x x util-linux/2.34-GCCcore-8.3.0 x x x - x x util-linux/2.33-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/vConTACT2/", "title": "vConTACT2", "text": ""}, {"location": "available_software/detail/vConTACT2/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which vConTACT2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vConTACT2, load one of these modules using a module load command like:

                  module load vConTACT2/0.11.3-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vConTACT2/0.11.3-foss-2022a x x x x x x"}, {"location": "available_software/detail/vaeda/", "title": "vaeda", "text": ""}, {"location": "available_software/detail/vaeda/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which vaeda installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vaeda, load one of these modules using a module load command like:

                  module load vaeda/0.0.30-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vaeda/0.0.30-foss-2022a x x x x x x"}, {"location": "available_software/detail/vbz_compression/", "title": "vbz_compression", "text": ""}, {"location": "available_software/detail/vbz_compression/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which vbz_compression installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vbz_compression, load one of these modules using a module load command like:

                  module load vbz_compression/1.0.1-gompi-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vbz_compression/1.0.1-gompi-2020b - x - - - -"}, {"location": "available_software/detail/vcflib/", "title": "vcflib", "text": ""}, {"location": "available_software/detail/vcflib/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which vcflib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vcflib, load one of these modules using a module load command like:

                  module load vcflib/1.0.9-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vcflib/1.0.9-foss-2022a-R-4.2.1 x x x x x x vcflib/1.0.2-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/velocyto/", "title": "velocyto", "text": ""}, {"location": "available_software/detail/velocyto/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which velocyto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using velocyto, load one of these modules using a module load command like:

                  module load velocyto/0.17.17-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty velocyto/0.17.17-intel-2020a-Python-3.8.2 - x x - x x velocyto/0.17.17-foss-2022a x x x x x x"}, {"location": "available_software/detail/virtualenv/", "title": "virtualenv", "text": ""}, {"location": "available_software/detail/virtualenv/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which virtualenv installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using virtualenv, load one of these modules using a module load command like:

                  module load virtualenv/20.24.6-GCCcore-13.2.0\n
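
                   virtualenv lets you create an isolated environment on top of the Python that comes with the module, so extra pip packages stay separate from your home directory. A minimal sketch; the environment path and package name are examples only:

                   module load virtualenv/20.24.6-GCCcore-13.2.0
                   virtualenv $VSC_DATA/venv_myproject            # create the environment
                   source $VSC_DATA/venv_myproject/bin/activate   # activate it in the current shell
                   pip install some-extra-package                 # installs inside the environment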

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x"}, {"location": "available_software/detail/vispr/", "title": "vispr", "text": ""}, {"location": "available_software/detail/vispr/#available-modules", "title": "Available modules", "text": "

                   The overview below shows which vispr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (newest to oldest).

                  To start using vispr, load one of these modules using a module load command like:

                  module load vispr/0.4.14-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vispr/0.4.14-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessce-python/", "title": "vitessce-python", "text": ""}, {"location": "available_software/detail/vitessce-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vitessce-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using vitessce-python, load one of these modules using a module load command like:

                  module load vitessce-python/20230222-foss-2022a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vitessce-python/20230222-foss-2022a x x x x x x"}, {"location": "available_software/detail/vitessceR/", "title": "vitessceR", "text": ""}, {"location": "available_software/detail/vitessceR/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vitessceR installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using vitessceR, load one of these modules using a module load command like:

                  module load vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vitessceR/0.99.0-20230110-foss-2022a-R-4.2.1 x x x x x x"}, {"location": "available_software/detail/vsc-mympirun/", "title": "vsc-mympirun", "text": ""}, {"location": "available_software/detail/vsc-mympirun/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vsc-mympirun installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using vsc-mympirun, load one of these modules using a module load command like:

                  module load vsc-mympirun/5.3.1\n
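
                  vsc-mympirun provides the mympirun launcher for MPI programs. A minimal sketch, where ./my_mpi_app is a placeholder for your own MPI executable:

                  mympirun ./my_mpi_app   # mympirun typically derives the number of ranks from the cores allocated to your job\n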

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vsc-mympirun/5.3.1 x x x x x x vsc-mympirun/5.3.0 x x x x x x vsc-mympirun/5.2.11 x x x x x x vsc-mympirun/5.2.10 x x x - x x vsc-mympirun/5.2.9 x x x - x x vsc-mympirun/5.2.7 x x x - x x vsc-mympirun/5.2.6 x x x - x x vsc-mympirun/5.2.5 - x - - - - vsc-mympirun/5.2.4 - x - - - - vsc-mympirun/5.2.3 - x - - - - vsc-mympirun/5.2.2 - x - - - - vsc-mympirun/5.2.0 - x - - - - vsc-mympirun/5.1.0 - x - - - -"}, {"location": "available_software/detail/vt/", "title": "vt", "text": ""}, {"location": "available_software/detail/vt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which vt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using vt, load one of these modules using a module load command like:

                  module load vt/0.57721-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty vt/0.57721-GCC-10.2.0 - x x - x -"}, {"location": "available_software/detail/wandb/", "title": "wandb", "text": ""}, {"location": "available_software/detail/wandb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wandb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wandb, load one of these modules using a module load command like:

                  module load wandb/0.13.6-GCC-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wandb/0.13.6-GCC-11.3.0 x x x - x x wandb/0.13.4-GCCcore-11.3.0 - - x - x -"}, {"location": "available_software/detail/waves2Foam/", "title": "waves2Foam", "text": ""}, {"location": "available_software/detail/waves2Foam/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which waves2Foam installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using waves2Foam, load one of these modules using a module load command like:

                  module load waves2Foam/20200703-foss-2019b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty waves2Foam/20200703-foss-2019b - x x - x x"}, {"location": "available_software/detail/wget/", "title": "wget", "text": ""}, {"location": "available_software/detail/wget/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wget installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wget, load one of these modules using a module load command like:

                  module load wget/1.21.1-GCCcore-10.3.0\n
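
                  After loading the module, wget can download files directly onto the cluster. A minimal sketch (the URL is only an example):

                  wget https://example.org/dataset.tar.gz   # saves the file in the current directory\n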

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wget/1.21.1-GCCcore-10.3.0 - x x x x x wget/1.20.3-GCCcore-10.2.0 x x x x x x wget/1.20.3-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/wgsim/", "title": "wgsim", "text": ""}, {"location": "available_software/detail/wgsim/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wgsim installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wgsim, load one of these modules using a module load command like:

                  module load wgsim/20111017-GCC-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wgsim/20111017-GCC-10.2.0 - x x - x x"}, {"location": "available_software/detail/worker/", "title": "worker", "text": ""}, {"location": "available_software/detail/worker/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which worker installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using worker, load one of these modules using a module load command like:

                  module load worker/1.6.13-iimpi-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty worker/1.6.13-iimpi-2022b x x x x x x worker/1.6.13-iimpi-2021b x x x - x x worker/1.6.12-foss-2021b x x x - x x worker/1.6.11-intel-2019b - x x - x x"}, {"location": "available_software/detail/wpebackend-fdo/", "title": "wpebackend-fdo", "text": ""}, {"location": "available_software/detail/wpebackend-fdo/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wpebackend-fdo installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wpebackend-fdo, load one of these modules using a module load command like:

                  module load wpebackend-fdo/1.13.1-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wpebackend-fdo/1.13.1-GCCcore-11.2.0 x x x x x x"}, {"location": "available_software/detail/wrapt/", "title": "wrapt", "text": ""}, {"location": "available_software/detail/wrapt/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wrapt installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wrapt, load one of these modules using a module load command like:

                  module load wrapt/1.15.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wrapt/1.15.0-gfbf-2023a x x x x x x wrapt/1.15.0-foss-2022b x x x x x x wrapt/1.15.0-foss-2022a x x x x x x"}, {"location": "available_software/detail/wrf-python/", "title": "wrf-python", "text": ""}, {"location": "available_software/detail/wrf-python/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wrf-python installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wrf-python, load one of these modules using a module load command like:

                  module load wrf-python/1.3.4.1-foss-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wrf-python/1.3.4.1-foss-2023a x x x x x x"}, {"location": "available_software/detail/wtdbg2/", "title": "wtdbg2", "text": ""}, {"location": "available_software/detail/wtdbg2/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wtdbg2 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wtdbg2, load one of these modules using a module load command like:

                  module load wtdbg2/2.5-GCCcore-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wtdbg2/2.5-GCCcore-11.2.0 x x x - x x"}, {"location": "available_software/detail/wxPython/", "title": "wxPython", "text": ""}, {"location": "available_software/detail/wxPython/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wxPython installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wxPython, load one of these modules using a module load command like:

                  module load wxPython/4.2.0-foss-2021b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wxPython/4.2.0-foss-2021b x x x x x x wxPython/4.1.1-foss-2021a x x x - x x"}, {"location": "available_software/detail/wxWidgets/", "title": "wxWidgets", "text": ""}, {"location": "available_software/detail/wxWidgets/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which wxWidgets installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using wxWidgets, load one of these modules using a module load command like:

                  module load wxWidgets/3.2.0-GCC-11.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty wxWidgets/3.2.0-GCC-11.2.0 x x x x x x"}, {"location": "available_software/detail/x264/", "title": "x264", "text": ""}, {"location": "available_software/detail/x264/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which x264 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using x264, load one of these modules using a module load command like:

                  module load x264/20230226-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty x264/20230226-GCCcore-12.3.0 x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x x264/20220620-GCCcore-11.3.0 x x x x x x x264/20210613-GCCcore-11.2.0 x x x x x x x264/20210414-GCCcore-10.3.0 x x x x x x x264/20201026-GCCcore-10.2.0 x x x x x x x264/20191217-GCCcore-9.3.0 - x x - x x x264/20190925-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/x265/", "title": "x265", "text": ""}, {"location": "available_software/detail/x265/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which x265 installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using x265, load one of these modules using a module load command like:

                  module load x265/3.5-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty x265/3.5-GCCcore-12.3.0 x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x x265/3.5-GCCcore-11.3.0 x x x x x x x265/3.5-GCCcore-11.2.0 x x x x x x x265/3.5-GCCcore-10.3.0 x x x x x x x265/3.3-GCCcore-10.2.0 x x x x x x x265/3.3-GCCcore-9.3.0 - x x - x x x265/3.2-GCCcore-8.3.0 x x x - x x"}, {"location": "available_software/detail/xESMF/", "title": "xESMF", "text": ""}, {"location": "available_software/detail/xESMF/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xESMF installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using xESMF, load one of these modules using a module load command like:

                  module load xESMF/0.3.0-intel-2020b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xESMF/0.3.0-intel-2020b - x x - x x xESMF/0.3.0-foss-2020a-Python-3.8.2 - x x - x x"}, {"location": "available_software/detail/xarray/", "title": "xarray", "text": ""}, {"location": "available_software/detail/xarray/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xarray installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using xarray, load one of these modules using a module load command like:

                  module load xarray/2023.9.0-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xarray/2023.9.0-gfbf-2023a x x x x x x xarray/2023.4.2-gfbf-2022b x x x x x x xarray/2022.6.0-foss-2022a x x x x x x xarray/0.20.1-intel-2021b x x x - x x xarray/0.20.1-foss-2021b x x x x x x xarray/0.19.0-foss-2021a x x x x x x xarray/0.16.2-intel-2020b - x x - x x xarray/0.16.2-fosscuda-2020b - - - - x - xarray/0.16.1-foss-2020a-Python-3.8.2 - x x - x x xarray/0.15.1-intel-2019b-Python-3.7.4 - x x - x x xarray/0.15.1-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/xorg-macros/", "title": "xorg-macros", "text": ""}, {"location": "available_software/detail/xorg-macros/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using xorg-macros, load one of these modules using a module load command like:

                  module load xorg-macros/1.20.0-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.3.0 x x x x x x xorg-macros/1.19.3-GCCcore-11.2.0 x x x x x x xorg-macros/1.19.3-GCCcore-10.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-10.2.0 x x x x x x xorg-macros/1.19.2-GCCcore-9.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.3.0 x x x x x x xorg-macros/1.19.2-GCCcore-8.2.0 - x - - - -"}, {"location": "available_software/detail/xprop/", "title": "xprop", "text": ""}, {"location": "available_software/detail/xprop/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xprop installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using xprop, load one of these modules using a module load command like:

                  module load xprop/1.2.5-GCCcore-10.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xprop/1.2.5-GCCcore-10.2.0 - x x x x x xprop/1.2.4-GCCcore-9.3.0 - x x - x x"}, {"location": "available_software/detail/xproto/", "title": "xproto", "text": ""}, {"location": "available_software/detail/xproto/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xproto installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using xproto, load one of these modules using a module load command like:

                  module load xproto/7.0.31-GCCcore-10.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xproto/7.0.31-GCCcore-10.3.0 - x x - x x xproto/7.0.31-GCCcore-8.3.0 - x x - x x"}, {"location": "available_software/detail/xtb/", "title": "xtb", "text": ""}, {"location": "available_software/detail/xtb/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xtb installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using xtb, load one of these modules using a module load command like:

                  module load xtb/6.6.1-gfbf-2023a\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xtb/6.6.1-gfbf-2023a x x x x x x"}, {"location": "available_software/detail/xxd/", "title": "xxd", "text": ""}, {"location": "available_software/detail/xxd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which xxd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using xxd, load one of these modules using a module load command like:

                  module load xxd/9.0.2112-GCCcore-12.3.0\n
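
                  xxd produces a hex dump of a file, which is useful for inspecting binary data. A minimal sketch (data.bin and dump.txt are placeholder file names):

                  xxd data.bin | head   # show the first lines of the hex dump\nxxd -r dump.txt > data.bin   # -r converts an xxd-style hex dump back into binary\n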

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty xxd/9.0.2112-GCCcore-12.3.0 x x x x x x xxd/9.0.1696-GCCcore-12.2.0 x x x x x x xxd/8.2.4220-GCCcore-11.3.0 x x x x x x xxd/8.2.4220-GCCcore-11.2.0 x x x - x x xxd/8.2.4220-GCCcore-10.3.0 - - - x - - xxd/8.2.4220-GCCcore-10.2.0 - - - x - -"}, {"location": "available_software/detail/yaff/", "title": "yaff", "text": ""}, {"location": "available_software/detail/yaff/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which yaff installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using yaff, load one of these modules using a module load command like:

                  module load yaff/1.6.0-intel-2020a-Python-3.8.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty yaff/1.6.0-intel-2020a-Python-3.8.2 x x x x x x yaff/1.6.0-intel-2019b-Python-3.7.4 - x x - x x yaff/1.6.0-foss-2019b-Python-3.7.4 - x x - x x"}, {"location": "available_software/detail/yaml-cpp/", "title": "yaml-cpp", "text": ""}, {"location": "available_software/detail/yaml-cpp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which yaml-cpp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using yaml-cpp, load one of these modules using a module load command like:

                  module load yaml-cpp/0.7.0-GCCcore-12.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty yaml-cpp/0.7.0-GCCcore-12.3.0 x x x x x x yaml-cpp/0.7.0-GCCcore-11.2.0 x x x - x x yaml-cpp/0.6.3-GCCcore-8.3.0 - - x - x x"}, {"location": "available_software/detail/zUMIs/", "title": "zUMIs", "text": ""}, {"location": "available_software/detail/zUMIs/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zUMIs installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using zUMIs, load one of these modules using a module load command like:

                  module load zUMIs/2.9.7-foss-2023a-R-4.3.2\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zUMIs/2.9.7-foss-2023a-R-4.3.2 x x x x x x"}, {"location": "available_software/detail/zarr/", "title": "zarr", "text": ""}, {"location": "available_software/detail/zarr/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zarr installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using zarr, load one of these modules using a module load command like:

                  module load zarr/2.16.0-foss-2022b\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zarr/2.16.0-foss-2022b x x x x x x zarr/2.13.3-foss-2022a x x x x x x zarr/2.13.3-foss-2021b x x x x x x"}, {"location": "available_software/detail/zfp/", "title": "zfp", "text": ""}, {"location": "available_software/detail/zfp/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zfp installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using zfp, load one of these modules using a module load command like:

                  module load zfp/1.0.0-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zfp/1.0.0-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib-ng/", "title": "zlib-ng", "text": ""}, {"location": "available_software/detail/zlib-ng/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zlib-ng installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using zlib-ng, load one of these modules using a module load command like:

                  module load zlib-ng/2.0.7-GCCcore-11.3.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zlib-ng/2.0.7-GCCcore-11.3.0 x x x x x x"}, {"location": "available_software/detail/zlib/", "title": "zlib", "text": ""}, {"location": "available_software/detail/zlib/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zlib installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using zlib, load one of these modules using a module load command like:

                  module load zlib/1.2.13-GCCcore-13.2.0\n

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zlib/1.2.13-GCCcore-13.2.0 x x x x x x zlib/1.2.13-GCCcore-12.3.0 x x x x x x zlib/1.2.13 x x x x x x zlib/1.2.12-GCCcore-12.2.0 x x x x x x zlib/1.2.12-GCCcore-11.3.0 x x x x x x zlib/1.2.12 x x x x x x zlib/1.2.11-GCCcore-11.2.0 x x x x x x zlib/1.2.11-GCCcore-10.3.0 x x x x x x zlib/1.2.11-GCCcore-10.2.0 x x x x x x zlib/1.2.11-GCCcore-9.3.0 x x x x x x zlib/1.2.11-GCCcore-8.3.0 x x x x x x zlib/1.2.11-GCCcore-8.2.0 - x - - - - zlib/1.2.11 x x x x x x"}, {"location": "available_software/detail/zstd/", "title": "zstd", "text": ""}, {"location": "available_software/detail/zstd/#available-modules", "title": "Available modules", "text": "

                  The overview below shows which zstd installations are available per HPC-UGent Tier-2 cluster, ordered by software version (new to old).

                  To start using zstd, load one of these modules using a module load command like:

                  module load zstd/1.5.5-GCCcore-13.2.0\n
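
                  zstd offers fast compression and decompression from the command line. A minimal sketch (data.txt is a placeholder file name):

                  zstd data.txt   # produces data.txt.zst and keeps the original file\nzstd -d data.txt.zst   # decompress back to data.txt\n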

                  (This data was automatically generated on Wed, 06 Mar 2024 at 15:51:26 CET)

                  accelgor doduo donphan gallade joltik skitty zstd/1.5.5-GCCcore-13.2.0 x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x zstd/1.5.2-GCCcore-11.3.0 x x x x x x zstd/1.5.0-GCCcore-11.2.0 x x x x x x zstd/1.4.9-GCCcore-10.3.0 x x x x x x zstd/1.4.5-GCCcore-10.2.0 x x x x x x zstd/1.4.4-GCCcore-9.3.0 - x x x x x zstd/1.4.4-GCCcore-8.3.0 x - - - x -"}, {"location": "sites/available_modules/", "title": "Available modules", "text": "
                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                  Or use the following command when you want to check whether specific software, a compiler, or an application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
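
                  Once you have found the module you need, load it by its exact name and verify with module list. A minimal sketch based on the listing above:

                  module load MATLAB/2022b-r5\nmodule list   # shows the modules currently loaded in your session\n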
                  "}, {"location": "sites/hpc_policies/", "title": "HPC Policies", "text": "

                  Everyone can get access to and use the HPC-UGent supercomputing infrastructure and services. The conditions that apply depend on your affiliation.

                  "}, {"location": "sites/hpc_policies/#access-for-staff-and-academics", "title": "Access for staff and academics", "text": ""}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-flemish-university-associations", "title": "Researchers and staff affiliated with Flemish university associations", "text": "
                  • Includes externally funded researchers registered in the personnel database (FWO, SBO, VIB, IMEC, etc.).

                  • Includes researchers from all VSC partners.

                  • Usage is free of charge.

                  • Use your account credentials at your affiliated university to request a VSC-id and connect.

                  • See Getting an HPC Account.

                  "}, {"location": "sites/hpc_policies/#researchers-and-staff-affiliated-with-other-flemish-or-federal-research-institutes", "title": "Researchers and staff affiliated with other Flemish or federal research institutes", "text": "
                  • Includes researchers from institutes such as INBO, ILVO, and RBINS.

                  • HPC-UGent promotes using the Tier-1 services of the VSC.

                  • HPC-UGent can act as a liaison.

                  "}, {"location": "sites/hpc_policies/#students", "title": "Students", "text": "
                  • Students (Bachelor or Master) enrolled at one of the institutions mentioned above can also use HPC-UGent.

                  • The same conditions apply: usage is free of charge for all Flemish university associations.

                  • Use your university account credentials to request a VSC-id and connect.

                  "}, {"location": "sites/hpc_policies/#access-for-industry", "title": "Access for industry", "text": "

                  Researchers and developers from industry can use the VSC services and infrastructure tailored to industry.

                  "}, {"location": "sites/hpc_policies/#our-offer", "title": "Our offer", "text": "
                  • VSC has a dedicated service geared towards industry.

                  • HPC-UGent can act as a liaison to the VSC services.

                  "}, {"location": "sites/hpc_policies/#research-partnership", "title": "Research partnership:", "text": "
                  • Interested in collaborating in supercomputing with a UGent research group?

                  • We can help you look for a collaborative partner. Contact hpc@ugent.be.

                  "}, {"location": "sites/antwerpen/available-modules/", "title": "Available modules", "text": "
                  $ module av 2>&1 | more\n------------- /apps/antwerpen/modules/hopper/2015a/all ------------\nABINIT/7.10.2-intel-2015a\nADF/2014.05\nAdvisor/2015_update1\nBison/3.0.4-intel-2015a\nBoost/1.57.0-foss-2015a-Python-2.7.9\nBoost/1.57.0-intel-2015a-Python-2.7.9\nbzip2/1.0.6-foss-2015a\nbzip2/1.0.6-intel-2015a\n...\n

                  Or use the following command when you want to check whether specific software, a compiler, or an application (e.g., LAMMPS) is installed on the HPC:

                  $ module av 2>&1 | grep -i -e \"LAMMPS\"\nLAMMPS/9Dec14-intel-2015a\nLAMMPS/30Oct14-intel-2014a\nLAMMPS/5Sep14-intel-2014a\n

                  Since you may not know the exact capitalisation of the module name, we performed a case-insensitive search using the \"-i\" option.
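
                  Once the exact module name (including its capitalisation) is known, load it verbatim. A minimal sketch based on the output above:

                  $ module load LAMMPS/9Dec14-intel-2015a\n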

                  "}, {"location": "sites/gent/available-modules/", "title": "Available modules", "text": "
                  module avail\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   ABAQUS/2021-hotfix-2132\n   ABAQUS/2022-hotfix-2214\n   ABAQUS/2022\n   ABAQUS/2023\n   ABAQUS/2024-hotfix-2405                                                (D)\n   ...\n

                  Or use the following command when you want to check whether specific software, a compiler, or an application (e.g., MATLAB) is installed on the HPC:

                  module avail matlab\n--- /apps/gent/RHEL8/zen2-ib/modules/all ---\n   LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5\n   MATLAB/2019b\n   MATLAB/2021b\n   MATLAB/2022b-r5                                   (D)\n   SPM/12.5_r7771-MATLAB-2021b\n
                  "}, {"location": "web_portal_custom_apps/abaqus_cae/", "title": "Custom web portal app for ABAQUS (CAE course)", "text": "

                  (more info soon)

                  "}]} \ No newline at end of file diff --git a/HPC/Gent/macOS/sitemap.xml.gz b/HPC/Gent/macOS/sitemap.xml.gz index b48aee48a9a..18ed1384fbf 100644 Binary files a/HPC/Gent/macOS/sitemap.xml.gz and b/HPC/Gent/macOS/sitemap.xml.gz differ diff --git a/HPC/Gent/macOS/useful_linux_commands/index.html b/HPC/Gent/macOS/useful_linux_commands/index.html index 1f8de4f90f7..a392c466d6b 100644 --- a/HPC/Gent/macOS/useful_linux_commands/index.html +++ b/HPC/Gent/macOS/useful_linux_commands/index.html @@ -1496,7 +1496,7 @@

                  How to get started with shell scr
                  nano foo
                   

                  or use the following commands:

                  -
                  echo "echo Hello! This is my hostname:" > foo
                  +
                  echo "echo 'Hello! This is my hostname:'" > foo
                   echo hostname >> foo
                   

                  The easiest ways to run a script is by starting the interpreter and pass @@ -1521,7 +1521,9 @@

                  How to get started with shell scr /bin/bash

                  We edit our script and change it with this information:

                  -
                  #!/bin/bash echo \"Hello! This is my hostname:\" hostname
                  +
                  #!/bin/bash
                  +echo "Hello! This is my hostname:"
                  +hostname
                   

                  Note that the "shebang" must be the first line of your script! Now the operating system knows which program should be started to run the diff --git a/HPC/Gent/sitemap.xml.gz b/HPC/Gent/sitemap.xml.gz index 7432cf94695..8be5ec9e15c 100644 Binary files a/HPC/Gent/sitemap.xml.gz and b/HPC/Gent/sitemap.xml.gz differ diff --git a/linux-tutorial/sitemap.xml.gz b/linux-tutorial/sitemap.xml.gz index 8c6d6bf8d5e..df9b01d042e 100644 Binary files a/linux-tutorial/sitemap.xml.gz and b/linux-tutorial/sitemap.xml.gz differ diff --git a/pdf/sitemap.xml.gz b/pdf/sitemap.xml.gz index 20129ac34da..27a8e8e3b6f 100644 Binary files a/pdf/sitemap.xml.gz and b/pdf/sitemap.xml.gz differ